The Lifeboat News

    Thousands Of Misleading Facebook Ads Help Conservatives To ‘Crushing’ Election Victory #EpochUnmake

    Posted by Gerard on December 21, 2019, 8:44 am

    Quote; "At the end of October, Twitter announced that it was banning political advertisements. The announcement came after its bigger rival Facebook ruled out the possibility of banning political ads on its own social network. But not only did Facebook refuse to ban political ads, it also refused to subject such ads to fact-checking, something which critics said was necessary in view of the often deceptive nature of posts from political sources.

    Well, this controversial policy appears to have had the desired effect, as the incumbent Conservative government celebrated a "crushing" victory in Thursday's U.K. general election. It gained 365 seats in Parliament compared to the Labour Party's 203, and while Prime Minister Boris Johnson claimed that the result gave his administration a "stonking mandate" for Brexit, the victory may not simply have been a result of the electorate's desire for a swift exit from the EU.

    According to research from fact-checking non-profit First Draft, 88% of the Facebook ads the Conservatives posted in the first four days of December were deemed misleading by Full Fact, one of the U.K.’s biggest fact-checking organisations. By contrast, First Draft said that it couldn’t identify any specific Labour Facebook ads as misleading or false, although the party paid for fewer posts than its rivals.

    In particular, First Draft noted that 5,000 of the 6,749 ads paid for by the Conservatives contained references to the construction of 40 new hospitals. This figure was disputed by Full Fact, given that it had not been costed and given that the Conservatives' spending plans for the next Parliament had allocated funding only for six hospitals to be upgraded by 2025.

    Not only that, but false claims about the cost of Labour's spending plans to the average taxpayer appeared in more than 4,000 Facebook ads paid for by the Conservatives, while 500 promoted the widely refuted claim that the Tories will create jobs for 50,000 more nurses, even though the promise actually includes simply retaining 18,500 nurses already on the job.

    In other words, the Conservatives' social media election strategy involved subjecting hundreds of thousands, if not millions, of voters to false ad after false ad. Perhaps voters were swayed more by the desire to progress with Brexit when casting their votes, but the simple fact that the Conservatives paid for these “post-truth” ads indicates that the party believed they would have a tangible effect. If not, then why pay for them?

    Yet putting aside the question of the ethics of the Conservative Party's conduct here, much of the scrutiny must reside with Facebook and its founder, Mark Zuckerberg.

    Zuckerberg is on record defending his company's political ads policy, having argued in recent weeks that "People should be able to see for themselves what politicians are saying." Likewise, a Facebook spokesperson told me, "We don’t believe a private company like Facebook should censor politicians. Our approach is instead to introduce unprecedented levels of transparency so anyone can see every political advert and who it’s from. This means that voters, journalists and fact-checkers can scrutinise the claims being made by politicians and hold them to account."

    At best this is a naive take on the issue, while at worst it's downright cynical and dangerous. It's wrong to assume or believe that the general public will simply look at misleading or false ads and quickly realise that the ads, and the people who wrote them, are lying.

    Because once again, the fact that certain political parties pay for thousands of untruthful ads indicates that such parties truly believe that much of the public will believe the lies said ads peddle. They do not think, "Let's allow people to see what misleading statements and lies we're propagating." No, they think, "Let's lie to the public and hope our lies convince the public to vote for us."

    Studies support the notion that Facebook ads are persuasive. A 2018 paper published by researchers at the University of Warwick found that, in the context of the 2016 presidential election, Facebook ads "had a significant effect in persuading undecided voters to support Mr Trump, and in persuading Republican supporters to turn out on polling day." More specifically, it found that "neutral voters who access Facebook on a daily basis are up to twice more likely to support the Republican candidates than neutral voters who do not have a Facebook account."

    So, not only were the vast majority of the Conservative Party ads misleading, but they were also likely persuasive, as indicated by yet more research from 2018 which concluded that Facebook ads increased the number of Trump voters in 2016 by 10%.

    If Mark Zuckerberg is an intelligent person (and there's no reason to think he isn't), he will be well aware of all this. And the fact that he doesn't change Facebook's policy to prevent political misinformation suggests that he's happy with his social network being used, on an industrial scale, to disseminate such misinformation.

    As the First Draft report indicates, it's a right-wing party that has spread much more misinformation than its centrist and left-wing rivals. And in general, similar reports and studies reinforce the suspicion that right-wing parties are more likely to use social media to deceive and manipulate public opinion.*

    Looking at the 2018 Brazilian presidential election, for instance, investigations from a number of national journals revealed that various companies supportive of the far-right Jair Bolsonaro spent millions of dollars on WhatsApp posts and ads, many of which were false. In the EU elections in May, researchers at the University of Arizona discovered that 12% of tweets using hashtags promoted by far-right EU parties came from bots, compared to only 6% for tweets associated with all parties. And in the good ol’ United States, a 2018 analysis by the University of Oxford’s Internet Institute found that "ultra-rightwing conservatives" shared more false stories on Facebook than all other political groups put together in the quarter leading up to President Trump’s State of the Union address.

    It's arguable that the right-wing’s greater attraction to falsity is a trend Zuckerberg is happy to profit from and facilitate. That's because it's left-wing parties, candidates and intellectuals that are threatening to regulate Facebook, tax it, break it up, or even nationalize it. Indeed, secret recordings leaked in October reveal that, in response to Elizabeth Warren's threats to regulate and break up Facebook, Mark Zuckerberg is gearing up for a “fight” with any organization or government that might attempt to restrict its cancerous growth.

    "But look, at the end of the day, if someone’s going to try to threaten something that existential, you go to the mat and fight," he said, in a speech to Facebook employees that also saw him indicate that he'd be prepared to sue the U.S. government, should it attempt to follow through with any policies such as Warren's.

    It gets worse, because (also in October) Politico revealed that the Facebook CEO had been holding meetings with conservative journalists, commentators and even a Republican lawmaker. These meetings related to issues of “free speech,” and appeared to be an attempt by Zuckerberg to woo elements of the right, and to assure the right that Facebook would support its efforts to use the social network to communicate with the public.

    "The White House is looking for meaningful steps from Facebook on a number of fronts,” a senior Trump administration official told Politico, clarifying that these fronts included “competition, free speech for everybody including conservatives, and privacy.”

    This is potentially ominous because, in conjunction with Zuckerberg's apparent opposition to attempts to regulate or tame Facebook, these meetings appear to indicate that the social network's political allegiances are shifting to the right, at a time when far-right governments are in the ascendant internationally.

    But regardless of how true this is, the fact that Facebook refuses to fact-check political ads does not reflect well on the social network, particularly in combination with the fact that the Conservative Party has just used thousands of misleading Facebook ads to help it win another U.K. election. It reveals that in strictly functional terms, and with specific regard to its ads policy, Facebook is indistinguishable from a social network that is avowedly and openly in favor of right-wing governments and politics." https://www.forbes.com/sites/simonchandler/2019/12/14/thousands-of-misleading-facebook-ads-help-conservatives-to-crushing-uk-election-victory/#3d9a1d7c3382

    Quote; "Facebook said on Friday that it had removed hundreds of accounts with ties to the Epoch Media Group, parent company of the Falun Gong-related publication and conservative news outlet The Epoch Times.

    The accounts, including pages, groups and Instagram feeds meant to be seen in both the United States and Vietnam, presented a new wrinkle to researchers: fake profile photos generated with the help of artificial intelligence.

    The idea that artificial intelligence could be used to create wide-scale disinformation campaigns has long been a fear of computer scientists. And they said it was worrying to see it already being used in a coordinated effort on Facebook.

    While the technology used to create the fake profile photos was most likely a far cry from the sophisticated A.I. systems being created in labs at big tech companies like Google, the network of fake accounts showed “an eerie, tech-enabled future of disinformation,” said Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab.

    Scientists have already shown that machines can generate images and sounds that are indistinguishable from the real thing or spew vast volumes of fake text, which could accelerate the creation of false and misleading information. This year, researchers at a Canadian company even built a system that learned to imitate the voice of the podcaster Joe Rogan by analyzing audio from his old podcasts. It was a shockingly accurate imitation.

    The people behind the network of 610 Facebook accounts, 89 Facebook Pages, 156 Groups and 72 Instagram accounts posted about political news and issues in the United States, including President Trump’s impeachment, conservative ideology, political candidates, trade and religion.

    “This was a large, brazen network that had multiple layers of fake accounts and automation that systematically posted content with two ideological focuses: support of Donald Trump and opposition to the Chinese government,” Mr. Brookie said in an interview.

    The Atlantic Council’s lab and another company, Graphika, which also studies disinformation, released a joint report analyzing the Facebook takedown.

    The Epoch Media Group denied in an email sent to The New York Times that it was linked to the network targeted by Facebook, and said that Facebook had not contacted the company before publishing its conclusions.

    The people behind the network used artificial intelligence to generate profile pictures, Facebook said. They relied on a type of artificial intelligence called generative adversarial networks. These networks can, through a process called machine learning, teach themselves to create realistic images of faces that do not belong to any real person.

    Nathaniel Gleicher, Facebook’s head of security policy, said in an interview that “using A.I.-generated photos for profiles” has been talked about for several months, but for Facebook, this is “the first time we’ve seen a systemic use of this by actors or a group of actors to make accounts look more authentic.”

    He added that this A.I. technique did not actually make it harder for the company’s automated systems to detect the fakes, because the systems focus on patterns of behavior among accounts.

    Ben Nimmo, director of investigations at Graphika, said that “we need more research into A.I.-generated imagery like this, but it takes a lot more to hide a fake network than just the profile pictures.”

    Facebook said the accounts masked their activities by using a combination of fake and authentic American accounts to manage pages and groups on the platforms. The coordinated, inauthentic activity, Facebook said, revolved around the media outlet The BL — short for “The Beauty of Life” — which the fact-checking outlet Snopes said in November was “building a fake empire on Facebook and getting away with it.”

    Mr. Gleicher said Facebook began its investigation into The BL in July, and accelerated its efforts when the network became more aggressive in posting this fall. It is continuing to investigate “other links and networks” tied to The BL, he said.

    Facebook said the network had spent less than $9.5 million on Facebook and Instagram ads. On Friday, Facebook said The BL would be banned from the social network.

    The Epoch Times and The BL have denied being linked, but Facebook said it had found coordinated, inauthentic behavior from the network to the Epoch Media Group and individuals in Vietnam working on its behalf.

    The Epoch Media Group said in its email that The BL was founded by a former employee and employs some of its former employees. “However, that some of our former employees work for BL is not evidence of any connection between the two organizations,” the company said.

    A Facebook spokeswoman said executives from The BL were active administrators on Epoch Media Group Pages as recently as Friday morning.

    In August, Facebook banned advertising from The Epoch Times after NBC News published a report that said The Epoch Times had obscured its connection to Facebook ads promoting President Trump and conspiracy content.

    Twitter said on Friday that the social network was also aware of The BL network, and had already “identified and suspended approximately 700 accounts originating from Vietnam for violating our rules around platform manipulation.” A company spokeswoman added that its investigation was still open, but Twitter has not identified links between the accounts and state-backed actors.

    Facebook also said on Friday that it had taken down a network of more than 300 pages and 39 Facebook accounts engaged in coordinated, inauthentic activity around domestic political news in Georgia.

    Facebook said the network tried to conceal its coordination but it found that the accounts responsible were run by the Georgian Dream-led government, and Panda, a local advertising agency in the country. The owners of the Facebook pages masqueraded as news organizations and impersonated public figures, political parties and activist groups.

    In a related move, Twitter said it also took down 32 million tweets from nearly 6,000 accounts related to a Saudi Arabian social media marketing company called Smaat, which ran political and commercial influence operations.

    Smaat was led in part by Ahmed Almutairi, a Saudi man wanted by the F.B.I. on charges that he recruited two Twitter employees to search internal company databases for information about critics of the Saudi government, said Renee DiResta, a disinformation researcher at the Stanford Internet Observatory, which separately analyzed Twitter’s takedown.

    The operation was “extremely high volume,” and automatically generated by “Twitter apps that made religious posts, posts about the weather” and other topics, Ms. DiResta said.

    At times, the accounts were used for “more tailored purposes,” including more than 17,000 tweets related to Jamal Khashoggi, a Saudi dissident and columnist for The Washington Post, who was killed while visiting a Saudi consulate in October last year.

    Many of the tweets claimed that those criticizing the Saudi government for their involvement were doing so for their own political purposes." https://www.nytimes.com/2019/12/20/business/facebook-ai-generated-profiles.html
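    As an aside on the "generative adversarial networks" mentioned in the article above: the basic idea is two models trained against each other, a generator that fabricates images and a discriminator that tries to tell them apart from real ones. Below is a minimal, illustrative PyTorch sketch of that setup; the layer sizes, toy random data and short training loop are assumptions for demonstration only, not a description of the actual system behind the fake profile photos.

    # Minimal sketch of a generative adversarial network (GAN), the technique the
    # article says was used to synthesize fake profile photos. Purely illustrative:
    # sizes, data and training schedule are placeholder assumptions, and the
    # "real" images here are random tensors standing in for a face dataset.
    import torch
    import torch.nn as nn

    LATENT_DIM = 64        # size of the random noise vector fed to the generator
    IMG_PIXELS = 32 * 32   # flattened grayscale "image" size for this toy example

    # Generator: maps random noise to a fake image.
    generator = nn.Sequential(
        nn.Linear(LATENT_DIM, 256), nn.ReLU(),
        nn.Linear(256, IMG_PIXELS), nn.Tanh(),
    )

    # Discriminator: scores an image as real (close to 1) or fake (close to 0).
    discriminator = nn.Sequential(
        nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCELoss()

    for step in range(200):  # a real model would train far longer, on real face photos
        real = torch.rand(16, IMG_PIXELS) * 2 - 1   # stand-in for a batch of real images
        noise = torch.randn(16, LATENT_DIM)
        fake = generator(noise)

        # Train the discriminator to separate real images from generated ones.
        opt_d.zero_grad()
        d_loss = loss_fn(discriminator(real), torch.ones(16, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(16, 1))
        d_loss.backward()
        opt_d.step()

        # Train the generator to fool the discriminator into scoring fakes as real.
        opt_g.zero_grad()
        g_loss = loss_fn(discriminator(fake), torch.ones(16, 1))
        g_loss.backward()
        opt_g.step()

    The two networks improve together: as the discriminator gets better at spotting fakes, the generator is pushed to produce more convincing ones, which is why the output of mature systems of this kind can pass for photographs of real people.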


    *Italics mine.

    Funny how the fake accounts were only discovered after the election... both the loss of those accounts and the revelation itself could easily have influenced the result!
