Facebook blocks fake accounts but election battle goes on

November 7, 2018 GMT

NEW YORK (AP) — Facebook’s recent disclosures on blocking suspicious accounts show that the company’s efforts to root out election meddling are working — to a point.

What’s not known is how much the company isn’t catching and whether this “whack-a-mole” fight will ever end, as those wanting to influence U.S. and other elections can easily create replacement Facebook pages, groups and accounts.

Facebook said it blocked an unspecified number of additional accounts on Election Day because of suspected connections to foreign efforts to interfere in the voting through disinformation on social media. That’s on top of the 115 accounts Facebook shut down earlier this week and the 652 pages, groups and accounts removed in August.

Facebook said the additional accounts were identified after a website claiming to be associated with the Russia-based Internet Research Agency published a list of Instagram accounts it said it had created. Facebook said it had already blocked most of the listed accounts and has now blocked the rest.

“This is a timely reminder that these bad actors won’t give up,” Nathaniel Gleicher, Facebook’s head of cybersecurity policy, said in a statement.

U.S. tech companies have stepped up efforts to fight disinformation campaigns by Russian groups, which U.S. authorities accuse of trying to sway the 2016 presidential election. The companies were caught embarrassingly off guard then. This time around, there are clear signs they are making some progress.

Sam Gill of the nonprofit John S. and James L. Knight Foundation, which recently commissioned a study on misinformation on social media, said that while tech companies cannot declare victory yet, their leaders no longer insist there is no problem and now “talk about how important it is to get it right.”

That’s in contrast to Facebook CEO Mark Zuckerberg’s now-infamous quip in November 2016 calling the idea that fake news on Facebook influenced the election “pretty crazy.”

But social media companies still have work to do. By some measures, the spread of fake news on Facebook has declined since 2016, but the same can’t always be said for Twitter.

The Knight study on misinformation points to a central problem that has emerged since 2016: It isn’t just Russian agents spreading misinformation. Plenty of homegrown sites are at it, too.

The study found that fake news still spreads on Twitter, with the vast majority of it coming from just a few sources.

Gill said that, at this point, we simply “don’t know enough” to say how the spread of misinformation has changed since 2016. That’s despite a slew of academic studies that attempt to measure the spread and consumption of fake news on these services.

“We need a lot more basic research studying the relationship between social media and democracy,” he said. “We need to see more and understand more from the companies. We need access to more information.”

Long criticized for not giving academic researchers access to its data, Facebook launched a program in April designed to address this issue, though only for election-related research. The initiative solicits proposals from outside researchers and gives those selected access to Facebook data; Facebook doesn’t get to pre-approve the research and provides no funding.

But until there is more research, social media companies must contend with present-day problems around misinformation, hate and propaganda, playing whack-a-mole as new fake accounts and trolls pop up trying to misuse their services.

After all, the 2020 presidential election is less than two years away — and jockeying for that contest starts now.