The EU-funded COMPROP project set out to investigate networks of automated social media accounts and the role they play in shaping public opinion. The project initially focused on Twitter, but the team at the University of Oxford’s Programme on Democracy and Technology found computational propaganda – algorithms put to work for a political agenda – on Facebook, Instagram, Telegram, YouTube, and even the dating app Tinder. “We didn’t expect over the course of the project the problem would grow as bad as it did,” notes principal investigator Philip Howard. “We can see how some governments, lobbyists, the far right and white supremacists all use these to manipulate democracies.”

“When Malaysia Airlines Flight 17 was shot down over Ukraine, there were multiple ridiculous stories of what transpired – that democracy advocates shot it down, that United States troops shot it down, that a lost tank from WWII came out of the forest and shot it down,” adds Howard.


The goal was to spread misinformation. By spreading multiple conflicting stories, authoritarian regimes prevent their citizens from knowing which narrative to believe. The strategy was eventually turned outward to undermine social movements and destabilise foreign nations. “Sometimes campaigns are about a specific crisis or person, but often the goal is to undermine trust in courts, police, journalism, science, or government at large,” explains Howard.

The target audience for these bots is perhaps only 10-20% of the population, typically disaffected, conservative-leaning adults who are politically active. In a highly polarised country, swaying 10% of the electorate can have a resounding impact. Howard adds that these campaigns are particularly damaging to the role of women and minorities in public life: “It’s much easier to drive a woman out of public life than a man.”

The COMPROP project has been heavily focused on COVID-19 misinformation. Howard notes that this comes from three main sources: Russian media, Chinese media and former United States president Donald Trump. While Trump’s disinformation was tied to domestic United States politics, Russia and China pushed three broad themes intended for foreign audiences.

“The first was that democracy can’t help us, elected leaders are too weak to make decisions,” adds Howard. “The second message was that Russian or Chinese scientists were going to get the vaccine first, and the third was that Russia or China was leading on humanitarian assistance efforts.” Howard says more effort is needed to contain these propaganda networks. “We’re past the point of self-regulation by industry. If tech firms stepped up, and governments imposed fines on politicians who commission these programmes, that set of initiatives would go a long way.”

It has proven difficult to identify which social media accounts are automated. “One bot writer in Germany said his team would read our methodology papers and adjust their algorithms to just below our catchment,” says Howard. “We were in a sort of dialogue with these programmers.”
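The evasion Howard describes works because published detection methods in this line of research often rest on transparent heuristics, such as flagging accounts that post at very high daily frequencies; a bot writer who knows the cut-off can simply post just below it. The sketch below is a minimal, hypothetical illustration of such a frequency heuristic. The 50-posts-per-day threshold, the function names and the sample data are assumptions for illustration, not the project’s actual code or criteria.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Account:
    """Hypothetical summary of a social media account."""
    handle: str
    created_at: datetime
    post_count: int


def posts_per_day(account: Account, now: datetime) -> float:
    """Average daily posting rate over the account's lifetime."""
    age_days = max((now - account.created_at).days, 1)
    return account.post_count / age_days


def flag_high_frequency(accounts: list[Account], now: datetime,
                        threshold: float = 50.0) -> list[str]:
    """Flag accounts whose average posting rate exceeds the threshold.

    A programmer who knows the threshold can tune a bot to post
    slightly below it and stay under the 'catchment' Howard mentions.
    """
    return [a.handle for a in accounts if posts_per_day(a, now) > threshold]


if __name__ == "__main__":
    now = datetime(2021, 6, 1)
    sample = [
        Account("suspect_bot", datetime(2021, 5, 1), 2600),  # ~84 posts/day
        Account("casual_user", datetime(2019, 1, 1), 900),   # ~1 post/day
    ]
    print(flag_high_frequency(sample, now))  # ['suspect_bot']
```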

Howard and his team are now focusing on how machine learning will power a new generation of computational propaganda. “If someone can take your social media feed and behavioural data, and come up with political messages you’ll respond to, they’ll do that,” he concludes. “This is the next great threat.”