OPINION / VIEWPOINT
Computational propaganda poses challenge
Published: Sep 12, 2017 08:58 PM

Illustration: Liu Rui/GT

A research team at the University of Oxford recently found that computational propaganda is a pervasive, global phenomenon.

Computational propaganda is the assemblage of social media platforms like Facebook and Twitter, autonomous agents, algorithms and big data, directed toward the regimentation, and even manipulation, of public opinion. It is already in wide use across many countries.

Computational propaganda figures prominently in policy debates, elections, and national and political crises. The "troops" that wage it include military units, political parties and strategic communication firms that take government contracts for social media campaigns. Brexit and Donald Trump's victory in the 2016 US presidential election illustrate its potential power.

Four factors are driving and facilitating the rise of computational propaganda. First, the increasing sophistication of artificial intelligence (AI) and advances in "computational social science" have given rise to machine-driven communication tools, laying the technological foundation for computational propaganda.

Second, principles from cognitive psychology and the science of persuasion are skillfully utilized to serve computational propaganda.

Third, cyberspace generally, and social media in particular, have increasingly become key platforms through which citizens access information and deepen their civic engagement. Such forums are particularly popular among young people and help shape their political identities and loyalties. As a result, it is hardly surprising that "social bots" - software programs designed to mimic human social media users on platforms like Facebook and Twitter - are the tools most commonly deployed in sophisticated computational propaganda. In this sense, social bots are becoming "political bots."
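
To see how low the technical barrier is, consider a minimal sketch, in Python, of such a bot's core loop. Everything here is hypothetical: publish stands in for a real platform API call, and the canned talking points and timing are illustrative assumptions, not any documented campaign's code.

```python
import random
import time

def publish(message: str) -> None:
    """Hypothetical stand-in for a real platform API call that posts a status."""
    print(f"[posted] {message}")

# Canned talking points a political bot might cycle through (invented examples).
TALKING_POINTS = [
    "Candidate X has a real plan for jobs. #ElectionDebate",
    "Why is no one talking about Candidate Y's record? #ElectionDebate",
    "Proud to stand with Candidate X tonight! #ElectionDebate",
]

def human_like_delay() -> float:
    # Humans post at irregular intervals; the bot mimics this by drawing
    # a random delay (1-10 minutes) instead of posting on a fixed schedule.
    return random.uniform(60, 600)

def run_bot(posts: int) -> None:
    for _ in range(posts):
        publish(random.choice(TALKING_POINTS))
        time.sleep(human_like_delay())

if __name__ == "__main__":
    run_bot(posts=3)
```

Even the crude randomized delay makes the account look less mechanical than a fixed posting schedule would; real political bots typically layer plausible profile photos, biographies and reply behavior on top.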

Last but not least, bot traffic has exploded in cyberspace. For example, bots generated about 50 percent of sampled visits to 100,000 randomly selected domains on the Incapsula network between 2012 and 2015. More importantly, "bad" bot activity fluctuated around the 30 percent mark over the same period, and bad bots are becoming a key part of computational propaganda. Emilio Ferrara, a researcher at the University of Southern California, suggests we may see "a black market for reusable political disinformation bots."

Computational propaganda can take many forms, not all of which are sophisticated. The basic mode involves the strategic deployment of political bots to demobilize an opposing party's followers, attack foreign states or prominent political figures on social media, deliver pro-government or pro-candidate microblog messages, and inflate politicians' social media follower counts.

More advanced forms of computational propaganda involve a hybrid of algorithmic distribution and human curation - political bots and trolls working together. Specifically, political bots are deployed over social media to generate disinformation, game hashtags, manufacture trends, amplify particular content or messages, and run astroturfing campaigns. Even more controversially, they are used in collaboration with human Internet trolls to spam the opposition and to harass and attack journalists and activists. This more sophisticated and insidious use of computational propaganda usually lies in the hands of powerful and well-resourced political actors.
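
To illustrate the hashtag-gaming tactic, here is a hedged Python sketch of how a coordinated posting burst could be scheduled. The account handles, hashtag and message templates are invented for illustration, and the sketch only prints a schedule rather than posting anywhere.

```python
import random

# Hypothetical account handles; a real campaign would control hundreds
# of automated accounts from a single dashboard.
BOT_ACCOUNTS = [f"user_{i:03d}" for i in range(200)]

HASHTAG = "#ManufacturedTrend"
TEMPLATES = [
    "Everyone is suddenly talking about this {tag}",
    "Can't believe what I just read {tag}",
    "This changes everything {tag}",
]

def schedule_burst(window_minutes: float = 30.0):
    """Assign each bot account a random posting time inside one short window.

    Trending algorithms generally reward velocity - many posts on the same
    tag in a short period - so a coordinated burst can simulate an organic
    surge of interest.
    """
    schedule = []
    for account in BOT_ACCOUNTS:
        t = random.uniform(0.0, window_minutes)
        text = random.choice(TEMPLATES).format(tag=HASHTAG)
        schedule.append((t, account, text))
    return sorted(schedule)

if __name__ == "__main__":
    # Print the first few scheduled posts of the burst.
    for t, account, text in schedule_burst()[:5]:
        print(f"t={t:5.1f} min  {account}: {text}")
```

The design point is velocity: spreading a few hundred near-duplicate bot posts across a thirty-minute window can make a manufactured tag look like a spontaneous trend.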

The impact of computational propaganda is therefore complex. There are examples of positive contributions from the algorithms and automation that underlie it: a Canadian case study, for instance, identifies complex algorithms and bots that seek to deliver constructive public services, improve journalism and generate public knowledge. More often than not, however, computational propaganda creates profound socio-political and ethical problems, primarily because it is closely linked to malicious activities including disinformation, spamming and trolling. As a result, it may pose a serious threat to political security and social development by exposing citizens to the influence of a "firehose of falsehood," diminishing public trust in established institutions, and worsening social divisions and political polarization.

Computational propaganda is likely to become even more important in the era of "post-truth" politics, in which fake news goes viral easily because objective facts are less influential in shaping public opinion than appeals to emotion and personal belief. The two are increasingly intertwined.

The long-held idealist view of social media as a force for deepening civic engagement and improving democracy is now seriously challenged by the insidious and malicious activities associated with computational propaganda. Cyberspace seems to follow the logic of realpolitik more than that of noopolitik, which is characterized by the universal sharing of ideas.

The rise of computational propaganda calls for a new model of global Internet governance, which has been paralyzed by the dichotomy between the "multi-stakeholder" and "multilateral" approaches. It is time to seek a middle way to rebuild global Internet governance and address the terrible beauty of computational propaganda, born in the era of Pax Technica.

The author is a research fellow at the Institute of Public Policy, South China University of Technology. Prof. Mark Beeson (The University of Western Australia) and Mr. Dulguun Baasandavaa (Harvard Kennedy School) helped polish this article. opinion@globaltimes.com.cn