Algorithms, Lies, and Social Media

Achieving a more transparent and less manipulative online media may well be the defining political battle of the 21st century.


By Stephan Lewandowsky and Anastasia Kozyreva

There was a time when the internet was seen as an unequivocal force for social good. It propelled progressive social movements from Black Lives Matter to the Arab Spring; it set information free and flew the flag of democracy worldwide. But today, democracy is in retreat, and the internet’s role as a driver of that retreat is palpably clear. From fake news bots to misinformation to conspiracy theories, social media has commandeered mindsets, evoking the sense of a dark force that must be countered by authoritarian, top-down controls.

This paradox—that the internet is both savior and executioner of democracy—can be understood through the lenses of classical economics and cognitive science. In traditional markets, firms manufacture goods, such as cars or toasters, that satisfy consumers’ preferences. Markets on social media and the internet are radically different because the platforms exist to sell information about their users to advertisers, thus serving the needs of advertisers rather than consumers. On social media and parts of the internet, users “pay” for free services by relinquishing their data to unknown third parties who then expose them to ads targeting their preferences and personal attributes. In what Harvard social psychologist Shoshana Zuboff calls “surveillance capitalism,” the platforms are incentivized to align their interests with advertisers, often at the expense of users’ interests or even their well-being.

This economic model has driven online and social media platforms (however unwittingly) to exploit the cognitive limitations and vulnerabilities of their users. For instance, human attention has adapted to focus on cues that signal emotion or surprise. Paying attention to emotionally charged or surprising information makes sense in most social and uncertain environments and was critical within the close-knit groups in which early humans lived. In this way, information about the surrounding world and social partners could be quickly updated and acted on.

But when the interests of the platform do not align with the interests of the user, these strategies become maladaptive. Platforms know how to capitalize on this: To maximize advertising revenue, they present users with content that captures their attention and keeps them engaged. For example, YouTube’s recommendations amplify increasingly sensational content with the goal of keeping people’s eyes on the screen. A study by Mozilla researchers confirms that YouTube not only hosts but actively recommends videos that violate its own policies concerning political and medical misinformation, hate speech, and inappropriate content.

In the same vein, our attention online is more effectively captured by news that is either predominantly negative or awe inspiring. Misinformation is particularly likely to provoke outrage, and fake news headlines are designed to be substantially more negative than real news headlines. In pursuit of our attention, digital platforms have become paved with misinformation, particularly the kind that feeds outrage and anger. Following recent revelations by a whistle-blower, we now know that Facebook’s newsfeed curation algorithm gave content eliciting anger five times as much weight as content evoking happiness. (Presumably because of the revelations, the algorithm was changed.) We also know that political parties in Europe began running more negative ads because they were favored by Facebook’s algorithm.
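
To make the reported weighting concrete, here is a minimal, purely illustrative sketch of reaction-weighted feed ranking. Nothing in it is Facebook’s actual code: the reaction names, the 5:1 ratio between “angry” and other reactions, and the data structures are hypothetical stand-ins for the behavior described in the leaked documents.

```python
# Illustrative sketch only: reaction-weighted ranking with a hypothetical
# 5:1 weight on "angry" reactions, mirroring the reported ratio. None of the
# names, weights, or structures here reflect any platform's real system.
from dataclasses import dataclass

# Hypothetical weights: the anger reaction counts five times as much.
REACTION_WEIGHTS = {"like": 1, "love": 1, "haha": 1, "wow": 1, "sad": 1, "angry": 5}


@dataclass
class Post:
    post_id: str
    reactions: dict[str, int]  # reaction name -> count


def engagement_score(post: Post) -> int:
    """Weighted sum of reactions; anger-evoking posts score disproportionately high."""
    return sum(REACTION_WEIGHTS.get(name, 1) * count
               for name, count in post.reactions.items())


def rank_feed(posts: list[Post]) -> list[Post]:
    """Order a feed by weighted engagement, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)


if __name__ == "__main__":
    feed = [
        Post("calm-news", {"like": 900, "love": 100}),     # 1,000 reactions -> score 1,000
        Post("outrage-bait", {"angry": 300, "like": 50}),  # 350 reactions   -> score 1,550
    ]
    for post in rank_feed(feed):
        print(post.post_id, engagement_score(post))
```

In this toy example, a post with roughly a third as many total reactions rises to the top of the feed simply because most of them are “angry,” which is the dynamic the whistle-blower documents describe.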

Besides selecting information on the basis of its personalized relevance, algorithms can also filter out information considered harmful or illegal, for instance by automatically removing hate speech and violent content. But until recently, these algorithms went only so far. As Evelyn Douek, a senior research fellow at the Knight First Amendment Institute at Columbia University, points out, before the pandemic, most platforms (including Facebook, Google, and Twitter) erred on the side of protecting free speech and rejected a role, as Mark Zuckerberg put it in a personal Facebook post, of being “arbiters of truth.” But during the pandemic, these same platforms took a more interventionist approach to false information and vowed to remove or limit COVID-19 misinformation and conspiracy theories. Here, too, the platforms relied on automated tools to remove content without human review.

Even though the majority of content decisions are made by algorithms, humans still design the rules the tools rely upon, and humans have to manage the ambiguities in those rules: Should algorithms remove false information about climate change, for instance, or just about COVID-19? This kind of content moderation inevitably means that human decision makers are weighing values. It requires balancing a defense of free speech and individual rights with safeguarding other interests of society, something social media companies have neither the mandate nor the competence to achieve.

None of this is transparent to consumers, because internet and social media platforms lack the basic signals that characterize conventional commercial transactions. When people buy a car, they know they are buying a car. If that car fails to meet their expectations, consumers have a clear signal of the damage done because they no longer have money in their pocket. When people use social media, by contrast, they are not always aware of being the passive subjects of commercial transactions between the platform and advertisers involving their own personal data. And if users experience adverse consequences—such as increased stress or declining mental health—it is difficult to link those consequences to social media use. The link becomes even more difficult to establish when social media facilitates political extremism or polarization.

Users are also often unaware of how their newsfeed on social media is curated. Estimates of the share of users who do not know that algorithms shape their newsfeed range from 27 percent to 62 percent. Even people who are aware of algorithmic curation tend not to have an accurate understanding of what it involves. A Pew Research Center report published in 2019 found that 74 percent of Americans did not know that Facebook maintained data about their interests and traits. At the same time, people tend to object to the collection of sensitive information and data for the purposes of personalization and do not approve of personalized political campaigning.

In short, most people do not realize that the information they consume and produce is curated by algorithms. And hardly anyone understands that algorithms will present them with information that is curated to provoke outrage or anger, attributes that fit hand in glove with political misinformation.

People cannot be held responsible for their lack of awareness. They were neither consulted on the design of online architectures nor considered as partners in the construction of the rules of online governance.

What can be done to shift this balance of power and to make the online world a better place?

Google executives have referred to the internet and its applications as “the world’s largest ungoverned space,” unbound by terrestrial laws. This view is no longer tenable. Most democratic governments now recognize the need to protect their citizens and democratic institutions online.

Protecting citizens from manipulation and misinformation, and protecting democracy itself, requires a redesign of the current online “attention economy” that has misaligned the interests of platforms and consumers. The redesign must restore the signals that are available to consumers and the public in conventional markets: users need to know what platforms do and what they know, and society must have the tools to judge whether platforms act fairly and in the public interest. Where necessary, regulation must ensure fairness.

Four basic steps are required:

  • There must be greater transparency and more individual control of personal data. Transparency and control are not just lofty legal principles; they are also strongly held public values. European survey results suggest that nearly half of the public wants to take a more active role in controlling the use of personal information online. It follows that people need to be given more information about why they see specific ads or other content items. Full transparency about customization and targeting is particularly important because platforms can use personal data to infer attributes—for example, sexual orientation—that a person might never willingly reveal. Until recently, Facebook permitted advertisers to target consumers based on sensitive characteristics such as health, sexual orientation, or religious and political beliefs, a practice that may have jeopardized users’ lives in countries where homosexuality is illegal.
  • Platforms must signal the quality of the information in a newsfeed so users can assess the risk of accessing it. A palette of such cues is available. “Endogenous” cues, based on the content itself, could alert us to emotionally charged words geared to provoke outrage (a minimal sketch of how such a cue might be computed appears after this list). “Exogenous” cues, or commentary from objective sources, could shed light on contextual information: Does the material come from a trustworthy place? Who shared this content previously? Facebook’s own research, said Zuckerberg, showed that access to COVID-related misinformation could be cut by 95 percent by graying out content (and requiring a click to access) and by providing a warning label.
  • The public should be alerted when political speech circulating on social media is part of an ad campaign. Democracy is based on a free marketplace of ideas in which political proposals can be scrutinized and rebutted by opponents; paid ads masquerading as independent opinions distort that marketplace. Facebook’s “ad library” is a first step toward a fix because, in principle, it permits the public to monitor political advertising. In practice, the library falls short in several important ways. It is incomplete, missing many clearly political ads. It also fails to provide enough information about how an ad targets recipients, thus preventing political opponents from issuing a rebuttal to the same audience. Finally, the ad library is well known among researchers and practitioners but not among the public at large.
  • The public must know exactly how algorithms curate and rank information and then be given the opportunity to shape their own online environment. At present, the only public information about social media algorithms comes from whistle-blowers and from painstaking academic research. Independent agencies must be able to audit platform data and identify measures to stem the flow of misinformation. Outside audits would not only identify potential biases in algorithms but would also help platforms maintain public trust, since oversight of content would no longer rest with the platforms alone.
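
As a purely illustrative companion to the “endogenous” cues described above, the sketch below scans a post’s wording for emotionally charged terms and attaches a label when their share crosses a threshold. The word list, threshold, and label text are hypothetical placeholders, not any platform’s real lexicon or policy.

```python
# Toy illustration of an "endogenous" quality cue: flag posts whose wording
# leans heavily on emotionally charged language. The lexicon and threshold
# are placeholders invented for this sketch, not a real moderation system.
import re

CHARGED_WORDS = {
    "outrageous", "disgusting", "traitor", "evil", "scandal",
    "shocking", "corrupt", "destroy", "fury", "betrayal",
}


def charged_word_ratio(text: str) -> float:
    """Fraction of words in the text that appear in the charged-word lexicon."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return sum(word in CHARGED_WORDS for word in words) / len(words)


def endogenous_cue(text: str, threshold: float = 0.1) -> str:
    """Return the label a feed could display next to a post, or 'no cue'."""
    ratio = charged_word_ratio(text)
    if ratio >= threshold:
        return f"cue: emotionally charged language ({ratio:.0%} of words)"
    return "no cue"


if __name__ == "__main__":
    print(endogenous_cue("Shocking scandal: corrupt officials destroy the evidence!"))
    print(endogenous_cue("The city council approved the new budget on Tuesday."))
```

A deployed system would need a far richer signal than a fixed word list (a trained classifier, for instance), but the point stands: this kind of cue is computed from the content itself, with no external fact-checking required.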

Several legislative proposals in Europe suggest a way forward, but it remains to be seen whether any of these laws will be passed. There is considerable public and political skepticism about regulations in general and about governments stepping in to regulate social media content in particular. This skepticism is at least partially justified because paternalistic interventions may, if done improperly, result in censorship. The Chinese government’s censorship of internet content is a case in point. During the pandemic, some authoritarian states, such as Egypt, introduced “fake news laws” to justify repressive policies, stifling opposition and further infringing on freedom of the press. In March 2022, the Russian parliament approved jail terms of up to 15 years for sharing “fake” information (that is, information contradicting the official government position) about the war against Ukraine, causing many foreign and local journalists and news organizations to limit their coverage of the invasion or to withdraw from the country entirely.

In liberal democracies, regulations must not only be proportionate to the threat of harmful misinformation but also respectful of fundamental human rights. Fears of authoritarian government control must be weighed against the dangers of the status quo. It may feel paternalistic for a government to mandate that platform algorithms must not radicalize people into bubbles of extremism. But it’s also paternalistic for Facebook to weight anger-evoking content five times more heavily than content that makes people happy, and it is far more paternalistic to do so in secret.

The best solution lies in shifting control of social media from unaccountable corporations to democratic agencies that operate openly, under public oversight. There’s no shortage of proposals for how this might work. For example, complaints from the public could be investigated. Settings could preserve user privacy by default instead of waiving it.

In addition to guiding regulation, tools from the behavioral and cognitive sciences can help balance freedom and safety for the public good. One approach is to research the design of digital architectures that more effectively promote both accuracy and civility of online conversation. Another is to develop a digital literacy tool kit aimed at boosting users’ awareness and competence in navigating the challenges of online environments.

Achieving a more transparent and less manipulative online media may well be the defining political battle of the 21st century.


This story originally appeared on OpenMind, a digital magazine tackling science controversies and deceptions.
