Agents with links to the Russian government set up an endless array of fake accounts and websites and purchased a slew of advertisements on Google and Facebook, spreading dubious claims that seemed intended to sow division all along the political spectrum, “a cultural hack,” in the words of one expert.
Yet the psychology behind social media platforms, the dynamics that make them such powerful vectors of misinformation in the first place, is at least as important, experts say, especially for those who think they’re immune to being duped. For all the suspicions about social media companies’ motives and ethics, it is the interaction of the technology with our common, often subconscious psychological biases that makes so many of us vulnerable to misinformation, and this has largely escaped notice.
Skepticism of online “news” serves as a decent filter much of the time, but our innate biases allow it to be bypassed, researchers have found, especially when presented with the right kind of algorithmically selected “meme.”
At a time when political misinformation is in ready supply, and in demand, “Facebook, Google, and Twitter function as a distribution mechanism, a platform for circulating false information and helping find receptive audiences,” said Brendan Nyhan, a professor of government at Dartmouth College.
For starters, said Colleen Seifert, a professor of psychology at the University of Michigan, “People have a benevolent view of Facebook, for instance, as a curator, but in fact it does have a motive of its own. What it’s actually doing is keeping your eyes on the site. It’s curating news and information that will keep you watching.”
That kind of curating acts as a fertile host for falsehoods by simultaneously engaging two predigital social-science standbys: the urban myth as “meme,” or viral idea; and individual biases, the automatic, subconscious presumptions that color belief.
The first process is largely data-driven, experts said, and built into social media algorithms. The wide circulation of bizarre, easily debunked rumors (so-called Pizzagate, for example, the canard that Hillary Clinton was running a child sex ring from a Washington-area pizza parlor) is not entirely dependent on partisan fever (though that was its origin).
For one, the common wisdom that these rumors gain circulation because most people conduct their digital lives in echo chambers or “information cocoons” is exaggerated, Dr. Nyhan said.
In a forthcoming paper, Dr. Nyhan and colleagues review the relevant research, including analyses of partisan online news sites and Nielsen data, and find the opposite. Most people are more omnivorous than presumed; they are not confined in warm bubbles containing only agreeable outrage.
But they don’t have to be for fake news to spread fast, research also suggests. Social media algorithms function at one level like evolutionary selection: Most lies and false rumors go nowhere, but the rare ones with appealing urban-myth “mutations” find psychological traction, then go viral.
There is no precise formula for such digital catnip. The point, experts said, is that the very absurdity of the Pizzagate lie could have boosted its early prominence, no matter the politics of those who shared it.
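The evolutionary-selection analogy can be made concrete with a toy branching-process model. In the sketch below, every parameter (share probability, audience size per reshare) is an illustrative assumption, not a figure from the article; the point is only that a small bump in how “catchy” a rumor is can push it across a critical threshold from fizzling out to cascading.

```python
import random

def simulate_cascade(stickiness, followers=20, max_steps=10, seed_shares=1):
    """Simulate one rumor spreading as a simple branching process.

    Each person who sees the rumor reshares it with probability
    `stickiness`; every reshare exposes `followers` new people.
    Returns the total number of shares before the cascade dies out
    or `max_steps` generations pass.
    """
    total = active = seed_shares
    for _ in range(max_steps):
        exposed = active * followers
        active = sum(1 for _ in range(exposed) if random.random() < stickiness)
        total += active
        if active == 0:
            break
    return total

random.seed(42)
# A "dull" rumor: stickiness * followers = 0.6 < 1, so cascades
# almost always die out quickly (subcritical branching).
dull = [simulate_cascade(0.03) for _ in range(1000)]
# A "catchy" mutation: stickiness * followers = 1.6 > 1, so a
# sizable fraction of cascades keep growing (supercritical).
catchy = [simulate_cascade(0.08) for _ in range(1000)]
print("dull rumors over 100 shares:", sum(c > 100 for c in dull))
print("catchy rumors over 100 shares:", sum(c > 100 for c in catchy))
```

Under these made-up numbers, nearly all of the low-stickiness rumors die within a generation or two, while the slightly stickier variant routinely passes 100 shares, mirroring the "most go nowhere, a rare few go viral" dynamic the researchers describe.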
“My experience is that once this stuff gets going, people just pass these stories on without even necessarily stopping to read them,” Mr. McKinney said. “They’re just participating in the conversation without stopping to look hard” at the source.
Digital social networks are “dangerously effective at identifying memes that are well adapted to surviving, and these also tend to be the rumors and conspiracy theories that are hardest to correct,” Dr. Nyhan said.
One reason is the raw pace of digital information sharing, he said: “The networks make information run so fast that it outruns fact-checkers’ ability to check it. Misinformation spreads widely before it can be downgraded in the algorithms.”
The extent to which Facebook and other platforms function as “marketers” of misinformation, similar to the way they market shoes and makeup, is contentious. In 2015, a trio of behavioral scientists working at Facebook inflamed the debate in a paper published in the prominent journal Science.
The authors analyzed the newsfeeds of some 10 million users in the United States who posted their political views, and concluded that “individuals’ choices played a stronger role in limiting exposure” to contrary news and commentary than Facebook’s own algorithmic ranking, which gauges how interesting stories are likely to be to individual users, based on data they have provided.
Outside critics lashed the study as self-serving, while other researchers said the analysis was solid and without apparent bias.
The other dynamic that works in favor of proliferating misinformation is not embedded in the software but in the biological hardware: the cognitive biases of the human brain.
Purely from a psychological point of view, subtle individual biases are at least as important as rankings and choice when it comes to spreading bogus news or Russian hoaxes, like a false report of Muslim men in Michigan collecting welfare for multiple wives.
For starters, merely understanding what a news report or commentary is saying requires a temporary suspension of disbelief. Mentally, the reader must temporarily accept the stated “facts” as possibly true. A cognitive connection is made automatically: Clinton-sex offender, Trump-Nazi, Muslim men-welfare.
And refuting those false claims requires a person to first mentally articulate them, reinforcing a subconscious connection that lingers far longer than people presume.
Over time, for many people, it is that false initial connection that stays the strongest, not the retractions or corrections: “Was Obama a Muslim? I seem to remember that….”
In a recent analysis of the biases that help spread misinformation, Dr. Seifert and co-authors named this and several other automatic cognitive connections that can buttress false information.
Another is repetition: Merely seeing a news headline multiple times in a newsfeed makes it seem more credible before it is ever read carefully, even if it’s a fake item being whipped around by friends as a joke.
And, as salespeople have known forever, people tend to value the information and judgments offered by good friends over all other sources. It’s a psychological tendency with significant consequences now that nearly two-thirds of Americans get at least some of their news from social media.
“Your social alliances affect how you weight information,” said Dr. Seifert. “We overweight information from people we know.”
The casual, social, wisecracking nature of thumbing through and participating in the digital exchanges allows these biases to operate all but unchecked, Dr. Seifert said.
Stopping to drill down and determine the true source of a foul-smelling story can be tricky, even for the motivated skeptic, and mentally it’s hard work. Ideological leanings and viewing choices are conscious, downstream factors that come into play only after automatic cognitive biases have already had their way, abetted by the algorithms and social nature of digital interactions.
“If I didn’t have direct evidence that all these theories were wrong” from the scanner, Mr. McKinney said, “I might have taken them a little more seriously.”