Misinformation is not a new problem, but there are plenty of indications that the advent of social media has made things worse. Academic researchers have responded by trying to understand the scope of the problem: identifying the social media networks most saturated with misinformation, organized government efforts to spread false information, and even prominent individuals who act as sources of misinformation.
All of that’s potentially valuable data. But it skips over another major contribution: average individuals who, for one reason or another, seem inspired to spread misinformation. A study released today looks at a large panel of Twitter accounts associated with US-based voters (the work was done back when X was still Twitter). It identifies a small group of misinformation superspreaders: just 0.3 percent of the accounts, yet responsible for sharing 80 percent of the links to fake news sites.
While you might expect these to be young, Internet-savvy individuals who automate their sharing, it turns out this population tends to be older, female, and very, very prone to clicking the “retweet” button.
Finding superspreaders
The work, done by Sahar Baribi-Bartov, Briony Swire-Thompson, and Nir Grinberg, relies on a panel of over 650,000 Twitter accounts that have been matched to voting registrations in the US using full names and location information. Those voting records, in turn, provide demographic information about the individuals, along with locations that can be associated with the average demographics of their voting district. All of these users were active on the platform in the lead-up to the 2020 elections, although the study stopped before the post-election surge in misinformation.
The researchers first identified tweets made by these users that contained political content, using a machine-learning classifier whose calls had previously been validated against human judgments. They focused on those tweets that contained links to news sites. Those links were then checked against a list of “news” websites that were known to disseminate election misinformation.
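The core of that last step is a simple domain check. As a rough illustration only (the study's actual code and domain list aren't public, and the domains below are invented placeholders), the matching might look something like this:

```python
from urllib.parse import urlparse

# Placeholder set standing in for the study's list of flagged "news" sites.
MISINFO_DOMAINS = {"fakenews-example.com", "electionlies-example.org"}

def shares_misinfo_link(tweet_urls):
    """Return True if any URL in the tweet points to a flagged domain."""
    for url in tweet_urls:
        # Normalize the host: lowercase it and drop a leading "www."
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        if domain in MISINFO_DOMAINS:
            return True
    return False

print(shares_misinfo_link(["https://www.fakenews-example.com/story"]))  # True
print(shares_misinfo_link(["https://reliable-example.net/report"]))     # False
```

Note that, as the article points out, a check like this flags the *site*, not the individual story, which is exactly the caveat discussed below.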
This approach has a couple of caveats. The researchers can’t confirm whether the voter in question had full control (or any control) over their account during the election season. And the accuracy of the individual stories behind the shared links wasn’t tested, so while these sites may have been consistent sources of misinformation, there’s still a chance they published some accurate articles that were shared. Still, given the size of the population being checked and the corresponding number of tweets, these aren’t likely to be major considerations.
From this population, Baribi-Bartov, Swire-Thompson, and Grinberg identify just 2,107 accounts that are responsible for 80 percent of the tweets linking to sources of misinformation. They refer to these as misinformation superspreaders. For their analyses, the superspreaders are compared both to a random sample of the total population and to the heaviest sharers of links to reliable news sources.
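The "0.3 percent of accounts, 80 percent of links" framing amounts to ranking accounts by how many misinformation links they shared and taking the smallest top group that covers 80 percent of all such shares. A minimal sketch of that calculation (the function name and toy counts are illustrative, not from the paper):

```python
def find_superspreaders(share_counts, threshold=0.80):
    """Given {account: misinfo_links_shared}, return the smallest set of
    top sharers that together account for `threshold` of all shares."""
    total = sum(share_counts.values())
    # Rank accounts from heaviest to lightest sharer.
    ranked = sorted(share_counts.items(), key=lambda kv: kv[1], reverse=True)
    covered, top = 0, []
    for account, count in ranked:
        if covered / total >= threshold:
            break
        top.append(account)
        covered += count
    return top

# Toy data: five accounts, 100 total shares.
counts = {"a": 50, "b": 30, "c": 10, "d": 5, "e": 5}
print(find_superspreaders(counts))  # ['a', 'b'] cover 80 of 100 shares
```

In the study's data, this kind of cutoff lands on just 2,107 of the 650,000-plus accounts.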