Despite strict policies to ban people like former President Donald Trump and stop the spread of incendiary comments, a new study shows misinformation and hate speech increased online in the days after the Capitol riots.
Advertising analytics company DoubleVerify, in a report examining trends from 2020 and the beginning of this year, found a 21 percent increase in "inflammatory news and political content" on websites — the term the company uses to classify fake news and misinformation — in the week following the January 6 riots. It also found hate speech tripled in the 10 days after the events compared to the same period before. "The issues of hate speech and the issues of disinformation and misinformation are not necessarily issues of the social media platform," said DoubleVerify senior policy manager Zachary Hecht. "They are problems of society."
While many critics worried that removing certain people and blocking hashtags on popular social platforms might violate freedom of speech, the study shows bad actors were able to find other widespread avenues. It may indicate that limiting expression on Facebook, Twitter, and other platforms doesn't necessarily remove opposing viewpoints from the Internet. It also highlights the challenging road ahead: figuring out how to let people share their thoughts while removing problematic content that persists everywhere online.
DoubleVerify tracks where ads run online in order to assure its clients that their marketing dollars are being spent in legitimate places. As a result, it has its finger on the pulse of the growth of misinformation and hate speech. Inflammatory content, in particular, increased 83 percent year-over-year during November 2020, especially around the time of the U.S. presidential election and the initial results of the Pfizer-BioNTech vaccine trials. Hate speech in June 2020 — the month following George Floyd's death and the beginning of social justice protests in the United States — was up 212 percent compared to the five months prior.
Besides sowing discord, there can be a financial incentive to create misinformation. Because many websites that publish fake news use trending terms to target their content, people searching for news on hot-button topics are more likely to find it. Websites hosting misinformation in November 2020, for example, received twice as many impressions as the year before, which could indicate twice the revenue.
"We also know disinformation pays, and we also know that financial incentives are a major motivator for this content," said Hecht.
Though it is challenging to moderate the online space, Hecht says there are some things everyone can do to make sure they don't help bad content proliferate. People need to be able to discern between legitimate and illegitimate news sources. Many companies including Google, Twitter, and Facebook have created labels to warn their users about potentially false information. Other services can refuse to host this kind of content, such as when AWS rescinded hosting services from Parler.
Another thing advertisers can do is to make sure they aren't funding fake news with their marketing dollars, even though these sites may get a lot of traffic. Part of this includes knowing where their ads are running.
"There are so many different levers that could be pulled," Hecht said. "It's all about creating an environment that is not suitable for disinformation funding and that disincentivizing funding."