November 13, 2020
Shortly after midnight the day after the election, President Donald Trump went back to his favorite online podium — Twitter — to make unsubstantiated claims about voter fraud.
"We are up BIG, but they are trying to STEAL the Election," he tweeted. "We will never let them do it. Votes cannot be cast after the Polls are closed!"
The tweet — and many subsequent posts — was labeled with a warning by Twitter as part of its stricter guidelines to curb election misinformation. For a while, Trump's feed was a stream of notices about false information. Users could still read the tweets if they wanted, but they first had to acknowledge that the veracity of his claims was in question. For several tweets, the ability to like, retweet, or comment was removed or severely restricted.
"Last night, we took quick action to limit engagement on a number of Tweets that may have needed more context or violated the Twitter Rules," a Twitter spokesperson said in a statement. "Our teams continue to monitor Tweets that attempt to spread misleading information about voting, accounts engaged in spammy behavior, and Tweets that make premature or inaccurate claims about election results. Our teams remain vigilant and will continue working to protect the integrity of the election conversation on Twitter."
In subsequent days, Facebook removed a group called "Stop The Steal," which it alleged was organizing around the delegitimization of the election, with some members calling for violence. YouTube also removed advertising from some videos presenting unofficial election results as truth and claiming voter fraud, though it stopped short of removing the clips altogether.
Though many have applauded online platforms' crackdown on false claims, others have questioned whether it comes too late, after years of allowing misinformation to spread.
More Buy-In From Marketers
Overall, though the platforms may seem like more heated places after the election, the stricter rules could make them less toxic going forward, and that could be good news.
"Anything that makes social media platforms safer and more engaging is a win-win, both for the people using them and for businesses advertising on them," said Yuval Ben-Itzhak, CEO of social media marketing company Socialbakers.
It could also mean more ad revenue for the companies in the long run if Twitter, Facebook, and YouTube can continue to monitor the content.
"The more they do to make themselves brand-safe — and I would include societal safety to make themselves brand-safe — you will see more brands would prefer to want to use the platforms," said Brian Wieser, global president of business intelligence at media agency GroupM.
Marketers don't want to see their brands next to false and controversial content. Programmatic ad platform TripleLift partnered with NewsGuard to find and remove sites with low credibility ratings so its clients' ads wouldn't run on those outlets.
"We fired 18 customers in the first week, at a material cost to our business," said Ben Winkler, senior vice president of agency strategy at TripleLift. "But we've found removing misinformation sites isn't just good for advertisers; it's good for legitimate publishers. Advertisers are already increasingly skittish about news sites, which frankly is a threat to democracy. If advertisers can be sure they're not funding misinformation, maybe they'll feel better about running on news."
Even YouTube's stance — to leave the content up but not allow companies to make money on their videos if there is misinformation — can be seen as a safer bet, Winkler added.
"YouTube isn't really 'regulating' anything," he said. "There's no 'right to monetization' enshrined in the Constitution. By leaving these videos up and demonetizing them, YouTube avoids the headaches and possibly boycotts of advertisers who find themselves not only next to this content, but directly subsidizing it."
Appearance of Bias
Still, stronger rules mean the platforms face accusations of political bias. Opponents of the stricter rules are asking whether Silicon Valley should control the floodgates on speech. In essence, these companies are choosing how information is disseminated, and the stricter they become, the more evident their power becomes.
"The algorithms are the content, and it makes the social media companies publishers," GroupM's Wieser explained. "They are liable for their algorithms."
While Section 230 of the Communications Decency Act provides these platforms immunity from lawsuits over content posted by third parties, Congress has already called into question whether these companies should be allowed to make their own rules without repercussions. The companies say they are open to regulation, but how much they are willing to give up is the question. With the government likely to remain divided for another term, it could be years before laws are changed.
"The issue remains that the social platforms are crying out for regulation," said Socialbakers' Ben-Itzhak. "But labels, AI and human fact-checkers alone are not enough to help the platforms to monitor and control and tag all of the content posted. Given the scale and open nature of the social media platforms, cracking down on harmful content is an uphill battle for regulators and the platforms themselves."