As the Democratic presidential field narrows and the Trump re-election campaign gears up, social media companies are facing pressure to better prepare for a potential onslaught of disinformation campaigns leading up to the 2020 election.
"I think there's every reason to worry that the purveyors of disinformation have learned from what the Russians did in 2016 and will be back with new, different, and more challenging forms of untruth," Paul Barrett, the deputy director of the NYU Stern Center for Business and Human Rights, told Cheddar.
Earlier this month, the center released its most recent study of online disinformation (authored by Barrett) and its guidance for social media companies. Among several recommendations, Barrett advises social media platforms to increase support for proposed regulations of online political ads and to institute more aggressive policies for removing false posts.
"Right now, they demote it or de-rank, or sometimes label [false content], but they leave it on the site and it's therefore available to be shared," he said. "I would urge once they confidently identify it as false they ought to go ahead and take it down."
For instance, Facebook decided to label as false and limit the reach of — but not to take down — a doctored video of House Speaker Nancy Pelosi earlier this summer.
"They've got to do more," he said, arguing that platforms should hire more content moderators and fact-checkers, and improve their filtering algorithms. The center also argues that major platforms should invest in teaching their users how to "take responsibility for recognizing false content" and increase collaboration between companies.
In particular, the study urges Facebook to pay more attention to Instagram (which Barrett says "remains vulnerable" to image- and meme-based disinformation) and to limit users' ability to forward content on its messaging service WhatsApp.
Barrett emphasized that there is a greater volume of disinformation produced domestically than by foreign actors. "In a meaningful way, we do this to ourselves," he said. "It's Americans distributing and disseminating disinformation from all different types of sources."
Still, he added that we should expect disinformation to come from Russia, and — potentially — Iran and China. He pointed to China's state-backed disinformation efforts in Hong Kong that aimed to undermine the region's pro-democracy protests.
"But not every foreign source of disinformation would have the same approach," he cautioned. "[Unlike Russia] presumably the Chinese would not be friendly toward President Trump."
He added that deepfakes are an emerging cause for concern. While these falsified videos have not yet been deployed in massive disinformation campaigns, Barrett said, the technology nonetheless warrants vigilance.
"The scariest real-world scenario is that on the eve of the election, a candidate is portrayed saying or doing something very embarrassing or illegal — or what-have-you — and there's no way to correct the record fast enough that voters would understand that this AI-driven false video is indeed not true."