As university researchers, big tech companies like Facebook and Microsoft, and even the Defense Department push efforts to detect and combat the spread of deepfakes, a handful of startups are embracing the technology behind these videos and trying to find ways to commercialize it.
These aren't clandestine operations hiding in the dark corners of the internet, manipulating political opinion, or scamming people for money. Instead, they want to harness the controversial AI-powered video technology for use by advertisers, sales representatives, and more.
One such company is Tel Aviv-based Canny AI, the startup behind the infamous deepfake that appeared to show Facebook CEO Mark Zuckerberg giving a speech. The clip, first reported by Motherboard, went viral in June.
Canny's cofounder, Omer Ben-Ami, explains that its technology, called video dialogue replacement, functions as an artificial-intelligence-powered form of dubbing. The company's AI "trains" on the face of an intended speaker — such as Mark Zuckerberg — essentially "studying" their facial movements and speaking style. It also trains on another video, one with new dialogue from another speaker (in the Zuckerberg deepfake's case, a voice actor).
Once its AI is "fluent" in both faces and dialogues, Canny can translate between them, enabling the startup to replace the speech in videos of high-profile figures like Zuckerberg or Kim Kardashian with dialogue that's entirely new.
This ability means that instead of re-filming a clip in every language, the same video could be dubbed, using voice actors, with previously unachievable realism.
Ben-Ami won't share how many clients he has, though he says they're primarily advertising and production companies. To demonstrate its technology to the public, Canny released a video of world leaders, including Donald Trump, Kim Jong-un, and Vladimir Putin, all "singing" John Lennon's "Imagine," an aspirational clip that reveals the strange non-realities Ben-Ami says we ought to expect.
Victor Riparbelli, the co-founder and CEO of U.K.-based synthetic video firm Synthesia AI, explains that a single ad could easily be localized to every country. "Basically, you're taking one advertisement and then creating many versions of it where you're slightly changing the script," he says. Synthesia has already worked with the BBC, Accenture, and even the Dallas Mavericks (the team's billionaire owner, Mark Cuban, is reportedly an investor in Synthesia).
An example of Synthesia's tech is its own feel-good video: an ad that aimed to raise global awareness of malaria, featuring British celebrity David Beckham "speaking" languages the soccer legend is clearly not fluent in.
Riparbelli adds that another potential deepfake customer base is large multinational companies, which could use the editing technology to easily produce the same corporate communication video in multiple languages.
While Synthesia and Canny, and deepfakes more broadly, have focused on building photorealistic video, startups like Modulate AI and Dessa are both working on artificial intelligence-powered tech for creating convincing, synthetic voice, which could presumably be combined with Ben-Ami and Riparbelli's video tech.
But synthetic video technology hasn't seen the warmest of welcomes, with many wondering whether the emergence of deepfakes could make us even less inclined to trust one another online, especially as the technology has become more openly available through online applications. Notably, deepfake apps have allowed online users to create doctored images of naked women, contributing to the proliferation of "revenge porn" and the online harassment of women.
Some politicians are considering how to curb, and even ban, the technology over concerns that deepfakes will simply inflame the scourge of fake news. One example: a deepfake PSA developed by Jordan Peele and BuzzFeed features the director as he appears to "take over" President Barack Obama's face and voice to call President Donald Trump a nasty name and share other opinions that the former commander-in-chief would be unlikely to articulate publicly.
That segment became one of several deepfakes of politicians that have raised concerns as to how these videos might be used to mislead the public, especially during an election season.
"The scariest real-world scenario is that on the eve of the election, a candidate is portrayed saying or doing something very embarrassing or illegal — or what-have-you — and there's no way to correct the record fast enough that voters would understand that this AI-driven false video is indeed not true," Paul Barrett, the deputy director of NYU Stern Center for Business and Human Rights, told Cheddar earlier this month.
But the startups aren't deterred (and neither provides its technology to the general public). Ben-Ami argues that synthetic video technology is inevitable and that "negative" applications shouldn't outweigh the positive. He likened unease over the emerging technology to concerns over 3D-printed homemade guns: "I don't think it makes sense just to disregard 3D-printing and all the good it can do just because someone misused it."
"The truth is you're surrounded by synthetic media already today," adds Synthesia's Riparbelli, pointing to Snapchat filters and greenscreens. Instead, he says concern should center on consent. "What you're more interested in is not if it's synthetic but if it's consensual or not." Synthesia has a policy that someone will not be re-enacted without their permission.
Both Ben-Ami and Riparbelli acknowledged the need for the public to better understand, and know how to spot, deepfakes, and both companies have staff involved in building detection technologies.
Meanwhile, brand-reputation monitoring services note that deepfakes will only worsen the threat fake news poses to companies. After all, had that fake video of Mark Zuckerberg been believed, it could have, at least temporarily, damaged the social media giant's already strained public reputation.
And the Zuckerberg video was not the first deepfake that featured a corporate leader. For instance, in May, Ad Age reported on how one creative professional made a deepfake imitating executives in an effort to land a job.
"Damaged reputation often results in decreased sales. Consumers who believe something about the company, that happens not to be true, are very likely not to do business with that company any more," explains William Comcowich, the acting CEO of the brand-reputation management service, which also offers a fake news-tracking service. He warns that the technology could be "a significant risk to publicly traded companies," and hazards that deepfakes could be used to manipulate the news in order to short-sell.
Jean-Claude Goldenstein, the head of the social intelligence firm CREOpoint, says that concern over doctored videos is gradually growing, pointing to aerospace brands that have grown increasingly nervous about fake videos claiming to have been shot aboard the Ethiopian Airlines flight that crashed in March, killing 157 people.
This summer, CREOpoint announced that it was granted a patent for a new method of monitoring online discussions about fake news, and the companies and leaders it affects, that relies on the integration of natural language processing and a network of human experts. The company says it could help track the proliferation of discussions about a deepfake video to indicate how "truthful" the content might be. Goldenstein's point: limiting the impact of deepfakes will be less about finding the person who created the doctored video, and more about the people in a social media system who spread misinformation, deliberately or not.
But developing methods of monitoring these doctored videos could be a race against time should the technology proliferate as the startups anticipate. Riparbelli says the tech available now is only "a very small glimpse into what the future of content creation is going to look like," and that, eventually, synthetic video will be "massively democratized," similar to how services like GarageBand helped make music production easier for the average user.
They emphasize that many of these new videos will be made in collaboration with executives and brand ambassadors. Some could even be interactive. Ben-Ami predicts that "to some extent, you're going to have a chatbot that looks like Kim Kardashian — and that really answers and will respond — I think that's going to be in the near future."
He says that might even work for those hoping to bring a dead celebrity back to life. That's if, Ben-Ami says, "they have the IP and it's legal."