
How fake-porn opponents are fighting back

Forensic and technical experts are working to combat deepfakes, but one says "it will probably forever be a cat-and-mouse game."

The best hope for fighting computer-generated fake-porn videos might come from a surprising source: the artificial intelligence software itself.

Technical experts and online trackers say they are developing tools that could automatically spot these "deepfakes" by using the software's skills against it, deploying image-recognition algorithms that could help detect the ways their imagery bends belief.
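
In practice, that often means training an image-recognition model to label frames as real or fake. The Python sketch below (using the PyTorch library) is a minimal illustration of such a classifier under toy assumptions; the tiny network, random frames, and labels are stand-ins for illustration, not any lab's actual detector.

```python
# A minimal sketch of the general approach, not any group's actual detector:
# a small convolutional network trained to label video frames real or fake.
import torch
import torch.nn as nn

class FakeFrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: how fake the frame looks

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = FakeFrameClassifier()
frames = torch.randn(4, 3, 224, 224)             # stand-in batch of frames
labels = torch.tensor([[1.], [0.], [1.], [0.]])  # 1 = fake, 0 = real
loss = nn.BCEWithLogitsLoss()(model(frames), labels)
loss.backward()  # one gradient step; a real detector trains on large labeled corpora
```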

The Defense Advanced Research Projects Agency, the Pentagon's high-tech research arm known as DARPA, is funding researchers with hopes of designing an automated system that could identify the kinds of fakes that could be used in propaganda campaigns or political blackmail. Military officials have advertised the contracts — code-named "MediFor," for "media forensics" — by saying they want "to level the digital imagery playing field, which currently favors the manipulator."

The photo-verification start-up Truepic checks videos for manipulation and saves the originals into a digital vault so other viewers — insurance agencies, online shoppers, antifraud investigators — can confirm them for themselves. The company wants to embed its software across a range of sensors and social-media platforms to validate footage against what it calls a "definitive point of truth."
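
The general idea behind such a vault can be shown with ordinary cryptographic hashing: fingerprint footage at the moment of capture, then check any later copy against the stored fingerprint. The Python sketch below is a toy version of that concept, not Truepic's actual system; the vault dictionary and photo IDs are hypothetical.

```python
# A toy sketch of point-of-capture verification in the spirit the article
# describes (not Truepic's actual system): fingerprint media at the source,
# then let later viewers check a copy against the stored fingerprint.
import hashlib

vault = {}  # stands in for a trusted registry of original-capture hashes

def register_original(photo_id: str, image_bytes: bytes) -> None:
    """Record the SHA-256 fingerprint of an image at capture time."""
    vault[photo_id] = hashlib.sha256(image_bytes).hexdigest()

def verify_copy(photo_id: str, image_bytes: bytes) -> bool:
    """True only if the copy is bit-for-bit identical to the registered original."""
    return vault.get(photo_id) == hashlib.sha256(image_bytes).hexdigest()

original = b"\x89PNG...raw image bytes..."
register_original("claim-4471", original)
print(verify_copy("claim-4471", original))            # True
print(verify_copy("claim-4471", original + b"edit"))  # False: any alteration breaks the hash
```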

The company's chief executive, Jeffrey McGregor, said its engineers are working to refine detection techniques by looking for the revealing giveaways of fakes: the soft fluttering of hair, the motion of ears, the reflection of light in a subject's eyes. One Truepic computer-vision engineer designed a test to look for the pulse of blood in a person's forehead, he said.
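
The pulse test rests on a real physiological signal: skin brightness varies faintly with each heartbeat, and a synthesized face tends to lack that rhythm. The Python sketch below illustrates the general recipe under simulated data: average a forehead region's green channel over time, then look for a dominant frequency in the human heart-rate band. It is a sketch of the idea, not Truepic's implementation.

```python
# A hedged sketch of the pulse-detection idea: track the average green-channel
# brightness of a forehead region across frames, then look for a dominant
# frequency in the human heart-rate band (roughly 0.7-4 Hz). Real systems
# need face tracking, illumination correction, and much more.
import numpy as np

fps = 30.0
t = np.arange(300) / fps   # ten seconds of video at 30 frames per second
# Stand-in signal: a real pipeline would average pixels in a tracked forehead
# region of interest; here we simulate a faint 72-bpm pulse plus sensor noise.
roi_green = 0.5 + 0.01 * np.sin(2 * np.pi * 1.2 * t) + 0.005 * np.random.randn(t.size)

signal = roi_green - roi_green.mean()            # remove the DC component
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)

band = (freqs > 0.7) & (freqs < 4.0)             # plausible heart rates: 42-240 bpm
peak_power = spectrum[band].max()
noise_floor = np.median(spectrum[band])
# A synthesized face tends to lack this periodic skin-color variation, so a
# weak peak relative to the noise floor is one (noisy) cue that a clip is fake.
print("pulse-like peak found" if peak_power > 5 * noise_floor else "no clear pulse")
```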

However, the rise of fake-spotting has spurred a technical blitz of detection, pursuit, and escape, in which digital con artists work to craft ever more deceptive fakes. In some recent pornographic deepfakes, the altered faces appear to blink naturally — a sign that creators have already conquered one of the telltale indicators of early fakes, in which the actors never closed their eyes.
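
Blink analysis, one of the earliest published deepfake cues, is commonly sketched with the eye aspect ratio of Soukupova and Cech, which collapses toward zero whenever an eye closes; a talking face whose ratio never dips is suspect. In the Python sketch below, the landmark coordinates and thresholds are illustrative assumptions; a real pipeline would take landmarks from a face-tracking library such as dlib or MediaPipe.

```python
# A sketch of the classic blink cue: the eye aspect ratio (EAR) drops sharply
# when an eye closes. Landmark coordinates here are hypothetical.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks around one eye, in the standard p1..p6 order."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical eyelid distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count dips below the threshold lasting at least min_frames frames."""
    blinks, run = 0, 0
    for ear in ear_series:
        run = run + 1 if ear < threshold else 0
        if run == min_frames:   # count each dip once, when it first qualifies
            blinks += 1
    return blinks

# Early deepfakes often produced EAR series with no dips at all: a clip of a
# talking face with zero blinks over thousands of frames is suspicious.
ears = [0.31, 0.30, 0.12, 0.09, 0.11, 0.29, 0.32, 0.30]
print(count_blinks(ears))  # 1
```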

Hany Farid, a Dartmouth College computer-science professor and Truepic adviser, said he receives a new plea every day from someone asking him to investigate what they suspect could be a deepfake. But the group of forensic specialists working to build these systems is "still totally outgunned," he said.

The underlying technology also continues to evolve: In September, researchers at DeepMind, the trailblazing AI firm owned by Google's parent company Alphabet, said they had trained the programs behind deepfakes, known as generative adversarial networks, or GANs, "at the largest scale yet attempted," allowing them to create high-quality fake images that looked more realistic than ever.
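
A GAN pits two networks against each other: a generator turns random noise into candidate samples, and a discriminator learns to tell those samples from real data, so each improves by trying to beat the other. The Python sketch below shows that adversarial loop on toy two-dimensional data; the network sizes and data distribution are assumptions for illustration, while the DeepMind work scaled the same recipe up to photorealistic images.

```python
# A compact sketch of the adversarial setup behind GANs, on toy data.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(64, 2) * 0.5 + 2.0  # toy "real" distribution

for step in range(200):
    # Discriminator step: label real samples 1, generated samples 0.
    fake = G(torch.randn(64, 16)).detach()
    d_loss = bce(D(real_data), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator into labeling fakes as real.
    fake = G(torch.randn(64, 16))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```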

"The counterattacks have just gotten worse over time, and deepfakes are the accumulation of that," McGregor said. "It will probably forever be a cat-and-mouse game."