Reliable Identification of 'Deepfakes' Remains Major Challenge in Cyberspace

Among thousands of submissions for Facebook's competition to identify AI-manipulated videos, the most effective algorithm is only about 65 percent accurate. There is growing concern that such doctored videos showing someone doing something outrageous or inflammatory could sway an election or trigger violence.


Following an open competition that drew thousands of submissions from around the world, a senior Facebook executive said June 12 that even though the most effective algorithm for identifying doctored videos is imperfect, the effort will help the company prevent misuse of its platforms in the future.

The contest to identify deepfakes, namely videos that were doctored using artificial intelligence, was held amid growing concern that it is only a matter of time before such an AI-produced clip showing someone doing something outrageous or inflammatory sways an election or triggers violence.

The social network said its first-ever Deepfake Detection Challenge found that among the thousands of algorithms submitted, even the most effective one for spotting AI-manipulated videos is only about 65 percent accurate. 

About 2,100 participants were evaluated based on how their algorithms handled two test sets: a public test set of 10-second clips created specifically for the competition, and a private test set that contestants could not access, half of which were taken from the internet.

Facebook said several algorithms scored above 82 percent accuracy on the public test set, but their accuracy dropped significantly on the other set. The winning algorithm was only able to identify “challenging real world examples” of deepfakes with an average accuracy of about 65 percent, but even that was a significant achievement, according to the company.
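To make the gap between the two scores concrete, here is a minimal sketch of how per-set accuracy on labeled clips could be computed. This is purely illustrative; the function and the sample labels below are invented and do not reflect the competition's actual evaluation code or data.

```python
# Hypothetical sketch of per-test-set accuracy scoring; all names and
# sample data are invented for illustration.

def accuracy(predictions, labels):
    """Fraction of clips whose predicted label matches the ground truth."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# A model that does well on a curated public set can still drop sharply
# on unseen "real world" clips, as the contest results showed.
public_preds  = ["fake", "fake", "real", "fake", "real"]
public_truth  = ["fake", "fake", "real", "fake", "fake"]
private_preds = ["fake", "real", "real", "fake", "real", "real"]
private_truth = ["fake", "fake", "fake", "fake", "real", "fake"]

print(accuracy(public_preds, public_truth))    # 0.8
print(accuracy(private_preds, private_truth))  # 0.5
```

The same model scores 80 percent on one set and 50 percent on the other, mirroring the drop contestants saw between the public clips and the withheld real-world clips.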

“Honestly, the contest has been more of a success than I could have ever hoped for,” said Mike Schroepfer, Facebook’s chief technology officer, in a telephone call with the media. He added that researchers could use the results as a basis for more progress. 

He said deepfakes are “currently not a big issue” for Facebook but the company wants to have detection tools ready for possible use in the future. 

“The lesson I learned the hard way over the last couple of years is I want to be prepared in advance and not be caught flat-footed,” the CTO said. “I want to be really prepared for a lot of bad stuff that never happens rather than the other way around.”

Schroepfer also said Facebook is developing its deepfake detection technology separately from the contest. “We have deepfake detection technology in production and we will be improving it based on this contest,” he said. According to the company, the technology is being kept secret to prevent it from being reverse-engineered.

Under the terms of the competition, the winner of the $500,000 top prize, Selim Seferbekov, who according to his LinkedIn profile is a machine learning engineer in Belarus, must make his algorithm publicly available. The Facebook CTO said the company's engineers were likely to borrow techniques and ideas from the winning algorithm. 

The social network announced the competition along with Microsoft, Amazon Web Services and several other partners in September 2019, with total prize money of $1 million.

Facebook said most of the top algorithms tried to detect subtle aberrations in the faces, or parts of the faces, in the videos, rather than relying on digital forensic techniques such as searching for the “digital fingerprints” that cameras, unlike software, leave behind on images.
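The “digital fingerprint” idea rests on the observation that real camera pixels carry faint sensor noise that purely synthesized pixels may lack. A crude way to expose such high-frequency residue is to subtract a local average from the pixel values. The pure-Python sketch below is a toy illustration of that principle only, not a production forensic method; every name in it is invented.

```python
# Toy illustration of the "digital fingerprint" forensic idea: keep the
# high-frequency residual left after subtracting a moving average.
# Invented names; not an actual deepfake-detection implementation.

def high_pass_residual(pixels, window=3):
    """Subtract a moving average from a 1-D pixel row, keeping the noise."""
    half = window // 2
    residual = []
    for i in range(len(pixels)):
        lo, hi = max(0, i - half), min(len(pixels), i + half + 1)
        local_mean = sum(pixels[lo:hi]) / (hi - lo)
        residual.append(pixels[i] - local_mean)
    return residual

row = [10, 12, 11, 13, 55, 12, 10, 11]   # one pixel unlike its neighbors
res = high_pass_residual(row)
print(max(res, key=abs))                  # the anomalous pixel dominates
```

Real forensic techniques work on full 2-D noise patterns (e.g. sensor noise statistics) rather than a single row, but the intuition is the same: pixels that did not pass through a camera tend to stand out in the residual.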
