Facebook, Microsoft and the Partnership on AI among organizations supporting the Deepfake Detection Challenge

This December, the Conference on Neural Information Processing Systems (NeurIPS) will host the first Deepfake Detection Challenge. The competition, which is supported by Facebook, Microsoft, and the Partnership on AI, along with several U.S. and U.K. universities, including MIT, the University of Oxford, Cornell Tech, UC Berkeley, the University of Maryland, College Park, and the University at Albany-SUNY, will give participants the opportunity to improve the detection of altered media, or deepfakes, using a data set commissioned by Facebook.

The use of AI to tamper with and distort media, colloquially known as deepfakes, has become a growing concern among both the public and lawmakers, who fear the technology could be used in a variety of unethical and malicious ways. To date, misinformation and disinformation have been based primarily on false written information; deepfake technology has the potential to change this, adding a visual dimension to the distortion of information we encounter online.

Why it matters

While altered media has been a subject of academic study for over 20 years, it has risen to public attention only in the last two years. In a 2017 article, brilliantly titled AI-Assisted Fake Porn Is Here and We’re All Fucked, Samantha Cole reported on the then-new phenomenon of celebrities’ faces being superimposed onto porn videos using AI. Similar technology has since been used to alter video and audio content of numerous public figures, including Barack Obama, Mark Zuckerberg, and (a truly shocking number featuring) Nicolas Cage.

In a recent article for Wired, Sam Whitney argued that the perceived threat of deepfakes may be overstated; the primary harm of altered media, he suggests, will continue to fall on individuals whose images are manipulated and then used to bully or harass them.

Others contend that manipulated media constitutes a serious global threat to truth itself, and by extension, politics, elections, and democracy. A report out of NYU’s Stern School of Business in September 2019 listed deepfakes as one of eight primary threats to the upcoming 2020 U.S. election.

University of Oxford Professor Philip Torr is among the academics contributing to the Deepfake Detection Challenge. Torr explains why he considers deepfakes a fundamental threat to democracy:

Manipulated media being put out on the internet, to create bogus conspiracy theories and to manipulate people for political gain, is becoming an issue of global importance, as it is a fundamental threat to democracy, and hence freedom.

Regardless of whether such technology is employed to undermine elections, cause individual harm, or sow public discord more generally, the rise of deepfakes and altered media is an imminent problem that as yet lacks a solution.

Organizations such as Witness and Deeptrace have been working on the problem of deepfakes for the past several years. (Witness is a nonprofit dedicated to tracking the impacts of altered media, while Deeptrace focuses on developing tech to detect such content.) So far, however, no comprehensive defense against deepfake technology has emerged.

The Deepfake Detection Challenge hopes to change that. The event will be the first major contest aimed at developing open source methods of identifying and mitigating the use of deepfake technology. The official website of the competition describes the event as an invitation to “people around the world to build innovative new technologies that can help detect deepfakes and tampered media. Identifying tampered content is technically challenging as deepfakes rapidly evolve, so we’re working together to build better detection tools.”
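
To make the detection task concrete: a common starting point for a challenge entry is a per-frame binary classifier whose scores are averaged into a video-level prediction. The sketch below (in PyTorch) is purely illustrative; the model architecture, the names FrameClassifier and score_video, and the frame-averaging strategy are assumptions for exposition, not the challenge’s prescribed baseline.

```python
# Illustrative sketch only: a tiny per-frame deepfake classifier.
# All names and the architecture here are hypothetical, not the
# Deepfake Detection Challenge's official baseline.

import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Tiny CNN that scores a single RGB frame: P(frame is a deepfake)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 224 -> 112
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 112 -> 56
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling -> (batch, 32, 1, 1)
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        # x: (batch, 3, H, W) float tensor of frames
        feats = self.features(x).flatten(1)
        return torch.sigmoid(self.head(feats)).squeeze(1)  # fake probability per frame

def score_video(model, frames):
    """Average per-frame fake probabilities into one video-level score."""
    with torch.no_grad():
        return model(frames).mean().item()

model = FrameClassifier()
dummy_frames = torch.rand(8, 3, 224, 224)  # 8 frames of a hypothetical clip
print(f"video-level fake score: {score_video(model, dummy_frames):.3f}")
```

Averaging per-frame scores is the simplest possible aggregation; serious entries would likely add face detection and cropping, temporal models, and features targeting the specific artifacts that generation methods leave behind.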

The event will be co-sponsored by Microsoft, the Partnership on AI, and Facebook, with the latter contributing over $10 million to the project.

As Facebook CTO Mike Schroepfer explained in a blog post following the event’s announcement, the rise of deepfakes marks a new and significant shift in the nature of disinformation:

“Deepfake” techniques… have significant implications for determining the legitimacy of information presented online. Yet the industry doesn’t have a great data set or benchmark for detecting them. We want to catalyze more research and development in this area and ensure that there are better open source tools to detect deepfakes.
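
A benchmark of the kind Schroepfer describes needs an agreed-on scoring rule. Binary classification contests are often scored with log loss (binary cross-entropy) over submitted fake probabilities; the snippet below shows how such a score might be computed. The choice of metric and the example labels and predictions are assumptions for illustration, not confirmed details of the challenge.

```python
# Illustrative benchmark scoring: log loss over fake probabilities.
# The metric and the sample data are assumptions, not the challenge's
# confirmed evaluation protocol.

import math

def log_loss(y_true, y_pred, eps=1e-15):
    """Binary cross-entropy: lower is better; ~0.693 means guessing 0.5 everywhere."""
    total = 0.0
    for label, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += label * math.log(p) + (1 - label) * math.log(1 - p)
    return -total / len(y_true)

# Hypothetical benchmark: 1 = fake, 0 = real, with a model's fake probabilities.
labels = [1, 0, 1, 1, 0]
preds  = [0.92, 0.10, 0.65, 0.80, 0.30]
print(f"log loss: {log_loss(labels, preds):.4f}")
```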

Since 2016, Facebook has faced a torrent of criticism for a range of ethically questionable behaviors. The extractive nature of Facebook’s “free” service and the effects of its brokering of personal data, particularly for political purposes, have been topics of concern among lawmakers and the public, as has the company’s failure to police false information on its platform. (Facebook maintains, under Section 230 of the Communications Decency Act, that it is not responsible for the content that appears on its site.)

The company’s support of the Deepfake Detection Challenge is a positive, ethically grounded decision to proactively address an urgent digital problem. For this, Facebook, Microsoft, and all the sponsors of the project should be applauded.