OpenAI doesn’t want people to use DALL-E for deepfakes

OpenAI is introducing a detection tool for content produced by its widely used image generator, DALL-E. The leading AI startup acknowledges, however, that the tool is only a small piece of what will be needed to combat the spread of advanced synthetic media, commonly known as deepfakes.

OpenAI is developing a “deepfake detector”

On Tuesday, according to The New York Times, OpenAI announced plans to distribute this new deepfake detection tool to a select group of disinformation researchers. This will allow them to evaluate the tool’s effectiveness in real-world scenarios and identify potential areas for enhancement.

OpenAI reported that its new tool correctly identifies 98.8 percent of images produced by DALL-E 3, the most recent iteration of its image generator. The tool is not designed to recognize images from other widely used generators, however, such as Midjourney and Stability AI’s Stable Diffusion. And because this kind of detection is probabilistic, producing a confidence score rather than a certainty, it can never be perfect. Consequently, OpenAI, along with other organizations including nonprofits and academic institutions, is exploring additional strategies to address the problem.
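To make that concrete: a detector of this kind outputs a confidence score rather than a yes/no verdict, and whoever deploys it must choose a decision threshold that trades false positives against false negatives. The Python sketch below is a generic illustration of that pattern; `score_image` is a hypothetical stand-in, not OpenAI’s actual tool or interface.

```python
import random

def score_image(image_bytes: bytes) -> float:
    """Stand-in for a real detector model: returns the probability that
    the image is AI-generated. A real detector would run a trained
    classifier here; this dummy just derives a deterministic score."""
    random.seed(len(image_bytes))
    return random.random()

def classify(image_bytes: bytes, threshold: float = 0.5) -> str:
    """Binary decision derived from a probabilistic score. The choice of
    threshold trades false positives against false negatives, which is
    one reason no detector of this kind can be perfect. Accuracy on one
    generator's output (e.g., DALL-E 3) also does not transfer to others."""
    score = score_image(image_bytes)
    label = "likely AI-generated" if score >= threshold else "likely authentic"
    return f"{label} (score={score:.3f})"

print(classify(b"\x89PNG...example image bytes"))
```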

Similar to major technology firms like Google and Meta, OpenAI has joined the steering committee of the Coalition for Content Provenance and Authenticity (C2PA). This group aims to establish a standard akin to a “nutrition label” for digital content, which would detail the origins and modifications of images, videos, audio clips, and other files, including those altered by AI.
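In rough terms, a C2PA manifest is a signed bundle of “assertions” about a file’s origin and edit history that travels with the file. The Python dictionary below sketches what such a record might contain; it is loosely modeled on the published C2PA specification, and the field values are illustrative rather than taken from any real implementation.

```python
import json

# Simplified, illustrative sketch of C2PA-style provenance metadata.
# The real standard (https://c2pa.org) defines signed manifests embedded
# in the media file itself; this dict only mirrors its general shape.
manifest = {
    "claim_generator": "ExampleTool/1.0",  # software that produced the claim
    "assertions": [
        {
            "label": "c2pa.actions",  # what was done to the asset
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # Flags the asset as AI-generated media
                        "digitalSourceType": "trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
    "signature": "<cryptographic signature over the manifest>",  # placeholder
}

print(json.dumps(manifest, indent=2))
```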


OpenAI is also developing techniques to embed “watermarks” in AI-generated audio to facilitate immediate identification. The company is focused on ensuring these watermarks are resistant to removal, enhancing the traceability of synthetic content.
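OpenAI has not published how its audio watermarks work, but one classic family of techniques is spread-spectrum watermarking: a low-amplitude pseudorandom pattern derived from a secret key is added to the signal, and a detector holding the same key correlates against it. The NumPy toy below illustrates only that general idea, not the company’s method, and omits the psychoacoustic shaping and robustness measures real systems require.

```python
import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.002) -> np.ndarray:
    """Toy spread-spectrum watermark: add a key-derived pseudorandom
    +/-1 pattern at low amplitude. Real schemes are far more
    sophisticated (masking, robustness to compression and resampling)."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * pattern

def detect_watermark(audio: np.ndarray, key: int) -> float:
    """Correlate the audio with the key's pattern; a score near the
    embedding strength suggests the watermark is present, near zero not."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    return float(np.dot(audio, pattern) / len(audio))

audio = np.random.default_rng(0).normal(0, 0.1, 48_000)  # 1 s of noise at 48 kHz
marked = embed_watermark(audio, key=42)
print(detect_watermark(marked, key=42))  # ~0.002 -> watermark detected
print(detect_watermark(audio, key=42))   # ~0.0   -> no watermark
```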

The industry is under growing scrutiny to manage the outputs of its technologies responsibly. Experts are urging companies to prevent the creation of deceptive and harmful content and to develop methods for tracking its origins and distribution.

With key elections occurring around the world this year, the demand for tools that can trace the origins of AI-generated content has intensified. Manipulated audio and video have already influenced political campaigns and elections in countries such as Slovakia, Taiwan, and India.

While OpenAI’s newly introduced deepfake detector offers one way to mitigate these issues, it is not a complete solution. As OpenAI researcher Sandhini Agarwal noted, there is no silver bullet in the fight against deepfakes; it remains a complex challenge requiring multiple, complementary approaches.


Featured image credit: Jonathan Kemper/Unsplash
