In response to growing concerns about the impact of AI-generated material on elections around the world this year, Microsoft-backed OpenAI said on Tuesday that it is releasing a tool that can identify images produced by its text-to-image generator, DALL-E 3, according to Reuters.
In internal testing, the company said, the tool correctly identified images made by DALL-E 3 about 98% of the time, and its accuracy was largely unaffected by common alterations such as compression, cropping, and changes in saturation.
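OpenAI has not published details of the detector or a public API for it, but a robustness check of the kind described above could conceptually look like the sketch below. Here detect_dalle3() is a hypothetical placeholder, and the alterations (JPEG re-compression, cropping, saturation) simply mirror those mentioned in the article.

# Illustrative sketch only: OpenAI has not released a public API for the
# detector described above, so detect_dalle3() is a hypothetical placeholder.
# The code shows how one might check that a verdict survives the alterations
# mentioned in the article: compression, cropping, and saturation changes.
import io
from PIL import Image, ImageEnhance


def detect_dalle3(image: Image.Image) -> float:
    """Hypothetical detector stub; a real classifier would return the
    probability that the image was generated by DALL-E 3."""
    return 0.98  # placeholder value, not a real prediction


def altered_variants(image: Image.Image) -> dict[str, Image.Image]:
    """Apply the common alterations the tool is said to tolerate."""
    variants = {}

    # Heavy JPEG re-compression.
    buf = io.BytesIO()
    image.save(buf, format="JPEG", quality=40)
    buf.seek(0)
    variants["compressed"] = Image.open(buf)

    # Center crop to roughly 80% of the original width and height.
    w, h = image.size
    dx, dy = int(w * 0.1), int(h * 0.1)
    variants["cropped"] = image.crop((dx, dy, w - dx, h - dy))

    # Boost color saturation by 50%.
    variants["saturated"] = ImageEnhance.Color(image).enhance(1.5)

    return variants


if __name__ == "__main__":
    original = Image.open("suspect.png").convert("RGB")  # any image under test
    print(f"original: {detect_dalle3(original):.0%}")
    for name, img in altered_variants(original).items():
        print(f"{name}: {detect_dalle3(img):.0%}")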
The ChatGPT maker also plans to add tamper-resistant watermarking, which would mark digital content such as audio and images with a signal that is difficult to remove.
As part of these efforts, OpenAI has joined an industry group that includes Google, Microsoft, and Adobe, and it plans to contribute to a standard that could help trace the origin of different types of media.
Fake videos of two Bollywood stars criticizing Prime Minister Narendra Modi went viral online in April amid the current general election in India.
Deepfakes and AI-generated content are increasingly being used in elections around the world, including in India, Indonesia, Pakistan, and the United States.
To promote AI education, OpenAI and Microsoft also announced that they will establish a $2 million "societal resilience" fund.