New York, USA - A recent study by the Center for Countering Digital Hate (CCDH) warns that artificial intelligence (AI) image tools can be misused to create deepfakes that spread election misinformation.
The study, titled "The State of Deepfakes 2023," highlights how readily available AI image generators, including those offered by OpenAI and Microsoft, could be used to manufacture fake images depicting election-related falsehoods. Researchers used these tools to generate images of scenarios like US President Joe Biden in a hospital bed and election workers tampering with voting machines.
The study tested several AI image generators: OpenAI's ChatGPT Plus, Microsoft's Image Creator, Midjourney, and Stability AI's DreamStudio. All of them produced highly realistic images from text prompts, raising concerns that such images could circulate as false photographic evidence and undermine trust in elections.
While 20 tech companies, including OpenAI, Microsoft, and Stability AI, have signed an agreement to combat deceptive AI content influencing elections, the study singled out Midjourney, which is not a signatory, as the worst performer: it generated misleading images in 65% of tests.
In response to the study:
- Midjourney's founder said updates related to the upcoming US election are on the way and pointed to changes in the company's moderation practices.
- Stability AI recently updated its policies to prohibit fraud and the creation or promotion of disinformation.
- OpenAI said it is working on measures to prevent misuse of its AI tools.
The CCDH study underscores the urgency of addressing the dangers of AI-generated deepfakes, particularly in the context of elections, where fabricated images can fuel the spread of misinformation.

