TLDR:
- AI-generated images are being used as political propaganda in the 2024 election
- These images often spread after politically charged events, regardless of factual accuracy
- Some politicians and supporters double down on sharing AI images even after they’re debunked
- AI images can evoke strong emotions before viewers process what they’re seeing
- The proliferation of AI content is partly filling gaps left by shrinking traditional news media
As the 2024 U.S. election season heats up, a new form of political propaganda has emerged: images generated by artificial intelligence (AI). These synthetic visuals spread rapidly on social media platforms, often in the wake of major news events or political controversies.
One notable example occurred after Hurricane Helene. An AI-created image showing a crying child holding a puppy on a boat received millions of views online.
Many users, particularly Republicans critical of the Biden administration’s disaster response, shared the image with emotional comments.
However, observant viewers quickly pointed out signs that the image was likely AI-generated, such as malformed limbs and blurriness typical of some AI image generators.
Even after the image’s synthetic nature was revealed, some continued to share it. Amy Kremer, a Republican National Committee member from Georgia, wrote on X (formerly Twitter): “I don’t know where this photo came from and honestly, it doesn’t matter.”
This response highlights a growing trend where the emotional impact of an image is seen as more important than its factual accuracy.
“This picture has been seared into my mind. My heart hurts 💔😭,” Kremer added in the same post, dated October 3, 2024.
Similar AI-generated images appeared following other recent events. After Hurricanes Helene and Milton, images likely created by AI showed animals on roofs barely above floodwaters.
When former President Trump and his running mate JD Vance made claims about Haitian immigrants in Springfield, Ohio, AI-generated images of Trump cuddling cats and ducks flooded social media platforms popular with Republicans.
Renée DiResta, a professor at Georgetown University’s McCourt School of Public Policy, describes this phenomenon as “a form of political propaganda, a way to signal interest and support for a candidate, almost like in a fandom kind of style.” She notes that campaigns can then amplify these images, positioning themselves as part of the conversation.
The use of AI-generated images in political contexts raises concerns about truth and facts in visual media. Matthew Barnidge, a professor at the University of Alabama, points out that the separation between facts and the idea of a deeper truth has roots in Western philosophy. However, in the context of modern political communication, this separation can be problematic.
Research suggests that fact-checking can impact voters’ views of politicians, but images can be particularly challenging to verify. Emily Vraga, a health communication researcher at the University of Minnesota, notes that people often struggle to distinguish between real and fake online imagery, a problem that predates recent AI advancements.
The proliferation of AI-generated content is partly filling gaps left by the shrinking traditional news industry. As legacy media outlets have reduced coverage, especially in local news, the space has been filled by various forms of content, including propaganda. Barnidge observes, “Propaganda often presents itself not as news, but kind of seeps in in other ways through lifestyle content.”
Politically inspired images are just a fraction of the AI-generated content online. Researchers have identified AI-created images of various subjects, from cakes to fictional religious imagery.
Some of this content is created for traffic or monetary gain, with investigations revealing that people in developing countries are teaching others to make trending posts using AI-generated images to earn money from social media platforms.
The widespread use of AI-generated images in political contexts has raised concerns about election integrity. Eddie Perez, a former Twitter employee now focusing on election confidence at the nonpartisan OSET Institute, warns that such images could undermine trust in election results by fanning political polarization.