AI-generated images showing extreme poverty, children, and survivors of sexual violence are increasingly appearing on stock photo sites and being used by leading health NGOs, raising alarm among global health professionals over a new wave of “poverty porn.”
“All over the place, people are using it,” said Noah Arnold of Fairpicture, a Swiss organization that promotes ethical imagery in global development. “Some are actively using AI imagery, and others are experimenting with it.”
Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp who studies the production of global health images, said the pictures replicate familiar stereotypes: children with empty plates, cracked earth, and other visual clichés. Alenichev has collected more than 100 AI-generated images used by NGOs and individuals in social media campaigns about hunger and sexual violence. In a Lancet Global Health article, he described the trend as “poverty porn 2.0.”
The prevalence of AI-generated images is hard to measure, but experts say their use is growing, driven by concerns over consent and by cost pressures. Arnold noted that US funding cuts to NGOs have made the situation worse. “Organizations are considering synthetic images because they are cheap and sidestep consent issues,” he said.
Stock photo sites like Adobe Stock and Freepik now offer dozens of AI-generated poverty images, often with racialized and exaggerated captions. Examples include “Photorealistic kid in refugee camp” and “Asian children swim in a river full of waste.” Some images sell for about £60 per license.
“These images are deeply racialized and reinforce harmful stereotypes about Africa and Asia,” Alenichev said.
Freepik CEO Joaquín Abela said that responsibility for using such images lies with media consumers rather than with platforms. While the site tries to reduce bias in other areas of its catalogue, he acknowledged that fully controlling AI-generated content is difficult.
Leading charities have already used AI-generated images in campaigns. In 2023, Plan International Netherlands released a campaign against child marriage featuring AI-generated images of a girl with a black eye, an older man, and a pregnant teenager. Last year, the UN posted a video with AI-generated re-enactments of sexual violence, which was later removed after concerns were raised.
A UN spokesperson said the video, produced over a year ago, was taken down due to improper AI use and risks to information integrity. The UN reaffirmed its commitment to support survivors of conflict-related sexual violence through innovative advocacy.
Experts warn that using AI images without consent undermines ethical storytelling in global health campaigns. Arnold said, “It is easier to use ready-made AI visuals because they do not involve real people.”
NGO communications consultant Kate Kardol said the trend troubled her. “It saddens me that the fight for more ethical representation now extends to images that are not even real,” she said.
AI tools can replicate and amplify societal biases, and widespread use of AI poverty images may worsen prejudice online. Alenichev added that these images could train future AI models, potentially spreading harmful stereotypes further.
Plan International now advises against using AI to depict individual children. The NGO stated that their 2023 campaign used AI imagery to protect the privacy and dignity of real girls while advocating against child marriage.