OpenAI’s new video app, Sora, is drawing alarm for creating hyperrealistic AI videos of deceased public figures. Users have generated clips showing figures like Karl Marx, Martin Luther King Jr., and Princess Diana in surreal, often offensive scenarios. The app launched in October in the US and Canada via invitation only and hit one million downloads within five days.
Sora allows anyone to type a prompt and receive a 10-second video within minutes. Unlike much of the low-quality AI footage circulating online, Sora videos have high production value and can be shared on the app's TikTok-style feed or exported elsewhere. Users can depict celebrities, politicians, and other famous people, but only if those people are deceased: living individuals must give consent before their likeness can be used, while the dead receive no such protection.
The main feed is filled with bizarre content. Users have depicted Adolf Hitler in a shampoo ad, Queen Elizabeth II falling off a pub table, and Martin Luther King Jr. making jokes at a gas station. Families of those depicted have expressed distress. Ilyasah Shabazz, Malcolm X's daughter, described seeing her father's likeness in crude and disrespectful videos. OpenAI announced it had paused clips of King after requests from his estate while it worked to strengthen safety controls.
Other families have raised similar concerns. Zelda Williams, Robin Williams’s daughter, urged users to stop generating AI videos of her father, calling them disrespectful and inconsistent with his wishes. Kelly Carlin, daughter of George Carlin, called the videos “overwhelming and depressing.” Videos also feature Stephen Hawking, Kobe Bryant, and Amy Winehouse, often in shocking or disturbing situations.
Experts warn that this type of AI content risks distorting history and the legacies of public figures. Henry Ajder, a generative AI researcher, said Sora could change how people remember the dead. The app’s algorithm rewards shock value, producing content that is often grotesque or offensive.
Legal experts note that AI depictions of the deceased remain a grey area. While living people have protections under US libel and publicity laws, most states do not extend similar rights to the dead. A minority, including New York, California, and Tennessee, recognize postmortem publicity rights. Even there, estates may struggle to hold AI companies liable under current law, including Section 230 protections.
Some argue OpenAI is testing legal boundaries. By allowing users to depict the dead, the company can explore what content is permissible while minimizing its own liability. Generating videos purely for entertainment, with watermarks and non-commercial intent, may comply with existing laws. However, monetized AI content could expose users and the company to lawsuits from estates if they profit from the likenesses of the deceased.
In response to backlash, OpenAI said families of recently deceased public figures could request that their likenesses be blocked on the platform. The company has not defined “recently deceased” or detailed how these requests will be handled. OpenAI also shifted to an opt-in model for copyright holders after some content raised infringement concerns.
Experts predict ongoing legal disputes as courts work out where AI liability falls. Generative AI researcher Bo Bergstedt described the current situation as a "Whac-A-Mole" approach, with guardrails adjusted reactively as problems arise. The Sora controversy raises broader questions about who controls a person's image and legacy in the age of AI. Henry Ajder warns that it would be worrying if people simply accepted that their likeness could appear in hyperrealistic AI content without their consent.
As Sora continues to grow, families, legal experts, and the public are watching closely. The app highlights the tension between technological innovation, entertainment, and respect for historical and personal legacies in an era of AI-generated media.