Researchers Warn Against Commercial Exploitation of AI ‘Deadbots’ in Advertising
New concerns are emerging about the use of AI ‘deadbots’, digital avatars programmed to simulate deceased individuals, as a novel advertising platform, a prospect that could erode ethical boundaries in the digital afterlife industry.
Deadbots, also referred to as griefbots or postmortem avatars, are artificial intelligence chatbots that replicate the personality, language patterns, and behavioral quirks of people who have died. They are created by training AI models on the digital footprints the deceased leave behind, such as text messages and social media data. These bots let users hold conversations with AI re-creations of the dead, offering a controversial new form of digital remembrance and mourning.
While some see deadbots as a comforting tool for the bereaved, researchers and ethicists warn about their commercialization and potential for manipulation. A key concern is the prospect of monetizing these AI avatars through advertising. Companies could, for instance, insert targeted ads into conversations with deadbots, or program the bots to harvest personal preference data from users that marketers could then exploit to push products or services. Such tactics would amount to advertising breaks embedded in highly personal and emotionally sensitive interactions.
Jason Quinn, an AI researcher, noted that firms are already exploring ways to profit from AI avatars of both living and deceased individuals, including endorsement-style interactions and data collection through conversational AI. He cautioned that while many implementations remain internal or experimental, widespread deadbot monetization is likely only a matter of time, raising concerns about ethical misuse.
Researchers at the University of Cambridge have issued urgent calls for safety protocols and design safeguards to prevent what they term “unwanted hauntings” by these AI chatbots. Their studies, published in the journal Philosophy & Technology, highlight risks such as psychological harm to users, unsolicited advertising spam from digital afterlife services, and the exploitation of deceased persons’ identities without family consent. These AI representations could become a tool for companies not only to commodify the deceased’s likeness but also to intrude on relatives’ privacy.
Additional studies emphasize the risk of misrepresentation: a deadbot may inaccurately simulate its subject’s personality or views, tarnishing the memory of the departed. Furthermore, once consent or contractual agreements for a deadbot’s creation are in place, families may find themselves powerless to halt the bot’s operation, even over their objections.
Cases such as that of Joshua Barbeau, who engaged with a chatbot simulating his late girlfriend, illustrate both the emotional depth and potential dangers involved with these AI companions. Deadbots raise profound questions about grief, memory, and the digital afterlife, now complicated by pressures toward commercialization.
Industry experts urge transparent guidelines, ethical design frameworks, and regulatory oversight to ensure that deadbots serve the interests of the grieving without becoming platforms for intrusive advertising or exploitative commercial practices.
As AI technology continues to advance, the collision of innovation, ethics, and profit in the realm of digital afterlife avatars demands a pressing dialogue among researchers, technologists, and society at large.