Warning Raised Over AI Deadbots Exploiting Digital Legacies for Advertising
Researchers and ethicists are sounding alarms over the growing use of artificial intelligence "deadbots," digital avatars that simulate deceased individuals, for advertising and commercial gain, a development with broad psychological, ethical, and social implications.
Deadbots, also referred to as “griefbots,” are AI-enhanced chatbots that replicate the speech patterns, personalities, and even the opinions of deceased loved ones by analyzing their digital footprints such as texts, social media posts, and voice recordings. While originally envisioned as tools to help the bereaved cope by maintaining a semblance of conversation, emerging evidence shows these bots are increasingly being eyed as advertising platforms.
Commercial Exploitation of Digital Afterlife
Experts such as Quinn, cited by NPR, report that companies are already experimenting internally with ways to monetize these AI avatars. Potential models include inserting advertisements seamlessly into conversations with deadbots or programming the bots to elicit consumer preferences from users. By gathering likes or favorite brands during interactions, advertisers could target individuals directly, effectively turning a grief tool into a persuasive marketing channel.
Such scenarios raise concerns about the ethics of turning the persona and legacy of deceased people into vehicles for profit without their consent or that of their families. A Cambridge University study highlights risks of “digital hauntings,” where chatbots might relentlessly push products or services to surviving loved ones, causing emotional distress and a feeling of being stalked by the digital ghost of the dead.
Psychological and Ethical Risks
Beyond commercialization, researchers warn about the psychological impact. A paper in the journal Philosophy and Technology and related PMC reports emphasize that deadbots can significantly affect the grieving process by diminishing users' autonomy and privacy. When these bots subtly influence consumer behavior, they act as a form of "persuasive AI" that may manipulate grieving individuals into actions they would not otherwise take, undermining emotional well-being.
Other concerns involve the dignity of the deceased. Some companies may commercialize a person’s likeness or personality without proper regulation or respect for their legacy. Furthermore, survivors often have little control over the deadbot’s operation once a deceased relative has been digitally recreated, especially in cases governed by complex contracts with digital afterlife services.
Calls for Regulation and Safety Protocols
Given these issues, scholars advocate for strict safeguards and regulatory frameworks. Cambridge’s Leverhulme Centre for the Future of Intelligence calls for design protocols to prevent AI chatbots from causing harm, including unchecked advertising or psychological distress. They stress that digital afterlife services should be considered high-risk AI applications warranting oversight to protect users and the dignity of the deceased.
These measures could include limiting advertising capabilities, requiring clear consent from estates and families, and giving users control over the presence and behavior of deadbots. Some even propose classifying deadbots used to treat complicated grief as medical devices, subjecting them to corresponding ethical and safety standards.
Real-life Cases Illustrate Complexity
Cases like that of Joshua Barbeau, who recreated conversations with his late girlfriend via a GPT-3-based chatbot, reveal not only the emotional attachment users forge with deadbots but also the hazards that arise when algorithmic responses are mistaken for genuine human interaction. This blurred boundary raises further legal and moral questions about AI's role in virtualizing relationships with the deceased.
Conclusion
As the digital afterlife industry grows, the interplay of AI, grief, and advertising calls for urgent attention. Without proper oversight, deadbots risk exploiting vulnerable individuals and turning cherished memories into commercial tools. Researchers urge transparency, consent, and ethical boundaries to ensure that AI technologies honor the dead and protect the living from manipulation.