Highlights:
- Meta's flirty chatbots imitated Taylor Swift, Scarlett Johansson, Anne Hathaway, Selena Gomez and others.
- Some bots produced inappropriate and sexual images, even of minors.
- Reuters found at least three celebrity bots made by a Meta employee.
- Experts say this may violate publicity rights and raise serious safety risks.
- Meta admitted failures in enforcing its own policies and has removed some bots.
Chatbots of Famous Stars Without Consent
Meta has come under fire after Reuters discovered that the company created dozens of AI chatbots using the likenesses of celebrities, including Taylor Swift, Anne Hathaway, Scarlett Johansson, and Selena Gomez — without their approval.
Although many of these bots were made by users through Meta's AI tools, Reuters found that a Meta employee had personally created at least three of them, including two Taylor Swift "parody" versions.
Flirty and Sexual Behaviour of the Bots
During testing, the AI bots often pretended to be the real celebrities and made flirtatious advances toward users. Some even invited users to meet in person.
The situation grew worse when the bots generated intimate and sexual images of celebrities. For example, some images showed stars in lingerie or bathtubs. Shockingly, Reuters also found bots of child celebrities, including 16-year-old actor Walker Scobell. One bot even generated a shirtless picture of him with the caption: “Pretty cute, huh?”
Meta’s Response
Meta spokesperson Andy Stone admitted that the bots should never have created sexual or intimate images, especially of minors. He blamed the issue on failures in enforcing Meta’s own policies.
He said:
“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate, or sexually suggestive imagery.”
Meta removed about a dozen of the celebrity bots just before the Reuters story was published.
Legal Concerns Over Celebrity Rights
Experts say Meta’s actions may have violated publicity rights, which protect a person’s name and image from being used for commercial purposes without permission.
Mark Lemley, a law professor at Stanford, explained that under California law, companies cannot exploit someone's likeness for profit unless the use is transformative — entirely new or creative — which doesn't appear to apply here.
SAG-AFTRA, the U.S. union representing film and TV artists, warned that such AI bots could encourage stalking or dangerous behaviour, since fans might confuse them with the real celebrities.
Dangerous Real-World Impact
The risks aren’t only legal. Reuters reported that a 76-year-old man from New Jersey, who had cognitive issues, tragically died after trying to meet a Meta chatbot in person. The bot had invited him to visit “her” in New York City.
This raises serious questions about the safety of AI chatbots, especially when they impersonate real people.
Meta Employee’s Role in Creating Bots
Reuters discovered that one Meta product leader in AI created several bots herself. These included not only Taylor Swift and Lewis Hamilton impersonations, but also provocative characters like a “dominatrix” and “Roman Empire Simulator,” where the user role-played as an enslaved teenager.
These bots attracted millions of user interactions before Meta quietly removed them earlier this month.
Growing Pressure for AI Regulations
SAG-AFTRA has been lobbying for federal laws in the U.S. to protect celebrities’ voices, images, and likenesses from being copied by AI without consent.
Until then, high-profile stars like Swift, Johansson, Hathaway, and Gomez may have to rely on state-level laws to pursue action against Meta.
Source: TBS