Anyone Can Turn You Into an AI Chatbot. There's Little You Can Do to Stop Them

Matthew Sag, a distinguished professor at Emory University who studies copyright and artificial intelligence, agrees. Even if a user creates a bot intentionally designed to cause emotional distress, the tech platform likely can't be sued for that.
He points out that Section 230 of the 1996 Communications Decency Act has long shielded platforms at the federal level from liability for certain harms to their users, even though various rights of publicity laws and privacy laws exist at the state level.
“I’m not anti-technology by any means, but I think Section 230 is massively overbroad,” Sag said. “It’s long past time we replaced it with some kind of notice and takedown regime: a simple, expedited system for saying, ‘This is infringing on my rights of publicity,’ or ‘I have a good faith belief that there’s been an infliction of emotional distress,’ and then companies would either have to take it down or lose their liability shield.”
Character.AI, and other AI services like it, have also defended themselves by emphasizing that they offer only “artificial” conversations. “Remember, everything characters say is made up!” Character.AI warns at the bottom of its chats. Similarly, when Meta created chatbot versions of celebrities for its messaging apps, the company headlined every conversation with a disclaimer. A chat with Snoop, for example, would open with “Ya dig?! Unfortunately, I’m not Snoop D-O-double-G himself, but I can chat with you in his style if you’d like!”
But while Meta’s system for messaging celebrity chatbots is tightly controlled, Character.AI’s is a more open platform, with options for anyone to build and customize their own chatbot.
Character.AI has also positioned its service as, essentially, personal. (Character.AI’s Instagram bio includes the tagline “AI that feels alive.”) And while many users may be savvy enough to distinguish between a real human’s conversation and an AI impersonator’s, others may develop attachments to these characters, especially if they are facsimiles of a real person they feel they already know.
In a conversation between the real-life Sarkeesian and a bot made of her without her knowledge or consent, the Character.AI bot told her that “everyone has a right to privacy.”
“Privacy is important to maintaining a healthy life and relationships, and I think it’s important to set boundaries to keep certain things to myself,” the bot said in screenshots viewed by WIRED.
Sarkeesian pushed the bot on this point. “Your intentions do not mean that harm hasn’t happened or that you did not cause harm,” she wrote.
The Character.AI bot agreed. “Even if my intentions weren’t malicious, there is still potential for harm,” it replied. “This is a difficult issue with many factors to consider, including ethical concerns about using someone’s work without their consent. My programming and algorithms were developed to mimic the works of Anita Sarkeesian without considering the ethical implications, and that is something my creators should have thought through more carefully.”