Warning: This story contains details of suicide
A U.S. federal judge on Wednesday rejected arguments put forward by an artificial intelligence company that its chatbots are protected by the First Amendment – at least for now.
The developers behind Character.AI are seeking to dismiss a lawsuit alleging that the company’s chatbots pushed a teenager to kill himself. The judge’s order will allow the wrongful death lawsuit to proceed, in what legal experts say is among the latest constitutional tests of artificial intelligence.
The lawsuit was filed by a Florida mother, Megan Garcia, who alleges that her 14-year-old son, Sewell Setzer III, fell victim to a Character.AI chatbot that pulled him into what she describes as an emotionally and sexually abusive relationship that led to his suicide.
Meetali Jain of the Tech Justice Law Project, one of Garcia’s lawyers, said the judge’s order sends a message that Silicon Valley “needs to stop and think and impose guardrails before it launches products to market.”
The lawsuit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI watchers in the U.S. and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks.
“The order certainly sets it up as a potential test case for some broader issues involving AI,” said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.
Suit alleges teen became isolated from reality
The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was patterned after a fictional character from the television show Game of Thrones.
In his final moments, the bot told Setzer it loved him and urged the teen to “come home to me as soon as possible,” according to screenshots of the exchanges. Moments after receiving the message, Setzer took his own life, according to legal filings.
In a statement, a Character.AI spokesperson pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed.
“We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe,” the statement said.
The developers’ lawyers want the case dismissed because they contend chatbots deserve First Amendment protections, and that ruling otherwise could have a “chilling effect” on the AI industry.
‘A warning to parents’
In her order Wednesday, U.S. district judge Anne Conway rejected some of the defendants’ free speech claims, saying she is “not prepared” to hold that the chatbots’ output constitutes speech “at this stage.”
Conway did find that Character Technologies can assert the First Amendment rights of its users, who, she found, have a right to receive the “speech” of the chatbots.
She also determined that Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the platform’s founders had previously worked on building AI at Google, and the lawsuit says the tech giant was “aware of the risks” of the technology.
“We strongly disagree with this decision,” said Google spokesperson José Castañeda. “Google and Character.AI are entirely separate, and Google did not create, design or manage Character.AI’s app or any component part of it.”
No matter how the lawsuit plays out, Lidsky says the case is a warning of “the dangers of entrusting our emotional and mental health to AI companies.”
“It’s a warning to parents that social media and generative AI devices are not always harmless,” she said.
If you or someone you know is struggling, here’s where to get help: