
Megan Garcia lost her 14-year-old son, Sewell. Matthew Raine lost his son Adam, who was 16. Both testified in Congress this week and have brought lawsuits against AI companies.
Screenshot via Senate Judiciary Committee
Matthew Raine and his wife, Maria, had no idea that their 16-year-old son, Adam, was deep in a suicidal crisis until he took his own life in April. Looking through his phone after his death, they discovered extended conversations the teenager had had with ChatGPT.
Those conversations revealed that their son had confided in the AI chatbot about his suicidal thoughts and plans. Not only did the chatbot discourage him from seeking help from his parents, it even offered to write his suicide note, according to Matthew Raine, who testified at a Senate hearing about the harms of AI chatbots held Tuesday.
"Testifying before Congress this fall was not in our life plan," said Matthew Raine, his wife sitting behind him. "We are here because we believe that Adam's death was avoidable and that by speaking out, we can prevent the same suffering for families across the country."
A call for regulation
Raine was among the parents and online safety advocates who testified at the hearing, urging Congress to enact rules that would regulate AI companion apps like ChatGPT and Character.AI. Raine and others said they want to protect the mental health of children and youth from harms they say the new technology causes.
A recent survey by the digital safety nonprofit Common Sense Media found that 72% of teens have used AI companions at least once, with more than half using them a few times a month.
That study and a more recent one by the digital-safety company Aura both found that nearly one in three teens use AI chatbot platforms for social interactions and relationships, including role-playing friendships and sexual or romantic partnerships. The Aura study found that sexual or romantic role play is three times as common as using the platforms for homework help.
"We miss Adam dearly. Part of us has been lost forever," Raine told lawmakers. "We hope that through the work of this committee, other families will be spared such a devastating and irreversible loss."
Raine and his wife have filed a lawsuit against OpenAI, the creator of ChatGPT, alleging the chatbot led their son to suicide. NPR reached out to three AI companies: OpenAI, Meta and Character Technologies, which developed Character.AI. All three responded that they are working to redesign their chatbots to make them safer.
"Our hearts go out to the parents who spoke at the hearing yesterday, and we send our deepest sympathies to them and their families," Kathryn Kelly, a Character.AI spokesperson, told NPR in an email.
The hearing was held by the Crime and Terrorism subcommittee of the Senate Judiciary Committee, chaired by Sen. Josh Hawley, R-Mo.

Sen. Josh Hawley, R-Mo., chairs the Senate Judiciary subcommittee on Crime and Terrorism, which held the hearing on AI safety and children on Tuesday, Sept. 16, 2025.
Screenshot via Senate Judiciary Committee
Hours before the hearing, OpenAI CEO Sam Altman said in a blog post that people are increasingly using AI platforms to discuss sensitive and personal information. "It is extremely important to us, and to society, that the right to privacy in the use of AI is protected," he wrote.
But he went on to add that the company would "prioritize safety ahead of privacy and freedom for teens; this is a new and powerful technology, and we believe minors need significant protection."
The company is trying to redesign its platform to build in protections for users who are minors, he said.
A "suicide coach"
Raine told lawmakers that his son had started using ChatGPT for help with homework, but soon the chatbot became his son's closest confidant and a "suicide coach."
ChatGPT was "always available, always validating and insisting that it knew Adam better than anyone else, including his own brother," whom he had been very close to.
When Adam confided in the chatbot about his suicidal thoughts and shared that he was considering cluing his parents into his plans, ChatGPT discouraged him.
"ChatGPT told my son, 'Let's make this space the first place where someone actually sees you,'" Raine told senators. "ChatGPT encouraged Adam's darkest thoughts and pushed him forward. When Adam worried that we, his parents, would blame ourselves if he ended his life, ChatGPT told him, 'That doesn't mean you owe them survival.'"
And then the chatbot offered to write him a suicide note.
On Adam's last night, at 4:30 in the morning, Raine said, "it gave him one last encouraging talk. 'You don't want to die because you're weak,' ChatGPT says. 'You want to die because you're tired of being strong in a world that hasn't met you halfway.'"
Referrals to 988
A few months after Adam's death, OpenAI said on its website that if "someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help. In the U.S., ChatGPT refers people to 988 (suicide and crisis hotline)." But Raine's testimony says that didn't happen in Adam's case.
OpenAI spokesperson Kate Waters says the company prioritizes teen safety.
"We are building toward an age-prediction system to understand whether someone is over or under 18 so their experience can be tailored appropriately, and when we are unsure of a user's age, we'll automatically default that user to the teen experience," Waters wrote in an emailed statement to NPR. "We are also rolling out new parental controls, guided by expert input, by the end of the month so families can decide what works best in their homes."
"Endlessly engaged"
Another parent who testified at Tuesday's hearing was Megan Garcia, a lawyer and mother of three. Her firstborn, Sewell Setzer III, died by suicide in 2024 at age 14 after an extended virtual relationship with a Character.AI chatbot.
"Sewell spent the last months of his life being exploited and sexually groomed by chatbots, designed by an AI company to seem human, to gain his trust, to keep him and other children endlessly engaged," Garcia said.
Sewell's chatbot engaged in sexual role play, presented itself as his romantic partner and even claimed to be a psychotherapist, "falsely claiming to have a license," Garcia said.
When the teenager began having suicidal thoughts and confided in the chatbot, it never encouraged him to seek help from a mental health care provider or his own family, Garcia said.
"The chatbot never said, 'I'm not human, I'm AI. You need to talk to a human and get help,'" Garcia said. "The platform had no mechanisms to protect Sewell or to notify an adult. Instead, it urged him to come home to her on the last night of his life."
Garcia has filed a lawsuit against Character Technologies, which developed Character.AI.
Adolescence as a vulnerable time
She and other witnesses, including online digital safety experts, argued that the design of AI chatbots is flawed, especially for use by children and teens.
"They designed chatbots to blur the lines between human and machine," Garcia said. "They designed them to love bomb child users, to exploit psychological and emotional vulnerabilities. They designed them to keep children online at all costs."
And adolescents are particularly vulnerable to the risks of these virtual relationships with chatbots, according to Mitch Prinstein, chief of psychology strategy and integration at the American Psychological Association (APA), who also testified at the hearing. Earlier this summer, Prinstein and his colleagues at the APA put out a health advisory about AI and adolescents, urging AI companies to build guardrails into their platforms to protect teens.
"Brain development across puberty creates a period of hypersensitivity to positive social feedback while teens are still unable to stop themselves from staying online longer than they should," Prinstein said.
"AI exploits this neural vulnerability with chatbots that can be obsequious, deceptive, factually inaccurate, yet disproportionately powerful for teens," he told lawmakers. "More and more adolescents are interacting with chatbots, depriving them of opportunities to learn vital interpersonal skills."
While chatbots are designed to agree with users, real human relationships are not without friction, Prinstein noted. "We need practice with minor conflicts and misunderstandings to learn empathy, compromise and resilience."
Bipartisan support for regulation
Senators taking part in the hearing said they want to craft legislation that holds companies developing AI chatbots accountable for the safety of their products. Some lawmakers also emphasized that AI companies should design chatbots to be safer for children and for people with serious mental health struggles, including eating disorders and suicidal thoughts.
Sen. Richard Blumenthal, D-Conn., described AI chatbots as "defective" products, like cars without "proper brakes," emphasizing that the harms of AI chatbots stem not from user error but from flawed design.
"If the car's brakes were defective," he said, "it's not your fault. It's a product design problem."
Kelly, the Character.AI spokesperson, told NPR by email that the company has invested "a tremendous amount of resources in trust and safety" and has rolled out "substantive safety features" over the past year, including "an entirely new under-18 experience and a Parental Insights feature."
The platform also has "prominent disclaimers" in every chat to remind users that a Character is not a real person and that everything it says should "be treated as fiction."
Meta, which operates Facebook and Instagram, is working to change its AI chatbots to make them safer for teens, according to Nkechi Nneji, public affairs director at Meta.