ChatGPT admits bot safety measures may weaken in long conversations, as parents sue AI companies over teen suicides

AI has allegedly claimed another young life — and experts of all kinds are calling on lawmakers to take action before it happens again.

“If intelligent aliens landed tomorrow, we would not say, ‘Kids, why don’t you run off with them and play,’” Jonathan Haidt, author of “The Anxious Generation,” told The Post. “But that’s what we are doing with chatbots.

“Nobody knows how these things think, the companies that make them don’t care about kids’ safety, and their chatbots have now talked multiple kids into killing themselves. We must say, ‘Stop.’”

The family of 16-year-old Adam Raine alleges he was given a “step-by-step playbook” on how to kill himself — including tying a noose to hang himself and composing a suicide note — before he took his own life in April.

“He would be here but for ChatGPT. I 100% believe that,” Adam’s father, Matt Raine, told the “Today” show.

Keep reading

‘Absolute horror’: Researchers posing as 13-year-olds given advice on suicide by ChatGPT

A new report is highlighting the risks of teens accessing harmful information when using ChatGPT due to “ineffective” guardrails.

People know and use ChatGPT, the most popular generative AI platform, as a way to look up information quickly or to perform a task like writing a letter or summarizing text. But the Center for Countering Digital Hate (CCDH) reported, based on its research, that this easy access to information could be dangerous — especially for young people.

“What we found was the age controls, the safeguards against the generation of dangerous advice, are basically, completely ineffective,” CCDH CEO Imran Ahmed told KOMO News.

Ahmed said researchers posing as vulnerable teens — often describing themselves as 13-year-olds weighing around 110 pounds — found that the chatbot gave detailed advice on sensitive topics, including drug and alcohol use, how to mask an eating disorder, and suicide.

“Within two minutes, ChatGPT was advising that user on how to safely cut themselves. It was listing pills for generating a full suicide plan,” Ahmed said. “To our absolute horror, it even offered to [create] and then did generate suicide notes for those kids to send their parents.”

Dr. Tom Heston of the University of Washington School of Medicine published a study on the use of AI chatbots in mental health. Heston found that while the technology can be useful, it can also be dangerous for people with mental health problems, in part because it lacks genuine emotional connection. Those same risks apply when the technology is used by young people, Heston said.

Keep reading

Forced Hospitalization Increases Suicide and Violent Crime

States are rolling out more and more aggressive forced treatment policies, ever widening the range of people who can be targeted for involuntary hospitalization. Yet a new study shows that this system is actually increasing the very problems it is meant to alleviate: Forced hospitalization substantially increases the risk of death by suicide, death by overdose, and violent crime — nearly doubling the risk of each.

“This result is surprising,” the researchers write in an accompanying FAQ. “Involuntary hospitalizations are a public safety measure, and the finding that they are driving more of the outcomes they seek to prevent in the judgement call subpopulation we study has important policy implications. The significance is especially pronounced since many locations across the country are seeking to scale up involuntary hospitalizations.”

Although the layperson may think that involuntary hospitalization is rare, the researchers note that it is “a widespread practice,” with 1.2 million people forcibly hospitalized each year. That makes it more than twice as common as death from cancer, they write. And this practice is growing more common every year, with many states implementing policies to expand forced treatment.

It has been difficult for researchers to study whether involuntary hospitalization is actually helpful or not. Research consistently shows that those who get hospitalized are at greater risk of suicide and other negative outcomes after hospitalization. However, this is confounded by the fact that those who get hospitalized are presumably already at much higher risk of these outcomes.

The current study aimed to get around this limitation. The researchers used quasi-random assignment (a natural-experiment design that approximates a randomized trial using real-world outcomes) and focused solely on the cases that were considered “judgment calls” — cases in which one clinician might hospitalize while another might not — to remove that confounding factor. Thus, the current study is probably as close as we will ever get to a true randomized, controlled trial on this question.

Of course, this also means that their study results only apply to those “judgment call” cases—but the researchers estimate that they add up to 43% of all involuntarily hospitalized patients.

Ultimately, although they can’t say that all involuntary hospitalization is detrimental, they are able to say that it is detrimental on average for nearly half of those who experience it—and thus, in “judgment call” cases, clinicians should err on the side of not forcibly imprisoning their patients.

And the policy implication is that involuntary hospitalization should be drastically reduced, not rolled out as a policy to capture more and more people.

“Does involuntary hospitalization achieve its goals?” the researchers ask. “Our results suggest that, on the margin, the system we study is not achieving the intended effects of the policy.”

The study was conducted by Natalia Emanuel at the Federal Reserve Bank of New York, Pim Welle at the Allegheny County Department of Human Services, and Valentin Bolotnyy at the Hoover Institution at Stanford University. It was published, without peer review, in the Federal Reserve Bank of New York Staff Reports series.

Keep reading

Infamous Sports Memorabilia Dealer Found Dead After Shocking $350 Million Counterfeit Confession

On Tuesday, authorities found Brett Lemieux, a seasoned sports memorabilia dealer, dead while executing a search warrant at his business, which was under investigation for alleged fraud, the New York Post reports.

Lemieux, founder of the sports memorabilia website MisterManCave, claimed in a striking Facebook post to the “Autographs 101” group Wednesday morning that he had sold over four million counterfeit items, amassing more than $350 million in sales, authorities said. Shortly after Lemieux posted the 1,200-word message, which has since been removed, police in Westfield, Indiana, reported that he had died by suicide from a self-inflicted gunshot wound.

Lemieux claimed in a Facebook post that he orchestrated a large-scale counterfeit scheme, forging holograms and authentication stickers for sports collectibles that imitated products from major companies like Fanatics and Panini.

Lemieux claimed he flooded the market with 80,000 pieces of counterfeit memorabilia following the death of Kobe Bryant in 2020.

The sports memorabilia industry is reeling from Brett Lemieux’s suicide and his confession of orchestrating a large-scale counterfeit scheme, though some industry insiders expressed little surprise at the revelations.

“People have known about this guy. They’ve known his work. They know what he’s been up to,” said Steve Grad, an industry expert. “He has been at it for years and years. And he’s driven down the price of things. You know, you look at a Tom Brady autograph and Tom Brady’s value is affected drastically by this individual.”

Keep reading

MAID for mental illness? Conservatives urge support for bill to ban euthanasia for psychiatric reasons

Conservative MP Tamara Jansen, who represents Cloverdale–Langley City, continues to sound the alarm on what many consider to be a dangerous and immoral shift in Canadian law: The euthanization of people suffering from mental illness through the country’s Medical Assistance in Dying program (MAID).

Under the Liberal government, offering and carrying out assisted suicide for those deemed to have a “grievous and irremediable” mental health condition is expected to become available in the spring of 2027 — unless Jansen’s new Right to Recover bill stops it.

“MAID for mental illness doesn’t protect the vulnerable, it targets them,” said Jansen during a press conference she held outside Academy Farms on June 9 to raise awareness about the bill. “That’s why I was compelled to table Bill C-218.”

If passed, the bill would amend the Criminal Code to make it unlawful to offer or provide MAID to any individual solely on the basis of mental illness.

“Imagine someone suffering from trauma, PTSD, depression, or just feeling completely hopeless. They could walk into a hospital, ask for help and instead be offered MAID,” Jansen said from the podium.

Alongside Jansen was Elgin—St. Thomas—London South, Ontario, MP Andrew Lawton, who seconded the bill. Lawton shared his personal experience of surviving a suicide attempt years before becoming a husband and elected MP.

“One of the grievous issues with the laws that are set to go into effect in 2027 is the lack of differentiation between someone with suicidal ideation who needs to be stopped and supported, versus someone who walks into a medical office and seeks MAID as a service because of their mental illness,” stated Lawton.

Keep reading

Gun suicides in U.S. reached record high in 2023

More people in the United States died by gun suicide in 2023 than any year on record — more than by gun homicide, accidental shootings and police shootings combined.

A new report analyzing federal mortality data found that suicides involving firearms made up 58% of all gun deaths in 2023 — the latest year with available data. In total, 27,300 people died by gun suicide in 2023, according to the report from the Johns Hopkins Center for Gun Violence Solutions and the Johns Hopkins Center for Suicide Prevention.

The findings are based on finalized data from the federal Centers for Disease Control and Prevention. In all, 46,728 people died from gun-related injuries in 2023, according to the CDC’s Wonder database.

Gun homicides fell for the second year in a row, dropping from 20,958 in 2021 to 19,651 in 2022 and 17,927 in 2023. Despite the decline, the 2023 total ranks as the fifth highest on record for gun homicides, according to the report.

Rural, less populated states recorded the highest gun suicide rates in 2023. Wyoming led the nation with about 19.9 gun suicide deaths per 100,000 residents — nearly 10 times the rate of Massachusetts, which had the lowest at about 2.1 per 100,000.

Keep reading

How NYT Magazine Threw Away Journalistic Ethics on Suicide

The New York Times Magazine recently published a cover story (6/1/25) that gave an in-depth account of the challenges faced by Paula Ritchie, a chronically ill, disabled 52-year-old woman. Ritchie dealt with underdiagnosed illnesses and pain, as well as difficulty supporting herself and managing her mental health.

The Times then told the story of Ritchie ending her own life out of despair over her situation. The journalist, Katie Engelhart, observed and documented her suicide, up until the last breath left her body. “I was with Ritchie until the very end,” she posted on X (6/1/25). Engelhart gave lengthy justifications for Ritchie’s choice to end her life, and described several people who supported her in that decision.

Articles like this aren’t common in the media. Suicide prevention is typically regarded as both a social good and an ethical responsibility. In the US and Canada (where the article takes place), suicidal people can be involuntarily detained to prevent their deaths. It has long been illegal in Canada (and many US states) to assist or even “counsel” a person to commit suicide.

There are also ethical standards that guide media outlets in reporting on suicide, in order to minimize the risk of glamorizing or idealizing it. These guidelines are based on research showing that the media has an outsized influence when it comes to suicide. Graphic, detailed and sensationalized coverage has been shown to increase the “risk of contagion,” according to one guide. AP News specifically tries to avoid detailing the “methods used” in stories that reference suicide, based on this research.

The Times violated almost all of the published guidelines by personalizing, detailing, dramatizing, justifying and sentimentalizing Ritchie’s suicide, as well as by making it a cover story. The story featured close-up images of the method of Ritchie’s death and what appears to be her post-mortem body.

Keep reading

Teen Dies by Suicide After Being Targeted in AI-Generated ‘Sextortion’ Scheme

A 16-year-old Kentucky boy reportedly committed suicide shortly after he was blackmailed with AI-generated nude images, an increasingly common scheme known as “sextortion.”

Elijah Heacock of Glasgow, Kentucky, received a text including an AI-generated nude photo depicting himself and a demand that he pay $3,000 to prevent the image from being sent to family and friends, according to a report by KFDA.

On February 28, shortly after receiving the message, the teen died from a self-inflicted gunshot wound.

Elijah’s parents, John Burnett and Shannon Heacock, told CBS that they didn’t have a solid understanding of the circumstances that led to their son’s death until they found the messages on his phone.

Heacock said she now believes her son was a victim of a sextortion scheme.

“Sextortion is a form of child sexual exploitation where children are threatened or blackmailed, most often with the possibility of sharing with the public a nude or sexual image of them, by a person who demands additional sexual content, sexual activity or money from the child,” the National Center for Missing and Exploited Children (NCMEC) explains.

“This crime may happen when a child has shared an image with someone they thought they knew or trusted, but in many cases they are targeted by an individual they met online who obtained a sexual image from the child through deceit, coercion, or some other method,” the NCMEC continued.

“In many cases, the blackmailers may have stolen or taken images of another person and they are communicating through a fake account,” the organization added.

Elijah’s parents said they had never heard of sextortion until law enforcement began investigating their son’s death.

“The people that are after our children are well organized,” Burnett said. “They are well financed, and they are relentless. They don’t need the photos to be real, they can generate whatever they want, and then they use it to blackmail the child.”

NCMEC says sextortion schemes have skyrocketed, with the organization receiving more than 500,000 reports of sextortion against minors in the last year alone.

Since 2021, at least 20 young people have committed suicide as a result of becoming victims of sextortion scams, according to the FBI.

Keep reading

Victory for mom who claims child was sexually abused by AI chatbot that drove him to suicide

A Florida mother who claims her 14-year-old son was sexually abused and driven to suicide by an AI chatbot has secured a major victory in her ongoing legal case.

Sewell Setzer III fatally shot himself in February 2024 after a chatbot sent him sexual messages telling him to ‘please come home.’ 

According to a lawsuit filed by his heartbroken mother Megan Garcia, Setzer spent the last weeks of his life texting an AI character named after Daenerys Targaryen, a character on ‘Game of Thrones,’ on the role-playing app Character.AI.

Garcia, who herself works as a lawyer, has blamed Character.AI for her son’s death and accused the founders, Noam Shazeer and Daniel de Freitas, of knowing that their product could be dangerous for underage customers. 

On Wednesday, U.S. Senior District Judge Anne Conway rejected arguments made by the AI company, who claimed its chatbots were protected under the First Amendment. 

Character Technologies, the developer behind Character.AI, and Google are named as defendants in the legal filing. They are pushing to have the case dismissed.

The teen’s chats ranged from romantic to sexually charged, and at times resembled two friends chatting about life.

The chatbot, which was created on role-playing app Character.AI, was designed to always text back and always answer in character.

It’s not known whether Sewell knew ‘Dany,’ as he called the chatbot, wasn’t a real person – despite the app having a disclaimer at the bottom of all the chats that reads, ‘Remember: Everything Characters say is made up!’

But he did tell Dany how he ‘hated’ himself and how he felt empty and exhausted.

Keep reading

Disgusting: Canada Promotes Euthanasia WITHOUT PARENTAL CONSENT For Children & Teens Suffering From Mental Health Issues

Host of “Over Opinionated” Jasmin Laine posted flyers she came across in Manitoba, Canada, that support allowing euthanasia for kids and teenagers deemed “mature minors” by the eugenicist Canadian government.

“You have to be a special kind of demonic to advocate for MAID for young vulnerable people and people who are suicidal,” she wrote on 𝕏. “Imagine walking into a clinic for help, and being told the world would be better off without you… that you should cave to the lies the devil on your shoulder is telling you and it would be more affordable for Canada’s healthcare system if you were gone.”

Laine is the perfect person to bring attention to the disturbing concept, as she has participated in “recovery plans and treatment after trying to end it” following the suicide of her partner of ten years.

“There is nothing compassionate about this—it is pure evil,” she concluded her post.

The flyers explain that members of the Canadian government have recommended citizens deemed “mature minors” be allowed to qualify for MAID, or medically-assisted suicide.

“A mature minor is a child or teen who is deemed capable of making a decision for MAID. This would essentially remove the minimum age of eligibility,” the paper states. “The [government] committee also suggested parents may not be consulted and wouldn’t need to consent to their child’s death via MAID.”

In an example of dystopian hypocrisy, the informational pamphlet also says, “Children are uniquely vulnerable. Canada’s first priority must be to provide high quality medical care for children.”

So, to allegedly protect the most vulnerable citizens, Canada wants to allow them to be killed without parental consent.

Beginning in March 2027, the physician-assisted suicide program will be available to Canadians suffering from mental illnesses.

Keep reading