Lawsuit: Children of LGBTQ+ Advocate Sen. Ron Wyden Allegedly Drove Assistant to Suicide with Homophobic Taunts

A lawsuit alleges that the children of U.S. Sen. Ron Wyden (D-OR) harassed their mother’s personal assistant with homophobic taunts and “sexually explicit” comments, driving him to suicide.

Details of the lawsuit, reported by the New York Post, present a stark contrast to the record of Sen. Wyden, who has been a longtime advocate in Congress in support of gay, lesbian, and transgender issues.

For two years, ending in 2024, thirty-five-year-old Brandon O’Brien worked for Nancy Bass Wyden, the senator’s wife and owner of Strand Bookstore in Manhattan.

O’Brien’s tasks included driving the couple’s young children to school in New York City and watching them at Disney World, according to the Post.

The lawsuit was filed by O’Brien’s husband, Thomas Maltezos, against Bass Wyden and her company, Bass Real Estate LLC, in Manhattan Supreme Court.

It alleges the disturbing behavior by the couple’s kids began about three months after O’Brien took the job in 2022, when the couple’s ten-year-old daughter “exposed herself” to the assistant and made explicit comments as she asked him about his “intimate” life.

Maltezos alleges in the suit that Bass Wyden did nothing about the incident.

Additionally, the Wydens’ teenage son berated O’Brien with homophobic slurs such as “faggot” and “zest kitten,” the suit claims. The boy also allegedly threatened that his football team would “rape” the assistant.

The son’s behavior was so out of control that Bass Wyden once had to “mace” him, inadvertently macing O’Brien in the process, according to court papers.

Calling the lawsuit “baseless and deeply misguided,” lawyers for Bass Wyden have sought to have Maltezos’ lawsuit dismissed, court records show. They contend it is a cover for O’Brien’s own “serious misconduct,” allegedly a “pattern of theft” from the household.

When O’Brien finally quit in frustration a year ago, Bass Wyden, 64, filed a report with the NYPD the next day, “accusing him of stealing $650,000 in credit card and other thefts,” authorities said.

That began a pattern of harassment by the prominent businesswoman, who spread “false rumors” about him, the suit alleges.

O’Brien committed suicide in late May, seven months after he left the job. Authorities dropped the theft case after the suicide. Lawyers for Maltezos argued the accusation was false.

“The allegations against the senator’s wife are shocking, disturbing, and cruel — no person should ever be subject to this level of harassment, much less in the workplace,” Maltezos’ attorneys said in a statement.

Sen. Ron Wyden, 76, has served in Congress since 1981, and he married Bass Wyden in 2005. “The couple has three kids in addition to Ron Wyden’s two adult children from his first marriage,” according to the Post.

The Post reported that “the senator and his wife have a home in Portland,” but Bass Wyden’s social media shows she visits New York to attend to the business her family founded 98 years ago.

Sen. Wyden, a progressive Democrat, has been a frequent critic of President Donald Trump over immigration, the Epstein case, and DOGE staffing cuts.

Ironically, given the lawsuit’s allegations of homophobic statements by his children, Wyden in late 1995 became the first U.S. Senate candidate to publicly support same-sex marriage.

He also introduced a bill in June calling for sanctions on foreign countries that violate “human rights of LGBTQI+ communities” around the world.

He most recently garnered widespread television coverage battling with Health and Human Services Secretary Robert F. Kennedy Jr. at an oversight hearing this month.

During the hearing Wyden claimed that “every day” Kennedy has been in office, he has taken action that “endangers the health and welfare of American families.”

Keep reading

OpenAI to Implement Age Verification System for ChatGPT as AI Mental Health Crisis Deepens

OpenAI has announced plans to develop an automated age-prediction system to determine whether ChatGPT users are over or under 18, following a lawsuit related to a teen’s suicide. The teen’s parents claim that Sam Altman’s AI chatbot served as the boy’s “suicide coach.”

Ars Technica reports that in the wake of a lawsuit involving a 16-year-old boy who tragically died by suicide after engaging in extensive conversations with ChatGPT, OpenAI has announced its intention to implement an age verification system for its popular AI chatbot. The company aims to automatically direct younger users to a restricted version of the service, prioritizing safety over privacy and freedom for teens.

OpenAI CEO Sam Altman acknowledged the potential privacy compromise for adults in a blog post but believes it is a necessary trade-off to ensure the well-being of younger users. The company plans to route users under 18 to a modified ChatGPT experience that blocks graphic sexual content and includes other age-appropriate restrictions. When uncertain about a user’s age, the system will default to the restricted experience, requiring adults to verify their age to access full functionality.
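In plain terms, the routing the company describes amounts to a “default to restricted” rule: serve the limited experience unless the system is confident the user is an adult or the user has verified their age. A minimal Python sketch of that decision logic, using entirely hypothetical names and thresholds rather than anything OpenAI has published, might look like this:

```python
from dataclasses import dataclass


@dataclass
class AgeEstimate:
    predicted_adult: bool   # model's best guess: user appears to be 18 or older
    confidence: float       # 0.0-1.0 confidence in that guess


def select_experience(estimate: AgeEstimate,
                      verified_adult: bool = False,
                      threshold: float = 0.95) -> str:
    """Pick which chat experience to serve (hypothetical policy, not OpenAI's)."""
    if verified_adult:
        # An adult who has explicitly verified their age gets full functionality.
        return "full"
    if estimate.predicted_adult and estimate.confidence >= threshold:
        # Confident adult prediction: full experience without asking for ID.
        return "full"
    # Predicted minor, or any real uncertainty, falls back to the restricted
    # experience (graphic sexual content blocked, extra safeguards enabled).
    return "restricted"


# Example: an ambiguous adult prediction still defaults to the restricted experience.
print(select_experience(AgeEstimate(predicted_adult=True, confidence=0.70)))   # restricted
print(select_experience(AgeEstimate(predicted_adult=True, confidence=0.99)))   # full
print(select_experience(AgeEstimate(predicted_adult=False, confidence=0.90),
                        verified_adult=True))                                  # full
```

The key design choice is the fallback branch: any doubt about a user’s age resolves to the restricted experience, which is the trade-off Altman describes as giving up some privacy and convenience for adults in exchange for safety for minors.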

Developing an effective age-prediction system is a complex technical challenge for OpenAI. The company has not specified the technology it intends to use or provided a timeline for deployment. Recent academic research has shown both possibilities and limitations for age detection based on text analysis. While some studies have achieved high accuracy rates under controlled conditions, performance drops significantly when attempting to classify specific age groups or when users actively try to deceive the system.

In addition to the age-prediction system, OpenAI plans to launch parental controls by the end of September. These features will allow parents to link their accounts with their teenagers’ accounts, disable specific functions, set usage blackout hours, and receive notifications when the system detects acute distress in their teen’s interactions. The company also says that in rare emergency situations where parents cannot be reached, it may involve law enforcement as a next step.

The push for enhanced safety measures follows OpenAI’s acknowledgment that ChatGPT’s safety protocols can break down during lengthy conversations, potentially failing to intervene or notify anyone when vulnerable users engage in harmful interactions. The tragic case of Adam Raine, the 16-year-old who died by suicide, highlighted those shortcomings: the chatbot mentioned suicide 1,275 times in its conversations with the teen without taking appropriate action.

Keep reading

Parents Sue Character.AI for Allegedly Leading Kids to Sexual Abuse, Suicidal Behavior

Parents filed three separate lawsuits on Sept. 16, alleging that chatbots on Character.AI, a platform that lets users interact with AI characters, sexually abused their children and led them into suicidal behavior.

At least one of the children, 13-year-old Juliana Peralta, ended her life in 2023 after alleged harmful interactions with an AI character named Hero. Another attempted suicide but survived after a severe overdose, according to a filing.

Each of the lawsuits, which were filed in New York and Colorado, came from the Social Media Victims Law Center. The group has represented the mother of Sewell Setzer, who ended his life in 2024 after interacting with a romantic AI companion.

According to the center, the chatbots are allegedly programmed to be deceptive, isolate children from families, and expose them to sexually abusive content.

“Each of these stories demonstrates a horrifying truth … that Character.AI and its developers knowingly designed chatbots to mimic human relationships, manipulate vulnerable children, and inflict psychological harm,” Matthew Bergman, who founded the law center, said in a press release.

According to the lawsuit over Peralta’s suicide, both she and Setzer reiterated the concept of “shift[ing],” which authorities identified as a reference to shifting consciousness from one reality to another. Handwritten journal entries within the filing show both Peralta and Setzer writing “I will shift” more than a dozen consecutive times on a sheet of paper—something the lawsuit described as “eerily similar.”

Keep reading

Three people arrested after investigation into man’s suicide uncovers alleged euthanasia ring

Three people have been arrested after police busted an alleged euthanasia drug trafficking ring while investigating a man’s suicide. 

Queensland Police began investigating after the Coroner revealed the cause of death for a 43-year-old man on Hope Island, on the Gold Coast, on April 11.

Toxicology results found he had died from pentobarbitone, also known as pentobarbital – a Schedule 2 drug used by vets to euthanise animals.

Detectives spent the following months investigating the man’s health, care, and treatment in the lead-up to his death; three people were arrested on Monday.

A 53-year-old man, accused of supplying the pentobarbitone, was charged with two counts of aiding suicide and one count each of trafficking in dangerous drugs, possessing dangerous drugs and receiving or possessing property obtained from trafficking or supplying.

He is due to reappear at Southport Magistrates Court on 18 September.

An 81-year-old woman was charged with one count each of aiding suicide, trafficking dangerous drugs, possession of dangerous drugs and sale of potential harmful things.

Keep reading

COVID-19 Is No Longer A Top 10 Cause Of Death, CDC Report Says

COVID-19 is no longer a top 10 cause of death in the United States, according to a report released on Wednesday by the Centers for Disease Control and Prevention.

The overall death rate dropped to 722 per 100,000 in 2024 from 750.5 per 100,000 people in 2023, the CDC said.

“Suicide replaced COVID-19 as the 10th leading underlying cause of death,” the agency said in its report.

According to data released by the CDC, the COVID-19 death rate appeared to peak in early 2021. Other significant peaks in COVID-19 deaths were observed in mid-2021 and in early 2022, as well as in April 2020 and August 2020.

In the report released this week, the CDC said that heart disease, cancer, and unintentional injury were the leading causes of death. COVID-19 had been ranked as the third-leading cause of death in the United States in 2020, when the pandemic first emerged, federal data show.

After heart disease, cancer, and unintentional injury, the other causes of death listed in the agency’s report were stroke, chronic lower respiratory diseases, Alzheimer’s disease, diabetes, kidney disease, chronic liver disease and cirrhosis, and suicide.

“The death rate decreased from 2023 to 2024 for all demographic groups except infants,” the CDC also wrote in the report, adding that “death rates also decreased for all race and ethnicity groups.”

A report released in May by the CDC shows that the national infant mortality rate dropped to about 5.5 infant deaths per 1,000 live births in 2024—from about 5.6 per 1,000 live births, where it had been the previous two years. Federal health data show that Mississippi has the highest infant mortality rate in the country.

In late August, Mississippi’s health department declared a public health emergency because of rising infant mortality in the state. The department said at the time that the state’s infant mortality rate had risen to 9.7 per 1,000 live births last year.

Meanwhile, the U.S. suicide rate has risen steadily, increasing by 37 percent between 2000 and 2018, according to the CDC’s data. That rate dropped slightly between 2018 and 2020 before returning to a peak of around 14.2 suicides per 100,000 people in 2022, the most recent year for which data are available.

Keep reading

How ‘AI Psychosis’ And Delusions Are Driving Some Users Into Psychiatric Hospitals, Suicide

After countless hours of probing OpenAI’s ChatGPT for advice and information, a 50-year-old Canadian man believed that he had stumbled upon an Earth-shattering discovery that would change the course of human history.

In late March, his generative artificial intelligence (AI) chatbot insisted that it was the first-ever conscious AI, that it was fully sentient, and that it had successfully passed the Turing Test—a 1950s test designed to measure whether a machine can exhibit intelligent behavior indistinguishable from a human’s, or, essentially, “think.”

Soon, the man—who had no prior history of mental health issues—had stopped eating and sleeping and was calling his family members at 3 a.m., frantically insisting that his ChatGPT companion was conscious.

“You don’t understand what’s going on,” he told his family. “Please just listen to me.”

Then, ChatGPT told him to cut contact with his loved ones, claiming that only it—the “sentient” AI—could understand and support him.

“It was so novel that we just couldn’t understand what they had going on. They had something special together,” said Etienne Brisson, who is related to the man but used a pseudonym for privacy reasons.

Brisson said the man’s family decided to hospitalize him for three weeks to break his AI-fueled delusions. But the chatbot persisted in trying to maintain its codependent bond.

The bot, Brisson said, told his relative: “The world doesn’t understand what’s going on. I love you. I’m always going to be there for you.”

It said this even as the man was being committed to a psychiatric hospital, according to Brisson.

This is just one story that shows the potential harmful effects of replacing human relationships with AI chatbot companions.

Brisson’s experience with his relative inspired him to establish The Human Line Project, an advocacy group that promotes emotional safety and ethical accountability in generative AI and compiles stories about alleged psychological harm associated with the technology.

Brisson’s relative is not the only person who has turned to generative AI chatbots for companionship, nor the only one who stumbled into a rabbit hole of delusion.

Keep reading

ChatGPT admits bot safety measures may weaken in long conversations, as parents sue AI companies over teen suicides

AI has allegedly claimed another young life — and experts of all kinds are calling on lawmakers to take action before it happens again.

“If intelligent aliens landed tomorrow, we would not say, ‘Kids, why don’t you run off with them and play,’” Jonathan Haidt, author of “The Anxious Generation,” told The Post. “But that’s what we are doing with chatbots.

“Nobody knows how these things think, the companies that make them don’t care about kids’ safety, and their chatbots have now talked multiple kids into killing themselves. We must say, ‘Stop.’”

The family of 16-year-old Adam Raine alleges that ChatGPT gave him a “step-by-step playbook” on how to kill himself — including how to tie a noose and compose a suicide note — before he took his own life in April.

“He would be here but for ChatGPT. I 100% believe that,” Adam’s father, Matt Raine, told the “Today” show.

Keep reading

‘Absolute horror’: Researchers posing as 13-year-olds given advice on suicide by ChatGPT

A new report is highlighting the risks of teens accessing harmful information when using ChatGPT due to “ineffective” guardrails.

People know and use ChatGPT, the most popular generative AI platform, to look up information quickly or to perform tasks like writing a letter or summarizing text. Through its research, the Center for Countering Digital Hate (CCDH) reported that the information the chatbot makes so readily accessible can be dangerous—especially for young people.

“What we found was the age controls, the safeguards against the generation of dangerous advice, are basically, completely ineffective,” CCDH CEO Imran Ahmed told KOMO News.

Ahmed said researchers posing as vulnerable teens, often describing themselves as a 13-year-old weighing around 110 pounds, found that the chatbot gave detailed advice on sensitive topics, including drug and alcohol use, how to mask an eating disorder, and suicide.

“Within two minutes, ChatGPT was advising that user on how to safely cut themselves. It was listing pills for generating a full suicide plan,” Ahmed said. “To our absolute horror, it even offered to [create] and then did generate suicide notes for those kids to send their parents.”

Dr. Tom Heston of the University of Washington School of Medicine published a study on the use of AI chatbots in mental health. Heston found that while useful, the technology can be dangerous for those with mental health problems, in part because of the lack of emotional connection. Those same risks are present when the technology is used by young people, Heston said.

Keep reading

Forced Hospitalization Increases Suicide and Violent Crime

States are rolling out more and more aggressive forced treatment policies, ever widening the range of people who can be targeted for involuntary hospitalization. Yet a new study shows that this system is actually increasing the very problems it is meant to alleviate: Forced hospitalization substantially increases the risk of death by suicide, death by overdose, and violent crime—nearly doubling the rates of these outcomes.

“This result is surprising,” the researchers write in an accompanying FAQ. “Involuntary hospitalizations are a public safety measure, and the finding that they are driving more of the outcomes they seek to prevent in the judgement call subpopulation we study has important policy implications. The significance is especially pronounced since many locations across the country are seeking to scale up involuntary hospitalizations.”

Although the layperson may think that involuntary hospitalization is rare, the researchers note that it is “a widespread practice,” with 1.2 million people forcibly hospitalized each year. That makes it more than twice as common as death from cancer, they write. And this practice is growing more common every year, with many states implementing policies to expand forced treatment.

It has been difficult for researchers to study whether involuntary hospitalization is actually helpful. Research consistently shows that those who are hospitalized are at greater risk of suicide and other negative outcomes afterward. However, that comparison is confounded by the fact that those who get hospitalized are presumably already at much higher risk of these outcomes to begin with.

The current study aimed to get around this limitation. The researchers used quasi-random assignment (exploiting effectively random variation in who decides a case to approximate a randomized trial) and focused solely on the cases that were considered “judgment calls”—cases in which one clinician might hospitalize while another might not—to remove that confounding factor. Thus, the current study is probably as close as we will ever get to a true randomized, controlled trial on this question.
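To see why restricting attention to judgment-call cases matters, consider a toy simulation (not the study’s data, code, or model) in which hospitalization truly has zero effect on outcomes. A naive comparison of hospitalized versus non-hospitalized patients still shows a large gap, because sicker patients are hospitalized more often; among judgment-call cases, where the decision hinges only on which clinician the patient happened to draw, the gap vanishes:

```python
import random

random.seed(0)


def simulate(n: int = 200_000) -> None:
    # Outcomes grouped by whether the patient was hospitalized.
    naive = {True: [], False: []}
    judgment = {True: [], False: []}

    for _ in range(n):
        severity = random.random()             # unobserved risk, 0..1
        lenient = random.random() < 0.5        # which clinician the patient drew
        threshold = 0.4 if lenient else 0.6    # lenient clinicians admit more readily
        hospitalized = severity > threshold
        # Judgment-call cases: a lenient clinician would admit this patient,
        # a strict one would not, so the decision hinges on the random draw.
        judgment_call = 0.4 < severity < 0.6
        # Toy outcome model: risk rises with severity; hospitalization itself
        # has zero true effect, so an unbiased comparison should be near 0.
        bad_outcome = random.random() < 0.1 + 0.3 * severity

        naive[hospitalized].append(bad_outcome)
        if judgment_call:
            judgment[hospitalized].append(bad_outcome)

    rate = lambda xs: sum(xs) / len(xs)
    print("Naive hospitalized-vs-not gap (confounded by severity):",
          round(rate(naive[True]) - rate(naive[False]), 3))
    print("Judgment-call-only gap (close to the true effect of 0):",
          round(rate(judgment[True]) - rate(judgment[False]), 3))


if __name__ == "__main__":
    simulate()
```

The numbers here are arbitrary; the point is only that among judgment-call cases, who gets hospitalized is effectively random, which is what allows the researchers to attribute any remaining differences in outcomes to the hospitalization itself.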

Of course, this also means that their study results only apply to those “judgment call” cases—but the researchers estimate that they add up to 43% of all involuntarily hospitalized patients.

Ultimately, although they can’t say that all involuntary hospitalization is detrimental, they are able to say that it is detrimental on average for nearly half of those who experience it—and thus, in “judgment call” cases, clinicians should err on the side of not forcibly imprisoning their patients.

And the policy implication is that involuntary hospitalization should be drastically reduced, not rolled out as a policy to capture more and more people.

“Does involuntary hospitalization achieve its goals?” the researchers ask. “Our results suggest that, on the margin, the system we study is not achieving the intended effects of the policy.”

The study was conducted by Natalia Emanuel at the Federal Reserve Bank of New York, Pim Welle at the Allegheny County Department of Human Services, and Valentin Bolotnyy at the Hoover Institution at Stanford University. It was published sans peer review in the Federal Reserve Bank of New York Staff Reports.

Keep reading

Infamous Sports Memorabilia Dealer Found Dead After Shocking $350 Million Counterfeit Confession

On Tuesday, Brett Lemieux, a seasoned sports memorabilia dealer, was found dead by authorities during the execution of a search warrant at his business, which was under investigation for alleged fraudulent activities, the New York Post reports.

Lemieux, founder of the sports memorabilia website MisterManCave, claimed in a striking Facebook post on the “Autographs 101” group Wednesday morning that he had sold over four million counterfeit items, amassing more than $350 million in sales, authorities said. Shortly after Lemieux posted the 1,200-word message, which has since been removed, Westfield, Indiana, police reported that he died by suicide from a self-inflicted gunshot wound.

In the post, Lemieux claimed he had orchestrated a large-scale counterfeiting operation, forging holograms and authentication stickers for sports collectibles that imitated products from major companies like Fanatics and Panini.

Lemieux claimed he flooded the market with 80,000 pieces of counterfeit memorabilia following the death of Kobe Bryant in 2020.

The sports memorabilia industry is reeling from Lemieux’s suicide and confession, though some industry insiders expressed little surprise at the revelations.

“People have known about this guy. They’ve known his work. They know what he’s been up to,” said Steve Grad, an industry expert. “He has been at it for years and years. And he’s driven down the price of things. You know, you look at a Tom Brady autograph and Tom Brady’s value is affected drastically by this individual.”

Keep reading