A Court Banned a Man from ChatGPT. No One Asked If That’s Constitutional.

On April 13, a California Superior Court judge granted a temporary restraining order requiring OpenAI to keep a user locked out of ChatGPT until at least May 6.

The user, identified in court filings only as “John Roe,” has been arrested on four felony counts, found incompetent to stand trial, and recently ordered released from custody on a technicality.

His ex-girlfriend, proceeding as “Jane Doe,” filed a lawsuit and emergency application alleging that ChatGPT fed Roe’s delusional thinking, generated fake psychological reports about her, and helped facilitate a months-long stalking campaign.

We obtained a copy of the complaint for you here.

The facts in the complaint are disturbing. But the court’s order raises a question that no one in the courtroom appears to have seriously grappled with, and that matters far more than this one case: can a judge order a person cut off from an AI platform without considering whether that violates the First Amendment?

OpenAI at least mentioned the problem. The company’s opposition brief cited Packingham v. North Carolina, the 2017 Supreme Court decision that struck down a state law barring sex offenders from social media.

Justice Kennedy, writing for the Court, called the internet “the modern public square” and warned against broadly restricting access to platforms where people speak, read, and think.

OpenAI’s lawyers argued that a court-ordered ban on a user’s access to a general-purpose AI service raises the same kind of constitutional concern. The plaintiff’s lawyers did not address it at all.

San Francisco Superior Court Judge Harold Kahn granted the TRO anyway, ordering Roe’s accounts to remain suspended.

According to Eugene Volokh, the UCLA law professor emeritus and First Amendment scholar who followed the hearing through a research assistant, there was no meaningful discussion by the court of the user’s speech rights.

That should worry anyone who cares about the principle that the government cannot casually strip individuals of access to communications technology, even individuals who have done terrible things.

Keep reading

OpenAI Supports Illinois Bill to Limit AI Companies’ Liability for Mass Casualty Incidents, Financial Disasters

OpenAI is backing an Illinois state bill that would protect AI companies from legal responsibility when their technology contributes to severe societal harms, including mass deaths or catastrophic financial losses.

Wired reports that the ChatGPT maker has testified in favor of Illinois Senate Bill 3444, legislation that would shield frontier AI developers from liability for critical harms caused by their models under certain conditions. The bill represents what several AI policy experts describe as a notable evolution in OpenAI’s legislative approach, which until now had focused primarily on opposing measures that would increase liability for AI companies.

SB 3444 would define critical harms as incidents causing death or serious injury to 100 or more people, or at least $1 billion in property damage. Under the proposed law, AI labs would be protected from liability as long as they did not intentionally or recklessly cause such an incident and had published safety, security, and transparency reports on their websites. The bill defines frontier models as those trained using more than $100 million in computational costs, a threshold that would likely apply to major American AI companies including OpenAI, Google, xAI, Anthropic, and Meta.
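To make the reported mechanics easier to follow, here is a minimal sketch in Python of how the bill’s thresholds and safe-harbor conditions would fit together. The names and structure are ours, drawn only from the figures reported above, not from the statutory text, which contains definitions and conditions this simplification omits.

```python
# Illustrative only: a simplified encoding of SB 3444's reported
# thresholds. All names are hypothetical, and the actual bill contains
# definitions and conditions this sketch omits.

FRONTIER_TRAINING_COST_USD = 100_000_000   # frontier model: >$100M in compute
CRITICAL_HARM_CASUALTIES = 100             # death/serious injury to 100+ people
CRITICAL_HARM_DAMAGE_USD = 1_000_000_000   # or at least $1B in property damage

def is_critical_harm(casualties: int, property_damage_usd: float) -> bool:
    """An incident meets the reported 'critical harm' definition."""
    return (casualties >= CRITICAL_HARM_CASUALTIES
            or property_damage_usd >= CRITICAL_HARM_DAMAGE_USD)

def shield_applies(training_cost_usd: float,
                   intentional_or_reckless: bool,
                   published_safety_reports: bool) -> bool:
    """Whether, on this simplified reading, a frontier developer would be
    protected from liability for a critical harm its model caused."""
    is_frontier = training_cost_usd > FRONTIER_TRAINING_COST_USD
    return (is_frontier
            and not intentional_or_reckless
            and published_safety_reports)

# A frontier lab that published its reports and did not act intentionally
# or recklessly would be shielded under this reading:
assert is_critical_harm(casualties=150, property_damage_usd=0)
assert shield_applies(2e8, intentional_or_reckless=False,
                      published_safety_reports=True)
```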

The legislation specifically identifies several scenarios of concern to the AI industry, including the use of AI by malicious actors to develop chemical, biological, radiological, or nuclear weapons. It also covers situations where an AI model independently engages in conduct that would constitute a criminal offense if committed by a human, provided such actions lead to the extreme outcomes defined in the bill.

Jamie Radice, an OpenAI spokesperson, said in an emailed statement: “We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses—small and big—of Illinois. They also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.”

Caitlin Niedermeyer, a member of OpenAI’s Global Affairs team, delivered testimony supporting the bill and echoed the call for federal AI regulation. Her arguments aligned with the Trump administration’s opposition to inconsistent state-level AI safety laws. Niedermeyer emphasized the importance of avoiding what she called “a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety.” She also suggested that state laws can be valuable when they “reinforce a path toward harmonization with federal systems.”

Keep reading

Our Post-Truth, Post-Trust World

That we inhabit a post-truth world seems to be accepted wisdom. But that’s only half of it. We also live in a post-trust world. In a post-truth world, everything is shaped by the implicit goals of the entity claiming to state the “truth,” because the entire point of claiming to state the “truth” is to persuade the target populace to agree to something favorable to the issuer of the claim.

In other words, the “truth” as something that has no intentional spin of self-interest no longer exists. What is passed off as “truth” is spin intended / designed to serve the interests of those doing the spinning.

This is the definition of propaganda and marketing, which are pure expressions of self-interest. They’ve been around since the dawn of civilization, as persuading others to do what serves your private interests is much lower cost / more profitable than modifying their behavior with force.

The first step in the con of propaganda and marketing is to win the trust of the mark. This is a fascinating process, as some people are willing believers and others are skeptical, and so the trust campaign must speak to both the skeptics and those primed to embrace the message for reasons that have less to do with the entity issuing the message and more to do with their internal beliefs.

The trick with skeptics is to present persuasive evidence–the “facts.” These can be first-person accounts, scientific studies, or something presented as self-evident. The con artist presents the facts as if they are objective and the mark is invited to “decide for yourself:” the con artist claims he has no intent to persuade.

This is humorously illustrated in Melville’s classic novel The Confidence-Man.

The rise of data collection and the scientific method introduced the idea of “objective truth”: truth based on facts collected from observations repeatable by anyone able to isolate the same variables. In other words, these truths could be verified by anyone using the same tools and isolating the same variables; it wasn’t a private truth, it was a public truth everyone had to accept as fact.

The power of “objective fact” was too good to pass up, and so manipulating the metrics of data collection and analysis became the new territory of developing trust and establishing “truth” to serve private interests. Sample sizes were kept small, subjects were selected for their likelihood of yielding the desired data, and analytic tools weeded out outliers that undermined or contradicted the pre-selected “results.”
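As a concrete illustration of that mechanism (ours, not the essay’s), here is a minimal Python sketch showing how simply discarding the least favorable observations from a small sample shifts the reported result:

```python
# Illustrative sketch: how trimming "inconvenient" observations from a
# small sample biases a result. Numbers are made up for the example.
import random

random.seed(42)

# Suppose the honest effect is zero: 12 noisy measurements centered on 0.
sample = [random.gauss(0, 1) for _ in range(12)]
full_mean = sum(sample) / len(sample)

# "Analysis" step: quietly drop the three lowest values as "outliers".
trimmed = sorted(sample)[3:]
trimmed_mean = sum(trimmed) / len(trimmed)

print(f"full-sample mean: {full_mean:+.2f}")
print(f"trimmed mean:     {trimmed_mean:+.2f}")  # higher by construction
```

Because the dropped values are always the lowest, the trimmed mean can only move in the direction the analyst wants; dressed up as “outlier removal,” the same maneuver passes for rigor.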

As McLuhan observed, the medium is both the message and the massage, and so the synthetic media that broadcast the human voice and visual images captured our attention and imagination in ways the written word could not. Now we have AI, which mimics human speech so engagingly that we attribute human characteristics to it: intelligence, emotions, empathy, etc.

With social media and smartphones, these media / AI technologies have scalable visibility and virulence: they are ubiquitous and extremely contagious, spreading quickly through vast populations.

Keep reading

Tech company launches AI Jesus you can talk to for $2 a minute

A tech company has set up a new platform that allows users to have conversations with an AI avatar of Jesus Christ.

Just Like Me is a tech company that creates AI versions of certain celebrities, experts and personalities. According to the company, these “digital twins” are intended to “provide guidance, mentorship, support and friendship” to users of the site.

Users can pay to have “video calls” with AI versions of personalities such as MySpace co-founder Chris DeWolfe, deceased political commentator Charlie Kirk, or fictional characters such as Santa Claus.

How the model was trained

The latest personality to be added to the website is an avatar of Jesus Christ. The AI version of Jesus is said to offer words of prayer and encouragement, speak a variety of languages, and recall prior conversations with Just Like Me users.

Just Like Me CEO Chris Breed said the model was trained on the King James Bible, with its understanding of scripture drawn from sermons given by various preachers. Visually, the avatar is based on actor Jonathan Roumie’s portrayal of Jesus in the TV show The Chosen.

Just Like Me charges $1.99 per minute to speak to the avatar. Alternatively, a package deal priced at $49.99 gives users 45 minutes with the AI Jesus each month.
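By way of comparison, 45 minutes at the per-minute rate would cost $89.55 ($1.99 × 45), so the package works out to roughly $1.11 per minute, a saving of about 44 percent for anyone who uses the full monthly allotment.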

Keep reading

Another Wrongful Arrest Based on Faulty Facial Recognition Raises Growing Concerns

Jason Killinger walked into Reno’s Peppermill Casino in September 2023; later that evening, as he was exiting the building, he was arrested. The casino’s facial recognition system had flagged him as a “100 percent match” for an individual who had previously been banned from the property. The only problem? It was completely wrong.

After the system flagged Killinger, casino security approached him, referring to him as “Mike,” an individual who had previously been removed from the property. Despite his insistence that he was not Mike, and his ability to prove it, security surrounded and handcuffed Killinger before calling the Reno Police Department. Shortly thereafter, rookie Officer Richard Jager arrived on the scene.

Killinger quickly proved he was not the man identified in the system: he was carrying three valid forms of identification, including a Nevada Real ID-compliant driver’s license, his Peppermill player’s card, and a debit card, all bearing his name. When this wasn’t enough, he offered to retrieve more from his vehicle, including a pay stub, vehicle registration, and a medical card. Despite this copious documentation, Officer Jager declined to investigate further, not bothering to look at any of the other identifying documents. Killinger was arrested and charged with criminal trespass.

He then spent 11 hours in police custody; only after a fingerprint check at the Washoe County jail confirmed his identity was he finally released.

Keep reading

AMD’s senior director of AI thinks ‘Claude has regressed’ and that it ‘cannot be trusted to perform complex engineering’

Anthropic’s Claude has reportedly gotten much worse at generating code recently. That’s according to a senior member of AMD’s AI group.

Last week, user StellarAccident took to the Claude Code GitHub to vent their frustrations with Claude’s AI-generated code. The account was soon discovered to belong to Stella Laurenzo, the senior director of AI at AMD, who had linked to that GitHub account in a LinkedIn post (via The Register) three weeks prior.

In their GitHub complaint, they state that “Claude has regressed to the point it cannot be trusted to perform complex engineering.”

Laurenzo claims Claude performed better in January and says it now ignores instructions, suggests incorrect fixes, directly contradicts requests, and claims completion despite, well, not completing the task. Laurenzo is using the Opus model and says the results are replicable with the same prompt.

Laurenzo says, “We have a very consistent, high complexity work environment and data mined months of logs to understand why—essentially—starting in February, we have noticed a degradation performing complex engineering tasks.” She goes on to say, “Claude has been good to us, and we are leaving this in the hopes that Anthropic can address these concerns.”

What follows is a report on what has changed since January and how to improve it, which is, rather ironically, generated by Claude. It argues that Claude shows its thinking less and ‘thinks’ less deeply now, often editing code before fully ‘reading’ it.

Users have been complaining about Claude’s February update for some time now. Over on the Claude Code Subreddit, one user says, “I can no longer in good conscience recommend Claude Code to clients.”

Keep reading

Florida Launches Investigation into OpenAI: ‘Linked to Criminal Behavior’

Florida is launching an investigation into OpenAI, Florida Attorney General James Uthmeier announced on Thursday, noting that the company’s technology has been linked to “criminal behavior,” including the deadly shooting at Florida State University (FSU) in April 2025.

“Today we are launching an investigation into OpenAI. The development and rollout of artificial intelligence is a monumental leap in technology, but it has not been without concern for public safety and national security,” Uthmeier began in his Thursday announcement.

“AI is built on its ability to gather data, and there are concerns about whether OpenAI’s data and AI technologies that can be used against America are falling into the hands of America’s enemies,” he said, naming the Chinese Communist Party (CCP) as an example.

Further, ChatGPT has been linked to a variety of criminal behavior, he continued, “including child sex abuse material, use by child predators and the encouragement of suicide and self-harm.” Not only that, but Uthmeier pointed out that the AI chatbot was likely used to assist Phoenix Ikner, the suspect who allegedly opened fire on the FSU campus on April 17, 2025, in planning the attack.

“We’ve also learned that ChatGPT may likely have been used to assist the murderer in a recent mass shooting at Florida State University. Tragically, AI should exist to supplement, support and advance mankind, not lead to an existential crisis or our ultimate demise,” he said.

“As big tech rolls out these technologies, they should not, they cannot, put our safety and security at risk. We support innovation. That doesn’t give any company the right to endanger our children, facilitate criminal activity, empower America’s enemies or threaten our national security. Companies that do so will be held accountable to the fullest extent,” he continued, formally announcing the investigation and warning that subpoenas were on their way.

Uthmeier also called on the Florida legislature to take action and implement protections to safeguard children from the dangers of AI.

Keep reading

Musk’s xAI Sues Colorado Over AI Law, Saying It Forces Developers to Back State’s Views

An artificial intelligence company founded by Elon Musk filed a lawsuit on April 9 over a Colorado law it claims makes AI developers endorse “Colorado’s views on diversity, equity, and inclusion or face significant compliance costs and civil fines.”

xAI, whose flagship product is the chatbot Grok, named Colorado Attorney General Philip Weiser as the defendant. The lawsuit states that the law’s provisions “prohibit developers of AI systems from producing speech that the State of Colorado dislikes, while compelling them to conform their speech to a State-enforced orthodoxy on controversial topics of great public concern.” The lawsuit says the Colorado law violates the First Amendment.

Weiser didn’t respond to an email seeking comment.

The lawsuit questions the use of the term “algorithmic discrimination” in the law, calling it vague.

The text of the law defines it this way: “Algorithmic discrimination means any condition in which the use of an artificial intelligence system results in unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.”

The bill, SB24-205, was introduced in April 2024, passed the next month, and will take effect on June 30, 2026.

Keep reading

Scientists invented a fake disease. AI told people it was real

Got sore, itchy eyes? You’re probably one of the millions of people who spend too much time staring at screens, being bombarded with blue light. Rub your eyes too much and your eyelids might turn a slight, pinkish hue.

So far, so normal. But if, in the past 18 months, you typed those symptoms into a range of popular chatbots and asked what was wrong with you, you might have got an odd answer: bixonimania.

The condition doesn’t appear in the standard medical literature — because it doesn’t exist. It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.

The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.

Even more troublingly, other researchers say, the fake papers were then cited in peer-reviewed literature. Osmanovic Thunström says this suggests that some researchers are relying on AI-generated references without reading the underlying papers.

Keep reading

Voters Oust Half of City Council for Greenlighting $6,000,000,000 AI Data Center

Voters in a Missouri town ousted four city council members Tuesday for supporting an AI data center in the area.

Voters in Festus, Missouri, rejected four members of the town’s eight-member city council who had voted a week earlier to greenlight a $6 billion data center project, according to St. Louis Public Radio. Anger has been brewing in localities across the U.S. over data center projects, with voters in a Wisconsin town overwhelmingly opting Tuesday to crack down on a proposed development.

Tuesday’s vote followed months of at-times raucous opposition to CRG Clayco’s plan to build a hyperscale data center on 360 acres in the town. The four city council members who lost their reelection bids were defeated by candidates who ran against the data center and supported more transparency in the approval process.

“This data center fight has struck this community to the core and really, honestly ignited a community-driven effort here,” Dan Moore, who defeated pro-data center incumbent Bobby Benz, told St. Louis Public Radio. “People are awake now, and we’re not going to let this continue on anymore.”

Residents opposed to the AI data center flooded a local gymnasium to voice their frustrations during a March 31 city council meeting where the council voted to approve a framework of requirements for CRG’s planned construction, St. Louis Public Radio reported.

“I am not against growth,” Festus resident Lauren Albers said during the raucous city council meeting. “I’m against putting data centers between homes. I am against rushing into development before residents get real information, real answers and a real voice.”

Keep reading