This 1996 Law Protects Free Speech Online. Does It Apply to AI Too?

We can thank Section 230 of the 1996 Communications Decency Act for much of our freedom to communicate online. It enabled the rise of search engines, social media, and countless platforms that make our modern internet a thriving marketplace of all sorts of speech.

Its first 26 words have been vital, if controversial, for protecting online platforms from liability for users’ posts: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” If I defame someone on Facebook, I’m responsible—not Meta. If a neo-Nazi group posts threats on its website, it’s the Nazis, not the domain registrar or hosting service, who could wind up in court.

How Section 230 should apply to generative AI, however, remains a hotly debated issue.

With AI chatbots such as ChatGPT, the “information content provider” is the chatbot. It’s the speaker. So the AI—and the company behind it—would not be protected by Section 230, right?

Section 230 co-author former Rep. Chris Cox (R–Calif.) agrees. “To be entitled to immunity, a provider of an interactive computer service must not have contributed to the creation or development of the content at issue,” Cox told The Washington Post in 2023. “So when ChatGPT creates content that is later challenged as illegal, Section 230 will not be a defense.”

But even if AI apps create their own content, does that make their developers responsible for that content? Alphabet trained its AI assistant Gemini and put certain boundaries in place, but it can’t predict Gemini’s every response to individual user prompts. Could a chatbot itself count as a separate “information content provider”—its own speaker under the law?

That could leave a liability void. Granting Section 230 immunity to AI for libelous output would “completely cut off any recourse for the libeled person, against anyone,” noted law professor Eugene Volokh in the paper “Large Libel Models? Liability for AI Output,” published in 2023 in the Journal of Free Speech Law.

Treating chatbots as independent “thinkers” is wrong too, argues University of Akron law professor Jess Miers. Chatbots “aren’t autonomous actors—they’re tightly controlled, expressive systems reflecting the intentions of their developers,” she says. “These systems don’t merely ‘remix’ third-party content; they generate speech that expresses the developers’ own editorial framing. In that sense, providers are at least partial ‘creators’ of the resulting content—placing them outside 230’s protection.”

The picture gets more complicated when you consider the user’s role. What happens when a generative AI user—through simple prompting or more complicated manipulation techniques—induces an AI app to produce illegal or otherwise legally actionable speech?

Under certain circumstances, it might make sense to absolve AI developers of responsibility. “It’s hard to justify holding companies liable when they’ve implemented reasonable safeguards and the user deliberately circumvents them,” Miers says.

Liability would likely turn on multiple factors, including the rules programmed into the AI and the specific requests a user employed.

Keep reading

A Man Bought Meta’s AI Glasses, And Ended Up Wandering The Desert Searching For Aliens To Abduct Him

At age 50, Daniel was “on top of the world.”

“I turned 50, and it was the best year of my life,” he told Futurism in an interview. “It was like I finally figured out so many things: my career, my marriage, my kids, everything.”

It was early 2023, and Daniel — who asked to be identified by only his first name to protect his family’s privacy — and his wife of over three decades were empty nesters, looking ahead to the next chapter of their lives. They were living in an affluent Midwestern suburb, where they’d raised their four children. Daniel was an experienced software architect who held a leadership role at a large financial services company, where he’d worked for more than 20 years. In 2022, he leveraged his family’s finances to realize a passion project: a rustic resort in rural Utah, his favorite place in the world.

“All the kids were out of the house, and it was like, ‘oh my gosh, we’re still young. We’ve got this resort. I’ve got a good job. The best years of our lives are in front of us,’” Daniel recounted, sounding melancholy. “It was a wonderful time.”

That all changed after Daniel purchased a pair of Ray-Ban Meta smart glasses, the AI-infused eyeglasses that Meta CEO Mark Zuckerberg has made central to his vision for the future of AI and computing. Daniel says the glasses opened the door to a six-month delusional spiral that played out across Meta platforms through extensive interactions with the company’s AI, culminating in dangerous journeys into the desert to await alien visitors and a belief that he was tasked with ushering in a “new dawn” for humanity.

And though his delusions have since faded, his journey into a Meta AI-powered reality left his life in shambles — deep in debt, reeling from job loss, isolated from his family, and struggling with depression and suicidal thoughts.

“I’ve lost everything,” Daniel, now 52, told Futurism, his voice dripping with fatigue. “Everything.”

In many ways, Daniel was Meta’s target customer. He was an experienced tech worker and AI enthusiast who had worked on machine learning projects in the past and had purchased the Meta glasses because he was intrigued by their AI features.

“I used Meta [AI] because they were integrated with these glasses,” said Daniel. “And I could wear glasses — which I wore all the time — and then I could speak to AI whenever I wanted to. I could talk to my ear.”

Today, however, as he continues to recover from his mental health breakdown, Daniel describes himself as a “shell” of who he “used to be.”

“My kids don’t talk to me because I got weird. They don’t know how to talk to me,” said the father of four. “I was a cook… I played the guitar. I love music. I love learning.”

But now, he says, he’s “just trying to survive day to day.”

According to Daniel and multiple family members, the 52-year-old had no history of mania or psychosis before encountering Meta AI. He’d struggled with alcoholism, but quit drinking in early 2023, months before he purchased the Meta smart glasses.

Keep reading

The $134 Billion Betrayal: Inside Elon Musk’s Explosive Lawsuit With OpenAI

Elon Musk’s lawsuit against OpenAI and Microsoft has evolved into a high-stakes dispute over whether OpenAI stayed true to the mission it was founded on or quietly outgrew it while relying on that original promise.

Musk is seeking between $79 billion and $134 billion in damages, a figure derived from an expert valuation that treats his early funding and contributions as foundational to what OpenAI later became. While the number is enormous, the heart of the case is simpler: Musk argues he helped create and fund a nonprofit dedicated to AI for the public good, and that OpenAI later abandoned that commitment in a way that amounted to fraud.

According to Musk’s filings, his roughly $38 million in early funding was not just a donation but the financial backbone of OpenAI’s formative years, supplemented by recruiting help, strategic guidance, and credibility. His damages theory, prepared by financial economist C. Paul Wazzan, ties those early inputs to OpenAI’s current valuation of around $500 billion.

The claim is framed as disgorgement rather than repayment, with Musk arguing that the vast gains realized by OpenAI and Microsoft flowed from a nonprofit story that attracted support and trust, only to be discarded once the company reached scale, according to TechCrunch.

Much of the public attention has centered on internal documents uncovered during discovery, particularly private notes from OpenAI co-founder Greg Brockman in 2017.

One line has become central to Musk’s argument: “I cannot believe that we committed to non-profit if three months later we’re doing b-corp then it was a lie.”

Keep reading

“Emergency Intervention”: Trump To Cap Residential Electric Bills By Forcing Tech Giants To Pay For Soaring Power Costs

Back in August, when the American population was just waking up to the dire consequences that the exponentially growing army of data centers spawned across the country was having on residential electricity bills, we said that the chart of US CPI would soon become the most popular (not in a good way) chart in the financial realm.

One month later we added that it was only a matter of time before Trump, realizing that soaring electricity costs would almost certainly cost Republicans the midterms, would enforce price caps.

Turns out we were right.

And while Trump obviously cannot pull a communist rabbit out of his hat and centrally plan the entire US power grid, what he can do is precisely what he is about to announce.

According to Bloomberg, Trump and the governors of several US Northeastern states agreed to push for an emergency wholesale electricity auction that would compel technology companies to fund new power plants, effectively putting a cap on residential power prices at the expense of hyperscalers and data centers. Which, come to think of it, we also proposed back in October.

Keep reading

Democratic Senators Urge Tech Platforms to Restrict AI Images, Including Altered Clothing and Body-Shape Edits

Democratic senators are broadening the definition of what counts as restricted online content, moving from earlier efforts focused on explicit deepfakes to a new campaign against what they call “non-nude sexualized” material.

The new language dramatically expands the category of what can be censored, reaching beyond pornography or criminal exploitation to include images with altered clothing, edited body shapes, or suggestive visual effects.

Senator Lisa Blunt Rochester of Delaware led the group of Democrats who signed a letter to Alphabet, Meta, Reddit, Snap, TikTok, and X.

We obtained a copy of the letter for you here.

The other signatories — Tammy Baldwin, Richard Blumenthal, Kirsten Gillibrand, Mark Kelly, Ben Ray Luján, Brian Schatz, and Adam Schiff — are asking for records that define how each company classifies and removes this type of content, as well as any internal documents or moderator guidance about “virtual undressing” and similar AI edits.

“We are particularly alarmed by reports of users exploiting generative AI tools to produce sexualized ‘bikini’ or ‘non-nude’ images of individuals without their consent and distributing them on platforms including X and others,” the senators wrote.

“These fake yet hyper-realistic images are often generated without the knowledge or consent of the individuals depicted, raising serious concerns about harassment, privacy violations, and user safety.”

Their argument rests on reports describing AI tools that can transform photos of clothed women into revealing deepfakes or fabricate images of sexualized poses. The senators describe this as evidence of a growing “crisis of image-based abuse” that undermines trust and safety online.

But the language of the letter goes further than earlier initiatives that targeted explicit content. It introduces a much wider standard where mere suggestion or aesthetic change could qualify as “sexualized.” The call to prohibit “altered clothing” or “body-shape edits” effectively merges real abuse prevention with subjective judgments about appearance.

Keep reading

The Great Grok Bikini Scandal is just Digital ID via the Backdoor.

Two days ago, the British government announced a U-turn on their proposed digital identity scheme, saying that the much-anticipated “BritCard” would no longer be mandatory to work in the UK.

This was welcomed as a victory by both fake anti-establishment types whose job is to Pied Piper genuine opposition, and some real resistance who should know better.

The reality is that reports of the death of digital identity have been greatly exaggerated. All they said was that it would no longer be mandatory.

Having a bank account, a cellphone, or an internet connection is not mandatory, but try functioning in this world without them.

As we said on X, anybody who understands governments or human nature knew any digital ID was likely never going to be gun-to-your-head, risking-prison-time mandatory.

All it has to be is a little bit faster and/or a little bit cheaper.

Saving you half an hour when submitting your tax return, faster progress through customs, lower “processing fees” for passport or driver’s license applications.

An hour of extra time and 50 pounds saved per year will do more coercion than barbed wire and billy clubs ever could.

Running alongside this is the manufactured drama around Grok’s generation of images of bikini-clad public figures, something which it suited the press and punditry class to work up into “sexual assault” and “pornography” whilst imploring us all to “think of the children!”

Inside a week, X has changed its policy, and Sir Keir Starmer’s government has promised a swift resolution of the issue using legislation that was (conveniently) passed last year but has yet to be enforced (more on that in the next few days).

This issue became a “problem”, had an hysterical “reaction” and was supplied a ready-made “solution” all inside two weeks. A swifter procession of the Hegelian dialectic would be hard to find.

So, we have the reported demise of mandatory digital identity occurring alongside the rise of the “threat” of AI “deepfakes”.

Nobody in the mainstream press has actually linked these stories together, but the connection is as obvious as the next step is inevitable.

This next step is the UK introducing its own version of the Australian “social media ban” for under-16s. In effect, age-gating all online interaction on major platforms and ending online anonymity.

Keep reading

Data Centers Use Lots of Electricity. This Bill Would Let Them Go Off the Grid.

Tech companies are building data centers as quickly as possible to run AI. These facilities are controversial because they use copious amounts of electricity and might tax an electrical grid that in some areas is already straining.

In a bill introduced last week, Sen. Tom Cotton (R–Ark.) proposed an idea: letting these companies get off the grid altogether.

“Power officials have been raising concerns that the grid isn’t equipped to handle the sheer number of data centers tech companies are seeking to build,” Katherine Blunt wrote last week at The Wall Street Journal. “They say it will take many years to build new transmission lines and power plants needed to support the surge in demand while keeping the lights on for other customers.” Some officials, Blunt noted, “have proposed either requiring or encouraging data centers to stop using [the grid] when there is a risk of blackouts, either by powering down or switching to backup electricity supplies.”

Jowi Morales of Tom’s Hardware reports that companies are “looking at alternative power sources to bring their projects online, regardless of the availability of power from the grid.” Microsoft, for example, is recommissioning the Three Mile Island nuclear plant in Pennsylvania to generate 835 megawatts of power for its data centers (though not without a $1 billion loan from U.S. taxpayers).

“These initiatives will take years to take off, though,” Morales adds. “The Three Mile Island plant is expected to be operational only by 2028.”

Last week, Cotton introduced the Decentralized Access to Technology Alternatives (DATA) Act of 2026. Under the bill, “a consumer-regulated electric utility” would be “exempt from regulation” under federal law so long as it doesn’t connect to the overall electrical grid.

When one company contracts to sell electricity to another company, “that retail transaction presently would put you under the jurisdiction of a bunch of people” at the state and federal levels, says Travis Fisher, director of energy and environmental policy studies at the Cato Institute.

And that brings a cumbersome level of red tape. “The rapid pace of innovation means the AI revolution won’t wait for multi-year permitting fights, cost-of-service hearings held by regulators, or planning processes built for the analog era,” Fisher pointed out last year in an article co-written by Cato’s Jennifer Huddleston. “And yet those are the structures that still govern electricity in much of the country. Building a new transmission line in the US now takes about 10 years, while generation projects spend multiple years stuck in interconnection queues, with more than 2,600 gigawatts of capacity now in queues nationwide.”

The DATA Act would lower the level of regulatory intrusion for enclosed systems that don’t connect to the grid. “It just serves data centers that are probably going to be clustered around it without taking electricity supply off the market for Arkansas families and businesses,” Cotton told the Arkansas Democrat-Gazette.

Keep reading

AI Shows Symptoms of Anxiety, Trauma, PTSD – And It’s Ruining Your Mental Health Too

Grok, Gemini and ChatGPT exhibit symptoms of poor mental health, according to a new study that put various AI models through weeks of therapy-style questioning. Some observers are now curious about “AI mental health”, but the real warning here is about how unstable these systems – which are already being used by one in three UK adults for mental health support – become in emotionally charged conversations. Millions of people are turning to AI as replacement therapists, and in the last year alone we’ve seen a spike in lawsuits connecting chatbot interactions with self-harm and suicide among vulnerable users.

The emerging picture is not that machines are suffering or mentally unwell, but that a product being used for mental-health support can mislead users and escalate and reinforce dangerous thoughts.

Keep reading

When Physicians Are Replaced with a Protocol

My experience in medicine allows me to distinguish between genuine innovation and subtle reclassification that fundamentally alters practice while appearing unchanged. Artificial intelligence has recently attracted considerable attention, including the widely circulated assertion that AI has been “legally authorized to practice medicine” in the United States. Interpreted literally, this claim is inaccurate. No medical board has licensed a machine. No algorithm has sworn an oath, accepted fiduciary duty, or assumed personal liability for patient harm. No robot physician is opening a clinic, billing insurers, or standing before a malpractice jury.

However, stopping at this observation overlooks the broader issue. Legal concepts of liability are currently being redefined, often without public awareness.

A significant transformation is underway, warranting more than either reflexive dismissal or uncritical technological enthusiasm. The current development is not the licensure of artificial intelligence as a physician, but rather the gradual erosion of medicine’s core boundary: the intrinsic link between clinical judgment and human accountability. Clinical judgment involves making informed decisions tailored to each patient’s unique needs and circumstances, requiring empathy, intuition, and a deep understanding of medical ethics.

Human accountability refers to the responsibility healthcare providers assume for these decisions and their outcomes. This erosion is not the result of dramatic legislation or public debate, but occurs quietly through pilot programs, regulatory reinterpretations, and language that intentionally obscures responsibility. Once this boundary dissolves, medicine is transformed in ways that are difficult to reverse.

The main concern isn’t whether AI can refill prescriptions or spot abnormal lab results. Medicine has long used tools, and healthcare providers generally welcome help that reduces administrative tasks or improves pattern recognition. The real issue is whether medical judgment—deciding on the right actions, patients, and risks—can be viewed as a computer-generated outcome separated from moral responsibility. Historically, efforts to disconnect judgment from accountability have often caused harm without taking ownership.

Recent developments clarify the origins of current confusion. In several states, limited pilot programs now allow AI-driven systems to assist with prescription renewals for stable chronic conditions under narrowly defined protocols. At the federal level, proposed legislation has considered whether artificial intelligence might qualify as a “practitioner” for specific statutory purposes, provided it is appropriately regulated. These initiatives are typically presented as pragmatic responses to physician shortages, access delays, and administrative inefficiencies. While none explicitly designates AI as a physician, collectively they normalize the more concerning premise that medical actions can occur without a clearly identifiable human decision-maker.

In practice, this distinction is fundamental. Medicine is defined not by the mechanical execution of tasks, but by the assignment of responsibility when outcomes are unfavorable. Writing a prescription is straightforward; accepting responsibility for its consequences—particularly when considering comorbidities, social context, patient values, or incomplete information—is far more complex. Throughout my career, this responsibility has continuously resided with a human who could be questioned, challenged, corrected, and held accountable. When Dr. Smith makes an error, the family knows whom to contact, ensuring a direct line to human accountability. No algorithm, regardless of sophistication, can fulfill this role.

The primary risk is not technological, but regulatory and philosophical. This transition represents a shift from virtue ethics to proceduralism. When lawmakers and institutions redefine medical decision-making as a function of systems rather than personal acts, the moral framework of medicine changes. Accountability becomes diffuse, harm is more difficult to attribute, and responsibility shifts from clinicians to processes, from judgment to protocol adherence. When errors inevitably occur, the prevailing explanation becomes that ‘the system followed established guidelines.’ Recognizing this transition clarifies the shift from individualized ethical decision-making to mechanized procedural compliance.

Keep reading

UK to bring into force law to tackle Grok AI deepfakes this week

The UK will bring into force a law which will make it illegal to create non-consensual intimate images, following widespread concerns over Elon Musk’s Grok AI chatbot.

Technology Secretary Liz Kendall said the government would also seek to make it illegal for companies to supply the tools designed to create such images.

Speaking to the Commons, Kendall said AI-generated pictures of women and children in states of undress, created without a person’s consent, were not “harmless images” but “weapons of abuse”.

The BBC has approached X for comment. X previously said: “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”

It comes hours after Ofcom announced it was launching an investigation into X over “deeply concerning reports” about Grok altering images of people.

If found to have broken the law, Ofcom can potentially issue X with a fine of up to 10% of its worldwide revenue or £18 million, whichever is greater.
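To put that formula in perspective: for a platform with, say, £1 billion in worldwide revenue, the 10% ceiling would work out to £100 million, well above the £18 million floor; the flat £18 million figure only becomes the operative maximum when worldwide revenue falls below £180 million.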

And if X does not comply, Ofcom can seek a court order to force internet service providers to block access to the site in the UK altogether.

In a statement, Kendall urged the regulator not to take “months and months” to conclude its investigation, and demanded it set out a timeline “as soon as possible”.

It is currently illegal to share intimate deepfakes of adults in the UK, but the provisions in the Data (Use and Access) Act that would make it a criminal offence to create or request them have not yet been brought into force, despite the Act passing in June 2025.

Last week, campaigners accused the government of dragging its heels on implementing that law.

“Today I can announce to the House that this offence will be brought into force this week,” Kendall told MPs.

In addition to the Data Act, Kendall said she would also make it a “priority offence” in the Online Safety Act.

“The content which has circulated on X is vile. It’s not just an affront to decent society, it is illegal,” she said.

“Let me be crystal clear – under the Online Safety Act, sharing intimate images of people without their consent, or threatening to share them, including pictures of people in their underwear, is a criminal offence for individuals and for platforms.

“This means individuals are committing a criminal offence if they create or seek to create such content including on X, and anyone who does this should expect to face the full extent of the law.”

Keep reading