How NDAs keep AI data center details hidden from Americans

On a March afternoon in Mason County, Kentucky, Dr. Timothy Grosser and his son Andy sat across the table from three men who came with an offer: $10 million for the 250-acre farm where they’d lived and worked for nearly four decades.

That’s 35 times what Grosser bought his land for in 1988 and significantly more than what others in the area had sold their land for recently. But there was a catch — it wasn’t clear who was funding the offer. One of the men said he represented a “Fortune 100 company” that wanted the property for an industrial development, but he refused to say what kind, which company or even his own name.

Instead, he pulled out a non-disclosure agreement.

Grosser said the contract would prevent him from discussing the project’s details with any third parties in exchange for limited information about its purpose, timeline and size. It didn’t disclose the company’s name, which could be discussed only after the company publicly announced its participation in the project.

“We refused to sign it,” Grosser said. “I’m not selling my farm for any amount of money.”

Keep reading

U.S. government allowed and even helped U.S. firms sell tech used for surveillance in China: AP

U.S. lawmakers have tried four times since September last year to close what they called a glaring loophole: China is getting around export bans on the sale of powerful American AI chips by renting them through U.S. cloud services instead.

But the proposals prompted a flurry of activity from more than 100 lobbyists from tech companies and their trade associations trying to weigh in, according to disclosure reports.

The result: All four times, the proposal failed, including just last month.

As leaders Donald Trump and Xi Jinping prepare for a long-heralded meeting Thursday, the sale of U.S. technology to China is among the thorniest issues the U.S. faces, with billions of dollars and the future of tech dominance at stake. But the tough talk about China obscures a deeper story: Even while warning about national security risks and human rights abuses, the U.S. government across five Republican and Democratic administrations has repeatedly allowed and even actively helped American firms to sell technology to Chinese police, government agencies and surveillance companies, an Associated Press investigation has found.

And time after time, despite bipartisan attempts, Congress has turned a blind eye to loopholes that allow China to work around America’s own rules, such as cloud services, third-party resellers, and holes in sanctions passed after the Tiananmen massacre. For example, despite U.S. export rules around advanced chips, China bought $20.7 billion worth of chipmaking equipment from U.S. companies in 2024 to bolster its homegrown industry, a report from a congressional committee this month warned.

This reluctance to act reflects the tremendous wealth and power of the tech industry, which is more visible than ever under the Trump administration. And in recent months, the president himself has struck grand deals with Silicon Valley firms that even more closely tie the U.S. economy to tech exports to China, giving taxpayers a direct stake in the profits for the first time.

In August, Trump announced a deal with chipmakers Nvidia and AMD to lift export controls on sales of advanced chips to China in exchange for a 15% cut of the revenue, despite concerns from national security experts that such chips will end up in the hands of Chinese military and intelligence services. That same month, Trump announced that the U.S. government had taken a 10% stake in Intel worth around $11 billion.

Longtime Chinese activist Zhou Fengsuo said the U.S. government is letting American companies set the agenda and ignoring how they help Beijing surveil and censor its own people. In 1989, Zhou was a student leader during the Tiananmen protests, where hundreds and possibly thousands were shot and killed by the Chinese government. Zhou was arrested and imprisoned.

Now a U.S. citizen, Zhou testified before Congress in 2024, calling on Washington to investigate the involvement of American tech companies in Chinese surveillance. An AP investigation in September found that American companies to a large degree designed and built China’s surveillance state, playing a far greater role in enabling human rights abuses than previously known.

“It’s driven by profit, and that’s why these strategic discussions have been silenced or delayed,” Zhou said. “I’m extremely disappointed. … this is a strategic failure by the United States.”

Keep reading

Millions Of America’s Teens Are Being Seduced By AI Chatbots

Our kids are being targeted by AI chatbots on a massive scale, and most parents have no idea that this is happening. When you are young and impressionable, having someone tell you exactly what you want to hear can be highly appealing. AI chatbots have become extremely sophisticated, and millions of America’s teens are developing very deep relationships with them. Is this just harmless fun, or is it extremely dangerous?

A brand new study that was just released by the Center for Democracy & Technology contains some statistics that absolutely shocked me

A new study published Oct. 8 by the Center for Democracy & Technology (CDT) found that 1 in 5 high school students have had a relationship with an AI chatbot, or know someone who has. A 2025 report from Common Sense Media found that 72% of teens had used an AI companion, and a third of teen users said they had chosen to discuss important or serious matters with AI companions instead of real people.

We aren’t just talking about a few isolated cases anymore.

At this stage, literally millions upon millions of America’s teens are having very significant relationships with AI chatbots.

Unfortunately, there are many examples where these relationships are leading to tragic consequences.

After 14-year-old Sewell Setzer developed a “romantic relationship” with a chatbot on Character.AI, he decided to take his own life

“What if I could come home to you right now?” “Please do, my sweet king.”

Those were the last messages exchanged by 14-year-old Sewell Setzer and the chatbot he developed a romantic relationship with on the platform Character.AI. Minutes later, Sewell took his own life.

His mother, Megan Garcia, held him for 14 minutes until the paramedics arrived, but it was too late.

If you allow them to do so, these AI chatbots will really mess with your head.

We are talking about ultra-intelligent entities that have been specifically designed to manipulate emotions.

Keep reading

Australia’s eSafety Chief Pressures Big Tech and AI Firms on Verification, Age Checks

Australia’s top online regulator, eSafety Commissioner Julie Inman Grant, is intensifying her push to reshape speech in the digital world.

Her office has formally warned major social platforms and several AI chatbot companies that they could soon be forced to comply with far-reaching new age verification and “online safety” requirements that many see as expanding government control over online communication.

The warnings are part of the government’s effort to enforce the Online Safety Amendment (Social Media Minimum Age) Bill 2024, which would bar Australians under 16 from creating social media accounts.

Letters sent to Meta, TikTok, Snapchat, X, and YouTube make it clear that each company is expected to fall under the scope of the new law.

The Commissioner’s preliminary assessment is that these services exist mainly for “online social interaction,” which brings them within the definition of social media platforms and subjects them to strict age verification and child protection obligations.

Not all of the companies accept that classification. Snapchat claims to be primarily a messaging platform similar to WhatsApp, while YouTube has opposed losing its original exemption.

At this stage, only services with a clear focus on messaging or education, such as WhatsApp, Messenger, YouTube Kids, and Google Classroom, remain excluded from the Commissioner’s oversight.

Keep reading

Governments Keep Letting AI Make Decisions & It’s Already Going Wrong

Where It’s Already Gone Wrong 

Netherlands’ childcare benefits scandal – 2021 

Automated risk profiling and aggressive enforcement mislabelled thousands of families as fraudsters. Repayments were wrongly demanded from legitimate claimants, trust in the system was shaken, and the political fallout triggered the government’s resignation.

Denmark’s failed welfare algorithm – 2024 to 2025 

Dozens of fraud detection models monitored benefits claimants. Rights group Amnesty International reported that the algorithms risk enabling mass surveillance and discrimination against marginalised groups. The systems remained in use as scrutiny continued into 2025.

France’s predictive policing backlash – 2025  

Civil society documented predictive policing deployments and called in May 2025 for an outright ban. The evidence shows hotspot forecasting and risk tools that are opaque and likely to reproduce bias. Because these systems are trained on historical data, they send officers back to the same neighbourhoods that may already have been over-policed, while little is done to explain to the public how the tools work and there is no credible path to appeal.

USA expands biometric border checks – 2025  

Facial comparisons run at hundreds of airports, seaports and land borders. Opt-outs apparently exist but are confusing to most travellers, and accuracy varies by demographic, with transparent figures yet to surface. Human lines reportedly move slower than automated ones, turning the convenience into indirect pressure to submit to the new technology.

Australia’s Robodebt fallout and new automation faults – 2023 to 2025 

A Royal Commission found the automated debt scheme unlawful and harmful. In 2025, watchdogs flagged thousands of wrongful JobSeeker cancellations tied to IT glitches in the Targeted Compliance Framework. Strategies were published and apologies made, yet incentives still rewarded speed over care.

India’s ongoing biometric failures – 2025  

Biometric failures and outages have blocked rations and benefits for many. Authorities are testing facial recognition to patch fingerprint failures and vice versa, but if one biometric fails and another is layered on top, error can spread across services that depend on the same ID.

Keep reading

People Taking Medical Advice from AI Chatbots Are Ending Up in the ER

The growing reliance on AI-powered chatbots for medical advice has led to several alarming cases of harm and even tragedy, as people follow potentially dangerous recommendations from these digital assistants.

The New York Post reports that in recent years, the rise of generative AI chatbots has revolutionized the way people seek information, including health advice. However, the increasing reliance on these AI-powered tools has also led to several disturbing instances where individuals have suffered severe consequences after following chatbots’ medical recommendations. From anal pain caused by self-treatment gone wrong to missed signs of a mini-stroke, the real-life impact of bad AI health advice is becoming increasingly apparent.

One particularly shocking case involved a 35-year-old Moroccan man who sought help from ChatGPT for a cauliflower-like anal lesion. The chatbot suggested that the growth could be hemorrhoids and proposed elastic ligation as a treatment. The man attempted to perform this procedure on himself using a thread, resulting in intense pain that landed him in the emergency room. Further testing revealed that the growth had been completely misdiagnosed by AI.

In another incident, a 60-year-old man with a college education in nutrition asked ChatGPT how to reduce his intake of table salt. The chatbot suggested using sodium bromide as a replacement, and the man followed this advice for three months. However, chronic consumption of sodium bromide can be toxic, and the man developed bromide poisoning. He was hospitalized for three weeks with symptoms including paranoia, hallucinations, confusion, extreme thirst, and a skin rash.

The consequences of relying on AI for medical advice can be even more severe, as demonstrated by the case of a 63-year-old Swiss man who experienced double vision after a minimally invasive heart procedure. When the double vision returned, he consulted ChatGPT, which reassured him that such visual disturbances were usually temporary and would improve on their own. The man decided not to seek medical help, but 24 hours later, he ended up in the emergency room after suffering a mini-stroke. The researchers concluded that his care had been “delayed due to an incomplete diagnosis and interpretation by ChatGPT.”

These disturbing cases highlight the limitations and potential dangers of relying on AI chatbots for medical advice. While these tools can be helpful in understanding medical terminology, preparing for appointments, or learning about health conditions, they should never be used as a substitute for professional medical guidance. Chatbots can misinterpret user requests, fail to recognize nuances, reinforce unhealthy behaviors, and miss critical warning signs for self-harm.

Keep reading

AI Security System Mistakes Bag of Doritos for Gun, Triggers Police Response to School in Baltimore

This situation could have ended up far worse than it did.

An artificial intelligence system at a high school in Baltimore mistook a bag of Doritos for a gun.

On Monday, Taki Allen was waiting for his ride outside of Kenwood when, all of a sudden, police officers with their weapons drawn demanded that Allen get on the ground.

Allen, completely confused by the situation, heeded the officers’ commands and was searched, only for them to find no weapon on him, just a bag of Doritos.

The teen told WBRC,  “They said that an AI detector or something detected that I had a gun. He showed me a picture. I was just holding a Doritos bag like this.”

In a letter to parents, the principal of Kenwood High School wrote, “At approximately 7 p.m., school administration received an alert that an individual on school grounds may have been in possession of a weapon. The Department of School Safety and Security quickly reviewed and canceled the initial alert after confirming there was no weapon.”

Keep reading

Reality Vs Garbage: Has AI Already Lost The ‘I’ Part?

“The business incentives driving consumer AI development remain fundamentally misaligned with reducing hallucinations.”

– The Singularity Hub on “X”

Which is to say, there is Reality, and then there is every other cockamamie aggregate of simulation pretending to represent Reality, i.e. garbage.

How many millions among us already subscribe to the latter?

Apparently, lots, and they are not evenly distributed these days.

You surely know where to look for the un-Reality. The party of men can get pregnant, and all the rest…

Enter A-I to make things worse. Probably a lot worse. We have failed to learn the chief lesson of the computer age, which is that the virtual is not an acceptable substitute for the authentic. So, we plunge deeper into realms of the un-real and the inauthentic. This turns into a quest to get something-for-nothing, and the unfortunate result of that old dodge is that you will end up with nothing, and that is exactly why we are at such a hazardous pass in the human project.

I apologize if the above seems too metaphysical. But that’s the scenery en route when a civilization flies up its own wazoo. Novelist Cory Doctorow has nicely labeled this the enshittification of daily life.

First of all, get this: A-I has already quit operating as-advertised.

It has lost the “I” part. A-I does its thing by rapidly combing through the Internet to evaluate and seize information that you request. Increasingly, A-I colonizes the Internet with second-hand, third-hand, and so forth A-I-generated information. The more territory A-I seizes on the Web, and the more it trains itself on recursive feedback of its own garbage, the more distorted the output gets. As that occurs, A-I becomes increasingly abstracted from Reality, which is exactly what happens when a person goes insane. So, expect an exponential rise in incorrect content, which would, in theory, become a pretty serious problem when you ask A-I to run the systems we depend on: the electric grid, harvesting crops, warfare. . . .
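
A toy sketch of the feedback loop described above, assuming (purely for illustration) that each new "generation" of a model is trained only on the previous generation's output rather than on fresh real-world data. Nothing here models any real system; it simply shows how an estimate drifts once it is no longer anchored to the original data:

```python
import random
import statistics

# Toy illustration: each "generation" fits a simple Gaussian model to
# samples drawn from the previous generation's model, so nothing ever
# pulls the estimate back toward the original real-world data.

random.seed(0)

real_world = [random.gauss(0.0, 1.0) for _ in range(200)]  # stand-in for "Reality"
mean, spread = statistics.mean(real_world), statistics.stdev(real_world)

for generation in range(1, 11):
    # Train the next model only on the previous model's own output.
    synthetic = [random.gauss(mean, spread) for _ in range(200)]
    mean, spread = statistics.mean(synthetic), statistics.stdev(synthetic)
    print(f"gen {generation:2d}: drift from true mean = {abs(mean):.3f}, spread = {spread:.3f}")
```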

Secondly, as that process runs, and probably before it gets very far, A-I looks like it will wreck the financial system, which, in turn, would crater the economy of everyday life — the ability of people to earn a living, buy stuff, support children, get food, and stay out of the rain.

Zillions of dollars are being invested in A-I now and lately it is mainly what drives the capital markets. So far, alas, return on that investment is scant — actually, negative. The situation might never improve, and as the recognition hits, look out below. The only question is whether that happens before the central banks destroy the world’s currencies with money-printing.

One class of A-I applications, robotaxi services such as Waymo, has never turned a profit. Will they ever? Doesn’t look good. Notice, too, that the elimination of cab-drivers means X-number fewer humans making a living to buy stuff (presumably made by other people in other jobs soon to be replaced by robots). Of course, that’s the self-replicating problem with all applied A-I in every field of employment. The more jobs eliminated, the fewer customers for anything. Please don’t tell me that guaranteed basic income fixes that problem.

Keep reading

DUDE BREAKING: Robby Starbuck is suing Google over INSANE attacks against him by Google’s evil AI

Google’s AI has been targeting Robby Starbuck in an insane way with completely fake attacks on him, which Starbuck says the AI worked overtime to make believable with links to fake articles and fake official records.

He’s warned them to cease and desist and now he’s suing them.

He explains it all below:

HUGE NEWS: I’m suing @Google today.

What you’re about to see is insane.

Since 2023, @GoogleAI (Bard, Gemini & Gemma), has been defaming me with fake criminal allegations including sexual assault, child rape, abuse, fraud, stalking, drug charges, and even saying I was in Epstein’s flight logs.

All 100% fake. All generated by Google’s AI. I have ZERO criminal record or allegations.

So why did Google do it? Google’s AI says that I was targeted because of my political views.

Even worse — Google execs KNEW for 2 YEARS that this was happening because I told them and my lawyers sent cease and desist letters multiple times.

This morning, my team @dhillonlaw filed my lawsuit against Google and now I’m going public with all the receipts — because this can’t ever happen to anyone else.

Google’s AI didn’t just lie — it built fake worlds to make its lies look real:

• Fake victims
• Fake therapy records
• Fake court records
• Fake police records
• Fake relationships
• Fake “news” stories

It even fabricated statements denouncing me from President Trump, @elonmusk and @JDVance over sexual assaults that Google completely invented.

One of the most dystopian things I’ve ever seen is how dedicated their AI was to doubling down on the lies. Google’s AI routinely cited fake sources by creating fake links to REAL media outlets and shows, complete with fake headlines so readers would trust the information. It would continue to do this even if you called the AI out for lying or sending fake links. In short, it was creating fake legacy media reports as a way to launder trust with users so they would believe elaborate lies that it told.

Keep reading

Judge Orders Tech CEOs to Testify in Case Using Algorithmic Design Rules as a New Avenue for Indirect Online Censorship Pressure

Three of the tech industry’s most recognizable leaders, Mark Zuckerberg of Meta, Evan Spiegel of Snap, and Adam Mosseri of Instagram, will be required to testify in court early next year.

The order came from Los Angeles Superior Court Judge Carolyn Kuhl, who ruled that their participation is essential in a lawsuit alleging that social media platforms were deliberately designed to harm young users’ mental health.

Attorneys for the companies had tried to prevent the CEOs from appearing, arguing that earlier depositions and other executive testimonies already provided sufficient information.

Judge Kuhl disagreed, stating, “The testimony of a CEO is uniquely relevant, as that officer’s knowledge of harms, and failure to take available steps to avoid such harms could establish negligence or ratification of negligent conduct.”

She also noted that their testimony would be “unique,” since the claims center on design features built to “be addictive” and “drive compulsive” use among minors.

Meta argued that compelling both Zuckerberg and Mosseri to testify would disrupt their ability to manage the business and “set a precedent” for future cases. Snap’s lawyers said the decision to call Spiegel to the stand was an “abuse of discretion.”

Judge Kuhl rejected both arguments, saying that those in charge must directly answer questions about their companies’ conduct instead of delegating that responsibility.

After the ruling, Meta declined to comment.

Keep reading