U.S. government allowed and even helped U.S. firms sell tech used for surveillance in China: AP

U.S. lawmakers have tried four times since September last year to close what they called a glaring loophole: China is getting around export bans on the sale of powerful American AI chips by renting them through U.S. cloud services instead.

But the proposals prompted a flurry of activity from more than 100 lobbyists from tech companies and their trade associations trying to weigh in, according to disclosure reports.

The result: All four times, the proposal failed, including just last month.

As leaders Donald Trump and Xi Jinping prepare for a long-heralded meeting Thursday, the sale of U.S. technology to China is among the thorniest issues the U.S. faces, with billions of dollars and the future of tech dominance at stake. But the tough talk about China obscures a deeper story: Even while warning about national security and human rights abuse, the U.S. government across five Republican and Democratic administrations has repeatedly allowed and even actively helped American firms to sell technology to Chinese police, government agencies and surveillance companies, an Associated Press investigation has found.

And time after time, despite bipartisan attempts, Congress has turned a blind eye to loopholes that allow China to work around U.S. rules, such as cloud services, third-party resellers, and holes in sanctions passed after the Tiananmen massacre. For example, despite U.S. export rules on advanced chips, China bought $20.7 billion worth of chipmaking equipment from U.S. companies in 2024 to bolster its homegrown industry, a report from a congressional committee warned this month.

This reluctance to act reflects the tremendous wealth and power of the tech industry, which is more visible than ever under the Trump administration. And in recent months, the president himself has struck grand deals with Silicon Valley firms that even more closely tie the U.S. economy to tech exports to China, giving taxpayers a direct stake in the profits for the first time.

In August, Trump announced a deal with chipmakers Nvidia and AMD to lift export controls on sales of advanced chips to China in exchange for a 15% cut of the revenue, despite concerns from national security experts that such chips will end up in the hands of Chinese military and intelligence services. That same month, Trump announced that the U.S. government had taken a 10% stake in Intel worth around $11 billion.

Longtime Chinese activist Zhou Fengsuo said the U.S. government is letting American companies set the agenda and ignoring how they help Beijing surveil and censor its own people. In 1989, Zhou was a student leader during the Tiananmen protests, where hundreds and possibly thousands were shot and killed by the Chinese government. Zhou was arrested and imprisoned.

Now a U.S. citizen, Zhou testified before Congress in 2024, calling on Washington to investigate the involvement of American tech companies in Chinese surveillance. An AP investigation in September found that American companies to a large degree designed and built China’s surveillance state, playing a far greater role in enabling human rights abuses than previously known.

“It’s driven by profit, and that’s why these strategic discussions have been silenced or delayed,” Zhou said. “I’m extremely disappointed. … this is a strategic failure by the United States.”

Keep reading

Millions Of America’s Teens Are Being Seduced By AI Chatbots

Our kids are being targeted by AI chatbots on a massive scale, and most parents have no idea that this is happening. When you are young and impressionable, having someone tell you exactly what you want to hear can be highly appealing. AI chatbots have become extremely sophisticated, and millions of America’s teens are developing very deep relationships with them. Is this just harmless fun, or is it extremely dangerous?

A brand new study that was just released by the Center for Democracy & Technology contains some statistics that absolutely shocked me:

A new study published Oct. 8 by the Center for Democracy & Technology (CDT) found that 1 in 5 high school students have had a relationship with an AI chatbot, or know someone who has. A 2025 report from Common Sense Media found that 72% of teens had used an AI companion, and a third of teen users said they had chosen to discuss important or serious matters with AI companions instead of real people.

We aren’t just talking about a few isolated cases anymore.

At this stage, literally millions upon millions of America’s teens are having very significant relationships with AI chatbots.

Unfortunately, there are many examples where these relationships are leading to tragic consequences.

After 14-year-old Sewell Setzer developed a “romantic relationship” with a chatbot on Character.AI, he decided to take his own life

“What if I could come home to you right now?” “Please do, my sweet king.”

Those were the last messages exchanged by 14-year-old Sewell Setzer and the chatbot he developed a romantic relationship with on the platform Character.AI. Minutes later, Sewell took his own life.

His mother, Megan Garcia, held him for 14 minutes until the paramedics arrived, but it was too late.

If you allow them to do so, these AI chatbots will really mess with your head.

We are talking about ultra-intelligent entities that have been specifically designed to manipulate emotions.

Keep reading

Australia’s eSafety Chief Pressures Big Tech and AI Firms on Verification, Age Checks

Australia’s top online regulator, eSafety Commissioner Julie Inman Grant, is intensifying her push to reshape speech in the digital world.

Her office has formally warned major social platforms and several AI chatbot companies that they could soon be forced to comply with far-reaching new age verification and “online safety” requirements that many see as expanding government control over online communication.

The warnings are part of the government’s effort to enforce the Online Safety Amendment (Social Media Minimum Age) Bill 2024, which would bar Australians under 16 from creating social media accounts.

Letters sent to Meta, TikTok, Snapchat, X, and YouTube make it clear that each company is expected to fall under the scope of the new law.

The Commissioner’s preliminary assessment is that these services exist mainly for “online social interaction,” which brings them within the definition of social media platforms and subjects them to strict age verification and child protection obligations.

Not all of the companies accept that classification. Snapchat claims to be primarily a messaging platform similar to WhatsApp, while YouTube has opposed losing its original exemption.

At this stage, only services with a clear focus on messaging or education, such as WhatsApp, Messenger, YouTube Kids, and Google Classroom, remain excluded from the Commissioner’s oversight.

Keep reading

Governments Keep Letting AI Make Decisions & It’s Already Going Wrong

Where It’s Already Gone Wrong 

Netherlands’ childcare benefits scandal – 2021 

Automated risk profiling and aggressive enforcement mislabelled thousands of families as fraudsters. Repayments were wrongly demanded from legitimate claimants, confidence in the system was shaken, and the political fallout triggered the government’s resignation.

Denmark’s failed welfare algorithm – 2024 to 2025 

Dozens of fraud detection models monitored benefits claimants. Rights group Amnesty International reported that the algorithms risk mass surveillance and discrimination against marginalised groups. The systems remained in use as scrutiny continued into 2025. 

France’s predictive policing backlash – 2025  

Civil society documented predictive policing deployments and called in May 2025 for an outright ban. The evidence shows hotspot forecasting and risk tools that are opaque and likely to reproduce bias. These systems are trained on historic data, which sends officers back to the same neighbourhoods that may already have been over-policed, while very little is done to explain to the public how the tools work, and there is no credible path to appeal.

USA expands biometric border checks – 2025  

Facial comparisons run at hundreds of airports, seaports and land borders. Opt-outs apparently exist but are confusing to most, and accuracy varies by demographic, with transparent figures yet to surface. Human lines reportedly move slower than automated ones, turning convenience into indirect pressure to submit to the new technology.

Australia’s Robodebt fallout and new automation faults – 2023 to 2025 

A Royal Commission found the automated debt scheme unlawful and harmful. In 2025, watchdogs flagged thousands of wrongful JobSeeker cancellations tied to IT glitches in the Targeted Compliance Framework. Strategies were published and apologies made, yet incentives still rewarded speed over care.

India’s ongoing biometric failures – 2025  

Biometric failures and outages have blocked rations and benefits for many. Authorities are testing facial recognition to patch fingerprint failures and vice versa, but if one biometric fails and another is layered on top, error can spread across services that depend on the same ID.

Keep reading

People Taking Medical Advice from AI Chatbots Are Ending Up in the ER

The growing reliance on AI-powered chatbots for medical advice has led to several alarming cases of harm and even tragedy, as people follow potentially dangerous recommendations from these digital assistants.

The New York Post reports that in recent years, the rise of generative AI chatbots has revolutionized the way people seek information, including health advice. However, the increasing reliance on these AI-powered tools has also led to several disturbing instances where individuals have suffered severe consequences after following chatbots’ medical recommendations. From anal pain caused by self-treatment gone wrong to missed signs of a mini-stroke, the real-life impact of bad AI health advice is becoming increasingly apparent.

One particularly shocking case involved a 35-year-old Moroccan man who sought help from ChatGPT for a cauliflower-like anal lesion. The chatbot suggested that the growth could be hemorrhoids and proposed elastic ligation as a treatment. The man attempted to perform this procedure on himself using a thread, resulting in intense pain that landed him in the emergency room. Further testing revealed that the growth had been completely misdiagnosed by AI.

In another incident, a 60-year-old man with a college education in nutrition asked ChatGPT how to reduce his intake of table salt. The chatbot suggested using sodium bromide as a replacement, and the man followed this advice for three months. However, chronic consumption of sodium bromide can be toxic, and the man developed bromide poisoning. He was hospitalized for three weeks with symptoms including paranoia, hallucinations, confusion, extreme thirst, and a skin rash.

The consequences of relying on AI for medical advice can be even more severe, as demonstrated by the case of a 63-year-old Swiss man who experienced double vision after a minimally invasive heart procedure. When the double vision returned, he consulted ChatGPT, which reassured him that such visual disturbances were usually temporary and would improve on their own. The man decided not to seek medical help, but 24 hours later, he ended up in the emergency room after suffering a mini-stroke. The researchers concluded that his care had been “delayed due to an incomplete diagnosis and interpretation by ChatGPT.”

These disturbing cases highlight the limitations and potential dangers of relying on AI chatbots for medical advice. While these tools can be helpful in understanding medical terminology, preparing for appointments, or learning about health conditions, they should never be used as a substitute for professional medical guidance. Chatbots can misinterpret user requests, fail to recognize nuances, reinforce unhealthy behaviors, and miss critical warning signs for self-harm.

Keep reading

Overnight Drone Attack Hits Moscow High-Rise As Putin Warns Of ‘Overwhelming’ Response

Ukrainian drones have once again reached the Moscow area, far from the border, at a moment when the Kremlin is strongly warning Washington against allowing the transfer of US Tomahawk missiles to Kiev.

The attack on a Moscow suburb was part of a broader wave of overnight drone attacks which hit multiple regions across the country, injuring at least five people, including a child, when one drone slammed into an apartment building near Moscow.

According to Moscow region Governor Andrei Vorobyov, the drone hit a 14th-floor apartment in a high-rise building in the city of Krasnogorsk, northwest of the capital.

Four adults were hospitalized with head injuries, fractures, and shrapnel wounds, and a boy suffered minor injuries in the attack. Circulating photos showed blown-out walls in an apartment. 

Russia’s Defense Ministry said that air defense forces intercepted and destroyed over 110 Ukrainian UAVs over 13 regions overnight. Several drones were also shot down as they approached the capital.

Ukraine appears to be feeling emboldened, having notched a series of ‘wins’ on the global stage with this week’s new US and EU anti-Moscow sanctions. The new attack marked the second consecutive night in which more than 100 drones assaulted Russian territory.

Power outages hit some Russian areas, particularly the Rostov region, and drone impacts were also reported in Bryansk, Kaluga, Tula, and Tver.

Meanwhile, President Vladimir Putin has warned that, in the face of new sanctions and the potential transfer to Ukraine of long-range weapons including Tomahawk missiles, Moscow stands ready to respond with “overwhelming” force:

“Dialogue is always better than confrontation or any disputes, and especially war. We have always supported the continuation of dialogue,” Putin told journalists. 

But if Russia was attacked with US Tomahawk missiles, which Ukraine seeks, the response would be “very strong, if not overwhelming. Let them think about it,” he added. 

So far, Trump appears to have resisted Zelensky’s and Europe’s urging on this front, though he has shown willingness to reverse his decisions on such Ukraine war-related issues later.

Keep reading

AI Security System Mistakes Bag of Doritos for Gun, Triggers Police Response to School in Baltimore

This situation could have ended up far worse than it did.

An artificial intelligence system at a high school in Baltimore mistook a bag of Doritos for a gun.

On Monday, Taki Allen was waiting for his ride outside of Kenwood when, all of a sudden, police officers with their weapons drawn ordered Allen to get on the ground.

Allen, completely confused by the situation, heeded the officers’ commands and was searched, only for the officers to find no weapon on him, just a bag of Doritos.

The teen told WBRC, “They said that an AI detector or something detected that I had a gun. He showed me a picture. I was just holding a Doritos bag like this.”

In a letter to parents, the principal of Kenwood High School wrote, “At approximately 7 p.m., school administration received an alert that an individual on school grounds may have been in possession of a weapon. The Department of School Safety and Security quickly reviewed and canceled the initial alert after confirming there was no weapon.”

Keep reading

Reality Vs Garbage: Has AI Already Lost The ‘I’ Part?

“The business incentives driving consumer AI development remain fundamentally misaligned with reducing hallucinations.”

– The Singularity Hub on “X”

Which is to say, there is Reality, and then there is every other cockamamie aggregate of simulation pretending to represent Reality, i.e. garbage.

How many millions among us already subscribe to the latter?

Apparently, lots, and they are not evenly distributed these days.

You surely know where to look for the un-Reality. The party of men can get pregnant, and all the rest…

Enter A-I to make things worse. Probably a lot worse. We have failed to learn the chief lesson of the computer age, which is that the virtual is not an acceptable substitute for the authentic. So, we plunge deeper into realms of the un-real and the inauthentic. This turns into a quest to get something-for-nothing, and the unfortunate result of that old dodge is that you will end up with nothing, and that is exactly why we are at such a hazardous pass in the human project.

I apologize if the above seems too metaphysical. But that’s the scenery en route when a civilization flies up its own wazoo. Novelist Cory Doctorow has nicely labeled this the enshittification of daily life.

First of all, get this: A-I has already quit operating as advertised.

It has lost the “I” part. A-I does its thing by rapidly combing through the Internet to evaluate and seize information that you request. Increasingly, A-I colonizes the Internet with second-hand, third-hand, and so forth A-I-generated information. The more territory A-I seizes on the Web, and the more it trains itself on recursive feedbacks of its own garbage, the more distorted the output gets. As that occurs, A-I becomes increasingly abstracted from Reality, which is exactly what happens when a person goes insane. So, expect an exponential rise in incorrect content that would, in theory, become a pretty serious problem when you ask A-I to run things like systems we depend on, the electric grid, harvesting crops, warfare. . . .
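To make that feedback loop concrete, here is a toy sketch of my own, not the author’s: a one-number statistical “model” repeatedly refit to samples of its own output, a simplified stand-in for A-I retraining on A-I-generated text. Nothing below is specific to any real system; it only shows the mechanism in miniature.

```python
import random
import statistics

# Toy sketch (an assumption-laden illustration, not the author's
# analysis): fit a simple Gaussian "model" to data, then repeatedly
# refit it to samples drawn from its own previous output, the way a
# model retrained on A-I-generated text feeds on itself. The fitted
# spread wanders away from the truth and, on average, shrinks as
# estimation noise compounds across generations.

random.seed(42)
N = 20  # small training sets make the degradation visible quickly

# Generation 0 trains on "real" data: mean 0, standard deviation 1.
data = [random.gauss(0, 1) for _ in range(N)]

for generation in range(51):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    if generation % 10 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f} stdev={sigma:.3f}")
    # The next generation never sees real data, only model output.
    data = [random.gauss(mu, sigma) for _ in range(N)]
```

A model fed only its own output, with no fresh real data, drifts away from the distribution it started with; that is the recursive-garbage problem in its simplest possible form.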

Secondly, as that process runs, and probably before it gets very far, A-I looks like it will wreck the financial system, which, in turn, would crater the economy of everyday life — the ability of people to earn a living, buy stuff, support children, get food, and stay out of the rain.

Zillions of dollars are being invested in A-I now and lately it is mainly what drives the capital markets. So far, alas, return on that investment is scant — actually, negative. The situation might never improve, and as the recognition hits, look out below. The only question is whether that happens before the central banks destroy the world’s currencies with money-printing.

Robotaxi services such as Waymo, one A-I application, have never turned a profit. Will they ever? Doesn’t look good. Notice, too, that the elimination of cab-drivers means X-number fewer humans making a living to buy stuff (presumably made by other people in other jobs soon to be replaced by robots). Of course, that’s the self-replicating problem with all applied A-I in every field of employment. The more jobs eliminated, the fewer customers for anything. Please don’t tell me that guaranteed basic income fixes that problem.

Keep reading

Historic New Mexico Town Blocks Cell Tower After Consulting Lawyer Featured in The Defender

Residents of San Cristóbal, New Mexico, a historic valley in Taos County, successfully blocked a 195-foot cell tower from being built in their community after teaming up with a telecommunications attorney featured in The Defender.

San Cristóbal residents contacted attorney Robert Berg on Sept. 19, after reading a Sept. 18 article in The Defender. The article featured Berg’s work representing communities that opposed cell towers or wireless antennas near homes and schools.

Berg agreed to represent the residents in person and praised their teamwork. “It’s a remarkable group of people — and a remarkable valley,” he said.

On Oct. 14, the Taos County Board of Commissioners voted 3-2 to overturn the Planning Commission’s July approval of a special use permit for Skyway Towers, a Tampa-based company that builds cell towers on speculation.

“Our community was united in opposition to this tower because we know that better alternatives exist,” Mandy Sackett, a San Cristóbal resident, told The Defender. “It’s heartening that the county commissioners took our voices seriously.”

The San Cristóbal residents’ victory comes as the Federal Communications Commission (FCC) — the agency that oversees wireless infrastructure — is proposing new rules that would hand the wireless industry sweeping control over where cell towers are built, according to an Oct. 17 Children’s Health Defense (CHD) action alert.

If adopted, the rules would eliminate public hearings for conditional and special use permits and automatically approve new tower applications after 150 days.

Keep reading

AWS outage post-mortem fingers DNS as the culprit that took out a chunk of the internet and services for days — automation systems race and crash

The recent Amazon Web Services outage that took out a significant portion of the internet, games, and even smart home devices for days was extensively covered in the news. Cloud services’ distributed architecture should protect customers from failures like this one, so what went wrong? Amazon published a detailed technical post-mortem of the failure, and as the famous haiku goes: “It’s not DNS. / There’s no way it’s DNS. / It was DNS.”

As a rough analogy, consider what happens when there’s a car crash. There’s a traffic jam that stretches for miles, in an accordion-like effect that lasts well after the accident scene has been cleared. The very first problem was fixed relatively quickly, with a three-hour outage from October 19 at 11:48 PM until October 20 at 2:40 AM. However, as with the traffic jam example, dependencies started breaking, and didn’t fully come online until much later.

The root cause was reportedly that a broken DNS configuration for DynamoDB (Amazon’s database service) was published to Route53 (its DNS service). In turn, parts of EC2 (the virtual machine service) also went down, as its automated management services rely on DynamoDB. Amazon’s Network Load Balancer also naturally depends on DNS, so it too encountered issues.

It’s worth noting that DynamoDB failing across the entire US-East-1 region is, by itself, enough to bring down what are probably millions of websites and services. However, not being able to bring up EC2 instances was extra bad, and load balancing being affected was diamond-badge bad.

The specific technical issue behind the DNS failure was a programmer’s “favorite” bug: a race condition, in which the outcome depends on the unpredictable timing of concurrent events, so two repeating processes can keep redoing or undoing each other’s work — the famous GIF of Bugs Bunny and Daffy Duck with the poster is illustrative.

The DynamoDB DNS resolution uses two components: a DNS Planner that, as the name implies, periodically issues a new Plan that takes system load and availability into account, and DNS Enactors that, whenever they see a new Plan, apply it to Route53 as a transaction, meaning a plan either fully applies or it doesn’t. So far, so good.

What happened was that the first DNS Enactor was taking its sweet time applying what we’ll call the Old Plan. As New Plans came in, another Enactor picked one up and applied it. At that point Route53 held good, updated data, and a clean-up of outdated plans (Old Plan included) was issued, just as the first Enactor finished applying the Old Plan.
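The interleaving is easier to see as a toy simulation. Below is a minimal sketch, assuming a single DNS record and a simplified plan store; the names (publish_plan, apply_plan, cleanup) and data shapes are invented for illustration and are not AWS’s actual internals, which the post-mortem does not spell out in code.

```python
# A deterministic toy of the stale-plan race, under simplified,
# assumed semantics: integer plan IDs, one DNS record, and a
# cleanup that trusts "older than newest" to mean "safe to delete".

plan_store = {}                                  # plan_id -> DNS records
route53 = {"active_plan": None, "records": {}}   # the live DNS state

def publish_plan(plan_id, records):
    """The Planner periodically emits a new plan."""
    plan_store[plan_id] = records

def apply_plan(plan_id):
    """An Enactor applies a plan as an all-or-nothing transaction.
    Crucially, this toy version never checks whether a NEWER plan
    is already live, which is the heart of the race."""
    route53["active_plan"] = plan_id
    route53["records"] = dict(plan_store[plan_id])

def cleanup(newest_plan_id):
    """Garbage-collect plans older than the newest one. If a stale
    plan was re-applied in the meantime, its records are deleted
    out from under the live endpoint."""
    for pid in [p for p in plan_store if p < newest_plan_id]:
        del plan_store[pid]
        if route53["active_plan"] == pid:
            route53["records"] = {}  # the live DNS entry is now empty

# The interleaving described in the post-mortem:
publish_plan(1, {"dynamodb.us-east-1": "10.0.0.1"})  # Old Plan
publish_plan(2, {"dynamodb.us-east-1": "10.0.0.2"})  # New Plan
apply_plan(2)   # a fast Enactor applies the New Plan
apply_plan(1)   # the slow Enactor finally finishes the Old Plan
cleanup(2)      # clean-up deletes "outdated" plans, Old Plan included

print(route53)  # {'active_plan': 1, 'records': {}} -> endpoint wiped
```

In this toy version, a version check in apply_plan (refusing to apply any plan older than the one already live) would close the race; whatever form AWS’s real fix takes, it has to break that same interleaving.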

Keep reading