Thailand orders suspension of iris scans and deletion of data collected from 1.2 million users

Thailand’s Personal Data Protection Committee has ordered TIDC Worldverse to suspend its iris-scanning services and delete biometric data collected from 1.2 million users, citing non-compliance with Thailand’s Personal Data Protection Act.

TIDC Worldverse is part of Sam Altman’s World ID project, which has faced scrutiny over potential links to cryptocurrency scams and unauthorised data use, including cases where people were allegedly hired to scan irises for others.

Thailand’s Personal Data Protection Committee has ordered the suspension of iris biometric data collection by TIDC Worldverse and has demanded the deletion of biometric data already collected from approximately 1.2 million Thai citizens.

TIDC Worldverse is the Thai representative of Sam Altman’s Tools for Humanity, which operates the World ID project (formerly Worldcoin) in Thailand. The initiative uses iris-scanning “Orb” devices to provide a digital “proof-of-human” credential. Participants receive Worldcoin (“WLD”) tokens as an incentive for biometric verification.

Explaining in simple terms what the “Orb” is, Business Insider said, “The Orb is a polished, volleyball-sized metal sphere that scans irises to generate a ‘World ID’ – a kind of digital passport meant to distinguish humans from machines online.”

Keep reading

Federal Uniformity Sounds Good – Until Big Tech Writes the Rules

Big Tech is jamming preemption of state AI laws into the National Defense Authorization Act (NDAA) at the last minute, but it is unclear whether lawmakers are fully aware of what it would actually mean. If Congress prohibits states from implementing AI policies that would protect their citizens, it would have far-reaching consequences. True conservatives in Congress must uphold their pro-American values and refuse to support any preemption effort that would ultimately be a coronation of Big Tech as our country’s new rulers.

The United States is the dominant leader in AI on the global stage, and we are in a high-stakes race with adversaries – especially China – to maintain that advantage. We cannot afford to cut corners on oversight and safety while Big Tech develops AI systems at a rapid pace. States are best positioned to test thoughtful safeguards that address the most pressing concerns – from public safety to protecting children. The federal government, by contrast, is lagging behind.

States have been laboratories of democracy on every pressing issue of our time. AI should be no different. The federal government is behind the states in even thinking through the ramifications of AI, and Congress should allow the states to try to find effective policy solutions that address our most immediate concerns.

Preemption is a clear violation of the principle of federalism inherent in the 10th Amendment to the Constitution.

Additionally, this provision is a blatant cover for Big Tech. It allows Big Tech to continue to exploit kids, creators, and conservatives. This provision will not empower small businesses and entrepreneurs in AI because they simply don’t have $40 billion in funding to put toward artificial general intelligence (AGI) development and $100 million bonuses to hand out to potential employees.

They are already shut out of the industry by people like OpenAI CEO Sam Altman, who popularized the “patchwork” characterization of state policies that is now being used in smaller circles in support of preemption.

If we intend to outpace China on AI, we must abandon misguided proposals that undermine federalism. The federal government should focus on enacting strong, strategic measures that protect our national security and prevent U.S. technologies and advanced chips from ending up in the wrong hands.

Keep reading

GrapheneOS Quits France, Citing Unsafe Climate for Open Source Tech

GrapheneOS, the privacy-focused Android operating system, has ended all operations in France, saying the country is no longer a safe place for open source privacy projects.

Although French users will still be able to install and use the software, the project is moving every related service, including its website, forums, and discussion servers, outside French territory.

Until now, GrapheneOS used OVH, a hosting provider headquartered in France, for some of its infrastructure at the company’s Beauharnois data center. That setup is being dismantled.

The Mastodon, Discourse, and Matrix servers will operate from Toronto on a mix of local and shared systems. These changes are designed to remove any dependency on French service providers.

The developers said their systems do not collect or retain confidential user data and that no critical security infrastructure was ever stored in France. Because of that, the migration will not affect features such as update verification, digital signature checks, or downgrade protection.
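
GrapheneOS’s point here is that update security is enforced on the user’s device rather than by whoever hosts the files. As a rough illustration only, and not the project’s actual updater code, the sketch below shows the two checks the paragraph mentions: a signature verified against a pinned release key, and a version comparison that rejects downgrades. The key handling, version numbering, and the use of Python’s cryptography package are assumptions made for the example.

```python
# A minimal sketch of client-side update verification with downgrade
# protection. This is NOT GrapheneOS's actual updater; the key handling and
# version numbering below are assumptions made for illustration.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def verify_update(release_key, payload: bytes, signature: bytes,
                  update_version: int, installed_version: int) -> bool:
    """Accept an update only if its signature matches the pinned release key
    and its version is strictly newer than the installed one."""
    try:
        release_key.verify(signature, payload)   # raises if the signature is bad
    except InvalidSignature:
        return False
    return update_version > installed_version    # downgrade protection

# Demo: a locally generated key stands in for the pinned release key.
signing_key = Ed25519PrivateKey.generate()
release_key = signing_key.public_key()
payload = b"ota-image-bytes"
signature = signing_key.sign(payload)

print(verify_update(release_key, payload, signature, 2, 1))     # True: newer, valid
print(verify_update(release_key, payload, signature, 1, 1))     # False: downgrade
print(verify_update(release_key, b"tampered", signature, 2, 1))  # False: bad signature
```

Because checks like these run on the device against pinned keys, moving the download servers and community services to Toronto changes who serves the files, not whether they can be trusted.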

The decision also applies to travel and work policies. The project has told team members not to enter France, citing both personal safety concerns and the government’s endorsement of the European Union’s Chat Control proposal.

That measure would allow authorities to scan private communications for illegal material, something privacy developers see as incompatible with secure digital design.

Keep reading

2 U.S. citizens and 2 Chinese nationals accused of illegally exporting highly advanced Nvidia AI chips to China

Two Americans and two Chinese nationals have been arrested and charged after being accused of exporting Nvidia AI chips to the People’s Republic of China (PRC), violating sensitive export controls and threatening national security.

The Department of Justice (DOJ) announced that the two American citizens are 34-year-old Tampa, Florida resident Hon Ning “Mathew” Ho, who was born in Hong Kong, China, and is now a U.S. citizen, and 46-year-old Huntsville, Alabama resident Brian Curtis Raymond.

The two Chinese nationals arrested include Cham “Tony” Li, a resident of San Leandro, California, and Jing “Harry” Chen, who was living in Tampa, Florida, on an F-1 student visa.

“On Wednesday, November 19th, 2025, Ho and Chen were arrested and appeared in court in the Middle District of Florida, while Raymond was arrested and appeared in the Northern District of Alabama. Li was also arrested yesterday and is scheduled to appear today in the Northern District of California,” the DOJ wrote in a Thursday press release.

The highly advanced Graphics Processing Units (GPUs), which accommodate advanced artificial intelligence (AI) applications, have faced strict export controls as the PRC “seeks to become the world leader in AI by 2030 and seeks to use AI for its military modernization efforts and in connection with the design and testing of weapons of mass destruction and deployment of advanced AI surveillance tools,” according to the DOJ release.

According to the indictment, Ho, Raymond, Li, and Chen conspired to violate the export controls from September 2023 to November 2025 “by illegally exporting advanced GPUs to the PRC through Malaysia and Thailand.”

The conspirators allegedly attempted to conceal their actions through Janford Realter, LLC, a front company based in Tampa, Florida, which was “never involved in any real estate transactions” — despite its name.

“Raymond, through his Alabama-based electronics company, supplied NVIDIA GPUs to Ho and others for illegal export to the PRC as part of the conspiracy,” the release detailed.

The conspiracy involved four separate export attempts. The first two exports resulted in 400 NVIDIA A100 GPUs being exported to the PRC between October 2024 and January this year.

“The third and fourth exports to the PRC were disrupted by law enforcement and therefore not completed. These attempted exports related to ten Hewlett Packard Enterprise supercomputers containing NVIDIA H100 GPUs and 50 separate NVIDIA H200 GPUs.”

The release goes on to note that despite knowing licenses were required to export the GPUs to the PRC, “none of the conspirators ever sought or obtained a license for any of these exports. Instead, they lied about the intended destination of the GPUs to evade U.S. export controls.”

Keep reading

What Has the Government Done to Our Cars

The modern car is an abomination. These once glorious machines have been covered in so many needless hoses, cords, sensors, and plastic shields that a man cannot just open his hood and look at his engine. They are full of other invasive features due to persistent regulatory creep combined with the desire to find ways to charge more for every model. The primary culprits are gas mileage and emissions requirements—based on the hypocritical desire to constantly harangue us about energy use—and the competing demand that cars become ever safer, typified by insane government initiatives to get traffic deaths down to zero.

The root of the problem is that the best way to get better gas mileage is to make cars lighter, while the best way to make cars safer is to give them heavy frames. Thus, cars have been filled with devices seeking to square this circle, which inevitably makes them more expensive and harder to work on. On top of this, cars constantly become more “online” in nightmarish ways and thus easier for the government or other malicious actors to track, hack, and disable. Unless something changes, for the paranoid and curmudgeonly among us, the only solution will be to adopt an ethos like the Cubans’ and keep pre-2010 cars running for the rest of our lives.

There is an internet meme that says something like, “Car manuals used to tell you how to adjust valve lashes and now they tell you not to drink antifreeze.” While I don’t doubt the average American man has less skill at car repair than he did forty years ago, this has more to do with the cars than it does with us: the cars whose manuals covered valve lashes were designed to be user-serviceable. The car is supposed to be a great symbol of liberty and independence in America, but cars are now designed in ways that make you wholly dependent on others to keep them running, while being loaded up with new things to break. It is true that some features are popular and that people like good gas mileage, but consumers have also been given few choices as auto companies have been required to include endless add-ons. Plenty of little old cars get better gas mileage than modern SUVs, itself a category invented to skirt gas mileage regulations. Further, many safety features do not make cars meaningfully safer because most deadly car accidents are caused by catastrophic operator error and are not remediated by a myriad of minor features. What they do instead is make cars more annoying.

Gas mileage requirements are especially egregious, as they rely on a fifty-year-old scarcity mindset that didn’t anticipate the awesome increase in oil production in the United States and worldwide. Because safety and environmental busybodies are more motivated than normal people who like cars—and because the automotive companies like excuses to add expensive and complicated features—there has been almost no push-back against these changes to our cars. New manual transmissions are all but disappearing, which is framed as a consumer preference but also reflects the fact that modern automatic transmissions get slightly better gas mileage, making it hard to offer many models with a manual transmission while meeting MPG requirements. Of course, automatic transmissions are less repairable and far more reliant on computers, and when they go out the car is commonly permanently off the road, whereas replacing a clutch is normal maintenance and can be done at home if one is so inclined. All of this leaves you stuck with a newer car full of features you don’t need and may not want, as the used-car listings fill up with high-mileage automatics that no experienced buyer would touch for fear of sudden transmission death.

Keep reading

Gmail Explainer: How to Stop Google AI from Snooping Through Your Emails

Google has quietly started accessing Gmail users’ private emails and attachments to train its AI models, requiring manual opt-out to avoid participation. To make the process even trickier, Gmail users have to opt out in two separate places for the change to work. Follow these steps to protect your privacy from Google’s invasive AI endeavors.

Malwarebytes reports that Google has recently implemented changes that enable Gmail to access all private messages and attachments for the purpose of training its AI models. This means that unless users take action to opt out, their emails could be analyzed to improve Google’s AI assistants, such as Smart Compose or AI-generated replies.

The motivation behind this change is Google’s push to enhance Gmail’s features with the company’s Gemini AI, aiming to help users write emails more efficiently and manage their inboxes more effectively. To accomplish this, Google is utilizing real email content, including attachments, to train and refine its AI models. These settings are now reportedly switched on by default, rather than requiring explicit opt-in consent.

As a result, if users do not manually disable these settings, their private messages may be used for AI training without their knowledge. While Google assures strong privacy measures are in place, such as data anonymization and security during the AI training process, those handling sensitive or confidential information may find little comfort in these promises.

To fully opt out of Gmail’s AI training, users must change settings in two separate locations. This article features a guide and images for opting out on desktop, but the selections are very similar if accessing Gmail via the mobile app.

Keep reading

Is Global Technocracy Inevitable Or Dangerously Delusional?

The bewildering truth behind human technological enslavement is that it is impossible without the voluntary participation of the intended slaves. People must welcome technocracy into their lives in order for it to succeed. The populace has to believe, blindly, that they cannot live without it, or that authoritarianism by algorithmic consensus is “inevitable.”

For example, the average person living in a first world economy voluntarily carries a cell phone everywhere they go at all times without fail. To be without it, in their minds, is to be naked, at risk, unprepared and disconnected from civilization. I grew up in the 1980s and we did just fine without having a phone on our hip every moment of the day. Even now, I refuse to carry one.

Why? First, as most people should be aware by now (the Edward Snowden revelations left no doubt), a cell phone is a perfect technocratic device. It has multilayered tracking, using GPS, WiFi routers, and cell tower triangulation to follow your every step. Not only that, but it can be used to record your daily patterns, your habits, who your friends are, and where you were on any given day many months or years ago.
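
To make the tower-triangulation point concrete, here is a toy sketch of how a position can be recovered from nothing more than distance estimates to three towers at known locations. It is a simplified, hypothetical illustration (real networks rely on timing, signal strength, and far messier estimation), not a description of any carrier’s actual system; the coordinates and distances are made up for the example.

```python
# Toy 2D trilateration: recover a position from distances to three known
# points. A simplified, hypothetical illustration only; real cell networks
# use timing advance, signal strength, and far messier estimation.

def trilaterate(p1, r1, p2, r2, p3, r3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise leaves two linear equations.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Towers at known coordinates (km) and estimated distances to the handset.
print(trilaterate((0, 0), 5.831, (10, 0), 5.831, (5, 10), 7.0))
# -> approximately (5.0, 3.0): three distance estimates pin down one position.
```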

Then there are the backdoor functions hidden in app software that allow governments and corporations to access your cell’s microphone and camera, even when you think the device is shut off. The private details of your life could be recorded and collated. In a world where privacy is being declared “dead” by boasting technocrats, why help them out by carrying something that listens to everything you say and chronicles everything you do?

Keep reading

The Algorithm Accountability Act’s Threat to Free Speech

A new push in Congress is taking shape under the banner of “algorithmic accountability,” but its real effect would be to expand the government’s reach into online speech.

Senators John Curtis (R-UT) and Mark Kelly (D-AZ) have introduced the Algorithm Accountability Act, a bill that would rewrite Section 230 of the Communications Decency Act to remove liability protections from large, for-profit social media platforms whose recommendation systems are said to cause “harm.”

We obtained a copy of the bill for you here.

The proposal applies to any platform with more than a million users that relies on algorithms to sort or recommend content.

These companies would be required to meet a “duty of care” to prevent foreseeable bodily injury or death.

If a user or family member claims an algorithm contributed to such harm, the platform could be sued, losing the legal shield that has protected online speech for nearly three decades.

Although the bill’s authors describe it as a safety measure, the structure of the law would inevitably pressure platforms to suppress or downrank lawful content that might later be portrayed as dangerous.

Most major social networks already rely heavily on automated recommendation systems to organize and personalize information. Exposing them to lawsuits for what those systems display invites broad, quiet censorship under the guise of caution.

Keep reading

Future computers could grow their own memory – from mushrooms

Computers run on silicon and metal. Mushrooms grow in soil. Yet now, scientists are finding that one could stand in for the other.

Fungi might replace parts of the machines that shape our digital world. The idea sounds strange until you realize how intelligent and resilient these organisms are.

Mushrooms in computing

At The Ohio State University, researchers found that edible mushrooms such as shiitake could act like organic memory chips. When connected to circuits, the mushrooms stored and processed information like a living brain.

Study lead author John LaRocco is a research scientist in psychiatry at Ohio State’s College of Medicine.

“Being able to develop microchips that mimic actual neural activity means you don’t need a lot of power for standby or when the machine isn’t being used,” said LaRocco.

The fungal chips performed surprisingly well. Each could switch electrical states thousands of times per second with high accuracy.

These organic systems did not rely on costly rare-earth minerals or energy-intensive factories, which makes them an appealing alternative to traditional semiconductors.

Learning from nature

Fungi already form vast underground networks that pass signals between roots and trees. The researchers realized these same biological systems could be repurposed to store information.

The mycelium – the thread-like part of a fungus – responds to electrical pulses by changing its resistance. Those shifts act like memories.

In tests, mushrooms adjusted their conductivity when exposed to repeated voltage cycles. Their ability to change behavior after each signal mirrored how neurons in the brain learn.

Over time, the fungi “remembered” patterns of stimulation and became more stable in performance. That self-tuning nature could one day lead to energy-efficient devices that learn continuously, much like biological systems.

Testing mushrooms for computers

To explore this, the team grew shiitake and button mushrooms on organic materials such as wheat germ, hay, and farro seeds.

Once the fungal mats reached maturity, they were dried in sunlight to maintain shape and later sprayed with water to restore conductivity.

“We would connect electrical wires and probes at different points on the mushrooms because distinct parts of it have different electrical properties,” said LaRocco.

Each part responded differently to signals, showing that the internal structure of mushrooms influences how electricity flows.

At specific frequencies, the fungi displayed classic memory loops known as hysteresis curves, confirming their potential as memristors.
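
The memristor behaviour described here, a resistance that depends on the history of stimulation and shows up as a pinched hysteresis loop, can be pictured with a toy model. The sketch below is a generic, illustrative simulation with assumed parameter values; it is not the study’s measurement setup and says nothing about how the actual fungal devices are built.

```python
# Toy memristor model (illustrative only, not the fungal hardware from the
# study): an internal state integrates the charge that has passed through the
# device, and resistance depends on that state, so identical pulses meet a
# different resistance each time. All parameter values are assumptions.

R_ON, R_OFF = 100.0, 16_000.0    # fully-on / fully-off resistance (ohms)
Q_SWITCH = 2e-5                  # charge needed to switch fully (coulombs)

def resistance(w: float) -> float:
    """Resistance as a linear mix of the two limiting states."""
    return w * R_ON + (1.0 - w) * R_OFF

def apply_pulse(w: float, volts: float, seconds: float, dt: float = 1e-4) -> float:
    """Integrate the charge delivered by one voltage pulse and move the
    internal state w accordingly (clamped to [0, 1])."""
    for _ in range(round(seconds / dt)):
        current = volts / resistance(w)          # Ohm's law at the present state
        w = min(1.0, max(0.0, w + current * dt / Q_SWITCH))
    return w

w = 0.05                                         # start near the high-resistance state
for pulse in range(1, 6):
    w = apply_pulse(w, volts=2.0, seconds=0.01)
    print(f"after pulse {pulse}: {resistance(w):7.0f} ohms")

# Resistance drops with each identical pulse: the response depends on the
# device's stimulation history. Reversing the polarity drives it back toward
# R_OFF, and sweeping the voltage sinusoidally traces the pinched
# current-voltage loop (hysteresis) that identifies a memristor.
```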

Keep reading

WaPo Defends Data Centers—With Few Disclosures That Amazon Depends on Them

US electricity prices, you may have noticed, keep going up. And in some parts of the country, like here in the DC region, they’re soaring. In Virginia, for example, electricity rates are up 13% this year, an issue Democrats highlighted as they swept back into power in Richmond earlier this month.

Burgeoning electric bills also factored into Democrats’ November wins in New Jersey and Georgia. But let’s stick with Virginia for a moment, where energy-sucking data centers are so plentiful that if northern Virginia’s DC suburbs were to secede, the new country would have more data center capacity than China.

As a result of these data centers, this new country would likely suffer from crippling electric bills. “Wholesale electricity [now] costs as much as 267% more than it did five years ago in areas near data centers. That’s being passed on to customers,” read a recent Bloomberg subhead.

Keep reading