Google Partners with the Pentagon to Sell Your Data

There has always been a convenient belief that Big Tech operates independently from government, as if the data you store, search, and upload exists in some neutral corporate space. That illusion is breaking down rapidly as the lines between Silicon Valley and Washington disappear in real time.

Google has now entered into a classified agreement with the Pentagon allowing its artificial intelligence systems to be used for “any lawful government purpose,” which is a phrase that sounds benign until you understand what it actually means in practice.

This is not a narrow contract tied to a single project. It opens the door for integration into mission planning, intelligence analysis, and even weapons targeting systems operating on classified networks, and once those systems are embedded, the distinction between commercial technology and state infrastructure effectively disappears.

At the same time, Google does not retain control over how that technology is ultimately used, because under the terms being reported, the company has no ability to veto lawful government operations, meaning once access is granted, the downstream application is no longer in its hands. Bear in mind that Google has been collecting data on everyone and everything for decades: Google Maps, Google Search, Google Photos, Google Drive, Gmail, and more.

This is where the narrative people have been told begins to collapse. For years, the assumption was that your data sat within a corporate ecosystem governed by terms of service and internal policies. What is now being constructed is something entirely different: a shared infrastructure where private data, artificial intelligence, and state power intersect.


Federal In-car Monitoring Mandate Expands Data Collection and Control Powers

A federal mandate rooted in a 2021 bipartisan law is set to reshape every new car sold in the United States, and potentially the boundaries of personal mobility itself. By the 2027 model year, vehicles will be required to include systems that monitor drivers for impairment and can intervene if necessary. Supporters frame it as a safety breakthrough. Critics call it a “kill switch.”

The policy has broad political backing. It passed with support from both Democrats and Republicans and has remained intact across administrations, including under the recent Consolidated Appropriations Act, which preserved both funding and the mandate. In January, that support was tested when the House voted down an amendment that would have stripped funding for the requirement, effectively keeping the rule on track.

One of the most persistent critics is Representative Thomas Massie (R-Ky.), who continues to lead opposition alongside a small group of lawmakers. Massie warns that Congress is normalizing continuous monitoring inside privately owned vehicles, a shift he argues carries implications far beyond roadway safety.

The Law

The requirement comes from the Infrastructure Investment and Jobs Act, specifically Section 24220. The law directs regulators to establish a safety standard for what it calls “advanced impaired driving prevention technology.”

The statute defines that technology as a system that can:

(i) passively monitor the performance of a driver of a motor vehicle to accurately identify whether that driver may be impaired; and
(ii) prevent or limit motor vehicle operation if an impairment is detected;

It also allows for systems that can “passively and accurately detect whether the blood alcohol concentration of a driver … is equal to or greater than” the legal limit, with authority to intervene. The law sets the objective, not the method.

It also cites research from the Insurance Institute for Highway Safety (IIHS) estimating that such technology “can prevent more than 9,400 alcohol-impaired driving fatalities annually.”

The mandate and its funding were reaffirmed in early 2026, when President Donald Trump signed the Consolidated Appropriations Act, ensuring the requirement remains in force.

From Safety Feature to Standard Equipment

Driver monitoring is not new. It is already embedded in many modern vehicles, especially those equipped with advanced driver-assistance systems.

General Motors says its Super Cruise system “tracks the driver’s head position and/or the driver’s gaze” and alerts the driver when attention drifts. Chevrolet describes the system as using a camera mounted on the steering column to track “head and eye movement.”

Similarly, Ford’s BlueCruise uses “a driver-facing camera and infrared lighting” to confirm that the driver remains focused on the road. Subaru’s DriverFocus system uses comparable technology, capable of alerting occupants if the driver appears drowsy or distracted.

Today, these systems primarily issue warnings. Under the federal rule, similar technology could become standard in every new vehicle. It would not simply prompt the driver. It could help determine whether the vehicle should start or continue operating.

The National Highway Traffic Safety Administration (NHTSA) describes the current landscape in similar terms. Its 2026 report to Congress explains that indirect systems infer driver state “through camera-based monitoring and vehicle inputs.” It also notes that most current systems are designed to detect “drowsiness, inattention, and sudden sickness,” not alcohol impairment.

That distinction matters. A system designed to detect distraction is not automatically capable of reliably identifying intoxication. Yet the mandate moves in that direction, turning optional in-cabin monitoring into a required compliance system.


Pokémon Go — The Largest Mapped Data Collection Ploy in History

When Pokémon Go was released, it appeared to be a harmless game encouraging people to go outside and explore. Beneath that surface, however, was a far more sophisticated system that directed human movement into the specific locations where data was needed most, turning millions of users into mobile data collectors. The placement of Pokémon, Gyms, and PokéStops was not random: it was concentrated around landmarks, businesses, and dense urban corridors. Players were repeatedly funneled into high-value mapping zones, often returning to the same locations over and over again, capturing them from multiple angles, at different times of day, and under varying conditions, which is exactly how high-quality spatial datasets are built.

For many reading this, particularly those who never played the game, it is important to understand what this actually looked like in practice. This was not some passive background process: it required people to physically walk through neighborhoods, parks, shopping districts, and even residential areas while holding up their phones, actively scanning their surroundings to “catch” virtual creatures that did not exist. The game encouraged users to point their cameras at real-world objects, move around them, and interact with the environment. All the while, the system was capturing detailed imagery not just of public landmarks but also of surrounding areas, including streets, entryways, and private homes, all embedded in what appeared to be a simple entertainment experience.


Massachusetts Agrees to Delete Data From App It ‘Secretly Installed’ During Pandemic

Massachusetts officials have agreed to delete data from a contact tracing application that people said was installed on their phones without their permission during the COVID-19 pandemic.

Under a settlement agreement approved by a federal judge on March 31, the Massachusetts Department of Public Health “shall (a) destroy any Primary Data in the Department’s possession, custody, and control, which the Department, exercising all due diligence, has located and … that was made available to the Department from the COVID Exposure Notification Setting on Android Devices; and (b) certify in writing to Class Counsel that such data has been destroyed and will not be provided to any third party.”

The state’s health commissioner also promised not to have data collecting applications installed on people’s phones without their permission for five years.

The settlement came in a case brought by plaintiffs who said the app in question, known as MassNotify v.3 or Exposure Notification Settings Feature-MA, was “secretly installed” on their phones without their permission.

American Institute of Economic Research senior fellow Robert Wright, who lives in Massachusetts, said the app was downloaded onto his Android phone around July 1, 2021, without his knowledge. Johnny Kula, a New Hampshire resident who travels to Massachusetts on a daily basis for work, also said he discovered the app on his phone around the same time, and that it was back on the phone later in 2021 after he uninstalled it.

The plaintiffs’ claims echoed reviews from app store users complaining that they had not downloaded the app, yet it appeared on their phones. The app, which allowed people to report that they had tested positive for COVID-19 and alerted others who had recently been in close proximity to them, was downloaded more than one million times, according to court filings. Similar applications were developed by at least 24 other states.


FBI Resumes Buying Americans’ Location Data Without Warrants

The FBI is buying Americans’ location data again. Director Kash Patel acknowledged as much to lawmakers on Wednesday, confirming what we already knew: the agency has resumed purchasing commercial surveillance data, including detailed location histories, from data brokers.

The brokers feeding that data pipeline source much of it from phone apps and games that people use daily without realizing they’re being tracked.

By the time a precise location record reaches a federal agency, it may have originated from a weather app or a mobile game, passed through an advertising middleman, and been packaged for resale, with the person who generated it never consulted or notified.

Senator Ron Wyden asked Patel directly whether the FBI would commit to not buying Americans’ location data without a warrant. Patel declined. The agency “uses all tools…to do our mission,” he told the committee.

He followed up by confirming that “we do purchase commercially available information that is consistent with the Constitution and the laws under the Electronic Communications Privacy Act,” adding that it “has led to some valuable intelligence for us.”

Wyden called that arrangement exactly what it is: the government buying what it cannot legally seize. Purchasing information on Americans without a warrant is “an outrageous end-run around the Fourth Amendment,” he said, referring to the constitutional protection against unreasonable searches and seizures.

The workaround is not unique to the FBI. Federal agencies are generally required to convince a judge that probable cause exists before demanding private records from a tech or phone company.

The commercial data market offers a way around that requirement entirely. Agencies simply purchase what they would otherwise need a warrant to obtain, creating a market for data grabbing and exploiting a legal gap that courts have not yet addressed.

Wyden and other lawmakers introduced the Government Surveillance Reform Act last week, which would require a court-authorized warrant before any federal agency can purchase Americans’ data from brokers. The bill is bipartisan and bicameral. Without it, the gap that lets agencies buy their way around the Fourth Amendment remains open.


FTC Says Companies Can Collect Kids’ Personal Data, As Long As It’s Called “Age Verification”

The FTC just told companies they can collect children’s personal data without parental consent, as long as it’s for “age verification.”

That’s the practical effect of a policy statement the agency issued this week. Under COPPA, websites collecting data on kids under 13 generally need verifiable parental consent first. The FTC’s new statement carves out an exception: gather whatever personal information you need to verify someone’s age, and the Commission won’t come after you for it.

The agency calls this child protection. The infrastructure it’s enabling looks different.

Christopher Mufarrige, director of the FTC’s Bureau of Consumer Protection, said “Age verification technologies are some of the most child-protective technologies to emerge in decades,” and framed the announcement as a tool for parents.

What the statement actually does is green-light personal data collection from minors, on the theory that knowing someone’s age requires knowing who they are first.

The exemption is conditional. To avoid enforcement, sites must delete age verification data “promptly” after use, restrict third-party sharing to vendors with adequate security assurances, post clear notices about what they’re collecting, and use methods likely to produce “reasonably accurate” results. These requirements are unverifiable by the people whose data gets collected, and enforced by an agency that just announced it won’t enforce.

COPPA supposedly exists precisely because children’s personal data is sensitive and companies can’t be trusted to protect it without legal pressure.

The FTC’s new exemption uses that same sensitive data as the price of admission for age verification, then steps back from enforcement. The agency is weakening the law’s protections in order to expand the infrastructure that the law was supposedly designed to regulate.


Palantir, Fractal And Your Personal Data Privacy – Get used to being used, because YOU are the product

Who controls the data the government collected from you for a generation?

Your insurance company collected data on your driving – so did your Lexus – who owns that data?

You told your doctor about controlled substances you used – and now it gets brought up in an interview.

If you can’t exclude someone from using your data, then you don’t control it. That means you really don’t own it. It’s that simple.

What does “own” mean here? Let’s define the terms.

Owning the data means you can do anything you want with it – share it, sell it, mine it or build an A.I. language model with it.

From birth until the last Social Security check gets cashed, your data is collected by federal and state agencies, corporations and of course the internet.

Your teen daughter puts every waking moment on Facebook or Instagram – so who owns those hundreds of images?

TSA PreCheck, Medicare/Medicaid, Social Security, government or military retirement, TRICARE, veterans hospitals, and of course the IRS all gather more data about every citizen than has ever been gathered in the history of mankind.

Each agency gathers different data, at different times, for slightly different purposes. And those purposes may change over time.

Who owns the rights to that data?

It’s a far stickier question than you think.

The knee-jerk response is that the government owns the data. They collected it for their purposes, so it’s theirs.

The government will certainly say so.


Thailand – A Case Study for Biometric Data Control

Thailand has become a test case for the use of biometric data in every facet of life. Facial recognition data is required for any single transfer above 50,000 baht (around $1,580), daily transfers above 200,000 baht, and any international transfers from personal accounts. All major Thai banks, including Bangkok Bank, Kasikorn (KBank), SCB, Krungthai, and Krungsri, require customers to submit biometric data, and the Bank of Thailand (BOT) provides the general guidelines these banks must follow.

It may begin with banking and documentation, but the ultimate goal is to develop digital IDs that are stored on a centralized database. The board of Thailand’s National Broadcasting and Telecommunications Commission (NBTC) proposed that users must submit biometric data to register SIM cards. The rule went into effect in August and applies to everyone in Thailand, including tourists.

The Thai Ministry of Public Health (MOPH), the Thai Red Cross Society, and the National Science and Technology Development Agency (NSTDA) have implemented the use of biometric data to track undocumented persons. Health agencies claim the technology can identify the spread of disease and assist in providing humanitarian aid and medical services. The MOPH claims the technology is 99.75% accurate. According to the Department of Labour’s Bureau of Alien Workers Administration, over 1 million undocumented migrants were in the nation as of July 2025.

“The application of biometric technology not only improves healthcare, disease prevention and control, medical services, and humanitarian aid with accuracy and inclusivity, but also reflects the protection of human rights and dignity of undocumented people in Thailand. It also creates opportunities for education and research by Thai public health professionals to develop further benefits for the general population,” Health Minister Somsak Thepsuthin stated.

The Thai Red Cross Society is a branch of the global Red Cross movement. Thailand’s Personal Data Protection Act (PDPA) promises that all personal data will be securely protected, yet the agencies involved have already begun sharing data with international organizations.


Comprehensive data privacy laws go into effect in 8 more states this year

This year, comprehensive privacy laws are going into effect in eight states to regulate how businesses handle digital information and to give consumers more protections over their personal data.

The laws in Delaware, Iowa, Minnesota, Nebraska, New Hampshire, New Jersey and Tennessee have taken effect already this year, according to a database from the International Association of Privacy Professionals’ Westin Research Center. Maryland’s privacy law, signed by Democratic Gov. Wes Moore last year, will go into effect Oct. 1.

Privacy laws enacted in Indiana, Kentucky and Rhode Island will go into effect next year.

Several other states are considering comprehensive privacy bills during this year’s legislative sessions. They include Massachusetts, Michigan, North Carolina, Pennsylvania and Wisconsin.

When a person visits a website, applies to a job or logs into an online portal, they may be sharing their personal information. Comprehensive privacy laws can apply to a wide range of companies that participate in this kind of data collection.

These laws generally include two types of provisions — those related to consumer rights and those that establish business obligations, according to the association.

Under each of the new laws, consumers have the right to control when and how their data is collected and shared. Some of those provisions include the right to delete data from a company’s database, the ability to opt out of sharing sensitive information and the right to prohibit a company from selling their data.

The new measures also require businesses to ask consumers if they want to opt in to data collection. In some states, businesses are required to complete consumer data risk assessments and identify ways in which discrimination could take place. Some companies also may be required to limit how consumer data is processed.


Technocrat Sweep: US Health Officials, Tech Executives To Launch Data-Sharing Plan

Why would Technocrats care about your health data? When they see the public as a herd of cattle, they naturally move to “manage the herd.” RFK, Jr. earlier bragged that he wants all citizens to don wearable medical devices within four years, to collect mountains of data. This initiative is headed by Amy Gleason, the Administrator of DOGE.

Amy Gleason worked at the predecessor of DOGE from 2018 to 2021 during the first Trump administration, where she played a key role on the White House Coronavirus Task Force’s data team managing critical pandemic data. She was named an Obama-era “Champion of Change” for her work in patient advocacy and precision medicine. She has emerged as a key Technocrat through her association with Elon Musk. Court records show that it was Amy Gleason, not Musk, who was formally in charge of DOGE.
