Data Collection Can Be Effective and Legal

MEMORANDUM FOR: The President
FROM: Veteran Intelligence Professionals for Sanity (VIPS)
SUBJECT: Data Collection Can Be Effective and Legal

Introduction

It’s an Artificial Conundrum

It is not necessary to make an end-run around the U.S. Constitution to thwart terrorism and other crimes.

Those claiming otherwise have been far from candid – especially since June 2013, when Edward Snowden revealed gross violations of the Fourth Amendment by NSA’s bulk electronic collection. U.S. citizens have been widely misled into believing that their Constitutional right to privacy had to yield to a superseding need to combat terrorism.

The choice was presented as an Either-Or conundrum. In what follows, we will show that this is a false choice. Rather, the “choice” can be a Both-And. In sum, all that is needed is to place advanced technology that has already been demonstrated into the hands of officials not driven by lust for a cushy retirement.

Sophisticated collection and processing technology that also protects the right to privacy has been available for decades, enabling highly efficient and discriminating collection. Despite that, top officials have opted for quasi-legal, cumbersome, ineffective – and wildly expensive – technology that has done little more than line the pockets of contractors and “old-friend” retirees.

U.S. officials have been caught lying under oath – with impunity – about the effectiveness of the intrusive, high price-tag technology they procured and implemented.

In the Annex to this Memo we briefly portray the illustrative behavior of one such senior official. We do so in the belief that a short case study may shed light on the apparent motivation of many senior officials who seem to take far too lightly their oath to defend and protect the Constitution of the United States.

We took the same oath. It has no expiration date.

Keep reading

Florida AG Subpoenas Medical Firms Over ‘Backdoor’ on China-Made Devices

Florida Attorney General James Uthmeier has subpoenaed two medical companies selling Chinese-made patient monitors over concerns that the devices could send patient data to China.

Uthmeier’s office stated in a press release that it had taken legal action against Contec Medical Systems, a China-based company known for making patient monitors, and Epsimed, a Miami-based company that resells Contec-made monitors under its own brand name.

The office alleged that Contec “concealed serious security problems” in its products, including a built-in “backdoor” that could “allow bad actors to manipulate data” on the devices without knowledge of either the patient or the provider, and programming that automatically sends patient information to an IP address that belongs to a university in China.

“Some of the most private, personal information” is going to China “without the consent, and in most cases, the awareness of the patient,” Uthmeier told The Epoch Times. “I think there’s a major consumer protection issue for Floridians, for Americans as a whole, and we’re not going to stand for it.”

Uthmeier’s office alleged that Contec and Epsimed may have violated a state law, the Deceptive and Unfair Trade Practices Act, by assuring customers of product quality when the products appear to fall far short of those assurances, given their security vulnerabilities. He threatened to pursue damages, civil penalties, and injunctive relief to protect consumers.

Contec Medical Systems is headquartered in Qinhuangdao, a port city located in northern China’s Hebei Province. It has an affiliate called Contec Medical Systems USA Inc. in Illinois to handle the U.S. market.

Keep reading

One of NHS’s biggest AI projects is halted after fears it used health data of 57 MILLION people without proper permissions

NHS England has paused a ground-breaking AI project designed to predict an individual’s risk of health conditions after concerns were raised that data from 57 million people was being used without the right permissions.

Foresight, which uses Meta‘s open-source AI model, Llama 2, was being tested by researchers at University College London and King’s College London as part of a national pilot scheme exploring how AI could be used to tailor healthcare plans for patients based on their medical history.

But the brakes were applied to the pioneering scheme after experts warned even anonymised records could contain enough information to identify individuals, The Observer reported.

A joint IT committee between the British Medical Association (BMA) and the Royal College of General Practitioners (RCGP) also said they had not been made aware that data collected for research into Covid was now being used to train the AI model.

The bodies have also accused the research consortium, led by Health Data Research UK, of failing to consult an advisory body of doctors before feeding the health data of tens of millions of patients into Foresight.

Both the BMA and the RCGP have asked NHS England to refer itself to the Information Commissioner over the matter.

Professor Kamila Hawthorne, chair of the RCGP, said the issue was one of ‘fostering patient trust’ that their data was not being used ‘beyond what they’ve given permission for.’

She said: ‘As data controllers, GPs take the management of their patients’ medical data very seriously, and we want to be sure data isn’t being used beyond its scope, in this case to train an AI programme.

Keep reading

OpenAI Is Ordered to Save Every ChatGPT Chat — Even the Ones You Delete

A federal court order requiring OpenAI to retain all ChatGPT conversations, including those users have deleted, has sparked strong concern among privacy advocates and added pressure to a growing legal battle over the use of copyrighted material in AI systems.

On May 13, US Magistrate Judge Ona T. Wang directed OpenAI to “preserve and segregate all output log data that would otherwise be deleted on a going-forward basis until further order of the Court.” Although the order was issued several weeks ago, it only came to wider attention this week as OpenAI began taking formal steps to challenge it.

The ruling stems from multiple lawsuits filed by media organizations, including The New York Times, that accuse OpenAI of unlawfully using their copyrighted content to train and operate ChatGPT.

In response, OpenAI submitted a filing urging US District Judge Sidney H. Stein to overturn what it described as a “sweeping, unprecedented order.”

The company argued that the directive forces it to ignore user choices about data deletion, jeopardizing the privacy of millions. OpenAI also pointed to a statement from The New York Times editorial board asserting that Americans “should be able to control what happens to their personal data.”

Keep reading

Opinion: The Most Terrifying Company in America Is Probably One You’ve Never Heard Of

Most Americans have never heard of Palantir. That’s by design. It doesn’t make phones or social platforms. It doesn’t beg for your data with bright buttons or discount codes. Rather, it just takes it. Quietly. Legally. Systematically. Palantir is a back-end beast, the silent spine of modern surveillance infrastructure.

Palantir’s influence isn’t hypothetical. It’s operational. From the battlefields of Ukraine to the precincts of Los Angeles, its software guides drone strikes, predicts crime, allocates police resources, and even helps governments decide which children might someday become “threats.” These aren’t sci-fi hypotheticals. They are pilot programs, already integrated, already scaling.

This software—Gotham, Foundry, and now its Artificial Intelligence Platform (AIP)—is designed to swallow everything: hospital records, welfare files, license plate scans, school roll calls, immigration logs and even your tweets. It stitches these fragments into something eerily complete—a unified view of you. With each data point, the image sharpens.

If Facebook turned people into products, Palantir turns them into probabilities. You’re not a user. You’re a variable—run through predictive models, flagged for anomalies, and judged in silence.

This is not just surveillance. It’s prediction. And that distinction matters: Surveillance watches. Prediction acts. It assigns probabilities. It flags anomalies. It escalates risk. And it trains bureaucrats and law enforcement to treat those algorithmic suspicions as fact. In short: the software decides, and people follow.

Keep reading

Privacy and hunger groups sue over USDA attempt to collect personal data of SNAP recipients

Privacy and hunger relief groups and a handful of people receiving food assistance benefits are suing the federal government over the Trump administration’s attempts to collect the personal information of millions of U.S. residents who use the Supplemental Nutrition Assistance Program.

By Rebecca Boone, The Associated Press

The lawsuit filed in Washington, D.C., on Thursday says the U.S. Department of Agriculture violated federal privacy laws when it ordered states and vendors to turn over five years of data about food assistance program applicants and enrollees, including their names, birth dates, personal addresses and Social Security numbers.

The lawsuit “seeks to ensure that the government is not exploiting our most vulnerable citizens by disregarding longstanding privacy protections,” National Student Legal Defense Network attorney Daniel Zibel wrote in the complaint. The Electronic Privacy Information Center and Mazon Inc.: A Jewish Response to Hunger joined the four food assistance recipients in bringing the lawsuit.

The White House did not immediately respond to a request for comment.

The Supplemental Nutrition Assistance Program, or SNAP, is a social safety net that serves more than 42 million people nationwide. Under the program formerly known as food stamps, the federal government pays for 100% of the food benefits but the states help cover the administrative costs. States also are responsible for determining whether people are eligible for the benefits, and for issuing the benefits to enrollees.

As a result, states have lots of highly personal financial, medical, housing, tax and other information about SNAP applicants and their dependents, according to the lawsuit.

President Donald Trump signed an executive order March 20 directing agencies to ensure “unfettered access to comprehensive data from all state programs” as part of the administration’s effort to stop “waste, fraud and abuse by eliminating information silos.”

Keep reading

Google Hit with Historic $1.375 Billion Settlement for Secretly Tracking People’s Movements, Private Searches, Voiceprints, and Facial Data

Texas Attorney General Ken Paxton has delivered a knockout punch to Google, securing a record-shattering $1.375 billion settlement for the tech giant’s covert surveillance of everyday Americans.

This staggering sum is nearly a billion dollars more than what 40 states combined were able to wring from Google for similar offenses — a testament to Paxton’s unrelenting crusade against Big Tech tyranny.

In 2022, Paxton filed a 44-page lawsuit against Google, accusing the multibillion-dollar corporation of “systematically misleading” and “deceiving” Texans for years in order to secretly track their every move — and rake in obscene profits from it.

The lawsuit lays out a damning case against Google, alleging that the tech behemoth “covertly harvested” users’ precise geolocation data, voiceprints, and even facial geometry — all while leading users to believe they had turned off such invasive tracking.

According to the lawsuit, Google duped its users by creating a maze of confusing and misleading settings, falsely telling Texans they could protect their privacy by turning off features like “Location History.” But in reality, Google was still logging user data using obscure and hard-to-find settings like “Web & App Activity,” storing data in shadowy internal databases with Orwellian names like “Footprints.”

Keep reading

NIH To Build Massive Health Data Platform Linking Health Records, Genomic Profiles, and Smartwatch Data for Medical Research

The National Institutes of Health is quietly assembling a vast digital mosaic of Americans’ private medical histories, pulling sensitive data from both government-run health systems and commercial sources to support autism research tied to Health and Human Services Secretary Robert F. Kennedy Jr.’s latest project. The new scheme involves a sweeping plan to integrate diverse streams of health data into a single platform, raising significant concerns about privacy, oversight, and long-term use.

According to NIH Director Dr. Jay Bhattacharya, the data aggregation includes pharmacy transactions, insurance claims, clinical test results, and even personal metrics collected from wearable tech such as fitness trackers and smartwatches.

Health information from the Department of Veterans Affairs and the Indian Health Service is also being funneled in, creating a massive, centralized repository with a wide lens on the US population.

As Bhattacharya told agency advisers on Monday, the objective is to eliminate the fragmentation that currently limits access to existing health data sets. He said the new system would cut down on redundancies and make it easier for researchers to conduct large-scale analysis.

“The idea of the platform is that the existing data resources are often fragmented and difficult to obtain. The NIH itself will often pay multiple times for the same data resource. Even data resources that are within the federal government are difficult to obtain,” he said.

Keep reading

Report: AI Company DeepSeek ‘Funnels’ American User Data To Red China

The Chinese artificial intelligence (AI) company DeepSeek is allegedly siphoning American user data to China’s communist government, according to a new congressional report.

Released on Wednesday by the House Select Committee on the Chinese Communist Party (CCP), the 16-page analysis contends that the China-based AI firm “collects detailed user data, which it transmits via backend infrastructure that is connected to China Mobile.”

A state-owned telecommunications giant, China Mobile was flagged by the Pentagon earlier this year for having ties to Beijing’s military. In 2019, the Federal Communications Commission (FCC) prohibited the firm from operating within the U.S. over concerns that “unauthorized access to customer … data could create irreparable damage to U.S. national security.”

The FCC subsequently added China Mobile to its list of “national security threats” in 2022, according to ABC News.

“While the extent of data transmission remains unconfirmed, DeepSeek’s integration with China Mobile infrastructure raises serious concerns about potential foreign access to Americans’ private information,” the House analysis reads. “By relying on China Mobile’s infrastructure, DeepSeek ensures that Americans’ data is stored and transmitted through networks controlled by the Chinese government.”

Among the data DeepSeek reportedly collects from users who utilize its chatbot function are their “chat history, device details, and even the way a person types,” according to the report. The House committee cited DeepSeek’s privacy policy, which discloses that the company stores the information it gathers from users “in secure servers located in the People’s Republic of China.”

It’s worth noting that the CCP passed a so-called “national intelligence law” in 2017 granting the communist government access to data held by businesses operating in China. As noted by the Department of Homeland Security, this law “compels all PRC firms and entities to support, assist, and cooperate with the PRC intelligence services, creating a legal obligation for those entities to turn over data collected abroad and domestically to the PRC.”

Keep reading

Which AI Chatbots Collect The Most Data About You?

The harbinger of the AI revolution, ChatGPT, remains the most popular AI tool on the market, with more than 200 million weekly active users.

But amongst all its competitors, which AI chatbots are collecting the most user data? And why does that matter?

Visual Capitalist’s Marcus Lu visualizes data from Surfshark which identified the most popular AI chatbots and analyzed their privacy details on the Apple App Store.

In first place, Google’s Gemini (released March 2023) collects 22 different data points across 10 categories from its users.

Data collected ranges from general diagnostics (which all bots in this study collect) to access to contacts (which no other identified bot collects).

xAI’s Grok (released November 2023) collects the fewest unique data points (7).

China’s DeepSeek (released January 2025) sits comfortably in the middle of the pack at 11 data points.

The kind of data collected by each of these AI tools varies. All of them collected general diagnostics information. However, only Gemini and Perplexity look at purchases.

And all of them except Perplexity.ai and Grok collect user content.

Keep reading