The Feds Are Investing in Wearable Health Trackers. That Could Put Your Private Data at Risk.

By gathering continuous data about sleep, heart rate, and physical activity, biowearable devices can give individuals more control over their well-being. But they also create a detailed digital record of our daily lives—one that the federal government may soon be able to access readily.

Consider this scenario.

You’ve recently received a government-subsidized biowearable. Accordingly, the authorities now know when you’re sleeping, because the device reports your sleep cycle, location, and daily movements in real time to a cloud server accessible through a legal process. It knows when you’re home. It knows when you leave.

Those data are then obtained by an FBI field office (either through direct purchase or, if necessary, a legal process), because a federal prosecutor has decided that your criticism of immigration enforcement operations and your social media posts supporting protesters against Immigration and Customs Enforcement constitute “incitement to violence” against federal agents. Under the Trump administration’s elastic (and legally dubious) domestic terrorism definitions and designations, that is enough to open a criminal investigation.

And because the government has known for weeks when you’re at home sleeping, it knows exactly when to break down your door.

That scenario may sound far-fetched, but it is getting closer to reality. In March, the Department of Health and Human Services (HHS) announced that the Advanced Research Projects Agency for Health (ARPA-H) would begin investing in new biowearable technologies through a program it called Delphi, after the ancient Greek sanctuary where the maxim “know thyself” was inscribed. It’s a fitting name for a program designed to help people understand their bodies, but it also raises an uncomfortable question: Who else might come to know them just as well?

The program aims to develop biosensors capable of continuously monitoring cytokines (cellular inflammation markers) and hormone levels, going substantially beyond what current wearables can detect. Funding will be determined on a competitive basis as private-sector stakeholders submit proposals; no specific appropriation has been announced.

It remains unclear why this taxpayer funding is necessary in a field that is already thriving. The global wearables market was valued at roughly $43 billion in 2024 and is projected to exceed $168 billion by 2030.

Devices worn on the wrist, finger, or skin can already monitor heart rates, blood oxygen levels, sleep patterns, physical activity, and—in the case of continuous glucose monitors—blood sugar levels in real time. Some smartwatches can even conduct electrocardiograms capable of detecting irregular heart rhythms, such as atrial fibrillation.

Until recently, people could access most of this information only during periodic visits to a clinic or hospital. Biowearables now enable people to monitor many of these signals continuously in everyday life.

Keep reading

White House AI Framework Pushes Age Verification ID Mandate

The White House has published a National AI Legislative Framework, a set of recommendations to Congress intended to govern artificial intelligence with a single uniform standard rather than, as the document puts it, “a patchwork of conflicting state laws.”

The administration wants federal law to preempt the states. That part is straightforward. What the framework actually proposes is less straightforward.

Alongside a genuine free speech provision, the document contains age verification mandates, chat surveillance requirements, national security carve-outs that would tighten the relationship between AI companies and federal intelligence agencies, and an expansion of the TAKE IT DOWN Act, a law that we have already flagged for lacking adequate safeguards against censorship.

The White House is presenting all of this as part of the same coherent package.

Start with the child protection section: Congress should establish “commercially reasonable, privacy protective, age-assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors.” Age verification on AI platforms. The framework calls these requirements “privacy protective.” They are not.

There is no version of meaningful age verification that doesn’t require collecting sensitive personal data, and there is no version of collecting sensitive personal data at scale that isn’t a breach waiting to happen.

The only tools platforms have are identity-based checks (government IDs, biometric scans, credit card data, or third-party verification services) or biometric estimation.

The only way to prove that someone is old enough to use a site is to collect personal data about who they are.

In October 2025, Discord identified 70,000 users globally who potentially had their photo IDs exposed to hackers.

Keep reading

Meta is Ending Instagram Direct Message End-to-End Encryption

Meta is quietly dismantling one of its few genuine privacy commitments. Starting May 8, end-to-end encryption for Instagram direct messages disappears, taking with it the one technical guarantee that kept those conversations private from Meta itself.

“If you have chats that are impacted by this change, you will see instructions on how you can download any media or messages you may want to keep,” the company said in a help document, framing the loss of message privacy as a data export problem. Collect your things, the walls are coming down.

The feature being removed was never universal anyway. End-to-end encryption for Instagram DMs had been available only in certain regions, not enabled by default, since Meta began testing it in 2021 as part of what CEO Mark Zuckerberg called his “privacy-focused vision for social networking.”

That vision apparently has an expiration date. Meta also made encrypted DMs available to all adult users in Ukraine and Russia in February 2022, weeks after the Russian invasion began. That access, too, is ending.

The timing is revealing. TikTok told the BBC last week that it has no plans to bring end-to-end encryption to its DMs, arguing that privacy makes users less safe. Meta is now arriving at the same destination from a different direction.

The stakes are straightforward. End-to-end encryption means only the people in a conversation can read it, a technical lock that excludes the platform, third parties, and anyone who might later obtain a warrant.

When that lock disappears, Meta and its employees can read Instagram DMs, law enforcement can subpoena them, and advertisers may eventually benefit from what gets learned.

Instagram users who relied on encrypted DMs have until May 8 to decide what to archive. After that, their private conversations are Meta’s to read.

Keep reading

Britain had a meltdown when China hacked voter files, but U.S. intel kept it secret in America

The United States expressed outrage when Great Britain revealed two years ago that its voter registration databases had been hacked by China, in what became a global scandal. But it turns out U.S. intelligence harbored its own secret at the time, having known since 2020 that Beijing had also gained access to American voter registration data, according to documents reviewed by Just the News and interviews with officials with direct knowledge.

“[Redacted] Chinese intelligence officials analyzed multiple U.S. states’ [Redacted] election voter registration data, [Redacted] to conduct public opinion analysis on the 2020 US general election,” stated a once highly classified April 2020 National Intelligence Council memo titled “Cyber Operations Enabling Expansive Authoritarianism.”

You can read that document here.

NICM-Declassified-Cyber-Operations-Enabling-Expansive-Digital-Authoritarianism-20200407–2022.pdf

That memo, heavily redacted and quietly declassified by the Biden administration two years after it was written, has escaped most public notice.

That means that six years later, the U.S. intelligence community has yet to fully inform the American people or Congress about the breadth of evidence it possesses of China’s actions, how Beijing got the data, and what operations Beijing has taken or contemplated.

The gap in public knowledge is particularly politically sensitive as the Senate this week debates a new election security bill that is a top priority for President Donald Trump. Officials told Just the News that Director of National Intelligence Tulsi Gabbard and CIA Director John Ratcliffe are working to declassify a potentially explosive tranche of documents showing what China did, and who in U.S. government knew and when.

The secrecy surrounding China’s access to voter registration has been so persistent that even Republican National Committee Chairman Joe Gruters, President Donald Trump’s point man for the 2026 mid-term elections, said he was unaware of the intelligence. “What’s crazy is the fact that China has access to these voter rolls, but we don’t,” Gruters told the John Solomon Reports podcast in an episode set to air Tuesday.

Keep reading

AI Can Now Unmask Anonymous Internet Users, New Study Finds

It looks like AI can now unmask anonymous accounts across the internet. That’s according to a new study by Simon Lermen (MATS), Daniel Paleka (ETH Zurich), Joshua Swanson (ETH Zurich), Michael Aerni (ETH Zurich), Nicholas Carlini (Anthropic), and Florian Tramèr (ETH Zurich), published on arXiv.

In the paper, “Large-Scale Online Deanonymization with LLMs,” the researchers show that modern large language models (LLMs) can re-identify people behind pseudonymous online accounts at a scale and accuracy that far surpass previous techniques.

The core contribution is an automated deanonymization pipeline powered by LLMs, according to the new study. Instead of relying on structured datasets or hand-engineered features—like earlier attacks on the Netflix Prize dataset—the system works directly on raw, unstructured text.

Given posts, comments, or interview transcripts written under a pseudonym, the pipeline extracts identity-relevant signals, searches for likely matches using semantic embeddings, and then uses higher-level reasoning to verify the most promising candidates while filtering out false positives. The result is a scalable attack that mirrors—and in some cases exceeds—the effectiveness of a dedicated human investigator.
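As a rough illustration of the matching step only, and assuming nothing about the paper’s actual implementation, here is a toy sketch in which bag-of-words cosine similarity stands in for the semantic embeddings the researchers use, and a simple ranking stands in for the LLM verification stage (the example data is invented):

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Crude stand-in for a semantic embedding: a word-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_candidates(pseudonymous_posts: str, profiles: dict[str, str]) -> list[tuple[str, float]]:
    """Score each candidate real-world profile against the pseudonymous text."""
    query = embed(pseudonymous_posts)
    scores = [(name, cosine(query, embed(text))) for name, text in profiles.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Hypothetical example data:
posts = "shipped a rust compiler patch and hiking near zurich on weekends"
profiles = {
    "Profile A": "systems engineer in zurich rust contributor and avid hiker",
    "Profile B": "pastry chef in lyon posts mostly about sourdough baking",
}
ranked = rank_candidates(posts, profiles)
print(ranked[0][0])  # the profile sharing the most distinctive vocabulary ranks first
```

The real pipeline replaces both halves of this sketch with far stronger components: learned embeddings that capture meaning rather than word overlap, and LLM reasoning over the top candidates to weed out false positives.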

To evaluate their approach, the researchers constructed three datasets with known ground truth. The first links pseudonymous Hacker News users to real-world LinkedIn profiles, relying on cross-platform clues embedded in public text. The second matches users across movie discussion communities on Reddit. The third takes a single Reddit user’s history, splits it into two time-separated profiles, and tests whether the system can reconnect them.

Across all three settings, LLM-based methods dramatically outperformed classical baselines, which often achieved near-zero recall.

The headline numbers are striking. In some experiments, the system achieved up to 68% recall at 90% precision—meaning it correctly identified a substantial portion of targets while keeping false accusations low. Even when matching temporally split Reddit accounts separated by a year, performance remained strong. In contrast, traditional non-LLM approaches struggled to produce meaningful matches. The findings suggest that advances in reasoning and representation learning have transformed deanonymization from a niche, data-hungry attack into a broadly applicable capability.
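For readers unfamiliar with the metrics, a quick worked example clarifies what “68% recall at 90% precision” means; the counts below are hypothetical, chosen only to reproduce those rates:

```python
# Hypothetical counts illustrating the two metrics.
matches_made = 100   # accounts the system claims to have identified
correct = 90         # of those claims, how many name the right person
true_targets = 132   # accounts that actually had a findable real identity

precision = correct / matches_made  # fraction of claims that are right: 0.90
recall = correct / true_targets     # fraction of targets actually found: ~0.68
```

High precision means few false accusations; high recall means few targets escape. The striking part of the study is achieving both at once on raw, unstructured text.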

Keep reading

The App Store Accountability Act Is A Privacy Nightmare Disguised As Child Protection

Washington has discovered a familiar political trick: wrap a flawed policy in the language of protecting children and hope nobody reads the fine print. The latest example is the App Store Accountability Act, a bill championed by lawmakers who appear eager to regulate the internet without understanding how it actually works.

Supporters insist the legislation will protect kids online. In reality, it risks undermining privacy, violating constitutional protections, and creating a cybersecurity disaster in the process.

And remarkably, Congress is pushing forward with this even though federal courts have already signaled that this exact regulatory model is unconstitutional.

The App Store Accountability Act would require app stores to verify the ages of every user and share age information with app developers. On paper, that sounds straightforward. In practice, it would force companies to collect massive amounts of sensitive personal data simply to download everyday apps.

Want to download a weather app? Verify your age.

Want to install a calculator? Verify your age.

Want to read the news? Verify your age.

The practical result is obvious: app stores would be compelled to gather highly sensitive identity data on tens of millions of Americans and then distribute that information to countless third-party developers.

This could be one of the largest digital identity honeypots ever conceived.

Security experts have been warning about this for months. In fact, 419 cybersecurity and privacy academics from 30 countries recently signed an open letter warning that large-scale age verification systems are “dangerous and socially unacceptable” because they create enormous new attack surfaces for hackers and data thieves.

The logic is simple. If every app download requires age verification, that means sensitive identity data must be stored, transmitted, and accessed across thousands of services. Instead of limiting the spread of personal information, the bill effectively multiplies it.

For cybercriminals, it would be a dream target.

Keep reading

UK Parliament Plans ISP Blocking and Age Verification Powers

If you wanted a case study in how modern democracies widen state oversight step by step, Britain has offered a clear example. On March 9, two major surveillance-related bills advanced through Parliament, each pointing toward broader government authority, reduced personal privacy, and tighter limits on protest activity.

These measures advanced through procedural votes and technical amendments that sounded administrative, yet carry consequences for how millions of people use the internet and exercise civic rights.

The main legislative action unfolded in the House of Commons during debate on the Children’s Wellbeing and Schools Bill. Members of Parliament actually rejected amendments from the House of Lords that would have required age verification for VPNs and certain user-to-user services.

But don’t get too excited. Replacement amendments approved by MPs would grant significant new authority to the state. The powers allow the government to require internet service providers to block or restrict children’s access to specific online platforms, impose time-of-day limits on when services can be used, and mandate age verification across nearly any platform that enables users to post or share content.

Keep reading

Roblox Introduces AI System That Rewrites Users’ Chat Messages in Real Time

Roblox has started rewriting its users’ chat messages in real time using AI, altering what people actually typed into something the platform considers more appropriate.

The feature, rolling out now, goes further than the existing filter that replaces flagged words with “#” symbols. Under the new system, banned language gets silently reworded into what Roblox calls “more respectful language that remains closer to the user’s original intent.”

The platform’s example: type “Hurry TF up!” and the message your recipient sees reads “Hurry up!” Roblox says everyone in the chat is notified when this happens, though the person who typed the original message has no way to stop the substitution before it goes out.
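The behavioral difference between the old masking filter and the new rewriting system can be sketched in a few lines. This is a deliberately crude regex stand-in, assuming nothing about Roblox’s actual implementation (which uses AI, not pattern matching):

```python
import re

# Toy banned-token pattern; Roblox's real system covers far more.
BANNED = re.compile(r"\btf\b", re.IGNORECASE)

def mask(msg: str) -> str:
    """Old-style filter: flagged tokens become '#' symbols, visibly censored."""
    return BANNED.sub("##", msg)

def rewrite(msg: str) -> str:
    """New-style behavior: the flagged token is silently removed and the
    message is re-smoothed, so the recipient sees a 'clean' sentence."""
    return re.sub(r"\s+", " ", BANNED.sub("", msg)).strip()

print(mask("Hurry TF up!"))     # "Hurry ## up!"
print(rewrite("Hurry TF up!"))  # "Hurry up!"
```

The masked version signals to the recipient that something was removed; the rewritten version reads as if the sender had typed it that way, which is precisely what makes the new approach contentious.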

The definition of “banned language” extends beyond profanity. It covers “misspellings, special characters, or other methods to evade detection of profanity,” meaning the AI is also tasked with catching deliberate workarounds and rewriting those too.

Roblox is simultaneously expanding its text filtering system to “detect more variations of language that break its Community Standards,” so the net is getting wider at the same time, and the consequences of being caught in it are changing.

What Roblox has built is a system that goes beyond blocking speech. It replaces it. The message that leaves your keyboard is not the message that arrives. The recipient reads words you didn’t choose, attributed to you, with a notification that your original phrasing was deemed unacceptable. The platform decides what you said.

Keep reading

TikTok Says Privacy Makes Users Less Safe

Over the past five years, the largest social platforms settled on a clear position about private messaging: lock it down. Facebook turned on end-to-end encryption. Instagram and Messenger did the same. X joined the club. Yes, metadata is still an issue and the protocols used matter, but generally speaking, the move was toward more privacy for actual messages.

TikTok looked at that trend and made a different choice. Then it scheduled a briefing in London with the BBC to explain the reasoning.

The explanation was safety.

In the UK, TikTok belongs to ByteDance, a Chinese technology company that operates under Beijing’s jurisdiction. China maintains strict limits on end-to-end encryption inside its borders. TikTok, after its own review of the issue, reached the same policy outcome for its messaging system.

Alan Woodward, a cybersecurity professor at Surrey University, raised that point directly. The company’s “Chinese influence might be behind the decision,” he said, adding that end-to-end encryption is “largely banned in China.”

TikTok declined to engage with that suggestion, of course. The remark hung in the air. It is worth adding, however, that TikTok’s U.S. operation has given no indication that it is moving toward private messaging standards either.

End-to-end encryption is simple in theory. Only the people in a conversation can read the messages. The platform running the service cannot access the content. Governments cannot request it. Engineers inside the company cannot view it.

TikTok’s system operates in a different way. Messages on the platform remain readable to the company. Employees can access them under defined circumstances. Law enforcement agencies can request them through legal channels.
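The difference between the two models can be sketched in a few lines. This toy uses a one-time-pad XOR cipher as a stand-in for real end-to-end cryptography (actual deployments use public-key protocols such as Signal’s, none of which is modeled here); the point is only who holds readable content:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """One-time-pad toy cipher: XOR each message byte with a key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"

# End-to-end model: the key exists only on the two endpoints.
endpoint_key = secrets.token_bytes(len(message))  # shared only by sender and recipient
ciphertext = xor(message, endpoint_key)           # all the server ever relays or stores
assert xor(ciphertext, endpoint_key) == message   # the recipient can read it
# Without endpoint_key, the platform's copy (ciphertext) is indistinguishable from noise.

# Platform-readable model (TikTok's, per the article): the server holds the
# message in a form it can decrypt, so employees can access it and law
# enforcement can compel its production.
server_copy = message
```

In the first model, handing over the server’s copy reveals nothing; in the second, it reveals everything.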

TikTok argues that readable messages allow the company to identify harmful activity.

The debate turns on a basic technical fact. “We can read your messages to catch predators,” and “we can read your messages” describe the same system.

Keep reading

Congress Is Considering Abolishing Your Right to Be Anonymous Online

In August 2024, the Biden administration hosted hundreds of influencers at the White House for the first-ever Creator Economy Conference. Neera Tanden, a senior Biden adviser, took to the stage and bemoaned anonymity online. The influencers alongside her agreed, pushing the idea that anonymous speech on the internet is harmful, and regulation is needed to force the use of real names on social media. The audience whispered excitedly as those on stage spoke about how proposed laws like the Kids Online Safety Act, or KOSA, could unmask every troll. 

This narrative of online safety, particularly in relation to children, has become central to the bipartisan effort to censor and deanonymize the internet for everyone. Today, a package of a dozen “child online safety” bills is moving forward in the House of Representatives with bipartisan support. The laws, framed as a way to crack down on harmful content and make the internet safer, would force social media companies to enact invasive identity verification measures in order to keep children from accessing online spaces.

The problem is that there’s no way to reliably verify someone’s age without verifying who they are. A platform cannot magically discern that a user is 16 without collecting identifying information, whether through government documents such as a passport, payment information like a credit card, or other identity-disclosing data. Whether that data is stored by the platform itself or outsourced to a vendor, the result is always the same: A user’s offline identity is forever linked with their online behavior.

Stripping anonymity from the internet would constitute one of the most sweeping rollbacks of civil rights in recent history. It would allow for unprecedented levels of mass surveillance and censorship, endangering the most marginalized members of society. Whistleblowers exposing corporate wrongdoing could be tracked and fired, government employees speaking out about illegal behavior or bad policies could face prosecution, and activists organizing protests could be identified and surveilled before ever setting foot on the street.

Keep reading