New Company Hopes to Build Age-Verification Tech into Vape Cartridges 

Their goal is to use biometric data and blockchain to build age-verification measures directly into disposable vape cartridges.

Wired reports on a partnership between vape/cartridge manufacturer Ispire Technology and regulatory consulting company Chemular (which specializes in the nicotine market) — which they’ve named “Ike Tech”:

[Using blockchain-based security, the e-cig cartridge] would use a camera to scan some form of ID and then also take a video of the user’s face. Once it verifies your identity and determines you’re old enough to vape, it translates that information into anonymized tokens. That info goes to an identity service like ID.me or Clear. If approved, it bounces back to the app, which then uses a Bluetooth signal to give the vape the OK to turn on.

“Everything is tokenized,” [says Ispire CEO Michael Wang]. “As a result of this process, we don’t communicate consumer personal private information.” He says the process takes about a minute and a half… After that onetime check, the Bluetooth connection on the phone will recognize when the vape cartridge is nearby and keep it unlocked. Move the vape too far away from the phone, and it shuts off again. Based on testing, the companies behind Ike Tech claim this process has a 100 percent success rate in age verification, more or less calling the tech infallible. “The FDA told us it’s the holy grail technology they were looking for,” Wang says. “That’s word-for-word what they said when we met with them….”
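The companies haven’t published implementation details, but the proximity behavior described above reduces to a simple state check: unlocked only while verification has happened and the paired phone is in Bluetooth range. The class name, RSSI threshold, and API below are illustrative assumptions, not Ike Tech’s code:

```python
class ProximityLock:
    """Keeps a device unlocked only while a paired phone stays in range.

    Hypothetical sketch: the -70 dBm threshold is an assumption, not a
    documented Ike Tech parameter.
    """

    def __init__(self, verified, rssi_threshold=-70):
        self.verified = verified          # set once by the age check
        self.rssi_threshold = rssi_threshold
        self.unlocked = False

    def on_bluetooth_reading(self, rssi):
        """rssi: signal strength of the paired phone (None = out of range)."""
        in_range = rssi is not None and rssi >= self.rssi_threshold
        self.unlocked = self.verified and in_range
        return self.unlocked


lock = ProximityLock(verified=True)
print(lock.on_bluetooth_reading(-60))   # phone nearby -> True
print(lock.on_bluetooth_reading(None))  # phone gone   -> False
```

Note that in this model the “onetime check” only flips `verified`; everything after that is just a range test repeated on each Bluetooth reading.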

Wang says the goal is to implement additional features in the verification process, like geo-fencing, which would force the vape to shut off while near a school or on an airplane. In the future, the plan is to license this biometric verification tech to other e-cig companies. The tech may also grow to include fingerprint readers and expand to other product categories; Wang suggests guns, which have a long history of age-verification features not quite working.
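Geo-fencing of the kind Wang describes typically comes down to a distance check against a list of restricted zones. A minimal sketch using great-circle (haversine) distance — the coordinates, radius, and function names are hypothetical, not anything Ike Tech has specified:

```python
import math


def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))


def inside_geofence(pos, fence_center, radius_m):
    """True if pos falls within radius_m of the fence center."""
    return haversine_m(*pos, *fence_center) <= radius_m


# Hypothetical 300 m restricted zone around a school.
school = (40.7484, -73.9857)
print(inside_geofence((40.7485, -73.9855), school, 300))  # ~20 m away -> True
print(inside_geofence((40.8000, -73.9857), school, 300))  # ~5.7 km away -> False
```

In practice the device would shut off whenever `inside_geofence` is true for any zone in the list — the same check, looped.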

Keep reading

Digital IDs are insecure and can be hacked

On Wednesday, Norman Fenton, Professor Emeritus at Queen Mary University of London, published an article describing how he lost control of his Twitter (now X) account to a hacker on 18 March but has now regained limited control. 

“After sending a spam ‘vote for me’ direct message (with a malicious link) to all my mutual followers on 20 March, the attackers began posting spam publicly. Not sporadically, but continuously: the bot was posting roughly once per second, pausing only intermittently to repost content from accounts I follow. By the time I regained access, there were thousands of spam posts and reposts,” he said.  “For now, anyone visiting my timeline will see little but pages of spam.”

Prof. Fenton ended his article by saying, “Regaining access to my account ultimately required persistence, external pressure, and, quite likely, legal escalation. Many users facing similar attacks will not have those options.

“If X cannot reliably protect accounts, respond promptly to verified compromise reports, and restore control without unnecessary barriers, then users are left dangerously exposed.

“What’s clear right now is that when things go wrong, you will be largely on your own.”

Read more: I’ve regained control of my hacked X account, Norman Fenton, 25 March 2026

In response to Prof. Fenton’s experience, Dr. Scott McLachlan, formerly a lecturer in Digital Technologies for Healthcare at King’s College London, posted a Substack note taking Prof. Fenton’s final remarks a logical step further.

Keep reading

The Verdict Against Meta and Google That Could End the Anonymous Internet

A Los Angeles jury has found Meta and YouTube negligent in the design of their platforms and awarded $3 million to a plaintiff identified as K.G.M., a young woman who testified that years of near-constant social media use contributed to depression, anxiety, and body dysmorphia. The jury assigned 70% of the responsibility to Meta and 30% to YouTube. Punitive damages came to another $6 million.

The verdict is being reported as a landmark for child safety. It also represents a significant legal mechanism for dismantling anonymous internet access, built in plain sight, with bipartisan enthusiasm and a CEO’s willing assistance.

K.G.M.’s attorneys built their claim not around what users posted (content for which Section 230 of the Communications Decency Act largely shields platforms from liability) but around how the platforms were designed.

Infinite scroll, algorithmically amplified notifications, engagement loops engineered to maximize time on site. The argument treats social media architecture the way product liability law treats a car without brakes. A defective product that the public needs to be protected from.

Keep reading

The Kill Switch Society

There was a time — not very long ago — when the automobile represented one of the clearest expressions of individual choice in a free society. Limited only by fuel, roads, and imagination, a person could choose where to go, when to go, and how to get there. The car was not merely a machine. It was mobility made personal — an extension of autonomy and freedom.

Sadly, that is no longer the case. Increasingly, this same instrument, once a tool to facilitate individual independence, has been repurposed into a system of monitoring and control. Though advertised as consumer safety features, these systems were clearly designed to empower the state.

Modern vehicles are no longer just mechanical devices; they are computers on wheels. Embedded sensors track speed, braking patterns, seatbelt usage, location, and even driver attention. Event Data Recorders — commonly referred to as “black boxes” — have been standard in most new vehicles for years. Originally justified as instruments to reconstruct accidents, these devices record data in the moments before a crash. Few object to understanding the causes of collisions. But it is worth noting that once data exists, its use rarely remains confined to its original purpose.
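Recording “the moments before a crash” is classically done with a ring buffer: a fixed-size store that continuously overwrites its oldest samples, so only the most recent window survives a trigger event. A toy sketch — the sample rate, window length, and field names are illustrative assumptions, not any manufacturer’s actual EDR parameters:

```python
from collections import deque

SAMPLE_HZ = 10   # assumed telemetry rate
WINDOW_S = 5     # assumed pre-crash window

# deque with maxlen silently discards the oldest entry on each append,
# so the buffer always holds only the last 5 seconds of samples.
buffer = deque(maxlen=SAMPLE_HZ * WINDOW_S)

for t in range(1000):  # continuous driving telemetry
    buffer.append({"t": t, "speed_kph": 60, "brake": False})

# Frozen at the moment of a trigger event (e.g. airbag deployment):
snapshot = list(buffer)
print(len(snapshot))                        # 50 samples survive
print(snapshot[0]["t"], snapshot[-1]["t"])  # 950 999
```

The design choice is the point: the device never needs to store a full trip to hand over a complete picture of the final seconds.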

Insurance companies now seek access to driving data to adjust premiums. Law enforcement agencies have used vehicle data in criminal investigations. Courts have admitted such data as evidence. Each of these developments can be justified in isolation. Together, they represent a quiet but unmistakable shift: the automobile is no longer simply your property — it is a source of information about you.

More recently, legislative developments have accelerated this trend. The federal infrastructure legislation passed in 2021 includes a mandate for advanced impaired driving prevention technology to be installed in all new vehicles within the coming years. While often described in benign terms — systems that passively detect intoxication or driver impairment — the practical reality is that these systems must continuously monitor driver behavior in order to function. Monitoring creates data. And data, once created, rarely remains unused. It takes on a life of its own.

Proposals and discussions around remote vehicle disablement — popularly referred to as “kill switches” — have raised further concerns. While proponents argue that such features could prevent high-speed chases or stop stolen vehicles, the existence of remote-control capabilities introduces a fundamentally different relationship between the individual and the machine. A car that can be disabled remotely is clearly not under the control of its owner.

History suggests that powers granted for limited purposes seldom remain limited. Civil asset forfeiture, initially justified as a tool against organized crime, expanded into widespread seizures affecting ordinary citizens. Surveillance authorities granted for national security purposes have been used in far broader contexts. It would be historically naïve to assume that vehicle control technologies would be immune to similar expansion.

Keep reading

FC Barcelona Fined for Privacy Violations Over Biometric Data Collection

FC Barcelona got fined €500,000 ($579,219) for scanning the faces and recording the voices of over 100,000 members without doing the legal homework first.

Spain’s data protection authority, the AEPD, found the club had deployed biometric identity verification during a membership census update and processed all of it without a valid Data Protection Impact Assessment.

Members renewing their details remotely were required to either submit a facial scan through their device camera or record their voice. Both systems were live, both were processing biometric data at scale, and the documentation Barcelona produced to justify any of it didn’t meet the bar GDPR sets for high-risk processing.

Article 35 of the GDPR requires organizations to conduct a DPIA before deploying any system likely to create a high risk for individuals. Biometric data used for identification qualifies automatically.

Processing that touches more than 100,000 people, including minors, qualifies. Using new technologies qualifies. Barcelona’s system hit all three. The AEPD concluded the club’s documentation was missing the essential components of a genuine assessment: no real necessity and proportionality analysis, no adequate evaluation of what the processing actually risks for the people whose faces and voices it captured.
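The qualification logic above is an any-of rule: a single trigger is enough to require a DPIA before go-live. The function below is an illustrative paraphrase of that logic — the threshold constant and parameter names are assumptions for the sketch, not the AEPD’s test verbatim:

```python
LARGE_SCALE = 100_000  # assumed threshold for "large scale" processing


def dpia_required(biometric_identification,
                  data_subjects,
                  includes_minors,
                  new_technology):
    """Any single Art. 35 trigger is enough to require a DPIA."""
    return (biometric_identification
            or (data_subjects >= LARGE_SCALE and includes_minors)
            or new_technology)


# Barcelona's census system hit all three triggers at once:
print(dpia_required(True, 100_000, True, True))   # True
# Note that member consent appears nowhere in this check -- it is a
# separate, independent obligation.
print(dpia_required(False, 500, False, False))    # False
```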

The AEPD’s decision in case PS-00450-2024 makes one point with particular clarity: consent doesn’t substitute for a DPIA. Barcelona had asked members to agree to biometric data collection, and members had agreed.

That agreement is legally irrelevant to the separate procedural obligation to assess risk before the system goes live. The GDPR treats them as independent requirements. Satisfying one doesn’t discharge the other.

What a valid DPIA actually requires, according to the decision, is a clear description of the processing, a genuine necessity and proportionality assessment, a detailed risk evaluation, proposed mitigation measures, and a residual risk assessment after mitigations are applied. Organizations that generate DPIA documentation as a compliance checkbox, without substantively working through those questions, remain exposed regardless of what consent language they put in front of users.

The appetite for facial biometric data has become near-universal across industries, and the Barcelona case lands in a moment when that appetite is accelerating faster than the rules meant to govern it.

Keep reading

Canada’s Public Safety Minister Defends Mass Surveillance Bill

Canada’s Public Safety Minister, Gary Anandasangaree, wants you to know that Bill C-22 is not a surveillance bill. He said so twice.

“I want to be very clear about what C-22 is not. It is not about the surveillance of honest, hard-working Canadians going on about their daily lives,” Anandasangaree told an audience that included police chiefs and law enforcement officials.

Then, a few sentences later: “We’re not looking for sneaky ways to surveil Canadians. We are doing our part to combat bad actors in both the physical and digital worlds.”

What he described is a surveillance bill.

The Lawful Access Act, introduced this month, compels electronic service providers to retain Canadians’ metadata for a year and gives police and CSIS new mechanisms to access it. That includes location data, device identifiers, and daily movement patterns, all stored in advance, on every Canadian, not just suspects, held ready for law enforcement retrieval.

Keep reading

Hong Kong Police Can Now Demand Phone or Computer Passwords of Suspects Under National Security Law

Your passwords are no longer safe in Hong Kong.

Ever since Hong Kong returned from British rule to China in 1997, the territory has devolved towards the totalitarian ways of mainland China.

In 2020, the National Security Law (NSL) imposed by Beijing crystallized this new reality.

The NSL criminalizes acts of ‘secession, subversion, terrorism, collusion with foreign forces, treason, espionage, sabotage, and external interference’.

With penalties up to life imprisonment, it even allows some cases to be tried on the mainland, and it grants police ‘broad surveillance and detention powers’.

Needless to say, this has curtailed political dissent, free speech, and pro-democracy activities in Hong Kong.

And now, Hong Kong police can demand phone or computer passwords from suspects under the NSL.

BBC reported:

“Those who refuse could face up to a year in jail and a fine of up to HK$100,000 ($12,700; £9,600), and individuals who provide ‘false or misleading information’ could face up to three years in jail.

It comes as part of new amendments to a bylaw under the NSL that the government gazetted on Monday.”

Keep reading

The Feds Are Investing in Wearable Health Trackers. That Could Put Your Private Data at Risk.

By gathering continuous data about sleep, heart rate, and physical activity, biowearable devices can give individuals more control over their well-being. But they also create a detailed digital record of our daily lives—one that the federal government may soon be able to access readily.

Consider this scenario.

You’ve recently received a government-subsidized biowearable. Accordingly, the authorities now know when you’re sleeping, because the device reports your sleep cycle, location, and daily movements in real time to a cloud server accessible through a legal process. It knows when you’re home. It knows when you leave.

Those data are then obtained by an FBI field office (either through direct purchase or, if necessary, a legal process), because a federal prosecutor has decided that your criticism of immigration enforcement operations and your social media posts supporting Immigration and Customs Enforcement protesters constitute “incitement to violence” against federal agents. Under the Trump administration’s elastic (and legally dubious) domestic terrorism definitions and designations, that is enough to open a criminal investigation.

And because the government has known for weeks when you’re at home sleeping, it knows exactly when to break down your door.

That scenario may sound far-fetched, but it is getting closer to reality. In March, the Department of Health and Human Services (HHS) announced that the Advanced Research Projects Agency for Health (ARPA-H) would begin investing in new biowearable technologies through a program it called Delphi, after the ancient Greek sanctuary where the maxim “know thyself” was inscribed. It’s a fitting name for a program designed to help people understand their bodies, but it also raises an uncomfortable question: Who else might come to know them just as well?

The program aims to develop biosensors capable of continuously monitoring cytokines (cellular inflammation markers) and hormone levels, going substantially beyond what current wearables can detect. Funding will be determined on a competitive basis as private-sector stakeholders submit proposals; no specific appropriation has been announced.

It remains unclear why this taxpayer funding is necessary in a field that is already thriving. The global wearables market was valued at roughly $43 billion in 2024 and is projected to exceed $168 billion by 2030.

Devices worn on the wrist, finger, or skin can already monitor heart rates, blood oxygen levels, sleep patterns, physical activity, and—in the case of continuous glucose monitors—blood sugar levels in real time. Some smartwatches can even conduct electrocardiograms capable of detecting irregular heart rhythms, such as atrial fibrillation.

Until recently, people could access most of this information only during periodic visits to a clinic or hospital. Biowearables now enable people to monitor many of these signals continuously in everyday life.

Keep reading

White House AI Framework Pushes Age Verification ID Mandate

The White House has published a National AI Legislative Framework, a set of recommendations to Congress intended to govern artificial intelligence with a single uniform standard rather than, as the document puts it, “a patchwork of conflicting state laws.”

The administration wants federal law to preempt the states. That part is straightforward. What the framework actually proposes is less straightforward.

Alongside a genuine free speech provision, the document contains age verification mandates, chat surveillance requirements, national security carve-outs that would tighten the relationship between AI companies and federal intelligence agencies, and an expansion of the TAKE IT DOWN Act, a law that we have already flagged for lacking adequate safeguards against censorship.

The White House is presenting all of this as part of the same coherent package.

Start with the child protection section: Congress should establish “commercially reasonable, privacy protective, age-assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors.” Age verification on AI platforms. The framework calls these requirements “privacy protective.” They are not.

There is no version of meaningful age verification that doesn’t require collecting sensitive personal data, and there is no version of collecting sensitive personal data at scale that isn’t a breach waiting to happen.

The only tools platforms have are identity-based checks (government IDs, biometric scans, credit card data, and third-party verification services) or biometric estimation.

The only way to prove that someone is old enough to use a site is to collect personal data about who they are.

In October 2025, Discord identified 70,000 users globally who potentially had their photo IDs exposed to hackers.

Keep reading

Meta is Ending Instagram Direct Message End-to-End Encryption

Meta is quietly dismantling one of its few genuine privacy commitments. Starting May 8, end-to-end encryption for Instagram direct messages disappears, taking with it the one technical guarantee that kept those conversations private from Meta itself.

“If you have chats that are impacted by this change, you will see instructions on how you can download any media or messages you may want to keep,” the company said in a help document, framing the loss of message privacy as a data export problem. Collect your things, the walls are coming down.

The feature being removed was never universal anyway. End-to-end encryption for Instagram DMs had been available only in certain regions, not enabled by default, since Meta began testing it in 2021 as part of what CEO Mark Zuckerberg called his “privacy-focused vision for social networking.”

That vision apparently has an expiration date. Meta also made encrypted DMs available to all adult users in Ukraine and Russia in February 2022, weeks after the Russian invasion began. That access, too, is ending.

The timing is revealing. TikTok told the BBC last week that it has no plans to bring end-to-end encryption to its DMs, arguing that privacy makes users less safe. Meta is now arriving at the same destination from a different direction.

The stakes are straightforward. End-to-end encryption means only the people in a conversation can read it, a technical lock that excludes the platform, third parties, and anyone who might later obtain a warrant.

When that lock disappears, Meta and its employees can read Instagram DMs, law enforcement can subpoena them, and advertisers may eventually benefit from what gets learned.
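The property at stake can be shown with a deliberately toy scheme: the platform stores and forwards ciphertext it cannot read, and only a key held by the two endpoints recovers the message. This one-time-pad XOR is for illustration only — real end-to-end encryption (deployments like Meta’s are typically based on the Signal protocol) is far more involved:

```python
import secrets


def xor(data, key):
    """XOR each message byte with the corresponding key byte."""
    return bytes(d ^ k for d, k in zip(data, key))


# The key exists only at the two endpoints (key exchange assumed
# out of band for this sketch).
msg = b"meet at noon"
key = secrets.token_bytes(len(msg))

ciphertext = xor(msg, key)   # all the platform ever stores and forwards

# The recipient, holding the key, recovers the message exactly:
print(xor(ciphertext, key).decode())  # meet at noon
```

Remove the encryption and the platform holds `msg` itself — readable by employees, subpoenable by law enforcement, and minable for advertising. That is the switch Meta is flipping.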

Instagram users who relied on encrypted DMs have until May 8 to decide what to archive. After that, their private conversations are Meta’s to read.

Keep reading