New Company Hopes to Build Age-Verification Tech into Vape Cartridges 

Their goal is to use biometric data and blockchain to build age-verification measures directly into disposable vape cartridges.

Wired reports on a partnership between vape/cartridge manufacturer Ispire Technology and Chemular, a regulatory consulting company that specializes in the nicotine market. The two have named the venture “Ike Tech”:

[Using blockchain-based security, the e-cig cartridge] would use a camera to scan some form of ID and then also take a video of the user’s face. Once it verifies your identity and determines you’re old enough to vape, it translates that information into anonymized tokens. That info goes to an identity service like ID.me or Clear. If approved, it bounces back to the app, which then uses a Bluetooth signal to give the vape the OK to turn on.

“Everything is tokenized,” [says Ispire CEO Michael Wang]. “As a result of this process, we don’t communicate consumer personal private information.” He says the process takes about a minute and a half… After that onetime check, the Bluetooth connection on the phone will recognize when the vape cartridge is nearby and keep it unlocked. Move the vape too far away from the phone, and it shuts off again. Based on testing, the companies behind Ike Tech claim this process has a 100 percent success rate in age verification, more or less calling the tech infallible. “The FDA told us it’s the holy grail technology they were looking for,” Wang says. “That’s word-for-word what they said when we met with them….”

Wang says the goal is to implement additional features in the verification process, like geo-fencing, which would force the vape to shut off while near a school or on an airplane. In the future, the plan is to license this biometric verification tech to other e-cig companies. The tech may also grow to include fingerprint readers and expand to other product categories; Wang suggests guns, which have a long history of age-verification features not quite working.
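The flow described above reduces to a small state machine: a one-time identity check that yields an anonymized token, then a Bluetooth proximity leash, with geofencing layered on top. The sketch below is purely illustrative; Ike Tech has published no API, and every name and value in it is an assumption.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: Ike Tech has published no API, and every
# name and value here is a guess at the flow the article describes.

@dataclass
class VapeLockState:
    verified_token: Optional[str] = None  # set once by the identity check

    def complete_verification(self, id_service_approved: bool) -> None:
        # The app sends an ID scan and a face video out for verification
        # (the article names services like ID.me and Clear); only an
        # anonymized approval token comes back, not personal data.
        if id_service_approved:
            self.verified_token = "anon-token"  # placeholder value

    def can_fire(self, phone_in_ble_range: bool, in_restricted_zone: bool) -> bool:
        # After the one-time check, the cartridge stays unlocked only while
        # the phone is in Bluetooth range; geofencing (schools, airplanes)
        # would force it off regardless.
        return (
            self.verified_token is not None
            and phone_in_ble_range
            and not in_restricted_zone
        )

vape = VapeLockState()
assert not vape.can_fire(phone_in_ble_range=True, in_restricted_zone=False)  # not yet verified
vape.complete_verification(id_service_approved=True)
assert vape.can_fire(phone_in_ble_range=True, in_restricted_zone=False)      # verified, phone nearby
assert not vape.can_fire(phone_in_ble_range=False, in_restricted_zone=False) # phone moved away
```

Note that in this model the "100 percent success rate" claim only covers the unlock logic itself, not the ID-matching step delegated to the external service.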


Apple UK Age Verification Chaos: Users Face Failed Scans, Rejected Passports, and Forced Content Filters

Apple’s iOS 26.4 age verification system is failing UK users who don’t have a credit card or photocard driving license, leaving them with no way to prove they’re adults on devices they’ve owned for years.

The system arrived without warning, without explanation, and without any apparent consideration for the people who don’t fit Apple’s narrow assumptions about what a British adult looks like.

No Warning, No Communication

Apple sent no email. It included no mention of age verification in the iOS 26.4 release notes it shared publicly.

Unless you’d been following the developer beta track, where the feature appeared in February, or reading Reclaim The Net’s earlier coverage, the first you knew about it was a prompt on your screen after restarting your phone.

That’s how 35 million UK iPhone users found out their devices now require identity documents to function normally. A “Confirm You Are 18+” label appeared at the top of Settings, and anyone who couldn’t or wouldn’t comply got silently downgraded. Apple’s Web Content Filter switched on, blocking websites across Safari and every third-party browser. Communication Safety activated, scanning images and videos in Messages and FaceTime for nudity. Features that worked fine the day before now require government-approved proof of adulthood.

A company that controls what software runs on every iPhone it sells decided overnight that UK users needed to hand over identity documents to keep using the devices they already paid for. And it didn’t bother to tell them it was coming.


Digital IDs are insecure and can be hacked

On Wednesday, Norman Fenton, Professor Emeritus at Queen Mary University of London, published an article describing how he lost control of his Twitter (now X) account to a hacker on 18 March but has now regained limited control. 

“After sending a spam ‘vote for me’ direct message (with a malicious link) to all my mutual followers on 20 March, the attackers began posting spam publicly. Not sporadically, but continuously: the bot was posting roughly once per second, pausing only intermittently to repost content from accounts I follow. By the time I regained access, there were thousands of spam posts and reposts,” he said.  “For now, anyone visiting my timeline will see little but pages of spam.”

Prof. Fenton ended his article by saying, “Regaining access to my account ultimately required persistence, external pressure, and, quite likely, legal escalation. Many users facing similar attacks will not have those options.

“If X cannot reliably protect accounts, respond promptly to verified compromise reports, and restore control without unnecessary barriers, then users are left dangerously exposed.

“What’s clear right now is that when things go wrong, you will be largely on your own.”

Read more: I’ve regained control of my hacked X account, Norman Fenton, 25 March 2026

In response to Prof. Fenton’s experience, Dr. Scott McLachlan, formerly a lecturer in Digital Technologies for Healthcare at King’s College London, posted a Substack note taking Prof. Fenton’s final remarks a logical step further.


The Age Verification Con

Politicians on both sides of the Atlantic are competing to look tough on Silicon Valley. They hold hearings, write bills, and pose for photographs with parents who say their kids’ lives were ruined by social media algorithms they somehow couldn’t pull them away from.

The cause is protecting children from social media, and it supposedly polls so well that it has achieved something almost unheard of in modern politics: genuine bipartisan consensus. Republicans and Democrats in Washington. Labour and Conservatives in Westminster. The Australian parliament voted the whole thing through with barely a whisper of dissent.

There is just one problem with the narrative. The tech giants these politicians claim to be fighting are spending record sums to help them do it. And the tool they have all converged on, age verification, is not really about checking whether someone is 15 or 16. It is the architecture for a verified internet, one where anonymous access is replaced by identity checkpoints, and where using a social media account, downloading an app, or browsing a website requires you to show your papers first.

The campaign is presented as protecting children. The infrastructure being built will apply to everyone.


Apple Forces UK iPhone Age Checks in iOS 26.4

With iOS 26.4, Apple has turned every iPhone in the UK into an identity checkpoint. The update, released March 24, requires all UK users to confirm they’re 18 or older before accessing certain features and services on their Apple Account.

UK communications regulator Ofcom called it “a real win for children and families.”

The infrastructure being built is more of a problem than that framing suggests.

Apple has, without warning, placed a gatekeeper on the devices of 35 million UK users who paid good money for full-featured smartphones and now find themselves holding something closer to a supervised children’s tablet.

It’s a corporate ultimatum: hand over sensitive personal data or lose functionality you already paid for.

The verification prompt appears immediately after the update installs.

Apple checks whether your account already has a credit card linked or whether the account has existed long enough to establish you as an adult.

For many existing users, the process is essentially automatic. For everyone else, the options narrow quickly: link a credit card, scan a government-issued photo ID, or accept that your account defaults to teen restrictions, with Apple’s Web Content Filter and Communication Safety features switched on across all browsers, messaging apps, and FaceTime, monitoring communications.

Web Content Filter blocks websites Apple classifies as explicit, operating across Safari and third-party browsers alike.

Communication Safety scans incoming and outgoing images and videos for nudity. Both activate silently for anyone who hasn’t cleared the adult threshold. Skip verification, or lack a credit card and a government ID, and Apple decides what you’re allowed to see.

Users without a credit card or government ID have no other path. Reports from UK users confirm it. Scan the card, upload the ID, or live with restricted access. The system doesn’t offer alternatives.
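As reported, the gating logic reduces to a short decision chain. A minimal sketch in Python, where the function name, signature, and the account-age threshold are all assumptions for illustration (Apple has not published its implementation):

```python
# Hypothetical model of the reported iOS 26.4 verification paths for UK
# accounts. ASSUMED_ADULT_ACCOUNT_AGE_YEARS is a placeholder: Apple has
# not said how old an account must be to count as "established adult."

ASSUMED_ADULT_ACCOUNT_AGE_YEARS = 18

def resolve_account_tier(has_linked_credit_card: bool,
                         account_age_years: int,
                         scanned_photo_id: bool) -> str:
    """Return 'adult' or 'restricted' per the reported verification paths."""
    # Existing signals: a linked credit card, or an account that has
    # existed long enough, clears the check automatically.
    if has_linked_credit_card or account_age_years >= ASSUMED_ADULT_ACCOUNT_AGE_YEARS:
        return "adult"
    # Otherwise: scan a government-issued photo ID.
    if scanned_photo_id:
        return "adult"
    # No card, no ID: teen restrictions apply, with Web Content Filter
    # and Communication Safety switched on silently.
    return "restricted"

print(resolve_account_tier(False, 2, False))  # a user with neither card nor ID
```

The notable property of this chain is the absence of any fallthrough branch: there is no path to "adult" that does not involve a credit card, a long account history, or a government document.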

Ofcom praised the rollout in a statement, saying it had coordinated extensively with Apple and others on age assurance under the Online Safety Act: “Apple’s decision that the UK will be one of the first countries in the world to receive new child safety protections on devices is a real win for children and families…We’ve worked closely with Apple and other services to ensure they can be applied in a variety of contexts in order to ensure users are protected. This will build on the strong foundations of the Online Safety Act, from widespread age checks that keep young people away from harmful content, to blocking high-risk sites and stepping up action against child sexual abuse material.”


GrapheneOS Defies Age Verification Surveillance Laws, Vowing to Protect User Privacy Worldwide

GrapheneOS has a simple answer to the wave of age verification laws moving through US state legislatures and already live in Brazil: no.

The privacy-focused Android fork announced last Friday that it won’t implement the age data collection these laws demand. “GrapheneOS will remain usable by anyone around the world without requiring personal information, identification, or an account,” the project stated.

“If GrapheneOS devices can’t be sold in a region due to their regulations, so be it.” That’s a blunter response than most OS developers are willing to give, and it’s worth understanding what it’s actually refusing.

Brazil’s Digital ECA (Law 15.211) came into force on March 17, hitting OS providers with fines of up to R$50 million, roughly $9.5 million per violation, for failing to build age verification into device setup.

California’s Digital Age Assurance Act, AB-1043, signed by Governor Newsom in October 2025 and effective January 1, 2027, goes further: it requires every OS provider to collect a user’s age or date of birth during account setup, then push that data to app stores and developers through a real-time API.

Colorado’s SB26-051 cleared the state senate on March 3 with similar demands. The architecture these laws collectively envision is an age-linked identity layer baked into the operating system itself, present before you’ve opened a single app.
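The OS-level age layer these bills describe can be illustrated with a toy version of the real-time signal AB-1043 mandates: the OS collects a birth date at account setup, derives a bracket, and pushes it to app stores and developers. The bracket labels and JSON shape below are assumptions; the bill requires collecting age or date of birth and exposing it via an API, not this specific schema.

```python
import json
from datetime import date

# Hypothetical illustration of an OS-to-app-store age signal of the kind
# AB-1043 describes. Bracket labels and payload shape are assumptions.

AGE_BRACKETS = [(0, 12, "under_13"), (13, 15, "13_15"),
                (16, 17, "16_17"), (18, 200, "adult")]

def age_signal(birth_date: date, today: date) -> str:
    # Standard age computation: subtract a year if the birthday
    # hasn't occurred yet this year.
    years = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    for lo, hi, label in AGE_BRACKETS:
        if lo <= years <= hi:
            return json.dumps({"age_bracket": label})  # what an API might push
    return json.dumps({"age_bracket": "unknown"})

print(age_signal(date(2010, 6, 1), date(2026, 3, 24)))
```

Even in this minimal form, the signal exists before any app is opened, which is exactly the pre-app identity layer the laws collectively envision.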

GrapheneOS is developed by the GrapheneOS Foundation, a registered Canadian nonprofit.

California’s AB-1043 carries civil penalties of up to $2,500 per affected child for negligent violations and $7,500 for intentional ones, enforced by the state attorney general. The Canadian nonprofit status provides some distance but not a guarantee.

The stakes grew more concrete after GrapheneOS and Motorola announced a partnership at MWC on March 2, bringing the hardened OS to future Motorola hardware and ending GrapheneOS’s long exclusivity to Google Pixel devices. A GrapheneOS-powered Motorola phone is expected in 2027.

Once a major hardware manufacturer ships devices with GrapheneOS pre-installed, those products need to comply with local regulations in every market where they’re sold, or Motorola will have to restrict sales geographically.

The defiant stance that’s easy for a nonprofit software project becomes a commercial problem for a global device manufacturer.


Blackburn’s TRUMP AMERICA AI Act Repeals Section 230, Expands AI Liability, and Mandates Age Verification

Senator Marsha Blackburn has introduced a 291-page legislative discussion draft that would reshape how information is allowed to exist online.

The TRUMP AMERICA AI Act, officially titled “The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry” Act, bundles together Section 230 repeal, expanded AI liability, age verification mandates, and a stack of additional bills that have been circulating separately for years.

All of it is wrapped in a national AI framework that claims it is tied to President Trump’s December Executive Order. The bill is framed as pro-innovation, pro-safety, designed to “protect children, creators, conservatives, and communities” while positioning the US to win the global AI race.

What the actual 291 pages describe is a system that centralizes regulatory authority, removes the legal protections platforms currently rely on, and hands new enforcement tools to federal agencies, state attorneys general, and private litigants simultaneously.

We obtained a copy of the bill for you here.

The legal foundation of the modern internet is Section 230 of the Communications Decency Act. It shields platforms from being sued for the content that users post. Without Section 230, platforms could become legally responsible for what their users post, which could mean anything controversial, contested, or legally ambiguous becomes a liability they’ll quietly remove rather than defend.

Blackburn’s bill repeals it entirely, after a two-year transition period.

Platforms and AI developers could face lawsuits for “defective design,” “failure to warn,” or deploying systems deemed “unreasonably dangerous.”

AI platforms would be incentivized to heavily monitor users.


Sweden Breach Shows the Security Risks of National Digital ID Systems

A hacker group calling itself ByteToBreach has posted what it claims is source code stolen from CGI’s Swedish division. Among the allegedly compromised systems: the codebase powering BankID logins for the Swedish Tax Agency.

It’s a ransacked filing cabinet inside the architecture of a country that digitized itself completely, then discovered the cost of doing so.

BankID is the single authentication layer Swedes use for nearly everything: government services, banking, digital signatures, and tax filings.

Over 8.6 million people in a country of just over 10 million run their digital lives through it. That’s a national dependency, a single point of failure dressed up as infrastructure modernization.

The dump appeared on Breached.

Journalists at Dagens Nyheter reviewed portions of the leaked material and reported finding source code, passwords, and encryption keys. Breached was taken offline over the weekend as part of a cybersecurity operation, limiting independent verification.

Also reportedly being sold separately: databases containing Swedish citizens’ personal data and electronic signature documents. The breach exposes a layered vulnerability.

CGI confirms it, but frames it narrowly


White House AI Framework Pushes Age Verification ID Mandate

The White House has published a National AI Legislative Framework, a set of recommendations to Congress intended to govern artificial intelligence with a single uniform standard rather than, as the document puts it, “a patchwork of conflicting state laws.”

The administration wants federal law to preempt the states. That part is straightforward. What the framework actually proposes is less straightforward.

Alongside a genuine free speech provision, the document contains age verification mandates, chat surveillance requirements, national security carve-outs that would tighten the relationship between AI companies and federal intelligence agencies, and an expansion of the TAKE IT DOWN Act, a law that we have already flagged for lacking adequate safeguards against censorship.

The White House is presenting all of this as part of the same coherent package.

Start with the child protection section: Congress should establish “commercially reasonable, privacy protective, age-assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors.” Age verification on AI platforms. The framework calls these requirements “privacy protective.”  They are not.

There is no version of meaningful age verification that doesn’t require collecting sensitive personal data, and there is no version of collecting sensitive personal data at scale that isn’t a breach waiting to happen.

The only tools platforms have are identity-based checks (government IDs, biometric scans, credit card data, and third-party verification services) or biometric estimation.

The only way to prove that someone is old enough to use a site is to collect personal data about who they are.

In October 2025, Discord identified 70,000 users globally who potentially had their photo IDs exposed to hackers.


Brazil Launches Mandatory Age Verification Law for Online Platforms

Brazil’s Digital ECA (Estatuto da Criança e do Adolescente Digital) took effect today, March 17, requiring nearly every tech product accessible to children to clear a long list of compliance obligations.
Apps, operating systems, app stores, video games, social networks: all potentially covered, all facing fines of up to 50 million Brazilian reais (roughly US$9.44 million) or 10% of their Brazilian revenue for non-compliance.

As always, the framing is child protection. The infrastructure being built is a national age verification system woven into the fabric of internet access.

“Brazil has stepped forward as the first country in Latin America to pass a dedicated law to protect children’s online privacy and safety,” goes the official line.

Every major technology platform operating in Brazil must now determine how old its users are and restrict what they can see accordingly. The checkbox that said “I am over 18” is explicitly banned.

Article 37’s sole paragraph states that regulations “may not, under any circumstances, impose, authorize, or result in the implementation of mechanisms of massive, generic, or indiscriminate surveillance.”

Then Article 9 bans self-reported age. Article 12 demands “auditable” verification. The law prohibits the only mechanism that would make the law work.

Auditable, non-self-declaration age verification requires collecting something real about you.

The law permits a range of methods: government ID, biometric face scanning, behavioral pattern analysis that watches how you type and what you click, age inference from activity data, and educational history.

Every single one of these collects sensitive personal information and creates a record. There is no method on the approved list that doesn’t involve building exactly the kind of identity infrastructure Article 37 claims to forbid. The legislators either didn’t notice the contradiction or they noticed and didn’t care.

The obligation falls on platforms, not directly on every individual user. But the effect is the same. Platforms that want to comply need to verify who you are and how old you are before showing you restricted content. If you want to see it, you provide the data. If you don’t provide the data, you don’t get access.
