New Company Hopes to Build Age-Verification Tech into Vape Cartridges 

Their goal is to use biometric data and blockchain to build age-verification measures directly into disposable vape cartridges.

Wired reports on a partnership between vape/cartridge manufacturer Ispire Technology and regulatory consulting company Chemular (which specializes in the nicotine market), a venture they’ve named “Ike Tech”:

[Using blockchain-based security, the e-cig cartridge] would use a camera to scan some form of ID and then also take a video of the user’s face. Once it verifies your identity and determines you’re old enough to vape, it translates that information into anonymized tokens. That info goes to an identity service like ID.me or Clear. If approved, it bounces back to the app, which then uses a Bluetooth signal to give the vape the OK to turn on.

“Everything is tokenized,” [says Ispire CEO Michael Wang]. “As a result of this process, we don’t communicate consumer personal private information.” He says the process takes about a minute and a half… After that onetime check, the Bluetooth connection on the phone will recognize when the vape cartridge is nearby and keep it unlocked. Move the vape too far away from the phone, and it shuts off again. Based on testing, the companies behind Ike Tech claim this process has a 100 percent success rate in age verification, more or less calling the tech infallible. “The FDA told us it’s the holy grail technology they were looking for,” Wang says. “That’s word-for-word what they said when we met with them….”
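As described, the cartridge’s firing decision reduces to a per-poll check: did the one-time ID/face verification succeed, and is the paired phone’s Bluetooth signal still in range? The sketch below is an illustrative reconstruction, not Ike Tech’s actual implementation; the function name, signature, and RSSI threshold are all assumptions.

```python
def should_unlock(age_verified: bool, rssi, threshold: int = -70) -> bool:
    """Decide, for one polling cycle, whether the vape may fire.

    rssi: Bluetooth signal strength in dBm (more negative = farther away),
          or None if the verified phone is not detectable at all.
    threshold: assumed cutoff below which the phone counts as "too far".
    """
    if not age_verified:
        return False           # the one-time ID/face check never succeeded
    if rssi is None:
        return False           # phone out of range entirely: vape shuts off
    return rssi >= threshold   # phone close enough: cartridge stays unlocked
```

A firmware loop would call something like this every polling interval, disabling the heating element whenever it returns False, which matches the described behavior of the vape shutting off when the phone moves too far away.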

Wang says the goal is to implement additional features in the verification process, like geo-fencing, which would force the vape to shut off while near a school or on an airplane. In the future, the plan is to license this biometric verification tech to other e-cig companies. The tech may also grow to include fingerprint readers and expand to other product categories; Wang suggests guns, which have a long history of age-verification features not quite working.

Keep reading

The Age Verification Con

Politicians on both sides of the Atlantic are competing to look tough on Silicon Valley. They hold hearings, write bills, and pose for photographs with parents who say their kids’ lives were ruined by social media algorithms they somehow couldn’t pull them away from.

The cause is protecting children from social media, and it supposedly polls so well that it has achieved something almost unheard of in modern politics: genuine bipartisan consensus. Republicans and Democrats in Washington. Labour and Conservatives in Westminster. The Australian parliament voted the whole thing through with barely a whisper of dissent.

There is just one problem with the narrative. The tech giants these politicians claim to be fighting are spending record sums to help them do it. And the tool they have all converged on, age verification, is not really about checking whether someone is 15 or 16. It is the architecture for a verified internet, one where anonymous access is replaced by identity checkpoints, and where using a social media account, downloading an app, or browsing a website requires you to show your papers first.

The campaign is presented as protecting children. The infrastructure being built will apply to everyone.

Keep reading

Apple Forces UK iPhone Age Checks in iOS 26.4

With iOS 26.4, Apple has turned every iPhone in the UK into an identity checkpoint. The update, released March 24, requires all UK users to confirm they’re 18 or older before accessing certain features and services on their Apple Account.

UK communications regulator Ofcom called it “a real win for children and families.”

The infrastructure being built is more of a problem than that framing suggests.

Apple has, without warning, placed a gatekeeper on the devices of 35 million UK users who paid good money for full-featured smartphones and now find themselves holding something closer to a supervised children’s tablet.

It’s a corporate ultimatum: hand over sensitive personal data or lose functionality you already paid for.

The verification prompt appears immediately after the update installs.

Apple checks whether your account already has a credit card linked or whether the account has existed long enough to establish you as an adult.

For many existing users, the process is essentially automatic. For everyone else, the options narrow quickly: link a credit card, scan a government-issued photo ID, or accept that your account defaults to teen restrictions, with Apple’s Web Content Filter and Communication Safety features switched on across all browsers, messaging apps, and FaceTime, monitoring communications.
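The fallback chain Apple applies can be read as a simple decision sequence. The sketch below is an assumption-laden illustration, not Apple’s actual logic: the exact signals checked and the account-age threshold are not public, and the function name and parameters are invented for clarity.

```python
def resolve_age_status(has_credit_card: bool,
                       account_age_years: float,
                       id_scan_confirms_adult: bool = False) -> str:
    """Return 'adult' or 'teen-restricted' under the assumed fallback chain."""
    if has_credit_card:
        return "adult"             # a linked card is treated as proof of 18+
    if account_age_years >= 18:    # assumed threshold: account has existed
        return "adult"             # long enough to establish an adult holder
    if id_scan_confirms_adult:
        return "adult"             # user opted to scan a government photo ID
    return "teen-restricted"       # default: content filters switch on
```

The point of the sketch is the ordering: users who clear no check, and who decline to hand over a card or an ID, fall through to the restricted default.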

Web Content Filter blocks websites Apple classifies as explicit, operating across Safari and third-party browsers alike.

Communication Safety scans incoming and outgoing images and videos for nudity. Both activate silently for anyone who hasn’t cleared the adult threshold. Skip verification, or lack a credit card and a government ID, and Apple decides what you’re allowed to see.

Users without a credit card or government ID have no other path. Reports from UK users confirm it. Scan the card, upload the ID, or live with restricted access. The system doesn’t offer alternatives.

Ofcom praised the rollout in a statement, saying it had coordinated extensively with Apple and others on age assurance under the Online Safety Act: “Apple’s decision that the UK will be one of the first countries in the world to receive new child safety protections on devices is a real win for children and families…We’ve worked closely with Apple and other services to ensure they can be applied in a variety of contexts in order to ensure users are protected. This will build on the strong foundations of the Online Safety Act, from widespread age checks that keep young people away from harmful content, to blocking high-risk sites and stepping up action against child sexual abuse material.”

Keep reading

Politicians Want To Ban Gambling Ads To Stop Youth Addiction. What Do the Data Say About Teens and Betting?

Are young boys everywhere on the verge of being pulled down into the abyss of online gambling? 

If you’ve been reading the news lately, you might be tempted to think so. Young men are all addicted to sports betting, and now the industry’s incessant advertising is luring in kids, or so some media outlets say. If something isn’t done to limit ads for gambling apps, some argue, kids will continue to be taken advantage of.

These dire predictions have reached Colorado lawmakers, who are now considering Senate Bill 26-131, which would place major restrictions on sports betting in the state. In addition to barring adults from making more than five separate deposits with an individual betting operator within a 24-hour period, the bill would also make it illegal to broadcast an ad “for a sports betting operation from 8 a.m. to 10 p.m. or during a live broadcast of an athletic competition.”

The bill’s supporters defend this prohibition by saying it’s necessary to protect kids. In reality, gambling ads pose very little threat to children. Even for those who find ways around age restrictions, the data show that they make bets only occasionally (similar to most adults). While gambling can and does ruin lives for the small fraction of adults who become addicted to it, it’s a relatively harmless form of entertainment for the vast majority of users. And for those who do have a genuine gambling problem, advertisements for legitimate betting apps can help direct them toward well-regulated companies and steer them away from dangerous, illegal gambling operations. 

The moral panic over online sports betting has made some people wildly overestimate the power of TV commercials. State Sen. Matt Ball (D–Denver), one of the bill’s sponsors, compared sports betting ads to those for cigarettes. “The whole point is we try to restrict that advertising from getting to kids,” he told Denver 7, a local ABC News affiliate. “At the end of the day, gambling is an addiction. It’s like alcoholism. It’s like substance abuse.” The theory seems to be that, if kids see ads for gambling apps, they’ll start gambling themselves and immediately develop an addiction that will destroy their lives. 

A recent survey by Common Sense Media on gambling among minors is helping to fuel those concerns. The survey found that 36 percent of boys aged 11 to 17 said they gambled online within the past year.

Keep reading

White House AI Framework Pushes Age Verification ID Mandate

The White House has published a National AI Legislative Framework, a set of recommendations to Congress intended to govern artificial intelligence with a single uniform standard rather than, as the document puts it, “a patchwork of conflicting state laws.”

The administration wants federal law to preempt the states. That part is straightforward. What the framework actually proposes is less straightforward.

Alongside a genuine free speech provision, the document contains age verification mandates, chat surveillance requirements, national security carve-outs that would tighten the relationship between AI companies and federal intelligence agencies, and an expansion of the TAKE IT DOWN Act, a law that we have already flagged for lacking adequate safeguards against censorship.

The White House is presenting all of this as part of the same coherent package.

Start with the child protection section: Congress should establish “commercially reasonable, privacy protective, age-assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors.” Age verification on AI platforms. The framework calls these requirements “privacy protective.”  They are not.

There is no version of meaningful age verification that doesn’t require collecting sensitive personal data, and there is no version of collecting sensitive personal data at scale that isn’t a breach waiting to happen.

The only tools platforms have are identity-based checks (government IDs, biometric scans, credit card data, and third-party verification services) or biometric estimation.

The only way to prove that someone is old enough to use a site is to collect personal data about who they are.

In October 2025, Discord identified 70,000 users globally who potentially had their photo IDs exposed to hackers.

Keep reading

‘Sexy Suicide Coach:’ OpenAI Delays AI Porn Feature over Safety Uproar

OpenAI has postponed the launch of its controversial “adult mode” feature following intense pushback from its own advisory council and concerns about technical safeguards failing to protect minors.

The Wall Street Journal reports that CEO Sam Altman first proposed the feature last year, arguing for the need to “treat adult users like adults” by enabling erotic text conversations. Originally scheduled for Q1 this year, the rollout has been pushed back by at least a month.

The proposal triggered fierce opposition from OpenAI’s own handpicked advisory council on well-being and AI. At a January meeting, advisers unanimously expressed fury after learning the company planned to proceed despite their reservations. One council member warned OpenAI risked creating a “sexy suicide coach” — a reference to cases where ChatGPT users had developed intense emotional bonds with the bot before taking their own lives.

The technical problems are just as serious. OpenAI’s age-prediction system — designed to block minors from accessing adult content — was misclassifying minors as adults roughly 12 percent of the time during internal testing. With approximately 100 million users under 18 each week on the platform, that error rate could expose millions of children to explicit material. The company has also struggled to lift restrictions on erotic content while still blocking nonconsensual scenarios and child pornography.
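Taking the article’s own figures at face value, the implied exposure is straightforward to compute. This is a rough upper bound, assuming the internal test error rate generalizes to the full under-18 user base:

```python
weekly_minor_users = 100_000_000   # ~100 million users under 18 each week
misclassification_rate = 0.12      # minors flagged as adults in testing

potentially_misclassified = int(weekly_minor_users * misclassification_rate)
print(f"{potentially_misclassified:,}")  # prints "12,000,000"
```

That back-of-the-envelope figure is where the “millions of children” estimate comes from.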

Internal documents reviewed by the Journal identified additional risks: compulsive use, emotional overreliance on the chatbot, escalation toward increasingly extreme content, and displacement of real-world relationships.

Keep reading

Brazil Launches Mandatory Age Verification Law for Online Platforms

Brazil’s Digital ECA (Estatuto da Criança e do Adolescente Digital) took effect today, March 17, requiring nearly every tech product accessible to children to clear a long list of compliance obligations.
Apps, operating systems, app stores, video games, social networks: all potentially covered, all facing fines of up to 50 million Brazilian reais (roughly US$9.44 million) or 10% of their Brazilian revenue for non-compliance.

As always, the framing is child protection. The infrastructure being built is a national age verification system woven into the fabric of internet access.

“Brazil has stepped forward as the first country in Latin America to pass a dedicated law to protect children’s online privacy and safety,” goes the official line.

Every major technology platform operating in Brazil must now determine how old its users are and restrict what they can see accordingly. The checkbox that said “I am over 18” is explicitly banned.

Article 37’s sole paragraph states that regulations “may not, under any circumstances, impose, authorize, or result in the implementation of mechanisms of massive, generic, or indiscriminate surveillance.”

Then Article 9 bans self-reported age. Article 12 demands “auditable” verification. The law prohibits the only mechanism that would make the law work.

Auditable, non-self-declaration age verification requires collecting something real about you.

The law permits a range of methods: government ID, biometric face scanning, behavioral pattern analysis that watches how you type and what you click, age inference from activity data, and educational history.

Every single one of these collects sensitive personal information and creates a record. There is no method on the approved list that doesn’t involve building exactly the kind of identity infrastructure Article 37 claims to forbid. The legislators either didn’t notice the contradiction or they noticed and didn’t care.

The obligation falls on platforms, not directly on every individual user. But the effect is the same. Platforms that want to comply need to verify who you are and how old you are before showing you restricted content. If you want to see it, you provide the data. If you don’t provide the data, you don’t get access.

Keep reading

Florida Gives Tech Platforms Deadline for Age ID Checks

Florida’s attorney general has handed tech companies an ultimatum: build identity verification systems into your platforms by April 8, or his office starts filing lawsuits.

The deadline comes as a federal appeals court hears arguments this week on whether the state can legally force millions of users to prove who they are before accessing social media.

The law driving this, HB 3, bans anyone under 14 from social media entirely and requires parental consent for 14- and 15-year-olds. It also forces adult content sites to verify visitors are 18 or older.

Attorney General James Uthmeier gave tech companies 30 days to implement age restrictions and 60 days to deploy parental consent mechanisms. “It is the law of the land,” he said at an Orlando event on March 9. Non-compliance means litigation.

What Florida is actually mandating is a digital ID checkpoint at the entrance to the internet. The law doesn’t specify which verification methods qualify as “reasonable.” It doesn’t cap how long platforms can retain identity documents. It doesn’t limit what platforms can do with the surveillance infrastructure once it’s built. Florida gets the policy win.

Users hand over their documents. The data sits in corporate systems indefinitely, available for breaches, subpoenas, and purposes nobody has disclosed yet.

Uthmeier even named TikTok and Discord specifically. Discord’s attempt to introduce digital ID age verification has been met with much backlash, especially after a leak of over 70,000 government IDs. Uthmeier appears unconcerned.

NetChoice, co-plaintiff in the legal challenge, named this directly: the law creates a security risk by “mandating the surrender of sensitive information.” That’s the part Florida’s child-protection framing is designed to obscure. Every minor blocked from TikTok requires millions of adults to first prove they aren’t minors. The verification burden falls on everyone.

Keep reading

The App Store Accountability Act Is A Privacy Nightmare Disguised As Child Protection

Washington has discovered a familiar political trick: wrap a flawed policy in the language of protecting children and hope nobody reads the fine print. The latest example is the App Store Accountability Act, a bill championed by lawmakers who appear eager to regulate the internet without understanding how it actually works.

Supporters insist the legislation will protect kids online. In reality, it risks undermining privacy, violating constitutional protections, and creating a cybersecurity disaster in the process.

And remarkably, Congress is pushing forward with this even though federal courts have already signaled that this exact regulatory model is unconstitutional.

The App Store Accountability Act would require app stores to verify the ages of every user and share age information with app developers. On paper, that sounds straightforward. In practice, it would force companies to collect massive amounts of sensitive personal data simply to download everyday apps.

Want to download a weather app? Verify your age.

Want to install a calculator? Verify your age.

Want to read the news? Verify your age.

The practical result is obvious: app stores would be compelled to gather highly sensitive identity data on tens of millions of Americans and then distribute that information to countless third-party developers.

This could be one of the largest digital identity honeypots ever conceived.

Security experts have been warning about this for months. In fact, 419 cybersecurity and privacy academics from 30 countries recently signed an open letter warning that large-scale age verification systems are “dangerous and socially unacceptable” because they create enormous new attack surfaces for hackers and data thieves.

The logic is simple. If every app download requires age verification, that means sensitive identity data must be stored, transmitted, and accessed across thousands of services. Instead of limiting the spread of personal information, the bill effectively multiplies it.

For cybercriminals, it would be a dream target.

Keep reading

Online age-verification tools spread across U.S. for child safety, but adults are being surveilled

New U.S. laws designed to protect minors are pulling millions of adult Americans into mandatory age-verification gates to access online content, leading to backlash from users and criticism from privacy advocates that a free and open internet is at stake. Roughly half of U.S. states have enacted or are advancing laws requiring platforms — including adult content sites, online gaming services, and social media apps — to block underage users, forcing companies to screen everyone who approaches these digital gates.

“There’s a big spectrum,” said Joe Kaufmann, global head of privacy at Jumio, one of the largest digital identity-verification and authentication platforms. He explained that the patchwork of state laws varies in technical demands and compliance expectations. “The regulations are moving in many different directions at once,” he said.

Social media company Discord announced plans in February to roll out mandatory age verification globally, which the company said would rely on verification methods designed so facial analysis occurs on a user’s device and submitted data would be deleted immediately. The proposal quickly drew backlash from users concerned about having to submit selfies or government IDs to access certain features, which led Discord to delay the launch until the second half of this year.

“Let me be upfront: we knew this rollout was going to be controversial. Any time you introduce something that touches identity and verification, people are going to have strong feelings,” Discord chief technology officer and co-founder Stanislav Vishnevskiy wrote in a Feb. 24 blog post.

Websites offering adult content, gambling, or financial services often rely on full identity verification that requires scanning a government ID and matching it to a live image. But most of the verification systems powering these checkpoints — often run by specialized identity-verification vendors on behalf of websites — rely on artificial intelligence such as facial recognition and age-estimation models that analyze selfies or video to determine in seconds whether someone is old enough to access content. Social media and lower-risk services may use lighter estimation tools designed to confirm age without permanently storing detailed identity records.  

Vendors say a challenge is balancing safety with how much friction users will tolerate. “We’re in the business of ensuring that you are absolutely keeping minors safe and out and able to let adults in with as little friction as possible,” said Rivka Gewirtz Little, chief growth officer at identity-verification platform Socure. Excessive data collection, she added, creates friction that users resist. 

Keep reading