They’re All Ears: Apple’s Plan to Read Your Mind

We’ve handed over our location, our browsing history, our voice, our face, and our purchasing habits. In exchange, we’ve gotten convenience. Now Apple wants the one thing each of us might have thought was still ours—the electrical activity of our brain. And this time, they’re not even asking. What are we talking about here?

In January 2023, Apple quietly filed patent application US20230225659A1 with the U.S. Patent and Trademark Office. The filing describes a wearable electronic device—an earbud—equipped with multiple electrodes embedded directly into the ear tip and housing. These electrodes aren’t for audio. They are not there to improve our sound quality. They are there to read our brain—using the same EEG technology doctors use to monitor neurological activity in clinical settings. And because every ear canal is shaped differently, Apple’s patent describes a machine-learning model that figures out which electrode combinations work best for each person’s specific anatomy, then keeps refining that fit over time. The result is a reading that is accurate, continuous, and tailored to each of us personally. The digital signal is then transmitted wirelessly to our phone—and, per the patent’s own language, to a server, where it can be stored as “historic data” accessible by “another person given permission.”

Read that sentence again.

What EEG Actually Reveals
This is not science fiction, and it is worth understanding what EEG data actually captures—because it is a lot more than Apple’s marketing department will ever tell you. Brain waves are not background noise. They are a direct readout of our inner life. The alpha, beta, delta, theta, and gamma frequencies each correspond to distinct mental states—relaxation, intense focus, deep sleep, creativity, active learning. Together they paint an individual portrait of our mind that is more revealing than anything we have ever typed into a search bar or whispered to a smart speaker. These frequencies, as Loyola University researchers have noted, are also the same signals measured in polygraph tests—the ones used to determine whether someone is lying. They can reveal our stress levels, our concentration, our emotional state, and potentially flag neurological conditions that have not yet been diagnosed. As one researcher at the Neurorights Foundation put it in a Science Friday interview, neural circuits in the brain create our thoughts, emotions, memories, decision-making, and our very sense of self.

Apple wants that data streaming off our ears into their servers.
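To make the frequency-band discussion above concrete, here is a minimal, illustrative sketch of how the alpha, beta, delta, theta, and gamma bands can be measured in a raw signal. It uses SciPy’s standard `welch` spectral estimator; the `band_powers` helper and the band boundaries are our own assumptions for illustration (boundaries vary slightly across the literature), not anything from Apple’s patent.

```python
import numpy as np
from scipy.signal import welch

# Commonly cited EEG frequency bands in Hz (exact boundaries vary by source).
BANDS = {
    "delta": (0.5, 4),
    "theta": (4, 8),
    "alpha": (8, 13),
    "beta": (13, 30),
    "gamma": (30, 80),
}

def band_powers(signal, fs):
    """Estimate relative power in each EEG band of a 1-D signal sampled at fs Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * fs))
    return {
        name: float(psd[(freqs >= lo) & (freqs < hi)].sum())
        for name, (lo, hi) in BANDS.items()
    }

# Demo: a synthetic 10 Hz oscillation (alpha range, associated with relaxation)
# buried in noise, standing in for a real electrode reading.
fs = 256  # samples per second
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(len(t))

powers = band_powers(signal, fs)
dominant = max(powers, key=powers.get)
print(dominant)  # alpha dominates for this synthetic "relaxed" signal
```

The point of the sketch is how little it takes: a few lines of off-the-shelf signal processing turn a raw voltage trace into a label for a mental state.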

Are There Any Upsides?
Fair is fair—applications for in-ear EEG technology are being floated, and it’s worth addressing them. As Neurofounders reports, startups like NextSense are already developing in-ear EEG devices to improve clinical sleep staging. Detecting seizure disorders from continuous passive monitoring is another possibility. Early signals for degenerative diseases like Alzheimer’s may surface in EEG data years before symptoms appear. And researchers have argued that natural-environment EEG collection—on the couch, at work, during real life rather than inside a sterile lab—would produce more accurate data on attention and cognitive states than anything gathered under clinical conditions.

These applications sound compelling on the surface. But step back for a second. Americans are not sleeping poorly because they lack a brain-monitoring device. They are sleeping poorly because they are overprescribed, overstimulated, and undernourished—and the same medical system profiting from that reality is not exactly rushing to fix it. Handing our neural data to Apple is not a solution to a pharmaceutical-created problem. It is just a new layer of surveillance dressed up as wellness. The idea that we should surrender the electrical activity of our brains as the price of entry for better sleep tracking should raise more than a few eyebrows.

Who Gets the Data?
Here is where things get serious. A 2024 Neurorights Foundation report pulled back the curtain on 30 companies already selling consumer neurotechnology devices. What they found should stop you cold. Twenty-nine of the thirty companies claimed unlimited rights to their users’ neural data. Most had quietly written third-party data sharing directly into their terms—buried in the kind of legal language nobody reads until it’s too late. Fewer than half even encrypt the data or de-identify users. There is no federal law in the United States governing how neural data collected by consumer devices can be used or sold. A handful of states—Colorado, California, Illinois—have moved to address this, but protections remain patchwork at best.

As a published paper in PMC bluntly put it, bulk sales of neural data by tech giants to third parties may already be occurring with minimal accountability. Data brokers could soon be cataloging individual “brain fingerprints” on a mass scale—data as uniquely identifying as a fingerprint, and infinitely more revealing.

Apple has its own history of data breaches. As Pearl Cohen’s legal analysts note, the patent describes data transmission to external servers accessible by parties beyond the user. The company that couldn’t keep our Face ID data secure wants a continuous stream of our brain’s electrical activity.


Apple Fixes Bug That Allowed FBI To Read Deleted Signal Messages

Tech giant Apple has fixed a security flaw that had allowed the FBI to access a Signal user’s deleted messages through their phone’s push notification database, despite the app being deleted and messages being set to disappear.

In a security advisory released on Wednesday, Apple said it had fixed a bug that allowed “notifications marked for deletion” to be “unexpectedly retained on the device.”

In an X post on Wednesday, Signal said the update fixed the issue that made a user’s messages retrievable by law enforcement.

“Apple’s advisory confirmed that the bugs that allowed this to happen have been fixed in the latest iOS release,” Signal said.

Signal uses end-to-end encryption to secure messages between its users. The bug is a reminder that messaging encryption may not be enough to keep data protected when using certain devices or operating systems.


Apple Adds Age Verification to Digital ID in Wallet, Moves Beyond TSA Airport Checkpoints

Apple just turned on the next phase of its Digital ID rollout and the framing in the company’s support documentation is almost casual. The passport-derived credential in Apple Wallet can now be used to confirm a user is over 18 when creating an Apple Account, updating iOS, adjusting safety settings, or downloading apps rated 18+. No press release accompanied the change, by the way.

The understated rollout undersells what is actually happening. Apple, like Google, Meta, Discord, and every other consumer-facing platform of significant size, is racing to operationalize digital identity infrastructure to meet a wave of age-verification mandates landing across the US, UK, EU, and Australia.

The companies did not invent this demand; lawmakers did, but the response is arriving faster than the laws themselves, and the architecture being built right now will outlast any specific statute that prompted it.

The UK’s Online Safety Act is already forcing platforms to verify ages with documented credentials.

Discord attempted its own age-verification rollout earlier this year, paused after backlash, and has continued reworking the system. State laws in the US are moving in the same direction with Texas, Louisiana, Utah, and a growing list of others passing mandates that target app stores, social platforms, and adult content sites.

Federal proposals keep recycling similar models. The European Union is preparing its own age-verification framework. Australia has already legislated a social media ban for under-16s.

The platforms doing the verifying have a choice. They can build the credential infrastructure themselves, license it from third-party vendors who upload your passport to their servers, or hand the job to the operating system that already lives on your phone. Apple’s Digital ID, and Google’s parallel work on digital credentials in Android, are bids to be the third option. They are also bids to be the default option, because once an OS-level identity wallet exists, regulators tend to treat it as the natural place to plug in.


Apple, Google Caught ‘Helping Users’ Find Apps That Can Deepfake Nude Pictures of Real People, and, Worse, Kids Are Vulnerable Too

Tech companies Apple and Google were found to have been leading users — specifically children — to apps that could effectively pornify images through artificial intelligence.

Last Wednesday, 9to5Mac reported findings published in January by the Tech Transparency Project, which concluded both the Apple App Store and Google Play “are helping users to find apps that create deepfake nude images of women.”

The stores were even found promoting these apps and autocompleting search results for them.

About 40 percent of the top 10 apps appearing in searches for “nudify,” “undress,” and “deepnude” could “render women nude or scantily clad.”

These apps let users take two images, one ordinary and one sexually explicit, and generate a composite of the two that sexualizes the person from the ordinary photo.

9to5Mac reached out to the developer of one of these apps and was told they “had no idea it was capable of producing such extreme content.”

On Thursday, Apple responded to the outlet, saying the apps were not allowed on its store because its review guidelines prohibit sexual content.

The company said it has removed 15 apps, with others receiving notice that they will be removed if they remain in violation.

In January, California Democratic Gov. Gavin Newsom went after social media platform X with a similar allegation.

“xAI’s decision to create and host a breeding ground for predators to spread nonconsensual sexually explicit AI deepfakes, including images that digitally undress children, is vile,” he said.

“I am calling on the Attorney General to immediately investigate the company and hold xAI accountable.”


US Bill Mandates On-Device Age Verification

A bill introduced by Representative Josh Gottheimer in the House on April 13 would require Apple, Google, and every other operating system vendor to verify the age of anyone setting up a new device in the United States.

The legislation, H.R. 8250, travels under the friendlier name of the Parents Decide Act, and it is among the most aggressive surveillance mandates ever proposed for American consumer technology.

We obtained a copy of the bill for you here.

The press releases describing it lead with children. The text describes something much larger. To confirm a child is under 18, the system has to identify everyone else, too, and the bill builds the infrastructure to do exactly that.

This is child safety as a delivery mechanism for mass identification. The pattern is familiar by now. A genuine harm gets named, a sympathetic victim gets centered, and the solution proposed reshapes the digital lives of three hundred million people who were not the problem.

The Parents Decide Act follows that template with unusual precision. It takes the real suffering of real children and uses it to justify building a national identity layer underneath every device sold in the country, administered by two private companies, with the details to be filled in later.

The mandate sits in Section 2(a)(1), which obligates providers to “Require any user of the operating system to provide the date of birth of the user” both to set up an account and to use the device at all. Adults included.

There is no carve-out for adult users, no opt-out for people who simply want to turn on a phone without first handing a date of birth to Apple or Google.

The age check is the entry fee for owning a computer. What happens to that data afterward gets handed off to the Federal Trade Commission to sort out later. A federal bill that mandates identification as a condition of using a general-purpose computing device is something the United States has never had before: a national ID requirement for turning on a device.


FBI Recovers Deleted Signal Messages Through iPhone Notifications

The FBI successfully recovered private Signal messages from a defendant’s iPhone even after the app was deleted. Learn how this security loophole works and the simple setting you must change today to keep your chats private.

Many of us prefer the Signal app because its end-to-end encryption is supposed to hide our chats from everyone else. It also has a disappearing-messages feature that deletes messages after a set time.

But the Federal Bureau of Investigation (FBI) found a way to read private Signal messages on an iPhone, even after the app was deleted. A court case in Texas revealed that these messages can stay hidden in the phone’s storage longer than we expected.

How the loophole works

The case involves a woman named Lynette Sharp and an attack on a Texas detention center in July 2025. During the trial in April 2026, the FBI revealed it had recovered her messages even though she had deleted the Signal app. The bureau reportedly retrieved the messages from the iPhone’s push notification database.

During the trial, FBI Special Agent Clark Wiethorn explained how investigators accessed the evidence. When a message arrives, the phone shows a little preview on the screen, which is handled by the phone’s operating system and not the Signal app.

Even if Signal deletes the message later, the phone’s system can save a copy of that preview in its own records. To read these saved messages from Signal, the FBI used Cellebrite, a forensic tool often used by law enforcement to scan seized devices.

A key finding is that the FBI could only see incoming messages, not the ones Sharp sent, which confirms the data came from the notification storage. It shows that while the app’s encryption is strong, the phone’s operating system keeps its own logs of everything.


Apple Removes Bitchat from China App Store at Cyberspace Administration Order

Apple deleted Bitchat from the China App Store, acting on a direct order from the Cyberspace Administration of China. Jack Dorsey, who created the app, posted a screenshot of Apple’s removal notice to X with a short caption: “bitchat pulled from the china app store.”

The notice Apple sent to Dorsey is almost a copy-paste of the one it sent to Damus three years earlier. The language is identical. The accusation is identical. The CAC determined that Bitchat violates Article 3 of the Provisions on the Security Assessment of Internet-based Information Services with Attribute of Public Opinions or Capable of Social Mobilization.

That regulation, enacted in 2018, requires any online service capable of influencing public opinion or organizing collective action to undergo a government security assessment before going live. If a service hasn’t submitted to that assessment, the CAC can order it pulled.

It targets the capacity for “public opinions” and “social mobilization.” The Chinese government has decided that the ability to communicate outside state-approved channels is itself a security threat, and Apple consistently treats that determination as sufficient grounds for deletion.

Bitchat is a peer-to-peer messaging app that operates over Bluetooth mesh networks. It requires no internet connection, no phone number, no email address, and no user account.

Messages are end-to-end encrypted and stored only on the devices involved. There are no central servers to subpoena, no user databases to hand over, and no content moderation pipeline for the CAC to plug into.

Dorsey built the initial version over a single weekend in July 2025, coding it with Goose, Block’s open-source AI assistant. He published a white paper on GitHub and opened a TestFlight beta that hit its 10,000-user cap within hours.

That design is precisely the problem from Beijing’s perspective. China’s internet censorship apparatus depends on having a chokepoint.


Apple Removes Private VPN Apps From Russia App Store

Apple pulled several custom VPN clients from the Russian App Store last week, including Streisand, V2Box, v2RayTun, and Happ Proxy Utility.

These aren’t the big-name commercial VPN providers that Apple already removed in 2024 at Roskomnadzor’s request. These are tools that let users connect to their own private servers and configure manual proxies, the kind of apps that give technically savvy Russians the ability to route around state censorship without depending on any company’s infrastructure.

Russian tech outlet Kod Durova first reported the removals, noting that the same apps remain available through Google Play on Android.

Days before the removals surfaced, Digital Development Minister Maksut Shadayev announced the Kremlin’s most aggressive anti-VPN campaign yet. “We have an obligation to fulfill the tasks that have been set before us. In this case, the task is to reduce the use of VPNs,” Shadayev said on the state-backed messenger Max.

He linked the push to what he called “long, difficult and ultimately unsuccessful” talks with foreign tech companies over compliance with Russian law.


Apple Expands Age Verification to Singapore & South Korea

Apple’s identity verification demands are spreading across Asia. Starting in late March, the company expanded age verification requirements in both Singapore and South Korea, adding these countries to a growing list alongside the UK, where users must prove they’re adults before Apple lets them fully use their own devices.

Singapore has been partially locked down since February 24, when Apple began blocking downloads of apps rated 18+ unless users confirmed they were adults.

That initial wave also hit Australia and Brazil. But the late March update goes further, bringing Singapore’s requirements closer to the UK model. Apple now requires Singaporean users to confirm they’re 18 or older to download or purchase 18+ apps, using a credit card, a driving license, a National Registration Identity Card, or a Foreign Identification Number card. Passports, debit cards, and gift cards aren’t accepted.

That list of acceptable documents tells you something about Apple’s priorities. Passports are internationally recognized government IDs, but they don’t work here. Debit cards, which millions of adults use as their primary payment method, are also excluded because minors can technically hold them.


Apple UK Age Verification Chaos: Users Face Failed Scans, Rejected Passports, and Forced Content Filters

Apple’s iOS 26.4 age verification system is failing UK users who don’t have a credit card or photocard driving license, leaving them with no way to prove they’re adults on devices they’ve owned for years.

The system arrived without warning, without explanation, and without any apparent consideration for the people who don’t fit Apple’s narrow assumptions about what a British adult looks like.

No Warning, No Communication

Apple sent no email and included no mention of age verification in the iOS 26.4 release notes it shared publicly.

Unless you’d been following the developer beta track, where the feature appeared in February, or reading Reclaim The Net’s earlier coverage, the first you knew about it was a prompt on your screen after restarting your phone.

That’s how 35 million UK iPhone users found out their devices now require identity documents to function normally. A “Confirm You Are 18+” label appeared at the top of Settings, and anyone who couldn’t or wouldn’t comply got silently downgraded. Apple’s Web Content Filter switched on, blocking websites across Safari and every third-party browser. Communication Safety activated, scanning images and videos in Messages and FaceTime for nudity. Features that worked fine the day before now require government-approved proof of adulthood.

A company that controls what software runs on every iPhone it sells decided overnight that UK users needed to hand over identity documents to keep using the devices they already paid for. And it didn’t bother to tell them it was coming.
