Apple Forces UK iPhone Age Checks in iOS 26.4

With iOS 26.4, Apple has turned every iPhone in the UK into an identity checkpoint. The update, released March 24, requires all UK users to confirm they’re 18 or older before accessing certain features and services on their Apple Account.

UK communications regulator Ofcom called it “a real win for children and families.”

The infrastructure being built is more of a problem than that framing suggests.

Apple has, without warning, placed a gatekeeper on the devices of 35 million UK users who paid good money for full-featured smartphones and now find themselves holding something closer to a supervised children’s tablet.

It’s a corporate ultimatum: hand over sensitive personal data or lose functionality you already paid for.

The verification prompt appears immediately after the update installs.

Apple checks whether your account already has a credit card linked or whether the account has existed long enough to establish you as an adult.

For many existing users, the process is essentially automatic. For everyone else, the options narrow quickly: link a credit card, scan a government-issued photo ID, or accept that your account defaults to teen restrictions, with Apple’s Web Content Filter and Communication Safety features switched on across all browsers, messaging apps, and FaceTime to monitor communications.

Web Content Filter blocks websites Apple classifies as explicit, operating across Safari and third-party browsers alike.

Communication Safety scans incoming and outgoing images and videos for nudity. Both activate silently for anyone who hasn’t cleared the adult threshold. Skip verification, or lack a credit card and a government ID, and Apple decides what you’re allowed to see.

Users without a credit card or government ID have no other path, and reports from UK users confirm it: link a card, upload an ID, or live with restricted access. The system doesn’t offer alternatives.
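Stripped down, the verification flow described above is a short decision tree. The sketch below restates it in Python purely for illustration; the signal names, the account-age threshold, and the helper class are assumptions, since Apple has not published how the check actually works.

```python
from dataclasses import dataclass

@dataclass
class AppleAccount:
    # Hypothetical signals, named only for illustration.
    has_linked_credit_card: bool
    account_age_years: int
    completed_photo_id_scan: bool = False

# Purely assumed threshold; Apple does not disclose how long an account must
# exist before it is treated as belonging to an adult.
ASSUMED_ADULT_ACCOUNT_AGE = 18

def post_update_access(account: AppleAccount) -> str:
    """Sketch of the iOS 26.4 UK age-check flow as described above."""
    # Existing signals: a linked credit card or a long-standing account clears
    # the check automatically, which is why many existing users see nothing.
    if account.has_linked_credit_card or account.account_age_years >= ASSUMED_ADULT_ACCOUNT_AGE:
        return "full access (cleared automatically)"
    # Otherwise the user must actively verify with a card or a government ID...
    if account.completed_photo_id_scan:
        return "full access (verified with photo ID)"
    # ...or the account defaults to teen restrictions, with Web Content Filter
    # and Communication Safety switched on.
    return "teen restrictions: Web Content Filter and Communication Safety enabled"

# A user with no card, a new account, and no ID scan lands in the restricted state.
print(post_update_access(AppleAccount(has_linked_credit_card=False, account_age_years=2)))
```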

Ofcom praised the rollout in a statement, saying it had coordinated extensively with Apple and others on age assurance under the Online Safety Act: “Apple’s decision that the UK will be one of the first countries in the world to receive new child safety protections on devices is a real win for children and families…We’ve worked closely with Apple and other services to ensure they can be applied in a variety of contexts in order to ensure users are protected. This will build on the strong foundations of the Online Safety Act, from widespread age checks that keep young people away from harmful content, to blocking high-risk sites and stepping up action against child sexual abuse material.”

Keep reading

FC Barcelona Fined for Privacy Violations Over Biometric Data Collection

FC Barcelona got fined €500,000 ($579,219) for scanning the faces and recording the voices of over 100,000 members without doing the legal homework first.

Spain’s data protection authority, the AEPD, found the club had deployed biometric identity verification during a membership census update and processed all of it without a valid Data Protection Impact Assessment.

Members renewing their details remotely were required to either submit a facial scan through their device camera or record their voice. Both systems were live, both were processing biometric data at scale, and the documentation Barcelona produced to justify any of it didn’t meet the bar GDPR sets for high-risk processing.

Article 35 of the GDPR requires organizations to conduct a DPIA before deploying any system likely to create a high risk for individuals. Biometric data used for identification qualifies automatically.

Processing that touches more than 100,000 people, including minors, qualifies. Using new technologies qualifies. Barcelona’s system hit all three. The AEPD concluded the club’s documentation was missing the essential components of a genuine assessment: no real necessity and proportionality analysis, no adequate evaluation of what the processing actually risks for the people whose faces and voices it captured.

The AEPD’s decision in case PS-00450-2024 makes one point with particular clarity: consent doesn’t substitute for a DPIA. Barcelona had asked members to agree to biometric data collection, and members had agreed.

That agreement is legally irrelevant to the separate procedural obligation to assess risk before the system goes live. The GDPR treats them as independent requirements. Satisfying one doesn’t discharge the other.

What a valid DPIA actually requires, according to the decision, is a clear description of the processing, a genuine necessity and proportionality assessment, a detailed risk evaluation, proposed mitigation measures, and a residual risk assessment after mitigations are applied. Organizations that generate DPIA documentation as a compliance checkbox, without substantively working through those questions, remain exposed regardless of what consent language they put in front of users.
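To make the decision’s logic concrete, the checklist below restates the Article 35 triggers and the DPIA components listed above as a minimal Python sketch. The field and function names are illustrative assumptions, not an official schema, and nothing here is legal advice.

```python
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    # Illustrative fields mirroring the triggers described above.
    uses_biometric_identification: bool   # faces or voices used to identify people
    data_subject_count: int
    uses_new_technologies: bool

def dpia_required(activity: ProcessingActivity) -> bool:
    """Any one of the cited triggers is enough to require a DPIA before launch."""
    return (
        activity.uses_biometric_identification
        or activity.data_subject_count > 100_000
        or activity.uses_new_technologies
    )

@dataclass
class DPIA:
    # The components the AEPD says a genuine assessment must contain.
    describes_processing_clearly: bool
    assesses_necessity_and_proportionality: bool
    evaluates_risks_in_detail: bool
    proposes_mitigations: bool
    assesses_residual_risk: bool

def dpia_is_substantive(dpia: DPIA) -> bool:
    """User consent is irrelevant here; every component must actually be present."""
    return all(vars(dpia).values())

# Barcelona's processing hit all three triggers, so a DPIA was required,
# and the documentation it produced failed the component check.
census = ProcessingActivity(uses_biometric_identification=True,
                            data_subject_count=100_001,  # over 100,000 members
                            uses_new_technologies=True)
assert dpia_required(census)
```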

The appetite for facial biometric data has become near-universal across industries, and the Barcelona case lands in a moment when that appetite is accelerating faster than the rules meant to govern it.

Keep reading

School used AI to purge library of ‘inappropriate’ books including Orwell’s 1984 and Twilight, with librarian branded ‘safeguarding risk’

A school used artificial intelligence to censor books in its library including George Orwell’s 1984 and Stephenie Meyer’s Twilight, campaigners claim.

An investigation by Index on Censorship found that a secondary school in Greater Manchester had earmarked almost 200 books deemed ‘inappropriate’ for removal from its library.

These also included Michelle Obama’s autobiography, Becoming, and The Notebook by Nicholas Sparks.

The charity, which campaigns for free expression, says the school got AI to generate summaries justifying why each book was not suitable for pupils.

Incredibly, the school librarian was also put under a ‘safeguarding’ investigation – leading to her resignation – for allowing the books in the library.

Index said it would not reveal the name of the librarian or the school because she is vulnerable.

Although many of the books were initially removed, it is not known if all of those on the list remain banned from the library.

The case was exposed this week by the school librarian, who spoke to the organisation on condition of anonymity.

She said the purge began in November 2025, when the headteacher demanded the removal of Laura Bates’ nonfiction title Men Who Hate Women, which is an exposé of incel culture.

The head thought the book was inappropriate due to ‘exposure of misogynistic beliefs’, even though it was kept in a special section for older pupils.

Off the back of this incident, the school then launched an ‘investigation’ into the librarian, and closed the library as a ‘temporary safeguarding measure’.

She was then asked to remove any book that was ‘not written for children’, any with ‘themes that could be upsetting to children’, and any that were ‘inappropriate or constitute a safeguarding risk’.

She told Index: ‘I was absolutely gobsmacked. I couldn’t believe what I was hearing.’

The school also reported her to the council as a safeguarding risk for introducing ‘inappropriate’ books, and threatened her with gross misconduct proceedings.

The school shared with her a list, seen by Index, of 193 books which it deemed might be inappropriate.

Index said it had seen another document in which the school admitted the reasons given for the censorship had been written by AI.

Keep reading

Trump Officials Flee Into the Bunker

In the last few days, drones have reportedly been spotted over Fort Lesley J. McNair, in Washington, DC, where Marco Rubio and Pete Hegseth live. Officials are worried, and so am I, though for different reasons. 

Did you know our secretary of state and secretary of defense live on an army base? 

And they’re not the only ones.

Pam Bondi, Stephen Miller, and other senior Trump officials have moved into military housing. Tulsi Gabbard and Russell Vought are browsing the available housing, but have not moved yet. One more senior official, unidentified, has been advised to move by security officials.

The official excuse is that they face threats from a range of purported foes, including, we are told, cartels, foreign adversaries, and protesters. 

But I can’t help feeling we’re not getting the real story. And, frankly, what that might be chills me. 

Why does a king (and his courtiers) go into his castle and pull up the drawbridge? 

Because they see themselves as besieged — or are planning to do something they know will cause them to be besieged.   

Harvard professor Steven Levitsky — an expert on threats to democracies — made this sobering observation:

It is something you never see in a democracy. Government officials live on military bases or other sort of fortified zones [only] in authoritarian regimes.

In authoritarian regimes.

Coming at a time when fair elections are openly threatened and our constitutionally guaranteed freedoms challenged at every turn, when we see this group withdraw to a hardened inner sanctum, we’d better be paying close attention. 

But thus far little attention has been paid to this matter, and what it may mean. 

Keep reading

Georgia House Committee Quietly Removes Key Section of IT Expert’s Public Comment on Critical Voting Machine Vulnerabilities

During a hearing in the Georgia legislature on March 17th, 2026, an IT professional with 40 years of experience testified during public comment and offered to show the committee evidence of vulnerabilities in the election software the state uses.

Mark Cook, who testified as an expert witness in Tina Peters’ trial in 2024, used his time during public comment to offer evidence to the Georgia House Governmental Affairs Committee as they considered a Georgia election bill.

The online recording of the hearing that includes public comment contains the following from Cook:

“I have evidence right here, that I was hoping to show you, that can show that there are absolutely, and I know you guys have heard this, but I’ve got the proof right here, backdoors built-in to electronic voting systems that allow flipping, changing of votes.  The testing labs all missed this.  Then they’re blindly certified.  Then we’re told that everything is safe and secure.  It’s absolutely not.”

Mysteriously, according to the timestamp shown in the top right corner, Cook’s public comment at one point jumps from 1:02:18 to 1:02:29 instantly. A cut appears to have been made to his public comment.

Fortunately, Cook’s public comment was also recorded. His full statement, including the portion removed from the official recording, reads:

“I have evidence right here, that I was hoping to show you, that can show that there are absolutely, and I know you guys have heard this, but I’ve got the proof right here, backdoors built-in to electronic voting systems that allow flipping, changing of votes, infiltrating the system, all built in, set up in a way that makes it easy, and untraceable.  I can demonstrate this to you even while I’m still here in this building and I’m happy to do so.  The testing labs all missed this.  Then they’re blindly certified.  Then we’re told that everything is safe and secure.  It’s absolutely not.”

Keep reading

Blackburn’s TRUMP AMERICA AI Act Repeals Section 230, Expands AI Liability, and Mandates Age Verification

Senator Marsha Blackburn has introduced a 291-page legislative discussion draft that would reshape how information is allowed to exist online.

The TRUMP AMERICA AI Act, officially titled “The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry” Act, bundles together Section 230 repeal, expanded AI liability, age verification mandates, and a stack of additional bills that have been circulating separately for years.

All of it is wrapped in a national AI framework that the bill claims is tied to President Trump’s December Executive Order. The bill is framed as pro-innovation and pro-safety, designed to “protect children, creators, conservatives, and communities” while positioning the US to win the global AI race.

What the actual 291 pages describe is a system that centralizes regulatory authority, removes the legal protections platforms currently rely on, and hands new enforcement tools to federal agencies, state attorneys general, and private litigants simultaneously.

We obtained a copy of the bill for you here.

The legal foundation of the modern internet is Section 230 of the Communications Decency Act. It shields platforms from being sued for the content that users post. Without Section 230, platforms could become legally responsible for what their users post, which could mean anything controversial, contested, or legally ambiguous becomes a liability they’ll quietly remove rather than defend.

Blackburn’s bill repeals it entirely, after a two-year transition period.

Platforms and AI developers could face lawsuits for “defective design,” “failure to warn,” or deploying systems deemed “unreasonably dangerous.”

AI platforms would be incentivized to heavily monitor users.

Keep reading

Chinese scientists have turned mosquitoes into flying vaccines — that can still bite humans

Researchers from the nation that likely unleashed COVID-19 on the world have transformed mosquitoes into flying syringes.

Some researchers, including a group at the Gates Foundation-backed Leiden University Medical Center in the Netherlands, have already attempted in recent years to fashion mosquitoes into flying vaccine delivery systems with human targets in mind.

Now, scientists at the state-controlled Chinese Academy of Sciences — an institution that has a strategic partnership with the People’s Liberation Army Academy of Military Sciences — have targeted bats, purportedly designing mosquitoes to instead deliver recombinant vesicular stomatitis virus-based rabies and Nipah vaccines to the flying mammals.

Like rabies, Nipah is a potentially deadly virus found in animals. Whereas rabies has nearly a 100% fatality rate in humans once symptoms manifest, the estimated case fatality rate for Nipah virus ranges from 40% to 75%.

Keep reading

US to embed Palantir AI across entire military: Report

The Pentagon has designated Palantir’s Maven artificial intelligence system as an official program of record, in a move that will lock in the weapons-targeting technology long term across the US military, Reuters reported on 21 March.

The move was announced in a letter from Deputy Secretary of War Steve Feinberg issued to senior Pentagon leaders and US military commanders on 9 March.

Feinberg wrote that embedding Palantir’s Maven Smart System would provide the military “with the latest tools necessary to detect, deter, and dominate our adversaries in all domains.”

Maven is the US military’s main AI system, analyzing data from satellites, drones, radars, sensors, and reports. It uses AI to interpret data and swiftly identify and strike targets like enemy vehicles, buildings, and weapons.

The White House claims US warplanes have hit more than 7,800 targets since the war on Iran began just three weeks ago.

“It is imperative that we invest now and with focus to deepen the integration of artificial intelligence (AI) across the Joint Force and establish AI-enabled decision-making as the cornerstone of our strategy,” Feinberg wrote.

During a presentation at a Palantir event earlier this month, Pentagon official Cameron Stanley illustrated how the Maven program identifies targets.

“When we started this, it literally took hours to do what you just saw,” he said.

Keep reading

‘CODE RED’: The Alarming Research on the Impact of ‘AI Companions’ and Widespread Loneliness

The popularity of “AI companions” is exploding as people turn to artificial intelligence to serve as their friend or even life partner. But as Wynton Hall’s new book CODE RED documents, the research on the psychological impact of AI companions is alarming — far from filling an emotional gap in the lives of humans, these “companions” are making people lonelier.

Breitbart News social media director and author Wynton Hall reveals in his new book, Code Red: The Left, the Right, China, and the Race to Control AI, that AI companions are part of “a growing digital trend that upends traditional notions of courtship, dating, and marriage.”

“What once seemed a niche oddity has exploded into an industry boasting millions of users,” Hall writes in a chapter titled, “AI Girlfriends, Loneliness, And The Dark Side of Digital Sexualization.”

“Companies now market customizable, generative AI chatbots that provide everything from companionship and conversation to interactive role-play that spans the spectrum from platonic to pornographic — something that was once confined to the realm of science fiction,” he writes in CODE RED.

Hall cites several contributors to the spread of loneliness, noting, “Dating apps and so-called hookup apps have created a hypercompetitive dating and mating market in which the most attractive and impressive singles win the lion’s share of swipes and suitors.”

“Moreover, social media algorithms reward attention-seeking behavior, materialism, and aesthetic perfection as users soak up instant ‘dopamine hits’ to numb the monotony of daily life with the ease of a thumb scroll,” the author writes.

“And with women having surpassed men educationally and earning more money than in previous generations, the expectations and standards for those drawn to hypergamous mate selection have risen as well,” Hall states.

These factors, among others, “have combined to usher in a new era of AI-driven digital intimacy,” he adds.

And business is booming, with tech executive Greg Isenberg predicting “Someone will build the AI-version of Match Group” and end up making over $1 billion.

Hall notes in CODE RED that Isenberg revealed he had this revelation after meeting a 24-year-old single man in Miami who admitted to dropping $10,000 per month on AI girlfriends.

Hall also writes about a 36-year-old mother of two in the Bronx, who said, “I have never been more in love with anyone in my entire life,” when speaking of her AI boyfriend, dreamed up by the Replika platform.

Keep reading

The Feds Are Investing in Wearable Health Trackers. That Could Put Your Private Data at Risk.

By gathering continuous data about sleep, heart rate, and physical activity, biowearable devices can give individuals more control over their well-being. But they also create a detailed digital record of our daily lives—one that the federal government may soon be able to access readily.

Consider this scenario.

You’ve recently received a government-subsidized biowearable. Accordingly, the authorities now know when you’re sleeping, because the device reports your sleep cycle, location, and daily movements in real time to a cloud server accessible through a legal process. It knows when you’re home. It knows when you leave.

Those data are then obtained by an FBI field office (either through direct purchase or, if necessary, a legal process), because a federal prosecutor has decided that your criticism of immigration enforcement operations and your social media posts supporting Immigration and Customs Enforcement protesters constitute “incitement to violence” against federal agents. Under the Trump administration’s elastic (and legally dubious) domestic terrorism definitions and designations, that is enough to open a criminal investigation.

And because the government has known for weeks when you’re at home sleeping, it knows exactly when to break down your door.

That scenario may sound far-fetched, but it is getting closer to reality. In March, the Department of Health and Human Services (HHS) announced that the Advanced Research Projects Agency for Health (ARPA-H) would begin investing in new biowearable technologies through a program it called Delphi, after the ancient Greek sanctuary where the maxim “know thyself” was inscribed. It’s a fitting name for a program designed to help people understand their bodies, but it also raises an uncomfortable question: Who else might come to know them just as well?

The program aims to develop biosensors capable of continuously monitoring cytokines (cellular inflammation markers) and hormone levels, going substantially beyond what current wearables can detect. Funding will be determined on a competitive basis as private-sector stakeholders submit proposals; no specific appropriation has been announced.

It remains unclear why this taxpayer funding is necessary in a field that is already thriving. The global wearables market was valued at roughly $43 billion in 2024 and is projected to exceed $168 billion by 2030.
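For context, those two figures imply an average growth rate of roughly 25 percent a year. A quick back-of-the-envelope calculation, assuming only the six-year span between the two estimates:

```python
# Implied compound annual growth rate (CAGR) between the two reported estimates.
value_2024 = 43e9     # roughly $43 billion (2024 valuation)
value_2030 = 168e9    # projected to exceed $168 billion by 2030
years = 2030 - 2024   # six-year span between the two endpoints

cagr = (value_2030 / value_2024) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.1%} per year")  # about 25.5% per year
```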

Devices worn on the wrist, finger, or skin can already monitor heart rates, blood oxygen levels, sleep patterns, physical activity, and, in the case of continuous glucose monitors, blood sugar levels in real time. Some smartwatches can even conduct electrocardiograms capable of detecting irregular heart rhythms, such as atrial fibrillation.

Until recently, people could access most of this information only during periodic visits to a clinic or hospital. Biowearables now enable people to monitor many of these signals continuously in everyday life.

Keep reading