School librarians told to remove art books with ‘historic paintings of nudes’ in latest censorship row

School librarians are being told to remove art books with ‘historic paintings of nudes’ in the latest censorship controversy revealed today.

The ‘insane’ trend was revealed by a delegate at the annual conference of the National Education Union (NEU), who said she had heard ‘many accounts’ of art books being cut.

It comes after a school librarian at Lowry Academy in Salford, Greater Manchester, revealed last week she had been forced to remove books deemed ‘inappropriate’ by management.

Bosses used artificial intelligence to earmark almost 200 books for removal, including George Orwell’s 1984 and Stephenie Meyer’s Twilight.

The school later admitted it had removed ‘a small number of books’ but said it had put most of them back, into ‘age-appropriate categories’.

The Lowry Academy case prompted the NEU to pass an urgent motion yesterday to ‘fight censorship and defend librarians’.

The union said that although the woman in the original controversy is not part of the union, it wanted to protect its own librarian members from suffering a similar fate.

Proposing the motion, Kristabelle Williams, a member from Lewisham, said: ‘We cannot ignore the issues that this case has brought up.

‘We can take action as a union now to try to make sure it doesn’t happen again.’

She said the support of the union would give librarians the ‘confidence to not self-censor and resist the chilling effect that this case will cultivate’.

She added members fear there is now an ‘increased risk of external complaints’ and ‘hate campaigns’ about books in their libraries.

Also speaking during the debate was Laura Butterworth, a member from Tameside, Greater Manchester, which is near Lowry Academy.

Keep reading

AI EXPANSION RAISES QUESTIONS ON JOBS, POWER, AND SECURITY

Jefferey Jaxen investigates the rapidly expanding influence of artificial intelligence and the growing concerns surrounding its impact on society, the economy, and global power structures.

As governments and corporations race to deploy increasingly powerful AI systems, the technology is transforming industries, automating jobs, and reshaping how information is created and distributed. While some see AI as a driver of unprecedented innovation and productivity, others warn it could deepen economic inequality, concentrate control among a handful of tech giants, and introduce new forms of digital influence.

Concerns are also intensifying around AI’s role in national security. Leaders in the field, including Alex Karp of Palantir Technologies, have openly stated that their platforms are designed to support military operations, including targeting and eliminating enemies—highlighting the growing intersection between AI development and modern warfare.

At the same time, new research suggests that people are often able to distinguish between human-created and AI-generated content, raising questions about authenticity and trust in an increasingly synthetic information landscape. Paradoxically, studies also indicate that workers who rely on AI tools frequently report feeling an increased workload, challenging the assumption that automation will simply make jobs easier.

As AI becomes more deeply embedded in everyday life, Jefferey asks whether society is truly prepared for the cultural, economic, and intellectual shifts ahead—and who ultimately controls the future of this rapidly evolving technology.

Keep reading

FDA Launches New AI-Powered System to Track Drug and Vaccine Side Effects

For decades, the system designed to catch dangerous side effects from drugs, vaccines, and consumer products has been failing. Not because the problems weren’t happening — but because the infrastructure meant to track them was too fragmented, too slow, and too burdensome to keep up.

The result was a growing gap between what patients experienced and what showed up in federal safety records. Patterns of harm went undetected or took far too long to surface, and the public was left making health decisions based on an incomplete picture.

Now the U.S. Food and Drug Administration (FDA) is attempting to close that gap with a sweeping technology overhaul — one that could fundamentally change how quickly you see safety signals and how much control you have over your own health choices. Here’s what the new system does, what it replaces, and why it matters.

Keep reading

China Unleashes Machine Gun-Toting Robot Wolves With “Collective Brain”

China has released the first footage of its “robot wolves” storming through simulated urban combat, armed with machine guns and upgraded for real battlefield carnage.

These aren’t cute Boston Dynamics knockoffs anymore – they’re pack-hunting death machines designed by an institute with deep People’s Liberation Army ties, and they’re getting deadlier by the day.

As noted in a viral post that has racked up over two million views, the footage shows the wolves operating in coordinated swarms during street battle drills.

The system comes from the Southwest Automation Institute. Developers call it “100% indigenously designed and 100% domestically produced.” A non-military version is even listed for civilian sale on JD.com for $73,500 – though how closely it matches the PLA-grade model remains unclear.

The Southwest Automation Institute’s own follow-up analysis even admits the counterintuitive reality of this new warfare: “on tomorrow’s battlefields, war robots may not be the ultimate killing machines—they could actually reduce casualties. They spare human troops the need to storm positions directly, pushing more engagements into ‘drone vs. robot’ territory. And unlike two groups of soldiers grinding each other down in brutal close-quarters fighting, troops facing robots know the machines cannot be outfought. A handful of robots can clear and secure an entire street in minutes. The clash ends fast, and both sides bleed far less.”

But the post quickly adds the chilling caveat: “The real battlefield is far more complex than any training exercise. The ultimate test for these Machine Wolves will be whether they can reliably distinguish friendly troops from enemy forces—and, most critically, identify civilians who suddenly appear in the chaos.”

Keep reading

Government Actions Against Anthropic Are ‘Classic First Amendment Retaliation’

Good news in the battle between the federal government and the AI company Anthropic: A federal judge has temporarily blocked the Department of Defense from declaring Anthropic a “supply chain risk,” which would have barred any federal agency or contractor from doing business with the company.

The government’s “conduct appears to be driven not by a desire to maintain operational control when using AI in the military but by a desire to make an example of Anthropic for its public stance on the weighty issues at stake in the contracting dispute,” wrote U.S. District Judge Rita Lin in an order granting Anthropic’s motion for preliminary injunction.

“Weighty issues” might undersell it. The supply chain risk designation—usually reserved for foreign companies—and President Donald Trump’s declaration that all federal agencies must “IMMEDIATELY CEASE all use of Anthropic’s technology” came after Anthropic refused to remove contract language preventing the Pentagon from using its AI system, Claude, for autonomous weapons or mass domestic surveillance.

Rather than simply discontinue Anthropic’s contract, the Trump administration threw a massive public tantrum over not being able to use Claude for killer robots or new frontiers in the surveillance state. (Not that it wanted to do these things, the Pentagon insisted. It just needed these restrictions removed because…reasons.)

Anthropic sued, alleging a violation of its First Amendment rights.

In a March 26 order, Lin issued a preliminary injunction order that prohibits the federal government “from implementing, applying, or enforcing in any manner” the president’s directive and “any and all other agency actions taken in response to the Presidential Directive.” Lin further blocked the Department of Defense and Defense Secretary Pete Hegseth from designating Anthropic a supply chain risk.

“It is the Department of War’s prerogative to decide what AI product it uses,” notes Lin in the order.

Everyone, including Anthropic, agrees that the Department of War may permissibly stop using Claude and look for a new AI vendor who will allow ‘all lawful uses’ of its technology. That is not what this case is about.

The question here is whether the government violated the law when it went further.

For now, Lin has concluded that there is strong evidence that it did. “This appears to be classic First Amendment retaliation,” she wrote.

Keep reading

Biden Judge Blocks Trump From Forcing Pentagon, Every Federal Agency to Cut Ties with Anthropic, a ‘Woke’ AI Company That is ‘Putting Troops in Danger’

A federal judge on Thursday blocked the Trump Administration from forcing the Pentagon and every federal agency to cut ties with Anthropic.

Judge Rita Lin, a Biden appointee, said Trump’s ban is a First Amendment violation.

The judge halted her ruling for a week to give the Justice Department time to appeal her decision.

Last month, President Trump ordered every federal agency to cease use of Anthropic AI after the company refused to comply with the Pentagon’s demands.

“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military,” Trump said.

“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY,” Trump said.

“Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow,” Trump added.

“WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!” Trump added.

Keep reading

Trump, Blackburn Push to Federalize AI Control

The Trump administration and its allies in Congress are moving to define the rules of the digital future, with consequences that could extend far beyond artificial intelligence (AI).

Last week, the White House released a national AI legislative framework, while Senator Marsha Blackburn (R-Tenn.) introduced a sweeping, 291-page companion bill to codify it into law. Together, they mark the most aggressive federal push yet to define how Americans access, use, and build AI systems.

Supporters argue the country needs a single national standard to compete with China and rein in Big Tech. The language is polished and ambitious. It promises to protect children, safeguard free speech, support creators, spur innovation, empower communities, and prepare Americans for an “AI-driven economy.”

Critics see something else: Identity-gated access, continuous monitoring, traceable content, and federally managed AI development.

At the center of the debate is a simple question: Who controls access to AI, and at what cost?

One National Framework

At the core of the Trump administration’s AI push is a single premise: Centralization of AI regulation.

The White House states it plainly:

Importantly, this framework can succeed only if it is applied uniformly across the United States. A patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race.

Blackburn’s bill sharpens the point. Its title is telling:

The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act (TRUMP AMERICA AI Act).

In other words, when states regulate AI, it is, in the senator’s telling, ‘chaos’. When Washington does it, it is a ‘unifying’ order.

“The Federal government is uniquely positioned to set a consistent national policy,” the White House adds.

The effect is sweeping. A single federal framework would override emerging state laws. States such as California and New York have already begun shaping AI rules. Under this model, those efforts would be sidelined.

Blackburn’s bill turns that vision into structure. It consolidates authority across safety, liability, and enforcement. It expands federal oversight and delegates rulemaking authority to agencies such as the Federal Trade Commission (FTC).

Other provisions reinforce the shift. The Department of Energy (DOE) gains authority to evaluate advanced systems, centralizing access to data and infrastructure.

Keep reading

School used AI to purge library of ‘inappropriate’ books including Orwell’s 1984 and Twilight, with librarian branded ‘safeguarding risk’

A school used artificial intelligence to censor books in its library including George Orwell’s 1984 and Stephenie Meyer’s Twilight, campaigners claim.

An investigation by Index on Censorship found a secondary school in Greater Manchester earmarked almost 200 books for removal from its library that were deemed ‘inappropriate’.

These also included Michelle Obama’s autobiography Becoming, and The Notebook by Nicholas Sparks.

The charity, which campaigns for free expression, says the school got AI to generate summaries justifying why each book was not suitable for pupils.

Incredibly, the school librarian was also put under a ‘safeguarding’ investigation – leading to her resignation – for allowing the books in the library.

Index said it would not reveal the name of the librarian or the school, due to her being vulnerable.

Although many of the books were initially removed, it is not known if all of those on the list remain banned from the library.

The case was exposed this week by the school librarian, who spoke to the organisation on condition of anonymity.

She said the purge began in November 2025, when the headteacher demanded the removal of Laura Bates’ nonfiction title Men Who Hate Women, which is an exposé of incel culture.

The head thought the book was inappropriate due to ‘exposure of misogynistic beliefs’, even though it was kept in a special section for older pupils.

Off the back of this incident, the school then launched an ‘investigation’ into the librarian, and closed the library as a ‘temporary safeguarding measure’.

She was then asked to remove any book that was ‘not written for children’, had ‘themes that could be upsetting to children’ and those that were ‘inappropriate or constitute a safeguarding risk’.

She told Index: ‘I was absolutely gobsmacked. I couldn’t believe what I was hearing.’

The school also reported her to the council as a safeguarding risk due to introducing ‘inappropriate’ books – and there was also a threat of gross misconduct proceedings.

The school shared with her a list of 193 books which it deemed might be inappropriate, seen by Index.

Index said it had seen another document in which the school admitted the reasons given for the censorship had been written by AI.

Keep reading

Blackburn’s TRUMP AMERICA AI Act Repeals Section 230, Expands AI Liability, and Mandates Age Verification

Senator Marsha Blackburn has introduced a 291-page legislative discussion draft that would reshape how information is allowed to exist online.

The TRUMP AMERICA AI Act, officially titled the “Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry” Act, bundles together Section 230 repeal, expanded AI liability, age verification mandates, and a stack of additional bills that have been circulating separately for years.

All of it is wrapped in a national AI framework that claims it is tied to President Trump’s December Executive Order. The bill is framed as pro-innovation, pro-safety, designed to “protect children, creators, conservatives, and communities” while positioning the US to win the global AI race.

What the actual 291 pages describe is a system that centralizes regulatory authority, removes the legal protections platforms currently rely on, and hands new enforcement tools to federal agencies, state attorneys general, and private litigants simultaneously.

We obtained a copy of the bill for you here.

The legal foundation of the modern internet is Section 230 of the Communications Decency Act. It shields platforms from being sued for the content that users post. Without Section 230, platforms could become legally responsible for what their users post, which could mean anything controversial, contested, or legally ambiguous becomes a liability they’ll quietly remove rather than defend.

Blackburn’s bill repeals it entirely, after a two-year transition period.

Platforms and AI developers could face lawsuits for “defective design,” “failure to warn,” or deploying systems deemed “unreasonably dangerous.”

AI platforms would be incentivized to heavily monitor users.

Keep reading

US to embed Palantir AI across entire military: Report

The Pentagon has designated Palantir’s Maven artificial intelligence system as an official program of record, in a move that will lock in the weapons-targeting technology long term across the US military, Reuters reported on 21 March.

The move was announced in a letter from Deputy Secretary of War Steve Feinberg issued to senior Pentagon leaders and US military commanders on 9 March.

Feinberg wrote that embedding Palantir’s Maven Smart System would provide the military “with the latest tools necessary to detect, deter, and dominate our adversaries in all domains.”

Maven is the US military’s main AI system, analyzing data from satellites, drones, radars, sensors, and reports. It uses AI to interpret data and swiftly identify and strike targets like enemy vehicles, buildings, and weapons.

The White House claims US warplanes have hit more than 7,800 targets since the war on Iran began just three weeks ago.

“It is imperative that we invest now and with focus to deepen the integration of artificial intelligence (AI) across the Joint Force and establish AI-enabled decision-making as the cornerstone of our strategy,” Feinberg wrote.

During a presentation at a Palantir event earlier this month, Pentagon official Cameron Stanley illustrated how the Maven program identifies targets.

“When we started this, it literally took hours to do what you just saw,” he said.

Keep reading