Trump Administration Fights Anthropic’s Plan To Expand Mythos AI Tool That Could Cause Doomsday ‘If It Falls in the Wrong Hands’

White House is again opposing Anthropic’s plans.

Donald J. Trump’s administration is reportedly fighting AI company Anthropic’s plan to expand access to its Claude Mythos tool.

The powerful AI model is apparently so dangerous that company execs have warned it ‘could cause a wave of hacks and terror attacks if it fell into the wrong hands’.

The New York Post reported:

“Anthropic recently proposed giving an additional 70 companies access to Mythos, bringing the total number to 120 organizations, sources familiar with the matter told the Wall Street Journal.”

“Just earlier this month, the firm announced ‘Project Glasswing’, a plan to provide the model to a select group of handpicked companies including Amazon, Google and JPMorgan.”

“White House officials have told Anthropic that they are against the move to broaden the rollout because of security concerns, sources said.”

Anthropic itself has demonstrated that Mythos, if hijacked, could be used to exploit electric grids, power plants, and hospitals.

“Some Trump administration officials are also reportedly concerned that Anthropic does not have enough computing power to serve both government agencies and the additional companies. A White House official told The Post that the Trump administration is actively engaging with the private sector while trying to balance innovation and security.”

Keep reading

Judge Prevents Elon Musk’s Case Against OpenAI from Turning into a Trial of AI

Judge Yvonne Gonzalez Rogers repeatedly intervened during the third day of Elon Musk’s testimony in his lawsuit against OpenAI and Sam Altman, steering attorneys away from broad debates about AI’s potential threat to humanity.

NBC News reports that the contentious legal battle between Elon Musk and OpenAI entered its third day with Judge Yvonne Gonzalez Rogers firmly redirecting the proceedings back to the core legal issues at hand. The case centers on Musk’s claims that OpenAI CEO Sam Altman betrayed public trust by enriching himself through the AI company they co-founded in 2015 as a nonprofit organization.

The day began with a heated exchange when Musk’s attorney Steven Molo attempted to discuss AI’s potential dangers. “This is a real risk, we all could die as a result of artificial intelligence,” Molo argued, objecting to the judge’s efforts to limit the discussion.

Judge Rogers quickly shut down this line of argument, pointing out the irony in Musk’s position. “It’s ironic your client, despite these risks, is creating a company that is in the exact space,” Rogers stated. “There are some people who do not want to put the future of humanity in Mr. Musk’s hands … But we’re not going to get into that business.”

The lawsuit represents the culmination of a years-long dispute between the two tech leaders, who have previously exchanged public criticism online. Altman was present in the courtroom during Musk’s testimony on Wednesday and Thursday.

The four-week trial could have significant implications for OpenAI’s future and its flagship product, ChatGPT. Musk is seeking approximately $134 billion in damages from OpenAI and co-defendant Microsoft, one of OpenAI’s major financial supporters. His lawsuit claims that OpenAI benefited substantially from his financial contributions, advice, recruitment assistance, and business connections.

Keep reading

Senate Panel Backs GUARD Act, AI Age Verification Bill

The Senate Judiciary Committee voted 22-0 on Thursday to advance the GUARD Act, a bill that would require AI chatbot companies to verify the age of every American who wants to use their services.

The legislation, sponsored by Senator Josh Hawley of Missouri, sailed through committee, with its author celebrating the outcome in a tweet.

“My bill to stop AI from telling kids to kill themselves just passed out of committee UNANIMOUSLY,” Hawley wrote on X. “No amount of profit justifies the DESTRUCTION of our children. Time to bring this bill to the Senate floor.”

As usual, the framing is about children, but the result is age verification and digital ID for everyone.

Under the bill’s text, a “reasonable age verification measure” cannot mean a checkbox or a self-entered birth date. It cannot rely on whether a user shares an IP address or hardware identifier with someone already verified as an adult.

We obtained a copy of the bill for you here.

What it can mean, the legislation makes clear, is a government ID upload, a facial scan, or a financial record tied to your legal name. Every user of every covered chatbot would need to hand one of those over before being allowed in.

The bill defines an “artificial intelligence chatbot” as any service that “produces new expressive content or responses not fully predetermined by the developer or operator” and “accepts open-ended natural-language or multimodal user input.”

That language reaches well beyond the companion apps the press conference focused on. It covers customer service bots, search assistants powered by AI, homework helpers, and the general-purpose tools millions of adults already use without proving who they are.

Keep reading

As Hegseth Touts Autonomous Warfare Command, Human Rights Expert Pushes Civilian Protections

As the US military accelerates its adoption of autonomous weapons systems amid a growing global artificial intelligence arms race, one expert told Common Dreams on Wednesday that “greater action needs to be taken urgently” to protect civilians and ensure meaningful human control over rapidly developing technologies.

US Defense Secretary Pete Hegseth told congressional lawmakers Wednesday during a House Armed Services Committee hearing on the proposed $1.5 trillion Pentagon budget for 2027 that the military will soon have a new “sub-unified command” dedicated to autonomous warfare.

Hegseth, who advocates “maximum lethality” for US forces, has expressed disdain for what he called “stupid rules of engagement” designed to minimize civilian harm. He has overseen the dismantling of efforts meant to mitigate wartime harm to civilians – hundreds of thousands of whom have been killed in US-led wars during this century, according to experts.

This “maximum lethality” ethos, combined with AI-powered systems allowing for exponentially faster and more numerous target selection, has raised concerns that have been underscored by actions including Israel Defense Forces massacres in Gaza and Lebanon, and US attacks like the cruise missile strike on a school in Iran that killed 155 children and staff.

“A sole focus on achieving maximum lethality is inherently incompatible with civilian protection,” Verity Coyle, deputy director of Human Rights Watch’s (HRW) crisis, conflict, and arms division, told Common Dreams. “If the United States truly seeks to protect civilians, it should forgo this limited focus and ensure it has guardrails in place that assess the proportionality of its actions and guarantee a distinction between civilians and combatants.”

“Under international humanitarian law, civilian protection requires that military actions abide by the principles of distinction and proportionality,” Coyle noted. “In other words, military actors must distinguish between civilians and combatants and ensure that the resulting harm to civilians from their actions would not be excessive in comparison to the perceived military gain.”

Experts on lethal autonomous weapons systems – commonly called “killer robots” – stress the need for meaningful human control. However, with industry-backed efforts afoot to ban state and local governments from placing guardrails on AI development, retaining such control could become increasingly difficult as the technology advances.

“The lack of serious guardrails… shows a troubling lack of concern for these real and immediate risks to civilians both in the United States and abroad,” Coyle said. “While we have seen some Congress members and state legislators express concern over these developments, greater action needs to be taken urgently.”

Asked about the “if we don’t build it, they will” mentality of many US proponents of unchecked AI development that is reminiscent of the Cold War nuclear arms race, Coyle said the United States is ignoring its “ability to set the global agenda and international humanitarian law norms.”

“As we see greater integration of AI in the military domain and resulting civilian harm, we need strong international leadership to respond to these threats, not states relinquishing their responsibilities,” she asserted.

Keep reading

US wants to use hypersonic missile on Iran. Problem is, it may not work.

The U.S. is mulling using its first hypersonic missile against Iran — even though it may not yet be ready for battlefield use.

CENTCOM says it needs to deploy the “Dark Eagle” missile against Iran because it has been forced to move its launchers beyond the reach of Washington’s Precision Strike Missile, which the U.S. is now running low on, according to a report from the Center for Strategic and International Studies.

Although it completed some successful tests in 2024, the Dark Eagle repeatedly failed to launch during other tests because of launcher and production quality issues. An unnamed defense official told Fox News that the weapon has reached “initial operational capability,” but the Pentagon testing office says it won’t have enough data to evaluate Dark Eagle’s combat effectiveness until early 2027.

The request comes amid a deadlock in U.S.-Iran talks that could spark a return to all-out war. President Donald Trump has pledged to maintain a blockade of Iranian shipping in the Persian Gulf, but Iran says it will only come to the table if the U.S. lifts the siege.

As Jennifer Kavanagh, senior fellow and director of military analysis at Defense Priorities, told RS, the possible Dark Eagle deployment “suggests that the Pentagon has lost all perspective.”

“Iran is not an existential threat, and the United States should not be expending its highest-end missiles there no matter what,” Kavanagh said. “The unit cost per missile is $41 million or so. Are any targets in Iran worth this much?”

Another expert observed that a near-term deployment might help Dark Eagle secure more funding in next year’s defense budget.

“How do you know it is defense budget season in Washington? An unnecessary push to deploy a not-yet-fully-operational hypersonic missile against Iran,” Kelly Grieco, Senior Fellow at the Stimson Center, wrote on X. “Nothing says ‘fund me’ like first use, I guess.”

Keep reading

They’re All Ears: Apple’s Plan to Read Your Mind

We’ve handed over our location, our browsing history, our voice, our face, and our purchasing habits. In exchange, we’ve gotten convenience. Now Apple wants the one thing each of us might have thought was still ours—the electrical activity of our brain. And this time, they’re not even asking. What are we talking about here?

In January 2023, Apple quietly filed patent US20230225659A1 with the U.S. Patent and Trademark Office. The filing describes a wearable electronic device—an earbud—equipped with multiple electrodes embedded directly into the ear tip and housing. These electrodes aren’t for audio. They are not there to improve our sound quality. No indeed. Instead, they are there to read our brain—using the same EEG technology doctors use to monitor neurological activity in clinical settings. And because every ear canal is shaped differently, Apple’s patent describes a machine-learning model that figures out which electrode combinations work best for each person’s specific anatomy, then keeps refining that over time. The result is a read that is accurate, continuous, and tailored to each of us personally. The digital signal is then transmitted wirelessly to our phone—and, per the patent’s own language, to a server, where it can be stored as “historic data” accessible by “another person given permission.”

Read that sentence again.

What EEG Actually Reveals
This is not science fiction, and it is worth understanding what EEG data actually captures—because it is a lot more than Apple’s marketing department will ever tell you. Brain waves are not background noise. They are a direct readout of our inner life. The alpha, beta, delta, theta, and gamma frequencies each correspond to distinct mental states—relaxation, intense focus, deep sleep, creativity, active learning. Together they paint an individual portrait of our mind that is more revealing than anything we have ever typed into a search bar or whispered to a smart speaker. These frequencies, as Loyola University researchers have noted, are also the same signals measured in polygraph tests—the ones used to determine whether someone is lying. They can reveal our stress levels, our concentration, our emotional state, and potentially flag neurological conditions that have not yet been diagnosed. As one researcher at the Neurorights Foundation put it in a Science Friday interview, neural circuits in the brain create our thoughts, emotions, memories, decision-making, and our very sense of self.

Apple wants that data streaming off our ears into their servers.

Are There Any Upsides?
Fair is fair—applications for in-ear EEG technology are being floated, and it’s worth addressing them. As Neurofounders reports, startups like NextSense are already developing in-ear EEG devices to improve clinical sleep staging. Detecting seizure disorders from continuous passive monitoring is another possibility. Early signals for degenerative diseases like Alzheimer’s may surface in EEG data years before symptoms appear. And researchers have argued that natural-environment EEG collection—on the couch, at work, during real life rather than inside a sterile lab—would produce more accurate data on attention and cognitive states than anything gathered under clinical conditions.

These applications sound compelling on the surface. But step back for a second. Americans are not sleeping poorly because they lack a brain-monitoring device. They are sleeping poorly because they are overprescribed, overstimulated, and undernourished—and the same medical system profiting from that reality is not exactly rushing to fix it. Handing our neural data to Apple is not a solution to a pharmaceutical-created problem. It is just a new layer of surveillance dressed up as fake wellness. The idea that we should surrender the electrical activity of our brains as the price of entry for better sleep tracking should raise more than a few eyebrows.

Who Gets the Data?
Here is where things get serious. A 2024 Neurorights Foundation report pulled back the curtain on 30 companies already selling consumer neurotechnology devices. What they found should stop you cold. Twenty-nine of the thirty companies claimed unlimited rights to their users’ neural data. Most had quietly written third-party data sharing directly into their terms—buried in the kind of legal language nobody reads until it’s too late. Fewer than half even encrypt the data or de-identify users. There is no federal law in the United States governing how neural data collected by consumer devices can be used or sold. A handful of states—Colorado, California, Illinois—have moved to address this, but protections remain patchwork at best.

As a published paper indexed in PubMed Central bluntly put it, bulk sales of neural data by tech giants to third parties may already be occurring with minimal accountability. Data brokers could soon be cataloging individual “brain fingerprints” on a mass scale—data as uniquely identifying as a fingerprint, and infinitely more revealing.

Apple has its own history of data breaches. As Pearl Cohen’s legal analysts note, the patent describes data transmission to external servers accessible by parties beyond the user. The company that couldn’t keep our Face ID data secure now wants a continuous stream of our brain’s electrical activity.

Keep reading

California Approves Rules Allowing Automakers to Test, Deploy Heavy-Duty Autonomous Vehicles

The California Department of Motor Vehicles (DMV) on April 28 approved new regulations that would allow autonomous vehicle (AV) makers to test and deploy heavy-duty AV technology on California roadways.

The California DMV said manufacturers are required to test their vehicles with a safety driver, then progress to driverless testing, before they can apply for deployment on state roads.

Manufacturers must complete 50,000 miles of testing at each phase for light-duty vehicles and 500,000 miles for heavy-duty AVs, and provide “a structured safety case” showing the safety of vehicle hardware, software, and operations in order to obtain a permit, according to the DMV.

The new rules would also expand safety and oversight requirements for all classes of AVs, allowing law enforcement agencies to cite companies for moving violations committed by their vehicles.

Under the regulations, companies operating AVs must respond to first-responder calls within 30 seconds, and local emergency officials will have the authority to issue “electronic geofencing directives” requiring autonomous vehicles to leave active emergency zones.

“California continues to lead the nation in the development and adoption of AV technology, and these updated regulations further demonstrate the state’s commitment to public safety,” DMV Director Steve Gordon said.

Keep reading

Federal In-car Monitoring Mandate Expands Data Collection and Control Powers

A federal mandate rooted in a 2021 bipartisan law is set to reshape every new car sold in the United States, and potentially the boundaries of personal mobility itself. By the 2027 model year, vehicles will be required to include systems that monitor drivers for impairment and can intervene if necessary. Supporters frame it as a safety breakthrough. Critics call it a “kill switch.”

The policy has broad political backing. It passed with support from both Democrats and Republicans and has remained intact across administrations, including under the recent Consolidated Appropriations Act, which preserved both funding and the mandate. In January, that support was tested when the House voted down an amendment that would have stripped funding for the requirement, effectively keeping the rule on track.

One of the most persistent critics is Representative Thomas Massie (R-Ky.), who continues to lead opposition alongside a small group of lawmakers. Massie warns that Congress is normalizing continuous monitoring inside privately owned vehicles, a shift he argues carries implications far beyond roadway safety.

The Law

The requirement comes from the Infrastructure Investment and Jobs Act, specifically Section 24220. The law directs regulators to establish a safety standard for what it calls “advanced impaired driving prevention technology.”

The statute defines that technology as a system that can

(i) passively monitor the performance of a driver of a motor vehicle to accurately identify whether that driver may be impaired; and
(ii) prevent or limit motor vehicle operation if an impairment is detected;

It also allows for systems that can “passively and accurately detect whether the blood alcohol concentration of a driver … is equal to or greater than” the legal limit, with authority to intervene. The law sets the objective, not the method.

It also cites research from the Insurance Institute for Highway Safety (IIHS) estimating that such technology “can prevent more than 9,400 alcohol-impaired driving fatalities annually.”

The mandate and its funding were reaffirmed in early 2026, when President Donald Trump signed the Consolidated Appropriations Act, ensuring the requirement remains in force.

From Safety Feature to Standard Equipment

Driver monitoring is not new. It is already embedded in many modern vehicles, especially those equipped with advanced driver-assistance systems.

General Motors says its Super Cruise system “tracks the driver’s head position and/or the driver’s gaze” and alerts the driver when attention drifts. Chevrolet describes the system as using a camera mounted on the steering column to track “head and eye movement.”

Similarly, Ford’s BlueCruise uses “a driver-facing camera and infrared lighting” to confirm that the driver remains focused on the road. Subaru’s DriverFocus system uses comparable technology, capable of alerting occupants if the driver appears drowsy or distracted.

Today, these systems primarily issue warnings. Under the federal rule, similar technology could become standard in every new vehicle. It would not simply prompt the driver. It could help determine whether the vehicle should start or continue operating.

The National Highway Traffic Safety Administration (NHTSA) describes the current landscape in similar terms. Its 2026 report to Congress explains that indirect systems infer driver state “through camera-based monitoring and vehicle inputs.” It also notes that most current systems are designed to detect “drowsiness, inattention, and sudden sickness,” not alcohol impairment.

That distinction matters. A system designed to detect distraction is not automatically capable of reliably identifying intoxication. Yet the mandate moves in that direction, turning optional in-cabin monitoring into a required compliance system.
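To see how quickly clause (i) monitoring escalates into clause (ii) intervention, consider a toy sketch of the control logic. This is not GM’s, Ford’s, or NHTSA’s algorithm; the eye-openness metric, the thresholds, and the frame rate are all invented for illustration.

```python
# An illustrative sketch of the statute's two prongs: clause (i) passive
# monitoring, clause (ii) preventing or limiting operation.
# NOT any automaker's or NHTSA's algorithm; all thresholds are invented.
from dataclasses import dataclass

EYE_CLOSED_THRESHOLD = 0.2   # below this eye-openness ratio, count as closed
CLOSED_FRAMES_LIMIT = 48     # ~1.6 s at 30 fps before the system intervenes

@dataclass
class VehicleState:
    warning_on: bool = False
    operation_limited: bool = False  # clause (ii): prevent or limit operation

def monitor_frame(eye_openness: float, closed_streak: int, state: VehicleState) -> int:
    """Clause (i): passively assess one camera frame for impairment cues."""
    closed_streak = closed_streak + 1 if eye_openness < EYE_CLOSED_THRESHOLD else 0
    state.warning_on = closed_streak > CLOSED_FRAMES_LIMIT // 2  # warn first
    if closed_streak > CLOSED_FRAMES_LIMIT:
        state.operation_limited = True  # escalate from warning to intervention
    return closed_streak

# Simulate a driver whose eyes drift shut over ~3 seconds of 30 fps video
state, streak = VehicleState(), 0
for frame in range(90):
    openness = max(0.0, 0.5 - frame * 0.01)  # eyes gradually closing
    streak = monitor_frame(openness, streak, state)

print(state)  # warning_on=True, then operation_limited=True
```

Notice what the loop never measures: blood alcohol. The gap between drowsiness cues like these and genuine intoxication detection is precisely the distinction the NHTSA report flags.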

Keep reading

Digital Stepford Wives? Men Are Falling in Love With AI-generated Female Influencers

In the “old” days of the internet, you perhaps had to be wary of some creep in a chat masquerading as an appealing romantic interest. But that was small ball. Today some men are falling for gorgeous female “influencers” — who happen to be generated via artificial intelligence (AI). The kicker:

Some people are trying to forge relationships with these digital fictions even while knowing they’re AI-generated.

The appeal is that with current technology, these “digi-entities” appear, behave, and interact just as real women do. (But without the moodiness?) What’s more, you ain’t seen nothin’ yet.

When this AI is ultimately combined with ultra-sophisticated robotics, we could have real-life “Stepford Wives.” (This references the eponymous 1970s story about a town whose married women are gradually replaced with identical-looking, subservient, domesticity-oriented androids.) Moreover, this phenomenon’s acceptance is wholly congruent with today’s prevailing secular mindset.

After all, a corollary of atheism is that we’re just material beings — some pounds of chemicals and water. Another way of putting it:

We’re mere organic robots.

And what would be wrong with replacing sub-optimal robots with more pleasing ones?

“Domo Arigato, Mr. Roboto”

For now, however, that kind of real-woman realism (not to mention the affordability factor) is only found in the digi-entities. Reporting on the story Sunday, ZeroHedge cites the New York Post and, providing digi-entity examples, writes:

One widely followed pro-MAGA persona, for example, was ultimately exposed as “nothing more than an algorithm run by a guy in India,” revealing just how convincingly these accounts can mimic authenticity.

Despite that, audiences continue to engage — often deeply. Many followers, particularly older men, are “falling for them left, right and center.” Experts suggest this isn’t just about deception, but about a deeper emotional gap. Some describe the phenomenon as a “pandemic of loneliness,” even pointing to a broader “societal loss of humanity” as people increasingly form attachments to digital illusions instead of real relationships.

What’s striking is that these accounts don’t always hide the truth. Some openly identify as AI and still attract admiration. Take Ana Zelu, a fictional influencer who clearly labels herself an “ai-influencer,” yet maintains a highly curated feed filled with aspirational imagery — luxury travel, fashionable outfits, and picturesque city scenes. Her posts draw enthusiastic responses, with followers commenting things like “Number one is my favourite…May God bless you,” and “You are genuinely in a class of your own.” The awareness that she isn’t real doesn’t seem to diminish the appeal.

The Post writes that a similar pattern appears with Milla Sofia, another digital creation presented as a pop singer. Her content includes stylized videos and performances, and although her profile identifies her as virtual, fans respond as if she were a real celebrity. Comments such as “my sweet love,” “Listening to the music of this woman I love,” and “I love you” reflect genuine emotional investment.

Keep reading

New Digital ID Bill Ties Your Identity to Your Phone—and Everything You Do Online

Republicans are once again teaming up with Democrats to ram Digital ID through at the federal level.

The bill they’ve just introduced is, if you can believe it, worse than all the others before it.

HR 8250, deceptively named the Parents Decide Act, doesn’t just force everyone to link their identity to use apps on their phones; it mandates that they do so to use ANY operating system. That means Apple’s iOS, Windows, Google’s Android, even Samsung devices—basically everything.

And once that’s in place, there’s nowhere to step outside of it.

But one brave group is refusing to go along.

GrapheneOS has issued a statement saying: “GrapheneOS will remain usable by anyone around the world without requiring personal information, identification, or an account.”

Glenn and Eric Meder from Privacy Academy have been working to educate people on how to escape the digital control grid, including how to put GrapheneOS on your phone—for free. And they have a solution to Digital ID right now.

Keep reading