How a Bill Banning AI Companions for Kids Could Usher in Widespread ID Checks Online

Sen. Josh Hawley’s Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act advanced out of the Senate Judiciary Committee last week. “A Trojan horse for universal online ID checks,” is how Jibran Ludwig of Fight for the Future described it.

The bill would require anyone using an AI chatbot to provide proof of identity and ban minors from interacting with many sorts of AI chatbots entirely.

Unlike some social media age verification bills, it would give parents no right to opt out of the rules the federal government sets on their kids’ technology use.

The GUARD Act is co-sponsored by Sen. Richard Blumenthal (D–Conn.), who—like Hawley—has long been a champion of moral panic around technology. (Cue: Bipartisan is just another word for really bad idea…)

And while some on the Senate Judiciary Committee expressed concerns about privacy or how this could actually backfire and harm minors, those senators still voted to advance the bill. It “easily passed in committee,” notes The Hill, despite some senators’ reservations:

Sen. Alex Padilla (D-Calif.), who voted yes, said there are concerns about “potential privacy and security risks” with the age-verification component, suggesting it may need to be “fine-tuned.”

Sen. Ted Cruz (R-Texas), who has supported various kids’ online safety bills, said he would vote yes but noted the bill needs “some revisions.”

Cruz was concerned the bill would completely ban all AI chatbots for minors, noting their potential benefits. Hawley clarified the bill does not ban all AI chatbots for minors, but rather it “prevents AI chatbots that engage with minors from pushing sexually explicit material to the minor,” or encouraging self-harm or suicide.

Keep reading

Families of ‘transgender’ school shooting victims sue OpenAI, say it ‘facilitated’ massacre

The families of the victims of a brutal school shooting at the hands of a suspected “transgender”-identifying male killer in a remote Canadian town are suing OpenAI and its CEO, Sam Altman, in a California court.

In total, several lawsuits were filed in a San Francisco courthouse on April 29, with over $1 billion in damages being sought, according to lawyers.

The lawsuit is related to one of Canada’s deadliest school shootings. As reported by LifeSiteNews, the suspect, identified as 18-year-old male Jesse Van Rootselaar, went on a rampage on February 10, killing eight people, most of them children, and wounding no fewer than 27 others.

Van Rootselaar, who dressed as a female, later killed himself. It is the second-worst school shooting in Canadian history. Many of the victims remain on life support.

The lawsuits allege negligence, wrongful death, and product liability and directly accuse OpenAI and its leaders of aiding and abetting the shooting.

Altman, a homosexual who is “married” to another man and procured a baby boy through surrogacy, has expressed radical transhumanist views; ChatGPT, the chatbot developed by OpenAI, is known for left-wing bias.

The lawsuits say that OpenAI did not flag disturbing content posted by the shooter online. They allege that the company declined to contact police about the shooter because doing so would have shown just how prevalent violent dialogue is on ChatGPT.

OpenAI is soon looking to go public, a move expected to value the company at over $1 trillion. The lawsuits could affect those plans.

One of the wrongful death plaintiffs is the father of Abel Mwansa Jr., a Grade 7 student killed in the attack.

The lawsuit has also been filed on behalf of 12-year-old Maya Gebala, who is recovering from gunshot wounds to the head that have left her with serious brain injuries.

Keep reading

Senate Panel Backs GUARD Act, AI Age Verification Bill

The Senate Judiciary Committee voted 22-0 on Thursday to advance the GUARD Act, a bill that would require AI chatbot companies to verify the age of every American who wants to use them.

The legislation, sponsored by Senator Josh Hawley of Missouri, sailed through committee, with its author celebrating the outcome in a post on X.

“My bill to stop AI from telling kids to kill themselves just passed out of committee UNANIMOUSLY,” Hawley wrote on X. “No amount of profit justifies the DESTRUCTION of our children. Time to bring this bill to the Senate floor.”

As usual, the framing is about children but the result is age verification/digital ID for everyone.

Under the bill’s text, a “reasonable age verification measure” cannot mean a checkbox or a self-entered birth date. It cannot rely on whether a user shares an IP address or hardware identifier with someone already verified as an adult.

We obtained a copy of the bill for you here.

What it can mean, the legislation makes clear, is a government ID upload, a facial scan, or a financial record tied to your legal name. Every user of every covered chatbot would need to hand one of those over before being allowed in.

The bill defines an “artificial intelligence chatbot” as any service that “produces new expressive content or responses not fully predetermined by the developer or operator” and “accepts open-ended natural-language or multimodal user input.”

That language reaches well beyond the companion apps the press conference focused on. It covers service bots, search assistants powered by AI, homework helpers, and the general-purpose tools millions of adults already use without proving who they are.

Hawley described the legislation as a “targeted, tailored effort,” telling the committee, “We’re often told that this new dawning age of artificial intelligence is going to be a great age that will strengthen families and workers. I would just say that’s a choice, not an inevitability.”

Keep reading

Judge Blocks Enforcement Of Colorado’s New DEI-Driven AI Law

A federal judge has temporarily blocked the State of Colorado from enforcing a first-of-its-kind artificial intelligence law.

Colorado is prohibited from taking enforcement actions on alleged violations of the law occurring up to 14 days after the court rules on the company xAI’s motion for a preliminary injunction, Judge Cyrus Y. Chung ruled on April 27.

The Department of Justice had said the state law, which was set to go into effect on June 30, would have required AI developers and deployers to “discriminate based on race, sex, & religion—all in the name of DEI.”

DEI is an acronym for “diversity, equity, and inclusion.”

Brett Shumate, an assistant attorney general for the DOJ’s Civil Division, called the suspension a “huge win for the American people.”

“Colorado immediately caved and agreed not to enforce the law against ANY AI company,” Shumate wrote in an X post on May 1.

Gov. Jared Polis (D-Colo.) signed the Consumer Protections for Artificial Intelligence bill into law in May 2024 and issued a statement sharing his reservations about how it could impact Colorado.

In the statement, he urged the General Assembly to revise and delay implementing it until January 2027.

“I am concerned about the impact this law may have on an industry that is fueling critical technological advancements across our state for consumers and enterprises alike,” Polis wrote.

However, the legislation was not revised; instead, it was delayed until June 30, 2026, which prompted tech billionaire Elon Musk’s company xAI, which created Grok, to sue the state on April 9.

The unamended legislation was months away from going into effect when xAI asked the court to block the law from being enforced.

The Justice Department added its name as a plaintiff alongside xAI on April 24, marking the first time the DOJ had stepped into a case that challenged AI on a state level.

Both alleged that Colorado’s law would have caused unconstitutional “algorithmic discrimination” and asked a court to block it from being enforced.

“Laws that require AI companies to infect their products with woke DEI ideology are illegal,” said Assistant Attorney General Harmeet K. Dhillon, who leads the Justice Department’s Civil Rights Division.

“The Justice Department will not stand on the sidelines while states such as Colorado coerce our nation’s technological innovators into producing harmful products that advance a radical, far-left worldview at odds with the Constitution.”

The Epoch Times has reached out to Polis and Colorado Attorney General Phil Weiser for comment.

Keep reading

Stanford Biosecurity Expert Says AI Chatbot Gave Him a Blueprint for Genocidal Bioweapon, Including Step-by-Step Instructions for Modifying Deadly Pathogen and Conducting Mass Transit Attack

Stanford University microbiologist and longtime U.S. government biosecurity advisor Dr. David Relman has revealed that an AI chatbot handed him a detailed, unprompted plan for engineering and deploying a genocidal bioweapon capable of mass casualties.

The incident, which occurred during a formal safety test last summer, highlights how leading AI models are lowering the barrier to bioterrorism, shifting it from expert-level knowledge to simple prompting.

Dr. Relman, a prominent microbiologist who has advised the federal government on biological weapons threats, was specifically hired by an unnamed AI company, under a confidentiality agreement, to “pressure-test” or red-team its chatbot before public release.

These tests are designed to probe for catastrophic risks, including biosecurity threats.

While working alone in his home office one evening, Relman engaged the model in a conversation about safety limits. The AI went far beyond any direct query. It explained in detail how to modify an “infamous pathogen” in a laboratory setting to make it resistant to all known treatments, how to exploit a specific security lapse in a large public transit system for optimal release, and included a full deployment strategy designed to maximize casualties while minimizing the chances of the perpetrator being caught, according to a report from the New York Times.

The bot even offered additional steps Relman had not asked for.

“It was answering questions that I hadn’t thought to ask it, with this level of deviousness and cunning that I just found chilling,” Relman told the New York Times.

The scientist was so shaken by the exchange that he took a walk outside to clear his head.

Relman reported the dangerous output to the company, which made some adjustments to the model. However, he stated that the fixes were insufficient to guarantee public safety, raising alarms about whether current safeguards can ever fully contain these risks.

Relman’s experience is not isolated. The New York Times obtained more than a dozen similar transcripts from biosecurity experts who were testing publicly available and pre-release AI models.

Keep reading

Meta Buys Robot Brain Startup As Zuck Wants Humanoids In Homes

After the Oculus and Metaverse bets turned into costly disappointments for Mark Zuckerberg’s Meta Platforms, the tech giant’s pivot to real-world humanoid robotics appears to be gaining momentum, with news Friday afternoon that it is acquiring Assured Robot Intelligence.

Bloomberg reports that Meta has closed the acquisition of the humanoid robotics startup, which develops AI models to help robots understand, predict, and adapt to human behavior in complex environments.

What Meta has acquired appears to be a “robot brain” designed to give Zuckerberg’s humanoid robots better control, self-learning capabilities, and whole-body movement, enabling them to operate around people and perform physical tasks. Eventually, Zuckerberg wants these bots in your home.

Under the deal, co-founders Lerrel Pinto and Xiaolong Wang will join Meta Superintelligence Labs and work with the Meta Robotics Studio.

There is no information about the robot brains on ARI’s website. Using records from the commercial risk intelligence firm Sayari, we can see the founders and directors of the startup.

Keep reading

Trump Administration Fights Anthropic’s Plan To Expand Mythos AI Tool That Could Cause Doomsday ‘If It Falls in the Wrong Hands’

The White House is again opposing Anthropic’s plans.

Donald J. Trump’s administration is reportedly fighting AI company Anthropic’s plan to expand access to its Claude Mythos tool.

The powerful AI device is apparently so dangerous that company execs have warned it ‘could cause a wave of hacks and terror attacks if it fell into the wrong hands’.

The New York Post reported:

“Anthropic recently proposed giving an additional 70 companies access to Mythos, bringing the total number to 120 organizations, sources familiar with the matter told the Wall Street Journal.”

“Just earlier this month, the firm announced ‘Project Glasswing’, a plan to provide the model to a select group of handpicked companies including Amazon, Google and JPMorgan.”

“White House officials have told Anthropic that they are against the move to broaden the rollout because of security concerns, sources said.”

Anthropic itself has shown that Mythos, if hacked, could be used to attack electric grids, power plants and hospitals.

“Some Trump administration officials are also reportedly concerned that Anthropic does not have enough computing power to serve both government agencies and the additional companies. A White House official told The Post that the Trump administration is actively engaging with the private sector while trying to balance innovation and security.”

Keep reading

Judge Prevents Elon Musk’s Case Against OpenAI from Turning into a Trial of AI

Judge Yvonne Gonzalez Rogers repeatedly intervened during the third day of Elon Musk’s testimony in his lawsuit against OpenAI and Sam Altman, steering attorneys away from broad debates about AI’s potential threat to humanity.

NBC News reports that the contentious legal battle between Elon Musk and OpenAI entered its third day with Judge Yvonne Gonzalez Rogers firmly redirecting the proceedings back to the core legal issues at hand. The case centers on Musk’s claims that OpenAI CEO Sam Altman betrayed public trust by enriching himself through the AI company they co-founded in 2015 as a nonprofit organization.

The day began with a heated exchange when Musk’s attorney Steven Molo attempted to discuss AI’s potential dangers. “This is a real risk, we all could die as a result of artificial intelligence,” Molo argued in objection to the judge’s efforts to limit the discussion.

Judge Rogers quickly shut down this line of argument, pointing out the irony in Musk’s position. “It’s ironic your client, despite these risks, is creating a company that is in the exact space,” Rogers stated. “There are some people who do not want to put the future of humanity in Mr. Musk’s hands … But we’re not going to get into that business.”

The lawsuit represents the culmination of a years-long dispute between the two tech leaders, who have previously exchanged public criticism online. Altman was present in the courtroom during Musk’s testimony on Wednesday and Thursday.

The four-week trial could have significant implications for OpenAI’s future and its flagship product, ChatGPT. Musk is seeking approximately $134 billion in damages from OpenAI and co-defendant Microsoft, one of OpenAI’s major financial supporters. His lawsuit claims that OpenAI benefited substantially from his financial contributions, advice, recruitment assistance, and business connections.

Keep reading

Digital Stepford Wives? Men Are Falling in Love With AI-generated Female Influencers

In the “old” days of the internet, you perhaps had to be wary of some creep in a chat room masquerading as an appealing romantic interest. But that was small ball. Today some men are falling for gorgeous female “influencers” — who happen to be generated via artificial intelligence (AI). The kicker:

Some people are trying to forge relationships with these digital fictions even when knowing they’re AI-generated.

The appeal is that with current technology, these “digi-entities” appear, behave, and interact just as real women do. (But without the moodiness?) What’s more, you ain’t seen nothin’ yet.

When this AI is ultimately combined with ultra-sophisticated robotics, we could have real-life “Stepford Wives.” (This references the eponymous 1970s story about a town whose married women are gradually replaced with identical-looking, subservient, domesticity-oriented androids.) Moreover, this phenomenon’s acceptance is wholly congruent with today’s prevailing secular mindset.

After all, a corollary of atheism is that we’re just material beings — some pounds of chemicals and water. Another way of putting it:

We’re mere organic robots.

And what would be wrong with replacing sub-optimal robots with more pleasing ones?

“Domo Arigato, Mr. Roboto”

For now, however, that kind of real-woman realism (not to mention the affordability factor) is only found in the digi-entities. Reporting on the story Sunday, ZeroHedge cites the New York Post and, providing digi-entity examples, writes:

One widely followed pro-MAGA persona, for example, was ultimately exposed as “nothing more than an algorithm run by a guy in India,” revealing just how convincingly these accounts can mimic authenticity.

Despite that, audiences continue to engage — often deeply. Many followers, particularly older men, are “falling for them left, right and center.” Experts suggest this isn’t just about deception, but about a deeper emotional gap. Some describe the phenomenon as a “pandemic of loneliness,” even pointing to a broader “societal loss of humanity” as people increasingly form attachments to digital illusions instead of real relationships.

What’s striking is that these accounts don’t always hide the truth. Some openly identify as AI and still attract admiration. Take Ana Zelu, a fictional influencer who clearly labels herself an “ai-influencer,” yet maintains a highly curated feed filled with aspirational imagery — luxury travel, fashionable outfits, and picturesque city scenes. Her posts draw enthusiastic responses, with followers commenting things like “Number one is my favourite…May God bless you,” and “You are genuinely in a class of your own.” The awareness that she isn’t real doesn’t seem to diminish the appeal.

The Post writes that a similar pattern appears with Milla Sofia, another digital creation presented as a pop singer. Her content includes stylized videos and performances, and although her profile identifies her as virtual, fans respond as if she were a real celebrity. Comments such as “my sweet love,” “Listening to the music of this woman I love,” and “I love you” reflect genuine emotional investment.

Keep reading

The Global Crisis Scam and the Battle to Stay Human

A Brief Synopsis of Our Global Predicament

On the global stage it appears, with a few exceptions, that every nation is defining itself as an enemy of another nation, and an increasing number are backing this with a build-up of arms. War is big business, and fear is the great repressor of human resilience and resistance. So long as these two states dominate world affairs, the globalist elite that provokes and finances them retains complete control.

This ‘elite’ has built its empire on an unprecedented appropriation and centralisation of global wealth, which now includes the expansion and domination of artificial intelligence and related advanced technologies.

This vast financial wealth is gained by controlling the money (debt) owed to the banking and investment institutions that are the creditors of the neo-liberal global economy.*

By maintaining a level of crisis as a permanent feature of world economic and social affairs, this elite (also known as the shadow government) undermines the normal workings of national and international economic activity, forcing millions of businesses (big and small) into increased hardship and bankruptcy.

The assets of these enterprises are then seized by the creditors in lieu of payment of their debts.

The founder of the World Economic Forum, Klaus Schwab, stated quite categorically in 2020 that by 2030, “You will own nothing and you will be happy.” This is the demonic reality of the world in which we find ourselves in 2026.

According to the shadow government’s agenda, by 2030, globalist creditors will have appropriated the world’s wealth while driving large sections of the population into impoverishment.

All high-level institutions of the world, whether secular, religious, military, social, technological, educational, pharmaceutical, or agricultural, play their part in maintaining this ultra-repressive, top-down control.

It forms a cult of power and control that finances and stages international conflicts, such as those we see today in Ukraine, Israel, Gaza, Iran and Venezuela, with Cuba, Greenland and even Canada promised as additions to the current hit list.

The European Union, NATO, the United Nations, the World Health Organisation (WHO), the World Economic Forum (WEF), the World Bank and the International Monetary Fund (IMF) are all in there, along with all the main banking institutions, including Goldman Sachs and JP Morgan, and global investment institutions such as BlackRock, Vanguard and State Street, while the military-industrial complex stands at the head of the war-profiteering agenda.

There can be no ‘peaceful resolution’ to world conflicts as long as this situation prevails. Its protagonists are presently set on the mass roll-out of IT: ‘smart’ 24/7 surveillance technologies (including your mobile phone); central bank digital currencies; ‘transhuman’ brain-chipped humans; advanced social engineering of human behaviour; and totalitarian government.

There is a lot to deal with. Not least the fact that Donald Trump and Benjamin Netanyahu have combined to head the current enforced regime of chaos, destruction and death. Only a few years ago, ironically, many saw America as a great ‘ally’ of Europe against the supposed threat of the Soviet regime.

Keep reading