Tech company launches AI Jesus you can talk to for $2 a minute

A tech company has set up a new platform that allows users to have conversations with an AI avatar of Jesus Christ.

Just Like Me is a tech company that creates AI versions of certain celebrities, experts and personalities. According to the company, these “digital twins” are intended to “provide guidance, mentorship, support and friendship” to users of the site.

Users can pay to have “video calls” with AI versions of personalities such as MySpace co-founder Chris DeWolfe, deceased political commentator Charlie Kirk, or fictional characters such as Santa Claus.

How the model was trained

The latest personality to be added to the website is an avatar of Jesus Christ. The AI version of Jesus is said to offer words of prayer and encouragement, speak a variety of languages, and recall prior conversations with Just Like Me users.

Just Like Me CEO Chris Breed said the model was trained on the King James Bible, and that its understanding of scripture comes from sermons given by various preachers. Visually, the avatar is based on actor Jonathan Roumie’s portrayal of Jesus in the TV show The Chosen.

Just Like Me charges $1.99 per minute to speak to the avatar. Alternatively, a package deal priced at $49.99 gives users 45 minutes with the AI Jesus each month.
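For readers weighing the two pricing options, a quick back-of-the-envelope check, using only the figures quoted above, shows the bundle works out to roughly $1.11 per minute:

```python
# Pricing quoted by Just Like Me for the AI Jesus avatar
pay_per_minute = 1.99      # dollars, pay-as-you-go
bundle_price = 49.99       # dollars per month
bundle_minutes = 45        # minutes included in the monthly bundle

# Effective per-minute rate of the bundle (~$1.11/min)
bundle_rate = bundle_price / bundle_minutes

# Cost of the same 45 minutes at the pay-as-you-go rate
pay_as_you_go_cost = pay_per_minute * bundle_minutes

# Monthly saving for a user who talks for the full 45 minutes (~$39.56)
monthly_saving = pay_as_you_go_cost - bundle_price

print(round(bundle_rate, 2), round(monthly_saving, 2))
```

In other words, the bundle only pays off for users who actually talk to the avatar for a substantial chunk of their 45 monthly minutes; at a few minutes a month, pay-as-you-go is cheaper.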

Keep reading

Another Wrongful Arrest Based on Faulty Facial Recognition Raises Growing Concerns

Jason Killinger walked into Reno’s Peppermill Casino in September 2023; later that evening, as he was exiting the building, he was arrested. The casino’s facial recognition system had flagged him as a “100 percent match” for an individual who had previously been banned from the property. The only problem? It was completely wrong.

After the system flagged Killinger, casino security approached him, referring to him as “Mike,” the individual who had previously been removed from the property. Despite his insistence that he was not Mike, and his ability to prove it, security surrounded and handcuffed Killinger before calling the Reno Police Department. Shortly thereafter, rookie Officer Richard Jager arrived on the scene.

Killinger quickly showed he was not the man identified in the system: he was carrying three valid forms of identification, including a Nevada Real ID-compliant driver’s license, his Peppermill player’s card, and a debit card, all bearing his name. When this wasn’t enough, he offered to retrieve more documents from his vehicle, including a pay stub, his vehicle registration, and a medical card. Despite this copious documentation, Officer Jager declined to investigate further and did not look at any of the other identifying documents. Killinger was arrested and charged with criminal trespass.

He then spent 11 hours in police custody; only after a fingerprint check at the Washoe County jail confirmed his identity was he finally released.

Keep reading

AMD’s senior director of AI thinks ‘Claude has regressed’ and that it ‘cannot be trusted to perform complex engineering’

Anthropic’s Claude has reportedly gotten much worse at generating code recently. That’s according to a senior member of the AI group at AMD.

Last week, user StellarAccident took to the Claude Code GitHub repository to vent their frustrations with Claude’s AI-generated code. The account was soon found to belong to Stella Laurenzo, the senior director of AI at AMD, who had linked to it from a LinkedIn post (via The Register) three weeks earlier.

In the GitHub complaint, Laurenzo states that “Claude has regressed to the point it cannot be trusted to perform complex engineering.”

Laurenzo claims Claude performed better in January, and says it now ignores instructions, suggests incorrect fixes, directly contradicts requests, and claims completion despite, well, not completing the task. Laurenzo uses the Opus model and says the results are reproducible with the same prompt.

Laurenzo says, “We have a very consistent, high complexity work environment and data mined months of logs to understand why—essentially—starting in February, we have noticed a degradation performing complex engineering tasks.” She goes on to say, “Claude has been good to us, and we are leaving this in the hopes that Anthropic can address these concerns.”

What follows is a report, rather ironically generated by Claude itself, on what has changed since January and how to improve it. It argues that Claude shows its thinking less and ‘thinks’ less deeply now, often editing code before fully ‘reading’ it.

Users have been complaining about Claude’s February update for some time now. Over on the Claude Code Subreddit, one user says, “I can no longer in good conscience recommend Claude Code to clients.”

Keep reading

Florida Launches Investigation into OpenAI: ‘Linked to Criminal Behavior’

Florida is launching an investigation into OpenAI, Florida Attorney General James Uthmeier announced on Thursday, noting that it has been linked to “criminal behavior,” including the deadly shooting at Florida State University (FSU) in April 2025.

“Today we are launching an investigation into OpenAI. The development and rollout of artificial intelligence is a monumental leap in technology, but it has not been without concern for public safety and national security,” Uthmeier began in his Thursday announcement.

“AI is built on its ability to gather data, and there are concerns about whether OpenAI’s data and AI technologies that can be used against America are falling into the hands of America’s enemies,” he said, naming the Chinese Communist Party (CCP) as an example.

Further, ChatGPT has been linked to a variety of criminal behavior, he continued, “including child sex abuse material, use by child predators and the encouragement of suicide and self harm.” Not only that, but Uthmeier said the AI chatbot was likely used to help Phoenix Ikner, the suspect who allegedly opened fire on the FSU campus on April 17, 2025, plan the attack.

“We’ve also learned that ChatGPT may likely have been used to assist the murderer in a recent mass shooting at Florida State University. Tragically, AI should exist to supplement, support and advance mankind, not lead to an existential crisis or our ultimate demise,” he said.

“As big tech rolls out these technologies, they should not, and they cannot, put our safety and security at risk. We support innovation, but that doesn’t give any company the right to endanger our children, facilitate criminal activity, empower America’s enemies or threaten our national security. Companies that do so will be held accountable to the fullest extent,” he continued, formally announcing the investigation and warning that subpoenas were on their way.

Uthmeier also called on the Florida legislature to take action and implement protections to safeguard children from the dangers of AI.

Keep reading

War for Fusion – From Iran’s Front Lines to a Boston Scientist’s Murder

“BLOOD, FUSION, and POWER” asked whether the Brown University mass shooting and the killing of MIT fusion scientist Nuno Loureiro were random crimes or signs of a bigger battle over fusion. This battle is really about who will control future energy and military power, and why those choices are being made far away from the American people.

Under Barack Obama, the United States quietly moved tens of billions of dollars in funding, equipment, and scientific work toward the France-based ITER fusion project and away from American labs, weakening U.S. facilities while feeding a foreign-run “global collaboration.” Even some Democrats and budget experts warned that ITER was turning into a money pit that trapped U.S. fusion funding inside a structure controlled overseas. Taxpayers were never plainly told that money meant for American labs and jobs was being shifted so a multinational body in southern France could decide how it would be spent.

France sells ITER as a peaceful science and climate project, but it is also a tool of French power. Hosting the world’s flagship fusion experiment makes France the gatekeeper of a critical energy technology. China is an official partner, shipping giant components to the French site and embedding its engineers there while using what they learn to boost its own “artificial sun” projects at home. Iran, although blocked from formally joining ITER by a U.S. veto, has locked itself into a sweeping 25-year strategic deal with China covering energy and technology, and has sought scientific cooperation with Europe in nuclear-adjacent fields. On paper, ITER is neutral; in reality, France, China, and Iran are tied together through energy, technology, and strategy. The current war involving Iran’s proxies only underlines the point. Any serious solution has to look at those backing and supplying Tehran, not just the fighters on the ground.

This creates a sharp problem inside NATO. France enjoys the full benefits of the alliance and American security guarantees, yet hosts a fusion project closely tied to Chinese industry and sits in a European environment that looks for ways to keep trade and energy links with Iran alive. How can a country claim to protect NATO and U.S. interests while deepening its energy and technology ties to Beijing and standing at the center of a system that helps the very powers arming Iran’s war?

At the same time, there are still no clear answers about why someone killed one of America’s top fusion scientists. Police and media reports identify the suspected gunman, Claudio Manuel Neves Valente, a former Brown physics student later found dead in an apparent suicide, as the man likely responsible for both the Brown University shooting and Loureiro’s killing. Yet officials have not provided a convincing motive and have said they have no public evidence linking the attack directly to Loureiro’s research. The official story stops right where the real questions begin, and what the public is being asked to accept, without full explanation, does not make sense.

All of this unfolded as President Donald Trump pushed in the opposite direction, toward bringing fusion power and investment back under American control. In 2025, his administration advanced an “America First” investment and industrial approach, tightening focus on strategic sectors such as advanced energy and technology and supporting moves toward a national fusion roadmap aimed at a strong domestic industry.

Keep reading

Musk’s xAI Sues Colorado Over AI Law, Saying It Forces Developers to Back State’s Views

An artificial intelligence company founded by Elon Musk filed a lawsuit on April 9 over a Colorado law it claims makes AI developers endorse “Colorado’s views on diversity, equity, and inclusion or face significant compliance costs and civil fines.”

xAI, whose flagship product is the chatbot Grok, named Colorado Attorney General Philip Weiser as the defendant. The lawsuit states that the law’s provisions “prohibit developers of AI systems from producing speech that the State of Colorado dislikes, while compelling them to conform their speech to a State-enforced orthodoxy on controversial topics of great public concern.” The lawsuit says the Colorado law violates the First Amendment.

Weiser didn’t respond to an email seeking comment.

The lawsuit questions the use of the term “algorithmic discrimination” in the law, calling it vague.

The text of the law defines it this way: “Algorithmic discrimination means any condition in which the use of an artificial intelligence system results in unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law.”

The bill, SB24-205, was introduced in April 2024, passed the next month, and will take effect on June 30, 2026.

Keep reading

Your Old Kindle Still Works Perfectly. Amazon Is Killing It Anyway

A record player from 1972 still plays records. A paperback from 1985 still opens. A Kindle from 2011, the one that works perfectly, the one with no cracked screen or dead battery, will stop functioning as an e-reader on May 20, 2026, because Amazon decided it should.

Amazon sent emails this week to owners of Kindle devices manufactured in 2012 or earlier, informing them that support for their hardware would end in six weeks.

After May 20, those devices will no longer be able to buy, borrow, or download books. The only content available will be whatever is already sitting on the device. And if you factory reset your Kindle, or deregister it from your Amazon account for any reason, you will not be able to re-register it. At that point, the device becomes a plastic rectangle.

The affected models include the original Kindle, Kindle 2, Kindle DX, Kindle Keyboard, Kindle 4, Kindle 5, Kindle Touch, and the first-generation Kindle Paperwhite. Some of these devices have been in continuous use for 14 years. They work. The screens display text. The batteries hold a charge. The page-turn buttons click. None of that matters.

Amazon spokesperson Jesse Carr said that, “These models have been supported for at least 14 years — some as long as 18 years — but technology has come a long way in that time, and these devices will no longer be supported moving forward.” He added that Amazon is “notifying those still actively using them and offering promotions to help with the transition to newer devices.”

The promotion is a 20 percent discount on a new Kindle and a $20 eBook credit. Amazon is offering customers a coupon to buy something they didn’t want to buy, to replace something that already works. The offer expires June 20, 2026, which gives affected users exactly one month to decide whether to spend money solving a problem Amazon created for them.

The deregistration clause is where this gets ugly. The email Amazon sent includes a specific warning: if you deregister or factory reset your device after May 20, you cannot re-register it. The device becomes permanently unusable as a Kindle.

Keep reading

Scientists invented a fake disease. AI told people it was real

Got sore, itchy eyes? You’re probably one of the millions of people who spend too much time staring at screens, being bombarded with blue light. Rub your eyes too much and your eyelids might turn a slight pinkish hue.

So far, so normal. But if, in the past 18 months, you typed those symptoms into a range of popular chatbots and asked what was wrong with you, you might have got an odd answer: bixonimania.

The condition doesn’t appear in the standard medical literature — because it doesn’t exist. It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.

The problem was that the experiment worked too well. Within weeks of her uploading information about the condition, attributed to a fictional author, major artificial-intelligence systems began repeating the invented condition as if it were real.

Even more troublingly, other researchers say, the fake papers were then cited in peer-reviewed literature. Osmanovic Thunström says this suggests that some researchers are relying on AI-generated references without reading the underlying papers.

Keep reading

Voters Oust Half of City Council for Greenlighting $6,000,000,000 AI Data Center

Voters in a Missouri town ousted four city council members Tuesday for supporting an AI data center in the area.

Voters in the town of Festus, Missouri, ousted four members of the town’s eight-member city council who had voted a week earlier to greenlight a $6 billion data center project, according to St. Louis Public Radio. Anger has been brewing in several localities across the U.S. against data center projects, with voters in a Wisconsin town overwhelmingly opting Tuesday to crack down on a proposed development.

Tuesday’s vote followed months of at-times raucous opposition to CRG Clayco’s plan to build a hyperscale data center on 360 acres in the town. The four city council members who lost their reelection bids were defeated by candidates who ran against the data center and supported more transparency in the approval process.

“This data center fight has struck this community to the core and really, honestly ignited a community-driven effort here,” Dan Moore, who defeated pro-data center incumbent Bobby Benz, told St. Louis Public Radio. “People are awake now, and we’re not going to let this continue on anymore.”

Residents opposed to the AI data center flooded a local gymnasium to voice their frustrations during a March 31 city council meeting where the council voted to approve a framework of requirements for CRG’s planned construction, St. Louis Public Radio reported.

“I am not against growth,” Festus resident Lauren Albers said during the raucous city council meeting. “I’m against putting data centers between homes. I am against rushing into development before residents get real information, real answers and a real voice.”

Keep reading

Google’s AI Overviews spew millions of false answers per hour, bombshell study reveals

Google’s AI-generated search results are spewing out tens of millions of inaccurate answers per hour – even as the tech giant siphons visitors and ad revenue from cash-strapped news outlets, according to a bombshell analysis.

To test the accuracy of Google’s AI Overviews, startup Oumi reviewed 4,326 Google search results generated by Google’s Gemini 2 model and the same number of results generated by its more advanced Gemini 3 model.

The analysis found that the models were accurate 85% and 91% of the time, respectively.

With Google expected to handle more than 5 trillion searches in 2026 alone, that means AI Overviews are spitting out fake news at a rate of hundreds of thousands of mistakes every single minute, with users left none the wiser.
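The scale of that claim can be sanity-checked with simple arithmetic. The error rates follow from the accuracy figures above (15% for Gemini 2, 9% for Gemini 3), but the share of searches that actually trigger an AI Overview is not given in the analysis, so the 10% figure in this sketch is a hypothetical assumption for illustration only:

```python
# Back-of-the-envelope check of the "mistakes per minute" claim
searches_per_year = 5e12                 # >5 trillion searches expected in 2026
minutes_per_year = 365 * 24 * 60         # 525,600 minutes in a year

# Roughly 9.5 million searches every minute
searches_per_minute = searches_per_year / minutes_per_year

error_rate_gemini2 = 1 - 0.85   # 15% of answers inaccurate (Gemini 2)
error_rate_gemini3 = 1 - 0.91   # 9% of answers inaccurate (Gemini 3)

# Hypothetical assumption: 10% of searches surface an AI Overview
overview_share = 0.10

low = searches_per_minute * overview_share * error_rate_gemini3
high = searches_per_minute * overview_share * error_rate_gemini2
print(f"{low:,.0f} to {high:,.0f} inaccurate answers per minute")
```

Under that assumed 10% share, the arithmetic gives on the order of 85,000 to 143,000 inaccurate answers per minute; a larger overview share pushes the figure into the hundreds of thousands per minute, consistent with the analysis's claim.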

Keep reading