Federal Judge Sanctions Alabama Lawyers for Submitting Fake AI‑Generated Case Citations, Highlighting AI's Systemic, Ongoing Problem of Making Up Facts

OpenAI founder Sam Altman says that soon, Artificial Intelligence and Large Language Models will be used everywhere, taking over entire professions and causing them to “disappear.”

Meanwhile, people actually using these services, including attorneys in Alabama, are being sanctioned over the pervasive AI/LLM flaw of ‘hallucinating’ fake citations and references.

U.S. District Judge Anna Manasco in Birmingham, Alabama, issued formal sanctions this week against three attorneys from the law firm Butler Snow after they submitted legal filings containing fabricated case citations generated by ChatGPT.

Manasco, appointed to the court by President Trump, described the citations as “completely made up” and removed the attorneys from the case.

The filings were part of a lawsuit brought by an inmate who alleged repeated stabbings at the William E. Donaldson Correctional Facility. Manasco referred the case to the Alabama State Bar and ordered the attorneys to share the sanctions order with all current and future clients, as well as all opposing counsel and courts where they are actively involved.

Even the attorneys supervising the ones who made the mistake of using ChatGPT were sanctioned. The supervisors claimed they ‘skimmed’ the filings and did not notice the fabricated legal authorities used to support their written arguments.

The lawsuit centers on claims by inmate Frankie Johnson, who alleges that prison officials failed to prevent multiple assaults despite prior warnings. Johnson is housed at Donaldson Correctional Facility, one of the state’s most overcrowded and violent prisons. The firm representing the Alabama Department of Corrections, Butler Snow, filed motions in the case that included five legal citations meant to support its arguments on scheduling and discovery disputes. Upon review, none of the referenced decisions existed.

News in the past month also suggests that, when measured, heavy AI/LLM reliance stunts cognitive growth in its users, effectively making them dumber.

The judge investigated the filings further and determined that the cited cases had never been published, logged, or recorded in any known legal database. They were simply conjured out of thin air.

Keep reading

President Trump threatened to break up Nvidia, didn’t even know what it was — ‘What the hell is Nvidia? I’ve never heard of it before’

During a speech marking the unveiling of his new AI Action Plan, President Donald Trump revealed that he had once suggested simply breaking up Nvidia, despite at the time having never heard of the company or its CEO, Jensen Huang.

Trump made the remarks on stage at an AI summit in Washington, D.C., as he unveiled the United States’ new AI Action Plan.

The President gave thanks to some of AI’s top industry leaders: “And a very special thanks to some of the top industry leaders here, including somebody that’s amazing,” he said, alluding to Huang.

“I said, look, we’ll break this guy up — this is before I learned the facts of life — I said we’ll break ’em up,” he continued. “They said ‘very hard’, I said ‘Why?’ I said, what percentages of the market does he have? ‘Sir, he has 100%.'”

Trump continued, “I said, ‘Who the hell is he? What’s his name?’ ‘His name is Jensen Huang, Nvidia.’ I said, ‘What the hell is Nvidia? I’ve never heard of it before.’”

Keep reading

Scientists in Maryland Are Developing Artificial Blood

Scientists in Maryland believe they are on the verge of creating artificial blood that could save thousands of lives.

Researchers and scientists at the University of Maryland School of Medicine in Baltimore are creating artificial blood by extracting hemoglobin from expired blood and then enclosing the protein in a bubble of fat, which replicates red blood cells.

Dr. Allan Doctor shared that the artificial blood is “designed so that at the moment it’s needed, a medic can mix it with water, and within a minute, you have blood.”

Currently, the team at the University of Maryland School of Medicine is using artificial blood in tests with rabbits.

Per NPR:

Tens of thousands of people bleed to death each year in the United States before they can get to a hospital. That’s because ambulances, medical helicopters and military medics can’t routinely carry blood, which would go bad too fast without adequate refrigeration.

So scientists have been on a quest to develop artificial blood that could be stored in powdered form and reconstituted by medics on the spot to save lives.

At the University of Maryland School of Medicine in Baltimore, where some of this research is being conducted, a white rabbit lies on the floor of a cage. It’s in a “special intensive care unit that we’ve created for our rabbit resuscitation,” says Dr. Allan Doctor, a scientist at the school.

Doctor’s team just drained blood from the animal to simulate what happens to a person who’s hemorrhaging from an injury, such as from a car crash or battlefield wound. “This rabbit is still in shock. You can see he’s lying very still. It’s as if he was at the scene of an accident,” says Doctor. “If we didn’t do anything, it would die.”

But Doctor and his team are going to save this rabbit today. They’re going to fill his veins with something they hope will finally enable them to achieve a goal that has stymied researchers for decades: developing safe and effective artificial blood. “Good bunny,” says Danielle Waters, a technician on Doctor’s team, as she gently lifts the rabbit and starts infusing him with three big syringes of artificial blood.

Keep reading

AI Chatbots Rely On Sources With Clear Biases

AI chatbots like ChatGPT and Grok can be a big help in writing essays, conducting research, and exploring complex issues. But these tools bring risks, especially when they filter facts through a political lens. And the Trump administration is now stepping into the debate. “We believe AI systems should operate free of ideological bias and avoid pushing socially engineered agendas,” said David Sacks, the administration’s AI and crypto czar, in a statement today. “We’ve introduced several proposals to ensure AI stays truth-seeking and trustworthy.”

Over the weekend, I saw this bias unfold in real time.

On Friday, a user on Elon Musk’s platform X asked Grok whether more guns make Americans safer. Grok responded flatly: “No, evidence shows more guns correlate with higher firearm homicides and violent crime rates.” The chatbot dismissed self-defense and deterrence, referring to my research – specifically my “more guns, less crime” theory – as something cited by “right-wing advocates.” Grok supported its claims by referencing Scientific American magazine and a RAND Corporation review, saying these sources show guns don’t reduce crime and instead increase violence.

Those answers are misleading and wrong.

The Scientific American article was extensively biased. It was written by Melinda Wenner Moyer, a journalist affiliated with Michael Bloomberg’s The Trace, a well-known gun control advocacy outlet. Grok also ignored my published rebuttal in Scientific American, in which I noted that over two-thirds of peer-reviewed studies show that concealed carry laws do reduce crime. I had provided Moyer with those studies while she prepared her piece, but she ignored them, failed to acknowledge any of my post-1998 work, and misrepresented the findings of the National Research Council’s major report on the topic.

Grok gave tremendous weight to RAND’s literature survey, claiming that RAND had surveyed 100+ studies. Eventually, Grok conceded that the number of papers studying right-to-carry laws was actually 25, showing a range of mixed results. I pointed out that the California-based think tank was highly selective in the sources it included, ignoring dozens more papers showing that these laws lowered violent crime rates, as well as surveys of academics who have published peer-reviewed empirical research.

Even then, Grok largely ignored my responses and focused on two papers claiming that right-to-carry laws increased violent crime. The first failed to control for any variables – such as changes in policing, poverty, or economic conditions – that affect crime trends after states adopt right-to-carry laws. When I pointed that out, Grok cited another study that demonstrated a statistical technique that could account for such factors, but that study didn’t look at right-to-carry laws. Only after a prolonged exchange did Grok acknowledge the error.

Keep reading

Amazon Ring Cashes In On Techno-Authoritarianism And Mass Surveillance

Ring founder Jamie Siminoff is back at the helm of the surveillance doorbell company, and with him is the surveillance-first, privacy-last approach that made Ring one of the most maligned tech devices. Not only is the company reintroducing new versions of old features that allow police to request footage directly from Ring users, it is also introducing a new feature that would allow police to request live-stream access to people’s home security devices.

This is a bad, bad step for Ring and the broader public.

Ring is rolling back many of the reforms it made in the last few years by easing police access to footage from millions of homes in the United States. This is a grave threat to civil liberties. After all, police have used Ring footage to spy on protestors and have obtained footage without a warrant or the consent of the user. It is easy to imagine that law enforcement officials will use their renewed access to Ring information to find people who have had abortions or to track down people for immigration enforcement.

Siminoff has announced in a memo seen by Business Insider that the company will now be reimagined from the ground up to be “AI first”—whatever that means for a home security camera that lets you see who is ringing your doorbell. We fear that this may signal the introduction of video analytics or face recognition to an already problematic surveillance device.

It was also reported that employees at Ring will have to show proof that they use AI in order to get promoted.

Not content to stop at new bad features, Ring is also planning to roll back some of the necessary reforms it has made: namely, partnering with Axon to build a new tool that would allow police to request Ring footage directly from users, and allowing users to consent to letting police livestream directly from their devices.

After years of serving as the eyes and ears of police, the company was compelled by public pressure to make a number of necessary changes. They introduced end-to-end encryption, they ended their formal partnerships with police which were an ethical minefield, and they ended their tool that facilitated police requests for footage directly to customers. Now they are pivoting back to being a tool of mass surveillance.

Keep reading

Trump Doesn’t Support Government Doing Business With Musk’s AI Company: White House

President Donald Trump is not interested in allowing the federal government to do business with tech entrepreneur Elon Musk’s artificial intelligence (AI) company, White House press secretary Karoline Leavitt said on July 23.

During a White House press briefing on Wednesday, Leavitt was asked whether Trump supports federal agencies contracting with Musk’s company, xAI.

“I don’t think so, no,” Leavitt said.

But last week, the company, alongside Anthropic, Google, and OpenAI, scored a $200 million contract with the Department of Defense to boost the agency’s adoption of artificial intelligence.

When asked whether the president wanted to cancel the contract, Leavitt said she would discuss it with Trump.

All four companies were awarded contracts with the same $200 million ceiling from the Defense Department’s Chief Digital and Artificial Intelligence Office to advance the Pentagon’s adoption of AI.

The same day the contracts were awarded, xAI announced a new series of AI products for the U.S. government, referred to as Grok for Government.

Last year, Musk stepped into the political arena and decided to endorse Trump, donate to his campaign, and act in an advisory capacity. Trump then tapped him to lead his Department of Government Efficiency (DOGE), which recommended a series of federal worker layoffs and steep cuts to multiple agencies.

Keep reading

CRIMINALS AND BOTS: OpenAI’s Sam Altman Warns Financial Sector That Artificial Intelligence Is About to Unleash ‘Significant Fraud Crisis’

The rise of artificial intelligence technology carries a lot of promise but also a lot of risk.

When it comes to the perils of AI technology, we’ve heard multiple experts warning about the ‘macrocosmic’ problems it can pose to the very survival of humankind, with the alarmism hitting overdrive with provocateur Eliezer Yudkowsky saying ‘I think we’re all gonna die!’

But there are also the ‘microcosmic’ problems that AI is already unleashing on humanity, in the sense that it quickly became another tool used by criminals – and a powerful one, too!

Sometimes there are terrible stories that seem like one-offs destined for the folklore, as in the case of the French woman scammed out of $850k by a con man posing as ‘Brad Pitt in the hospital’ using AI-generated images and a chatbot.

But now we seem to be heading toward a more systemic danger, as OpenAI CEO Sam Altman warns the financial sector of a ‘significant impending fraud crisis’ caused by artificial intelligence.

Altman detailed AI’s increasing capability to ‘impersonate human voices’, allowing criminals to bypass security protocols and even illicitly transfer funds.

The Independent reported:

“Speaking at a Federal Reserve conference in Washington, Mr. Altman expressed particular alarm over outdated authentication methods. ‘A thing that terrifies me is apparently there are still some financial institutions that will accept the voiceprint as authentication’, he stated. ‘That is a crazy thing to still be doing. AI has fully defeated that’.”

Keep reading

Maybe AI Isn’t Going To Replace You At Work After All

AI fails at tasks where accuracy must be absolute to create value.

In reviewing the ongoing discussions about how many people will be replaced by AI, I find a severe lack of real-world examples. I’m remedying this deficiency with an example of AI’s failure in the kind of high-value work that many anticipate will soon be performed by AI.

Few things in life are more pervasively screechy than hype, which brings us to the current feeding-frenzy of AI hype. Since we all read the same breathless claims and have seen the videos of robots dancing, I’ll cut to the chase: Nobody posts videos of their robot falling off a ladder and crushing the roses because, well, the optics aren’t very warm and fuzzy.

For the same reason, nobody’s sharing the AI tool’s error that forfeited the lawsuit. The only way to really grasp the limits of these tools is to deploy them in the kinds of high-level, high-value work that they’re supposed to be able to do with ease, speed and accuracy, because nobody’s paying real money to watch robots dance or read a copycat AI-generated essay on Yeats that’s tossed moments after being submitted to the professor.

In the real world of value creation, optics don’t count, accuracy counts. Nobody cares if the AI chatbot that churned out the Yeats homework hallucinated mid-stream because nobody’s paying for AI output that has zero scarcity value: an AI-generated class paper, song or video joins 10 million similar copycat papers / songs / videos that nobody pays attention to because they can create their own in 30 seconds.

So let’s examine an actual example of AI being deployed to do the sort of high-level, high-value work that it’s going to need to nail perfectly to replace us all at work. My friend Ian Lind, whom I’ve known for 50 years, is an investigative reporter with an enviably lengthy record of the kind of journalism few have the experience or resources to do. (His blog is www.iLind.net; email: ian@ilind.net)

The judge’s letter recommending Ian for the award he received from the American Judges Association for distinguished reporting about the Judiciary ran for 18 pages, and that was just a summary of his work.

Ian’s reporting/blogging in the early 2000s inspired me to try my hand at it in 2005.

Keep reading

Brace For Soaring Electricity Bills: Biggest US Power Grid Sets Power Costs At Record High To Feed AI

Very soon, if you want AI (and even if you don’t), you won’t be able to afford AC.

Just this morning we warned readers that America’s largest power grid, PJM Interconnection, which serves 65 million people across 13 states and Washington, DC, and more importantly feeds Deep State Central’s Loudoun County, Virginia, also known as ‘Data Center Alley’ and recognized as one of the world’s largest hubs for data centers…

… had recently issued multiple ‘Maximum Generation‘ and ‘Load Management‘ alerts this summer, as the heat pushes power demand to the brink with air conditioners running at full blast across the eastern half of the U.S.

But as anyone who has not lived under a rock knows, the deeper issue is that there’s simply not enough baseload juice to feed the relentless, ravenous growth of power-hungry AI server racks at new data centers. 

“There is simply no new capacity to meet new loads,” Joe Bowring, president of Monitoring Analytics, the independent watchdog for PJM Interconnection, told Bloomberg. “The solution is to make sure that people who want to build data centers are serious enough about it to bring their own generation.”

Well, there is another solution: crank up prices to the stratosphere. 

And that’s precisely what happened. As Bloomberg reports, business and households supplied by the largest US grid will pay $16.1 billion to ensure there is enough electricity supply to meet soaring power demand, especially that from a massive buildout in AI data centers.

The payouts to generators for the year starting June 2026 topped last year’s record $14.7 billion, according to PJM Interconnection LLC, which operates the grid stretching from the Midwest to the mid-Atlantic. That puts the capacity price per megawatt per day at a record $329.17, up from $269.92.
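For rough context, the two headline figures are mutually consistent: dividing the total payout by the per-megawatt-day price implies how much capacity cleared the auction. A back-of-the-envelope sketch (our arithmetic, assuming a single uniform clearing price; actual PJM auction results vary by zone):

```python
# Rough consistency check of the reported PJM numbers.
# Assumes one uniform clearing price; real results differ by zone.
total_payout = 16.1e9        # dollars, delivery year starting June 2026
price_per_mw_day = 329.17    # dollars per megawatt per day
days = 365

implied_capacity_mw = total_payout / (price_per_mw_day * days)
print(f"Implied cleared capacity: {implied_capacity_mw:,.0f} MW")
# ~134,000 MW, i.e. roughly 134 GW across the PJM footprint
```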

Keep reading

Grappling With Existential Panic Over AI

Some time over the Christmas holidays, I experienced what I called a moment of “existential clarity” about AI and its ramifications – when I realized that in the not-so-distant future, it was entirely possible that most of easyDNS’ customers would be autonomous AI-driven agents rather than people.

Our internal project to completely rebuild our UX (still ongoing) was close to a quarter in, and it occurred to me that we could be building a bridge-to-nowhere. Why are we creating more elegant ways to render forms that input hostnames and their respective rdata when:

  • you could probably just tell the backend what you want your domain’s functionality to be, and it could generate the requisite zonefile to facilitate it, and then
  • not long after that every API is going to sit behind an MCP server and it’ll all be done agentically via automated endpoints anyway.

What was the point? This question still bothers me, but we continue to toil away at the UX rebuild, because even though this is where everything is headed, there will still be a temporally long tail of copy-pasting IP addresses into forms. (In the meantime I spend my spare time vibe coding alternative ways to convey DNS and metadata to a zonefile rendering engine. I can see why this isn’t totally a thing yet, but it will be.)
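To make that idea concrete, here is a minimal sketch of what “tell the backend what you want and let it render the zonefile” could look like. This is a hypothetical illustration, not easyDNS’ actual engine: the structured intent (which an LLM or agent might produce from a plain-language request) and all field names are invented for the example.

```python
# Hypothetical sketch: render a DNS zonefile from a structured "intent"
# that an agent might emit from a request like "point example.com at
# 203.0.113.10 and add a mail route". Names and fields are illustrative.

INTENT = {
    "zone": "example.com.",
    "ttl": 3600,
    "records": [
        {"name": "@",   "type": "A",     "rdata": "203.0.113.10"},
        {"name": "www", "type": "CNAME", "rdata": "@"},
        {"name": "@",   "type": "MX",    "rdata": "10 mail.example.com."},
    ],
}

def render_zonefile(intent: dict) -> str:
    """Turn the structured intent into ordinary zonefile text."""
    lines = [f"$ORIGIN {intent['zone']}", f"$TTL {intent['ttl']}"]
    for r in intent["records"]:
        lines.append(f"{r['name']}\tIN\t{r['type']}\t{r['rdata']}")
    return "\n".join(lines) + "\n"

print(render_zonefile(INTENT))
```

The form-filling UI becomes optional once the intent can be expressed structurally, whether by a human or by an agent hitting an automated endpoint.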

Recently, I started reading John W. Munsell’s “Ingrain AI” – it hits the ground running, with the introduction, titled “Every CEO’s Nightmare,” laying out the “productivity”-induced death spiral many companies may be blundering into should they pursue AI merely as a cheat code toward hyper-efficiency.

Keep reading