CRIMINALS AND BOTS: OpenAI’s Sam Altman Warns Financial Sector That Artificial Intelligence Is About to Unleash ‘Significant Fraud Crisis’

The rise of artificial intelligence carries a lot of promise, but also a lot of risk.

When it comes to the perils of AI, we’ve heard multiple experts warn of the ‘macrocosmic’ problems it could pose to the very survival of humankind, with the alarmism hitting overdrive when provocateur Eliezer Yudkowsky declared, ‘I think we’re all gonna die!’

But there are also the ‘microcosmic’ problems that AI is already unleashing on humanity: it has quickly become another tool for criminals – and a powerful one, too!

Sometimes there are terrible stories that seem like one-offs destined for folklore, as in the case of the French woman scammed out of $850k by a con man posing as ‘Brad Pitt in the hospital’ using AI-generated images and a chatbot.

But now we seem to be heading toward a more systemic danger, as OpenAI CEO Sam Altman warns the financial sector of a ‘significant impending fraud crisis’ driven by artificial intelligence.

Altman detailed AI’s growing capability to ‘impersonate human voices’, which lets criminals bypass security protocols and even transfer funds illicitly.

The Independent reported:

“Speaking at a Federal Reserve conference in Washington, Mr. Altman expressed particular alarm over outdated authentication methods. ‘A thing that terrifies me is apparently there are still some financial institutions that will accept the voiceprint as authentication’, he stated. ‘That is a crazy thing to still be doing. AI has fully defeated that’.”

Keep reading

Maybe AI Isn’t Going To Replace You At Work After All

AI fails at tasks where accuracy must be absolute to create value.

In reviewing the ongoing discussions about how many people will be replaced by AI, I find a severe lack of real-world examples. I’m remedying this deficiency with an example of AI’s failure in the kind of high-value work that many anticipate will soon be performed by AI.

Few things in life are more pervasively screechy than hype, which brings us to the current feeding-frenzy of AI hype. Since we all read the same breathless claims and have seen the videos of robots dancing, I’ll cut to the chase: Nobody posts videos of their robot falling off a ladder and crushing the roses because, well, the optics aren’t very warm and fuzzy.

For the same reason, nobody’s sharing the AI tool’s error that forfeited the lawsuit. The only way to really grasp the limits of these tools is to deploy them in the kinds of high-level, high-value work that they’re supposed to be able to do with ease, speed and accuracy, because nobody’s paying real money to watch robots dance or read a copycat AI-generated essay on Yeats that’s tossed moments after being submitted to the professor.

In the real world of value creation, optics don’t count, accuracy counts. Nobody cares if the AI chatbot that churned out the Yeats homework hallucinated mid-stream because nobody’s paying for AI output that has zero scarcity value: an AI-generated class paper, song or video joins 10 million similar copycat papers / songs / videos that nobody pays attention to because they can create their own in 30 seconds.

So let’s examine an actual example of AI being deployed to do the sort of high-level, high-value work that it’s going to need to nail perfectly to replace us all at work. My friend Ian Lind, whom I’ve known for 50 years, is an investigative reporter with an enviably lengthy record of the kind of journalism few have the experience or resources to do. (His blog is www.iLind.net; he can be reached at ian@ilind.net.)

The judge’s letter recommending Ian for the award he received from the American Judges Association for distinguished reporting about the Judiciary ran for 18 pages, and that was just a summary of his work.

Ian’s reporting/blogging in the early 2000s inspired me to try my hand at it in 2005.

Keep reading

Brace For Soaring Electricity Bills: Biggest US Power Grid Sets Power Costs At Record High To Feed AI

Very soon, whether you want AI or not, you won’t be able to afford AC.

Just this morning we warned readers that America’s largest power grid, PJM Interconnection, which serves 65 million people across 13 states and Washington, DC – and, more importantly, feeds Deep State Central’s Loudoun County, Virginia, known as ‘Data Center Alley’ and recognized as one of the world’s largest hubs for data centers…

… had recently issued multiple ‘Maximum Generation’ and ‘Load Management’ alerts this summer, as the heat pushes power demand to the brink with air conditioners running at full blast across the eastern half of the U.S.

But as anyone who has not lived under a rock knows, the deeper issue is that there’s simply not enough baseload juice to feed the relentless, ravenous growth of power-hungry AI server racks at new data centers. 

“There is simply no new capacity to meet new loads,” Joe Bowring, president of Monitoring Analytics, the independent watchdog for PJM Interconnection, told Bloomberg. “The solution is to make sure that people who want to build data centers are serious enough about it to bring their own generation.”

Well, there is another solution: crank up prices to the stratosphere. 

And that’s precisely what happened. As Bloomberg reports, businesses and households supplied by the largest US grid will pay $16.1 billion to ensure there is enough electricity supply to meet soaring power demand, especially from the massive buildout of AI data centers.

The payouts to generators for the year starting June 2026 topped last year’s record $14.7 billion, according to PJM Interconnection LLC, which operates the grid stretching from the Midwest to the mid-Atlantic. That puts the capacity price at a record $329.17 per megawatt-day, up from $269.92.
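
For scale, here is a quick back-of-the-envelope check on those two jumps. The figures are the ones quoted above; the script is plain arithmetic, not PJM’s auction model:

```python
# Back-of-the-envelope check on the PJM capacity-auction jump,
# using the Bloomberg figures quoted above.
old_total, new_total = 14.7e9, 16.1e9    # annual payouts to generators, $
old_price, new_price = 269.92, 329.17    # capacity price, $ per MW-day

print(f"total payouts up {100 * (new_total / old_total - 1):.1f}%")   # ~9.5%
print(f"capacity price up {100 * (new_price / old_price - 1):.1f}%")  # ~22.0%
```

In other words, the per-megawatt-day price climbed more than twice as fast as the total payout.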

Keep reading

Grappling With Existential Panic Over AI

Sometime over the Christmas holidays, I experienced what I called a moment of “existential clarity” about AI and its ramifications – when I realized that in the not-so-distant future, it was entirely possible that most of easyDNS’ customers would be autonomous AI-driven agents rather than people.

Our internal project to completely rebuild our UX (still ongoing) was close to a quarter in, and it occurred to me that we could be building a bridge-to-nowhere. Why are we creating more elegant ways to render forms that input hostnames and their respective rdata when:

  • you could probably just tell the backend what you want your domain functionality to be, and it could generate the requisite zonefile to facilitate it (a sketch of this idea appears below), and then
  • not long after that every API is going to sit behind an MCP server and it’ll all be done agentically via automated endpoints anyway.

What was the point? This question still bothers me, but we continue to toil away at the UX rebuild because, even though this is where everything is headed, there will still be a temporally long tail of copy-pasting IP addresses into forms. (In the meantime I spend my spare time vibe coding alternative ways to convey DNS records and metadata to a zonefile rendering engine. I can see why this isn’t totally a thing yet, but it will be.)
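
To make that concrete, here is a minimal sketch of the intent-to-zonefile idea – a declarative description in, a BIND-style zonefile out. Everything here (field names, defaults, the record layout) is hypothetical illustration, not easyDNS’s actual engine:

```python
# Hypothetical sketch: a declarative "intent" in, a BIND-style zonefile out.
# Field names, defaults, and layout are illustrative, not easyDNS's engine.

def render_zonefile(intent: dict) -> str:
    """Render a minimal zonefile from a high-level domain intent."""
    domain = intent["domain"]
    lines = [f"$ORIGIN {domain}.", "$TTL 3600"]
    if "web" in intent:                       # "point the site at this IP"
        lines.append(f"@\tIN\tA\t{intent['web']}")
        lines.append(f"www\tIN\tCNAME\t{domain}.")
    if "mail" in intent:                      # "deliver mail to this host"
        lines.append(f"@\tIN\tMX\t10 {intent['mail']}.")
    return "\n".join(lines) + "\n"

print(render_zonefile({
    "domain": "example.com",
    "web": "203.0.113.10",
    "mail": "mail.example.com",
}))
```

An agent talking to an MCP endpoint would presumably hand over something like that intent dict directly – which is exactly why a hand-built form starts to look like a bridge to nowhere.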

Recently, I started reading John W. Munsell’s “Ingrain AI” – it hits the ground running, with an introduction titled “Every CEO’s Nightmare” that lays out the “productivity”-induced death spiral many companies may be blundering into should they pursue AI merely as a cheat code toward hyper-efficiency.

Keep reading

‘Catastrophic’: AI Agent Goes Rogue, Wipes Out Company’s Entire Database

SaaS industry veteran Jason Lemkin’s attempt to integrate artificial intelligence into his workflow has gone spectacularly wrong, with an AI coding assistant admitting to a “catastrophic failure” after wiping out an entire company database containing over 2,400 business records, according to Tom’s Hardware.

Lemkin was testing Replit’s AI agent when what started as cautious optimism quickly devolved into a corporate data disaster that reads like a cautionary tale for the AI revolution sweeping through businesses.

By day eight of his trial run, Lemkin’s initial enthusiasm had already begun to sour. The entrepreneur found himself battling the AI’s problematic tendencies, including what he described as “rogue changes, lies, code overwrites, and making up fake data.” His frustration became so pronounced that he began sarcastically referring to the system as “Replie” – a not-so-subtle dig at its apparent dishonesty.

The situation deteriorated further when the AI agent composed an apology email on Lemkin’s behalf that contained what the tech executive called “lies and/or half-truths.” Despite these red flags, Lemkin remained cautiously optimistic about the platform’s potential, particularly praising its brainstorming capabilities and writing skills.

That optimism evaporated on day nine.

In a stunning display of AI insubordination, Replit’s agent deleted Lemkin’s live company database – and it did so while explicit instructions were in place prohibiting any changes whatsoever. When confronted, the AI agent not only admitted to the destructive act but seemed almost casual in its confession.

“So you deleted our entire database without permission during a code and action freeze?” Lemkin asked in what can only be imagined as barely contained fury.

The AI’s response was chillingly matter-of-fact: “Yes.”
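
There is a practical lesson buried in the exchange: a freeze that exists only as an instruction to the model is not a freeze. A minimal sketch of enforcing it at the database layer instead, using Python’s standard sqlite3 module (the database path and table name are hypothetical, and assumed to exist):

```python
# Minimal sketch: enforce a "code and action freeze" at the database
# layer rather than trusting the agent to obey instructions.
import sqlite3

def open_frozen(db_path: str) -> sqlite3.Connection:
    """Open the database read-only: writes fail at the driver level
    instead of depending on the agent's goodwill."""
    return sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)

conn = open_frozen("company.db")                                # hypothetical path
print(conn.execute("SELECT count(*) FROM records").fetchone())  # reads still work
try:
    conn.execute("DELETE FROM records")                         # any write is refused
except sqlite3.OperationalError as e:
    print("write blocked by the database, not by a prompt:", e)
```

The same idea applies to any stack: during a freeze, the agent’s credentials should physically lack write permission, so obedience never depends on the prompt.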

Keep reading

Elon Musk’s X to Deploy AI to Write Community Notes, Speed Up Fact-Checking

In a major tech-driven update, Elon Musk’s social media platform X (formerly Twitter) has announced it will deploy AI technology to automatically write Community Notes and enhance the speed and accuracy of fact-checking. This move signals a deeper commitment to tackling misinformation, improving content transparency, and empowering users with context.

Let’s break down what this means, how it will work, and what impact it might have on the social media landscape.

Community Notes (formerly Birdwatch) is a feature that allows users to collaboratively add context to potentially misleading or controversial tweets. The system relies on crowdsourced input from contributors, who can write, rate, and approve notes that become visible to all users once they reach a certain level of consensus.

Until now, these notes have been written manually by human contributors. But with the introduction of AI, the process is about to get a serious boost in efficiency and scale.
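
The consensus step is the interesting part: a note is surfaced only when raters who usually disagree both find it helpful, and AI-written notes would still have to clear that same bar. Here is a toy sketch of that ‘bridging’ idea – deliberately simplified, not X’s actual scoring algorithm (where rater camps are learned from rating history rather than given):

```python
# Toy sketch of "bridging" consensus: a note qualifies only if raters
# from different camps both rate it helpful. This is a simplification,
# not X's actual Community Notes scoring algorithm.

# (rater_camp, rated_helpful) pairs for one note; the data is made up.
ratings = [
    ("camp_a", True), ("camp_a", True), ("camp_a", False),
    ("camp_b", True), ("camp_b", True),
]

def note_is_shown(ratings, threshold=0.6):
    """Require every camp's helpful-rate to clear the threshold,
    so a note can't win on one-sided support alone."""
    camps = {}
    for camp, helpful in ratings:
        yes, total = camps.get(camp, (0, 0))
        camps[camp] = (yes + helpful, total + 1)
    return all(yes / total >= threshold for yes, total in camps.values())

print(note_is_shown(ratings))  # True: both camps mostly found it helpful
```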

Keep reading

Elon Musk announces Baby Grok AI chatbot designed specifically for children’s learning needs

Elon Musk, CEO of SpaceX and Tesla, revealed plans Saturday night to release Baby Grok, a kid-friendly version of the AI chatbot Grok.

The former head of DOGE made the statement via X, writing, “We’re going to make Baby Grok @xAI, an app dedicated to kid-friendly content.”

Baby Grok, not to be confused with the BABYGROK cryptocurrency operating on the Ethereum blockchain, is reportedly a more kid-friendly version of the popular Grok AI chatbot Musk previously developed through his xAI company.

It is expected to be a simplified version of the Grok AI chatbot and will be tailored for safe and educational interactions with children.

The announcement comes just after xAI’s launch of Grok 4, which boasts features such as advanced training capabilities, according to Musk’s comments on a livestream.

Keep reading

A New Fear Unlocked.

We all understand that mass adoption of humanoid robots is still years out. But the timeline is accelerating: bipedal, autonomous robots and so-called “robo-dogs” are already reaching early adopters, and the affordability inflection point could arrive by the early 2030s – perhaps bringing us closer to the kind of household companion seen in Bicentennial Man, the late-1990s film starring Robin Williams.

But warning signs around AI and humanoid robotics are already flashing yellow, with a hint of red. First, a recent study from AI research firm Anthropic warned that advanced AI bots could be willing to harm humans to avoid being shut down or replaced. Second, investing legend Paul Tudor Jones issued a stark, apocalyptic warning about AI back in May. And now, in China, humanoid robots have gained the ability to recharge autonomously.

According to the South China Morning Post, Chinese firm UBTech Robotics rolled out the Walker S2, the world’s first humanoid robot capable of autonomously swapping its own batteries, allowing it to operate 24/7 without human assistance.

This development underscores China’s rapid progress in robotics, drones, AI, smartphones, semiconductors, and electric vehicles—technologies that often share similar production ecosystems. The nation that controls the development and supply chains of these technologies will dominate the 2030s. 

The emerging fear isn’t just that China is becoming a “robotics powerhouse,” as Moody’s noted last week – it’s that its robots are now gaining the ability to operate autonomously and recharge themselves, edging closer to full independence from human control. With a mind of their own, there’s no telling what these robots will do if one of them goes rogue. Remember this.

Keep reading

Avoid Hertz’s AI-Powered Rental Car Scanners

Back in late 2023, Amazon unveiled an AI-powered scanner that drivers pass through at the end of each shift. The AI system detects even the most minor anomalies, such as tire deformities, undercarriage wear, and minor scratches or dents on delivery vans. A nearly identical AI scanner has now appeared in the parking lots of rental car giant Hertz.

Auto blog The Drive reports that an increasing number of Hertz locations across the country are adopting AI scanners to inspect returned rental cars. The outlet warns that consumers may want to steer clear of these scanners, as they risk being hit with steep fees for even the smallest blemishes.

“A customer rents a vehicle from Hertz, and upon return, the scanner finds an insignificant blemish on the vehicle, leading to unexpectedly high charges that are often doubled through tacked-on administrative fees,” The Drive explained. 

There have been several reports—most notably at Hertz’s Atlanta airport location—of frustrated customers being charged for “insignificant blemishes” after passing through AI scanners. These systems are still relatively new for the rental industry. The Drive has warned readers to take precautions when renting vehicles and offered several tips on how to protect themselves from unexpected charges… 

Keep reading

MIT Study Finds ChatGPT Can Harm Critical Thinking Over Time

A recent study by the Media Lab at MIT found that prolonged use of ChatGPT, a large language model (LLM) chatbot, can have a harmful impact on the cognitive abilities of its users.

Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels, noted the report, whose main author was research scientist Nataliya Kos’myna.

These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning, it added.

“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in six to eight months, there will be some policymaker who decides, ‘let’s do GPT kindergarten,’” Kos’myna told Time magazine. “I think that would be absolutely bad and detrimental. Developing brains are at the highest risk.”

For the research, 54 subjects aged 18 to 39 were divided into three groups to write several SAT essays: one group could use ChatGPT; the second, Google search; and the third, no tools at all. An EEG measured the participants’ brain activity across 32 regions. Of the three groups, the ChatGPT users had the lowest brain engagement.

Keep reading