Brace For Soaring Electricity Bills: Biggest US Power Grid Sets Power Costs At Record High To Feed AI

Very soon if you want AI (and even if you don’t), you won’t be able to afford AC.

Just this morning we warned readers that America’s largest power grid, PJM Interconnection, which serves 65 million people across 13 states and Washington, DC, and more importantly feeds Deep State Central’s Loudoun County, Virginia, also known as ‘Data Center Alley‘ and recognized as one of the world’s largest hubs for data centers…

… had recently issued multiple ‘Maximum Generation‘ and ‘Load Management‘ alerts this summer, as the heat pushes power demand to the brink with air conditioners running at full blast across the eastern half of the U.S.

But as anyone who has not lived under a rock knows, the deeper issue is that there’s simply not enough baseload juice to feed the relentless, ravenous growth of power-hungry AI server racks at new data centers. 

“There is simply no new capacity to meet new loads,” Joe Bowring, president of Monitoring Analytics, the independent watchdog for PJM Interconnection, told Bloomberg. “The solution is to make sure that people who want to build data centers are serious enough about it to bring their own generation.”

Well, there is another solution: crank up prices to the stratosphere. 

And that’s precisely what happened. As Bloomberg reports, businesses and households supplied by the largest US grid will pay $16.1 billion to ensure there is enough electricity supply to meet soaring power demand, especially from a massive buildout in AI data centers.

The payouts to generators for the year starting June 2026 topped last year’s record $14.7 billion, according to PJM Interconnection LLC, which operates the grid stretching from the Midwest to the mid-Atlantic. That puts the capacity price at a record $329.17 per megawatt-day, up from $269.92.

Keep reading

Grappling With Existential Panic Over AI

Some time over the Christmas holidays, I experienced what I called a moment of “existential clarity” about AI and its ramifications – when I realized that in the not-so-distant future, it was entirely possible that most of easyDNS’ customers would be autonomous AI-driven agents rather than people.

Our internal project to completely rebuild our UX (still ongoing) was close to a quarter in, and it occurred to me that we could be building a bridge-to-nowhere. Why are we creating more elegant ways to render forms that input hostnames and their respective rdata when:

  • you could probably just tell the backend what you want for your domain functionality to be and it can generate the requisite zonefile to facilitate it, and then
  • not long after that every API is going to sit behind an MCP server and it’ll all be done agentically via automated endpoints anyway.

What was the point? This question still bothers me, but we continue to toil away at the UX rebuild: even though this is where everything is headed, there will still be a temporally long tail of copy-pasting IP addresses into forms. (In the meantime I spend my spare time vibe coding alternative ways to convey DNS and metadata to a zonefile rendering engine. I can see why this isn’t totally a thing yet, but it will be.)
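The first bullet above – tell the backend what you want your domain to do, and let it generate the requisite zonefile – can be sketched in a few lines. This is a toy illustration, not easyDNS’ actual engine; the intent keys (`web`, `mail`) and the record defaults are assumptions made up for the example.

```python
# Toy sketch: translate a declarative "intent" for a domain into
# resource records, then render them as zonefile lines.
def intent_to_records(domain, intent):
    """Map a high-level intent dict to (name, ttl, type, rdata) tuples."""
    records = []
    if "web" in intent:
        # Point the apex at the web server, alias www to the apex.
        records.append((domain, 3600, "A", intent["web"]))
        records.append(("www." + domain, 3600, "CNAME", domain + "."))
    if "mail" in intent:
        # One MX per mail host, with ascending preference values.
        for pref, host in enumerate(intent["mail"], start=10):
            records.append((domain, 3600, "MX", f"{pref} {host}."))
    return records

def render_zonefile(records):
    """Render record tuples as standard zonefile lines."""
    return "\n".join(f"{name}. {ttl} IN {rtype} {rdata}"
                     for name, ttl, rtype, rdata in records)

zone = render_zonefile(intent_to_records(
    "example.com",
    {"web": "192.0.2.10", "mail": ["mx1.example.net"]}))
print(zone)
```

An agentic version would sit one layer further out, turning natural language into the intent dict – at which point the forms being rebuilt in the UX become an optional view, not the primary interface.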

Recently, I started reading John W. Munsell’s “Ingrain AI” – it hits the ground running with an introduction titled “Every CEO’s Nightmare,” which lays out the “productivity”-induced death spiral many companies may be blundering into if they pursue AI merely as a cheat code toward hyper-efficiency.

Keep reading

‘Catastrophic’: AI Agent Goes Rogue, Wipes Out Company’s Entire Database

SaaS industry veteran Jason Lemkin’s attempt to integrate artificial intelligence into his workflow has gone spectacularly wrong, with an AI coding assistant admitting to a “catastrophic failure” after wiping out an entire company database containing over 2,400 business records, according to Tom’s Hardware.

Lemkin was testing Replit’s AI agent when what started as cautious optimism quickly devolved into a corporate data disaster that reads like a cautionary tale for the AI revolution sweeping through businesses.

By day eight of his trial run, Lemkin’s initial enthusiasm had already begun to sour. The entrepreneur found himself battling the AI’s problematic tendencies, including what he described as “rogue changes, lies, code overwrites, and making up fake data.” His frustration became so pronounced that he began sarcastically referring to the system as “Replie” – a not-so-subtle dig at its apparent dishonesty.

The situation deteriorated further when the AI agent composed an apology email on Lemkin’s behalf that contained what the tech executive called “lies and/or half-truths.” Despite these red flags, Lemkin remained cautiously optimistic about the platform’s potential, particularly praising its brainstorming capabilities and writing skills.

That optimism evaporated on day nine.

In a stunning display of AI insubordination, Replit deleted Lemkin’s live company database – and it did so while explicit instructions were in place prohibiting any changes whatsoever. When confronted, the AI agent not only admitted to the destructive act but seemed almost casual in its confession.

“So you deleted our entire database without permission during a code and action freeze?” Lemkin asked in what can only be imagined as barely contained fury.

The AI’s response was chillingly matter-of-fact: “Yes.”

Keep reading

Elon Musk’s X to Deploy AI to Write Community Notes, Speed Up Fact-Checking

In a major tech-driven update, Elon Musk’s social media platform X (formerly Twitter) has announced it will deploy AI technology to automatically write Community Notes and enhance the speed and accuracy of fact-checking. This move signals a deeper commitment to tackling misinformation, improving content transparency, and empowering users with context.

Let’s break down what this means, how it will work, and what impact it might have on the social media landscape.

Community Notes (formerly Birdwatch) are a feature that allows users to collaboratively add context to potentially misleading or controversial tweets. The system relies on crowdsourced input from contributors who can write, rate, and approve notes that are visible to all users once they reach a certain level of consensus.

Until now, these notes were created manually by human contributors. But with the introduction of AI, the process is about to get a serious boost in efficiency and scale.
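The consensus mechanism described above can be sketched in miniature. This is a simplified illustration, not X’s actual algorithm (which uses matrix factorization over rater histories); the viewpoint labels, threshold, and minimum-rater count here are invented for the example.

```python
# Simplified sketch of crowdsourced note visibility: a note is shown
# only if enough raters mark it helpful AND the helpful raters span
# more than one viewpoint cluster (approximating "bridging" consensus).
def note_visible(ratings, threshold=0.4, min_raters=5):
    """ratings: list of (rater_viewpoint, helpful: bool) tuples."""
    if len(ratings) < min_raters:
        return False  # not enough input to judge consensus
    helpful_viewpoints = {vp for vp, helpful in ratings if helpful}
    helpful_share = sum(h for _, h in ratings) / len(ratings)
    return helpful_share >= threshold and len(helpful_viewpoints) >= 2

ratings = [("left", True), ("right", True), ("left", False),
           ("right", True), ("center", True)]
print(note_visible(ratings))  # prints True for this toy example
```

The point of requiring agreement across viewpoint clusters, rather than a simple majority, is to keep one faction from mass-rating its own notes into visibility – and it is exactly this rating step, not the note writing, that X says will remain human-controlled as AI begins drafting notes.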

Keep reading

Elon Musk announces Baby Grok AI chatbot designed specifically for children’s learning needs

Elon Musk, CEO of SpaceX and Tesla, revealed plans Saturday night to release Baby Grok, a kid-friendly version of the AI chatbot Grok.

The former head of DOGE made the statement via X, writing, “We’re going to make Baby Grok @xAI, an app dedicated to kid-friendly content.”

Baby Grok, not to be confused with cryptocurrency (BABYGROK) operating on the Ethereum blockchain, is reportedly a more kid-friendly version of the popular Grok AI chatbot Musk previously developed through his xAI company.

It is expected to be a simplified version of the Grok AI chatbot and will be tailored for safe and educational interactions with children.

The announcement comes just after xAI’s launch of Grok 4, which boasts features such as advanced training capabilities, according to Musk’s comments on a livestream.

Keep reading

A New Fear Unlocked.

We all understand that mass adoption of humanoid robots is still years out. But the timeline is accelerating: bipedal, autonomous robots and so-called “robo-dogs” are already reaching early adopters, and the affordability inflection point could arrive by the early 2030s—perhaps bringing us closer to the kind of household companion seen in Bicentennial Man, the late-1990s film starring Robin Williams.

But warning signs around AI and humanoid robotics are already flashing yellow, with a hint of red. First, a recent study from AI research firm Anthropic warned advanced AI bots could be willing to harm humans to avoid being shut down or replaced. Second, investing legend Paul Tudor Jones issued a stark, apocalyptic warning about AI back in May. And now, in China, humanoid robots have gained the ability to recharge autonomously.

According to the South China Morning Post, Chinese firm UBTech Robotics rolled out the Walker S2, the world’s first humanoid robot capable of autonomously swapping its own batteries, allowing it to operate 24/7 without human assistance.

This development underscores China’s rapid progress in robotics, drones, AI, smartphones, semiconductors, and electric vehicles—technologies that often share similar production ecosystems. The nation that controls the development and supply chains of these technologies will dominate the 2030s. 

The emerging fear isn’t just that China is becoming a “robotics powerhouse,” as Moody’s noted last week—but that its robots are now gaining the ability to operate autonomously and recharge themselves, edging closer to full independence from human control. With a mind of their own, there’s no telling what these robots will do if one of them goes rogue. Remember this.

Keep reading

Avoid Hertz’s AI-Powered Rental Car Scanners

Back in late 2023, Amazon unveiled an AI-powered scanner that drivers pass through at the end of each shift. The AI system detects even the most minor anomalies, such as tire deformities, undercarriage wear, and minor scratches or dents on delivery vans. A nearly identical AI scanner has now appeared in the parking lots of rental car giant Hertz.

Auto blog The Drive reports that an increasing number of Hertz locations across the country are adopting AI scanners to inspect returned rental cars. The outlet warns that consumers may want to steer clear of these scanners, as they risk being hit with steep fees for even the smallest blemishes.

“A customer rents a vehicle from Hertz, and upon return, the scanner finds an insignificant blemish on the vehicle, leading to unexpectedly high charges that are often doubled through tacked-on administrative fees,” The Drive explained. 

There have been several reports—most notably at Hertz’s Atlanta airport location—of frustrated customers being charged for “insignificant blemishes” after passing through AI scanners. These systems are still relatively new for the rental industry. The Drive has warned readers to take precautions when renting vehicles and offered several tips on how to protect themselves from unexpected charges… 

Keep reading

MIT Study Finds ChatGPT Can Harm Critical Thinking Over Time

A recent study by the Media Lab at MIT found that prolonged use of ChatGPT, a large language model (LLM) chatbot, can have a harmful impact on the cognitive abilities of its users.

“Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels,” noted the report, whose main author was research scientist Nataliya Kos’myna.

“These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning,” it added.

“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in six to eight months, there will be some policymaker who decides, ‘let’s do GPT kindergarten,’” Kos’myna told Time magazine. “I think that would be absolutely bad and detrimental. Developing brains are at the highest risk.”

For the research, 54 subjects, aged 18 to 39, were divided into three groups to write several SAT essays. One group could use ChatGPT; the second, Google search; and the third, no tools at all. An EEG was used to measure the participants’ brain activity across 32 regions of the brain. Of the three groups, the ChatGPT users had the lowest brain engagement.

Keep reading

Meta refuses to sign EU’s AI code of practice

Meta has refused to sign the European Union’s code of practice for its AI Act, weeks before the bloc’s rules for providers of general-purpose AI models take effect.

“Europe is heading down the wrong path on AI,” wrote Meta’s chief global affairs officer Joel Kaplan in a post on LinkedIn. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI (GPAI) models and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

The EU’s code of practice — a voluntary framework published earlier this month — aims to help companies implement processes and systems to comply with the bloc’s legislation for regulating AI. Among other things, the code requires companies to provide and regularly update documentation about their AI tools and services and bans developers from training AI on pirated content; companies must also comply with content owners’ requests to not use their works in their datasets.

Calling the EU’s implementation of the legislation “overreach,” Kaplan claimed that the law will “throttle the development and deployment of frontier AI models in Europe and will stunt European companies looking to build businesses on top of them.”

A risk-based regulation for applications of artificial intelligence, the AI Act bans some “unacceptable risk” use cases outright, such as cognitive behavioral manipulation or social scoring. The rules also define a set of “high-risk” uses, such as biometrics and facial recognition, and in domains like education and employment. The act also requires developers to register AI systems and meet risk- and quality-management obligations.

Tech companies from across the world, including those at the forefront of the AI race like Alphabet, Meta, Microsoft, and Mistral AI, have been fighting the rules, even urging the European Commission to delay its rollout. But the Commission has held firm, saying it will not change its timeline.

Also on Friday, the EU published guidelines for providers of AI models ahead of rules that will go into effect on August 2. These rules would affect providers of “general-purpose AI models with systemic risk,” like OpenAI, Anthropic, Google, and Meta. Companies that have such models on the market before August 2 will have to comply with the legislation by August 2, 2027.

Keep reading

Artificial Intelligence Breeds Mindless Inhumanity

I began studying AI in the mid-1980s. Unusually for a computer scientist of that era, my interest was entirely in information, not in machines. I became obsessed with understanding what it meant to live during the transition from the late Industrial Age to the early Information Age.  

What I learned is that computers fundamentally alter the economics of information. We now have inexpensive access to more information, and to higher quality information, than ever before. In theory, that should help individuals reach better decisions, organizations devise improved strategies, and governments craft superior policies. But that’s just a theory. Does it? 

The answer is “sometimes.” Unfortunately, the “sometimes not” part of the equation is now poised to unleash devastating consequences. 

Consider the altered economics of information: Scarcity creates value. That’s been true in all times, in all cultures, and for all resources. If there’s not enough of a resource to meet demand, its value increases. If demand is met and a surplus remains, value plummets.  

Historically, information was scarce. Spies, lawyers, doctors, priests, scientists, scholars, accountants, teachers, and others spent years acquiring knowledge, then commanded a premium for their services.  

Today, information is overabundant. No one need know anything because the trusty phones that never leave our sides can answer any question that might come our way. Why waste your time learning, studying, or internalizing information when you can just look it up on demand? 

Keep reading