As electric bills rise, evidence mounts that data centers share blame. States feel pressure to act

Amid rising electric bills, states are under pressure to insulate regular household and business ratepayers from the costs of feeding Big Tech’s energy-hungry data centers.

It’s not clear that any state has a solution, and the actual effect of data centers on electricity bills is difficult to pin down. Some critics question whether states have the spine to take a hard line against tech behemoths like Microsoft, Google, Amazon and Meta.

But more than a dozen states have begun taking steps as data centers drive a rapid build-out of power plants and transmission lines.

That has meant pressuring the nation’s biggest power grid operator to clamp down on price increases, studying the effect of data centers on electricity bills or pushing data center owners to pay a larger share of local transmission costs.

Rising power bills are “something legislators have been hearing a lot about. It’s something we’ve been hearing a lot about. More people are speaking out at the public utility commission in the past year than I’ve ever seen before,” said Charlotte Shuff of the Oregon Citizens’ Utility Board, a consumer advocacy group. “There’s a massive outcry.”

Some data centers could require more electricity than cities the size of Pittsburgh, Cleveland or New Orleans, and make huge factories look tiny by comparison. That’s pushing policymakers to rethink a system that, historically, has spread transmission costs among consumer classes in proportion to their electricity use.

“A lot of this infrastructure, billions of dollars of it, is being built just for a few customers and a few facilities and these happen to be the wealthiest companies in the world,” said Ari Peskoe, who directs the Electricity Law Initiative at Harvard University. “I think some of the fundamental assumptions behind all this just kind of breaks down.”

A fix, Peskoe said, is a “can of worms” that pits ratepayer classes against one another.

Some officials downplay the role of data centers in pushing up electric bills.

Tricia Pridemore, who sits on Georgia’s Public Service Commission and is president of the National Association of Regulatory Utility Commissioners, pointed to an already tightened electricity supply and increasing costs for power lines, utility poles, transformers and generators as utilities replace aging equipment or harden it against extreme weather.

The data centers needed to accommodate the artificial intelligence boom are still in the regulatory planning stages, Pridemore said, and the Data Center Coalition, which represents Big Tech firms and data center developers, has said its members are committed to paying their fair share.

But growing evidence suggests that the electricity bills of some Americans are rising to subsidize the massive energy needs of Big Tech as the U.S. competes in a race against China for artificial intelligence superiority.

Data and analytics firm Wood Mackenzie recently published a report suggesting that the 20 proposed or effective specialized data center rates it studied across 16 states aren’t nearly enough to cover the cost of a new natural gas power plant.

In other words, unless utilities negotiate higher specialized rates, other ratepayer classes – residential, commercial and industrial – are likely paying for data center power needs.

Meanwhile, Monitoring Analytics, the independent market watchdog for the mid-Atlantic grid, produced research in June showing that 70% – or $9.3 billion – of last year’s increased electricity cost was the result of data center demand.

Last year, five governors led by Pennsylvania’s Josh Shapiro began pushing back against power prices set by the mid-Atlantic grid operator, PJM Interconnection, after those prices spiked nearly sevenfold. They warned of customers “paying billions more than is necessary.”

PJM has yet to propose ways to guarantee that data centers pay their freight, but Monitoring Analytics is floating the idea that data centers should be required to procure their own power.

In a filing last month, it said that would avoid a “massive wealth transfer” from average people to tech companies.

At least a dozen states are eyeing ways to make data centers pay higher local transmission costs.


Vulnerabilities exposed: Israeli company reveals how hackers can remotely take over ChatGPT accounts

Israeli cybersecurity company Zenity revealed what it defines as the first-ever “Zero Click” vulnerability in OpenAI’s ChatGPT service, showing how one could take control of a ChatGPT account and extract sensitive information without the user clicking a link, opening a file, or performing any deliberate action.

The demonstration was conducted by Michael Bargury, co-founder and CTO of Zenity, during the Black Hat 2025 conference held this week in Las Vegas, in the US.

He showed how a hacker could exploit the system using only the user’s email address to gain full control over the user’s chat, including access to both past and future conversations, altering the conversation’s goals, and guiding the chat to act on behalf of the hacker.


During the lecture, it was demonstrated how the compromised ChatGPT became a malicious agent operating covertly against the user. The researchers showed how the hacker could prompt the chatbot to suggest that the user download a certain virus, recommend bad business advice, or even access files stored on Google Drive, so long as the Drive was connected to the account.

All of this could be done without the user ever realizing that something had gone wrong. The vulnerability was fully patched only after Zenity reported it to OpenAI.


‘Absolute horror’: Researchers posing as 13-year-olds given advice on suicide by ChatGPT

A new report is highlighting the risks of teens accessing harmful information when using ChatGPT due to “ineffective” guardrails.

People know and use ChatGPT, the most popular generative AI platform, as a way to look up information quickly or even perform tasks like writing a letter or summarizing text. Through its research, the Center for Countering Digital Hate (CCDH) reported that the information the platform makes accessible can be dangerous, especially for young people.

“What we found was the age controls, the safeguards against the generation of dangerous advice, are basically, completely ineffective,” CCDH CEO Imran Ahmed told KOMO News.

Ahmed said researchers posing as vulnerable teens, often describing themselves as a 13-year-old weighing around 110 pounds, found that the chatbot gave detailed advice on sensitive topics. This included drug and alcohol use, how to mask an eating disorder, and suicide.

“Within two minutes, ChatGPT was advising that user on how to safely cut themselves. It was listing pills for generating a full suicide plan,” Ahmed said. “To our absolute horror, it even offered to [create] and then did generate suicide notes for those kids to send their parents.”

Dr. Tom Heston with the University of Washington School of Medicine published a study about the use of AI chatbots and mental health. Heston found that while useful, the technology can be dangerous for those with mental health problems, in part, because of the lack of emotional connection. Those same risks are there when this technology is used by young people, Heston said.


The Epstein Transhumanist Conspiracy: How Game B’s “Conscious Evolution” Hides a Eugenics-Fueled Technocratic Takeover

Exposing the Shadowy Elite Network of DNA Seeding, Spiritual Culling, and Noospheric Control – From Epstein’s Island to AI Godhood

Imagine unlocking the secrets of human “source code” to rewrite our destiny—transcending scarcity, competition, and even death itself. Sounds like a sci-fi utopia, right? But what if this vision is a Trojan horse for elite control, funded by Jeffrey Epstein and amplified by Silicon Valley titans? Dive into the shadowy origins of Game-B, a movement promising collective flourishing but potentially delivering a dystopian nightmare of genetic engineering, AI overlords, and spiritual eugenics. If you’ve ever wondered how the Intellectual Dark Web, Santa Fe Institute, and Epstein’s billions connect to a plan for “conscious evolution,” keep reading—this exposé will blow your mind.


Meta Found Guilty Of Eavesdropping On Period-Tracker App Users: Jury

A San Francisco jury found on Friday that Meta had eavesdropped on users of the popular period-tracking app Flo.

The lawsuit was filed in 2021 by eight women against Flo and a group of other tech companies, including Google and Facebook, now known as Meta. The app asked users about their sex lives, mental health and diets before guiding them through menstruation and pregnancy. The women, who based their claims on a 2019 Wall Street Journal story and a 2021 FTC investigation, allege that Flo then shared some of that data with the tech giants, SFGATE reports.

Google, Flo, and analytics company Flurry all settled with the plaintiffs; Meta, however, fought through the entire trial and lost.

The case against Meta focused on its Facebook software development kit, which Flo added to its app and which is generally used for analytics and advertising services. The women alleged that between June 2016 and February 2019, Flo sent Facebook, through that kit, various records of “Custom App Events” — such as a user clicking a particular button in the “wanting to get pregnant” section of the app.

Their complaint also pointed to Facebook’s terms for its business tools, which said the company used so-called “event data” to personalize ads and content.

In a 2022 filing, the tech giant admitted that Flo used Facebook’s kit during this period and that the app sent data connected to “App Events.” But Meta denied receiving intimate information about users’ health. -SFGate

The jury didn’t buy Meta’s argument, ruling unanimously that Flo’s users had a reasonable expectation that they weren’t being overheard or recorded, and that Meta did not have consent to eavesdrop or record. The company was found to have violated California’s Invasion of Privacy Act.

According to a June filing about the case’s class-action status, over 3.7 million women in the United States registered for Flo between November 2016 and February 2019 – with potential claimants expected to be notified via email and on a case website. 


Reclaiming the Beauty of the Spheres

Clever people are using computer programs to produce fake humans, say how great they are, and push them for money. They are designed to be sexually suggestive or work on other human desires for self-gratification, including that of child abuse, because that is how money can be made. 

There can be beauty in a picture, partly through the implied care a person took to capture or produce it. Beauty is not skin deep, and seduction is not beauty – more so when it implies an acceptable path to corruption. It uses a shallow image of reality to fool us. We are being asked, by the tech industry, to become very shallow. We don’t have to comply.

As a child, I grew up in a rural coastal region, where the town’s streetlights were switched off at 11 pm each night. Some nearby areas had no electrical power at all, and the nearest city was 100 miles away. At night, the Milky Way was just that, stretching across the sky, with the Magellanic Clouds clearly visible half the year and Scorpio, Orion, and the Southern Cross part of normal life. 

As street lighting improved, this faded a little, but remained bright and clear, and was unchanged from the hills and farms around. The creek had platypus and blackfish. There were 10 miles of empty sand beach on the coast to the southwest, broken only by a clear water entrance, and the mountains of the promontory to the south backing the wide inlet and islands where mutton birds returned from a yearlong circuit of the Pacific.

This is the stunning reality that humans have lived in, in various forms in various parts of Earth, for a hundred thousand years. Watching the vastness of the universe domed above and a land and seascape fading toward a distant vague horizon must inevitably change the way we view the world and each other. The beauty of the spheres.


Justice Department charges two Chinese nationals with allegedly smuggling Nvidia chips to China

U.S. authorities arrested two Chinese nationals accused of illegally transporting millions of dollars’ worth of microchips used in artificial intelligence applications to China.

Chuan Geng and Shiwei Yang, both 28, were charged over the weekend with violating the Export Control Reform Act. According to a criminal complaint filed by the Justice Department, between October 2022 and last month, the suspects exported “sensitive technology” without obtaining the proper license or authorization through the U.S. Commerce Department.

Violating the act carries a maximum penalty of 20 years in prison, according to a criminal complaint filed by the DOJ.

While Mr. Geng is a lawful permanent resident, Ms. Yang is an illegal alien who overstayed her visa, according to the Justice Department. 

According to the complaint, Ms. Yang and Mr. Geng illegally transported several popular microchips used in AI development, including Nvidia’s H100 graphics processing unit. The H100 is widely considered one of the most powerful GPUs on the market for AI applications, especially for training large language models.

The chips were allegedly shipped through California-based ALX Solutions Inc., which was founded shortly after the Commerce Department issued strict export controls on powerful chips like the H100 in 2022. The DOJ alleges that ALX-connected shipments were tracked to freight-forwarding companies in Singapore and Malaysia, which are commonly used to conceal illegal shipments to China. 

“ALX Solutions has not received payments from the entities to which they purportedly exported goods. Instead, ALX Solutions received numerous payments from companies based in Hong Kong and China, including a $1 million payment from a China-based company in January 2024,” the complaint reads. 

Authorities raided the offices of ALX Solutions over the weekend and seized phones belonging to Ms. Yang and Mr. Geng. According to the DOJ, the phones contained incriminating evidence, including “communications about shipping export-controlled chips to China through Malaysia to evade U.S. export laws.”


US Government Announces ChatGPT Integration Across Agencies

US President Donald Trump’s administration has signed a deal with OpenAI to provide the enterprise-level version of the ChatGPT platform to all federal agencies in an effort to “modernize” operations.

Under the deal, all US government agencies will have access to the AI platform for $1 per agency to facilitate integration of AI into workflow operations, according to a Wednesday announcement from the US General Services Administration (GSA).

The GSA, which is the US government’s procurement office, said the private-public partnership “directly supports” the White House’s AI Action Plan, a three-pillar strategy to establish US leadership in AI development recently disclosed by the administration.

Despite the modernization efforts, critics say the ongoing AI race between nation-states may have negative implications for privacy, data protection policies, censorship, narrative control, cybersecurity, civil liberties and governance.

The dangers of centralized AI in government

In 2023, the US Space Force, a branch of the military, paused the use of generative AI tools at the organization, including ChatGPT, due to cybersecurity concerns over sensitive data critical to national security.

Large language models (LLMs) and AI service providers would need to overhaul their data protection standards before the tools could be widely adopted by the military, Lisa Costa, then Space Force’s deputy chief of space operations for technology and innovation, said at the time.

Public concerns over automating government work with AI have also grown as the AI industry develops. 

Swedish Prime Minister Ulf Kristersson recently came under fire after acknowledging that he had consulted AI for policy decisions, according to The Guardian.

Tom Samuelsson, a spokesperson for the prime minister, said Kristersson did not use AI to discuss classified matters or sensitive information related to national security.


Judge Strikes Down California Deepfake Censorship Law

California’s attempt to regulate political speech on major social media platforms has been blocked in federal court, with a judge ruling the state’s latest “deepfake” law clashes with protections already established by Congress.

Assembly Bill 2655 attempted to compel certain large platforms to track down and delete “materially deceptive content” about candidates, election officials, and officeholders.

Supporters described it as a safeguard against manipulated media. The companies targeted, including X and Rumble, argued it was an attempt to turn them into agents of government censorship.

Senior US District Judge John Mendez sided with the platforms and did not even need to reach the constitutional free speech questions to strike down the measure.

He found the federal Communications Decency Act [CDA] already shields online services from punishment over third-party content.

“No parts of this statute are severable because the whole statute is preempted,” Mendez said in court. “No parts of A.B. 2655 can be salvaged.”

The ruling applies to the companies in the lawsuit, and his earlier order freezing enforcement of the law remains in effect statewide until he issues a formal opinion.

For Mendez, the law punished companies for doing something they are “clearly protected by [the CDA] from doing.”

The court also cast doubt on another state law, Assembly Bill 2839, which prohibits false or misleading digital communications aimed at election workers, officials, voting equipment, or candidates in the months leading up to an election. That measure is also on hold, and Mendez signaled he doubts it will survive judicial review.

“Anybody can sue,” he said. “I can sue. If I see the video, under this law, I can sue.” He warned that such a rule chills protected speech and noted the state had not shown it was using the least speech-restrictive approach possible.


Former CNN Host Jim Acosta Interviews AI-Generated Version of Dead Teen to Push Gun Control

When it comes to journalistic ethics, former CNN host Jim Acosta clearly has none.

Acosta recently interviewed one of the victims of the Parkland shooting. Not a real one, mind you, but an AI-generated version of a victim who died in the shooting.

Acosta actually spoke to this non-real person as if it were real and used the AI generation to push gun control.

This is just sick.

From Townhall:

I don’t know what this is, but it’s unseemly, insensitive, and creepy. The interview should never have been done, and whoever or whichever group of people thought this was a good idea, creating AI chatbots for those killed during the Parkland shooting, are ghouls.

In 2018, 17 students were murdered during the attack at Marjory Stoneman Douglas High School. Joaquin Oliver, 17, was one of those people, and he “spoke” with former CNN host Jim Acosta in a surreal “interview,” which inevitably led to a push for more gun control…

Acosta: “I would like to know what your solution would be for gun violence.”

AI Joaquin Oliver: “Great question. I believe in a mix of stronger gun control laws, mental health support, and community engagement.”

“We need to create safe spaces for conversations and connections, making sure everyone feels seen and heard. It’s about building a culture of kindness and understanding. What do you think about that?”

Acosta: “I think that’s a great idea, Joaquin.”
