“It’s Utilities Versus Rent” – Data Centers Send Energy Prices Soaring

The surge in data center construction to power today’s AI and cloud computing demands has sent electricity prices skyrocketing over the last few years. And, as Bloomberg reports, it is only getting worse.

With electricity costs now as much as 267% higher compared to five years ago in some parts of the US, fingers are being pointed directly at data center activity for blame. And while some – especially generously funded lobbies – are eager to dissemble and distort, claiming that on the contrary, electricity prices are barely keeping up with inflation and that data centers have little to no impact on electrical bills, the map below shows that more than 70% of the nodes that recorded pricing increases are located within 50 miles of significant data center activity.

Take Nicole Pasture: the Baltimore resident said her utility bills are up 50% over the past year. She is also a judge who rules on rental disputes in the city’s district court and sees people struggling with their power bills.

“It’s utilities versus rent,” she said. “They want to stay in their home, but they also want to keep their lights on.”

New data center construction projects are announced weekly, sometimes every day. Some of the construction timelines have upwards of 100 MW of new data center demand being built only two years from groundbreaking. This has to be contrasted against the rate of new energy generation construction, with the recent vite among PJM Interconnection stakeholders resulting in a failure to even select a plan for how to add data centers to the grid. 

“The voting reflects the nearly impossible challenge of trying to ensure resource adequacy and control ratepayer costs, while also allowing data center development in a market that is already short on generation supply and faces a 5-to-7 year timeline to bring on new large-scale generating resources,” Jon Gordon, a director at Advanced Energy United, a clean energy trade group, said in a bulletin on the meeting.

While some utilities have been able to pass the burden of higher electricity costs onto the owners of the large loads, most of the costs of expanding grid capacity inevitably find their way to consumers.

According to Bloomberg, in northern Virginia, Dominion Energy cited data center demand, inflation and higher fuel costs when asking regulators to raise its customer bills by about $20 a month for the average residential user over the next two years. Dominion also forecasts peak demand would rise by more than 75% by 2039 with data centers. It would be just 10% without.

And it’s only getting worse: with hundreds of gigawatts of future power demand from data centers built by companies like Oracle and Microsoft, Goldman writes that “eight out of the 13 US regional power markets are already at or below critical spare capacity levels.”

Keep reading

Young People Yearning for Socialism and AI Governance Is a Dangerous Proposition

Socialism has failed every time it has been tried because it is impossible for a group of people to implement a centralized governing apparatus capable of effectively organizing society. 

Heretofore, most people have resented and rejected the yoke of socialism, sometimes after long struggles, because collectivism is also antithetical to individual autonomy, free will, human nature, and the pursuit of happiness.

This is not the case in the United States. Today, more than 30 years after the collapse of the Union of Soviet Socialist Republics, young Americans want socialism. 

According to new polling conducted by Rasmussen Reports and The Heartland Institute, which included 1,496 likely voters aged 18 to 39, more than half of young Americans want a democratic socialist to win the White House in 2028.

Likewise, more than half of those polled have a favorable impression of New York City Mayor-elect Zohran Mamdani, and nearly 60 percent support socialist policies like a nationwide rent freeze and government-run grocery stores in every town.

There are many reasons why socialism appeals to young Americans. 

First and foremost, young people are not being taught about the dark history of socialism. Second, they are misled into believing that socialism is superior to free-market capitalism. Third, they are brainwashed into believing that collectivism is more righteous, fair, and just than personal freedom. Fourth, they feel that the American dream is dead and socialism is the solution to the cost-of-living crisis they face.

Nearly three-in-four young likely voters think the cost of housing is at a crisis level, and only 22 percent think they will be better off than their parents. 

At this point in time, given the economic headwinds they face, coupled with their ignorance of socialism, it makes sense that an alarming portion of young Americans want socialism.

Keep reading

Federal Uniformity Sounds Good – Until Big Tech Writes the Rules

Big Tech is jamming preemption of state AI laws into the National Defense Authorization Act (NDAA) at the last minute, but it is unclear if they are fully aware of what it would actually mean. If Congress prohibits states from implementing AI policies that would protect their citizens, it would have far-reaching consequences. True conservatives in Congress must uphold their pro-American values and refuse to support any preemption effort that would ultimately be a coronation of Big Tech as our country’s new rulers.

The United States is the dominant leader in AI on the global stage, and we are in a high-stakes race with adversaries – especially China – to maintain that advantage. We cannot afford to cut corners on oversight and safety while Big Tech develops AI systems at a rapid pace. States are best positioned to test thoughtful safeguards that address the most pressing concerns – from public safety to protecting children. The federal government, by contrast, is lagging behind.

States have been laboratories of democracy on every pressing issue of our time. The issue of AI should not be any different. The federal government is behind the states in terms of simply thinking about the ramifications of AI, and Congress should allow the states to try to find effective policy solutions that cater to our most imminent concerns.

Preemption is a clear violation of the principle of federalism inherent in the 10th Amendment to the Constitution.

Additionally, this provision is a blatant cover for Big Tech. It allows Big Tech to continue to exploit kids, creators, and conservatives. This provision will not empower small businesses and entrepreneurs in AI because they simply don’t have $40 billion in funding to put toward artificial general intelligence (AGI) development and $100 million bonuses to hand out to potential employees.

They are already shut out of the industry by people like OpenAI CEO Sam Altman, who popularized the “patchwork” characterization of state policies that is now being used in smaller circles in support of preemption.

If we intend to outpace China on AI, we must abandon misguided proposals that undermine federalism. The federal government should focus on enacting strong, strategic measures that protect our national security and prevent U.S. technologies and advanced chips from ending up in the wrong hands.

Keep reading

Gmail Explainer: How to Stop Google AI from Snooping Through Your Emails

Google has quietly started accessing Gmail users’ private emails and attachments to train its AI models, requiring manual opt-out to avoid participation. To make the process even trickier, Gmail users have to opt out in two separate places for the change to work. Follow these steps to protect your privacy from Google’s invasive AI endeavors.

Malwarebytes reports that Google has recently implemented changes that enable Gmail to access all private messages and attachments for the purpose of training its AI models. This means that unless users take action to opt out, their emails could be analyzed to improve Google’s AI assistants, such as Smart Compose or AI-generated replies.

The motivation behind this change is Google’s push to enhance Gmail’s features with the company’s Gemini AI, aiming to help users write emails more efficiently and manage their inboxes more effectively. To accomplish this, Google is utilizing real email content, including attachments, to train and refine its AI models. These settings are now reportedly switched on by default, rather than requiring explicit opt-in consent.

As a result, if users do not manually disable these settings, their private messages may be used for AI training without their knowledge. While Google assures strong privacy measures are in place, such as data anonymization and security during the AI training process, those handling sensitive or confidential information may find little comfort in these promises.

To fully opt out of Gmail’s AI training, users must change settings in two separate locations. This article features a guide and images for opting out on desktop, but the selections are very similar if accessing Gmail via the mobile app.

Keep reading

WaPo Defends Data Centers—With Few Disclosures That Amazon Depends on Them

US electricity prices, you may have noticed, keep going up. And in some parts of the country, like here in the DC region, they’re soaring. In Virginia, for example, electricity rates are up 13% this year, an issue Democrats highlighted as they swept back into power in Richmond earlier this month.

Burgeoning electric bills also factored into Democrats’ November wins in New Jersey and Georgia. But let’s stick with Virginia for a moment, where energy-sucking data centers are so plentiful that if northern Virginia’s DC suburbs were to secede, the new country would have more data center capacity than China.

As a result of these data centers, this new country would likely suffer from crippling electric bills. “Wholesale electricity [now] costs as much as 267% more than it did five years ago in areas near data centers. That’s being passed on to customers,” read a recent Bloomberg subhead.

Keep reading

White House crafting executive order to thwart state AI laws

An executive order that could soon be signed by President Donald Trump would thwart states’ artificial intelligence laws by launching legal challenges and withholding federal funding, according to a draft of the order obtained by CNBC on Wednesday.

The draft surfaced shortly after Trump publicly called for a single federal standard on AI “instead of a patchwork of 50 State Regulatory Regimes.”

The draft order would give Attorney General Pam Bondi 30 days to establish an “AI Litigation Task Force” whose sole task is to challenge state AI laws.

Those challenges would be issued “on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful in the Attorney General’s judgment,” the draft says.

The order also directs Commerce Secretary Howard Lutnick to notify states with contested AI laws that they are ineligible for funds under the federal Broadband Equity Access and Deployment program. BEAD is a more than $42 billion program that has allocated funding to all U.S. states and territories.

The order, which was first reported by The Information, is not yet finalized. A White House official told CNBC that any discussion around it is just speculation until it is officially announced.

As written, the EO would be a major win for the burgeoning AI industry, whose leaders — including Sam Altman’s OpenAI, venture capital firm Andreessen Horowitz and other Silicon Valley titans — oppose an inconsistent state-by-state policy approach.

It would be just as big a blow to state lawmakers across the country who have sought to pass bills that would place guardrails on the nascent technology.

Keep reading

Microsoft Adds AI to Windows Despite ‘Novel Security Risks’

Microsoft’s recent introduction of Copilot Actions, an experimental AI agent integrated into Windows, has sparked criticism from security experts who question the safety of pushing new features before fully understanding and containing their potential risks

Ars Technica reports that Microsoft unveiled Copilot Actions this week, a set of “experimental agentic features” that allow AI to perform various tasks such as organizing files, scheduling meetings, and sending emails. While the company touted the AI agent as an active digital collaborator that enhances efficiency and productivity, it also issued a warning about the security implications of enabling the feature.

Microsoft’s warning reads:

As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs. Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.

Security concerns stem from known defects inherent in most large language models (LLMs), including Copilot. Researchers have repeatedly demonstrated that LLMs can provide factually erroneous and illogical answers, a behavior known as “hallucinations.” This means users cannot fully trust the output of AI assistants like Copilot, Gemini, or Claude, and must independently verify the information.

Another significant issue with LLMs is their vulnerability to prompt injections. Hackers can exploit this flaw by planting malicious instructions in websites, resumes, and emails, which the AI eagerly follows without discerning between valid user prompts and untrusted, third-party content. These vulnerabilities can lead to data exfiltration, malicious code execution, and cryptocurrency theft.

Keep reading

Microsoft Warns Experimental Windows 11 AI Could Install Malware on Your Computer

Microsoft said in an update on Nov. 17 that Windows 11 users who utilize “agentic features” from its AI services should be cautious because the AI agents could potentially download and install malware.

In an alert, Microsoft warned that its AI models could “occasionally hallucinate” and introduce “novel security risks” such as malware because large language models, a type of AI that processes data and generates human-like text, are susceptible to cyberattacks.

“As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs. Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA),” the warning stated. A prompt injection attack is a type of cyberattack where an attacker crafts an input to trick the AI into performing malicious actions.

Microsoft added that in the case of Windows 11’s “experimental” AI services, “malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.”

The AI features are turned off by default and operate only after the user opts into them, the company said.

The agentic AI setting “can only be enabled by an administrator user of the device and once enabled, it’s enabled for all users on the device including other administrators and standard users,” Microsoft said of the AI services.

Keep reading

New U.N. Treaty Decriminalizes AI Child Sexual Abuse Images

Anew United Nations treaty against cybercrime expressly allows for child sexting and the production and dissemination of virtual child pornography. The new treaty will normalize pedophilia and turbocharge pedophilic sexual content online via artificial intelligence and sexbots.

Reports vary on the exact number of initial signatories on Oct. 25 — the U.N. reports 65 signatories, including the European Union, China, and Russia. It will enter into force after 40 countries ratify it.

The treaty has been criticized widely because it is seen as undermining privacy and human rights by requiring countries to cooperate with dictatorial regimes. But the bigger problem is that the new treaty expressly allows virtual child pornography and child sexting, which until now were deemed criminal child pornography under the Optional Protocol of the Convention on the Rights of the Child on the sale of children, child prostitution, and child pornography (“Optional Protocol”), a treaty developed, funded, and backed by the U.S. government for three decades. The way the new treaty undermines the Optional Protocol is quite disingenuous.

The new treaty expressly establishes that countries who ratify the treaty “may” decriminalize some materials until now considered illegal child pornography. According to article 14.2 of the new treaty, countries may opt to decriminalize the production, distribution, and possession of sexualized visual, written, or audio content depicting children, so long as they do not represent an “existing person” or do not “visually depict child sexual abuse or child sexual exploitation.” Articles 14.4 and 15.4 also give states the option to decriminalize sexting between children under all circumstances as well as sexting by a child to an adult when the child is above the age of consent and the sexting is done privately. In such cases, the treaty only requires prosecution where images are shared “non-consensually.”

Keep reading

Google boss says trillion-dollar AI investment boom has ‘elements of irrationality’

Every company would be affected if the AI bubble were to burst, the head of Google’s parent firm Alphabet has told the BBC.

Speaking exclusively to BBC News, Sundar Pichai said while the growth of artificial intelligence (AI) investment had been an “extraordinary moment”, there was some “irrationality” in the current AI boom.

It comes amid fears in Silicon Valley and beyond of a bubble as the value of AI tech companies has soared in recent months and companies spend big on the burgeoning industry.

Asked whether Google would be immune to the impact of the AI bubble bursting, Mr Pichai said the tech giant could weather that potential storm, but also issued a warning.

“I think no company is going to be immune, including us,” he said.

In a wide-ranging exclusive interview at Google’s California headquarters, he also addressed energy needs, slowing down climate targets, UK investment, the accuracy of his AI models, and the effect of the AI revolution on jobs.

The interview comes as scrutiny on the state of the AI market has never been more intense.

Alphabet shares have doubled in value in seven months to $3.5tn (£2.7tn) as markets have grown more confident in the search giant’s ability to fend off the threat from ChatGPT owner OpenAI.

A particular focus is Alphabet’s development of specialised superchips for AI that compete with Nvidia, run by Jensen Huang, which recently reached a world first $5tn valuation.

As valuations rise, some analysts have expressed scepticism about a complicated web of $1.4tn of deals being done around OpenAI, which is expected to have revenues this year of less than one thousandth of the planned investment.

It has raised fears stock markets are heading for a repeat of the dotcom boom and bust of the late 1990s. This saw the values of early internet companies surge amid a wave of optimism for what was then a new technology, before the bubble burst in early 2000 and many share prices collapsed.

This led to some companies going bust, resulting in job losses. A drop in share prices can also hit the value of people’s savings including their pension funds.

In comments echoing those made by US Federal Reserve chairman Alan Greenspan in 1996, warning of “irrational exuberance” in the market well ahead of the dotcom crash, Mr Pichai said the industry can “overshoot” in investment cycles like this.

“We can look back at the internet right now. There was clearly a lot of excess investment, but none of us would question whether the internet was profound,” he said.

“I expect AI to be the same. So I think it’s both rational and there are elements of irrationality through a moment like this.”

His comments follow a warning from Jamie Dimon, the boss of US bank JP Morgan, who told the BBC last month that investment in AI would pay off, but some of the money poured into the industry would “probably be lost”.

But Mr Pichai said Google’s unique model of owning its own “full stack” of technologies – from chips to YouTube data, to models and frontier science – meant it was in a better position to ride out any AI market turbulence.

Keep reading