The Dangers Of AI: Visualizing The Top Risks Companies Face

Companies are rushing to implement AI, but it’s not all smooth sailing.

More than half of businesses say the dangers of AI have led to at least one negative consequence.

But which issues plague businesses the most?

This infographic, via Visual Capitalist’s Jenna Ross, breaks down the most common risks.

It’s a preview of the brand-new executive guide from Terzo and Visual Capitalist, AI’s Illusion of Truth: The Data Behind AI Errors.

Keep reading

Suicides And Delusions: Lawsuits Point To Dark Side Of AI Chatbot

Can an artificial intelligence (AI) chatbot twist someone’s mind to breaking point, push them to reject their family, or even go so far as to coach them to commit suicide? And if it did, is the company that built that chatbot liable? What would need to be proven in a court of law?

These questions are already before the courts, raised by seven lawsuits that allege ChatGPT sent three people down delusional “rabbit holes” and encouraged four others to kill themselves.

ChatGPT, the mass-adopted AI assistant currently has 700 million active users, with 58 percent of adults under 30 saying they have used it—up 43 percent from 2024, according to a Pew Research survey.

The lawsuits accuse OpenAI of rushing a new version of its chatbot to market without sufficient safety testing, leading it to encourage every whim and claim users made, validate their delusions, and drive wedges between them and their loved ones.

Lawsuits Seek Injunctions on OpenAI

The lawsuits were filed in state courts in California on Nov. 6  by the Social Media Victims Law Center and the Tech Justice Law Project.

They allege “wrongful death, assisted suicide, involuntary manslaughter, and a variety of product liability, consumer protection, and negligence claims—against OpenAI, Inc. and CEO Sam Altman,” according to a statement from the Tech Justice Law Project.

The seven alleged victims range in age from 17 to 48 years. Two were students, and several had white collar jobs in positions working with technology before their lives spiraled out of control.

The plaintiffs want the court to award civil damages, and also to compel OpenAI to take specific actions.

The lawsuits demand that the company offer comprehensive safety warnings; delete the data derived from the conversations with the alleged victims; implement design changes to lessen psychological dependency; and create mandatory reporting to users’ emergency contacts when they express suicidal ideation or delusional beliefs.

The lawsuits also demand OpenAI display “clear” warnings about risks of psychological dependency.

Keep reading

Trump’s New EO, “Genesis Mission”, Just Gave The Nation Over To Technocrats, Lock, Stock, and Barrel

By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered:

Section 1.  Purpose.  From the founding of our Republic, scientific discovery and technological innovation have driven American progress and prosperity.  Today, America is in a race for global technology dominance in the development of artificial intelligence (AI), an important frontier of scientific discovery and economic growth.  To that end, my Administration has taken a number of actions to win that race, including issuing multiple Executive Orders and implementing America’s AI Action Plan, which recognizes the need to invest in AI-enabled science to accelerate scientific advancement.  In this pivotal moment, the challenges we face require a historic national effort, comparable in urgency and ambition to the Manhattan Project that was instrumental to our victory in World War II and was a critical basis for the foundation of the Department of Energy (DOE) and its national laboratories.

Editor’s note: America is in a race with itself, but declared with upmost urgency. America’s AI Action Plan was written by Michael J. Kratsios, Assistant to the President for Science and Technology and David O. Sacks, Special Advisor for AI and Crypto. The EO is undoubtedly ghost written by same two Arch-Technocrats. The phrase “accelerate scientific advancement” refers to “accelerationism”, which is part of the Dark Enlightenment. Marc Andreessen wrote in his Techno-Optimist Manifesto: “We believe in accelerationism – the conscious and deliberate propulsion of technological development – to ensure the fulfillment of the Law of Accelerating Returns. To ensure the techno-capital upward spiral continues forever.” – End Editor’s note.

This order launches the “Genesis Mission” as a dedicated, coordinated national effort to unleash a new age of AI‑accelerated innovation and discovery that can solve the most challenging problems of this century.  The Genesis Mission will build an integrated AI platform to harness Federal scientific datasets — the world’s largest collection of such datasets, developed over decades of Federal investments — to train scientific foundation models and create AI agents to test new hypotheses, automate research workflows, and accelerate scientific breakthroughs.  The Genesis Mission will bring together our Nation’s research and development resources — combining the efforts of brilliant American scientists, including those at our national laboratories, with pioneering American businesses; world-renowned universities; and existing research infrastructure, data repositories, production plants, and national security sites — to achieve dramatic acceleration in AI development and utilization.  We will harness for the benefit of our Nation the revolution underway in computing, and build on decades of innovation in semiconductors and high-performance computing.  The Genesis Mission will dramatically accelerate scientific discovery, strengthen national security, secure energy dominance, enhance workforce productivity, and multiply the return on taxpayer investment into research and development, thereby furthering America’s technological dominance and global strategic leadership.

Keep reading

“It’s Utilities Versus Rent” – Data Centers Send Energy Prices Soaring

The surge in data center construction to power today’s AI and cloud computing demands has sent electricity prices skyrocketing over the last few years. And, as Bloomberg reports, it is only getting worse.

With electricity costs now as much as 267% higher compared to five years ago in some parts of the US, fingers are being pointed directly at data center activity for blame. And while some – especially generously funded lobbies – are eager to dissemble and distort, claiming that on the contrary, electricity prices are barely keeping up with inflation and that data centers have little to no impact on electrical bills, the map below shows that more than 70% of the nodes that recorded pricing increases are located within 50 miles of significant data center activity.

Take Nicole Pasture: the Baltimore resident said her utility bills are up 50% over the past year. She is also a judge who rules on rental disputes in the city’s district court and sees people struggling with their power bills.

“It’s utilities versus rent,” she said. “They want to stay in their home, but they also want to keep their lights on.”

New data center construction projects are announced weekly, sometimes every day. Some of the construction timelines have upwards of 100 MW of new data center demand being built only two years from groundbreaking. This has to be contrasted against the rate of new energy generation construction, with the recent vite among PJM Interconnection stakeholders resulting in a failure to even select a plan for how to add data centers to the grid. 

“The voting reflects the nearly impossible challenge of trying to ensure resource adequacy and control ratepayer costs, while also allowing data center development in a market that is already short on generation supply and faces a 5-to-7 year timeline to bring on new large-scale generating resources,” Jon Gordon, a director at Advanced Energy United, a clean energy trade group, said in a bulletin on the meeting.

While some utilities have been able to pass the burden of higher electricity costs onto the owners of the large loads, most of the costs of expanding grid capacity inevitably find their way to consumers.

According to Bloomberg, in northern Virginia, Dominion Energy cited data center demand, inflation and higher fuel costs when asking regulators to raise its customer bills by about $20 a month for the average residential user over the next two years. Dominion also forecasts peak demand would rise by more than 75% by 2039 with data centers. It would be just 10% without.

And it’s only getting worse: with hundreds of gigawatts of future power demand from data centers built by companies like Oracle and Microsoft, Goldman writes that “eight out of the 13 US regional power markets are already at or below critical spare capacity levels.”

Keep reading

Young People Yearning for Socialism and AI Governance Is a Dangerous Proposition

Socialism has failed every time it has been tried because it is impossible for a group of people to implement a centralized governing apparatus capable of effectively organizing society. 

Heretofore, most people have resented and rejected the yoke of socialism, sometimes after long struggles, because collectivism is also antithetical to individual autonomy, free will, human nature, and the pursuit of happiness.

This is not the case in the United States. Today, more than 30 years after the collapse of the Union of Soviet Socialist Republics, young Americans want socialism. 

According to new polling conducted by Rasmussen Reports and The Heartland Institute, which included 1,496 likely voters aged 18 to 39, more than half of young Americans want a democratic socialist to win the White House in 2028.

Likewise, more than half of those polled have a favorable impression of New York City Mayor-elect Zohran Mamdani, and nearly 60 percent support socialist policies like a nationwide rent freeze and government-run grocery stores in every town.

There are many reasons why socialism appeals to young Americans. 

First and foremost, young people are not being taught about the dark history of socialism. Second, they are misled into believing that socialism is superior to free-market capitalism. Third, they are brainwashed into believing that collectivism is more righteous, fair, and just than personal freedom. Fourth, they feel that the American dream is dead and socialism is the solution to the cost-of-living crisis they face.

Nearly three-in-four young likely voters think the cost of housing is at a crisis level, and only 22 percent think they will be better off than their parents. 

At this point in time, given the economic headwinds they face, coupled with their ignorance of socialism, it makes sense that an alarming portion of young Americans want socialism.

Keep reading

Federal Uniformity Sounds Good – Until Big Tech Writes the Rules

Big Tech is jamming preemption of state AI laws into the National Defense Authorization Act (NDAA) at the last minute, but it is unclear if they are fully aware of what it would actually mean. If Congress prohibits states from implementing AI policies that would protect their citizens, it would have far-reaching consequences. True conservatives in Congress must uphold their pro-American values and refuse to support any preemption effort that would ultimately be a coronation of Big Tech as our country’s new rulers.

The United States is the dominant leader in AI on the global stage, and we are in a high-stakes race with adversaries – especially China – to maintain that advantage. We cannot afford to cut corners on oversight and safety while Big Tech develops AI systems at a rapid pace. States are best positioned to test thoughtful safeguards that address the most pressing concerns – from public safety to protecting children. The federal government, by contrast, is lagging behind.

States have been laboratories of democracy on every pressing issue of our time. The issue of AI should not be any different. The federal government is behind the states in terms of simply thinking about the ramifications of AI, and Congress should allow the states to try to find effective policy solutions that cater to our most imminent concerns.

Preemption is a clear violation of the principle of federalism inherent in the 10th Amendment to the Constitution.

Additionally, this provision is a blatant cover for Big Tech. It allows Big Tech to continue to exploit kids, creators, and conservatives. This provision will not empower small businesses and entrepreneurs in AI because they simply don’t have $40 billion in funding to put toward artificial general intelligence (AGI) development and $100 million bonuses to hand out to potential employees.

They are already shut out of the industry by people like OpenAI CEO Sam Altman, who popularized the “patchwork” characterization of state policies that is now being used in smaller circles in support of preemption.

If we intend to outpace China on AI, we must abandon misguided proposals that undermine federalism. The federal government should focus on enacting strong, strategic measures that protect our national security and prevent U.S. technologies and advanced chips from ending up in the wrong hands.

Keep reading

Gmail Explainer: How to Stop Google AI from Snooping Through Your Emails

Google has quietly started accessing Gmail users’ private emails and attachments to train its AI models, requiring manual opt-out to avoid participation. To make the process even trickier, Gmail users have to opt out in two separate places for the change to work. Follow these steps to protect your privacy from Google’s invasive AI endeavors.

Malwarebytes reports that Google has recently implemented changes that enable Gmail to access all private messages and attachments for the purpose of training its AI models. This means that unless users take action to opt out, their emails could be analyzed to improve Google’s AI assistants, such as Smart Compose or AI-generated replies.

The motivation behind this change is Google’s push to enhance Gmail’s features with the company’s Gemini AI, aiming to help users write emails more efficiently and manage their inboxes more effectively. To accomplish this, Google is utilizing real email content, including attachments, to train and refine its AI models. These settings are now reportedly switched on by default, rather than requiring explicit opt-in consent.

As a result, if users do not manually disable these settings, their private messages may be used for AI training without their knowledge. While Google assures strong privacy measures are in place, such as data anonymization and security during the AI training process, those handling sensitive or confidential information may find little comfort in these promises.

To fully opt out of Gmail’s AI training, users must change settings in two separate locations. This article features a guide and images for opting out on desktop, but the selections are very similar if accessing Gmail via the mobile app.

Keep reading

WaPo Defends Data Centers—With Few Disclosures That Amazon Depends on Them

US electricity prices, you may have noticed, keep going up. And in some parts of the country, like here in the DC region, they’re soaring. In Virginia, for example, electricity rates are up 13% this year, an issue Democrats highlighted as they swept back into power in Richmond earlier this month.

Burgeoning electric bills also factored into Democrats’ November wins in New Jersey and Georgia. But let’s stick with Virginia for a moment, where energy-sucking data centers are so plentiful that if northern Virginia’s DC suburbs were to secede, the new country would have more data center capacity than China.

As a result of these data centers, this new country would likely suffer from crippling electric bills. “Wholesale electricity [now] costs as much as 267% more than it did five years ago in areas near data centers. That’s being passed on to customers,” read a recent Bloomberg subhead.

Keep reading

White House crafting executive order to thwart state AI laws

An executive order that could soon be signed by President Donald Trump would thwart states’ artificial intelligence laws by launching legal challenges and withholding federal funding, according to a draft of the order obtained by CNBC on Wednesday.

The draft surfaced shortly after Trump publicly called for a single federal standard on AI “instead of a patchwork of 50 State Regulatory Regimes.”

The draft order would give Attorney General Pam Bondi 30 days to establish an “AI Litigation Task Force” whose sole task is to challenge state AI laws.

Those challenges would be issued “on grounds that such laws unconstitutionally regulate interstate commerce, are preempted by existing Federal regulations, or are otherwise unlawful in the Attorney General’s judgment,” the draft says.

The order also directs Commerce Secretary Howard Lutnick to notify states with contested AI laws that they are ineligible for funds under the federal Broadband Equity Access and Deployment program. BEAD is a more than $42 billion program that has allocated funding to all U.S. states and territories.

The order, which was first reported by The Information, is not yet finalized. A White House official told CNBC that any discussion around it is just speculation until it is officially announced.

As written, the EO would be a major win for the burgeoning AI industry, whose leaders — including Sam Altman’s OpenAI, venture capital firm Andreessen Horowitz and other Silicon Valley titans — oppose an inconsistent state-by-state policy approach.

It would be just as big a blow to state lawmakers across the country who have sought to pass bills that would place guardrails on the nascent technology.

Keep reading

Microsoft Adds AI to Windows Despite ‘Novel Security Risks’

Microsoft’s recent introduction of Copilot Actions, an experimental AI agent integrated into Windows, has sparked criticism from security experts who question the safety of pushing new features before fully understanding and containing their potential risks

Ars Technica reports that Microsoft unveiled Copilot Actions this week, a set of “experimental agentic features” that allow AI to perform various tasks such as organizing files, scheduling meetings, and sending emails. While the company touted the AI agent as an active digital collaborator that enhances efficiency and productivity, it also issued a warning about the security implications of enabling the feature.

Microsoft’s warning reads:

As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs. Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.

Security concerns stem from known defects inherent in most large language models (LLMs), including Copilot. Researchers have repeatedly demonstrated that LLMs can provide factually erroneous and illogical answers, a behavior known as “hallucinations.” This means users cannot fully trust the output of AI assistants like Copilot, Gemini, or Claude, and must independently verify the information.

Another significant issue with LLMs is their vulnerability to prompt injections. Hackers can exploit this flaw by planting malicious instructions in websites, resumes, and emails, which the AI eagerly follows without discerning between valid user prompts and untrusted, third-party content. These vulnerabilities can lead to data exfiltration, malicious code execution, and cryptocurrency theft.

Keep reading