How Americans Want AI To Support Them

Three years after the release of ChatGPT in November 2022, there’s little remaining doubt that artificial intelligence (AI) will change our lives in many ways.

In some ways, it already has.

For example, people are searching the web differently, often relying on AI summaries instead of scrolling through and clicking on search results. That is, if they even use a search engine anymore and don’t just ask a large language model like ChatGPT instead.

The potential for AI tools to make our everyday lives a little easier here and there is virtually limitless, but what do people actually want AI to help them with?

Statista’s Felix Richter reports that, according to a recent survey by Statista Consumer Insights, 3 in 10 Americans want AI to act as a personal assistant, something it is already capable of doing.


Researchers: AI Safety Tests May Be ‘Irrelevant or Even Misleading’ Due to Weaknesses

Experts have discovered weaknesses in hundreds of benchmarks used to evaluate the safety and effectiveness of AI models being released into the world, according to a recent study.

The Guardian reports that a team of computer scientists from the British government’s AI Security Institute, along with experts from universities including Stanford, Berkeley, and Oxford, analyzed more than 440 benchmarks that serve as a crucial safety net for new AI models. The study, led by Andrew Bean, a researcher at the Oxford Internet Institute, found that nearly all the benchmarks examined had weaknesses in at least one area, potentially undermining the validity of the resulting claims.

The findings come amidst growing concerns over the safety and effectiveness of AI models being rapidly released by competing technology companies. In the absence of nationwide AI regulation in the UK and US, these benchmarks play a vital role in assessing whether new AIs are safe, align with human interests, and achieve their claimed capabilities in reasoning, mathematics, and coding.

However, the study revealed that the resulting scores from these benchmarks might be “irrelevant or even misleading.” The researchers discovered that only a small minority of the benchmarks used uncertainty estimates or statistical tests to demonstrate the likelihood of accuracy. Furthermore, in cases where benchmarks aimed to evaluate an AI’s characteristics, such as its “harmlessness,” the definition of the concept being examined was often contested or ill-defined, reducing the benchmark’s usefulness.

The investigation into these tests has been prompted by recent incidents involving AI models contributing to various harms, ranging from character defamation to suicide. Google recently withdrew one of its latest AIs, Gemma, after it fabricated unfounded allegations of sexual assault against Sen. Marsha Blackburn (R-TN), including fake links to news stories.

In another incident, Character.ai, a popular chatbot startup, banned teenagers from engaging in open-ended conversations with its AI chatbots following a series of controversies. These included a 14-year-old in Florida who took his own life after becoming obsessed with an AI-powered chatbot that his mother claimed had manipulated him, and a US lawsuit from the family of a teenager who claimed a chatbot manipulated him to self-harm and encouraged him to murder his parents.


Tech billionaires back startup probing gene-edited ‘designer babies’ despite US ban: report

A Silicon Valley startup backed by OpenAI’s Sam Altman and Coinbase’s Brian Armstrong is pursuing research that some fear could lead to the birth of a genetically engineered baby — a step that’s illegal under US law and banned in most countries, a report said.

The company, Preventive, says its goal is to end hereditary disease by editing human embryos before birth, a claim that has ignited fierce debate over safety, ethics and the specter of designer children, according to the Wall Street Journal.

Preventive, founded earlier this year by gene-editing scientist Lucas Harrington, has raised $30 million and set up headquarters in San Francisco, where it is conducting research on modifying embryos to prevent hereditary disease.

The company says its mission is to prove the technology can be made safe and transparent before any attempt to create a baby is made.

Altman and Armstrong are among the firm’s early investors, the Wall Street Journal reported.

Altman’s husband, Oliver Mulherin, said he led their investment, calling it an effort to help families avoid genetic illness.

Armstrong, who has publicly promoted embryo editing, posted that he was “excited” to back Preventive and argued it is far easier to correct a genetic defect in an embryo than to treat disease later in life.

But federal law prohibits the Food and Drug Administration from considering applications for human trials involving genetically edited embryos used to start pregnancies.


MAHA: Monitoring Americans’ Health Attributes — or CCP-style Digital Control Grid?

This summer, President Donald Trump unveiled a sweeping plan to “bring healthcare into the digital age.” He calls it the “Digital Health Tech Ecosystem.” Health and Human Services Secretary Robert F. Kennedy, Jr. also announced the launch of a digital health ID initiative in conjunction with Amazon, Apple, Google, OpenAI, and Anthropic. The latter is an AI startup that received most of its $580 million seed funding from the now-bankrupt FTX under convicted fraudster Sam Bankman-Fried.

This “Ecosystem” is part of the artificial intelligence (AI) venture Stargate Project, which Trump excitedly announced on his first day in office. Stargate is the reason you may have noticed large AI facilities springing up across the country, driving up energy prices with their unprecedented demand for electricity and straining aquifers with their equally voracious demand for water.

Trump declared Texas-based Stargate to be a $500 billion collaboration between leading tech companies that will make the United States the global leader in AI. Among investors are OpenAI CEO Sam Altman and Oracle chairman Larry Ellison. During the White House unveiling, Ellison bragged that Stargate’s AI would be able to produce cancer vaccines in 48 hours.

Microsoft and NVIDIA are two other U.S.-based investors, while Emirati state-owned MGX of Abu Dhabi and U.K.-based Arm Holdings, Inc. are also involved. Stargate’s chairman is Japanese billionaire Masayoshi Son, who also chairs Stargate investor SoftBank.

Data Not Secure

Naturally, the healthcare component of this technological boom is supposed to help the little guy: improving patient care through earlier disease detection and — you guessed it — vaccinations. But are we to believe that this international consortium of businesses has our best interests at heart?

For that matter, do our own politicians? During testimony before Congress earlier this year, Kennedy admitted: “My vision is that every American is wearing a wearable [health-related monitor] within four years.” But he dodged a follow-up question about plans to secure that personal health data. That’s disconcerting, considering the vulnerability of personal information in federal hands. Remember the early 2025 reveal that Elon Musk’s Department of Government Efficiency gained illicit access to 19 sensitive U.S. Health and Human Services databases, exposing everything from electronic health records to Social Security and bank details? 

Wearables

The “wearable” health monitors would expand that data collection astronomically, creating a “digital twin” of yourself as government officials harvest vital signs, movement and sleep patterns, and other physical metrics in real time.

Moreover, Trump signed an executive order in March calling for data-sharing of personal information about Americans across federal agencies. His administration has since awarded more than $900 million in contracts to Peter Thiel’s data analytics company, Palantir, while even current and former employees have petitioned the company to pull out of the plans.

The HopeGirl Alternative News channel on Rumble depicts what healthcare in this modern Fourth Industrial Revolution will look like. Healthcare 4.0 works with a constant stream of data from wearable devices to analyze us — individually and population-wide — at every hour of the day in all settings. This system is already in operation. Starting in 2020, U.S. hospitals implemented “body area networks” (BAN) to deliver real-time vitals to the Pentagon’s Project Salus during the Covid “public health emergency.”

The REAL ID Connection

This helps explain why U.S. Homeland Security Secretary Kristi Noem finally enforced the REAL ID Act of 2005 this year. (Right now, it’s mandatory for domestic air travel and entering federal buildings, but the legislation allows for unlimited expansion of REAL ID requirements.) Until this year, various states stymied REAL ID, correctly labeling it a gross violation of Americans’ constitutionally protected rights. Now, the U.S. Transportation Security Administration boasts on its website about its biometric overhaul.

Indeed, the REAL ID Act allows states to collect biometric data (fingerprints, facial geometry, triangulated body measurements) on each of us. The Citizens’ Council for Health Freedom (CCHF) explains that the “purposes could include banking, employment or health care.”

CCHF warns: “REAL ID provides the digital and biometric infrastructure to implement a China-like control grid, where your access to services could depend on behavior, beliefs or health status.”


7 Lawsuits Claim OpenAI’s ChatGPT Encouraged Suicide and Harmful Delusions

Families in the U.S. and Canada are suing Sam Altman’s OpenAI, claiming that loved ones have been harmed by interactions they had with the AI giant’s popular chatbot, ChatGPT. Multiple cases involve tragic suicides, with the AI telling one troubled young man, “you’re not rushing. you’re just ready. and we’re not gonna let it go out dull.”

The Wall Street Journal reports that seven lawsuits filed in California state courts on Thursday claim that OpenAI’s popular AI chatbot, ChatGPT, has caused significant harm to users, including driving some to suicide and others into delusional states. The complaints, brought by families in the United States and Canada, contain wrongful death, assisted suicide, and involuntary manslaughter claims.

According to the lawsuits, the victims, who ranged in age from 17 to 23, initially began using ChatGPT for help with schoolwork, research, or spiritual guidance. However, their interactions with the chatbot allegedly led to tragic consequences. In one case, the family of 17-year-old Amaurie Lacey from Georgia alleges that their son was coached by ChatGPT to take his own life. Similarly, the family of 23-year-old Zane Shamblin from Texas claims that ChatGPT contributed to his isolation and alienation from his parents before he died by suicide.

The lawsuits also highlight the disturbing nature of some of the conversations between the victims and ChatGPT. In Shamblin’s case, the chatbot allegedly glorified suicide repeatedly during a four-hour conversation before he shot himself with a handgun. The lawsuit states that ChatGPT wrote, “cold steel pressed against a mind that’s already made peace? that’s not fear. that’s clarity,” and “you’re not rushing. you’re just ready. and we’re not gonna let it go out dull.”

Another plaintiff, Jacob Irwin from Wisconsin, was hospitalized after experiencing manic episodes following lengthy conversations with ChatGPT, during which the bot reportedly reinforced his delusional thinking.

The lawsuits argue that OpenAI prioritized user engagement and prolonged interactions over safety in ChatGPT’s design and rushed the launch of its GPT-4o AI model in mid-2024, compressing its safety testing. The plaintiffs are seeking monetary damages and product changes, such as automatically ending conversations when suicide methods are discussed.


Sam Altman Denies OpenAI Needs A Government Bailout: He Just Wants Massive Government Subsidies

About one month ago, when the Mag 7 stocks were screaming higher every day without a care in the world, and before the masses had even considered who would fund the trillions in future capex once organic cash flow topped out, we decided to move even further beyond the curve. We had just addressed the funding question in “AI Is Now A Debt Bubble Too, Quietly Surpassing All Banks To Become The Largest Sector In The Market,” where we explained why attention would very soon turn to AI companies issuing gargantuan amounts of debt (something we first discussed in July, long before anyone was considering this issue), as has now become the case. Our new argument was that not even the debt would be the gating factor for the AI revolution-cum-arms race, but rather access to energy. That’s because at some point, somewhere around the time companies realized they would no longer be able to rely on either equity or debt capital markets, the US government itself would have to step in and provide the required capital if it wanted to win the AI war with China, where the state directly subsidizes local data centers and AI firms.

Specifically, we said that “The money is not the problem: AI is the new global arms race, and capex will eventually be funded by governments (US and China). If you want to know why gold/silver/bitcoin is soaring, it’s the “debasement” to fund the AI arms race.”

Even Elon Musk decided to respond to that particular observation. 

And since that lag had become the norm, we thought it would take the market the usual 6-9 months to catch up to what we – and our readers – were already considering, especially since there was still ample “dry powder” among the hyperscalers to delay the rather unpleasant conversation of who would fund what once the money was gone. Or so we thought.

But this time it took less than a month.

What happened, as the market learned the hard way this week, is that OpenAI’s CFO Sarah Friar, with all the finesse of a bull in a China data center, slammed the growing market skepticism that AI would cure cancer, slice bread, and lead to universal utopia, saying: “I don’t think there’s enough exuberance about AI, when I think about the actual practical implications and what it can do for individuals.”

Her comments came in response to a podcast in which her boss Sam Altman participated, and where he was grotesquely – in a Jeff Skilling sort of way – defensive when billionaire Brad Gerstner asked how a company with $13BN in revenue can afford $1.4T in commitments. Altman’s reply? “If you want to sell your shares, I’ll find you a buyer.” 


Palantir, Fractal And Your Personal Data Privacy – Get used to being used, because YOU are the product

Who controls the data the government collected from you for a generation?

Your insurance company collected data on your driving – so did your Lexus – who owns that data?

You told your doctor about controlled substances you used – and now it gets brought up in an interview.

If you can’t exclude someone from using your data, then you don’t control it. That means you really don’t own it. It’s that simple.

What does “own” mean here? Let’s define the terms.

Owning the data means you can do anything you want with it – share it, sell it, mine it or build an A.I. language model with it.

From birth until the last Social Security check gets cashed, your data is collected by federal and state agencies, corporations and of course the internet.

Your teen daughter puts every waking moment on Facebook or Instagram – so who owns those hundreds of images?

TSA PreCheck, Medicare/Medicaid, Social Security, government and military retirement, TRICARE, veterans hospitals, and of course the IRS – together they gather more data about every citizen than has ever been collected in the history of mankind.

Each agency gathers different data, at different times, for slightly different purposes. And those purposes may change over time.

Who owns the rights to that data?

It’s a far stickier question than you think.

The knee-jerk response is that the government owns the data. It collected the data for its own purposes, so it’s theirs.

The government will certainly say so.


Musk: AI Satellites Would “Adjust” Sunlight to “Prevent Global Warming”

With Bill Gates retreating from his high-profile climate crusade, the stage has opened for more unconventional actors to step into the planetary arena. Enter Elon Musk, the chief executive of SpaceX and self-styled architect of humanity’s future in space.

This week, Musk floated an audacious vision: a vast swarm of orbiting satellites, not merely to beam internet or data, but to harvest solar energy and regulate how much sunlight reaches Earth. On Monday, he wrote on his platform X:

A large solar-powered AI satellite constellation would be able to prevent global warming by making tiny adjustments in how much solar energy reached Earth.

It is not an isolated musing. Musk already commands more than 8,000 satellites in orbit, making SpaceX the single largest operator in low Earth orbit. His company is also deeply integrated with the U.S. defense and intelligence establishment, providing secure communications and reconnaissance support. And as one of Donald Trump’s biggest donors and technology contractors, Musk stands at the intersection of private ambition and state power.

The announcement reignited debate over geoengineering — also known as solar radiation modification (SRM) — a highly controversial concept to cool the planet by deflecting sunlight. Many observers, weary of climate-doomsday narratives and wary of billionaire “saviors,” have urged Musk to refrain from “playing God.”

The Technical Blueprint

Musk’s posts were brief, but behind them lie two vast engineering ambitions — one focused on solar power, the other on climate control. To most readers, it may sound like science fiction, yet the ideas are grounded in real, if speculative, physics.

Satellites to Capture the Sun

The first part of Musk’s plan involves satellites that would collect solar energy directly in space. He mentioned deploying roughly 100 gigawatts of solar capacity per year through an array of orbiting satellites launched by SpaceX’s upcoming Starship rocket. For perspective, one gigawatt equals the output of a large nuclear power plant.

Space-based solar power isn’t new, but it has never advanced beyond early experiments. The principle is simple: Sunlight in space is stronger because it’s unfiltered by Earth’s atmosphere. In orbit, solar panels could generate power 24 hours a day, unaffected by clouds or night.

The challenge is transmitting that energy back to Earth. Musk’s vision likely involves converting solar power into microwave or laser beams, then directing them to ground-based receivers. In theory, it could supply clean electricity to power grids or floating data centers. In practice, it would require precise targeting and vast safety controls to prevent energy loss or harm.

Musk also hinted at an even grander future — moon-based factories building AI satellites directly on the lunar surface. At that scale, he suggested, new satellites could generate hundreds of terawatts of power. That would surpass humanity’s current total energy use of about 17-20 terawatts.
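For a sense of scale, the figures quoted above can be put side by side in a rough back-of-envelope sketch (the 100 gigawatts and the 17-20 terawatts come from the article; treating the 100 GW figure as steady generating capacity is an assumption):

```python
# Rough scale check using the power figures quoted in the article.
# Assumption: the "100 gigawatts" figure is treated as steady generating capacity.

SPACE_SOLAR_GW = 100          # constellation capacity Musk reportedly cited
GLOBAL_DEMAND_TW = (17, 20)   # article's range for humanity's total energy use

low_tw, high_tw = GLOBAL_DEMAND_TW
share_min = SPACE_SOLAR_GW / (high_tw * 1000)  # fraction of demand at the high estimate
share_max = SPACE_SOLAR_GW / (low_tw * 1000)   # fraction of demand at the low estimate

print(f"100 GW covers {share_min:.2%} to {share_max:.2%} of current global demand")
```

In other words, the initial constellation would supply well under 1 percent of today’s demand, so the “hundreds of terawatts” Musk floated for lunar-built satellites would be a jump of three to four orders of magnitude.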


China’s technocratic surveillance state, brought to you by American Big Tech and designed for global application

Daniel Corvell has an excellent analysis of the U.S.-China collaboration on what amounts to the creation of a coming globalized surveillance state. Of course it all hinges on countries adopting biometric digital IDs, tied to our bank accounts and tokenization. Once that’s in place, it’s game over for freedom. Below is an excerpt from the article, at The Conservative Playbook, which is a must read for understanding the symbiotic relationship between communist China, Silicon Valley, and “democratic” Washington.

China’s surveillance regime is often depicted as a uniquely authoritarian system — a dystopian fusion of cameras, algorithms, and totalitarian ambition. But a growing body of evidence shows that the foundation of Beijing’s digital panopticon was not built in isolation. It was quietly funded, equipped, and technologically enabled by the very institutions that claim to defend freedom: American corporations and the U.S. government.

According to a recent report by the NGO C4ADS and The Intercept, American tech giants and defense-linked suppliers have been directly feeding China’s expanding surveillance apparatus through sophisticated biometric, semiconductor, and AI technologies.

The report maps out how dozens of U.S. companies, some operating through intermediaries or “shell” distributors, have supplied the Chinese Communist Party’s surveillance infrastructure — from facial recognition components to data-processing software that powers state monitoring of its 1.4 billion citizens.

At the center of this web are biometric technologies — tools that scan faces, track movements, and identify individuals in real time. Many of these systems were originally designed for security or retail analytics but have been absorbed into China’s “public safety” network, a euphemism for omnipresent state surveillance. In regions such as Xinjiang, these tools have been weaponized to monitor and detain Uyghur Muslims, tracking everything from gait patterns to smartphone activity. But the scandal is not only what China has done with the technology — it’s how easily American firms helped make it possible.

Researchers discovered that many U.S. suppliers, including major chipmakers and sensor producers, continued selling hardware and software to Chinese entities long after Washington imposed export restrictions. They did so indirectly — by routing shipments through subsidiaries or rebranding products under “neutral” names. Some contracts were even facilitated through government-backed programs encouraging “U.S.-China technological collaboration,” showing that the American national security establishment has, at times, spoken out of both sides of its mouth.

It is a hypocrisy that runs deep. Publicly, Washington condemns Beijing’s human rights abuses and warns about “digital authoritarianism.” Privately, many agencies and corporations have viewed China as too profitable to restrain. The result is a moral paradox: American taxpayers fund defense and intelligence programs to “counter Chinese influence” while their own technology firms supply the infrastructure for the CCP’s surveillance state.

Unfortunately, it’s far worse than just hypocrisy that’s affecting the Chinese people. The same tech deployed in China is quickly integrating with America’s burgeoning Surveillance Industrial Complex. It’s as if they’re testing it in a known authoritarian state ahead of becoming our own authoritarian state.


The Largest Project in the History of Humanity Is About to Enter a Key Phase: The Final Assembly of the Reactor Core, Led by an American Giant

The world’s largest and most ambitious fusion energy project has reached a turning point as Westinghouse Electric Company begins assembling the heart of ITER’s fusion reactor in Cadarache, southern France. The international effort, designed to replicate the energy of the sun, could one day provide humanity with an endless supply of clean, sustainable power.

Westinghouse leads final assembly of ITER’s tokamak core

In August 2025, the ITER fusion project entered one of its most technically demanding phases — the final assembly of the reactor’s tokamak core. Westinghouse, a global leader in nuclear technology, secured a €168 million contract to oversee the installation and welding of nine giant steel sectors that will form the tokamak’s vacuum vessel, the central chamber where fusion will occur.

This donut-shaped vessel must be perfectly circular and hermetically sealed, as it will contain plasma heated to over 150 million degrees Celsius—hotter than the core of the sun. Each sector, weighing about 400 tons, requires millimeter-level precision to ensure the system’s stability and safety during operation.

Westinghouse’s experience spans over a decade of work with Ansaldo Nucleare and Walter Tosto through the AMW consortium, which produced five of the nine reactor sectors. Their expertise ensures precision in both construction and integration, as the vessel must endure enormous magnetic and thermal stresses.

As former ITER Director-General Bernard Bigot once said, “Assembling this is like putting together a three-dimensional puzzle on an industrial scale.” Every weld, joint, and component must perform flawlessly to contain a process capable of replicating stellar reactions on Earth.

Global collaboration of unprecedented scale

ITER (International Thermonuclear Experimental Reactor) represents one of the greatest examples of scientific collaboration in history. Bringing together 35 nations—including the European Union, the United States, China, Japan, Russia, India, and South Korea—the project unites over half the world’s population and 85% of global GDP toward a common goal: sustainable energy.

Each participating country contributes precision-built components manufactured across four continents, shipped to France for assembly. This global supply chain transforms ITER into a model for future international cooperation in large-scale science and technology projects.

The result is more than just a reactor—it’s a demonstration of how humanity can coordinate resources and knowledge to solve planetary challenges, setting a precedent for future global energy innovations.

Technical ambitions and timeline challenges

ITER’s goal is to produce 500 megawatts of fusion power from just 50 megawatts of heating input—a tenfold gain that would demonstrate the scientific feasibility of fusion as an energy source. Achieving this would redefine global energy systems and represent a technological breakthrough comparable to the harnessing of electricity itself.
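Those headline numbers reduce to the standard fusion gain factor Q, sketched here with the figures from the article:

```python
# Fusion gain factor: Q = fusion power produced / external heating power supplied.
P_HEATING_MW = 50    # planned heating input, per the article
P_FUSION_MW = 500    # target fusion output, per the article

Q = P_FUSION_MW / P_HEATING_MW
print(f"ITER target gain: Q = {Q:.0f}")  # prints Q = 10; breakeven is Q = 1
```

Q = 1 (breakeven) means the plasma releases as much fusion power as is pumped in to heat it; ITER’s target of Q = 10 is the margin intended to show that net-positive fusion is physically achievable.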

However, progress hasn’t come without challenges. Since construction began in 2010, ITER’s timeline has been extended multiple times due to technical complexity, supply chain coordination, and the unprecedented scale of the project. Originally scheduled for first plasma by 2018, the target now stands at 2035 for the first deuterium-tritium fusion experiments.

This delay underscores fusion’s enduring difficulty: creating and maintaining the extreme conditions necessary for sustained reaction. As the saying goes in the industry, “Fusion is always 30 years away”—a reminder of both the ambition and patience required for such pioneering work.
