A 2025 Assessment: The Emerging Artificial Intelligence Threat

The rise of artificial intelligence (AI) has been alarmingly rapid since 2022. Before then, AI was considered little more than a plaything or a novelty. But now it is transforming businesses, wiping out entire categories of office jobs, and threatening human liberty. The first practical release of the Claude AI code-writing tool has, by itself, completely transformed the global software industry. I'm talking about a buggy-whip level of industry transformation. As a personal illustration, I should mention that my youngest son is now in his third year at a university here in The American Redoubt, studying for a degree in computer science. His prospects for finding a job when he graduates in 2027 have dropped dramatically since his freshman year. I'm now advising him to pursue a career in software design rather than programming. Otherwise, he'll be another buggy whip maker.

AI has also changed the blogosphere. It has been estimated that by the end of 2026, nearly 90 percent of blogs will be written by AI. The Internet will be flooded with AI-generated schlock, and it will become more difficult to tell human-written content from AI-written content. The same is happening with video blogs ("vlogs"). In another year, it will be hard to tell whether a vlog host is a real living, breathing individual or something AI-generated. By the way, don't worry about SurvivalBlog. We shall stalwartly remain one of the last of the Old School blogs. Take that, you Clankers!

I don’t want to sound like a prophet of doom, but the advance of AI troubles me deeply.  We’ve been warning about the threats posed by AI in SurvivalBlog since 2014.

Keep reading

Thousands Of Grok Chats Now Searchable On Google

Hundreds of thousands of conversations that users had with Elon Musk’s xAI chatbot Grok are easily accessible through Google Search, reports Forbes.

Whenever a Grok user clicks the “share” button on a conversation with the chatbot, it creates a unique URL that the user can use to share the conversation via email, text, or on social media. According to Forbes, those URLs are being indexed by search engines like Google, Bing, and DuckDuckGo, which in turn lets anyone look up those conversations on the web. 

Users of Meta's and OpenAI's chatbots were recently affected by a similar problem, and as in those cases, the chats leaked by Grok give us a glimpse into users' less-than-respectable desires: questions about how to hack crypto wallets, dirty chats with an explicit AI persona, and requests for instructions on cooking meth.

xAI’s rules prohibit the use of its bot to “promote critically harming human life” or developing “bioweapons, chemical weapons, or weapons of mass destruction,” though that obviously hasn’t stopped users from asking Grok for help with such things anyway.

According to conversations made accessible by Google, Grok gave users instructions on making fentanyl, listed various suicide methods, handed out bomb construction tips, and even provided a detailed plan for the assassination of Elon Musk.

xAI did not immediately respond to a request for comment. We've also asked when Grok conversations began being indexed by search engines.

Late last month, ChatGPT users sounded the alarm that their chats were being indexed on Google, which OpenAI described as a “short-lived experiment.” In a post Musk quote-tweeted with the words “Grok ftw,” Grok explained that it had “no such sharing feature” and “prioritize[s] privacy.”

Keep reading

Democrats Can’t Take A Joke, So They’re Trying To Outlaw Free Speech

Sen. Amy Klobuchar, D-Minn., wants to make one thing perfectly clear: She has never said Sydney Sweeney has “perfect [breasts].” Nor has she accused her fellow Democrats of being “too fat to wear jeans or too ugly to go outside.”

The Minnesota leftist attempted to clear the air earlier this week in a New York Times opinion piece headlined, “Amy Klobuchar: What I Didn’t Say About Sydney Sweeney.” 

Klobuchar wrote that she is the victim of a hoax, a “realistic deepfake.” Some trickster apparently put together and pushed out an AI-generated video in which Klobuchar appears to make (hilariously) outrageous comments about Sweeney’s American Eagle jeans ad — after liberals charged that the commercial is racist and an endorsement of eugenics. 

‘Party of Ugly People’

The doctored Klobuchar appears to be speaking at a Senate committee hearing, where she demands that Democrats receive "representation." Of course, the satirical video has gone viral.

“If Republicans are going to have beautiful girls with perfect ti**ies in their ads, we want ads for Democrats, too, you know?” the fake Klobuchar asserts in the vid. “We want ugly, fat bitches wearing pink wigs and long-ass fake nails being loud and twerking on top of a cop car at a Waffle House ’cause they didn’t get extra ketchup.”

“Just because we’re the party of ugly people doesn’t mean we can’t be featured in ads, okay?” the AI Amy implores. “And I know most of us are too fat to wear jeans or too ugly to go outside, but we want representation.” 

She appears — and sounds — so sincere. But Klobuchar wants you to know it certainly was not her saying such "vulgar and absurd" things. That's why she's urging Congress to pass laws banning such AI videos, which would be as absurd as social justice warriors calling American Eagle white supremacists for paying a blue-jeans-clad, beautiful actress to say she has great jeans.

Any such law would certainly and rightly be challenged in court. 

Keep reading

How Managers Are Using AI To Hire And Fire People

The role of artificial intelligence (AI) in the workplace is evolving rapidly, and some are warning that using AI to make executive decisions without careful consideration could backfire.

AI is being used more and more in recruitment, hiring, and performance evaluations that could lead to a promotion or termination.

Researchers, legal experts, legislators, and groups such as Human Rights Watch have expressed concern that AI algorithms could become a gateway to ethical quagmires, including marginalization and discrimination in the workplace.

This warning bell isn’t new, but with more managers using AI to assist with important staff decisions, the risk of reducing employees to numbers and graphs also grows.

A Resume Builder survey released in June found that among a group of 1,342 managers in the United States, 78 percent use AI tools to determine raises, 77 percent use them for promotions, 66 percent for layoffs, and 64 percent for terminations.

Keep reading

Microsoft AI chief says it’s ‘dangerous’ to study AI consciousness

AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn’t exactly make them conscious. It’s not like ChatGPT experiences sadness doing my tax return … right?

Well, a growing number of AI researchers at labs like Anthropic are asking when — if ever — AI models might develop subjective experiences similar to living beings, and if they do, what rights they should have.

The debate over whether AI models could one day be conscious — and merit legal safeguards — is dividing tech leaders. In Silicon Valley, this nascent field has become known as “AI welfare,” and if you think it’s a little out there, you’re not alone.

Microsoft’s CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is “both premature, and frankly dangerous.”

Suleyman says that by lending credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems that we're just starting to see around AI-induced psychotic breaks and unhealthy attachments to AI chatbots.

Furthermore, Microsoft’s AI chief argues that the AI welfare conversation creates a new axis of division within society over AI rights in a “world already roiling with polarized arguments over identity and rights.”

Suleyman's views may sound reasonable, but he's at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic's AI welfare program gave some of the company's models a new feature: Claude can now end conversations with humans who are being "persistently harmful or abusive."

Keep reading

We Need To Rethink AI Before It Destroys What It Means To Be Human

America was built on the foundational belief that every man is created in the image of God with purpose, responsibility, and the liberty to chart his own course. We were not made to be managed. We were not made to be obsolete. But that is exactly the future Big Tech is building under the banner of Artificial Intelligence (AI). And if we do not slam the brakes right now, we are going to find ourselves in a world where the human experience is not enhanced by technology but erased by it.

Even Elon Musk, who is arguably one of AI’s most influential innovators, has warned us about the path we are on. In a sit-down with Israeli Prime Minister Benjamin Netanyahu, he laid out the endgame.

AI will lead us to either a future like the Terminator or what he described as Heaven on Earth.

But here is the kicker. That so-called heaven looks a lot like Pixar’s Wall-E, where human beings become obese, lazy blobs who float around while robots do all the work, all the thinking, and frankly all the living.

This may seem like science fiction, but this is what they are actually building.

At last year’s We, Robot event, Musk unveiled Tesla’s new self-driving robotaxi. But what caught my attention was their preview of Optimus, the AI-powered humanoid robot. In their promotional video, Tesla showed Optimus babysitting children, teaching in schools, and even serving as a doctor. Combine that with Tesla’s fully automated Hollywood diner concept, where Optimus is flipping burgers and even working as a waiter and bartender, and you begin to see the real aim. Automation is replacing human connection, service, and care.

So where do humans fit in? That is the terrifying part. Musk and Bill Gates have both pitched the idea of universal basic income to support the people whose traditional employment AI is going to replace. Musk has said there will come a point where no job is needed. You can have a job if you want one for personal satisfaction, but AI will do everything. Gates has proposed taxing robot labor to fund people who no longer work.

The reality is that work is more than a paycheck. It is not just how we survive; it is how we find purpose. It is how we grow, how we learn, and how we take responsibility. Struggle is not a flaw in the system; it is part of what makes us human. The daily grind, the failures, the perseverance, the sense of accomplishment. Strip all of that away, and you have stripped away humanity.

Keep reading

Legal Experts: ChatGPT and AI Models Should Face Medical Review for Human Testing, Weigh Serious Mental Health Risks to Users

When studies are done on human beings, an "Institutional Review Board" or "IRB" is required to review the study and formally approve the research. This is not being done at present for federally funded work with AI/LLM programs, and experts warn that the omission may be significantly harming U.S. citizens.

IRB review is required precisely because the studies are being conducted on human beings.

Critics say that 'Large Language Models' powered by Artificial Intelligence, platforms like "Claude" and "ChatGPT," are engaged in this kind of human research and should be subject to board review and approval.

And they point out that current HHS policies would appear to require IRB review for all federally funded research on human subjects, but that Big Tech companies have so far evaded such review.

The IRB rules (45 C.F.R. § 46.109, "The Common Rule") require all federally funded human-subjects research to go through IRB approval, informed consent, and continuing oversight.

Some courts have recognized that failure to obtain IRB approval can be used as evidence in itself of negligence or misconduct.

Even low-impact and otherwise innocent research requires this kind of professional review to ensure that harmful effects are not inadvertently caused to the human participants. Most modern surveys are required to have an IRB review before they begin.

Already, scientists have raised alarm about the mental and psychological impact of LLM use among the population.

One legal expert who is investigating the potential for a class action against these Big Tech giants on this issue told the Gateway Pundit, “under these rules, if you read them closely, at a minimum, HHS should be terminating every single federal contract at a university that works on Artificial Intelligence.”

This issue came up in 2014, when Facebook was discovered to have been manipulating the News Feeds of roughly 700,000 users to see how they responded. This testing on human subjects may have seemed benign to some, but there was a risk that long-term mental and emotional health was significantly impacted. In 2018, the same complaints were made about the Cambridge Analytica program, in which a private company harvested millions of Facebook user profiles in order to more accurately market to those individuals.

Studies, including this 2019 study in the journal Frontiers in Psychology, have examined the many ethical issues surrounding Facebook's actions, including how it selected whom to test, its intentions in testing on these individuals, and the ethics of doing so on children.

The legal expert pointed out to the Gateway Pundit, "People are using these systems, like ChatGPT, to discuss their mental health. Their responses are being used in their training data. Companies like OpenAI and Anthropic admit user chats may be stored and used for 'training.' Yet under IRB standards, that kind of data collection would usually require informed consent forms explaining risks, yet none are provided."

Keep reading

How Much Energy Does ChatGPT’s Newest Model Consume?

  • The energy consumption of the newest version of ChatGPT is significantly higher than previous models, with estimates suggesting it could be up to 20 times more energy-intensive than the first version.
  • There is a severe lack of transparency regarding the energy use and environmental impact of AI models, as there are no mandates forcing AI companies to disclose this information.
  • The increasing energy demands of AI are contributing to rising electricity costs for consumers and raising concerns about the broader environmental impact of the tech industry.

How much energy does the newest version of ChatGPT consume? No one knows for sure, but one thing is certain – it's a whole lot. OpenAI, the company behind ChatGPT, hasn't released any official figures for the large language model's energy footprint, but academics are working to quantify the energy use per query – and it's considerably higher than for previous models.

Keep reading

AI-powered stuffed animals are coming for your kids

Do A.I. chatbots packaged inside cute-looking plushies offer a viable alternative to screen time for kids?

That’s how the companies selling these A.I.-powered kiddie companions are marketing them, but The New York Times’ Amanda Hess has some reservations. She recounts a demonstration in which Grem, one of the offerings from startup Curio, tried to bond with her. (Curio also sells a plushie named Grok, with no apparent connection to the Elon Musk-owned chatbot.)

Hess writes that this is when she knew, “I would not be introducing Grem to my own children.” As she talked to the chatbot, she became convinced it was “less an upgrade to the lifeless teddy bear” and instead “more like a replacement for me.”

She also argues that while these talking toys might keep kids away from a tablet or TV screen, what they’re really communicating is that “the natural endpoint for [children’s] curiosity lies inside their phones.”

Keep reading

To Share Weights from Neural Network Training Is Dangerous

Some organizations and researchers are sharing neural network weights, particularly through the open-weight model movement. These include Meta's Llama series, Mistral's models, and DeepSeek's open-weight releases, which claim to democratize access to powerful AI. But doing so raises not only security concerns but potentially an existential threat.

For background, I have written a few articles on LLMs and AIs as part of my own learning process in this very dynamic and quickly evolving Pandora's open box field. You can read those here, here, and here.

Once you understand what neural networks are and how they are trained on data, you will also understand what weights (and biases) and backpropagation are. It's basically just linear algebra and matrix-vector multiplication to yield numbers, to be honest. More specifically, a weight is a number (typically a floating-point value – a way to write numbers with decimal points for more accuracy) that represents the strength or importance of the connection between two neurons or nodes across different layers of the neural network.
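To make that concrete, here is a minimal sketch, my own illustration rather than anything from the articles above, of a single neural network layer in Python. The layer sizes, weight values, and input are made up for the example; the point is simply that the layer's output is a matrix-vector multiplication using those weights, plus a bias, passed through a simple nonlinearity.

```python
# A toy layer: 3 inputs feeding 2 neurons. All numbers are invented for
# illustration; a real model has millions or billions of such weights.
import numpy as np

W = np.array([[0.2, -0.5, 0.1],
              [0.7,  0.3, -0.4]])   # weights: strength of each input-to-neuron connection
b = np.array([0.05, -0.10])         # biases: per-neuron offsets

x = np.array([1.0, 0.5, -1.5])      # an input vector (e.g., three feature values)

z = W @ x + b                       # the matrix-vector multiplication described above
a = np.maximum(0.0, z)              # ReLU activation: how strongly each neuron "fires"

print(z)  # raw weighted sums
print(a)  # layer output, which would feed the next layer
```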

I highly recommend watching 3Blue1Brown’s videos to gain a better understanding, and it’s important that you do. 3Blue1Brown’s instructional videos are incredibly good. 

Start with this one.

And head to this one.

The weights are the parameter values a neural network learns from data in order to make predictions or decisions and arrive at a solution. Each weight is an instruction telling the network how important certain pieces of information are, like how much to pay attention to a specific color or shape in a picture. These weights are numbers that get fine-tuned during training in tiny increments (hence all those decimal points), helping the network figure out patterns, such as recognizing a dog in a photo or translating a sentence. They are critical in the 'thinking' process of a neural network.

You can think of the weights in a neural network like the paths of least resistance that guide the network toward the best solution. Imagine water flowing down a hill, naturally finding the easiest routes to reach the bottom. In a neural network, the weights are adjusted during training on data sets to create the easiest paths for information to flow through, helping the network quickly and accurately solve problems, like recognizing patterns or making predictions, by emphasizing the most important connections and minimizing errors.
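Here is an equally small sketch, again my own illustration with made-up numbers, of what "adjusting the weights during training" looks like in practice. Gradient descent repeatedly nudges a weight in whichever direction reduces the prediction error, which is the water finding the easiest route downhill in the analogy above.

```python
# Toy training loop: learn a single weight so that y ≈ w * x.
# The target relationship (y = 2x) and the learning rate are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=20)   # invented training inputs
y = 2.0 * x                           # the pattern we want the network to discover

w = 0.0                               # untrained weight
learning_rate = 0.1

for step in range(200):
    y_pred = w * x                    # forward pass: make predictions
    error = y_pred - y                # how wrong the predictions are
    grad = np.mean(2.0 * error * x)   # gradient of the mean squared error w.r.t. w
    w -= learning_rate * grad         # nudge the weight "downhill" to reduce the error

print(round(w, 3))                    # settles near 2.0, the path of least resistance
```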

If you're an electronic musician, think of weights like the dials on your analog synth that allow you to tune into the right frequency or sound to, say, mimic a sound you want to recreate, or in fact create a new one. If you're a sound guy, you can also think of it like adjusting the knobs on your mixer to balance different instruments.

Keep reading