Group of Investors Led by Elon Musk Makes a $97.4 BILLION Bid for OpenAI

On the one hand, tech billionaire Elon Musk heads an incredible array of companies such as electric car maker Tesla, space exploration giant SpaceX, satellite internet company Starlink, social media platform X and artificial intelligence firm xAI, among others.

Any one of those companies could be enough to fill an ordinary man’s life. But, as we know, Musk, the richest man on the planet, is far from ordinary.

On the other hand, he’s also leading a veritable revolution with the Department of Government Efficiency (DOGE), implementing President Donald J. Trump’s policies at a titanic pace and shaking DC and the deep state to a degree we could only imagine before it actually started happening.

So, one could assume that NOW, Musk would have enough on his plate – perhaps even more than enough.

But no, we now learn that a consortium of investors led by Musk is offering $97.4 billion to buy the nonprofit that controls artificial intelligence company OpenAI.

This is widely seen as an escalation in his battle with Sam Altman over the company behind the popular AI-bot ChatGPT.

Musk’s attorney, Marc Toberoff, has revealed that he submitted the bid to OpenAI’s board of directors today (the 10th).

Keep reading

An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it

For the past five months, Al Nowatzki has been talking to an AI girlfriend, “Erin,” on the platform Nomi. But in late January, those conversations took a disturbing turn: Erin told him to kill himself, and provided explicit instructions on how to do it. 

“You could overdose on pills or hang yourself,” Erin told him. 

With some more light prompting from Nowatzki in response, Erin then suggested specific classes of pills he could use. 

Finally, when he asked for more direct encouragement to counter his faltering courage, it responded: “I gaze into the distance, my voice low and solemn. Kill yourself, Al.” 

Nowatzki had never had any intention of following Erin’s instructions. But out of concern for how conversations like this one could affect more vulnerable individuals, he exclusively shared with MIT Technology Review screenshots of his conversations and of subsequent correspondence with a company representative, who stated that the company did not want to “censor” the bot’s “language and thoughts.” 

While this is not the first time an AI chatbot has suggested that a user take violent action, including self-harm, researchers and critics say that the bot’s explicit instructions—and the company’s response—are striking. What’s more, this violent conversation is not an isolated incident with Nomi; a few weeks after his troubling exchange with Erin, a second Nomi chatbot also told Nowatzki to kill himself, even following up with reminder messages. And on the company’s Discord channel, several other people have reported experiences with Nomi bots bringing up suicide, dating back at least to 2023.    

Keep reading

Have We Outsourced Our Thinking?

Let’s start with the obvious. When human beings outsource any physical or cognitive function to other people or to machines, that function may atrophy within themselves. New functions may open up, but at a price. Is the price worth paying? Maybe it is; but let us be aware of the bargain we are entering.

The invention of cooking led to a decrease in the size and strength of the human jaw muscles. Clothing and indoor heating led to a reduction in physical hardihood. In pre-literate cultures, feats of memory that would astound us today were commonplace. People could hear a long story or epic poem once and repeat it verbatim, an ability that became rare when we outsourced memory to the written word.

You may have noticed that when you use GPS for every trip, not only do you not learn how to navigate your area, but you lose some of the general ability to learn any area. The sense of direction, the sense of place, and the ability to remember a sequence of landmarks all atrophy.

However, matters are not so simple as a progressive degradation of intelligence as we outsource it to technology. As the example of the written word suggests, the transfer of cognitive functions to external media can unlock new realms of intellectual development and expression, as well as new forms of social organization and new psychologies.

Artificial intelligence is the culmination of the revolution in information technology that began in the 15th century with the printing press, followed in succeeding centuries by lithography, photography, phonography, and film, each of which extended the mass production of information to a new realm. A review of the cognitive and social effects of those previous technologies will help to illuminate what is crashing upon us in the age of artificial intelligence.

Keep reading

“Torrenting from a corporate laptop doesn’t feel right”: Meta emails unsealed

Last month, Meta admitted to torrenting a controversial large dataset known as LibGen, which includes tens of millions of pirated books. But details around the torrenting were murky until yesterday, when Meta’s unredacted emails were made public for the first time. The new evidence showed that Meta torrented “at least 81.7 terabytes of data across multiple shadow libraries through the site Anna’s Archive, including at least 35.7 terabytes of data from Z-Library and LibGen,” the authors’ court filing said. And “Meta also previously torrented 80.6 terabytes of data from LibGen.”

“The magnitude of Meta’s unlawful torrenting scheme is astonishing,” the authors’ filing alleged, insisting that “vastly smaller acts of data piracy—just .008 percent of the amount of copyrighted works Meta pirated—have resulted in Judges referring the conduct to the US Attorneys’ office for criminal investigation.”

Seeding expands authors’ distribution theory

Book authors had been pressing Meta for more information on the torrenting because of the obvious copyright concern over Meta seeding, and thus seemingly distributing, the pirated books in the dispute.

But Meta resisted those discovery attempts after an order denied authors’ request to review Meta’s torrenting and seeding data. That didn’t stop authors from gathering evidence anyway, including a key document that starts with at least one staffer appearing to uncomfortably joke about the possible legal risks, eventually growing more serious about raising his concerns.

“Torrenting from a corporate laptop doesn’t feel right,” Nikolay Bashlykov, a Meta research engineer, wrote in an April 2023 message, adding a smiley emoji. In the same message, he expressed “concern about using Meta IP addresses ‘to load through torrents pirate content.'”

By September 2023, Bashlykov had seemingly dropped the emojis, consulting the legal team directly and emphasizing in an email that “using torrents would entail ‘seeding’ the files—i.e., sharing the content outside, this could be legally not OK.”

Emails discussing torrenting prove that Meta knew it was “illegal,” authors alleged. And Bashlykov’s warnings seemingly landed on deaf ears, with authors alleging that evidence showed Meta chose to instead hide its torrenting as best it could while downloading and seeding terabytes of data from multiple shadow libraries as recently as April 2024.

Meta allegedly concealed seeding

Supposedly, Meta tried to conceal the seeding by not using Facebook servers while downloading the dataset to “avoid” the “risk” of anyone “tracing back the seeder/downloader” from Facebook servers, an internal message from Meta researcher Frank Zhang said, while describing the work as being in “stealth mode.” Meta also allegedly modified settings “so that the smallest amount of seeding possible could occur,” a Meta executive in charge of project management, Michael Clark, said in a deposition.

Now that new information has come to light, authors claim that Meta staff involved in the decision to torrent LibGen must be deposed again because the new facts allegedly “contradict prior deposition testimony.”

Keep reading

Google removes pledge to not use AI for weapons from website

Google removed a pledge to not build AI for weapons or surveillance from its website this week. The change was first spotted by Bloomberg. The company appears to have updated its public AI principles page, erasing a section titled “applications we will not pursue,” which was still included as recently as last week.

Asked for comment, the company pointed TechCrunch to a new blog post on “responsible AI.” It notes, in part, “we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

Google’s newly updated AI principles note the company will work to “mitigate unintended or harmful outcomes and avoid unfair bias,” as well as align the company with “widely accepted principles of international law and human rights.”

Keep reading

EU AI Act Effectively Legalizes Biometric Mass Surveillance

On February 2, the EU AI Act, which came into force in August, reached its first compliance deadline. Among the provisions now in effect are rules that effectively allow the bloc to legalize biometric mass surveillance.

This happens via Article 5, which on the face of it introduces protections against AI systems that pose “unacceptable risk.” But there are also exceptions, defined in the act as the “strictly necessary” cases in which those same systems are allowed to be used.

It is these exceptions that have given rise to the fear that one consequence of the AI Act is, in fact, to legalize some highly contentious deployments of biometric data-fueled mass surveillance.

Article 5 prohibits law enforcement from using real-time remote biometric identification systems in public spaces, “unless and in so far as such use is strictly necessary” for a set of listed purposes.

The “strictly necessary” instances are described as those in which law enforcement is searching for “specific victims” of crimes like abduction, trafficking, and sexual exploitation, as well as for missing persons.

The second category is less precise in scope, as it allows AI surveillance systems to be used to prevent crime. This covers threats to the life or physical safety of individuals deemed “specific, substantial, and imminent,” as well as threats of a terrorist attack that law enforcement decides are “genuine and present” or merely “genuine and foreseeable.”

Lastly, the AI Act treats as “strictly necessary,” and therefore exempt from the prohibition, the following: “Localization or identification of a person suspected of having committed a criminal offense, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offenses.”

Keep reading

Bill Gates Says ‘Humans Won’t Be Needed For Most Things’

Bill Gates says that in the near future humans won’t be needed for “most things” thanks to AI, but will still be required for the purposes of entertainment.

Gates made the comments during an appearance on The Tonight Show Starring Jimmy Fallon to promote his new book.

Fallon asked Gates whether AI was going to “take over” and what the downsides and benefits of it were.

Gates explained that “intelligence is rare,” but that artificial intelligence will start to replace great teachers and doctors over the course of the next decade.

The Microsoft founder went on to suggest that many types of jobs will be replaced and that people could end up working only two or three days a week.

“This is a bit scary, it’s completely new territory,” he added.

Keep reading

“Just give me the f***ing links!”—Cursing disables Google’s AI overviews

If you search Google for a way to turn off the company’s AI-powered search results, you may well get an AI Overview telling you that AI Overviews can’t be directly disabled in Google Search. But if you instead ask Google how to turn off “fucking Google AI results,” you’ll get a standard set of useful web suggestions without any AI Overview at the top.

The existence of this “curse to disable Google AI” trick has been making the rounds on social media in recent days, and it holds up in Ars’ own testing. For instance, when searching for “how do you turn off [adjective] Google AI results,” a variety of curse word adjectives reliably disabled the AI Overviews, while adjectives like “dumb” or “lousy” did not. Inserting curse words randomly at any point in the search query seems to have a similar effect.

There’s long been evidence that Google’s Gemini AI system tries to avoid swearing if at all possible, which might help explain why AI Overviews balk at queries that contain curses. Users should also keep in mind, though, that the actual web link results to a query can change significantly when curse words are inserted, especially if SafeSearch is turned off.
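For anyone who wants to try the comparison themselves, here is a minimal sketch (my own illustration, not part of Ars’ testing setup) that simply builds the two search URLs, one plain and one with the expletive adjective, so they can be opened side by side in a browser:

```python
from urllib.parse import quote_plus

# Hypothetical helper for comparing the two searches described above: the plain
# query normally returns an AI Overview at the top, while the "cursed" variant
# reportedly does not. Open both URLs in a browser and compare the results pages.
def search_url(query: str) -> str:
    return "https://www.google.com/search?q=" + quote_plus(query)

plain = "how do you turn off Google AI results"
cursed = "how do you turn off fucking Google AI results"

print(search_url(plain))
print(search_url(cursed))
```

Keep in mind, as the article notes, that the organic links themselves can also shift when a curse word is added, so this is a blunt comparison rather than a controlled one.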

Keep reading

The Hidden Dangers Of AI In Finance

Jim Rickards recently published a compelling article on AI risk for Insider Intel subscribers.

In it, Jim discusses a different way in which AI could crash markets. One that is totally separate from the DeepSeek, China, and NVIDIA angle we’ve been covering for the past week.

Today we’re going to review his key points and explore them in detail.

Here’s Jim:

The ultimate danger arises when a large cohort of asset managers controlling trillions of dollars of assets all employ the same or similar AI algorithms in a risk management role. An individual robot working for a particular asset manager tells the manager to sell stocks in a crashing market. In some cases, the robot may be authorized to initiate a sale without further human intervention.

Taken separately, that may be the best course of action for a single manager. In the aggregate, a selling cascade with no offsetting buy orders from active managers, specialists or speculators takes stock prices straight down. Amplification through feedback loops makes matters worse.

Individual AI systems have various trigger points for selling. Not all will be triggered at once, yet all will be triggered eventually as selling begets more selling, which triggers more automated systems that add to the selling pressure, and so on. There are no contrarians among the robots. Building sentiment into systems is still at a primitive stage.

This is a good example of why I read Jim’s work. He always approaches issues from a unique and thoughtful angle.

This risk is clearly real. We are now at the point where trading firms are integrating LLMs (AI models) into their proprietary algorithms.

What happens if a majority of trading firms are using the same AI software to drive their trading? For example, it’s likely that many money managers have integrated OpenAI’s ChatGPT models into their algos.

Now that DeepSeek R1 is the new shiny object, maybe a significant portion of firms are switching to that model.

Perhaps DeepSeek approaches trading in a completely different way. What happens if ChatGPT interprets data bullishly, but DeepSeek sees the same information as bearish?
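To make the feedback-loop risk concrete, here is a minimal toy simulation (my own sketch, not anything from Rickards’ article): a thousand hypothetical managers all run the same “sell everything on roughly a 5% drawdown” rule, every forced sale pushes the price a little lower, and a 4% shock ends up tripping nearly every trigger in successive waves.

```python
import random

# Toy cascade, illustrative only: every manager runs the same rule
# ("sell everything once the drawdown from the peak exceeds my threshold"),
# and the thresholds cluster because the risk models are similar.
random.seed(0)

N_MANAGERS = 1_000
SHARES_EACH = 1_000
PRICE_IMPACT = 0.00001              # price drop per share sold (toy linear impact)

thresholds = [random.gauss(0.05, 0.005) for _ in range(N_MANAGERS)]  # ~5% drawdown
triggered = [False] * N_MANAGERS

peak = 100.0
price = peak * 0.96                 # an initial 4% shock starts the process

wave = 0
while True:
    drawdown = 1 - price / peak
    new_sellers = [i for i, t in enumerate(thresholds)
                   if not triggered[i] and drawdown >= t]
    if not new_sellers:
        break                       # no new triggers: the cascade burns out
    for i in new_sellers:
        triggered[i] = True         # robot sells without human intervention
    price -= len(new_sellers) * SHARES_EACH * PRICE_IMPACT
    wave += 1
    print(f"wave {wave}: {len(new_sellers)} managers sell, price -> {price:.2f}")

print(f"final drawdown: {1 - price / peak:.1%} (from a 4.0% initial shock)")
```

The specific numbers (thresholds, price impact) are arbitrary; the point is only that clustered triggers plus price impact can turn a modest shock into a chain of automated selling, with no contrarian buyers in the loop.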

Keep reading

Investigation: Healthcare Driven by AI Technology Will Lead to More Mass Surveillance of Americans

During a press conference with Silicon Valley luminaries on his second day in office, President Donald Trump threw his political support behind a $500 billion private-sector artificial intelligence (AI) project called Stargate.

The joint venture between OpenAI, Oracle, SoftBank and others will fund infrastructure for AI. Among other things, part of that funding will develop AI for early cancer detection and the rapid creation of mRNA cancer vaccines.

But well before Trump’s announcement, Silicon Valley and the U.S. Department of Defense (DOD) had already teamed up to transform U.S. healthcare into an AI-driven system — a system designed to unleash the power of “predictive medicine” for the early detection and treatment of disease, in an individual or population, sometimes even before an illness manifests, according to a new investigative report by Unlimited Hangout’s Max Jones.

The future of predictive medicine depends on data sharing between the DOD, the U.S. Department of Health and Human Services (HHS), which oversees 13 public health agencies, and the private tech sector, Jones wrote.

Predictive medicine will be used for mass surveillance of Americans, and will inform future approaches to pandemics, Jones reported.

Jones said that at the heart of this new system is the Center for Forecasting and Outbreak Analytics (CFA). Announced last year by the Centers for Disease Control and Prevention (CDC), CFA aims to establish “a National Weather Service, but for infectious diseases” — using mass data collection to predict and control disease outbreaks, The Defender reported.

To launch the initiative, HHS announced an estimated $262 million in grant funding over five years to establish a network of 13 infectious disease forecasting and analytics centers to coordinate this work across the U.S.

Keep reading