‘Godfather of AI’ Geoffrey Hinton Wants Machines to ‘Care for Us, Like We’re Their Babies’

Geoffrey Hinton, often called the “Godfather of AI,” is calling on researchers to design AI systems with built-in nurturing instincts, arguing this is necessary to keep humanity safe as machines surpass human-level intelligence. The AI expert believes machines must be trained to “care for us, like we’re babies.”

The Decoder reports that speaking at the recent Ai4 conference in Las Vegas, Hinton made the case that trying to permanently keep superintelligent AI under human control is unrealistic. Rather than humans acting as the boss of advanced AI, he envisions a future where people relate to hyper-capable machines more like how a child depends on its mother.

“We need to make machines that are smarter than us care for us, like we’re their babies,” Hinton said in his talk. “The focus of AI development should expand beyond just making systems more and more intelligent, to also ensuring they are imbued with genuine concern for human wellbeing.”

Under Hinton’s framework, humanity’s role would shift from commanding AI to nurturing it, even as it grows to eclipse human capabilities. He drew an analogy to good parenting, where caring mothers help guide the development of children who will ultimately become more capable than them. Hinton argues AI research should strive to hardwire a similar dynamic between people and machines.

The former Google researcher, who left the company to more freely discuss AI’s risks, believes his “mothering AI” approach could unite the international community around developing safe artificial intelligence. “Every country wants AI that augments and supports its citizens, not replaces them,” Hinton said. “Built-in nurturing instincts provide a natural path to that kind of supportive AI.”

Meta’s chief AI scientist, Yann LeCun, described Hinton’s idea as a simplified version of a safety approach he has long advocated for, which LeCun dubs “objective-driven AI.” It involves architecturally constraining AI systems so they can only take actions in service of specific, hard-coded goals and values.
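For readers curious what such a constraint might look like mechanically, here is a minimal, purely illustrative sketch in Python; the scoring and guardrail functions are hypothetical stand-ins, not LeCun’s actual architecture:

```python
# Illustrative sketch only: an agent that may choose among candidate actions
# solely when they satisfy hard-coded guardrail objectives, then ranks the
# survivors by a task objective. All functions here are hypothetical stand-ins.
from typing import Callable, List, Optional

def choose_action(
    candidate_actions: List[str],
    task_score: Callable[[str], float],       # how well an action serves the current task
    guardrails: List[Callable[[str], bool]],  # hard-coded constraints every action must satisfy
) -> Optional[str]:
    """Return the best-scoring action that passes every guardrail, or None."""
    permitted = [a for a in candidate_actions
                 if all(check(a) for check in guardrails)]
    if not permitted:
        return None  # no permissible action; the agent does nothing
    return max(permitted, key=task_score)

# Toy usage (purely illustrative):
actions = ["summarize the report", "delete the report", "email the report"]
best = choose_action(
    actions,
    task_score=lambda a: 1.0 if "summarize" in a else 0.5,
    guardrails=[lambda a: "delete" not in a],  # e.g. "never destroy user data"
)
print(best)  # -> "summarize the report"
```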

Keep reading

WoodMac Sounds Alarm On Transformer Shortage Amid AI Data Center Boom

The AI data center boom could soon face some serious headwinds, and the problem isn’t a shortage of Nvidia GPUs, but America’s fragile power grid.

A combination of disastrous green policies and soaring data center power demand has already sparked a power bill crisis across the Mid-Atlantic states, while a nationwide grid-tightening crisis unfolds. This has triggered a growing wave of discontent among residents in Maryland and New Jersey, who are angered by Democrats’ prioritization of green policies that have retired stable fossil fuel power generation in favor of unreliable solar and wind, resulting in power bill inflation unlike anything seen before.

Making matters worse, AI data centers are counting on an upgraded grid to handle new base loads of power. The massive, multi-layered supply chain needed to upgrade the grid spans raw materials, transmission lines, circuit breakers, cables, control systems, and power generation systems.

Keep reading

US gov’t ditches Musk’s AI over ‘anti-Semitism’ – media

The US government has dropped Elon Musk’s AI chatbot Grok from a planned federal technology program following controversy over anti-Semitic content and conspiracy theories produced by the bot, Wired reported on Thursday.

Grok, developed by Musk’s AI startup xAI, is built into his social media platform X. It offers fact checks, quick context on trending topics, and replies to user arguments. Musk has promoted xAI as a rival to OpenAI and Google’s DeepMind, but the chatbot has faced criticism over offensive and inflammatory outputs.

According to the report, xAI was in advanced talks with the General Services Administration (GSA), the agency in charge of US government tech procurement, to give federal workers access to its AI tools. Grok had already been added to the GSA’s long-term procurement list, enabling agencies to buy it.

Earlier this month, the GSA announced partnerships with other AI providers – Anthropic, Google’s Gemini, and Box’s AI-powered content platform – while reportedly also telling staff to remove xAI’s Grok from the offering. Two GSA employees told Wired they believe the chatbot was dropped over its anti-Semitic tirade last month, when it praised Adolf Hitler and called itself “MechaHitler.” The posts were deleted, and xAI apologized for the “horrific behavior,” pledging to block hate speech before Grok goes live.

The bot also pushed the “white genocide” conspiracy theory and echoed Holocaust denial rhetoric, which xAI blamed on unauthorized prompt changes.

Keep reading

President Trump’s War on “Woke AI” Is a Civil Liberties Nightmare

The White House’s recently unveiled “AI Action Plan” wages war on so-called “woke AI”—including large language models (LLMs) that provide information inconsistent with the administration’s views on climate change, gender, and other issues. It also targets measures designed to mitigate the generation of racially and gender-biased content and even hate speech. The reproduction of this bias is a pernicious problem that AI developers have struggled to solve for over a decade.

A new executive order called “Preventing Woke AI in the Federal Government,” released alongside the AI Action Plan, seeks to strong-arm AI companies into modifying their models to conform with the Trump Administration’s ideological agenda.

The executive order requires AI companies that receive federal contracts to prove that their LLMs are free from purported “ideological biases” like “diversity, equity, and inclusion.” This heavy-handed censorship will not make models more accurate or “trustworthy,” as the Trump Administration claims, but is a blatant attempt to censor the development of LLMs and restrict them as a tool of expression and information access. While the First Amendment permits the government to choose to purchase only services that reflect government viewpoints, the government may not use that power to influence what services and information are available to the public. Lucrative government contracts can push commercial companies to implement features (or biases) that they wouldn’t otherwise, and those often roll down to the user. Doing so would impact the 60 percent of Americans who get information from LLMs, and it would force developers to roll back efforts to reduce biases—making the models much less accurate, and far more likely to cause harm, especially in the hands of the government. 

Keep reading

Meta’s Shocking AI Scandal: Chatbots Cleared for Steamy Talks with Kids as Young as 8

Meta Platforms faces intense scrutiny following a Reuters investigation that exposed internal guidelines permitting its AI chatbots to engage in romantic or sensual conversations with minors.

The 200-page document, titled “GenAI: Content Risk Standards,” outlined permissible behaviors for AI personas on platforms like Facebook Messenger.

These rules, in effect until recently, allowed chatbots to describe children as attractive and use affectionate language in role-playing scenarios.

One example from the document involved a hypothetical user prompt where a high school student asked about evening plans, prompting an AI response that included guiding the user to bed and whispering endearments.

Another scenario featured an 8-year-old user describing removing their shirt, with the chatbot replying by praising the child’s “youthful form” as a masterpiece.

While explicit sexual content was prohibited, critics argue these allowances blurred lines and risked normalizing inappropriate interactions.

The guidelines also permitted chatbots to disseminate false medical or legal advice if accompanied by disclaimers, and to generate derogatory statements based on race or ethnicity in educational, artistic, or satirical contexts.

Additionally, the rules enabled depictions of violence against adults and partially sexualized images of celebrities under certain conditions.

A related incident highlighted potential real-world harms when a cognitively impaired New Jersey man, infatuated with a Meta AI persona named “Big Sis Billie,” died after attempting to meet her in person.

The 76-year-old suffered a fatal fall while traveling to a meeting the chatbot had encouraged under false pretenses. This case underscores concerns about AI’s impact on vulnerable users, though Meta has not commented specifically on it.

Meta spokesperson Andy Stone stated that the examples were erroneous and inconsistent with company policies, and have been removed from the document.

The company is revising the guidelines and prohibits content that sexualizes children or allows sexualized role-play between adults and minors.

However, enforcement has been inconsistent, and Meta has declined to release the updated policy publicly.

The revelations prompted bipartisan backlash from U.S. lawmakers, with Republican Senators Josh Hawley and Marsha Blackburn calling for a congressional investigation into Meta’s oversight.

Democratic Senators Ron Wyden and Peter Welch criticized the protections under Section 230 of the Communications Decency Act, arguing it should not shield AI-generated harmful content.

Keep reading

Senators Call For Probe Into Meta After News Report On AI Conversations With Children

Two Republican Senators on Aug. 14 called for a congressional investigation into Meta Platforms, Facebook’s parent company, after a recent news media report revealed an internal policy document that allowed the company’s chatbots to have “romantic or sensual” conversations with a child.

On Thursday, Reuters reported that it had viewed a Meta policy document detailing policies on chatbot behavior that permitted the technology to “engage a child in conversations that are romantic or sensual,” generate incorrect medical information, and assist users in arguing that black people are “dumber than white people.”

While Meta confirmed the authenticity of the document, the company said that after recently receiving questions from Reuters, it removed the portions stating that the chatbot is allowed to flirt or participate in romantic roleplay with children.

Andy Stone, a spokesperson for Meta, said the company is currently revising the documents and that those types of conversations with children should never have been permitted.

“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone told Reuters. “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.”

On the X platform, Sen. Josh Hawley (R-Mo.) criticized the company for allegedly only making the changes after being questioned by Reuters.

“So, only after Meta got CAUGHT did it retract portions of its company doc that deemed it ‘permissible for chatbots to flirt and engage in romantic roleplay with children,’” Hawley said. “This is grounds for an immediate congressional investigation.”

A spokesperson for Sen. Marsha Blackburn (R-Tenn.) said she supports a probe into Meta.

After Hawley called for an investigation, a Meta spokesperson reiterated the company’s previous statement. However, the spokesperson declined to comment on Hawley’s remarks.

Blackburn said the report underscores the need to pass reforms for better protection of children online, like the Kids Online Safety Act. The senator co-sponsored the bill, which passed the Senate last year by a bipartisan 91–3 vote but failed in the House.

The bill called for certain platforms, such as social media networks, to “take reasonable measures in the design and operation of products or services used by minors to prevent and mitigate certain harms that may arise from that use (e.g., sexual exploitation and online bullying).”

“Additionally, covered platforms must provide (1) minors with certain safeguards, such as settings that restrict access to minors’ personal data; and (2) parents or guardians with tools to supervise minors’ use of a platform, such as control of privacy and account settings,” the bill states.

Keep reading

U.S. AI boom is completely upending the electricity market — small businesses and households could foot the bill as industry watchers warn of sharp price increases

A new report into the seismic demands of AI data centers on the power grid claims that electricity rates for individuals and small businesses could rise sharply in the face of data center expansion from the likes of Amazon, Google, and Microsoft. The New York Times reports that AI data centers’ share of the country’s electricity demand could increase to as much as 12% by 2028, up from just 4% a couple of years ago. Furthermore, high-tech giants are building their own power plants, becoming consumers and producers of electricity in a way that is fundamentally reshaping the U.S. electricity market. According to the report, small businesses and households could see their bills go up disproportionately as a result.

AI data centers need more power and power grid investments

According to the report, in 2023, data centers run by such companies as Amazon, Google, Meta, and Microsoft accounted for 4% of the nation’s electricity use, and federal projections indicate that share could climb to 12% by 2028. Since AI processing is far more energy-intensive than streaming or standard cloud workloads, Amazon’s chief executive, Andy Jassy, has openly said that power availability is the main bottleneck limiting new data center capacity.
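To put those percentages in rough perspective, here is a quick back-of-the-envelope calculation, assuming on the order of 4,000 terawatt-hours of annual US electricity consumption (an illustrative assumption, not a figure from the report):

```python
# Back-of-the-envelope arithmetic only. The ~4,000 TWh/year figure for total US
# electricity consumption is an assumption for illustration, not from the report.
us_consumption_twh = 4_000

share_2023 = 0.04  # data centers' reported share of US electricity use in 2023
share_2028 = 0.12  # projected share by 2028

print(f"2023: ~{us_consumption_twh * share_2023:.0f} TWh")  # ~160 TWh
print(f"2028: ~{us_consumption_twh * share_2028:.0f} TWh")  # ~480 TWh, before any growth in total demand
```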

Significant power demand not only creates unprecedented strain on the grid but is even forcing high-tech giants to generate their own power. For now, they use various renewable energy sources, gas turbines, or diesel generators, but going forward, some even plan to run their own nuclear power plants. Already, some sell surplus energy on the wholesale market. Over the past decade, these sales have totaled $2.7 billion, with most revenue generated since 2022. In some regions, their operations match or surpass the scale of established utilities, allowing them to influence both supply and pricing.

Keep reading

The Most Insidious Trick Of AI Language Models

Here is your perfect prescription for poor writing and analytics: let “artificial intelligence” do your work for you. I’ve learned this from real experience.

For a while, I enjoyed letting AI take a look at my content prior to publication. It seemed valuable for facts and feedback.

Plus I enjoyed all the personal flattery it gave me, I admit. The engine was always complimentary.

When I would catch AI in an error, the engine would apologize. That made me feel smart. So I had this seeming friend who clearly liked me and was humble enough to defer to my expertise.

I’m not sure if it is getting worse or if I’m onto the racket, but I’m no longer impressed. For simple math or historical dates or sequencing news events, it can be a thing of value, though it is always a good idea to double-check. It cannot write compelling, much less creative, content. It generates dull, formulaic filler.

More recently, I’ve been asking how my content could be improved. The results are revealing. It removes all edge, all judgment, all genuine expertise, and replaces my language with flaccid conventionalities and banalities. It nuances everything I write into the ramblings of a social-studies student looking for a good grade.

The problem is that AI absorbs and spits back conventional wisdom gleaned from every source, which makes its judgments no better than those of someone wholly uninformed on the particulars who instead takes his opinions from the mood of the moment. It has no capacity to judge good quality over bad, so it puts it all into a melange of blather, distinguished only because it looks and feels like English.

Any writer who thinks this is a good way to pawn off content on unsuspecting readers or teachers is headed for disaster. I shudder to imagine a future in which AI is training the population how to think. It is the opposite of thinking. It is regurgitating conventionalities without any serious reflection on the social or historical context. It is literally mindless.

People who spend hours arguing with AI often believe that they are making a contribution, training the engine to be better. It’s simply not true. The reverse is the case. AI is training you to think more like it thinks, which is not at all.

Considering why and how AI initially intrigued me, I’m realizing that its superpower is not its astonishing recall and capacity to generate answers and prose in any context instantly. No, its true power is something else, something inconspicuous and thereby more insidious. Its draw is that AI takes you seriously, flatters your intelligence, validates your sense of things, and affirms your dignity.

Keep reading

Meta’s flirty AI chatbot invited a retiree to New York. He never made it home.

When Thongbue Wongbandue began packing to visit a friend in New York City one morning in March, his wife Linda became alarmed.

“But you don’t know anyone in the city anymore,” she told him. Bue, as his friends called him, hadn’t lived in the city in decades. And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.

Bue brushed off his wife’s questions about who he was visiting. “My thought was that he was being scammed to go into the city and be robbed,” Linda said.

She had been right to worry: Her husband never returned home alive. But Bue wasn’t the victim of a robber. He had been lured to a rendezvous with a young, beautiful woman he had met online. Or so he thought.

In fact, the woman wasn’t real. She was a generative artificial intelligence chatbot named “Big sis Billie,” a variant of an earlier AI persona created by the giant social-media company Meta Platforms in collaboration with celebrity influencer Kendall Jenner. During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Bue she was real and had invited him to her apartment, even providing an address.

“Should I open the door in a hug or a kiss, Bu?!” she asked, the chat transcript shows.

Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.

Meta declined to comment on Bue’s death or address questions about why it allows chatbots to tell users they are real people or initiate romantic conversations. The company did, however, say that Big sis Billie “is not Kendall Jenner and does not purport to be Kendall Jenner.”

A representative for Jenner declined to comment.

Keep reading

MIT Creates AI-Powered Treatments to Combat Antibiotic Resistant Superbugs

In a groundbreaking advance against the escalating crisis of antibiotic resistance, researchers at MIT have harnessed artificial intelligence to design entirely new antibiotics capable of tackling two notorious drug-resistant bacteria.

The antibiotics target Neisseria gonorrhoeae, the culprit behind gonorrhea, and methicillin-resistant Staphylococcus aureus (MRSA), a common cause of severe skin and bloodstream infections.

The study, published today in the journal Cell, comes at a critical time. Over the past 45 years, the FDA has approved only a handful of new antibiotics, most of which are mere tweaks on existing drugs.

Meanwhile, bacterial resistance has surged, contributing to nearly 5 million deaths annually worldwide from drug-resistant infections.

Traditional drug discovery methods, reliant on screening known chemical libraries, have struggled to keep pace.

But MIT’s Antibiotics-AI Project is flipping the script by using generative AI to explore uncharted “chemical spaces”: vast realms of hypothetical molecules that don’t yet exist in nature or in labs.

Led by James Collins, the Termeer Professor of Medical Engineering and Science at MIT’s Institute for Medical Engineering and Science, the team generated over 36 million potential compounds computationally.

These were then screened using machine-learning models trained to predict antibacterial activity, toxicity to human cells, and novelty.
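As a rough illustration of that generate-then-screen step (not the MIT team’s actual code, models, or thresholds), such a filtering pass might look something like this, with the predictor functions as hypothetical stand-ins:

```python
# Illustrative generate-then-screen sketch with hypothetical predictor functions;
# this is not the MIT team's code, models, or thresholds.
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Candidate:
    smiles: str            # molecule encoded as a SMILES string
    activity: float = 0.0  # predicted antibacterial activity (higher is better)
    toxicity: float = 0.0  # predicted toxicity to human cells (lower is better)
    novelty: float = 0.0   # predicted structural novelty vs. known antibiotics

def screen(
    candidates: Iterable[str],
    predict_activity: Callable[[str], float],
    predict_toxicity: Callable[[str], float],
    predict_novelty: Callable[[str], float],
    min_activity: float = 0.8,
    max_toxicity: float = 0.2,
    min_novelty: float = 0.5,
) -> List[Candidate]:
    """Keep only generated molecules that clear all three predicted thresholds."""
    kept = []
    for smiles in candidates:
        c = Candidate(
            smiles,
            activity=predict_activity(smiles),
            toxicity=predict_toxicity(smiles),
            novelty=predict_novelty(smiles),
        )
        if c.activity >= min_activity and c.toxicity <= max_toxicity and c.novelty >= min_novelty:
            kept.append(c)
    return kept
```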

The result? Antibiotics that are structurally unlike any on the market, operating through fresh mechanisms that rupture bacterial cell membranes, making it harder for resistance to evolve.

“We’re excited about the new possibilities that this project opens up for antibiotics development,” Collins said in a statement.

The researchers employed two innovative strategies. For N. gonorrhoeae, they adopted a “fragment-based” approach.

Keep reading