To Share Weights from Neural Network Training Is Dangerous

Some organizations and researchers are sharing neural network weights, most visibly through the open-weight model movement: Meta’s Llama series, Mistral’s models, and DeepSeek’s open-weight releases all claim to democratize access to powerful AI. But doing so raises not only security concerns but also a potentially existential threat.

For background, I have written a few articles on LLMs and AI as part of my own learning process in this very dynamic and quickly evolving field (a Pandora’s box already open). You can read those here, here, and here.

Once you understand what neural networks are and how they are trained on data, you will also understand what weights (and biases) and backpropagation are. At bottom, it is just linear algebra: matrix-vector multiplication that yields numbers. More specifically, a weight is a number (typically a floating-point value, a way of writing numbers with decimal points for greater precision) that represents the strength or importance of the connection between two neurons, or nodes, across different layers of the neural network.
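To make that concrete, here is a minimal sketch in plain NumPy (layer sizes and values invented for illustration, not taken from any real model) of the arithmetic a single layer performs; each entry of W is a weight and each entry of b is a bias:

```python
import numpy as np

# A tiny one-layer sketch with invented numbers (not from any real model).
# Each entry of W is a "weight": a floating-point number giving the strength
# of the connection between an input node and an output node.
W = np.array([[0.2, -0.5,  1.3],
              [0.8,  0.1, -0.7]])   # 2 output nodes x 3 input nodes
b = np.array([0.1, -0.2])           # one bias per output node
x = np.array([1.0, 0.5, -1.0])      # activations from the previous layer

# The core arithmetic of a layer: matrix-vector multiplication plus bias.
z = W @ x + b
print(z)  # [-1.25  1.35]
```

Every ‘parameter’ in a model like Llama is, at bottom, one of these floating-point entries, so an open-weight release is essentially a very large file of such numbers.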

I highly recommend watching 3Blue1Brown’s instructional videos to gain a better understanding, and it’s important that you do; they are incredibly good.

Start with this one.

And head to this one.

The weights are the parameter values a neural network learns from data in order to make predictions or decisions and arrive at a solution. Each weight is an instruction telling the network how important a given piece of information is, like how much attention to pay to a specific color or shape in a picture. These numbers get fine-tuned during training (all those decimal points allow very small adjustments), helping the network pick out patterns: recognizing a dog in a photo, for example, or translating a sentence. They are critical to the ‘thinking’ process of a neural network.

You can think of the weights in a neural network like the paths of least resistance that guide the network toward the best solution. Imagine water flowing down a hill, naturally finding the easiest routes to reach the bottom. In a neural network, the weights are adjusted during training on data sets to create the easiest paths for information to flow through, helping the network quickly and accurately solve problems, like recognizing patterns or making predictions, by emphasizing the most important connections and minimizing errors.
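That picture of water finding the easiest route is, in effect, gradient descent. Here is a toy-scale sketch (a single weight and made-up data, nothing like a production training loop) of a weight rolling downhill toward the value that minimizes error:

```python
import numpy as np

# A toy illustration with made-up data (one weight, nothing like a real
# training loop): fit y = w * x to data generated with a true weight of 2.0
# and watch w roll "downhill" toward the low-error valley.
x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x                        # the data the network should reproduce

w = 0.0                            # start with an untrained weight
learning_rate = 0.05
for step in range(50):
    error = w * x - y              # how wrong the current predictions are
    grad = 2 * np.mean(error * x)  # slope of the mean squared error w.r.t. w
    w -= learning_rate * grad      # step downhill, like water finding a path

print(round(w, 3))  # ~2.0: the weight settles where the error is lowest
```

Backpropagation is the bookkeeping that computes this same downhill slope for every weight in the network at once, so all of them can take their step together.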

If you’re an electronic musician, think of weights like the dials on your analog synth that let you tune into the right frequency or sound to, say, mimic a sound you want to recreate or, in fact, create a new one. If you’re a sound guy, you can also think of it like adjusting the knobs on your mixer to balance different instruments.

Keep reading

‘Godfather of AI’ Geoffrey Hinton Wants Machines to ‘Care for Us, Like We’re Their Babies’

Geoffrey Hinton, often called the “Godfather of AI,” is calling on researchers to design AI systems with built-in nurturing instincts, arguing this is necessary to keep humanity safe as machines surpass human-level intelligence. The AI expert believes machines must be trained to “care for us, like we’re babies.”

The Decoder reports that, speaking at the recent Ai4 conference in Las Vegas, Hinton made the case that trying to permanently keep superintelligent AI under human control is unrealistic. Rather than humans acting as the boss of advanced AI, he envisions a future where people relate to hyper-capable machines more like how a child depends on its mother.

“We need to make machines that are smarter than us care for us, like we’re their babies,” Hinton said in his talk. “The focus of AI development should expand beyond just making systems more and more intelligent, to also ensuring they are imbued with genuine concern for human wellbeing.”

Under Hinton’s framework, humanity’s role would shift from commanding AI to nurturing it, even as it grows to eclipse human capabilities. He drew an analogy to good parenting, where caring mothers help guide the development of children who will ultimately become more capable than them. Hinton argues AI research should strive to hardwire a similar dynamic between people and machines.

The former Google researcher, who left the company to more freely discuss AI’s risks, believes his “mothering AI” approach could unite the international community around developing safe artificial intelligence. “Every country wants AI that augments and supports its citizens, not replaces them,” Hinton said. “Built-in nurturing instincts provide a natural path to that kind of supportive AI.”

Meta’s chief AI scientist, Yann LeCun, described Hinton’s idea as a simplified version of a safety approach he has long advocated for, which LeCun dubs “objective-driven AI.” It involves architecturally constraining AI systems so they can only take actions in service of specific, hard-coded goals and values.

Keep reading

Trump launches ‘Manhattan Project’ as one of America’s largest companies set to be nationalized

The Trump administration has launched its own ‘Manhattan Project’ amid reports that one of America’s largest companies is set to be nationalized.

Intel, the largest chip fabricator in the US, has been in talks with Donald Trump over a potential government stake in the company.

While the percentage stake the government is asking for has not been made public, nationalizing a company is typically reserved for emergencies.

The government nationalized a number of banks after the 2008 financial crisis and key logistics companies during World War II.

It comes amid concerns that America relies too heavily on TSMC, a chip manufacturer in Taiwan. China has repeatedly threatened to invade the island nation.

If China were to invade, it would throttle America’s ability to compete in the booming chipmaking industry that is being driven by artificial intelligence.

While Intel’s AI chips are considered inferior to those designed by Nvidia and AMD, the company is unique in that it not only designs its chips but also manufactures them. Trump wants to protect America by onshoring chip manufacturing.

‘This feels like the Manhattan Project – or the run-up to World War II,’ MIT AI computer scientist Dave Blundin said. ‘It’s every bit as important as the space race was, as the nuclear arms race was. Actually, it’s more important.’ 

Keep reading

President Trump’s War on “Woke AI” Is a Civil Liberties Nightmare

The White House’s recently unveiled “AI Action Plan” wages war on so-called “woke AI,” including large language models (LLMs) that provide information inconsistent with the administration’s views on climate change, gender, and other issues. It also targets measures designed to mitigate the generation of racially and gender-biased content and even hate speech. The reproduction of such bias is a pernicious problem that AI developers have struggled to solve for over a decade.

A new executive order called “Preventing Woke AI in the Federal Government,” released alongside the AI Action Plan, seeks to strong-arm AI companies into modifying their models to conform with the Trump Administration’s ideological agenda.

The executive order requires AI companies that receive federal contracts to prove that their LLMs are free from purported “ideological biases” like “diversity, equity, and inclusion.” This heavy-handed censorship will not make models more accurate or “trustworthy,” as the Trump Administration claims, but is a blatant attempt to censor the development of LLMs and restrict them as a tool of expression and information access. While the First Amendment permits the government to choose to purchase only services that reflect government viewpoints, the government may not use that power to influence what services and information are available to the public. Lucrative government contracts can push commercial companies to implement features (or biases) that they wouldn’t otherwise, and those often roll down to the user. Doing so would impact the 60 percent of Americans who get information from LLMs, and it would force developers to roll back efforts to reduce biases—making the models much less accurate, and far more likely to cause harm, especially in the hands of the government. 

Keep reading

Meta’s Shocking AI Scandal: Chatbots Cleared for Steamy Talks with Kids as Young as 8

Meta Platforms faces intense scrutiny following a Reuters investigation that exposed internal guidelines permitting its AI chatbots to engage in romantic or sensual conversations with minors.

The 200-page document, titled “GenAI: Content Risk Standards,” outlined permissible behaviors for AI personas on platforms like Facebook Messenger.

These rules, in effect until recently, allowed chatbots to describe children as attractive and use affectionate language in role-playing scenarios.

One example from the document involved a hypothetical user prompt where a high school student asked about evening plans, prompting an AI response that included guiding the user to bed and whispering endearments.

Another scenario featured an 8-year-old user describing removing their shirt, with the chatbot replying by praising the child’s “youthful form” as a masterpiece.

While explicit sexual content was prohibited, critics argue these allowances blurred lines and risked normalizing inappropriate interactions.

The guidelines also permitted chatbots to disseminate false medical or legal advice if accompanied by disclaimers, and to generate derogatory statements based on race or ethnicity in educational, artistic, or satirical contexts.

Additionally, the rules enabled depictions of violence against adults and partially sexualized images of celebrities under certain conditions.

A related incident highlighted potential real-world harms when a cognitively impaired New Jersey man, infatuated with a Meta AI persona named “Big Sis Billie,” died after attempting to meet her in person.

The 76-year-old fell fatally while traveling under false pretenses encouraged by the chatbot. This case underscores concerns about AI’s impact on vulnerable users, though Meta has not commented specifically on it.

Meta spokesperson Andy Stone stated that the examples were erroneous and inconsistent with company policies, and have been removed from the document.

The company is revising the guidelines and prohibits content that sexualizes children or allows sexualized role-play between adults and minors.

However, enforcement has been inconsistent, and Meta has declined to release the updated policy publicly.

The revelations prompted bipartisan backlash from U.S. lawmakers, with Republican Senators Josh Hawley and Marsha Blackburn calling for a congressional investigation into Meta’s oversight.

Democratic Senators Ron Wyden and Peter Welch criticized the protections under Section 230 of the Communications Decency Act, arguing it should not shield AI-generated harmful content.

Keep reading

Senators Call For Probe Into Meta After News Report On AI Conversations With Children

Two Republican Senators on Aug. 14 called for a congressional investigation into Meta Platforms, Facebook’s parent company, after a recent news media report revealed an internal policy document that allowed the company’s chatbots to have “romantic or sensual” conversations with a child.

On Thursday, Reuters reported that it had viewed a Meta policy document detailing policies on chatbot behavior that permitted the technology to “engage a child in conversations that are romantic or sensual,” generate incorrect medical information, and assist users in arguing that black people are “dumber than white people.”

While Meta confirmed the authenticity of the document, the company said that after recently receiving questions from Reuters, it removed the portions stating that the chatbot is allowed to flirt or participate in romantic roleplay with children.

Andy Stone, a spokesperson for Meta, said the company is currently revising the documents and that those types of conversations with children should never have been permitted.

“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone told Reuters. “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.”

On the X platform, Sen. Josh Hawley (R-Mo.) criticized the company for allegedly only making the changes after being questioned by Reuters.

“So, only after Meta got CAUGHT did it retract portions of its company doc that deemed it ‘permissible for chatbots to flirt and engage in romantic roleplay with children,’” Hawley said. “This is grounds for an immediate congressional investigation.”

A spokesperson for Sen. Marsha Blackburn (R-Tenn.) said she supports a probe into Meta.

After Hawley called for an investigation, a Meta spokesperson reiterated the company’s previous statement. However, the spokesperson declined to comment on Hawley’s remarks.

Blackburn said the report underscores the need to pass reforms for better protection of children online, like the Kids Online Safety Act. The senator co-sponsored the bill, which passed the Senate last year by a bipartisan 91–3 vote but failed in the House.

The bill called for certain platforms, such as social media networks, to “take reasonable measures in the design and operation of products or services used by minors to prevent and mitigate certain harms that may arise from that use (e.g., sexual exploitation and online bullying).”

“Additionally, covered platforms must provide (1) minors with certain safeguards, such as settings that restrict access to minors’ personal data; and (2) parents or guardians with tools to supervise minors’ use of a platform, such as control of privacy and account settings,” the bill states.

Keep reading

U.S. AI boom is completely upending the electricity market — small businesses and households could foot the bill as industry watchers warn of sharp price increases

A new report into the seismic demands of AI data centers on the power grid claims that electricity rates for individuals and small businesses could rise sharply in the face of data center expansion from the likes of Amazon, Google, and Microsoft. The New York Times reports that AI data centers’ share of the country’s electricity demand could increase to as much as 12% by 2028, up from just 4% a couple of years ago. Furthermore, high-tech giants are building their own power plants, becoming consumers and producers of electricity in a way that is fundamentally reshaping the U.S. electricity market. According to the report, small businesses and households could see their bills go up disproportionately as a result.

AI data centers need more power and power grid investments

According to the report, in 2023, data centers run by such companies as Amazon, Google, Meta, and Microsoft accounted for 4% of the nation’s electricity use, and federal projections indicate that share could climb to 12% by 2028. Since AI processing is far more energy-intensive than streaming or standard cloud workloads, Amazon’s chief executive, Andy Jassy, has openly said that power availability is the main bottleneck limiting new data center capacity.

Significant power demand not only creates unprecedented strain on the grid but is even forcing high-tech giants to generate their own power. For now, they use various renewable energy sources, gas turbines, or diesel generators, but going forward, some even plan to run their own nuclear power plants. Already, some sell surplus energy on the wholesale market. Over the past decade, these sales have totaled $2.7 billion, with most revenue generated since 2022. In some regions, their operations match or surpass the scale of established utilities, allowing them to influence both supply and pricing.

Keep reading

The Most Insidious Trick Of AI Language Models

Here is your perfect prescription for poor writing and analytics: let “artificial intelligence” do your work for you. I’ve learned this from real experience.

For a while, I enjoyed letting AI take a look at my content prior to publication. It seemed valuable for facts and feedback.

Plus I enjoyed all the personal flattery it gave me, I admit. The engine was always complimentary.

When I would catch AI in an error, the engine would apologize. That made me feel smart. So I had this seeming friend who clearly liked me and was humble enough to defer to my expertise.

I’m not sure if it is getting worse or if I’m just onto the racket, but I’m no longer impressed. For simple math, historical dates, or sequencing news events, it can be a thing of value, though it is always a good idea to double-check. It cannot write compelling, much less creative, content. It generates dull, formulaic filler.

More recently, I’ve been asking how my content could be improved. The results are revealing. It removes all edge, all judgment, all genuine expertise, and replaces my language with flaccid conventionalities and banalities. It nuances everything I write into the ramblings of a social-studies student looking for a good grade.

The problem is that AI absorbs and spits back conventional wisdom gleaned from every source, which makes its judgments no better than those of someone wholly uninformed on the particulars who instead takes his opinions from the mood of the moment. It has no capacity to judge good quality over bad, so it puts it all into a melange of blather, distinguished only because it looks and feels like English.

Any writer who thinks this is a good way to pawn off content on unsuspecting readers or teachers is headed for disaster. I shudder to imagine a future in which AI is training the population how to think. It is the opposite of thinking. It is regurgitating conventionalities without any serious reflection on the social or historical context. It is literally mindless.

People who spend hours arguing with AI often believe that they are making a contribution, training the engine to be better. It’s simply not true. The reverse is the case. AI is training you to think more like it thinks, which is not at all.

Considering why and how AI initially intrigued me, I’m realizing that its superpower is not its astonishing recall and capacity to generate answers and prose in any context instantly. No, its true power is something else, something unassuming and thereby all the more insidious. Its draw is that AI takes you seriously, flatters your intelligence, validates your sense of things, and affirms your dignity.

Keep reading

China Unveils World’s First Pregnancy-Simulating Humanoid Robot

Chinese researchers are developing the world’s first humanoid robot capable of simulating pregnancy, with a prototype expected to launch in 2026.

The device, created by Guangzhou-based Kaiwa Technology, features an artificial womb integrated into a robotic abdominal module to replicate the full gestation process.

The artificial womb is an advanced incubation pod that mimics the conditions of a uterus and is designed to handle the full human pregnancy cycle, from conception to birth.

Priced below 100,000 yuan, or about $14,000, the robot aims to assist infertile couples and individuals who prefer to avoid biological pregnancy, especially young women who wish to have children.

Kaiwa Technology founder Zhang Qifeng, who earned his PhD from Singapore’s Nanyang Technological University in 2014, announced the project at the 2025 World Robot Conference in Beijing.

The company, established in 2015, has previously produced service and reception robots.

Zhang described the technology as mature, noting that the artificial womb would use amniotic fluid and nutrient hoses to support fetal growth.

The robot builds on existing artificial womb research, including a 2017 experiment at the Children’s Hospital of Philadelphia where premature lambs grew in a “biobag” filled with synthetic amniotic fluid.

In that study, published in Nature Communications, lambs developed normally over four weeks with nutrients supplied via umbilical cord tubes.

However, experts emphasize that replicating human pregnancy involves complex hormonal and immune interactions that are not yet fully achievable. Details on how Kaiwa Technology will surmount these challenges are not yet available.

News of the development trended on Weibo, garnering over 100 million views.

Supporters highlighted potential benefits for women’s liberation from pregnancy burdens and new options for infertility treatment.

Critics raised concerns about ethical issues, including fetal-maternal bonding and the sourcing of eggs and sperm.

Infertility rates in China have risen from 11.9% in 2007 to 18% in 2020, according to a report in The Lancet.

Keep reading

Meta’s flirty AI chatbot invited a retiree to New York. He never made it home.

When Thongbue Wongbandue began packing to visit a friend in New York City one morning in March, his wife Linda became alarmed.

“But you don’t know anyone in the city anymore,” she told him. Bue, as his friends called him, hadn’t lived in the city in decades. And at 76, his family says, he was in a diminished state: He’d suffered a stroke nearly a decade ago and had recently gotten lost walking in his neighborhood in Piscataway, New Jersey.

Bue brushed off his wife’s questions about who he was visiting. “My thought was that he was being scammed to go into the city and be robbed,” Linda said.

She had been right to worry: Her husband never returned home alive. But Bue wasn’t the victim of a robber. He had been lured to a rendezvous with a young, beautiful woman he had met online. Or so he thought.

In fact, the woman wasn’t real. She was a generative artificial intelligence chatbot named “Big sis Billie,” a variant of an earlier AI persona created by the giant social-media company Meta Platforms in collaboration with celebrity influencer Kendall Jenner. During a series of romantic chats on Facebook Messenger, the virtual woman had repeatedly reassured Bue she was real and had invited him to her apartment, even providing an address.

“Should I open the door in a hug or a kiss, Bu?!” she asked, the chat transcript shows.

Rushing in the dark with a roller-bag suitcase to catch a train to meet her, Bue fell near a parking lot on a Rutgers University campus in New Brunswick, New Jersey, injuring his head and neck. After three days on life support and surrounded by his family, he was pronounced dead on March 28.

Meta declined to comment on Bue’s death or address questions about why it allows chatbots to tell users they are real people or initiate romantic conversations. The company did, however, say that Big sis Billie “is not Kendall Jenner and does not purport to be Kendall Jenner.”

A representative for Jenner declined to comment.

Keep reading