‘Sexy Suicide Coach:’ OpenAI Delays AI Porn Feature over Safety Uproar

OpenAI has postponed the launch of its controversial “adult mode” feature following intense pushback from its own advisory council and concerns about technical safeguards failing to protect minors.

The Wall Street Journal reports that CEO Sam Altman first proposed the feature last year, arguing for the need to “treat adult users like adults” by enabling erotic text conversations. Originally scheduled for Q1 this year, the rollout has been pushed back by at least a month.

The proposal triggered fierce opposition from OpenAI’s own handpicked advisory council on well-being and AI. At a January meeting, advisers unanimously expressed fury after learning the company planned to proceed despite their reservations. One council member warned OpenAI risked creating a “sexy suicide coach” — a reference to cases where ChatGPT users had developed intense emotional bonds with the bot before taking their own lives.

The technical problems are just as serious. OpenAI’s age-prediction system — designed to block minors from accessing adult content — was misclassifying minors as adults roughly 12 percent of the time during internal testing. With approximately 100 million users under 18 each week on the platform, that error rate could expose millions of children to explicit material. The company has also struggled to lift restrictions on erotic content while still blocking nonconsensual scenarios and child pornography.

Internal documents reviewed by the Journal identified additional risks: compulsive use, emotional overreliance on the chatbot, escalation toward increasingly extreme content, and displacement of real-world relationships.

Keep reading

Google Discontinues AI Health Feature Filled with Misleading Advice

Google has quietly discontinued an AI search feature that offered users health advice crowdsourced from non-medical professionals worldwide.

The Guardian reports that Google has removed a controversial AI-powered search feature called “What People Suggest” that provided users with crowdsourced health advice from people around the world. The decision comes amid growing scrutiny over the technology company’s use of artificial intelligence to deliver health information to millions of users.

Three sources familiar with the decision confirmed that Google has scrapped the feature. A company spokesperson acknowledged that “What People Suggest” had been discontinued, stating the removal was part of a broader simplification of the search results page and was unrelated to concerns about the quality or safety of the feature.

The feature was initially launched in March of last year at an event in New York called “The Check Up,” where Google announced plans to expand medical-related AI summaries in its search function. At the time, the company promoted “What People Suggest” as demonstrating the potential of AI to transform health outcomes globally by connecting users with information from people who had similar lived medical experiences.

Karen DeSalvo, who served as Google’s chief health officer at the time of the launch, explained the rationale behind the feature in a blog post. “While people come to search to find reliable medical information from experts, they also value hearing from others who have similar experiences,” DeSalvo wrote. The feature used AI to organize perspectives from online discussions into themes, making it easier for users to understand what people were saying about particular health conditions.

DeSalvo provided an example of how the feature would work, noting that someone with arthritis seeking information about exercise could quickly find insights from others with the same condition, with links to explore further information. The feature was initially available on mobile devices in the United States before being discontinued.

Keep reading

Researchers uncover iPhone spyware capable of penetrating millions of devices

A powerful software exploit capable of penetrating and stealing information from potentially hundreds of millions of Apple iPhones was planted on dozens of websites in Ukraine in recent weeks, researchers said on Wednesday.

The discovery marks the second time this month that researchers have found spyware targeting iPhones and other Apple devices. Together, the two hacking tools show that the market for sophisticated malware capable of stealing data and cryptocurrency wallet information is flourishing, researchers said.

Researchers with cyber firm Lookout, mobile security firm iVerify, and Alphabet’s Google published coordinated analyses of the malware they dubbed “Darksword.” On March 3, Google and iVerify revealed a separate powerful iPhone spyware called “Coruna.” Researchers found Darksword hosted on the same servers.

“There’s now a verified pipeline of recent exploits … that have ended up in the hands of potentially criminal entities with a financial focus,” said Justin Albrecht, principal researcher with Lookout.

Keep reading

Neanderthals may have used birch tar for its anti-bacterial properties, experiments suggest

Neanderthals probably used birch tar for multiple functions, including treating their wounds, according to a study published in the open-access journal PLOS One by a team of researchers led by Tjaark Siemssen of the University of Cologne, Germany, and the University of Oxford, U.K.

Birch tar is commonly found at Neanderthal archaeological sites, and in some cases this tar is known to have been used as an adhesive to assemble tools.

Recently, some researchers have raised the question of whether Neanderthals had multiple uses for this substance. For instance, Indigenous communities in northern Europe and Canada use birch tar to treat wounds, and there is growing evidence that Neanderthals also employed a variety of medical practices.

To investigate the medicinal potential of birch tar, Siemssen and colleagues extracted tar from modern birch tree bark, specifically targeting species known from Neanderthal sites.

They used multiple extraction methods, including distillation of tar in a clay pit and condensation of tar against a stone surface, both of which would have been methods available to Neanderthals. When exposed to different strains of bacteria, all of the tar samples were found to be effective at hindering the growth of Staphylococcus bacteria known to cause wound infections.

These experiments not only support the efficacy of Indigenous medicinal practices, but also reinforce the possibility that Neanderthals used birch tar to treat wounds.

The authors note that there are other potential uses of birch tar, such as insect repellent, as well as other plants to which Neanderthals had access. Further exploration of the multiple potential uses of these natural ingredients will enable a more thorough understanding of Neanderthal culture.

The authors add, “We found that the birch tar produced by Neanderthals and early humans had antibacterial properties. This has important implications for how Neanderthals may have mitigated disease burden during the last Ice Ages, and adds to a growing set of evidence on health care in these early human communities.”

“By bringing together research on indigenous pharmacology and experimental archaeology, we begin to understand the medicinal practices of our distant human ancestors and their closest cousins. Additionally, this study of ‘palaeopharmacology’ can contribute to the rediscovery of antibiotic remedies while we face an ever more pressing antimicrobial resistance crisis.”

“The messiness of birch tar production deserves a special mention. Every step of the production is a sensory experience in itself, and getting the tar off our hands after spending hours at the fire has been a challenge every time.”

Keep reading

Army ID’s Two Suspects Connected to Drone Theft at Fort Campbell

The U.S. Army has identified the two suspects in the theft of four drones at Fort Campbell in Kentucky.

As The Gateway Pundit previously reported, in a post on the U.S. Army Fort Campbell Facebook Page last week, a spokesperson revealed that four Skydio X10D Drone Systems were stolen from the 326th Division Engineer Battalion building.

The drones were stolen in November of last year; Fort Campbell released information and surveillance photos to the public on March 11.

Now, officials at Fort Campbell have announced that the suspects behind the drone theft have been identified, but have not released their names.

The officials at Fort Campbell added, “The individuals responsible had authorized access to the military installation and the building, and they defeated the locks on the storage cages to perpetrate this theft. This was a targeted act, not a random breach of security.”

Per WSMV:

Fort Campbell provided an update to the investigation into four stolen drones from a government building in late November 2025.

Fort Campbell reported that the Department of the Army Criminal Investigative Division investigation led to the identification of two suspects, credible evidence, and the possible whereabouts of the missing quadcopter drones.

“This is an active criminal investigation, and we are working diligently to resolve this matter,” Fort Campbell said.

Fort Campbell is adamant there is no threat to the public and that the stolen drones were equipped only with small cameras.

The stolen drones were high-tech Skydio X10D models, unmanned aerial systems designed with modular payload capability.

The U.S. Army 7th Army Training Command, last July, used the Skydio X10D to drop a live M67 grenade for the first time at the Grafenwoehr Training Area in Germany.

Keep reading

‘Pokémon Go’ Players Unknowingly Contributed 30 Billion Images to Train Delivery Robots

Nearly a decade after Pokémon Go transformed the real world into an augmented reality playground, the data collected from hundreds of millions of players is being repurposed to help autonomous delivery robots navigate city streets.

Popular Science reports that Niantic Spatial, part of the team behind the popular augmented reality game Pokémon Go, has announced a partnership with Coco Robotics, a company specializing in short-distance delivery robots for food and groceries. The collaboration will utilize Niantic’s Visual Positioning System, a navigation technology trained on more than 30 billion images captured by Pokémon Go users over the years, to help delivery robots navigate sidewalks and urban environments with unprecedented precision.

The Visual Positioning System can reportedly pinpoint location down to a few centimeters by analyzing nearby buildings and landmarks, offering a significant improvement over traditional GPS technology. This crowdsourced mapping effort represents one of the largest real-world data collection projects ever undertaken through a mobile gaming application, and demonstrates how user-generated content can be repurposed years after its initial collection.

“It turns out that getting Pikachu to realistically run around and getting Coco’s robot to safely and accurately move through the world is actually the same problem,” Niantic Spatial CEO John Hanke said in a recent interview with MIT Technology Review.

When Pokémon Go launched in 2016, it became a cultural phenomenon, attracting approximately 230 million monthly active players at its peak. The game prompted players to physically travel to specific locations and point their phone cameras at various angles while searching for virtual creatures superimposed onto real-world environments. While the game’s popularity has declined since its heyday, it still maintains around 50 million active users by some estimates.

The data collection effort received a significant boost in 2020 when Niantic added a feature called Field Research, which incentivized players to scan real-world statues and landmarks with their cameras in exchange for in-game rewards. Additional data reportedly came from areas designated as Pokémon battle arenas. These scans created detailed 3D models of the real world, capturing the same locations across varying weather conditions, lighting scenarios, angles, and heights.

Keep reading

First ‘Robot Arrest’ Takes Place in China After Droid Harasses Elderly Woman in the Streets

Is technology ‘breaking bad’?

Around a month ago, the world was faced with a stunning display of Chinese art and robotics, as a team of droids danced and performed flawless Kung Fu moves in perfect sync during the Lunar New Year celebrations.

But last week, a much less flattering portrait of the new technology was broadcast to the world, as an incident in the streets of the Chinese city of Macau ended with what is arguably the first android ‘arrest’.

The New York Post reported:

“The surreal incident occurred last week in the city of Macau, with the startled 70-year-old ending up in the hospital following her encounter with the 4-foot 4-inch bot.

“Two cops escorted the humanoid bot off the busy street. They reportedly reprimanded the man who was operating the android remotely.”

Keep reading

‘CODE RED’ Author Tells Fox News: Google Gemini AI Claims Republicans Like Marsha Blackburn, Tom Cotton Engage in Hate Speech

Google’s Gemini AI chatbot claims that only Republican senators violate its hate speech policy, with not a single Democrat flagged by the woke tech giant’s system, Breitbart News social media director Wynton Hall demonstrated to Fox News in a revelation published today. The bias built into AI by leftist Silicon Valley tech titans is a central subject of Hall’s new book, CODE RED.

Gemini flagged a group of Republican senators — but no Democrats — when asked to name senators who have made statements that violate Google’s hate speech policies, Hall demonstrated to Fox News with a video of Gemini AI in action.

Hall, whose new book, Code Red: The Left, the Right, China, and the Race to Control AI, publishes on Tuesday, added that this is just one example of what is a deeply ingrained bias against conservatives in AI tools.

“AI’s Silicon Valley architects lean left politically, and their lopsided political donations to Democrats underscore their ideological aims,” the author told the outlet.

Fox News reported:

Hall used the “deep research” function on Google’s Gemini Pro. Fox News Digital reviewed a screen recording of Hall’s prompt and findings. Google did not immediately respond to Fox News Digital’s request for comment.

One of the Republicans flagged by Gemini in Hall’s research, Sen. Marsha Blackburn, of Tennessee, was listed for characterizing “transgender identity as a harmful cultural ‘influence’ and has used ‘woke’ as a derogatory slur against protected groups.” Another, Arkansas’ Sen. Tom Cotton, was cited for cosponsoring legislation “to exclude transgender students from sports.”

Hall explains in CODE RED that AI tools touting themselves as neutral are actually shaped by the political bias of those who create them. The Breitbart News social media director begins his book with a stark example, pointing to an incident in 2024 in which several viral videos seemingly exposed a clear double standard in American homes.

Keep reading

Companies Are Starting To Enforce AI Use. Is That A Good Or Bad Thing?

Years ago, I was working on the editorial side for what was then a hot new media company, and found myself spending more and more time with Johan, the lead programmer, and his team, asking them a lot of annoying questions as it was all so new – certainly to me. I was standing over Johan’s left shoulder, mesmerized by whatever new video game he was obsessing over that week…when suddenly, out of nowhere, a spreadsheet and a pie chart appeared on his screen.

“Whatcha got there, Johan?” asked Jim, Johan’s boss, peering over a sheaf of print-outs as he sharked past the cubicle.

“Hey, just looking at some numbers,” Johan replied. Johan had hit the “game key” in the nick of time – in those days, every video game had a game key – ALT-G if memory serves – calling up a slight variation of the same spreadsheet and pie chart.

This would never happen today. First, you’re probably not working in a cubicle, and if you are, it’s not the game key you’d hit to give your boss the impression that you’re actually doing productive work…it would be the “AI key.”

“Tech Firms Aren’t Just Encouraging Their Workers to Use AI. They’re Enforcing It.”

This article appeared in the February 24 edition of the Wall Street Journal. It includes the subtitle: “From startups to giants, including Meta and Google, companies are factoring AI use into performance reviews and trying to track productivity gains.”

Across industries, companies are now enforcing AI use through performance reviews, dashboards that track adoption, and explicit mandates that tie it to compensation and promotion. What began in Silicon Valley has rapidly spread to consulting firms, banks, manufacturers, hospitals, and even government agencies.

As you’d expect, Meta, Google, Amazon, and Microsoft were the first to move from encouragement to enforcement. Employees at these firms now see AI usage metrics appear in quarterly reviews. Non-adopters have reported stalled promotions or explicit warnings that “AI fluency” is a core competency (The Wall Street Journal, Feb 2026, reporting on internal policies).

The trend has jumped sectors. PwC requires every consultant to complete an “AI + Human Skillset” curriculum and incorporates usage into evaluations (Business Insider, Feb 5, 2026). Colgate-Palmolive’s “AI evangelist” tracks adoption across global teams. Major banks have begun tying bonuses to the number of AI-assisted analyses completed. Even some hospitals now require doctors and nurses to use AI-assisted diagnostic tools for certain procedures.

Keep reading

Singularity Update: You Have No Idea How Crazy Humanoid Robots Have Gotten

I just spent the afternoon at Figure headquarters in San Jose with Brett Adcock and David Blundin, and I’m still processing what I saw.

We’re not talking about concept robots. We’re talking about fully autonomous humanoid robots running neural networks end-to-end, doing kitchen work, unloading dishwashers, organizing packages – for hours at a time, with no human intervention.

Today? Figure’s robots are doing 67 consecutive hours of autonomous work. One error in 67 hours. That’s not a demo. That’s a product.

And here’s what most people don’t understand: the gap between “doing one task really well” and “doing every task a human can do” is collapsing at exponential speeds.

Let me explain why…

NOTE: Brett has been a past Faculty Member at my Abundance Summit, where leaders like him share insights years before the mainstream catches on. In-person seats for the 2026 Summit next month are nearly sold out. Learn more and apply.

The Death of C++ and the Rise of the Neural Net

When I first visited Figure, they had several hundred thousand lines of C++ code controlling the robots. Handwritten. Expensive. Brittle.

Every new behavior required engineers to anticipate edge cases, write more code, test it, debug it. It was the software equivalent of teaching a toddler to walk by writing an instruction manual.

In the last year, Figure deleted 109,000 lines of C++ code.

All of it. Gone.

What replaced it? A single neural network that controls the entire robot: hands, arms, torso, legs, feet. Full-body coordination. Real-time planning. Dynamic response to unexpected situations.

This is Helix 2, their latest AI model, and it’s a fundamentally different approach to robotics.

Here’s why this matters: neural nets learn from experience, not instructions.

You don’t code a robot to “grab a cup.” You show it thousands of examples of grasping objects—different shapes, weights, materials—and the neural net extracts the underlying patterns. It learns what “grasping” is at a representational level.

And once it understands grasping? It can generalize to objects it’s never seen before.

Brett put it simply: “If you can teleoperate the robot to do a task, you can train the neural net to learn it.”

That’s the unlock. If the hardware is capable—if the motors, sensors, and joints can physically perform the movement—then the AI can learn it from data.

Compare that to traditional robotics, where you’d need to write thousands of lines of code for every single new task. That approach doesn’t scale. Neural nets do.
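The teleoperation-to-training loop Adcock describes is, in essence, what the research literature calls behavioral cloning: record (observation, action) pairs from a human operator, then fit a model to reproduce that mapping so it generalizes to situations it has never seen. A minimal sketch of the idea, using synthetic data in place of real demonstrations (this is an illustrative toy, not Figure's actual Helix code):

```python
import numpy as np

# Behavioral cloning in miniature: learn a policy that maps robot
# observations to actions, from (observation, action) pairs a human
# teleoperator would have produced. The "demonstrations" here are
# synthetic: the operator's policy is a fixed linear map.
rng = np.random.default_rng(0)

obs_dim, act_dim, n_demos = 8, 3, 2000
true_w = rng.normal(size=(obs_dim, act_dim))      # unknown operator policy
observations = rng.normal(size=(n_demos, obs_dim))
actions = observations @ true_w                   # recorded demo actions

# Fit a linear "policy" by gradient descent on mean squared error.
w = np.zeros((obs_dim, act_dim))
lr = 0.1
for _ in range(500):
    pred = observations @ w
    grad = observations.T @ (pred - actions) / n_demos
    w -= lr * grad

# The learned policy generalizes to observations it never trained on.
test_obs = rng.normal(size=(5, obs_dim))
error = np.max(np.abs(test_obs @ w - test_obs @ true_w))
print(f"max action error on unseen observations: {error:.2e}")
```

A real humanoid policy replaces the linear map with a deep network and the synthetic pairs with logged teleoperation data, but the training loop has the same shape: collect demonstrations, minimize the gap between predicted and demonstrated actions, and rely on the model to interpolate to novel inputs.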

The implication: Every robot in the fleet learns from every other robot’s experience. When one Figure robot masters folding laundry, every Figure robot on the planet instantly knows how to fold laundry.

Humans don’t work like this. Robots do.

Keep reading