EU Admits X’s Open Data Skews Disinformation Findings While Fining Platform for Restricting Researchers

The EU’s own diplomatic service has published a report admitting that X makes its data more accessible to researchers than other major platforms, and then used that admission to brand X the primary channel of “foreign information manipulation and interference” against the bloc.

The European External Action Service (EEAS) put this in writing. The media ran with the conclusion and buried the caveat.

The fourth annual FIMI Threats report, released this month, found that “88% of instances were concentrated on the platform X. The presence of CIB networks, the ease of creation of fabricated accounts, but also more straightforward access to data, explains this concentration. Most of the major social media platforms restrict access to data that would allow for assessing the magnitude of information manipulation activities.”

Keep reading

U.S. Tech Firms Demand Security Restrictions Against Chinese Robots

American A.I. and robotics companies are reportedly asking Congress to impose curbs on Chinese robotics manufacturers, citing unfair business practices and the security risks they pose, Chinese media complained this week.

Interestingly, these concerns are particularly acute for humanoid robots, not the bulky industrial machines traditionally associated with the robotics industry.

Humanoid robots, the stuff of countless science fiction stories, are finally happening, and witnesses told the Cybersecurity and Infrastructure Protection Subcommittee of the House Homeland Security Committee on Tuesday that China has developed a troubling lead in the new consumer technology.

Max Fenkell of the San Francisco-based company Scale AI highlighted a viral video from China’s Unitree Robotics that showed humanoid robots performing acrobatics and martial arts at a Lunar New Year celebration.

“The video went viral, not because it was impressive, but because of what happened when people compared it to last year, 12 months ago – the same robots could barely shuffle through a dance routine. This year, they’re doing karate. That is the speed of this competition,” Fenkell noted.

Fenkell said winning the humanoid robot race “requires a whole-of-government approach” to compete with China’s massive deployment of government funding and state power to support its robotics industry. He noted that American companies currently have the edge on quality of components and engineering, but China has taken the lead on implementing small-robot technology in practical ways.

“We’re seeing two different races play out and I fear right now the United States may be winning the wrong one,” he cautioned.

“The People’s Republic of China is moving aggressively to dominate the technologies that are reshaping the global economy and security, including artificial intelligence, robotics, and autonomous systems,” said subcommittee member Rep. Vince Fong (R-CA) in his opening statement.

Keep reading

‘Sexy Suicide Coach:’ OpenAI Delays AI Porn Feature over Safety Uproar

OpenAI has postponed the launch of its controversial “adult mode” feature following intense pushback from its own advisory council and concerns about technical safeguards failing to protect minors.

The Wall Street Journal reports that CEO Sam Altman first proposed the feature last year, arguing for the need to “treat adult users like adults” by enabling erotic text conversations. Originally scheduled for Q1 this year, the rollout has been pushed back by at least a month.

The proposal triggered fierce opposition from OpenAI’s own handpicked advisory council on well-being and AI. At a January meeting, advisers unanimously expressed fury after learning the company planned to proceed despite their reservations. One council member warned OpenAI risked creating a “sexy suicide coach” — a reference to cases where ChatGPT users had developed intense emotional bonds with the bot before taking their own lives.

The technical problems are just as serious. OpenAI’s age-prediction system — designed to block minors from accessing adult content — was misclassifying minors as adults roughly 12 percent of the time during internal testing. With approximately 100 million users under 18 each week on the platform, that error rate could expose millions of children to explicit material. The company has also struggled to lift restrictions on erotic content while still blocking nonconsensual scenarios and child pornography.
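As a quick sanity check on that claim, a back-of-envelope sketch in Python (both input figures are the rough approximations reported above, not precise numbers):

```python
# Back-of-envelope estimate of minors misclassified as adults each week,
# using the approximate figures reported from OpenAI's internal testing.
error_rate = 0.12                  # minors misclassified as adults (~12%)
weekly_minor_users = 100_000_000   # approximate weekly users under 18

misclassified = error_rate * weekly_minor_users
print(f"{misclassified:,.0f} minors potentially misclassified per week")
# → 12,000,000
```

On these assumptions, “millions of children” understates it: the estimate is on the order of twelve million potential misclassifications per week.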

Internal documents reviewed by the Journal identified additional risks: compulsive use, emotional overreliance on the chatbot, escalation toward increasingly extreme content, and displacement of real-world relationships.

Keep reading

Google Discontinues AI Health Feature Filled with Misleading Advice

Google has quietly discontinued an AI search feature that offered users health advice crowdsourced from non-medical professionals worldwide.

The Guardian reports that Google has removed a controversial AI-powered search feature called “What People Suggest” that provided users with crowdsourced health advice from people around the world. The decision comes amid growing scrutiny over the technology company’s use of artificial intelligence to deliver health information to millions of users.

Three sources familiar with the decision confirmed that Google has scrapped the feature. A company spokesperson acknowledged that “What People Suggest” had been discontinued, stating the removal was part of a broader simplification of the search results page and was unrelated to concerns about the quality or safety of the feature.

The feature was initially launched in March of last year at an event in New York called “The Check Up,” where Google announced plans to expand medical-related AI summaries in its search function. At the time, the company promoted “What People Suggest” as demonstrating the potential of AI to transform health outcomes globally by connecting users with information from people who had similar lived medical experiences.

Karen DeSalvo, who served as Google’s chief health officer at the time of the launch, explained the rationale behind the feature in a blog post. “While people come to search to find reliable medical information from experts, they also value hearing from others who have similar experiences,” DeSalvo wrote. The feature used AI to organize perspectives from online discussions into themes, making it easier for users to understand what people were saying about particular health conditions.

DeSalvo provided an example of how the feature would work, noting that someone with arthritis seeking information about exercise could quickly find insights from others with the same condition, with links to explore further information. The feature was initially available on mobile devices in the United States before being discontinued.

Keep reading

Meta is Ending Instagram Direct Message End-to-End Encryption

Meta is quietly dismantling one of its few genuine privacy commitments. Starting May 8, end-to-end encryption for Instagram direct messages disappears, taking with it the one technical guarantee that kept those conversations private from Meta itself.

“If you have chats that are impacted by this change, you will see instructions on how you can download any media or messages you may want to keep,” the company said in a help document, framing the loss of message privacy as a data export problem. Collect your things, the walls are coming down.

The feature being removed was never universal anyway. End-to-end encryption for Instagram DMs had been available only in certain regions, not enabled by default, since Meta began testing it in 2021 as part of what CEO Mark Zuckerberg called his “privacy-focused vision for social networking.”

That vision apparently has an expiration date. Meta also made encrypted DMs available to all adult users in Ukraine and Russia in February 2022, weeks after the Russian invasion began. That access, too, is ending.

The timing is revealing. TikTok told the BBC last week that it has no plans to bring end-to-end encryption to its DMs, arguing that privacy makes users less safe. Meta is now arriving at the same destination from a different direction.

The stakes are straightforward. End-to-end encryption means only the people in a conversation can read it, a technical lock that excludes the platform, third parties, and anyone who might later obtain a warrant.
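To make that “technical lock” concrete, here is a deliberately toy Python sketch (a simple XOR one-time pad — not production cryptography, and not Meta’s actual protocol): only the endpoints that hold the shared key can recover the message, while the relaying platform sees only ciphertext.

```python
import secrets

# Toy illustration of the end-to-end property (XOR one-time pad).
# NOT real-world cryptography: real E2EE uses key agreement plus
# authenticated encryption. The point is who can read the bytes.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the corresponding key byte."""
    return bytes(a ^ b for a, b in zip(data, key))

message = b"meet at noon"
shared_key = secrets.token_bytes(len(message))  # known only to the two endpoints

ciphertext = xor_bytes(message, shared_key)  # what the platform relays and stores
plaintext = xor_bytes(ciphertext, shared_key)  # only key holders can do this

assert plaintext == message
```

Remove the key (as Meta is effectively doing) and the platform is no longer relaying opaque ciphertext; it holds the readable messages itself.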

When that lock disappears, Meta and its employees can read Instagram DMs, law enforcement can subpoena them, and advertisers may eventually benefit from what gets learned.

Instagram users who relied on encrypted DMs have until May 8 to decide what to archive. After that, their private conversations are Meta’s to read.

Keep reading

‘CODE RED’ Author Tells Fox News: Google Gemini AI Claims Republicans Like Marsha Blackburn, Tom Cotton Engage in Hate Speech

Google’s Gemini AI chatbot claims that only Republican senators violate its hate speech policy, with not a single Democrat flagged by the woke tech giant’s system, Breitbart News social media director Wynton Hall demonstrated to Fox News in a revelation published today. The bias built into AI by leftist Silicon Valley tech titans is a central subject of Hall’s new book, CODE RED.

Gemini flagged a group of Republican senators — but no Democrats — when asked to name senators who have made statements that violate Google’s hate speech policies, Hall demonstrated to Fox News with a video of Gemini AI in action.

Hall, whose new book, Code Red: The Left, the Right, China, and the Race to Control AI, publishes on Tuesday, added that this is just one example of what is a deeply ingrained bias against conservatives in AI tools.

“AI’s Silicon Valley architects lean left politically, and their lopsided political donations to Democrats underscore their ideological aims,” the author told the outlet.

Fox News reported:

Hall used the “deep research” function on Google’s Gemini Pro. Fox News Digital reviewed a screen recording of Hall’s prompt and findings. Google did not immediately respond to Fox News Digital’s request for comment.

One of the Republicans flagged by Gemini in Hall’s research, Sen. Marsha Blackburn, of Tennessee, was listed for characterizing “transgender identity as a harmful cultural ‘influence’ and has used ‘woke’ as a derogatory slur against protected groups.” Another, Arkansas’ Sen. Tom Cotton, was cited for cosponsoring legislation “to exclude transgender students from sports.”

Hall explains in CODE RED that AI tools touting themselves as neutral are actually shaped by the political bias of those who create them. The Breitbart News social media director begins his book with a stark example, pointing to an incident in 2024 in which several viral videos seemingly exposed a clear double standard in American homes.

Keep reading

Florida Gives Tech Platforms Deadline for Age ID Checks

Florida’s attorney general has handed tech companies an ultimatum: build identity verification systems into your platforms by April 8, or his office starts filing lawsuits.

The deadline comes as a federal appeals court hears arguments this week on whether the state can legally force millions of users to prove who they are before accessing social media.

The law driving this, HB 3, bans anyone under 14 from social media entirely and requires parental consent for 14- and 15-year-olds. It also forces adult content sites to verify visitors are 18 or older.

Attorney General James Uthmeier gave tech companies 30 days to implement age restrictions and 60 days to deploy parental consent mechanisms. “It is the law of the land,” he said at an Orlando event on March 9. Non-compliance means litigation.

What Florida is actually mandating is a digital ID checkpoint at the entrance to the internet. The law doesn’t specify which verification methods qualify as “reasonable.” It doesn’t cap how long platforms can retain identity documents. It doesn’t limit what platforms can do with the surveillance infrastructure once it’s built. Florida gets the policy win.

Users hand over their documents. The data sits in corporate systems indefinitely, available for breaches, subpoenas, and purposes nobody has disclosed yet.

Uthmeier even named TikTok and Discord specifically. Discord’s attempt to introduce digital ID age verification has been met with much backlash, especially after a leak of over 70,000 government IDs. Uthmeier appears unconcerned.

NetChoice, co-plaintiff in the legal challenge, named this directly: the law creates a security risk by “mandating the surrender of sensitive information.” That’s the part Florida’s child-protection framing is designed to obscure. Every minor blocked from TikTok requires millions of adults to first prove they aren’t minors. The verification burden falls on everyone.

Keep reading

ADL Orders Advertisers to Bail on Twitter, Calls the Bible an ‘Antisemitic Conspiracy Theory’

The far-left Anti-Defamation League has ordered advertisers to “pause Twitter spending” after accusing the platform’s users, and its new owner Elon Musk, of “antisemitism” and “hate” amidst a surge in free speech. Among the “hate” tweets cited by the ADL is a Bible verse posted to Twitter by Ye, formerly known as Kanye West.

After a week of back and forth between Elon Musk and the ADL regarding free speech on the Twitter platform, the far-left pressure group, increasingly seen as a Jewish supremacist organization, authored a long tweet thread calling on advertisers to ditch the platform.

Allowing free speech, the ADL claimed in the thread, is not only “toxic,” but part of a “hate for profit” scheme.

“Today, we are joining dozens of other groups to ask advertisers to pause Twitter spending because we are profoundly concerned about antisemitism and hate on the platform,” reads the ADL’s initial tweet, authored Friday, November 4th.

“Here’s why we’re asking advertisers to #StopHateForProfit and #StopToxicTwitter,” it went on, introducing the thread that called the Holy Bible an “antisemitic conspiracy theory.”

The ADL’s call for advertising sponsors collectively worth billions to leave Twitter high and dry came just days after Elon Musk was slammed for groveling to the anti-speech group that threatened “dire consequences” for allowing free speech and reinstating accounts they don’t like.

On Wednesday, Musk announced that he’d met with ADL leadership following their threat, as well as the NAACP, and even members of the Bush political machine.

Musk said at the time that his Twitter platform would continue enforcing anti “hate” and “election integrity” policies at the ADL’s behest.

But, “since that meeting,” the ADL claimed in their tweet thread, “Musk permitted @KanyeWest to start posting again,” which, apparently, is a grave offense.

Among Ye’s tweets that the ADL is most vehemently opposed to is his sharing of Holy Bible verse John 19:19, something the ADL claims is equivalent to posting “antisemitic conspiracy theories.”

Keep reading

Crushing the Right to Conscientiously Object

As the U.S. and Israel’s deeply unpopular war with Iran enters its second week, social media platform X is censoring the accounts of people providing information to military servicemembers on how they can refuse to serve. This is particularly relevant as fears have grown that U.S. ground troops may enter the conflict.

The Center on Conscience & War, an 80-year-old nonprofit that, according to its website, “advocates for the rights of conscience, opposes military conscription, and serves all conscientious objectors to war,” was banned on X for 12 hours. The center’s executive director, Mike Prysner, shared a notice that the center received from X which labeled their posts as having “violated X rules” against “illegal and regulated behaviors.”

Prysner wrote: “This is the post @CCW4COs was suspended for, informing service members of their legal right under DoDI 1332.14 to report ‘failure to adapt’ within first 365 days of service and receive an entry-level discharge.”

It remains legal to conscientiously object to military service. The only conceivable way that the post could be framed as encouraging illegal or irregular behavior would be to recast such objections as mutiny, which is exactly what pro-Israeli voices on social media have been frantically doing in the last few days.

In response to conservative commentator Candace Owens also encouraging those in the U.S. military to conscientiously object to serving in Iran, pro-Israel journalist Emily Schrader wrote on X:

“This is illegal. She is literally advocating mutiny. Under 18 U.S.C. § 2387 (Advocating overthrow or disloyalty in the armed forces). It is a crime for any person, including civilians, to willfully advocate or attempt to cause:
• insubordination in the armed forces
• disloyalty among service members
• mutiny or refusal of duty
It also criminalizes distributing materials intended to encourage those outcomes.
The penalty can be up to 10 years in prison and fines.”

Other pro-Israel voices, like billionaire hedge-fund manager Bill Ackman, reposted Schrader’s sentiments.

The social media ban on the Center on Conscience & War came less than 24 hours after its executive director, Prysner, also wrote via social media regarding anecdotal evidence of troops being readied for combat:

“I just spoke with the mother of a service member in this unit. They were given one last call home before having to turn in their phones. He told his mom they were going ‘boots on the ground’ tonight.”

Keep reading

UK Consults on Social Media Age Verification While Directing Parents to Report “Hate Speech” to Big Tech

The British government launched a consultation this week that could require age verification for anyone using social media, gaming sites, or AI chatbots.

The consultation, titled “Growing up in the online world,” opened on March 2nd and closes May 26, 2026. It asks the public whether the government should ban under-16s from social media entirely, impose mandatory overnight curfews on platform access, restrict AI chatbot features for minors, and require platforms to disable “addictive design features” like infinite scrolling and autoplay.

The government says it will respond in summer 2026, and Parliament has already handed ministers new legal powers to act on the findings without waiting for fresh primary legislation.

The Prime Minister announced those powers on February 16, weeks before the consultation even opened. The government can now move faster once it decides what it wants. What the public thinks determines the packaging, not the destination.

Technology Secretary Liz Kendall framed it this way: “The path to a good life is a great childhood, one full of love, learning, and play. That applies just as much to the online world as it does to the real one.”

The actual policy tools being considered are a different matter.

Age verification, as a mechanism, works by proving identity. Every user proves who they are.

A social media platform that must exclude under-16s must verify the age of its over-16s. That means collecting identity documents, linking browsing activity to real identities, or building infrastructure that a government can later compel to serve other purposes.

The surveillance architecture required to enforce a children’s safety law is the same architecture required to surveil adults. It gets built for one reason. It gets used for others.

Then there’s the “Help your child stay safe online” campaign site the government launched alongside the consultation. The site includes a page directing parents to report “bullying, threats, harassment, hate speech, and content promoting self-harm or suicide” directly to platforms, with links to the reporting tools of Instagram, Snapchat, Facebook, WhatsApp, TikTok, Discord, YouTube, and Twitch.

Keep reading