Apple, Google Caught ‘Helping Users’ Find Apps That Can Deepfake Nude Pictures of Real People, and, Worse, Kids Are Vulnerable Too

Tech companies Apple and Google were found to have been leading users — specifically children — to apps that could effectively pornify images through artificial intelligence.

Last Wednesday, 9to5Mac reported on findings published in January by the Tech Transparency Project, which concluded that both the Apple App Store and Google Play “are helping users to find apps that create deepfake nude images of women.”

The stores were even found promoting these apps and autocompleting search results for them.

About 40 percent of the top 10 apps appearing in searches for “nudify,” “undress,” and “deepnude” could “render women nude or scantily clad.”

These are apps where users can take two different images — one normal and one sexually explicit — and generate an image where components of both are used, sexualizing the person from the normal one.

9to5Mac reached out to the developer of one of these apps and was told they “had no idea it was capable of producing such extreme content.”

On Thursday, Apple responded to the outlet, saying the apps were not allowed on its store, as its review guidelines prohibit sexual content.

The company said it has removed 15 apps, with others receiving notice they will be removed if they continue to be in violation.

In January, California Democratic Gov. Gavin Newsom went after social media platform X with a similar allegation.

“xAI’s decision to create and host a breeding ground for predators to spread nonconsensual sexually explicit AI deepfakes, including images that digitally undress children, is vile,” he said.

“I am calling on the Attorney General to immediately investigate the company and hold xAI accountable.”

Keep reading

Google is Tracking Your Life – Photo Cloud Feeding AI System

There was a time when your photo album sat in a drawer: private, personal, and disconnected from the outside world. Privacy barely exists in the modern world, where personal data has become a key tool of control, and now Google is taking the next step by turning your memories into fuel for artificial intelligence.

According to a recent report, Google has rolled out a major update to its Photos platform that allows its AI system, Gemini, to scan your entire photo library to build what it calls “Personal Intelligence.” What this means in plain English is that your images are no longer just stored; they are analyzed and integrated into a broader behavioral profile. Google openly admits the system can use actual images of you and your loved ones to generate AI content, eliminating the need for users to manually upload reference photos.

This is not a minor tweak to a photo app but a structural shift in how data is harvested and understood: every image you have ever taken now becomes part of a living model that attempts to understand who you are, who you associate with, where you go, and how you live your life. What was once private has become something continuously processed and categorized.

Keep reading

US Bill Mandates On-Device Age Verification

A bill introduced by Representative Josh Gottheimer in the House on April 13 would require Apple, Google, and every other operating system vendor to verify the age of anyone setting up a new device in the United States.

The legislation, H.R. 8250, travels under the friendlier name of the Parents Decide Act, and it is among the most aggressive surveillance mandates ever proposed for American consumer technology.

We obtained a copy of the bill for you here.

The press releases describing it lead with children. The text describes something much larger. To confirm a child is under 18, the system has to identify everyone else, too, and the bill builds the infrastructure to do exactly that.

This is child safety as a delivery mechanism for mass identification. The pattern is familiar by now. A genuine harm gets named, a sympathetic victim gets centered, and the solution proposed reshapes the digital lives of three hundred million people who were not the problem.

The Parents Decide Act follows that template with unusual precision. It takes the real suffering of real children and uses it to justify building a national identity layer underneath every device sold in the country, administered by two private companies, with the details to be filled in later.

The mandate sits in Section 2(a)(1), which obligates providers to “Require any user of the operating system to provide the date of birth of the user” both to set up an account and to use the device at all. Adults included.

There is no carve-out for adult users, no opt-out for people who simply want to turn on a phone without first handing a date of birth to Apple or Google.

The age check is the entry fee for owning a computer. What happens to that data afterward gets handed off to the Federal Trade Commission to sort out later. A federal bill that mandates identification as a condition of using a general-purpose computing device represents something the United States has not previously had: a national ID requirement for turning on a device.

Keep reading

YouTube Removes Pro-Iran Channel Producing Anti-Trump Videos

Google, the owner of YouTube, has removed a channel on the platform belonging to a pro-Iran group that produces Lego-themed videos mocking Donald Trump.

“Upon review, we’ve terminated the channel for violating our Spam, deceptive practices and scams policies,” a YouTube spokesperson told Middle East Eye. “YouTube doesn’t allow spam, scams, or other deceptive practices that take advantage of the YouTube community.” 

Explosive Media’s content largely consists of animations ridiculing the US war effort against Iran and poking fun at the US president.

YouTube did not specify how the channel had violated its policies, but the group has previously been described as being “aligned with the Islamic Revolutionary Guard Corps.”

One of the group’s videos depicts Trump hurling a chair at US military figures, while Iranian generals press a red button with the label “Back to the Stone Age,” referencing a threat made by Secretary of War Pete Hegseth. Another depicts Trump with a flaming bottom, holding a sign that reads: “VICTORY! I am a loser.”

A number of videos reference Shia Islamic mythology, including depictions of Hussein ibn Ali, the grandson of the Prophet Muhammad, who is a key symbol of resistance and spiritual leadership for Shia Muslims. 

Writing on X, Explosive Media hit out at Google for suspending its channel, saying it had been done because its content was “violent”. It wrote: “Seriously! Are our LEGO-style animations actually violent?” 

Explosive Media, known in Persian as Akhbar Enfejari, has denied it is backed by the Iranian government and its videos have reached millions of viewers across a range of social media platforms.

Its most recent video before the suspension suggested that Trump was carrying out the war in Iran at Israel’s behest and to distract from the Epstein files.

It also implied that Epstein and his associates had engaged in cannibalism, for which there is no evidence. An earlier video referenced other victims of US violence through history, including Native Americans, the Vietnamese and the children of Gaza, Hiroshima and Nagasaki. It also quoted Malcolm X. 

Keep reading

Google’s AI Overviews spew millions of false answers per hour, bombshell study reveals

Google’s AI-generated search results are spewing out tens of millions of inaccurate answers per hour – even as the tech giant siphons visitors and ad revenue from cash-strapped news outlets, according to a bombshell analysis.

To test the accuracy of Google’s AI Overviews, startup Oumi reviewed 4,326 Google search results generated by Google’s Gemini 2 model and the same number of results generated by its more advanced Gemini 3 model.

The analysis found that the models were accurate 85% and 91% of the time, respectively.

With Google expected to handle more than 5 trillion searches in 2026 alone, that means AI Overviews are spitting out fake news at a rate of hundreds of thousands of mistakes every single minute – with users left none the wiser.
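The article’s per-minute figure can be checked with back-of-envelope arithmetic. This sketch assumes every search triggers an AI Overview and that the accuracy rates from Oumi’s sample (85% for Gemini 2, 91% for Gemini 3) generalize to all queries, both simplifying assumptions not stated in the study:

```python
# Back-of-envelope check of the error-rate arithmetic.
# Assumptions (not from the study): every search produces an AI Overview,
# and the sampled accuracy rates hold across all queries.

SEARCHES_PER_YEAR = 5e12          # "more than 5 trillion searches in 2026"
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a year

for model, accuracy in [("Gemini 2", 0.85), ("Gemini 3", 0.91)]:
    error_rate = 1 - accuracy
    errors_per_minute = SEARCHES_PER_YEAR * error_rate / MINUTES_PER_YEAR
    errors_per_hour = errors_per_minute * 60
    print(f"{model}: ~{errors_per_minute:,.0f} errors/minute, "
          f"~{errors_per_hour:,.0f} errors/hour")
```

Under these assumptions, even the more accurate Gemini 3 model would produce roughly 850,000 wrong answers a minute, or about 50 million an hour, consistent with the article’s “tens of millions per hour” framing.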

Keep reading

White House renamed ‘Epstein Island’ on Google phones – WaPo

The White House was briefly renamed ‘Epstein Island’ for some Google Pixel phone users, the Washington Post has reported.

The term is used to refer to the Caribbean island of Little St. James, which had been owned by the late convicted pedophile Jeffrey Epstein. According to prosecutors, it served as the venue for sex trafficking and other abuses involving some high-profile figures in business and politics.

WaPo said in an article on Saturday that when its journalist tried calling the White House switchboard earlier this week, the name on screen indicated that they were contacting “Epstein Island.”

Only users of Google’s Pixel phones experienced the issue. For those calling the presidential residence from other Android phones and iPhones, no name was displayed, the report read.

Keep reading

The Verdict Against Meta and Google That Could End the Anonymous Internet

A Los Angeles jury has found Meta and YouTube negligent in the design of their platforms and awarded $3 million to a plaintiff identified as K.G.M., a young woman who testified that years of near-constant social media use contributed to depression, anxiety, and body dysmorphia. The jury assigned 70% of the responsibility to Meta and 30% to YouTube. Punitive damages came to another $6 million.

The verdict is being reported as a landmark for child safety. It also represents a significant legal mechanism for dismantling anonymous internet access, built in plain sight, with bipartisan enthusiasm and a CEO’s enthusiastic assistance.

K.G.M.’s attorneys built their claim not around what users posted, which Section 230 of the Communications Decency Act largely shields platforms from liability for, but around how the platforms were designed.

Infinite scroll, algorithmically amplified notifications, engagement loops engineered to maximize time on site. The argument treats social media architecture the way product liability law treats a car without brakes. A defective product that the public needs to be protected from.

Keep reading

Google Gets Caught Red-Handed Pushing Left-Wing Agenda, Silencing Conservative News

Commentator Britt Hughes is raising concerns about what she describes as a lack of ideological balance on Google News, pointing to a February analysis that examined the platform’s top morning stories.

Hughes compared the findings to earlier criticism of another major platform before turning to Google’s results.

“Do you remember when I told you that Apple News had gone three straight months without featuring a single article from a conservative news outlet in its top stories?” Hughes said.

She then shifted to Google’s performance during the same type of review period.

“Well now they can hold Google’s beer,” Hughes said.

Keep reading

Google Discontinues AI Health Feature Filled with Misleading Advice

Google has quietly discontinued an AI search feature that offered users health advice crowdsourced from non-medical professionals worldwide.

The Guardian reports that Google has removed a controversial AI-powered search feature called “What People Suggest” that provided users with crowdsourced health advice from people around the world. The decision comes amid growing scrutiny over the technology company’s use of artificial intelligence to deliver health information to millions of users.

Three sources familiar with the decision confirmed that Google has scrapped the feature. A company spokesperson acknowledged that “What People Suggest” had been discontinued, stating the removal was part of a broader simplification of the search results page and was unrelated to concerns about the quality or safety of the feature.

The feature was initially launched in March of last year at an event in New York called “The Check Up,” where Google announced plans to expand medical-related AI summaries in its search function. At the time, the company promoted “What People Suggest” as demonstrating the potential of AI to transform health outcomes globally by connecting users with information from people who had similar lived medical experiences.

Karen DeSalvo, who served as Google’s chief health officer at the time of the launch, explained the rationale behind the feature in a blog post. “While people come to search to find reliable medical information from experts, they also value hearing from others who have similar experiences,” DeSalvo wrote. The feature used AI to organize perspectives from online discussions into themes, making it easier for users to understand what people were saying about particular health conditions.

DeSalvo provided an example of how the feature would work, noting that someone with arthritis seeking information about exercise could quickly find insights from others with the same condition, with links to explore further information. The feature was initially available on mobile devices in the United States before being discontinued.

Keep reading

‘CODE RED’ Author Tells Fox News: Google Gemini AI Claims Republicans Like Marsha Blackburn, Tom Cotton Engage in Hate Speech

Google’s Gemini AI chatbot claims that only Republican senators violate its hate speech policy, with not a single Democrat flagged by the woke tech giant’s system, Breitbart News social media director Wynton Hall demonstrated to Fox News in a revelation published today. The bias built into AI by leftist Silicon Valley tech titans is a central subject of Hall’s new book, CODE RED.

Gemini flagged a group of Republican senators — but no Democrats — when asked to name senators who have made statements that violate Google’s hate speech policies, Hall demonstrated to Fox News with a video of Gemini AI in action.

Hall, whose new book, Code Red: The Left, the Right, China, and the Race to Control AI, publishes on Tuesday, added that this is just one example of what is a deeply ingrained bias against conservatives in AI tools.

“AI’s Silicon Valley architects lean left politically, and their lopsided political donations to Democrats underscore their ideological aims,” the author told the outlet.

Fox News reported:

Hall used the “deep research” function on Google’s Gemini Pro. Fox News Digital reviewed a screen recording of Hall’s prompt and findings. Google did not immediately respond to Fox News Digital’s request for comment.

One of the Republicans flagged by Gemini in Hall’s research, Sen. Marsha Blackburn, of Tennessee, was listed for characterizing “transgender identity as a harmful cultural ‘influence’ and has used ‘woke’ as a derogatory slur against protected groups.” Another, Arkansas’ Sen. Tom Cotton, was cited for cosponsoring legislation “to exclude transgender students from sports.”

Hall explains in CODE RED that AI tools touting themselves as neutral are actually shaped by the political bias of those who create them. The Breitbart News social media director begins his book with a stark example, pointing to an incident in 2024 in which several viral videos seemingly exposed a clear double standard in American homes.

Keep reading