X Users Find Their Real Names Are Being Googled in Israel After Using X Verification Software “Au10tix”

A wide range of anonymous X (Twitter) users have reported that their real names are suddenly being Googled in Israel shortly after they began criticizing the country for its actions in Palestine. Some connected the phenomenon to Au10tix, the software X requires users (even anonymous ones) to use in order to verify their real identities.

Au10tix is an Israeli company founded and staffed by former spies from Unit 8200, the country’s elite military intelligence group. MintPress News investigates this disturbing phenomenon.

“The Largest Honeypot Operation On the Planet”
“I’m not even kidding when I say my full legal name, including my middle name, has been searched up in Israel 11 times in the past day,” wrote TransFemPOTUS, an anonymous X user who has been highly critical of Israel’s actions.

This was not an isolated incident. “So apparently my full legal name got searched for in Israel the other day,” revealed TheAtlantean9, an anonymous far-left user with a Palestinian flag in their bio.

Meanwhile, artist Bionico Bandito stated that “My full name got searched 100 times in Israel when I posted this,” referring to a cartoon depicting associates of Jeffrey Epstein being executed.

Across the world, from conservative Japanese accounts to American conspiracy theorists, anonymous users are reporting that data from Google Trends shows their real names, not divulged anywhere online, are being mass searched in Israel.
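
Readers who want to replicate this kind of check can approximate it with pytrends, an unofficial third-party Python wrapper for Google Trends (not a Google-supported API). The name below is a placeholder, and Trends only reports relative interest for terms with sufficient search volume, so most personal names will return nothing; treat this as a rough sketch rather than a forensic tool.

    # Rough sketch using pytrends, an unofficial wrapper for Google Trends.
    # "Jane Q. Placeholder" stands in for a real name; Trends only reports
    # terms with enough search volume, so most personal names return nothing.
    from pytrends.request import TrendReq

    pytrends = TrendReq(hl="en-US", tz=0)
    pytrends.build_payload(["Jane Q. Placeholder"], timeframe="now 7-d", geo="")

    # Break relative interest down by country and check Israel's row.
    by_country = pytrends.interest_by_region(resolution="COUNTRY", inc_low_vol=True)
    if "Israel" in by_country.index:
        print(by_country.loc["Israel"])
    else:
        print("No reportable search volume for this term.")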

How could this be happening? Some laid the blame at Au10tix’s door. “Only Au10tix and X holds my data obtained from ID verification,” wrote one user in a viral post, adding, “The rumors are absolutely true.”

“Israel is now 100% confirmed to be Googling anonymous users on X and their family members shortly after they speak out against the country,” wrote another, concluding that, “X is now the largest honeypot operation on the planet.”

The theory centers on Israeli security company Au10tix, which in 2023 was tasked with verifying users’ identities, a prerequisite for joining X’s premium service and the far greater reach it affords.

The process requires individuals to upload a picture of their passport or other photo ID and to allow Au10tix to scan their face via their device’s camera. Au10tix claims that it deletes users’ data within 72 hours of receiving it. However, the company was founded and is staffed by veterans of Unit 8200, the notorious Israeli spying group behind many of the most outrageous hacking, infiltration, and cyberwarfare scandals of the past decade, and that pedigree has left many users extremely suspicious.

The idea that Au10tix itself, or the Israeli government, could be using the data users hand over in order to combat online criticism is far from outlandish. The Department of Homeland Security is already known to be doing much the same, sending hundreds of subpoenas to Google, Facebook, Instagram, Reddit, Discord, and other large social media platforms demanding they share the personal information and identities of anonymous users who have criticized the actions of Immigration and Customs Enforcement (ICE). Government officials confirmed to The New York Times that platforms have often complied with these requests.

Keep reading

X CRACKS DOWN on AI-Generated War Propaganda: NO MORE Cashing In on Fake Footage Without Labels

In an effort to protect truthful information during global conflicts, X has rolled out strict new rules targeting creators who peddle AI-generated videos of war without clear disclosures. The move comes after pro-Iran propagandists flooded the platform with fabricated clips designed to sow chaos.

The policy shift, effective immediately, focuses on X’s Creator Revenue Sharing program. Creators posting AI-made content showing armed conflicts must include a clear label or face penalties that hit where it hurts: their wallets.

According to details from X’s head of product, Nikita Bier, the platform is clamping down hard. “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people,” Bier stated.

Bier elaborated that X will use its Community Notes system—crowdsourced fact-checking that empowers users over elite moderators—along with post metadata to detect undeclared AI content. The rules don’t ban AI videos outright; they just demand transparency via X’s built-in “Made with AI” label option.

Violators get a 90-day suspension from earning ad revenue on their posts. A second offense leads to a permanent ban from the program. This targets those exploiting wars for profit, without stifling creative expression.

The update follows a surge in deceptive content amid the Iran-Israel clash. Pro-Iran accounts have pushed AI fakes, like one claiming Iranian missiles sank the USS Abraham Lincoln. “Iran’s IRGC claims to have struck USS Abraham Lincoln with ballistic missiles. LIE,” CENTCOM posted. “The Lincoln was not hit. The missiles launched didn’t even come close.”

Keep reading

Apple Rolls Out Age Verification to UK iPhone Users Under Online Safety Act

Apple is now starting to demand age verification from UK iPhone users, and the latest iOS 26.4 beta makes clear what’s at stake for anyone who declines.

The move is a direct consequence of the UK’s Online Safety Act, a censorship law that also forces platforms to check the identity or age eligibility of every adult user or face fines reaching 10% of global revenue.

The law is controversial, but British Prime Minister Keir Starmer says it doesn’t go far enough.

A prompt appears after installation asking users to confirm they’re over 18. Refuse, and Apple says users “will not be able to download and purchase apps or make in-app purchases.”

The verification process gives Apple several ways to establish your age. It can pull from the payment method already linked to your account, use account age as a proxy, or ask you to scan a credit card. Some users may eventually be asked to scan a photo ID. Apple frames all of this as seamless.
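
Apple has published none of the internal logic, so purely as an illustration, the fallback chain described above might look something like the sketch below. Every name in it is hypothetical, and nothing here reflects actual Apple code.

    # Purely illustrative pseudologic for the signal fallback described
    # above. Apple has not published an API; every name here is invented.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Account:
        payment_card_on_file: bool   # a card issuer already age-checked the holder
        account_age_years: float     # how long the account has existed

    def infer_adult(account: Account) -> Optional[bool]:
        # 1. A stored payment method is treated as a strong adult signal.
        if account.payment_card_on_file:
            return True
        # 2. An account older than 18 years cannot belong to a minor.
        if account.account_age_years >= 18:
            return True
        # 3. Signals inconclusive: escalate to a credit card scan or,
        #    for some users, a photo ID scan.
        return None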

Keep reading

Privacy Groups Revolt Against Google’s Demand to Register Every Android Developer

Android’s defining advantage over iOS has always been openness. You could build an app, distribute it yourself, and never touch Google’s systems. That era is about to end unless the open-source community can force Google to back down.

Starting September 2026, any app installed on a certified Android device must be registered by a Google-verified developer. No registration, no installation. The verification demands government-issued identification, agreement to Google’s terms and conditions, and a $25 fee.

Developers who skip Google’s approval process will find their apps blocked, even when distributed entirely outside Google Play, through stores like F-Droid, the Amazon Appstore, or Samsung’s Galaxy Store.

Organizations, including the Electronic Frontier Foundation, the Free Software Foundation, F-Droid, Article 19, Fastmail, and Vivaldi, signed an open letter calling on Alphabet CEO Sundar Pichai, founders Larry Page and Sergey Brin, and app ecosystem chief Vijaya Kaza to kill the policy. Their message is simple: Google is reaching into distribution channels it doesn’t own, doesn’t operate, and has no legitimate authority over.

Keep reading

Sam Altman Fumes That It Takes Longer to Train a Human Than an AI, Plus They Eat All That Wasteful Food

AI leaders insist they’ve got humanity’s best interests in mind. If we’re to take them at their word, then we must say: they have a really unfortunate habit of sounding like they have nothing but contempt for the human race.

The latest case in point: OpenAI CEO Sam Altman’s tone-deaf comments at an event hosted by The Indian Express — made fresh off his skin-crawlingly awkward refusal to join hands with Anthropic’s Dario Amodei on stage with other industry titans — in which he attempted to downplay critiques of AI’s environmental impact. 

For starters, he called it “unfair” to compare the energy costs of training an AI model “to how much it costs a human to do one inference query.” That’s because, as Altman explains, “it also takes a lot of energy to train a human.”

“It takes like 20 years of life and all of the food you eat during that time before you get smart,” Altman continued. “And not only that, it took the very widespread evolution of the 100 billion people that have ever lived and learned not to get eaten by predators and learned how to figure out science and whatever, to produce you.”

Measured that way, “probably AI has already caught up on an energy efficiency basis” to humans, Altman said.

Altman also railed against claims about AI’s water consumption.

“Water is totally fake,” he began, almost taunting quote-miners. “It used to be true, we used to do evaporative cooling in data centers.” 

“But now that we don’t do that,” Altman said, you still see claims like “‘don’t use ChatGPT, it’s 17 gallons of water for each query,’ or whatever.” 

“This is completely untrue and totally insane,” he asserted. “No connection to reality.”

No one can deny that humans are costly to bring up in our industrialized age. We should be doing everything realistically possible to bring down our CO2 emissions and stop eating so much meat — but we aren’t, for a number of dispiriting systemic reasons we won’t get into today.

Regardless, at least those costs go toward keeping human civilization ticking. The water used in agriculture will keep someone fed, and the fossil fuels we burn will keep someone warm.

Keep reading

Russia’s FSB Charges Telegram Founder Pavel Durov with Aiding Terrorism

Russia’s Federal Security Service is now pursuing a criminal terrorism case against Pavel Durov, the founder of Telegram. The charge, “assistance to terrorist activities” under Article 205.1 of the Russian Criminal Code, carries up to 15 years in prison. The accusation was published Tuesday in Rossiyskaya Gazeta, Russia’s official state newspaper, which said the article was “based on materials from Russia’s Federal Security Service” and called Telegram “a tool for hybrid threats.”

The timing is hardly subtle. For months, Moscow has been throttling Telegram’s speed, blocking its voice and video calls, and pushing tens of millions of Russians toward MAX, a state-built messaging app with no end-to-end encryption, legally required integration with the FSB’s surveillance infrastructure, and a privacy policy that allows sharing user data with government authorities on request.

MAX has been pre-installed on every smartphone sold in Russia since September 2025. Telegram, used by more than 90 million Russians every month, is the target. MAX is the replacement. The terrorism charge against Durov is the lever.

Durov responded on his Telegram channel: “Russia has opened a criminal case against me for ‘aiding terrorism.’ Each day, the authorities fabricate new pretexts to restrict Russians’ access to Telegram as they seek to suppress the right to privacy and free speech.”

Keep reading

Zuckerberg’s “Fix” for Child Safety Could End Anonymous Internet Access for Everyone

Mark Zuckerberg spent more than five hours on the stand in Los Angeles Superior Court on Wednesday, testifying before a jury for the first time about claims that Meta deliberately designed Instagram to addict children.

The headline from most coverage was the spectacle: an annotated paper trail of internal emails, a 35-foot collage of the plaintiff’s Instagram posts unspooled across the courtroom, a CEO growing visibly agitated under cross-examination.

The more important story is what Wednesday’s proceedings are being used to build.

The trial is framed as a child safety case. What it is actually doing, especially through Zuckerberg’s own testimony, is laying the political and legal groundwork for mandatory identity verification across the internet.

And Zuckerberg, rather than pushing back on that outcome, offered the court his preferred implementation plan.

Keep reading

Iranian Spies Busted: Three Silicon Valley Engineers Charged with Stealing Google Trade Secrets and Funneling Data to Tehran

A federal grand jury has indicted three engineers for stealing hundreds of confidential files from Google and other tech giants, then smuggling the sensitive data to Iran.

The defendants, Samaneh Ghandali, 41, a U.S. citizen; her sister Soroor Ghandali, 32, an Iranian national on a student visa; and Samaneh’s husband Mohammadjavad Khosravi, 40, an Iranian national and legal permanent resident, were all residents of San Jose at the time of the theft.

Samaneh and Soroor previously worked at Google before joining another, unnamed tech firm, while Khosravi, a former member of the Iranian army, was employed at a developer of system-on-chip (SoC) platforms such as Qualcomm’s Snapdragon series, which powers many high-end Android phones (Qualcomm also supplies modem chips for Apple’s iPhones).

Charged with conspiracy to commit trade secret theft, theft and attempted theft of trade secrets, and obstruction of justice, the trio allegedly exploited their insider access to steal processor security and cryptography materials, along with Snapdragon SoC hardware architecture secrets, valuable intelligence not readily available to competitors.

The trio routed the files through third-party platforms like Telegram, copied them to personal devices, and even photographed computer screens to dodge digital monitoring.

In a particularly brazen move, just before Samaneh and Khosravi jetted off to Iran in December 2023, she snapped photos of his work screen displaying company secrets.

While the two were in Iran, devices linked to them accessed the pilfered files.

Google caught wind of the scheme in August 2023 when internal security flagged Samaneh’s suspicious activity, leading to her access being revoked.

Keep reading

OpenAI failed to disclose Canadian trans shooter’s ChatGPT history in meeting with officials day after shooting

In a meeting with the British Columbia government the day after a trans-identifying 18-year-old carried out a mass shooting in Tumbler Ridge, OpenAI did not disclose that it had been aware of concerning conversations the shooter had with its chatbot months prior.

The province said in a statement that OpenAI waited until the following day to ask its provincial contact to help connect the company with the Royal Canadian Mounted Police. OpenAI handed over evidence that the shooter was banned from using ChatGPT after its automated screening systems flagged his chats last June, according to a company statement.

A previous report found that some employees had wanted the company to alert police about posts involving gun violence, but were ignored.

The Globe and Mail reported that, according to a statement from Premier David Eby’s office, a government representative met with OpenAI employees on February 11 about the company’s interest in opening a satellite office in Canada. The following day, OpenAI requested contact information for the RCMP.

“That request was sent to the director of policing and law-enforcement services, who connected OpenAI with the RCMP,” the statement said. “OpenAI did not inform any member of government that they had potential evidence regarding the shootings in Tumbler Ridge.”

Keep reading

West Virginia sues Apple, saying iCloud distributed ‘child porn’

West Virginia’s attorney general sued Apple (AAPL.O) on Thursday, accusing the iPhone maker of allowing its iCloud service to become what the company’s own internal communications called the “greatest platform for distributing child porn.”

Attorney General JB McCuskey, a Republican, accused Apple of prioritizing user privacy over child safety. His office called the case the first of its kind by a government agency over the distribution of child sexual abuse material on Apple’s data storage platform.

“These images are a permanent record of a child’s trauma, and that child is revictimized every time the material is shared or viewed,” McCuskey said in a statement.

Apple in a statement said it has implemented features that prevent children from uploading or receiving nude images and was “innovating every day to combat ever-evolving threats and maintain the safest, most trusted platform for kids.”

“All of our industry-leading parental controls and features, like Communication Safety — which automatically intervenes on kids’ devices when nudity is detected in Messages, shared Photos, AirDrop and even live FaceTime calls — are designed with the safety, security, and privacy of our users at their core,” Apple said.

Apple on Thursday said it plans to roll out a feature in the coming weeks that allows users in the U.S. to flag inappropriate content such as nudity directly to Apple via a “Report to Apple” feature. This is already available in Australia and the United Kingdom. Apple said the expansion was previously planned and not in response to West Virginia’s lawsuit.

The U.S. has seen a growing national reckoning over how smartphones and social media harm children. So far, the wave of litigation and public pressure has mostly targeted companies like Meta, Snap, and Google’s YouTube, with Apple largely insulated from scrutiny.

Keep reading