Zuckerberg’s “Fix” for Child Safety Could End Anonymous Internet Access for Everyone

Mark Zuckerberg spent more than five hours on the stand in Los Angeles Superior Court on Wednesday, testifying before a jury for the first time about claims that Meta deliberately designed Instagram to addict children.

The headline from most coverage was the spectacle: an annotated paper trail of internal emails, a 35-foot collage of the plaintiff’s Instagram posts unspooled across the courtroom, a CEO growing visibly agitated under cross-examination.

The more important story is what Wednesday’s proceedings are being used to build.

The trial is framed as a child safety case. What it is actually doing, especially through Zuckerberg’s own testimony, is laying the political and legal groundwork for mandatory identity verification across the internet.

And Zuckerberg, rather than pushing back on that outcome, offered the court his preferred implementation plan.

Keep reading

Iranian Spies Busted: Three Silicon Valley Engineers Charged with Stealing Google Trade Secrets and Funneling Data to Tehran

A federal grand jury has indicted three engineers for stealing hundreds of confidential files from Google and other tech giants, then smuggling the sensitive data to Iran.

The defendants, Samaneh Ghandali, 41, a U.S. citizen; her sister Soroor Ghandali, 32, an Iranian national on a student visa; and Samaneh’s husband Mohammadjavad Khosravi, 40, an Iranian national and legal permanent resident, were all residents of San Jose at the time of the theft.

Samaneh and Soroor previously worked at Google before joining another unnamed tech firm, while Khosravi, a former member of the Iranian army, was employed at a developer of system-on-chip (SoC) platforms, such as Qualcomm’s Snapdragon series, which is used in high-end Android phones and Apple’s iPhones.

Charged with conspiracy to commit trade secret theft, theft and attempted theft of trade secrets, and obstruction of justice, the trio allegedly exploited their insider access to steal processor security and cryptography materials, as well as Snapdragon SoC hardware architecture secrets that serve as valuable intel not readily available to competitors.

The trio routed the files through third-party platforms like Telegram, copied them to personal devices, and even photographed computer screens to dodge digital monitoring.

In a particularly brazen move, just before Samaneh and Khosravi jetted off to Iran in December 2023, she snapped photos of his work screen displaying company secrets.

While in Iran, devices linked to them accessed this pilfered info.

Google caught wind of the scheme in August 2023 when internal security flagged Samaneh’s suspicious activity, leading to her access being revoked.

Keep reading

OpenAI failed to disclose Canadian trans shooter’s ChatGPT history in meeting with officials day after shooting

In a meeting with the British Columbia government, the day after a trans-identifying 18-year-old carried out a mass shooting in Tumbler Ridge, OpenAI did not disclose that it had been aware of concerning conversations the shooter had with its chatbot months prior.

The province said in a statement that OpenAI waited until the following day to ask its provincial contact to help connect the company with the Royal Canadian Mounted Police. OpenAI handed over evidence that the shooter was banned from using ChatGPT after its automated screening systems flagged his chats last June, according to a company statement.

A previous report found that some employees had wanted the company to alert police about posts involving gun violence, but were ignored.

The Globe and Mail reported that, according to a statement from Premier David Eby’s office, a government representative met with OpenAI employees on February 11 about its interest in opening a satellite office in Canada. The following day, OpenAI requested contact information from the RCMP.

“That request was sent to the director of policing and law-enforcement services, who connected OpenAI with the RCMP,” the statement said. “OpenAI did not inform any member of government that they had potential evidence regarding the shootings in Tumbler Ridge.”

Keep reading

West Virginia sues Apple, saying iCloud distributed ‘child porn’

West Virginia’s attorney general sued Apple (AAPL.O) on Thursday, accusing the iPhone maker of allowing its iCloud service to become what the company’s own internal communications called the “greatest platform for distributing child porn.”

Attorney General JB McCuskey, a Republican, accused Apple of prioritizing user privacy over child safety. His office called the case the first of its kind by a government agency over the distribution of child sexual abuse material on Apple’s data storage platform.

“These images are a permanent record of a child’s trauma, and that child is revictimized every time the material is shared or viewed,” McCuskey said in the statement.

Apple in a statement said it has implemented features that prevent children from uploading or receiving nude images and was “innovating every day to combat ever-evolving threats and maintain the safest, most trusted platform for kids.”

“All of our industry-leading parental controls and features, like Communication Safety — which automatically intervenes on kids’ devices when nudity is detected in Messages, shared Photos, AirDrop and even live FaceTime calls — are designed with the safety, security, and privacy of our users at their core,” Apple said.

Apple on Thursday said it plans to roll out a feature in the coming weeks that allows users in the U.S. to flag inappropriate content such as nudity directly to Apple via a “Report to Apple” feature. This is already available in Australia and the United Kingdom. Apple said the expansion was previously planned and not in response to West Virginia’s lawsuit.

The U.S. has seen a growing national reckoning over how smartphones and social media harm children. So far, the wave of litigation and public pressure has mostly targeted companies like Meta, Snap, and Google’s YouTube, with Apple largely insulated from scrutiny.

Keep reading

Berlin Court Orders X to Hand Over Hungarian Election Data to Researchers

A Berlin court has ordered X to hand over data on Hungarian election activity to researchers, ruling in favor of Democracy Reporting International after the platform refused the group’s access requests in November.

The ruling turns on the EU’s censorship law, the Digital Services Act, which requires large platforms to give external researchers access to data for scrutiny of election interference risks. X ignored that obligation. The European Commission fined it €40 million ($47 million) for that refusal, as part of a broader €120 million ($141 million) levy, in December.

X’s position throughout has been straightforward: don’t share the data. No response to press inquiries, no compliance, no engagement.

Hungary votes in April in what amounts to a test of Viktor Orbán’s power as he faces his rival Péter Magyar.

Keep reading

Boasberg Rubber-Stamps DOJ Request To Keep FBI-Twitter Payments Secret

When the Twitter files hit in December of 2022, they revealed that the FBI had paid Twitter at least $3.4 million between October 2019 and February 2021 to reimburse the pre-Musk, left-leaning social media giant for a flood of requests.

During this period, the Biden DOJ was going after vaccine skeptics, lab-leak proponents, 2020 election ‘deniers,’ Catholic parents, Hunter Biden laptop / Burisma content, and conservative news outlets. We also learned that the FBI’s Elvis Chan and crew were holding weekly meetings with Twitter on “misinformation,” and flagged thousands of accounts for the above.

Days after the Twitter files were released, watchdog group Judicial Watch sued the Biden DOJ, which oversees the FBI, over a FOIA request demanding to know how much the FBI paid Twitter from 2016 onward. The FBI initially refused, but eventually released 44 pages of documents with the key payment details redacted – claiming the data was protected under FOIA’s “Exemption 7(E),” which lets agencies hide info about law enforcement methods if releasing it could help criminals or enemies dodge detection.

Judicial Watch then narrowed its claims to just those redacted payment amounts (JW dropped other issues, such as vendor names). In December of 2025, however, the Trump DOJ filed a motion for summary judgment asking Judge James Boasberg to deny Judicial Watch’s request – effectively concealing the extent to which the FBI, under Trump and Biden, was going after Americans.

In its motion for summary judgment, US Attorney Jeanine Pirro’s office (say it ain’t so!) argued that revealing payments tied to real investigations could expose super secret investigative methods – such as how much the FBI is “engaging” with Twitter vs. other platforms, which could lead ‘bad guys’ (criminals, hackers, foreign spies) to switch to platforms with less FBI activity – and that it might reveal shifts in FBI priorities over time.

Revealing the quarterly totals could also implicate “mosaic theory,” under which seemingly harmless info (like one quarter’s payment) can be pieced together with public data (e.g., Twitter’s transparency reports) to form a big picture of FBI strategies.

Earlier this month, Boasberg agreed – ruling that revealing the payments could expose FBI “techniques and procedures” (how they monitor online threats) and help bad actors figure out what the FBI is focused on, allowing them to adapt and change strategies. 

Boasberg wrote in his opinion that the 7(E) exemption is valid because it could “risk circumvention of the law.” 

Keep reading

Meta’s Zuckerberg denies at LA trial that Instagram targets kids

Meta Platforms chief executive Mark Zuckerberg on Feb 18 repeatedly said during a landmark trial over youth social media addiction that the Facebook and Instagram operator does not allow kids under 13 on its platforms, despite being confronted with evidence suggesting they were a key demographic.

Mr Mark Lanier, a lawyer for a woman suing Instagram and Google’s YouTube for harming her mental health when she was a child, pressed Mr Zuckerberg over his statement to Congress in 2024 that users under 13 are not allowed on the platform.

Mr Lanier confronted Mr Zuckerberg with internal Meta documents.

The case involves a California woman who started using Instagram and YouTube as a child. She alleges the companies sought to profit by hooking kids on their services despite knowing social media could harm their mental health.

She alleges the apps fuelled her depression and suicidal thoughts and is seeking to hold the companies liable.

Meta and Google have denied the allegations, and pointed to their work to add features that keep users safe.

“If we want to win big with teens, we must bring them in as tweens,” read one internal Instagram presentation from 2018.

Keep reading

Meta Granted Patent That Allows Facebook Users to Post After They Die

A patent granted to Meta in December gives an inside look at how Facebook users may still be able to post beyond the grave.

According to the patent, the large language model will be “simulating the user when the user is absent from the social networking system, for example, when the user takes a long break or if the user is deceased.”

The AI model would essentially replicate the deceased person’s digital presence by analyzing data such as their writing style, tone, and expressed beliefs, allowing it to continue posting content and even commenting on friends’ posts in their voice.

It gets stranger.

The patent also referenced technology that would allow users to have phone calls, or even video calls, with the deceased.

Per Business Insider:

In the patent, Meta lays out why it thinks people might need this.

If you’re no longer posting online — whether that’s because you need a break from social media or … you … die — your followers’ user experience will be affected. In short, they’ll miss you.

“The impact on the users is much more severe and permanent if that user is deceased and can never return to the social networking platform,” the document says.

To fill that void, Meta would essentially create a digital clone of your social media presence, training a model on “user-specific” data — including historical platform activity, such as comments, likes, or content — to understand how you would (or rather, did) behave.

The patent was filed by Meta’s CTO, Andrew Bosworth.

Despite being granted the patent, Meta has claimed it has no plans to implement the technology.

In recent years, the idea of using AI to bring someone back from the dead digitally has gained some momentum.

In 2025, the family of Christopher Pelkey, who was killed in a road rage incident in Arizona, used an AI-generated video of Pelkey as an impact statement in the courtroom.

Keep reading

Meta Considers Timed Face Recognition Launch to Exploit Distracted Society

Meta is weighing whether to add face recognition to its camera-equipped smart glasses, and The New York Times obtained an internal company document that reveals more than just the plan itself.

It reveals how Meta thinks about when to launch it: “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”

Read that plainly: Meta wants to release a mass biometric surveillance product while the people most likely to fight it are too distracted to respond.

The technology would scan the face of every person who enters the glasses’ field of view, building a faceprint to match against a database. Every passerby. Every stranger on the subway. Every person who happens to walk through the frame of someone else’s device. None of them consented. Most of them won’t even know they were captured.

Faceprints are among the most sensitive data a company can collect. Unlike a password, a face cannot be changed after a breach. Once collected, this data enables mass surveillance, fuels discrimination, and creates a permanent identification trail attached to a person’s physical movement through the world.

Putting that capability into wearable glasses carried by ordinary people in ordinary places moves it off servers and into every room, street, and gathering that people enter.

Meta ran this experiment before and lost.

The company shut down (only kind of) its photo face-scanning tool in November 2021, simultaneously announcing it would delete (if you believe them) over a billion stored face templates. That retreat came after years of mounting legal exposure that produced a very expensive record.

In July 2019, Facebook settled a Federal Trade Commission investigation for $5 billion. The allegations included that the company’s face recognition settings were confusing and deceptive, and the settlement required the company to obtain consent before running face recognition on users going forward.

Less than two years later, Meta agreed to pay $650 million to settle a class action brought by Illinois residents under that state’s biometric privacy law. Then, in July 2024, it settled with Texas for $1.4 billion over the same defunct system. More than $7 billion across three settlements, all tied to face recognition practices the company ultimately abandoned.

Keep reading

Amazon’s Ring and Google’s Nest Unwittingly Reveal the Severity of the U.S. Surveillance State

That the U.S. Surveillance State is rapidly growing to the point of ubiquity has been demonstrated over the past week by seemingly benign events. While the picture that emerges is grim, to put it mildly, at least Americans are again seeing with crystal clarity how severe this has become.

The latest round of valid panic over privacy began during the Super Bowl held on Sunday. During the game, Amazon ran a commercial for its Ring camera security system. The ad manipulatively exploited people’s love of dogs to induce them to ignore the consequences of what Amazon was touting. It seems that trick did not work.

The ad highlighted what the company calls its “Search Party” feature, whereby one can upload a picture, for example, of a lost dog. Doing so will activate multiple other Amazon Ring cameras in the neighborhood, which will, in turn, use AI programs to scan all dogs, it seems, and identify the one that is lost. The 30-second commercial was full of heart-tugging scenes of young children and elderly people being reunited with their lost dogs.

But the graphic Amazon used seems to have unwittingly depicted how invasive this technology can be. That this capability now exists in a product long pitched as nothing more than a simple tool for homeowners to monitor their own homes created, it seems, an unavoidable contrast between the public’s understanding of Ring and what Amazon was now boasting it could do.

Keep reading