Kiwi Farms Challenges DMCA Subpoenas as Tools to Unmask Anonymous Speech

A new lawsuit filed in the Southern District of New York offers a clean example of something that keeps happening and keeps getting ignored: the Digital Millennium Copyright Act being used to censor speech and unmask anonymous speakers.

The case is Lolcow LLC v. Fong-Jones, filed on March 12, 2026, and it pits the operator of the web forum Kiwi Farms against Liz Fong-Jones, an activist and field Chief Technology Officer at SaaS observability platform Honeycomb, who has been filing DMCA subpoenas in an attempt to identify anonymous forum users.

The content Fong-Jones wants censored consists of a screenshot of one of Fong-Jones’s Bluesky posts and an edited version of a Fong-Jones headshot, both related to what Fong-Jones has publicly described as a “consent accident.”

Forum users posted and discussed those images. Fong-Jones responded by claiming copyright ownership and filing DMCA subpoenas to force the site to hand over the identities of the people who posted them.

The copyright claims seem thin. Kiwi Farms operator Joshua Moon argues that the screenshot is a derivative work over which Fong-Jones holds no copyright, and that the edited headshot represents a textbook case of fair use, given that the image has no commercial value and was modified specifically for purposes of criticism and commentary.

That argument carries weight. Courts have long recognized that transformative use of images for commentary or ridicule sits comfortably within fair use protections.

What makes this case useful as a case study is less the copyright question itself and more the mechanism being exploited. The DMCA subpoena process, codified in Section 512(h), lets copyright holders unmask allegedly infringing anonymous internet users without ever filing a lawsuit: a court clerk issues the subpoena on request, with a copy of the infringement notice attached.

Keep reading

Apple Removes Private VPN Apps From Russia App Store

Apple pulled several custom VPN clients from the Russian App Store last week, including Streisand, V2Box, v2RayTun, and Happ Proxy Utility.

These aren’t the big-name commercial VPN providers that Apple already removed in 2024 at Roskomnadzor’s request. These are tools that let users connect to their own private servers and configure manual proxies, the kind of apps that give technically savvy Russians the ability to route around state censorship without depending on any company’s infrastructure.

Russian tech outlet Kod Durova first reported the removals, noting that the same apps remain available through Google Play on Android.

Days before the removals surfaced, Digital Development Minister Maksut Shadayev announced the Kremlin’s most aggressive anti-VPN campaign yet. “We have an obligation to fulfill the tasks that have been set before us. In this case, the task is to reduce the use of VPNs,” Shadayev said on the state-backed messenger Max.

He linked the push to what he called “long, difficult and ultimately unsuccessful” talks with foreign tech companies over compliance with Russian law.

Keep reading

Americans Traveling to Hong Kong Will Now Face ARREST for Refusing to Hand Over Phones, Laptops, and Passwords

The era of privacy is officially over in the “New Hong Kong,” and if you’re an American traveling abroad, you are now a target for the Communist-controlled regime.

According to a recent warning issued by the U.S. Consulate, Americans entering or even transiting through Hong Kong could now face criminal charges simply for refusing to unlock their phones or provide passwords to authorities.

Under newly updated enforcement rules tied to Hong Kong’s sweeping National Security Law, police now have the authority to demand access to personal electronic devices, including phones and laptops, on the spot.

And here’s the catch:

This applies to everyone: residents, tourists, business travelers, and even passengers just passing through the airport.

The U.S. Consulate General in Hong Kong and Macao issued the following alert:

On March 23, 2026, the Hong Kong government changed the implementing rules relating to the National Security Law. It is now a criminal offense to refuse to give the Hong Kong police the passwords or decryption assistance to access all personal electronic devices including cellphones and laptops. This legal change applies to everyone, including U.S. citizens, in Hong Kong, arriving or just transiting Hong Kong International Airport. In addition, the Hong Kong government also has more authority to take and keep any personal devices, as evidence, that they claim are linked to national security offenses.

Keep reading

New Company Hopes to Build Age-Verification Tech into Vape Cartridges 

The goal is to use biometric data and blockchain to build age-verification measures directly into disposable vape cartridges.

Wired reports on a partnership between vape/cartridge manufacturer Ispire Technology and regulatory consulting company Chemular (which specializes in the nicotine market), a venture they’ve named “Ike Tech”:

[Using blockchain-based security, the e-cig cartridge] would use a camera to scan some form of ID and then also take a video of the user’s face. Once it verifies your identity and determines you’re old enough to vape, it translates that information into anonymized tokens. That info goes to an identity service like ID.me or Clear. If approved, it bounces back to the app, which then uses a Bluetooth signal to give the vape the OK to turn on.

“Everything is tokenized,” [says Ispire CEO Michael Wang]. “As a result of this process, we don’t communicate consumer personal private information.” He says the process takes about a minute and a half… After that onetime check, the Bluetooth connection on the phone will recognize when the vape cartridge is nearby and keep it unlocked. Move the vape too far away from the phone, and it shuts off again. Based on testing, the companies behind Ike Tech claim this process has a 100 percent success rate in age verification, more or less calling the tech infallible. “The FDA told us it’s the holy grail technology they were looking for,” Wang says. “That’s word-for-word what they said when we met with them….”

Wang says the goal is to implement additional features in the verification process, like geo-fencing, which would force the vape to shut off while near a school or on an airplane. In the future, the plan is to license this biometric verification tech to other e-cig companies. The tech may also grow to include fingerprint readers and expand to other product categories; Wang suggests guns, which have a long history of age-verification features not quite working.
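The flow Wired describes, a one-time identity check that yields an anonymized token followed by a Bluetooth proximity gate, can be sketched in a few lines of Python. Everything here (the class and function names, the RSSI threshold, the `id_service` callback) is illustrative, not Ike Tech’s actual implementation:

```python
import hashlib

# Hypothetical signal-strength threshold for "phone is nearby";
# real BLE proximity gating would be considerably more involved.
UNLOCK_RANGE_RSSI = -70

def anonymize(id_scan: bytes, face_video: bytes) -> str:
    """Stand-in for the tokenization step: derive an opaque token so no
    raw personal data is stored or transmitted, per Wang's description."""
    return hashlib.sha256(id_scan + face_video).hexdigest()

class VapeCartridge:
    """Toy model of the one-time check plus Bluetooth proximity gate."""

    def __init__(self):
        self.verified_token = None  # set once the identity service approves

    def verify_age(self, id_scan: bytes, face_video: bytes, id_service) -> bool:
        # id_service stands in for an external verifier like ID.me or Clear.
        token = anonymize(id_scan, face_video)
        if id_service(token):
            self.verified_token = token
            return True
        return False

    def can_fire(self, phone_rssi: int) -> bool:
        # Unlocked only after a successful check AND while the paired
        # phone's Bluetooth signal indicates it is within range.
        return self.verified_token is not None and phone_rssi >= UNLOCK_RANGE_RSSI
```

Note the trade-off the article hints at: after the one-time check, possession of a nearby paired phone is the only ongoing control, which is what features like geo-fencing would have to layer on top of.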

Keep reading

Digital IDs Are Insecure and Can Be Hacked

On Wednesday, Norman Fenton, Professor Emeritus at Queen Mary University of London, published an article describing how he lost control of his Twitter (now X) account to a hacker on 18 March but has now regained limited control. 

“After sending a spam ‘vote for me’ direct message (with a malicious link) to all my mutual followers on 20 March, the attackers began posting spam publicly. Not sporadically, but continuously: the bot was posting roughly once per second, pausing only intermittently to repost content from accounts I follow. By the time I regained access, there were thousands of spam posts and reposts,” he said.  “For now, anyone visiting my timeline will see little but pages of spam.”

Prof. Fenton ended his article by saying, “Regaining access to my account ultimately required persistence, external pressure, and, quite likely, legal escalation. Many users facing similar attacks will not have those options.

“If X cannot reliably protect accounts, respond promptly to verified compromise reports, and restore control without unnecessary barriers, then users are left dangerously exposed.

“What’s clear right now is that when things go wrong, you will be largely on your own.”

Read more: I’ve regained control of my hacked X account, Norman Fenton, 25 March 2026

In response to Prof. Fenton’s experience, Dr. Scott McLachlan, formerly a lecturer in Digital Technologies for Healthcare at King’s College London, posted a Substack note taking Prof. Fenton’s final remarks a logical step further.

Keep reading

The Verdict Against Meta and Google That Could End the Anonymous Internet

A Los Angeles jury has found Meta and YouTube negligent in the design of their platforms and awarded $3 million to a plaintiff identified as K.G.M., a young woman who testified that years of near-constant social media use contributed to depression, anxiety, and body dysmorphia. The jury assigned 70% of the responsibility to Meta and 30% to YouTube. Punitive damages came to another $6 million.

The verdict is being reported as a landmark for child safety. It also represents a significant legal mechanism for dismantling anonymous internet access, built in plain sight, with bipartisan support and a CEO’s enthusiastic assistance.

K.G.M.’s attorneys built their claim not around what users posted, which Section 230 of the Communications Decency Act largely shields platforms from liability for, but around how the platforms were designed.

Infinite scroll, algorithmically amplified notifications, engagement loops engineered to maximize time on site. The argument treats social media architecture the way product liability law treats a car without brakes: a defective product the public needs to be protected from.

Keep reading

The Kill Switch Society

There was a time — not very long ago — when the automobile represented one of the clearest expressions of individual choice in a free society. Limited only by fuel, roads, and imagination, a person could choose where to go, when to go, and how to get there. The car was not merely a machine. It was mobility made personal — an extension of autonomy and freedom.

Sadly, that is no longer the case. Increasingly, this same instrument, once a tool to facilitate individual independence, has been repurposed into a system of monitoring and control. Though advertised as consumer safety features, these systems were clearly designed to empower the state.

Modern vehicles are no longer just mechanical devices; they are computers on wheels. Embedded sensors track speed, braking patterns, seatbelt usage, location, and even driver attention. Event Data Recorders — commonly referred to as “black boxes” — have been standard in most new vehicles for years. Originally justified as instruments to reconstruct accidents, these devices record data in the moments before a crash. Few object to understanding the causes of collisions. But it is worth noting that once data exists, its use rarely remains confined to its original purpose.

Insurance companies now seek access to driving data to adjust premiums. Law enforcement agencies have used vehicle data in criminal investigations. Courts have admitted such data as evidence. Each of these developments can be justified in isolation. Together, they represent a quiet but unmistakable shift: the automobile is no longer simply your property — it is a source of information about you.

More recently, legislative developments have accelerated this trend. The federal infrastructure legislation passed in 2021 includes a mandate for advanced impaired driving prevention technology to be installed in all new vehicles within the coming years. While often described in benign terms — systems that passively detect intoxication or driver impairment — the practical reality is that these systems must continuously monitor driver behavior in order to function. Monitoring creates data. And data, once created, rarely remains unused. It takes on a life of its own.

Proposals and discussions around remote vehicle disablement — popularly referred to as “kill switches” — have raised further concerns. While proponents argue that such features could prevent high-speed chases or stop stolen vehicles, the existence of remote-control capabilities introduces a fundamentally different relationship between the individual and the machine. A car that can be disabled remotely is clearly not under the control of its owner.

History suggests that powers granted for limited purposes seldom remain limited. Civil asset forfeiture, initially justified as a tool against organized crime, expanded into widespread seizures affecting ordinary citizens. Surveillance authorities granted for national security purposes have been used in far broader contexts. It would be historically naïve to assume that vehicle control technologies would be immune to similar expansion.

Keep reading

FC Barcelona Fined for Privacy Violations Over Biometric Data Collection

FC Barcelona got fined €500,000 ($579,219) for scanning the faces and recording the voices of over 100,000 members without doing the legal homework first.

Spain’s data protection authority, the AEPD, found the club had deployed biometric identity verification during a membership census update and processed all of it without a valid Data Protection Impact Assessment.

Members renewing their details remotely were required to either submit a facial scan through their device camera or record their voice. Both systems were live, both were processing biometric data at scale, and the documentation Barcelona produced to justify any of it didn’t meet the bar GDPR sets for high-risk processing.

Article 35 of the GDPR requires organizations to conduct a DPIA before deploying any system likely to create a high risk for individuals. Biometric data used for identification qualifies automatically. Processing that touches more than 100,000 people, including minors, qualifies. Using new technologies qualifies. Barcelona’s system hit all three.

The AEPD concluded the club’s documentation was missing the essential components of a genuine assessment: no real necessity and proportionality analysis, no adequate evaluation of what the processing actually risks for the people whose faces and voices it captured.

The AEPD’s decision in case PS-00450-2024 makes one point with particular clarity: consent doesn’t substitute for a DPIA. Barcelona had asked members to agree to biometric data collection, and members had agreed.

That agreement is legally irrelevant to the separate procedural obligation to assess risk before the system goes live. The GDPR treats them as independent requirements. Satisfying one doesn’t discharge the other.

What a valid DPIA actually requires, according to the decision, is a clear description of the processing, a genuine necessity and proportionality assessment, a detailed risk evaluation, proposed mitigation measures, and a residual risk assessment after mitigations are applied. Organizations that generate DPIA documentation as a compliance checkbox, without substantively working through those questions, remain exposed regardless of what consent language they put in front of users.
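As a rough illustration (the section names and helper function are hypothetical shorthand, not AEPD or GDPR terminology), the decision’s point reduces to a completeness check that consent language cannot satisfy:

```python
# The five components the AEPD decision says a valid DPIA must contain,
# rendered as a checklist. Keys are illustrative labels only.
REQUIRED_SECTIONS = [
    "processing_description",
    "necessity_and_proportionality",
    "risk_evaluation",
    "mitigation_measures",
    "residual_risk_assessment",
]

def dpia_gaps(dpia: dict) -> list:
    """Return the required sections missing or left empty in a draft DPIA."""
    return [s for s in REQUIRED_SECTIONS if not dpia.get(s)]

# A draft that documents consent but skips the substantive analysis:
draft = {
    "processing_description": "Biometric identity check for member census",
    "consent_language": "Members agree to facial/voice capture",
}
# dpia_gaps(draft) flags the four analytical sections still missing;
# the consent entry contributes nothing, mirroring the AEPD's finding.
```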

The appetite for facial biometric data has become near-universal across industries, and the Barcelona case lands in a moment when that appetite is accelerating faster than the rules meant to govern it.

Keep reading

Canada’s Public Safety Minister Defends Mass Surveillance Bill

Canada’s Public Safety Minister, Gary Anandasangaree, wants you to know that Bill C-22 is not a surveillance bill. He said so twice.

“I want to be very clear about what C-22 is not. It is not about the surveillance of honest, hard-working Canadians going on about their daily lives,” Anandasangaree told an audience that included police chiefs and law enforcement officials.

Then, a few sentences later: “We’re not looking for sneaky ways to surveil Canadians. We are doing our part to combat bad actors in both the physical and digital worlds.”

What he described is a surveillance bill.

The Lawful Access Act, introduced this month, compels electronic service providers to retain Canadians’ metadata for a year and gives police and CSIS new mechanisms to access it. That includes location data, device identifiers, and daily movement patterns, all stored in advance, on every Canadian, not just suspects, held ready for law enforcement retrieval.

Keep reading

Hong Kong Police Can Now Demand Phone or Computer Passwords of Suspects Under National Security Law

Your passwords are no longer safe in Hong Kong.

Ever since Hong Kong was returned to China from British rule in 1997, the territory has devolved toward the totalitarian ways of mainland China.

In 2020, the National Security Law (NSL) imposed by Beijing crystallized this new reality.

The NSL criminalizes acts of ‘secession, subversion, terrorism, collusion with foreign forces, treason, espionage, sabotage, and external interference’.

With penalties up to life imprisonment, it even allows some cases to be tried on the mainland, and it grants police ‘broad surveillance and detention powers’.

Needless to say, this has curtailed political dissent, free speech, and pro-democracy activities in Hong Kong.

And now it emerges that Hong Kong police can demand phone or computer passwords from suspects under the NSL.

BBC reported:

“Those who refuse could face up to a year in jail and a fine of up to HK$100,000 ($12,700; £9,600), and individuals who provide ‘false or misleading information’ could face up to three years in jail.

It comes as part of new amendments to a bylaw under the NSL that the government gazetted on Monday.”

Keep reading