Ireland’s Simon Harris to Push EU-Wide Ban on Social Media Anonymity

Ireland’s upcoming term holding the EU presidency will be used to promote a new agenda: an effort to end online anonymity and make verified identity the standard across social media platforms.

Tánaiste Simon Harris said the government plans to use Ireland’s presidency to push for EU-wide rules that would require users to confirm their identities before posting or interacting online.

Speaking to Extra.ie, Harris described the plan as part of a broader attempt to defend what he called “democracy” from anonymous abuse and digital manipulation.

He said the initiative will coincide with another policy being developed by Media Minister Patrick O’Donovan, aimed at preventing children from accessing social media.

O’Donovan’s proposal, modeled on Australian restrictions, is expected to be introduced while Ireland holds the EU presidency next year.

Both ideas would involve rewriting parts of the EU’s Digital Services Act, which already governs how online platforms operate within the bloc.

Expanding it to require verified identities would mark a major shift toward government involvement in online identity systems, a move that many privacy advocates believe could expose citizens to new forms of monitoring and limit open speech.

Harris said his motivation comes from concerns about the health of public life, not personal grievance.

Harris said he believes Ireland will find allies across Europe for the initiative.

He pointed to recent statements from French President Emmanuel Macron and UK Prime Minister Keir Starmer, who he said have shown interest in following Australia’s lead. “If you look at the comments of Emmanuel Macron…of Keir Starmer…recently, in terms of being open to considering what Australia have done…You know this is a global conversation Ireland will and should be a part of,” he said.

Technology companies based in Ireland, many of which already face scrutiny under existing EU rules, are likely to resist further regulation.

The United States government has also expressed growing hostility toward European efforts to regulate speech on its major tech firms, recently imposing visa bans on several EU officials connected to such laws.

Keep reading

Indian Supreme Court Judge Says Those With Nothing to Hide Shouldn’t Fear Surveillance

A courtroom drama over state surveillance in India took a striking turn when a Supreme Court judge suggested that people who live transparently should not be troubled by government monitoring.

The case involved allegations that Telangana’s state intelligence apparatus was used for political snooping, but the discussion soon widened into a philosophical clash over privacy and power.

Former Special Intelligence Bureau (SIB) chief T. Prabhakar Rao, accused of directing unlawful phone tapping during the previous BRS government, was before the bench as the State sought more time to keep him in police custody.

During the hearing, Justice B.V. Nagarathna questioned why citizens would object to being monitored at all, asking, “Now we live in an open world. Nobody is in a closed world. Nobody should be really bothered about surveillance. Why should anyone be bothered about surveillance unless they have something to hide?”

Her comment prompted Solicitor General Tushar Mehta to caution against normalizing government spying. He asked whether this meant “every government will have a free hand in putting people under surveillance,” warning that secret monitoring without authorization was unlawful and incompatible with basic freedoms.

Mehta reminded the bench that the Constitution, as affirmed in the landmark Puttaswamy ruling, enshrines privacy as part of human dignity and liberty.

“The Supreme Court knows the difference between an ‘open’ world and being under illegal surveillance. My personal communications with my wife… I have a right not to be under surveillance,” he said.

Keep reading

China bans sharing porn on messaging apps

China will expand a ban on sharing obscene materials to include content sent via phone and online messaging apps starting next year.

According to the revised law, anyone “disseminating obscene information using information networks, telephones, or other communication tools” will face up to 15 days in jail and a fine of up to 5,000 yuan ($711). Penalties will be higher if the content involves children.

The wording of the law has prompted concern in media and on social networks that it could be applied to private sexually explicit messages between adults, such as sexting.

However, according to multiple legal experts cited by Chinese state media, the legal changes will not affect one-on-one private communications. They argue that the revisions simply reflect technological development: maximum fines increase, while detention periods remain unchanged.

“China has mature standards and procedures for identifying obscene materials. It is critical to clarify that ‘obscene’ does not equal ‘indecent’,” China Daily cited Ji Ying, an associate professor of law at the University of International Business and Economics in Beijing, as saying.

Keep reading

Pennsylvania High Court Rules Police Can Access Google Searches Without Warrant

The Pennsylvania Supreme Court has a new definition of “reasonable expectation.” According to the justices, it’s no longer reasonable to assume that what you type into Google is yours to keep.

In a decision that reads like a love letter to the surveillance economy, the court ruled that police were within their rights to access a convicted rapist’s search history without a warrant. The reasoning is that everyone knows they’re being watched anyway.

The opinion, issued Tuesday, leaned on the idea that the public has already surrendered its privacy to Silicon Valley.

We obtained a copy of the ruling for you here.

“It is common knowledge that websites, internet-based applications, and internet service providers collect, and then sell, user data,” the court said, as if mass exploitation of personal information had become a civic tradition.

Because that practice is so widely known, the court concluded, users cannot reasonably expect privacy. In other words, if corporations do it first, the government gets a free pass.

The case traces back to a rape and home invasion investigation that had gone cold. In a final effort, police asked Google to identify anyone who searched for the victim’s address the week before the crime. Google obliged. The search came from an IP address linked to John Edward Kurtz, later convicted in the case.

It’s hard to argue with the result, and no one’s defending a rapist, but the method drew a line through an already fading concept: digital privacy.

Investigators didn’t start with a suspect; they started with everyone. That’s the quiet power of a “reverse keyword search,” a dragnet that scoops up the thoughts of every user who happens to type a particular phrase.
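The mechanics can be illustrated with a minimal sketch. A reverse keyword search inverts the usual investigative query: instead of starting from a suspect and pulling that person’s history, it starts from a phrase and a time window and returns every user whose query matches. (The log format, names, and data below are entirely hypothetical, for illustration only.)

```python
from datetime import datetime

# Hypothetical query log: (timestamp, ip_address, query_text)
query_log = [
    (datetime(2024, 5, 1, 9, 30), "203.0.113.7", "weather today"),
    (datetime(2024, 5, 2, 22, 15), "198.51.100.4", "123 elm street"),
    (datetime(2024, 5, 3, 8, 5), "203.0.113.9", "123 elm street"),
]

def reverse_keyword_search(log, phrase, start, end):
    """Return the IP of every user who searched `phrase` in the window.

    Note the dragnet property: the input is the entire log, not a
    suspect list, so everyone who typed the phrase is swept in.
    """
    return [
        ip for (ts, ip, query) in log
        if start <= ts <= end and phrase in query.lower()
    ]

hits = reverse_keyword_search(
    query_log, "123 elm street",
    datetime(2024, 5, 1), datetime(2024, 5, 7),
)
print(hits)  # every matching IP, regardless of suspicion
```

The output contains anyone who happened to type the phrase in that week, which is exactly why critics describe the technique as a dragnet rather than a targeted search.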

The justices pointed to Google’s own privacy policy as a kind of consent form. “In the case before us, Google went beyond subtle indicators,” they wrote. “Google expressly informed its users that one should not expect any privacy when using its services.”

Keep reading

Tor Project received $2.5M from the US government to bolster privacy

The US government contributed over $2.5 million to the Tor Project in its 2023–2024 fiscal year, marking a continued but reduced financial relationship with the privacy-focused nonprofit.

The funds represent 35% of Tor’s $7.28 million in reported revenue, according to newly released financial disclosures.

The funding, primarily sourced through the US State Department’s Bureau of Democracy, Human Rights, and Labor (DRL), supports multiple high-impact projects aimed at strengthening internet freedom, especially in regions experiencing heavy censorship. The largest single contributor was DRL, providing $2.12 million. These funds were allocated across several major initiatives, including expanding Tor access in China, Hong Kong, and Tibet; developing a Tor-based VPN client for Android; combating malicious Tor relays; and migrating core network infrastructure to a more secure Rust-based implementation (Arti).

Keep reading

How new social media checks would change travel to US

The US is seeking to significantly expand its vetting of social media accounts for people who want to enter the country.

In 2019, during President Donald Trump’s first term, the US imposed a requirement that visa applicants disclose their social media accounts. The Department of Homeland Security (DHS) now aims to apply a similar requirement to another group: travellers from countries such as the UK, Japan and Australia whose citizens can enter the US without a visa.

The Trump administration argues that the rule change is necessary to ensure travellers entering the country “do not bear hostile attitudes” to the US and its citizens. Civil-liberties groups warn that the approach marks a sweeping expansion of federal surveillance over routine travel. Here’s what to know.

What exactly is the US proposing?

The US is proposing that foreign visitors from countries whose citizens can travel to the US without a visa, but must still apply online for advance authorisation, provide their social media history from the last five years. 

DHS did not respond to a query about what information applicants from visa-waiver countries would need to supply for the social media screening. (Visa applicants are required to list all social media identifiers they have used in the past five years.)

Applicants would also be required to supply, when “feasible,” a broad set of additional personal information: telephone numbers used in the last five years; e-mail addresses used in the last ten years; IP addresses and metadata from electronically submitted photos; family members’ names, residences, places and dates of birth, and phone numbers used in the last five years; and personal biometrics – fingerprints, DNA samples, iris scans, and facial images. The proposal does not clarify how biometric information would be collected. 

Keep reading

Governments Are Pushing Digital IDs. Are You Ready To Be Tracked?

Politicians push government IDs.

In a TSA announcement, Secretary of Homeland Security Kristi Noem sternly warns, “You will need a REAL ID to travel by air or visit federal buildings.”

European politicians go much further, reports Stossel TV producer Kristin Tokarev.

They’re pushing government-mandated digital IDs that tie your identity to nearly everything you do.

Spain’s prime minister promises “an end to anonymity” on social media!

Britain’s prime minister warns, “You will not be able to work in the United Kingdom if you do not have digital ID.”

Queen Maxima of the Netherlands enthusiastically told the World Economic Forum that digital IDs are good for knowing “who actually got a vaccination or not.”

Many American tech leaders also like digital IDs.

The second richest man in the world, Oracle founder Larry Ellison, says, “Citizens will be on their best behavior because we’re constantly recording and reporting everything.”

That’s a good thing?

“That is a recipe for disaster and totalitarianism,” says privacy specialist Naomi Brockwell. “Privacy is not about hiding. It’s about an individual’s right to decide for themselves who gets access to their data. A digital ID will strip individuals of that choice.”

“I already have a government-issued ID,” says Tokarev. “Why is a digital one worse?”

“It connects everything,” says Brockwell. “Your financial decisions, social media posts, your likes, things that you’re watching, places you’re going. You won’t be able to voice things anonymously online anymore. Everything you say will be tied back to who you are.”

Digital ID backers say the new ID will make life easier.

“You can access your own money, make payments so much more easily,” says the U.K.’s prime minister.

Yes, says Brockwell, “until those services start saying, ‘No, you can’t use our system.'”

Even without a digital ID, Canada froze the bank accounts of truckers who protested COVID-19 vaccine mandates.

With a digital ID, politicians could do that much more easily.

Keep reading

Porn Sites Must Block VPNs To Comply With Indiana’s Age-Verification Law, State Suggests in New Lawsuit

Indiana Attorney General Todd Rokita is suing dozens of porn websites, claiming that they are in violation of the state’s age-verification law and seeking “injunctive relief, civil penalties, and recovery of costs incurred to investigate and maintain the action.”

Last year, Indiana Senate Bill 17 mandated that websites featuring “material harmful to minors” must verify that visitors are age 18 or above. Rather than start checking IDs, Aylo—the parent company of Pornhub and an array of other adult websites—responded by blocking access for Indiana residents.

Now, Indiana says this is not good enough. To successfully comply, Pornhub and other Aylo platforms (which include Brazzers, Youporn, and Redtube, among others) must also block virtual private networks and other tools that allow internet users to mask their IP addresses, the state suggests.

This is an insane—and frighteningly dystopian—interpretation of the law.

Broad Anti-Privacy Logic

In a section of the suit detailing how Aylo allegedly violated the age-check law, Indiana notes that last July, “an investigator employed by the Office of the Indiana Attorney General (‘OAG Investigator’) accessed Pornhub.com from Indiana using a Virtual Private Network (VPN) with a Chicago, Illinois IP address.”

“Defendants have not implemented any reasonable form of age verification on its website Pornhub.com,” the suit states. It goes on to detail how Indiana investigators also accessed Brazzers.com, Faketaxi.com, Spicevids.com, and other adult websites using a VPN.

Keep reading

Privacy For The Powerful, Surveillance For The Rest: EU’s Proposed Tech Regulation Goes Too Far

Last month, we lamented California’s Frontier AI Act of 2025. The Act favors compliance over risk management, while shielding bureaucrats and lawmakers from responsibility. Mostly, it imposes top-down regulatory norms, instead of letting civil society and industry experts experiment and develop ethical standards from the bottom up.

Perhaps we could dismiss the Act as just another example of California’s interventionist penchant. But some American politicians and regulators are already calling for the Act to be a “template for harmonizing federal and state oversight.” The other source for that template would be the European Union (EU), so it’s worth keeping an eye on the regulations spewed out of Brussels.

The EU is already way ahead of California in imposing troubling, top-down regulation. Indeed, the EU Artificial Intelligence Act of 2024 follows the EU’s overall precautionary principle. As the EU Parliament’s internal think tank explains, “the precautionary principle enables decision-makers to adopt precautionary measures when scientific evidence about an environmental or human health hazard is uncertain and the stakes are high.” The precautionary principle gives immense power to the EU when it comes to regulating in the face of uncertainty — rather than allowing for experimentation with the guardrails of fines and tort law (as in the US). It stifles ethical learning and innovation. Because of the precautionary principle and associated regulation, the EU economy suffers from greater market concentration, higher regulatory compliance costs, and diminished innovation — compared to an environment that allows for experimentation and sensible risk management. It is small wonder that only four of the world’s top 50 tech companies are European.

From Stifled Innovation to Stifled Privacy

Along with the precautionary principle, the second driving force behind EU regulation is the advancement of rights — but rights cherry-picked from the EU Charter of Fundamental Rights that often conflict with others. For example, the EU’s General Data Protection Regulation (GDPR) of 2016 was imposed to protect a fundamental right to personal data protection (this is technically separate from the right to privacy, and gives the EU much more power to intervene — but that is the stuff of academic journals). The GDPR ended up curtailing the right to economic freedom.

This time, fundamental rights are being deployed to justify the EU’s fight against child sexual abuse. We all love fundamental rights, and we all hate child abuse. But, over the years, fundamental rights have been deployed as a blunt and powerful weapon to expand the EU’s regulatory powers. The proposed Child Sex Abuse regulation (CSA) is no exception. What is exceptional is the extent of the intrusion: the EU is proposing to monitor communications among European citizens, lumping them all together as potential threats rather than treating their messages as protected speech that enjoys a prima facie right to privacy.

As of 26 November 2025, the EU bureaucratic machine has been negotiating the details of the CSA. In the latest draft, mandatory scanning of private communications has thankfully been removed, at least formally. But there is a catch. Providers of hosting and interpersonal communication services must identify, analyze, and assess how their services might be used for online child sexual abuse, and then take “all reasonable mitigation measures.” Faced with such an open-ended mandate and the threat of liability, many providers may conclude that the safest — and most legally prudent — way to show they have complied with the EU directive is to deploy large-scale scanning of private communications.

The draft CSA insists that mitigation measures should, where possible, be limited to specific parts of the service or specific groups of users. But the incentive structure points in one direction. Widespread monitoring may end up as the only viable option for regulatory compliance. What is presented as voluntary today risks becoming a de facto obligation tomorrow.

In the words of Peter Hummelgaard, the Danish Minister of Justice: “Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse. This is completely unacceptable.” No one disputes the gravity or turpitude of the problem. And yet, under this narrative, the telecommunications industry and European citizens are expected to absorb dangerous risk-mitigation measures that are likely to involve lost privacy for citizens and widespread monitoring powers for the state.

The cost, we are told, is nothing compared to the benefit.

After all, who wouldn’t want to fight child sexual abuse? It’s high time to take a deep breath. Child abusers should be punished severely. But that does not exempt a free society from respecting its other core values.

But, wait. There’s more…

Keep reading

House Lawmakers Unite in Moral Panic, Advancing 18 “Kids’ Online Safety” Bills That Expand Surveillance and Weaken Privacy

The House Energy & Commerce Subcommittee on Commerce, Manufacturing, and Trade spent its latest markup hearing on Thursday proving that if there’s one bipartisan passion left in Washington, it’s moral panic about the internet.

Eighteen separate bills on “kids’ online safety” were debated, amended, and then promptly advanced to the full committee. Not one was stopped.

Ranking Member Jan Schakowsky (D) set the tone early, describing the bills as “terribly inadequate” and announcing she was “furious.”

She complained that the package “leaves out the big issues that we are fighting for.” If it’s not clear, Schakowsky is complaining that the already-controversial bills don’t go far enough.

Eighteen bills now move forward, eight of which hinge on some form of age verification, which would likely require showing a government ID. Three would require it outright: App Store Accountability (H.R. 3149), the SCREEN Act (H.R. 1623), and the Parents Over Platforms Act (H.R. 6333).

The other five rely on what lawmakers call the “actual knowledge” or “willful disregard” standards, which sound like legalese but function as a dare to platforms: either know everyone’s age, or risk a lawsuit.

The safest corporate response, of course, would be to treat everyone as a child until they’ve shown ID.

Keep reading