Watching Porn on California’s Death Row

Under Governor Gavin Newsom, California has sought to transform its massive prison system into a Nordic-style rehabilitation program. Newsom has placed a moratorium on all executions, transferred condemned prisoners to facilities across the state, dismantled San Quentin State Prison’s death row, and turned the notorious prison into a therapeutic center, with art classrooms, a café, and podcast studios.

As part of this transformation, the Newsom administration approved a $189 million contract to provide new digital tablets—generic, flat-screen devices in a plastic shell—to every inmate in the state prison system, at “no cost” to offenders. The administration heralded the effort to replace inmates’ old tablets—which were piloted in 2018 and given to nearly all prisoners by 2023—as a step toward “digital equity” for “justice impacted” individuals, who could, in theory, use the devices to contact their families, consume “educational” content, and “learn new technology.”

In reality, taxpayer-funded tablets have also been used for more lurid endeavors. In this exclusive City Journal investigation, we contacted dozens of death-row inmates, who told us that prisoners in the state system use such devices to watch pornography and have explicit sexual conversations. Some prisoners, according to a former high-ranking California corrections official, use their tablets to groom minors. Though the state has claimed to regulate explicit content, the inmates told us that users can easily evade detection.

When reached for comment, the California Department of Corrections and Rehabilitation said the tablets were “tightly controlled education tools” that provided inmates with “access to the Bible, education, and reentry resources that actually reduce crime.”

But inmates told us a different story. For some, the devices have become personal sex machines. In the words of one inmate, California’s death row is populated with desperately “horny” criminals who see the tablets as a way to satisfy their basest fantasies and desires—all thanks to the California taxpayer.

Keep reading

When one man, a civilian, controls the kill switch for military ops

In September 2022, Ukrainian forces prepared to launch a drone strike on the Russian naval fleet anchored off Crimea. The drones never arrived.

Elon Musk had decided, unilaterally, not to activate Starlink coverage over the region. But he wasn’t simply declining to help. SpaceX had already been managing battlefield access for both sides: restricting Russian use, imposing speed limits to prevent drone integration, and maintaining a verified whitelist with Ukraine’s Ministry of Defense. One private citizen, with no security clearance and no accountability to any electorate, was governing the battlefield connectivity of an active war.

The public debate treats this as a story about Elon Musk — his politics, his proximity to the White House, his X posts. That framing lets the actual problem off the hook. Replace Musk with the most patriotic, internationalist, apolitical CEO imaginable and the structural problem remains identical. The Pentagon has spent a decade building critical military functions on infrastructure it can’t legally compel, and the consequences are now arriving in real time.

A common reflex is to argue that private defense contractors have always been central to American military power. Lockheed Martin builds the F-35; Raytheon builds the Patriot. What’s different now is the control plane: who has real-time administrative control during use. When the government buys a tank, it owns it. The keys don’t expire. The manufacturer can’t disable it mid-mission or impose terms in combat. Software and AI are different. Vendors keep ongoing control — updates, access, and usage limits. They don’t sell a capability; they license access to one, and the license has conditions.

Those conditions have already collided with active operations. After months of failed negotiations, the Pentagon formally designated the AI firm Anthropic a supply-chain risk because of restrictions on how its model could be used. The Pentagon was explicit in its decision: “The military will not allow a vendor to insert itself into the chain of command.” Emil Michael, the Under Secretary of Defense for Research and Engineering, described the moment he fully grasped the vulnerability: Anthropic’s models were already embedded across combatant commands and intelligence agencies, wired into classified workflows. Anthropic retained the control plane inside the Pentagon’s cloud — able to update, restrict, or shut off access. When Michael raised hypothetical crisis scenarios, Dario Amodei, CEO of Anthropic, offered exceptions case by case. “Just call me if you need another exception,” Michael recalls him saying. In a genuine crisis, a commander can’t call a vendor to authorize military action, nor should he have to.

This isn’t about whether Anthropic’s rules are reasonable. They weren’t set by anyone accountable to the joint force, there’s no override mechanism, and the Pentagon had made itself dependent on systems it doesn’t control.

Keep reading

Europe Wants To Ban VPN Privacy

The European Union is now openly discussing restricting VPN access as part of its expanding online age-verification system, which demonstrates precisely where the entire digital agenda has been heading from the beginning. They always introduce these systems under emotionally untouchable justifications such as child safety or combating terrorism, but once the infrastructure is in place, the scope inevitably expands.

According to a new European Parliament briefing, officials are concerned that users are bypassing online age-verification requirements via VPNs, and the report notes a surge in VPN usage in countries implementing stricter digital controls. The proposal being discussed is to potentially restrict VPN access itself to those above a so-called “digital age of majority.” In other words, they are now targeting the very tools people use to protect their privacy online.

For readers who may not use these services personally, a VPN simply encrypts your internet traffic and masks your location, preventing internet providers, corporations, and governments from monitoring everything you do online. Businesses use them constantly, financial institutions rely on them, journalists use them, and ordinary people use them simply to avoid being tracked across the internet.

The problem from the government’s perspective is that VPNs interfere with surveillance. Europe’s Digital Services Act has already pushed platforms toward mandatory age-verification systems that increasingly require identification documents, facial scans, or biometric verification simply to access online content. Once users began using VPNs to avoid those systems, regulators immediately shifted toward framing the VPN itself as the threat. This is how these systems always evolve, because the objective is never merely regulation, it is compliance and visibility.

What they are building is effectively a digital identity system where access to information requires permission. People fail to understand how dangerous this becomes once connected to the broader European agenda involving CBDCs, centralized digital IDs, online speech regulation, and financial monitoring. These are not isolated policies appearing randomly at the same time. They are interconnected components of a single structural transition toward centralized digital control.

First they regulate speech under the justification of misinformation. Then they regulate platforms under the justification of safety. Then they require identity verification under the justification of protecting children. Finally they target anonymity itself by restricting the tools people use to avoid surveillance.

This fits perfectly within the broader cycle unfolding in Europe, where declining economic confidence and political instability lead governments toward greater centralization and control. Historically, governments facing crisis do not voluntarily reduce authority, they expand surveillance, tighten restrictions, and attempt to maintain control over information and capital flows.

Keep reading

Radical Leftist Adored by Democrats Pushes ‘Violent Revolution’ Following Court Decision Against Virginia Gerrymander

People on the left are absolutely frothing at the mouth over the decision by the Virginia Supreme Court that tore down their effort to gerrymander the state.

Hasan Piker, an America-hating, self-proclaimed communist and streamer, who Democrats just can’t seem to get enough of, has already suggested violent revolution as a fix for the problem.

It’s just amazing how political violence has become so normal for people on the left. They just want their way and will do anything to get it.

FOX News reports:

Leftist streamer calls violent revolution ‘inevitable’ as Democrats explode over Virginia court decision

Democrats exploded in fury Friday after the Virginia Supreme Court struck down a party-backed redistricting map central to their midterm election strategy, with at least one prominent leftist voice going so far as to call violent revolution “inevitable.”

In a 4-3 decision, the Virginia Supreme Court ruled that a voter-approved map, which would give Democrats a 10-1 advantage in U.S. House races, violated the state’s constitution because of procedural errors in the map’s passage. Virginia voters will cast ballots in the 2026 midterms using the same district maps from the 2022 and 2024 elections, under which Democrats currently hold a 6-5 edge.

But Democratic lawmakers and commentators alike have framed the Supreme Court’s ruling as an act going against the will of the people. Hasan Piker, a popular leftist streamer who has espoused antisemitic rhetoric and campaigns with congressional candidates, accused the Virginia Supreme Court of denying the results of the state’s redistricting referendum.

“Scotus gutted the voting rights act and tennessee carved up the last dem district destroying black voter power in the state,” Piker wrote on X. “Those who make peaceful revolution impossible, make violent revolution inevitable.”

Keep reading

European Commission Official Touts 17 Investigations as Proof the Digital Services Act “Delivers”

The European Union’s Digital Services Act is a censorship and surveillance law dressed in the language of safety. It gives unelected officials in Brussels the power to decide what hundreds of millions of people are allowed to say online and it is building the infrastructure to verify their identities before they’re permitted to say it.

But at POLITICO’s AI & Tech Week summit in Brussels this month, Renate Nikolay, the European Commission’s Deputy Director-General at DG CONNECT, celebrated the law’s growing enforcement record. Seventeen ongoing investigations and one non-compliance decision, she told the audience, prove the DSA “delivers.”

What the DSA delivers is pressure. Pressure on platforms to censor more speech, faster, with fewer questions asked. Pressure to open their algorithms and internal systems to government inspection without a court order. And, increasingly, pressure on individual users to prove who they are before they’re allowed to participate in public discourse online.

Nikolay presented these enforcement numbers as proof of success. They are proof of something but not what she thinks.

Keep reading

EU Going To War With VPNs In Bid To “Save The Children”

Western European governments and EU bureaucrats are advancing tighter regulations on VPNs as part of a broader push for “online age verification” and their ‘Chat Control’ agenda. Privacy advocates and digital rights groups warn that Europe is drifting towards a surveillance and censorship regime similar to internet restrictions and firewalls used by Russia and China.

Last week European Commission Executive Vice-President Henna Virkkunen suggested that Brussels may need to address the use of VPNs to bypass the EU’s upcoming age-verification systems. Speaking during a press conference on the EU’s new digital age-verification app, Virkkunen acknowledged that users could circumvent the system with VPNs and stated that preventing such circumvention would be among the ‘next steps’ policymakers need to examine.

Her statements were delivered only two weeks after she shared a stage with EU Commission President Ursula von der Leyen, who called for a crackdown on web media companies to “protect children” from dangerous content. The first stage of their agenda is a government-created universal age verification app which web companies will be required to integrate. Von der Leyen asserts that the new restrictions are designed to “defend children’s rights” (how does restricting access protect rights?).

The Orwellian language of the EU is not coincidental. “Child vulnerability” is a carefully chosen vehicle to manipulate public approval, opening the door to incremental government management of online content and discourse.

Keep reading

EU Targets VPNs in EU Age Verification Push

Brussels has a problem with people trying to stay anonymous online and now it’s eyeing the tools they use to do it.

Henna Virkkunen, the European Commission’s Executive Vice-President for Tech Sovereignty, Security, and Democracy, told reporters that VPNs sit on the agenda as the EU pushes its age verification app toward member states.

Asked how Brussels intends to stop children from routing around age checks with a VPN, she said “it’s also an important part of next steps also to look at that it shouldn’t be circumvented.”

VPNs are more than a tool for teenagers trying to access Instagram. They are how journalists protect sources, how dissidents talk to family, how ordinary people stop their internet provider from logging every site they visit. Treating circumvention as a problem to be solved at the network level means treating privacy tools as the obstacle, rather than the proportionate response to a system that demands ID for ordinary online activity.

The VPN comment surfaced at a press conference about the Commission’s broader regulatory squeeze.

Brussels provisionally found that Meta likely violated the Digital Services Act by failing to keep under-13s off Facebook and Instagram, accusing the company of “failing to diligently identify, assess and mitigate the risks of minors under 13 years old accessing their services.”

By the Commission’s own count, roughly 12% of European children below the age limit log into the platforms anyway.

Virkkunen framed the finding as enforcement of existing rules rather than a new mandate. “The DSA requires platforms to enforce their own rules: terms and conditions should not be mere written statements, but rather the basis for concrete action to protect users, including children,” she said.

A Commission spokesperson echoed the line, telling ISMG that the DSA “does not mandate specific mitigation measures,” and pointing to alternatives like better internal review processes.

The denial sits awkwardly next to everything else Brussels is doing. The Commission published guidelines last July recommending age verification. It is now pressing member states to “accelerate the adoption of age verification tools.”

Keep reading

Predators are reaching children through apps, social media and now, video games

Children are being sexually exploited by predators lurking on the internet, social media apps, and now gaming platforms. It’s happening in small towns and big cities across the country, including in Ohio.

“The monsters that we are chasing are now coming into your home with some device,” said Kirtland Police Chief Jamey Fisher.

Kirtland Police Detective Jake Scott is on a mission to stop it.

“I will pursue these relentlessly,” said Scott.

Two years ago, Kirtland Police signed an agreement with the Ohio Internet Crimes Against Children Task Force to investigate these types of crimes in their community. Case referrals began landing on Scott’s desk.

“Anywhere on the internet where there are children, there’s going to be adults who have a proclivity for sexual abuse of children trying to speak with those kids, groom those kids and foster relationships,” Scott said.

His first case involved 45-year-old Todd Oravecz, a Kirtland man who was arrested, indicted, and pleaded guilty to several charges, including receipt, distribution and transportation of visual depictions of minors engaged in sexually explicit conduct. Agents found more than 100 child sexual abuse material images and videos on his electronic devices that included children under 12 years old.

Keep reading

Utah first state to hold websites liable for users who mask their location with VPNs — law goes into effect, designed to prevent bypassing age checks

Utah’s Online Age Verification Amendments, formally Senate Bill 73, take effect on May 6, making the state the first in the U.S. to explicitly target VPN use as part of age verification legislation.

Signed by Governor Spencer Cox on March 19, the controversial law establishes that a user is considered to be accessing a website from Utah if they are physically located there, regardless of whether they use a VPN or proxy to mask their IP address. It also prohibits covered websites from sharing instructions on how to use a VPN to bypass age checks.

NordVPN has called the law an “unresolvable compliance paradox” and a “liability trap,” arguing that it holds websites responsible for identifying users whose tools are specifically designed to be unidentifiable. The EFF warned that the legal risk could push sites to either ban all known VPN IPs or mandate age verification for every visitor globally.

The law is also technically flawed, given that it assumes that a web provider can reliably detect VPN traffic and determine a user’s true physical location — they can’t. IP reputation databases such as MaxMind and IP2Proxy can flag traffic from known datacenter IP ranges, but commercial VPN providers rotate addresses constantly, and residential VPN endpoints are largely indistinguishable from standard home connections. Autonomous System Number analysis can catch traffic originating from datacenter networks, but can’t identify a personal WireGuard tunnel running on a cloud VPS, for example, which routes through the same infrastructure as ordinary web hosting.
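The detection gap described above can be illustrated with a short sketch. The CIDR blocks below are hypothetical placeholders, not a real reputation feed; an actual site would pull constantly updated ranges from a service like MaxMind or IP2Proxy, and residential VPN endpoints would still slip through:

```python
import ipaddress

# Hypothetical datacenter CIDR blocks standing in for a real IP-reputation
# feed. Commercial services maintain far larger lists that must be updated
# constantly as VPN providers rotate addresses.
DATACENTER_RANGES = [
    ipaddress.ip_network("3.0.0.0/9"),
    ipaddress.ip_network("104.16.0.0/13"),
]

def looks_like_datacenter(ip: str) -> bool:
    """Flag addresses inside known hosting ranges. Residential VPN exits
    and personal tunnels on unlisted hosts pass this check untouched."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in DATACENTER_RANGES)

print(looks_like_datacenter("3.15.20.1"))   # inside a listed hosting block
print(looks_like_datacenter("73.12.44.9"))  # typical residential address
```

The check catches only what the list already knows about, which is exactly the problem: it says nothing about where the user physically is, and an endpoint on an unlisted residential connection looks like any other home user.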

The only detection method that reliably identifies VPN protocol signatures is deep packet inspection, which analyzes traffic at the network level, not system- or app-level. China’s Great Firewall and Russia’s TSPU system deploy DPI via ISPs, but a website operator can’t because it requires access to network infrastructure that sits between the user and the server, not on the server itself.
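For readers curious what a DPI signature match actually involves, here is a minimal sketch. WireGuard handshake-initiation messages are 148 bytes and begin with a type byte of 0x01 followed by three reserved zero bytes, the kind of fingerprint an on-path middlebox can match but a destination web server never sees:

```python
def is_wireguard_handshake(packet: bytes) -> bool:
    # A WireGuard handshake-initiation message is 148 bytes long and starts
    # with type byte 0x01 plus three reserved zero bytes. An on-path DPI box
    # inspecting raw packets can match this signature; the website at the
    # other end of the tunnel only ever receives ordinary HTTPS traffic.
    return len(packet) == 148 and packet[:4] == b"\x01\x00\x00\x00"

print(is_wireguard_handshake(b"\x01\x00\x00\x00" + b"\x00" * 144))  # handshake-shaped
print(is_wireguard_handshake(b"\x17\x03\x03\x00\x10"))              # TLS record header
```

This is why the Utah law asks the impossible of website operators: the signature exists only in transit, on infrastructure they don’t control.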

Meanwhile, setting up a personal WireGuard instance on any major cloud provider takes minutes, meaning the law is more likely to harm non-technical users who rely on commercial VPN services for legitimate privacy: journalists, people living under authoritarian regimes, political dissidents, and abuse survivors, among others.

Keep reading

Senate Panel Backs GUARD Act, AI Age Verification Bill

The Senate Judiciary Committee voted 22-0 on Thursday to advance the GUARD Act, a bill that would require AI chatbot companies to verify the age of every American who wants to use them.

The legislation, sponsored by Senator Josh Hawley of Missouri, sailed through committee, with its author celebrating the outcome in a tweet.

“My bill to stop AI from telling kids to kill themselves just passed out of committee UNANIMOUSLY,” Hawley wrote on X. “No amount of profit justifies the DESTRUCTION of our children. Time to bring this bill to the Senate floor.”

As usual, the framing is about children but the result is age verification/digital ID for everyone.

Under the bill’s text, a “reasonable age verification measure” cannot mean a checkbox or a self-entered birth date. It cannot rely on whether a user shares an IP address or hardware identifier with someone already verified as an adult.

We obtained a copy of the bill for you here.

What it can mean, the legislation makes clear, is a government ID upload, a facial scan, or a financial record tied to your legal name. Every user of every covered chatbot would need to hand one of those over before being allowed in.

The bill defines an “artificial intelligence chatbot” as any service that “produces new expressive content or responses not fully predetermined by the developer or operator” and “accepts open-ended natural-language or multimodal user input.”

That language reaches well beyond the companion apps the press conference focused on. It covers service bots, search assistants powered by AI, homework helpers, and the general-purpose tools millions of adults already use without proving who they are.

Hawley described the legislation as a “targeted, tailored effort,” telling the committee, “We’re often told that this new dawning age of artificial intelligence is going to be a great age that will strengthen families and workers. I would just say that’s a choice, not an inevitability.”

Keep reading