5 WAYS TO PREPARE FOR THE ONLINE PRIVACY CRACKDOWN

The internet is about to change. In many countries, there’s currently a coordinated legislative push to effectively outlaw encryption of user-uploaded content under the guise of protecting children. This means websites and internet services (messaging apps, email, etc.) could be held criminally or civilly liable if someone used them to upload abusive material. If these bills become law, people like me who help provide private communication services could be penalized or even imprisoned simply for protecting the privacy of our users. In fact, anyone who runs a website with user-uploaded content could be punished the same way. In today’s article, I’ll show you why these bills not only fail at protecting children but also put the internet as we know it in jeopardy, and why we should question the organizations behind the push.

Let’s quickly recap some of the legislation.

Keep reading

Debunking the Myth of “Anonymous” Data

Today, almost everything about our lives is digitally recorded and stored somewhere. Each credit card purchase, personal medical diagnosis, and preference about music and books is recorded and then used to predict what we like and dislike, and—ultimately—who we are.

This often happens without our knowledge or consent. Personal information that corporations collect from our online behaviors sells for astonishing profits and incentivizes online actors to collect as much as possible. Every mouse click and screen swipe can be tracked and then sold to ad-tech companies and the data brokers that service them.

In an attempt to justify this pervasive surveillance ecosystem, corporations often claim to de-identify our data. This supposedly removes all personal information (such as a person’s name) from the data point (such as the fact that an unnamed person bought a particular medicine at a particular time and place). Personal data can also be aggregated, whereby data about multiple people is combined with the intention of removing personal identifying information and thereby protecting user privacy.

Sometimes companies say our personal data is “anonymized,” implying a one-way ratchet where it can never be dis-aggregated and re-identified. But that promise is hollow: anonymous data rarely stays that way. As Professor Matt Blaze, an expert in the field of cryptography and data privacy, succinctly summarized: “something that seems anonymous, more often than not, is not anonymous, even if it’s designed with the best intentions.”
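To make the re-identification risk concrete, here is a minimal, hypothetical sketch of the classic linkage attack: a “de-identified” dataset is joined against a public, named dataset on shared quasi-identifiers (ZIP code, birth date, sex). Every file name and column here is invented for illustration; it is not drawn from any real data broker.

```python
import csv

# Hypothetical illustration: the "de-identified" purchase log still carries
# quasi-identifiers, and a public record (e.g. a voter roll) carries the same
# fields alongside names. Matching on those fields re-identifies the rows.
QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def load_rows(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def reidentify(deidentified_rows, public_rows):
    """Match 'anonymous' rows to named public records on quasi-identifiers."""
    index = {}
    for person in public_rows:
        key = tuple(person[k] for k in QUASI_IDENTIFIERS)
        index.setdefault(key, []).append(person["name"])

    matches = []
    for row in deidentified_rows:
        key = tuple(row[k] for k in QUASI_IDENTIFIERS)
        names = index.get(key, [])
        if len(names) == 1:  # a unique match means the row is re-identified
            matches.append((names[0], row["purchase"]))
    return matches

if __name__ == "__main__":
    purchases = load_rows("deidentified_purchases.csv")  # no names inside
    voter_roll = load_rows("public_voter_roll.csv")      # names + quasi-identifiers
    for name, purchase in reidentify(purchases, voter_roll):
        print(f"{name} bought {purchase}")
```

The point of the sketch is that nothing exotic is required: a handful of ordinary fields, unique in combination, is enough to undo “anonymization.”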

Keep reading

Odd Colorado Ruling Upholds Internet Keyword Search Warrant

What would your internet searches reveal about you if others could scrutinize and second-guess them? It’s something to think about, given that the big search engines, like Google, store search histories and make them available to the authorities. In fact, as a recently decided Colorado case shows, police can start from search terms of interest and pressure tech companies to surrender the identities of anyone who searched for specified keywords. The decision is chilling for anybody who has ever imagined their search history in the hands of a stranger, or who simply cares about privacy.

“Today, the Colorado Supreme Court became the first state supreme court in the country to address the constitutionality of a keyword warrant—a digital dragnet tool that allows law enforcement to identify everyone who searched the internet for a specific term or phrase,” Jennifer Lynch and Andrew Crocker of the Electronic Frontier Foundation (EFF) reported on Monday. “The case is People v. Seymour, which involved a tragic home arson that killed several people. Police didn’t have a suspect, so they used a keyword warrant to ask Google for identifying information on anyone and everyone who searched for variations on the home’s street address in the two weeks prior to the arson.”

Keep reading

The EU Could Push its Private Message Ban as Early as Next Week

The EU is getting ever closer to pushing through the legislation known among critics as “chat control” (officially, the Child Sexual Abuse Regulation, or CSAR) and hopes to reach a deal on it within the bloc as early as next week.

Patrick Breyer, a German member of the European Parliament (MEP) and a lawyer who has consistently opposed the controversial upcoming rules, has reacted by warning once again that, minor changes notwithstanding, the bill, if passed, would effectively spell the end of proper encryption and private messaging in the EU.

The implication, instead, is that CSAR would usher in an era of indiscriminate mass surveillance in this part of the digital space.

Warning that a recent “minor concession” the EU member states managed to agree on was a bid to finally assemble a majority and push the plans over the top, Breyer, who refers to the proposal as “chat control 2.0,” calls it an “unprecedented” (at least for the EU) example of mass surveillance.

In summary, the regulation would require online services that provide messaging and chat to implement automatic scanning of all private text and images, looking for potentially abusive content, and to report matches to the EU.

There is no shortage of controversy and misgivings here, and two stand out clearly: once the infrastructure is in place, what could it be used for next (should politicians so decide), and how are online platforms even supposed to make it work accurately and fairly, technically speaking?

Now, we are hearing that the EU Council is looking to “soften the blow,” at least rhetorically, by saying that the scanning would at first apply only to “previously classified CSAM (child sexual abuse material),” before later expanding to everything else.
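To see what matching “previously classified” material would involve in the simplest case, here is a minimal, hypothetical sketch that compares each uploaded file’s hash against a list of known hashes. The list, function names, and reporting hook are all invented for illustration; real deployments rely on perceptual rather than exact cryptographic hashes so that resized or re-encoded copies still match, which is exactly where the accuracy and fairness questions above come in.

```python
import hashlib

# Placeholder list; under the proposals a real list would be supplied by a
# clearinghouse of previously classified material.
KNOWN_HASHES = {
    "0" * 64,  # placeholder entry, not a real hash of anything
}

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def matches_known_material(data: bytes) -> bool:
    """True only if this exact file already appears on the known-hash list."""
    return sha256_hex(data) in KNOWN_HASHES

def scan_outgoing(attachments: list[bytes]) -> list[bytes]:
    """Return the attachments that would have to be reported under the proposal."""
    return [a for a in attachments if matches_known_material(a)]
```

Exact hashing like this misses trivially altered copies; switching to perceptual hashing catches them but introduces false positives, and extending the mandate to “unknown” material or text pushes platforms toward error-prone classifiers.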

Keep reading

Government Watchdog Calls Out Dangers in Section 702 Surveillance

Ten years after Edward Snowden sparked a debate over domestic (and international) spying by the U.S. government and its allies, arguments continue and so does the snooping. This year, one key component of the surveillance state—Section 702 of the Foreign Intelligence Surveillance Act—is up for congressional reauthorization. Now, the executive branch’s own civil liberties watchdog says that, while Section 702 plays an important role, it’s also dangerous to our freedom and needs reform.

To hear America’s professional spooks tell it, Section 702 is made up of equal servings of mom, apple pie, and a trench coat.

“In 2008, Congress enacted Section 702 of the Foreign Intelligence Surveillance Act (FISA), a critical intelligence collection authority that enables the Intelligence Community (IC) to collect, analyze, and appropriately share foreign intelligence information about national security threats,” insists the Office of the Director of National Intelligence. “Section 702 only permits the targeting of non-United States persons who are reasonably believed to be located outside the United States. United States persons and anyone in the United States may not be targeted under Section 702.”

The Privacy and Civil Liberties Oversight Board (PCLOB), established in 2007 in an effort to limit the excesses of the burgeoning post-9/11 domestic intelligence apparatus, sees things a little differently.

Keep reading

Texas Anti-Abortion Crusader Demands Abortion Patient Information In Court

THE NOTORIOUS FAR-RIGHT attorney who helped craft Texas’s bounty-hunter abortion ban, Senate Bill 8, is now attempting to force abortion funds to hand over reams of information on every abortion the organizations have supported since 2021. This includes the city and state where each patient lived, the names of the abortion providers, and the identities of nearly every person who helped the patients access abortion care.

Earlier this month, Jonathan Mitchell — himself not a Texan but based in Washington state — served requests to nine Texas abortion funds and one Texas doctor. The brazen attempt to acquire sensitive information about abortion patients and the funds that assist them is a disturbing turn in the ongoing legal battle over Texas’s six-week abortion ban.

In August of last year, a coalition of abortion funds and doctors filed a class action lawsuit against Texas Attorney General Ken Paxton and other state officials. The suit, Fund Texas Choice v. Paxton, challenges Senate Bill 8, or S.B. 8, and its devious method of civil enforcement, which was designed to evade federal court scrutiny. In response, Mitchell, on behalf of the Texas government, is using the legal discovery process to harass those defending reproductive freedoms.

Keep reading

NEW GROUP ATTACKING IPHONE ENCRYPTION BACKED BY U.S. POLITICAL DARK-MONEY NETWORK

THE HEAT INITIATIVE, a nonprofit child safety advocacy group, was formed earlier this year to campaign against some of the strong privacy protections Apple provides customers. The group says these protections help enable child exploitation, objecting to the fact that pedophiles can encrypt their personal data just like everyone else.

When Apple launched its new iPhone this September, the Heat Initiative seized on the occasion, taking out a full-page New York Times ad, using digital billboard trucks, and even hiring a plane to fly over Apple headquarters with a banner message. The message on the banner appeared simple: “Dear Apple, Detect Child Sexual Abuse in iCloud” — Apple’s cloud storage system, which today employs a range of powerful encryption technologies aimed at preventing hackers, spies, and Tim Cook from knowing anything about your private files.

Something the Heat Initiative has not placed on giant airborne banners is who’s behind it: a controversial billionaire philanthropy network whose influence and tactics have drawn unfavorable comparisons to the right-wing Koch network. Though it does not publicize this fact, the Heat Initiative is a project of the Hopewell Fund, an organization that helps privately and often secretly direct the largesse, and the political will, of billionaires. Hopewell is part of a giant, tightly connected web of largely anonymous, Democratic Party-aligned dark-money groups that, in an ironic turn, is campaigning to undermine the privacy of ordinary people.

Keep reading

The State against Anonymity

In the last century, states exercised great control over the channels of media. In most of the West, lobbying groups and cartels working with “liberal” and “democratic” governments regulated who could broadcast, while governments, with their endless pools of money and political force, competed alongside private or foreign establishments. South Africa banned television entirely, and even after legalizing it in the ’70s, the industry remained under state control.

All media in the Soviet Union was centralized and controlled by the state immediately after the October Revolution—the Bolshevik leaders understood the importance of media control. Every state in the last century has had some grip over the country’s media, propagating favorable narratives and restricting the unfavorable to maintain control over the population.

Traditional media centralization by the state was then rendered obsolete with the popularization of the Internet. As the Internet and its related technology developed, decentralization became more pronounced and widespread. When anyone can start a podcast on a plethora of websites with anyone else in the world who has the technology, or when miniature documentaries and video essays can be produced and uploaded by anyone to anywhere that accepts the format, the state-operated or state-supported media that dominated the last century becomes effectively out of date. The new competition was too dynamic, adaptive, decentralized, and evasive for the old system to outcompete, outproduce, or outright ban.

Traditional media wasn’t the only thing affected by the Internet. Chat boards, forums, and other means of direct communication undermined multiple key legitimizers of the state, specifically academics and journalists. Barring local rules and guidelines, anyone was free to question and discuss any aspect of academia, usually under the freedom afforded by anonymity.

Keep reading

Department of Defense Signs Contract With Social Media Monitoring Company

Fresh revelations regarding a $2.5 million contractual agreement between the Defense Information Systems Agency (DISA) at Fort George G. Meade and social media monitoring company Dataminr have emerged. The deal, revealed in a US government notice, suggests that a new era of digital monitoring is on the horizon, one that is unsettling in its reinforcement of sweeping surveillance and that carries implications for free speech and privacy protection.

Fort Meade, best known as the home of the US government’s paramount signals intelligence organization, the National Security Agency, has seemingly struck a discreet deal to expand its surveillance reach.

DISA, which is headquartered at Fort Meade, will now reportedly have broad access to public posts from assorted social media platforms, including X, formerly Twitter.

Dataminr is a company specializing in AI-driven real-time information discovery and is known for detecting, classifying, and determining the significance of public information in real time. It’s plausible that government entities, including the Department of Defense, may leverage services like Dataminr to monitor social media and other public data sources to maintain situational awareness and respond to emerging events or threats more rapidly.

Privacy buffs and free speech advocates view governmental use of tools like Dataminr with a hefty dose of suspicion, and rightfully so. The potential implications for personal freedom, civil rights, and the pillars of democracy are considerable. The looming worry is that a government with too loose a leash could exploit these tools to spy on lawful activities and on people going about their everyday lives with no criminal intent.

Keep reading

The UK Passes Massive Online Safety Bill

The UK’s Online Safety Bill is ready to become law. The bill, which aims to make the UK “the safest place in the world to be online,” passed through the Houses of Parliament on Tuesday and imposes strict requirements on large social platforms to remove illegal content. It will be enforced by UK telecom regulatory agency Ofcom.

Additionally, the Online Safety Bill mandates new age-checking measures to prevent underage children from seeing harmful content. It also pushes large social media platforms to become more transparent about the dangers they pose to children, while also giving parents and kids the ability to report issues online. Potential penalties are also harsh: up to 10 percent of a company’s global annual revenue. The bill has been reworked several times in a multiyear journey through Parliament.

But not only does online age verification raise serious privacy concerns — the bill could also put encrypted messaging services, like WhatsApp, at risk. Under the terms of the bill, encrypted messaging apps would be obligated to check users’ messages for child sexual abuse material.

Depending on how the rule is enforced, this could essentially break apps’ end-to-end encryption promise, which prevents third parties — including the app itself — from viewing users’ messages. In March, WhatsApp refused to comply with the bill and threatened to leave the UK rather than change its encryption policies. It joined Signal and other encrypted messaging services in protesting the bill, leading UK regulators to attempt to assuage their concerns by promising to only require “technically feasible” measures.
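To illustrate why scanning mandates and end-to-end encryption sit uneasily together, here is a minimal, hypothetical sketch: whatever detector the rules require has to see the plaintext, so it runs on the sender’s device before the message is encrypted. Every name and function below is a placeholder for illustration, not WhatsApp’s, Signal’s, or anyone else’s actual code.

```python
def matches_mandated_scan(plaintext: bytes) -> bool:
    """Stand-in for whatever detector the rules would require."""
    return False  # placeholder logic

def encrypt_end_to_end(plaintext: bytes) -> bytes:
    """Stand-in for a real end-to-end encryption scheme (not real cryptography)."""
    return bytes(b ^ 0x5A for b in plaintext)  # toy transformation only

def send_message(plaintext: bytes, report) -> bytes:
    # The mandated check has to run while the message is still readable ...
    if matches_mandated_scan(plaintext):
        report(plaintext)  # ... so content can leave the end-to-end boundary here
    # ... and only afterwards is the message encrypted for the recipient.
    return encrypt_end_to_end(plaintext)
```

However the code is arranged, the guarantee that “no third party can read your messages” now holds only after a scanner has already inspected them, which is the objection the encrypted messaging services have raised.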

Keep reading