3 million+ innocent private chats could be handed over to investigators under new EU plans

A leaked document from the European Commission (EC), the executive branch of the European Union (EU), has revealed that the artificial intelligence (AI) tool the EC plans to use for mass surveillance of private chats is expected to falsely flag content as “grooming” and forward it to EU investigators 10% of the time.

This proposed mass surveillance of online chats has been dubbed “Chat Control” and is being pushed by the EC as a way to combat child sexual abuse material (CSAM). However, in a leaked document obtained and published by Netzpolitik, the EC admitted that its proposed surveillance measures would result in a large number of false flags.

“The accuracy of grooming detection technology is around 90%,” the EC admitted in the document. “This means that 9 out of 10 contents recognized by the system are grooming.”

The leaked document contains the EC’s answers to a series of questions from the German government about the implementation of Chat Control.

Under the current Chat Control plans, private chats, messages, and emails will be automatically scanned by AI for suspicious content. If the AI detects suspicious content, it will be flagged and sent to investigators at a planned EU center. These investigators will view the content, identify false positives, and forward illegal content to EU law enforcement agency Europol and other relevant law enforcement authorities.
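The EC’s own accuracy figure can be put in perspective with a back-of-the-envelope calculation. Note that the flagged-chat volume below is purely an illustrative assumption (chosen to match the headline’s “3 million+” figure), not a number from the leaked document:

```python
# Back-of-the-envelope: how a 10% false-flag rate scales with volume.
# ASSUMPTION: flagged_chats = 30 million is illustrative only, not from the EC document.
flagged_chats = 30_000_000   # hypothetical number of chats the AI flags
accuracy = 0.90              # the EC's stated accuracy for grooming detection

false_positives = round(flagged_chats * (1 - accuracy))
print(false_positives)       # 3,000,000 innocent chats forwarded to investigators
```

At that scale, even a system that is “90% accurate” by the EC’s definition would put millions of entirely innocent conversations in front of human investigators.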

Keep reading

California bill 2273 would require websites and apps to verify visitors’ ID

California’s bill CA AB 2273, designed to enact the Age-Appropriate Design Code (AADC), is just one of several bills raising concerns about how they might negatively affect the web going forward.

Like their counterparts in the EU, legislators in California present online child safety as their sole goal, according to their critics. A stated desire to protect children is hard to argue against, even when the critics’ objections are valid: the proposed bills may in fact do nothing to better protect children, while eroding the rights of every internet user.

Among other things, AB 2273 aims to require sites and apps to authenticate the age of all their users before allowing access. Attempts to introduce mandatory age authentication have cropped up in other jurisdictions before, but have proven controversial: technically difficult to implement, intrusive to people’s privacy, and carrying a high potential to compromise the user data collected along the way.

The situation in California doesn’t look much different: critics of the bill say that age authentication will force site operators and businesses to collect personal data from every user, and to shoulder the burden of using and storing it securely.

We obtained a copy of the bill for you here.

In addition, proving one’s age in the first place requires some kind of government-issued ID, or surrendering biometric data such as that collected through facial recognition. Forcing sites and services to require this information would effectively mean the end of anonymity online.

Keep reading

How to scrub yourself from the internet, the best that you can

You can’t fully scrub yourself from the internet. A little bit of you will always linger, whether it’s in data-broker databases, on old social media you forgot about, or in the background of someone else’s vacation photos on Flickr.

That’s no reason to give up! You can absolutely take steps to protect your privacy by cleaning up things like your Google results. Doing it well takes time, money, patience, and residence in a country or state with strong privacy laws.

This week’s Ask Help Desk question is all about the data brokers: “How do I get my information deleted from data aggregators?” asks Jennifer Swindell, from Sagle, Idaho. But first, we’re going to take a step back and start with something a little more public.

Keep reading

Vice President Harris launches task force to shut down “online harassment”

The government, on both the federal and state level, is increasingly trying to police online speech – despite several lawsuits and complaints of First Amendment violations.

The White House has now created a task force that is set to combat “online harassment,” with a particular focus on addressing what it calls “gender-based” violence. The group met today.

The task force was launched by Vice President Harris and is co-chaired by the Gender Policy Council and the National Security Council.

The task force was established following the mass shootings in Buffalo, New York, and Uvalde, Texas, where there were allegations that the incidents were exacerbated by social media.

The Buffalo shooting in particular was used by New York Governor Kathy Hochul to usher in social media legislation that will likely be ruled unconstitutional.

The shootings highlight the connection between online “harassment” and extremism, an administration official alleged, as reported by The Hill.

Within the next six months, the new task force will come up with recommendations about how government and the private sector (likely tech platforms) can tackle online harassment.

Keep reading

Google, Twitter, Meta, TikTok and more just signed the EU’s “anti-disinformation” code

Big Tech companies have signed a new version of the European Union’s “anti-disinformation” code. Some of the companies that signed include Google, Twitter, Meta, TikTok, and Twitch – but also smaller players such as Vimeo and Clubhouse.

There are 34 signatories in total. Apple declined to sign.

The “code of practice on disinformation” will require online platforms to show how they are tackling “harmful content.”

It will also require platforms to fight “harmful misinformation” by forming partnerships with fact-checkers and developing moderation tools. They will be forced to include “indicators of trustworthiness” on independently verified information about hot-button issues like COVID-19 and Russia’s invasion of Ukraine.

Perhaps the most notable requirement is that platforms detail their efforts to tackle harmful content and disinformation on a country-by-country basis. The move was opposed by online platforms, but national regulators insisted they need more specific data to better address the spread of disinformation.

Keep reading

Google plots to curb the effectiveness of ad-blockers

It’s common for corporations to take actions that are ultimately to the detriment of most consumers, and Google is one of the world’s biggest. The tech giant is making changes to its Chrome Web Store, specifically ending support for Manifest v2 (Mv2) extensions, which will make it more difficult for ad-blockers to operate.

Currently, all extensions on the Chrome Web Store run on Mv2, including ad-blocker extensions such as uBlock Origin and Privacy Badger. Ad-blockers are among the store’s most popular extensions, with millions of active users who rely on them to block ads and protect their privacy. However, starting in January 2023, Google will shift from Mv2 to Mv3, rendering much of this functionality obsolete.

The shift wouldn’t make it entirely impossible to adapt existing extensions; however, Mv3 would certainly reduce the functionality of ad-blockers and limit innovation in the ad-blocking space.
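The practical difference can be sketched with a minimal example (the domain below is hypothetical, used only for illustration). Under Mv2, an ad-blocker could run arbitrary filtering code against every network request via the blocking `chrome.webRequest` API; under Mv3, it must instead declare static rules up front for the `declarativeNetRequest` API, roughly along these lines:

```json
{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image", "xmlhttprequest"]
  }
}
```

Because a rule set like this is bounded in size and cannot express arbitrary logic, sophisticated blockers that evaluate large filter lists dynamically lose capability in the transition.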

Keep reading

Wickr, Amazon’s encrypted chat app, has a child sex abuse problem — and little is being done to stop it

Wickr Me, an encrypted messaging app owned by Amazon Web Services, has become a go-to destination for people to exchange images of child sexual abuse, according to court documents, online communities, law enforcement and anti-exploitation activists.

It’s not the only tech platform that needs to crack down on such illegal content, according to data gathered by the National Center for Missing & Exploited Children, or NCMEC. But Amazon is doing comparatively little to proactively address the problem, experts and law enforcement officials say, attracting people who want to trade such material because there is less risk of detection than in the brighter corners of the internet.

NBC News reviewed court documents from 72 state and federal child sexual abuse or child pornography prosecutions where the defendant allegedly used Wickr (as it’s commonly known) from the last five years in the United States, United Kingdom and Australia, using a combination of private and public legal and news databases and search engines. Nearly every prosecution reviewed has resulted in a conviction aside from those still being adjudicated. Almost none of the criminal complaints reviewed note cooperation from Wickr itself at the time of filing, aside from limited instances where Wickr was legally compelled to provide information via a search warrant. Over 25 percent of the prosecutions stemmed from undercover operations conducted by law enforcement on Wickr and other tech platforms. 

These court cases only represent a small fraction of the problem, according to two law enforcement officers involved in investigating child exploitation cases, two experts studying child exploitation and two people who have seen firsthand how individuals frequently use Wickr and other platforms for criminal transactions on the dark web. They point to direct knowledge of child exploitation investigations and sting operations, interviews with victims and perpetrators of abuse, and interactions with individuals soliciting child sexual abuse material as evidence that Wickr is being used by many people who exploit children.  

Posts linking Wickr and child sexual abuse material are also littered across the internet. On social media platforms such as Reddit, Tumblr and Twitter, NBC News found dozens of forums, accounts and blogs where hundreds of posts have been made soliciting minors, those who have access to them, or those interested in trading child sexual abuse material alongside Wickr screen names. No child sexual abuse imagery was viewed in the course of reporting this article.

Keep reading

Reddit warns US Copyright Office internet upload filters would harm memes

Reddit has warned the US Copyright Office against internet upload filters, arguing the technology will harm free expression.

The US has been looking to update the DMCA to keep up with the copyright issues found online. Many proposals have come and gone, but the US Copyright Office is now looking into automated tools that can prevent content from being re-uploaded, aka upload filters.

In a submission to the US Copyright Office, Reddit, a platform known for user-submitted content, warned against Standard Technical Measures (STMs), including upload filters.

We obtained a copy of the submission for you here.

“Filtering technologies and STMs ill-suited to the variety of content on Reddit would limit the vitality of some of our platform’s most active communities,” Reddit said.

In its subreddits, users post copyrighted content under fair use principles to create memes and more. An upload filter would substantially harm that free flow of expression.

“Filtering technologies have difficulty merely identifying copyrighted material, let alone assessing the specific context the content was found. They cannot make nuanced judgments about fair use or transformative works,” the platform said.

Automated filters, and the false positives they would bring, would significantly harm free speech, Reddit argues.

“As a result, standardized measures are likely to remove non-infringing content and suffer from false positives. Worse, these over-removals would strike at the heart of the transformative user-generated content that makes Reddit communities unique,” Reddit explained.

“That is a severe, unnecessary, and unacceptable cost to the free expression of our users and the communities they build.”

Google has implemented such a measure through YouTube’s Content ID system, which is notorious for false claims and over-removal. According to Reddit, Content ID cannot work for every type of platform or site.

Keep reading

Canada’s broadcasting regulator confirms proposed online censorship bill will apply to user-generated content

During a hearing on Canada’s attempt to regulate what users can say on the internet, Bill C-11 (The Online Streaming Act), the head of Canada’s broadcasting and telecommunications regulator again confirmed that the far-reaching regulations will apply to user-generated content.

Ever since the bill was announced, critics have been warning that it empowers the Canadian government to censor the content users post to social media platforms by forcing these platforms to abide by content rules set by Canada’s broadcasting and telecommunications regulator – the Canadian Radio-television and Telecommunications Commission (CRTC).

These fears were confirmed by CRTC Chairman Ian Scott earlier this month when he acknowledged that Bill C-11 would apply to user-generated content. And in an appearance at Tuesday’s Canadian Heritage committee hearing, Scott reaffirmed that Bill C-11 allows the CRTC to regulate “user uploaded content.”

CRTC General Counsel and Deputy Executive Director Rachelle Frenette subsequently attempted to downplay the CRTC powers under Bill C-11 by insisting that the regulations apply to platforms, not users. However, she admitted that the CRTC could “issue rules with respect to discoverability.”

Dr. Michael Geist, a law professor at the University of Ottawa, noted that Frenette’s admission is “entirely consistent with the concerns of digital creators, namely that platforms will be required to develop outcomes that result in some content being prioritized in the name of discoverability.”

The critics of Bill C-11 include politicians, creators, and even pro-censorship Big Tech platforms.

Keep reading