Democrats’ SAFE TECH Act Could ‘Destroy Most of the Open Web’

A newly proposed change to Section 230 would introduce legal liability for online platforms and forums over third-party speech. It is being put forward as a way of combating alleged racial and social injustices online. According to critics, however, the bill is ill-conceived and could transform large parts of the internet for the worse while entrenching powerful players against smaller competitors.

Section 230 has become a hot topic in the US in recent years. Under this law, which “defined how the Internet works”, platforms cannot be held liable for harmful or illegal third-party content they host, and the protection extends to services that filter users’ submissions and curate the content featured on their pages. As the Washington Post recounts, the Section was created in the wake of two lawsuits in the 1990s, against Prodigy Services and against CompuServe, which reached opposite conclusions about whether moderating content exposed a service to publisher liability.

The provision has come under criticism from both Democratic and Republican legislators, albeit for different reasons. The goal of Republicans, including former president Trump, was to address selective political censorship, which has repeatedly been alleged against Silicon Valley online platforms. For example, in December last year, Trump attempted to use his veto power over a proposed defense bill as leverage against Congress to push for an outright repeal of Section 230.

Critics of the law among Democrats, on the other hand, have blamed social media platforms for being reluctant or slow to remove content deemed harmful, from hostile communication perceived as harassment to the spread of unreliable information.

Keep reading

Google Quietly Escalates Manual Search Censorship

Google has quietly escalated censorship of its market-dominating search engine, adding a range of new topics where human moderators are allowed to manually penalize websites, suppressing them in search results.

If a website is affected by one of these manual acts of censorship, “some or all of that site will not be shown in Google search results,” according to the tech giant.

The list, published in full on Google’s support website, includes the following:

  • Discover policy violation: Adult-themed content
  • News and Discover policy violation: Dangerous content
  • News and Discover policy violation: Harassing content
  • News and Discover policy violation: Hateful content
  • News and Discover policy violation: Manipulated media
  • News and Discover policy violation: Medical content
  • Discover policy violation: Misleading content
  • News and Discover policy violation: Sexually explicit content
  • News and Discover policy violation: Terrorist content
  • News policy violation: Transparency
  • News and Discover policy violation: Violence and gore content
  • News and Discover policy violation: Vulgar language and profanity

Publishers who have been hit with a manual action by Google will be able to appeal the decision by “fixing” whatever issue violated the policy and then submitting their website to Google for a review. Google states that it could take “several days or a week” for the tech giant to reach a final decision, leaving the site suppressed in the meantime.

Once upon a time, Google attempted to conceal its censorship of search. Alphabet CEO Sundar Pichai insisted, under oath before Congress, that the company does not “manually intervene on any particular search result,” a statement that one of Google’s own former employees said was a lie.

Keep reading

This is how we lost control of our faces

In 1964, mathematician and computer scientist Woodrow Bledsoe first attempted the task of matching suspects’ faces to mugshots. He measured out the distances between different facial features in printed photographs and fed them into a computer program. His rudimentary successes would set off decades of research into teaching machines to recognize human faces.
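Bledsoe’s early approach, measuring distances between facial landmarks and treating those measurements as a comparable feature vector, can be illustrated with a minimal modern sketch. The landmark names and coordinates below are invented for illustration; they are not from Bledsoe’s program or any real dataset.

```python
import math

# Hypothetical (x, y) landmark coordinates measured from a photograph,
# in the spirit of Bledsoe's manual measurements. Values are illustrative.
landmarks = {
    "left_eye": (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose_tip": (50.0, 60.0),
    "mouth_center": (50.0, 80.0),
}

def distance(a, b):
    """Euclidean distance between two landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# A simple feature vector: distances between selected landmark pairs.
# Two photos of the same face should yield similar vectors.
pairs = [
    ("left_eye", "right_eye"),
    ("left_eye", "nose_tip"),
    ("nose_tip", "mouth_center"),
]
features = [distance(landmarks[p], landmarks[q]) for p, q in pairs]
```

Matching a suspect to a mugshot then reduces to comparing such vectors, for example by their overall numeric difference; deep-learning systems later replaced these hand-measured features with learned ones.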

Now a new study shows just how much this enterprise has eroded our privacy. It hasn’t just fueled an increasingly powerful tool of surveillance. The latest generation of deep-learning-based facial recognition has completely disrupted our norms of consent.

Deborah Raji, a fellow at nonprofit Mozilla, and Genevieve Fried, who advises members of the US Congress on algorithmic accountability, examined over 130 facial-recognition data sets compiled over 43 years. They found that researchers, driven by the exploding data requirements of deep learning, gradually abandoned asking for people’s consent. This has led more and more of people’s personal photos to be incorporated into systems of surveillance without their knowledge.

It has also led to far messier data sets: they may unintentionally include photos of minors, use racist and sexist labels, or have inconsistent quality and lighting. The trend could help explain the growing number of cases in which facial-recognition systems have failed with troubling consequences, such as the false arrests of two Black men in the Detroit area last year.

People were extremely cautious about collecting, documenting, and verifying face data in the early days, says Raji. “Now we don’t care anymore. All of that has been abandoned,” she says. “You just can’t keep track of a million faces. After a certain point, you can’t even pretend that you have control.”

Keep reading

Bill Would Force Social Media Users To Secretly Report Suspicious People To Law Enforcement

Senator Joe Manchin wants to bring the DHS’s spy-on-your-neighbors “If You See Something, Say Something” program to social media, blogs, websites, and much more. Manchin’s bill, the “See Something, Say Something Online Act”, would essentially turn social media users into federal spies by forcing them to report suspicious people to law enforcement.

Just how bad is this bill?

This bill would essentially force anyone on social media to report suspicious “transmissions” to law enforcement.

Known Suspicious Transmission.—The term “known suspicious transmission” is any suspicious transmission that an interactive computer service should have reasonably known to have occurred or have been notified of by a director, officer, employee, agent, interactive computer service user, or State or Federal law enforcement agency.

Major Crime.—The term “major crime” means a Federal criminal offense that is a crime of violence (as defined in section 16 of title 18, United States Code); relating to domestic or international terrorism (as those terms are defined in section 2331 of title 18, United States Code)

What exactly is a known suspicious transmission or major crime?

A suspicious transmission is defined as any post, private message, comment, tag, transaction, or any other user-generated content or transmission that government officials later determine commits, facilitates, incites, promotes, or otherwise assists the commission of a major crime. Major crimes are defined as anything involving violence, domestic or international terrorism, or a serious drug offense.

How could social media users, bloggers, web forum moderators, web conferencing users, etc., know that a comment someone left or uttered would later lead to that person committing a major crime?

The See Something, Say Something Online Act would force social media users to red-flag every person’s comments just in case someone commits a major crime in the future.

This bill would effectively destroy the First Amendment as we know it, dispelling any vestiges of America still being a free country.

Keep reading

Arizona High Court Misses Opportunity to Uphold Internet Users’ Online Privacy

It’s an uncontroversial position that EFF has long fought for: Internet users expect their private online activities to stay that way. That’s why law enforcement should have to get a search warrant before getting records of people’s Internet activities. 

But in a disappointing decision earlier this month, the Arizona Supreme Court rejected a warrant requirement for services to disclose Internet users’ activities and other information to law enforcement, a setback for people’s privacy online.

In a 4-3 opinion, the Arizona high court ruled in State v. Mixton that people do not have a reasonable expectation of privacy in information held by online services that record their online activities, such as IP address logs. According to the Court, that information is not protected by either the federal Constitution’s Fourth Amendment or the state’s constitution, because people disclose that information to third-party online services whenever they use them, a legal principle known as the third-party doctrine.

Keep reading