Instagram to Permanently Ban Users Who Send ‘Hate Speech’ in Private Messages

Instagram has announced that it will now permanently ban users who send “hate speech” in private messages.

The platform announced its new speech policing policy on Wednesday.

In a statement about its censorship, Instagram boasted that 95% of the “6.5 million pieces of hate speech” it acted on from July through September were censored by the platform without anyone even reporting them. In other words, nobody was upset or offended, but the platform decided what you can or cannot see and share.

“Today, we’re announcing that we’ll take tougher action when we become aware of people breaking our rules in DMs. Currently, when someone sends DMs that break our rules, we prohibit that person from sending any more messages for a set period of time. Now, if someone continues to send violating messages, we’ll disable their account. We’ll also disable new accounts created to get around our messaging restrictions, and will continue to disable accounts we find that are created purely to send abusive messages,” the statement explained.

Keep reading

This is how we lost control of our faces

In 1964, mathematician and computer scientist Woodrow Bledsoe first attempted the task of matching suspects’ faces to mugshots. He measured out the distances between different facial features in printed photographs and fed them into a computer program. His rudimentary successes would set off decades of research into teaching machines to recognize human faces.
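Bledsoe's measure-and-compare idea can be sketched in a few lines. Everything below is purely illustrative: the landmark coordinates are invented, and the helper names are not from any historical system.

```python
from itertools import combinations
from math import dist

def feature_vector(landmarks):
    """All pairwise distances between a face's landmark points,
    echoing Bledsoe's measure-the-features approach."""
    return [dist(p, q) for p, q in combinations(landmarks, 2)]

def best_match(probe, gallery):
    """Index of the gallery face whose distance vector lies closest
    to the probe's (a simple nearest-neighbour comparison)."""
    pv = feature_vector(probe)
    scores = [sum((a - b) ** 2 for a, b in zip(pv, feature_vector(g)))
              for g in gallery]
    return scores.index(min(scores))

# Hypothetical five-point landmark sets (eye corners, nose tip, mouth corners).
mug_a = [(30, 40), (70, 40), (50, 60), (38, 80), (62, 80)]
mug_b = [(20, 45), (80, 45), (50, 70), (30, 90), (70, 90)]
probe = [(x + 100, y + 50) for x, y in mug_a]  # same face, shifted in the frame
print(best_match(probe, [mug_a, mug_b]))  # 0: pairwise distances ignore translation
```

Comparing pairwise distances rather than raw coordinates is what made the approach workable on printed photographs: it is insensitive to where the face sits in the frame, though not to pose or scale.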

Now a new study shows just how much this enterprise has eroded our privacy. It hasn’t just fueled an increasingly powerful tool of surveillance. The latest generation of deep-learning-based facial recognition has completely disrupted our norms of consent.

Deborah Raji, a fellow at nonprofit Mozilla, and Genevieve Fried, who advises members of the US Congress on algorithmic accountability, examined over 130 facial-recognition data sets compiled over 43 years. They found that researchers, driven by the exploding data requirements of deep learning, gradually abandoned asking for people’s consent. This has led more and more of people’s personal photos to be incorporated into systems of surveillance without their knowledge.

It has also led to far messier data sets: they may unintentionally include photos of minors, use racist and sexist labels, or have inconsistent quality and lighting. The trend could help explain the growing number of cases in which facial-recognition systems have failed with troubling consequences, such as the false arrests of two Black men in the Detroit area last year.

People were extremely cautious about collecting, documenting, and verifying face data in the early days, says Raji. “Now we don’t care anymore. All of that has been abandoned,” she says. “You just can’t keep track of a million faces. After a certain point, you can’t even pretend that you have control.”

Keep reading

How Safe Are Your Financial Records? BofA Helped Feds Investigate D.C. Riots

A recent Tucker Carlson report should remind American citizens just how far down the rabbit hole of tyranny and surveillance their government has gone. It should also remind Americans how Big Tech and Big Banking work “hand in glove” toward the destruction of freedom.

(And if you think this is bad, imagine how much worse a cashless society would be.)

Bank of America gave customers’ information to the authorities without their knowledge

According to Carlson’s report on FOX News, Bank of America searched through customer transactions to target people who may have been involved in the “riot” at the Capitol. BOA then handed the information over to federal authorities at their request.

The bank said it identified 211 customers who “fit the profile.” Among the deciding factors used to identify these people were purchases made in the D.C. area and airline tickets or accommodations booked in the days surrounding the event.
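As a rough illustration of the kind of transaction filter the report describes, the sketch below flags customers by region and purchase category within a date window. All field names, categories, dates, and records here are hypothetical, not BofA's actual criteria.

```python
from datetime import date

# Hypothetical date window around the event.
EVENT_WINDOW = (date(2021, 1, 4), date(2021, 1, 8))

def fits_profile(tx):
    """True if a transaction matches the reported criteria: a purchase
    in the D.C. area, or an airline/lodging purchase, within the window."""
    in_window = EVENT_WINDOW[0] <= tx["date"] <= EVENT_WINDOW[1]
    return in_window and (tx["region"] == "DC"
                          or tx["category"] in {"airline", "lodging"})

# Invented records for illustration.
transactions = [
    {"customer": "A", "date": date(2021, 1, 6), "region": "DC", "category": "food"},
    {"customer": "B", "date": date(2021, 1, 5), "region": "VA", "category": "airline"},
    {"customer": "C", "date": date(2021, 2, 1), "region": "NY", "category": "food"},
]
flagged = {t["customer"] for t in transactions if fits_profile(t)}
print(sorted(flagged))  # ['A', 'B']
```

The point of the sketch is how coarse such a dragnet is: anyone who bought lunch downtown or flew anywhere that week "fits the profile," which is exactly how an innocent person ends up taken in for questioning.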

Carlson reports that at least one person whom BOA identified was “taken in for questioning” but was later found to have no links to the event. Carlson justly accused BOA of spying on its customers and sharing their private information. Indeed, BOA gave the information to law enforcement and federal authorities without the knowledge of those concerned.

Keep reading

Cellebrite: Israel’s Good Cyber Cop is Big Tech’s Backdoor to Breaching Your Privacy

Privacy and security have long been among the top selling points for iOS devices in the interminable marketing fracas between Apple and its competitors, with fancy additions to its suite of protection features like fingerprint scanning and facial recognition. Android devices, by contrast, always seemed to lag behind in the personal encryption space, but have caught up fairly recently, in the consumer’s mind at least.

The cat, as they say, is out of the bag thanks to researchers at Johns Hopkins University, who decided to test the mobile security systems of two of the biggest mobile device makers, Apple and Google. Their findings reveal that the layers of security protecting our data are only skin deep and that much of the encryption structure built into these devices goes unused. “I’ve come out of the project thinking almost nothing is protected,” Matthew Green, the professor who oversaw the study, told Wired.

Using the companies’ own data and records spanning over a decade, the team of cryptographers found a plethora of security loopholes that can be, and are being, exploited “by hackers and law enforcement alike.” The latter’s access to our mobile devices is of particular concern, given “the privacy risks involved in unchecked seizure and search.” Significantly, it is not necessarily your local police precinct that has the right tools to extract readable data from your cell phone or laptop (though that is changing); rather, these unique abilities are reserved for private cybersecurity companies who offer their services to police and other government entities.

One such firm, Israeli cyber forensics company Cellebrite, boasts of its ability to “unlock and extract data from all iOS and high-end Android devices,” a service it has been selling to governments around the world. More recently, it has packaged that service into a product called the Universal Forensic Extraction Device, or UFED, which has been purchased by multiple law enforcement agencies across the globe. These include the Hong Kong Police, which used Cellebrite’s hacking technology to “crack protestors’ smartphones” during the anti-extradition riots of 2019, and the NYPD, which enrolled in Cellebrite’s “UFED Premium program” that same year, giving ‘New York’s finest’ the capability to extract ostensibly private citizens’ data from the department’s own computers and laptops.

Keep reading

Arizona High Court Misses Opportunity to Uphold Internet Users’ Online Privacy

It’s an uncontroversial position that EFF has long fought for: Internet users expect their private online activities to stay that way. That’s why law enforcement should have to get a search warrant before getting records of people’s Internet activities. 

But in a disappointing decision earlier this month, the Arizona Supreme Court rejected a warrant requirement for services to disclose Internet users’ activities and other information to law enforcement, a setback for people’s privacy online.

In a 4-3 opinion, the Arizona high court ruled in State v. Mixton that people do not have a reasonable expectation of privacy in information held by online services that record their online activities, such as IP address logs. According to the Court, that information is not protected by either the federal Constitution’s Fourth Amendment or the state’s constitution, because people disclose that information to third-party online services whenever they use them, a legal principle known as the third-party doctrine.

Keep reading

Telegram feature exposes your precise address to hackers

If you’re using an Android device—or in some cases an iPhone—the Telegram messenger app makes it easy for hackers to find your precise location when you enable a feature that allows users who are geographically close to you to connect. The researcher who discovered the disclosure vulnerability and privately reported it said Telegram’s developers have no plans to fix it.

The problem stems from a feature called People Nearby. By default, it’s turned off. When users enable it, their geographic distance is shown to other people who have it turned on and are in (or are spoofing) the same geographic region. When People Nearby is used as designed, it’s a useful feature with few if any privacy concerns. After all, a notification that someone is 1 kilometer or 600 meters away still leaves stalkers guessing where, precisely, you are.

Stalking made simple

Independent researcher Ahmed Hassan, however, has shown how the feature can be abused to divulge exactly where you are. Using readily available software and a rooted Android device, he’s able to spoof the location his device reports to Telegram servers. By using just three different locations and measuring the corresponding distance reported by People Nearby, he is able to pinpoint a user’s precise location.
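The three-measurement trick is plain trilateration: each reported distance defines a circle around the spoofed position, and the three circles intersect at the target. A minimal sketch of the arithmetic, with coordinates and distances invented for illustration:

```python
from math import hypot

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Recover a 2D position from three reference points and the distance
    to each, the same arithmetic an attacker can apply to three spoofed
    People Nearby readings."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise yields a 2x2 linear system.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # solve by Cramer's rule
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Example: a target at (400, 300) metres in a local grid; three spoofed
# observer positions and the distances an app would report from each.
target = (400.0, 300.0)
points = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]
dists = [hypot(target[0] - x, target[1] - y) for x, y in points]
print(trilaterate(*points, *dists))  # (400.0, 300.0)
```

This is why a "600 meters away" readout is only private against an honest client: an attacker who can spoof position needs just three readings to turn distances into an address.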

Keep reading

Montreal woman says officer insisted on searching her lunch bag during curfew stop

Sarah Vresk was heading to work at around 4 a.m. Tuesday when she was stopped near her home by Montreal police and asked to prove she had the right to be on the road during curfew.

“I got my letter out of my glove compartment and he asked for my ID. I gave him that,” said Vresk. “He then asked me what was in my bag.”

Vresk demanded to know why that mattered, and why she wasn’t free to go after showing a letter from her employer stating she works for a snow-removal contractor and needs to be on the job during curfew.

The officer questioned the validity of that letter, saying it’s just a piece of paper, and threatened to give her a ticket anyway, Vresk said. The officer accused her of delaying detainment by not co-operating and showing the bag’s contents.

Vresk finally gave in to the officer’s demands, allowing him to inspect her lunch bag.

The officer then returned to his cruiser to check her credentials while his partner took over questioning. Finally, Vresk was let go without a ticket.

Keep reading

Creepiest Tech Guide of 2020 — Activity Trackers, Wireless Headphones, Smart Speakers, OMG!

For many years, experts have warned about the privacy AND cybersecurity risks associated with “Smart,” wireless, and Internet of Things (IoT) devices and technology.

Thanks to Mozilla for publishing its 2020 guide to help you determine which products are too invasive for your comfort level. Hint: if privacy matters to you in the slightest, completely avoid Amazon’s new Halo health band.

Keep reading