WEF hears about technology that allows your thoughts to be monitored

The annual World Economic Forum (WEF) gathering has always been a testing ground for some bizarre ideas, which nonetheless serve a purpose: to introduce, and if possible normalize, all kinds of mass-surveillance and sometimes extremely privacy-invasive technologies.

And monitoring people’s brain activity, including via implants – surely, it doesn’t get much more invasive than that.

Yet this was one of the technologies presented at an event in Davos this year by Duke University Professor Nita Farahany.

Brain implants are not new in and of themselves; they are already used in medicine to treat some serious conditions. However, the kind brought up here at one point are meant to be put into healthy people – basically to read their minds.

“Decoding complex thought” is already possible, Farahany said during her “Ready for Brain Transparency?” talk at the WEF summit last week. The technology can also reveal the degree of stress somebody is experiencing, as well as what they are paying attention to. So the goal is to know how a person is feeling, what they are thinking, and what draws their interest.

Keep reading

What Privacy? This AI Can Identify You by Your Computer Habits

The world of privacy is a constant battlefield. It’s not a static decision where once you’ve done this one single step, you’re now good until the end of time. Instead, you have to stay abreast of the research, studying the ways that privacy is constantly being diminished so that you can then take the appropriate steps to respond.

If you’ve read through a privacy policy for an app, website, or contract in the past, you’ve likely noticed that they state they may sell your data to third parties. Exactly who these third parties are, you never know, nor what your information is being used for in the first place.

But sometimes you find the privacy policy tries to add a feel-good clause, saying something to the effect that “our data about you is completely anonymous.”

Keep reading

J6 committee released docs ‘inadvertently’ containing SSNs, those affected reportedly not alerted

As the final days of the blatantly biased House Jan. 6 committee wound down, it posted online the rotten fruits of its labor — hundreds of documents gathered as part of its quest to end former President Donald Trump’s political career.

According to a new report, included among the “massive cache” of materials were the Social Security numbers of nearly 2,000 high-profile people who visited the White House in December 2020.

On Friday, the second anniversary of the fateful events at the Capitol, The Washington Post reported that the “inadvertently” doxxed individuals include “at least three members of Trump’s Cabinet, a few Republican governors and numerous Trump allies.”

Many of the Social Security numbers listed in the logs were redacted, but The Post reports that roughly 1,900 of them were revealed, including those of South Dakota Gov. Kristi Noem (R), her husband, and her three children.

A spokesperson for Noem, Ian Fury, said the Government Publishing Office (GPO), the original publisher of the file, did not even bother to give Noem a heads-up about the massive breach of privacy.

“To my knowledge, we were not notified,” Fury said. “The governor was not notified.”

Keep reading

You Now Need a Government ID to Access Pornhub in Louisiana

A new law makes porn sites liable for content deemed “harmful to minors” if they don’t install age verification technology for anyone accessing them from Louisiana—and it’s already affecting how people in the state access Pornhub.

The law, which was signed by Louisiana’s Democratic governor John Bel Edwards in June, became effective on January 1, 2023. 

The law, passed as Act 440, states: 

“Any commercial entity that knowingly and intentionally publishes or distributes material harmful to minors on the internet from a website that contains a substantial portion of such material shall be held liable if the entity fails to perform reasonable age verification methods to verify the age of individuals attempting to access the material.”

A “substantial portion” is defined as 33.3 percent or more of a site’s material being “harmful to minors.”

Material that’s harmful to minors, according to the act, is defined as appealing to prurient interests, and that consists of “pubic hair, anus, vulva, genitals, or nipple of the female breast; Touching, caressing, or fondling of nipples, breasts, buttocks, anuses, or genitals; Sexual intercourse, masturbation, sodomy, bestiality, oral copulation; flagellation, excretory functions, exhibitions, or any other sexual act,” and lacks “serious literary, artistic, political, or scientific value” for someone under 18 years of age. 

It also states that any commercial entity in violation will be liable “to an individual for damages resulting from a minor’s accessing the material.” 

Motherboard confirmed, through a virtual private network, that Pornhub is showing people visiting the site from a Louisiana-based IP address a page that requires identity verification before entering. “Louisiana law now requires us to put in place a process for verifying the age of users who connect to our site from Louisiana,” the page says.

Keep reading

‘Power Run Amok’: Madison Square Garden Uses Face-Scanning Tech to Remove Perceived Adversaries

BARBARA HART WAS celebrating her wedding anniversary and waiting for Brandi Carlile to take the stage at Madison Square Garden on Oct. 22, when a pair of security guards approached her and her husband by their seats and asked for the couple to follow them. At first, Hart tells Rolling Stone she was excited, thinking it was some sort of surprise before the concert started. Her excitement turned to anxiety soon after, however, as she spoke with security and gathered that she’d been identified using facial-recognition technology. Then they escorted her out of the venue. 

Hart was initially confused, having no idea why she was flagged. She says security informed her that she was being ejected because of her job as an attorney at Grant & Eisenhofer, a law firm currently litigating against Madison Square Garden’s parent company in a Delaware class-action suit involving several groups of shareholders.

Madison Square Garden Entertainment, owned by James Dolan (who has been known to kick out fans who anger him), confirms to RS that it enacted a policy in recent months forbidding anyone in active litigation against the company from entry to the company’s venues — which include the New York arena that gives the company its name, along with Radio City Music Hall, Beacon Theatre, and the Chicago Theatre. The company’s use of facial recognition tools itself dates back to at least 2018, when the New York Times reported on it; anyone who enters the venue is subject to scanning, and that practice now seems to coincide with the policy against opposing litigants.

“This is retaliatory behavior of powerful people against others, and that should be concerning to us,” says Hart, who also spoke of the incident in a sworn affidavit last month, as Reuters reported. Hart recalls that she declined to give MSG security her ID, but that they were able to correctly identify her anyway; she says security mentioned her picture appearing on Grant & Eisenhofer’s website, leading her to the conclusion that facial recognition was involved. “It was a very eerie experience to be on the receiving end of at that moment.”

Keep reading

A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook?

In the fall of 2020, gig workers in Venezuela posted a series of images to online forums where they gathered to talk shop. The photos were mundane, if sometimes intimate, household scenes captured from low angles—including some you really wouldn’t want shared on the Internet. 

In one particularly revealing shot, a young woman in a lavender T-shirt sits on the toilet, her shorts pulled down to mid-thigh.

The images were not taken by a person, but by development versions of iRobot’s Roomba J7 series robot vacuum. They were then sent to Scale AI, a startup that contracts workers around the world to label audio, photo, and video data used to train artificial intelligence. 

They were the sorts of scenes that internet-connected devices regularly capture and send back to the cloud—though usually with stricter storage and access controls. Yet earlier this year, MIT Technology Review obtained 15 screenshots of these private photos, which had been posted to closed social media groups. 

The photos vary in type and in sensitivity. The most intimate image we saw was the series of video stills featuring the young woman on the toilet, her face blocked in the lead image but unobscured in the grainy scroll of shots below. In another image, a boy who appears to be eight or nine years old, and whose face is clearly visible, is sprawled on his stomach across a hallway floor. A triangular flop of hair spills across his forehead as he stares, with apparent amusement, at the object recording him from just below eye level.

The other shots show rooms from homes around the world, some occupied by humans, one by a dog. Furniture, décor, and objects located high on the walls and ceilings are outlined by rectangular boxes and accompanied by labels like “tv,” “plant_or_flower,” and “ceiling light.” 

Keep reading

New web tracking technique is bypassing privacy protections

Advertisers and web trackers have been able to aggregate users’ information across all of the websites they visit for decades, primarily by placing third-party cookies in users’ browsers.

Two years ago, several browsers that prioritize user privacy—including Safari, Firefox, and Brave—began to block third-party cookies for all users by default. This presents a significant issue for businesses that place ads on the web on behalf of other companies and rely on cookies to track click-through rates to determine how much they need to get paid.

Advertisers have responded by pioneering a new method for tracking users across the Web, known as user ID (or UID) smuggling, which does not require third-party cookies. But no one knew exactly how often this method was used to track people on the Internet.

Researchers at UC San Diego have for the first time sought to quantify the frequency of UID smuggling in the wild, by developing a measurement tool called CrumbCruncher. CrumbCruncher navigates the Web like an ordinary user, but along the way, it keeps track of how many times it has been tracked using UID smuggling.
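The core heuristic behind a measurement tool of this kind can be sketched as follows. This is an illustrative reconstruction, not the researchers’ actual code: crawl the same navigations as two different simulated users, then flag query parameters whose value is stable within one user’s session but differs between users — the signature of an identifier rather than shared content.

```python
from urllib.parse import urlparse, parse_qsl

def candidate_uids(urls_user_a, urls_user_b):
    """Flag query parameters that look like smuggled user IDs:
    the value stays constant across one user's navigations but
    differs between the two simulated users, and is long enough
    to plausibly be unique. Parameter names and thresholds here
    are illustrative assumptions."""
    def collect(urls):
        seen = {}  # param name -> set of values observed
        for u in urls:
            for k, v in parse_qsl(urlparse(u).query):
                seen.setdefault(k, set()).add(v)
        return seen

    a, b = collect(urls_user_a), collect(urls_user_b)
    flagged = []
    for param in a.keys() & b.keys():
        # Stable per user, different across users, and at least
        # 8 characters long (a heuristic uniqueness threshold).
        if (len(a[param]) == 1 and len(b[param]) == 1
                and a[param] != b[param]
                and len(next(iter(a[param]))) >= 8):
            flagged.append(param)
    return sorted(flagged)
```

Run against two short crawls, a parameter like `uid=a1b2c3d4e5` that persists for one user but changes for another would be flagged, while a shared value like `page=home` would not.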

The researchers found that UID smuggling was present in about 8 percent of the navigations that CrumbCruncher made. They presented these results at the Internet Measurement Conference Oct. 25 to 27, 2022 in Nice, France. The team is also releasing both their complete dataset and their measurement pipeline for use by browser developers.

The team’s main goal is to raise awareness of the issue with browser developers, said first author Audrey Randall, a computer science Ph.D. student at UC San Diego. “UID smuggling is more widely used than we anticipated,” she said. “But we don’t know how much of it is a threat to user privacy.”

UID smuggling can have legitimate uses, the researchers say. For example, embedding user IDs in URLs can allow a website to realize a user is already logged in, which means they can skip the login page and navigate directly to content. It’s also a tool that a company that owns websites with different domains can use to track user traffic.

It’s also, of course, a tool for affiliate advertisers to track traffic and get paid. For example, a blogger who advertises a product using affiliate links might be paid a commission if anyone clicks their links and then makes a purchase. UID smuggling can identify which blogger should get the commission.
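Mechanically, the technique is simple: before a navigation leaves a page, the tracker rewrites the destination URL to carry its identifier, and script on the destination reads it back out of the landing URL. A minimal sketch, with the parameter name `uid` as an assumed convention rather than any particular tracker’s:

```python
from urllib.parse import urlencode, urlparse, parse_qs, urlunparse

def smuggle_uid(dest_url, uid, param="uid"):
    """Append a first-party identifier to an outbound link, the way
    a tracking script rewrites hrefs just before navigation."""
    parts = urlparse(dest_url)
    query = parse_qs(parts.query)
    query[param] = [uid]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

def recover_uid(landed_url, param="uid"):
    """On the destination page, read the identifier back out of the
    landing URL; a real deployment would then store it first-party."""
    return parse_qs(urlparse(landed_url).query).get(param, [None])[0]
```

Because the identifier travels inside the URL itself rather than in a cookie header, blocking third-party cookies does nothing to stop it — which is why cookie-blocking browsers have had to develop separate defenses, such as stripping known tracking parameters from links.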

But there are potentially more dangerous uses that researchers worry about. For example, a data broker could use UID smuggling to gather a database of users’ Internet navigation.

Keep reading

FBI Director pushes for “lawful access” to encrypted messages

FBI Director Christopher Wray last month spoke before the US Senate Homeland Security and Governmental Affairs Committee, and, among the many topics dedicated to “threats to the homeland,” he addressed that of encryption.

His remarks on this are carried by the FBI website under the heading, “Lawful Access.” Wray opens by saying that the agency is a strong advocate of “wide and consistent” encryption use.

The FBI chief goes on with platitudes, and not particularly sincere ones (considering his statements that followed): protecting online data and privacy is a top priority, and encryption a key element.

“Encryption without lawful access, though, does have a negative effect on law enforcement’s ability to protect the public,” Wray says, continuing the FBI’s long-established stance that strong encryption prevents law enforcement from performing their duties.

Keep reading

Binance’s ‘CZ’ Says Half Billion WhatsApp User Records For Sale On Dark Web

Nearly half a billion WhatsApp users’ mobile phone numbers are allegedly for sale on a dark web community forum, according to multiple sources, including Binance’s billionaire Changpeng “CZ” Zhao. 

“A new set of 487 million WhatsApp phone numbers for sales in the Dark Web,” CZ tweeted Sunday. He said a sample of hacked data “indicates the phone numbers are legit.”

CZ warned users on the Meta-owned platform that “threat actors downstream will use this data to conduct smishing (phishing messages) campaigns.” 

Cybernews initially confirmed the hack. They said: 

On November 16, an actor posted an ad on a well-known hacking community forum, claiming they were selling a 2022 database of 487 million WhatsApp user mobile numbers.

The dataset allegedly contains WhatsApp user data from 84 countries. Threat actor claims there are over 32 million US user records included.

Another huge chunk of phone numbers belongs to the citizens of Egypt (45 million), Italy (35 million), Saudi Arabia (29 million), France (20 million), and Turkey (20 million).

The dataset for sale also allegedly has nearly 10 million Russian and over 11 million UK citizens’ phone numbers.

The threat actor told Cybernews they were selling the US dataset for $7,000, the UK – $2,500, and Germany – $2,000.

Cybernews also posted a screenshot of the seller’s post on the forum featuring the total number of phone numbers per country. 

Keep reading

The UK plots to ban private messaging

The UK’s media regulator, Ofcom, will get more surveillance powers than spy agencies under the Online Safety Bill, according to a legal analysis by the Index on Censorship organization.

The legislation would allow Ofcom to force tech companies to clamp down on “child abuse” and “terrorist content” by ending end-to-end encryption on messaging platforms like WhatsApp, Signal, Telegram, and Facebook Messenger and forcing all communications to be scanned.

Human rights lawyer Matthew Ryder, in a legal opinion commissioned by Index on Censorship, said that the powers the bill would afford Ofcom “allow the state to compel [tech companies] to carry out surveillance of the content of communications on a generalized and widespread basis.”

The regulator would not need prior authorization before making a demand to a tech company to scan messages and there would be no independent oversight over how the regulator uses its powers.

Ryder added: “We are unable to envisage circumstances where such a destructive step in the security of global online communications for billions of users could be justified.”

Communications by journalists, whistleblowers, and victims would no longer be safe. Additionally, it is not clear if Ofcom would make public the demands it issues or whether it would keep them secret.

Keep reading