“In people who suffer from stress-related diseases, this circadian rhythm is completely thrown off and if the body makes too much or not enough cortisol, that can seriously damage an individual’s health, potentially leading to obesity, cardiovascular disease, depression or burnout.” – Adrian Ionescu, lead researcher at the Nanoelectronic Devices Laboratory, Swiss Federal Institute of Technology Lausanne (EPFL)

While these devices may be helpful in a hospital setting, technology companies fully intend to integrate them into wearable tech like smartwatches, pushing us closer to a world where everything we do is tracked and recorded around the clock.

“The joint R&D team at EPFL and Xsensio reached an important R&D milestone in the detection of the cortisol hormone,” said Xsensio CEO Esmeralda Megally. “Xsensio will make the cortisol sensor a key part of its Lab-on-Skin™ platform to bring stress monitoring to next-gen wearables.”

These microchips are intended to eventually connect to the ‘internet of things,’ a comprehensive array of devices that track and record us at all times, from our homes to our places of work. Former US intelligence chief James Clapper admitted over five years ago that the government ‘might’ use the internet of things to spy on you.
Before he became a household name as the accused spoiler of the 2016 election, James Comey, FBI director under President Barack Obama, was already well-known in tech circles as a crusader against strong encryption. Still smarting from Edward Snowden’s exposure of the US government’s massive and illegal domestic spying operations, Comey grabbed any microphone he could during the waning years of Obama’s tenure to warn Americans that encryption technology was putting us all at grave risk by causing law enforcement to “go dark.”
Cryptography is the art of encoding text or other data such that only those who have the secret key can read it. This data can include anything from messages and records to digital currency—but these days encryption most commonly protects account passwords and other sensitive information as it traverses the internet.
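The core idea can be shown with a toy sketch. This is illustrative only, not a real cipher: it uses a throwaway XOR one-time pad, whereas production systems rely on vetted algorithms such as AES. The point is simply that without the secret key, the ciphertext is unreadable.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the corresponding key byte.
    # Applying the same operation twice recovers the original.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # secret key, shared out of band

ciphertext = xor_cipher(message, key)
recovered = xor_cipher(ciphertext, key)  # XOR is its own inverse

assert recovered == message
```

With a truly random key as long as the message, an eavesdropper who sees only the ciphertext learns nothing about the plaintext; the whole problem of cryptography in practice is managing and protecting such keys.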
Encryption has been around for millennia and, in modern times, it is used on a daily basis by nearly every person living in a technologized society. But like any technology, it can frighten those in power when wielded by the relatively powerless. In the summer of 2015, Comey told the Senate Judiciary Committee that encryption had suddenly inspired the FBI “to consider how criminals and terrorists might use advances in technology to their advantage.”
Sensitive to the public’s lingering outrage at the Snowden revelations, Comey turned to the usual parade of horribles in his attempts to convince Congress that encryption isn’t all it’s cracked up to be: “Malicious actors can take advantage of the internet to covertly plot violent robberies, murders and kidnappings,” he warned. “Sex offenders can establish virtual communities to buy, sell and encourage the creation of new depictions of horrific sexual abuse of children.”
Comey preferred to use “horrific sexual abuse of children” and the specter of terrorism to disparage encryption technology—recall the showdown between the FBI and Apple after the perpetrators of a late-2015 massacre in San Bernardino left behind an encrypted iPhone. But the ACLU (4/1/16) quickly exposed his fraud: Researchers uncovered 63 court orders for access to encrypted devices and reported, “To the extent we know about the underlying facts, these cases predominantly arise out of investigations into drug crimes”—rather than terrorists and pedophiles.
In the wake of the January 6 mob attack on the US Capitol Building, this pattern is repeating itself… only now corporate media are taking up the FBI’s mantle on their own behalf.
Cell phones are convenient devices, handily connecting us with loved ones, paying bills, accessing information—and treacherously reporting on our every move. Worse, even after the Supreme Court weighed in, many government agencies still insist that they have the right to pull up that tracking data to see our whereabouts. It’s increasingly apparent that, if you have your phone in your pocket, you may as well have a GPS beacon strapped to your ankle. If you want anonymity from the government, leave the gadget at home.
That point was illustrated in the wake of the Capitol riot, when the authorities pulled cell phone records to see who was present.
“In the hours and days after the Capitol riot, the FBI relied in some cases on emergency orders that do not require court authorization in order to quickly secure actual communications from people who were identified at the crime scene,” The Intercept reported this week. “Investigators have also relied on data ‘dumps’ from cellphone towers in the area to provide a map of who was there, allowing them to trace call records — but not content — from the phones.”
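The mechanics of a tower “dump” can be sketched in a few lines: filter the carrier’s call-detail metadata by tower and time window to get every phone that connected in the area. All numbers, tower IDs, and timestamps below are invented for illustration; real records come from the carriers under legal process.

```python
from datetime import datetime

# Hypothetical call-detail records as a carrier might return them:
# (phone_number, tower_id, timestamp). Metadata only, no call content.
records = [
    ("202-555-0101", "TOWER-CAPITOL-1", datetime(2021, 1, 6, 14, 5)),
    ("202-555-0102", "TOWER-GEORGETOWN-3", datetime(2021, 1, 6, 14, 7)),
    ("202-555-0101", "TOWER-CAPITOL-1", datetime(2021, 1, 6, 15, 40)),
]

def tower_dump(records, tower_id, start, end):
    # Every distinct number that pinged the given tower in the window.
    return {num for num, tower, ts in records
            if tower == tower_id and start <= ts <= end}

present = tower_dump(records, "TOWER-CAPITOL-1",
                     datetime(2021, 1, 6, 13, 0),
                     datetime(2021, 1, 6, 17, 0))
```

One query over metadata like this yields exactly what The Intercept describes: a map of who was there, without any message or call content.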
The data collected by people’s phones and the apps they use, often compiled by marketing firms, is amazingly detailed. An individual “outraged by the events of Jan. 6” supplied data on participants in the day’s events to The New York Times, whose writers were thoroughly creeped out by the information.
“While there were no names or phone numbers in the data, we were once again able to connect dozens of devices to their owners, tying anonymous locations back to names, home addresses, social networks and phone numbers of people in attendance,” Charlie Warzel and Stuart A. Thompson wrote.
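How “anonymous” location data gets tied back to a name can be sketched under simple assumptions: a device’s most frequent overnight location is a strong guess at its owner’s home, which public records can then link to a person. Every record below is invented for illustration.

```python
from collections import Counter

# Hypothetical anonymized pings: (device_id, hour_of_day, rounded lat/lon).
pings = [
    ("device-17", 2, (38.90, -77.03)),
    ("device-17", 3, (38.90, -77.03)),
    ("device-17", 13, (38.89, -77.01)),  # near the Capitol at midday
    ("device-17", 23, (38.90, -77.03)),
]

# A public-records lookup keyed by location (hypothetical).
address_book = {(38.90, -77.03): "123 Elm St (J. Doe)"}

def likely_home(device_pings):
    # Assume the most frequent overnight location (10pm-6am) is home.
    overnight = [loc for _, hour, loc in device_pings if hour >= 22 or hour < 6]
    return Counter(overnight).most_common(1)[0][0]

home = likely_home([p for p in pings if p[0] == "device-17"])
owner = address_book.get(home)  # the "anonymous" device now has a name
```

The absence of names and phone numbers in the raw data, in other words, provides almost no protection once the traces are joined against any directory keyed by address.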
Marketing databases have become a favorite resource for government agencies, which purchase the information as an attempted end-run around Fourth Amendment protections. The theory is that, since the data is “voluntarily” provided to a third party, no warrant is required for the government to obtain it.
Recent court documents have indicated that the Federal Bureau of Investigation (FBI) possesses a tool allowing them to access encrypted messages on the Signal app.
Signal has rapidly gained in popularity as Silicon Valley monopolists have grown more openly hostile to free speech, but the platform may be vulnerable to backdoors that undermine the privacy protections provided through the encrypted messaging service.
According to documents filed by the Department of Justice and first obtained by Forbes, Signal’s encrypted messages can be intercepted from iPhone devices when those Apple devices are in a mode called “partial AFU,” which means “after first unlock.”
When phones are in partial AFU mode, Signal messages can be seized by federal authorities and other potentially hostile interests. According to one expert, GrayKey and Cellebrite are the tools the FBI typically uses to extract this sensitive information.
“It uses some very advanced approach using hardware vulnerabilities,” said Vladimir Katalov, founder of the Russian forensics company ElcomSoft, who believes GrayKey was used by federal authorities to crack Signal.
This vulnerability within the Signal app may not be a design flaw, but rather a deliberate backdoor to allow authorities to access private messages. The app was initially funded with backing from the deep state, after all.
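Why “after first unlock” matters can be shown with a toy model. This is a sketch of the general principle, not Apple’s actual implementation: once the owner has entered the passcode a single time since boot, certain decryption keys stay cached in memory until the phone reboots, and a forensic tool that can read that memory no longer needs the passcode.

```python
# Toy model of file-protection key caching on a smartphone.
# Before first unlock (BFU), passcode-derived keys are absent;
# after first unlock (AFU), they persist in memory even if the
# screen re-locks, which is what memory-reading tools exploit.

class Phone:
    def __init__(self, passcode):
        self._passcode = passcode
        self._cached_keys = {}          # empty before first unlock (BFU)

    def unlock(self, passcode):
        if passcode == self._passcode:
            # Derive and cache a class key; it persists after re-lock.
            self._cached_keys["messages"] = f"key-from-{passcode}"

    def forensic_read(self, key_class):
        # A memory-reading tool needs only the cached key, not the passcode.
        return self._cached_keys.get(key_class)

phone = Phone("1234")
assert phone.forensic_read("messages") is None      # BFU: nothing cached
phone.unlock("1234")                                # owner unlocks once
assert phone.forensic_read("messages") is not None  # AFU: key extractable
```

In this model the “vulnerability” is not a flaw in any one app: any data protected by a key that remains cached in the AFU state is reachable by a tool that can read device memory.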
Instagram has announced that it will now permanently ban users who send “hate speech” in private messages.

The platform announced its new speech-policing policy on Wednesday.
In a statement about its censorship, Instagram boasted that 95% of the “6.5 million pieces of hate speech” removed from July through September were censored by the platform without anyone even reporting them. In other words, nobody was upset or offended, but the platform decided what you can or cannot see and share.
“Today, we’re announcing that we’ll take tougher action when we become aware of people breaking our rules in DMs. Currently, when someone sends DMs that break our rules, we prohibit that person from sending any more messages for a set period of time. Now, if someone continues to send violating messages, we’ll disable their account. We’ll also disable new accounts created to get around our messaging restrictions, and will continue to disable accounts we find that are created purely to send abusive messages,” the statement explained.
In 1964, mathematician and computer scientist Woodrow Bledsoe first attempted the task of matching suspects’ faces to mugshots. He measured out the distances between different facial features in printed photographs and fed them into a computer program. His rudimentary successes would set off decades of research into teaching machines to recognize human faces.
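Bledsoe’s approach can be sketched in a few lines: represent each face as a vector of hand-measured distances between landmarks, then match a probe photo to the nearest vector in the mugshot database. The measurements and names below are invented for illustration.

```python
import math

# Hypothetical hand-measured distances between facial landmarks
# (eye spacing, nose width, mouth width), as Bledsoe-era systems used.
mugshots = {
    "suspect_a": (6.2, 3.1, 4.8),
    "suspect_b": (5.8, 3.4, 5.1),
}

def match(probe, database):
    # Return the mugshot whose feature vector is closest to the probe
    # in Euclidean distance.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(database, key=lambda name: dist(probe, database[name]))

best = match((6.1, 3.0, 4.9), mugshots)  # → suspect_a
```

Modern deep-learning systems replace the hand-measured distances with learned feature vectors, but the nearest-match step is conceptually the same.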
Now a new study shows just how much this enterprise has eroded our privacy. It hasn’t just fueled an increasingly powerful tool of surveillance. The latest generation of deep-learning-based facial recognition has completely disrupted our norms of consent.
Deborah Raji, a fellow at nonprofit Mozilla, and Genevieve Fried, who advises members of the US Congress on algorithmic accountability, examined over 130 facial-recognition data sets compiled over 43 years. They found that researchers, driven by the exploding data requirements of deep learning, gradually abandoned asking for people’s consent. This has led more and more of people’s personal photos to be incorporated into systems of surveillance without their knowledge.
It has also led to far messier data sets: they may unintentionally include photos of minors, use racist and sexist labels, or have inconsistent quality and lighting. The trend could help explain the growing number of cases in which facial-recognition systems have failed with troubling consequences, such as the false arrests of two Black men in the Detroit area last year.
People were extremely cautious about collecting, documenting, and verifying face data in the early days, says Raji. “Now we don’t care anymore. All of that has been abandoned,” she says. “You just can’t keep track of a million faces. After a certain point, you can’t even pretend that you have control.”
A recent Tucker Carlson report should remind American citizens just how far down the rabbit hole of tyranny and surveillance their government has gone. It should also remind Americans how Big Tech and Big Banking work “hand in glove” toward the destruction of freedom.
(And if you think this is bad, imagine how much worse a cashless society would be.)
Bank of America gives customers’ information to the authorities without their knowledge
According to Carlson’s report on Fox News, Bank of America searched through customer transactions to target people who may have been involved in the “riot” at the Capitol. BOA then handed that information over to federal authorities at their request.
The bank said it identified 211 customers who “fit the profile.” Among the deciding factors were purchases made in the D.C. area and airline tickets or lodging booked in the days surrounding the event.
Carlson reports that at least one person who BOA identified was “taken in for questioning” but was later found to have no links to the event. Carlson justly accused BOA of spying on its customers and sharing their private information. Indeed, BOA gave the information to law enforcement and federal authorities without the knowledge of those concerned.
Privacy and security have long been among the top selling points for iOS devices in the interminable marketing fracas between Apple and its competitors, with fancy additions to Apple’s suite of protection features like fingerprint scanning and facial recognition. Android devices, by contrast, always seemed to lag behind in the personal encryption space, but have caught up fairly recently, at least in the consumer’s mind.
The cat, as they say, is out of the bag thanks to researchers at Johns Hopkins University, who decided to test the mobile security systems of two of the biggest mobile device makers, Apple and Google. Their findings reveal that the layers of security protecting our data are only skin deep, and that many of the encryption structures built into these devices go unused. “I’ve come out of the project thinking almost nothing is protected,” Matthew Green, the professor who oversaw the study, told Wired.
Using the companies’ own data and records spanning over a decade, the team of cryptographers found a plethora of security loopholes that can be, and are being, exploited “by hackers and law enforcement alike.” The latter’s access to our mobile devices is of particular concern, given “the privacy risks involved in unchecked seizure and search.” Significantly, it is not necessarily your local police precinct that has the right tools to extract readable data from your cell phone or laptop (though that is changing); rather, these unique abilities are reserved for private cybersecurity companies that offer their services to police and other government entities.
One such firm, the Israeli cyber forensics company Cellebrite, boasts of its ability to “unlock and extract data from all iOS and high-end Android devices,” a service it has been selling to governments around the world. Cellebrite has more recently integrated that service into a product called the Universal Forensic Extraction Device, or UFED, which has been purchased by multiple law enforcement agencies across the globe. These include the Hong Kong Police, which used Cellebrite’s hacking technology to “crack protestors’ smartphones” during the anti-extradition riots of 2019, and the NYPD, which enrolled in Cellebrite’s “UFED Premium” program that same year, giving ‘New York’s finest’ the capability to extract ostensibly private citizens’ data from the department’s own computers and laptops.
It’s an uncontroversial position that EFF has long fought for: Internet users expect their private online activities to stay that way. That’s why law enforcement should have to get a search warrant before getting records of people’s Internet activities.
But in a disappointing decision earlier this month, the Arizona Supreme Court rejected a warrant requirement for services to disclose Internet users’ activities and other information to law enforcement, a setback for people’s privacy online.
In a 4-3 opinion, the Arizona high court ruled in State v. Mixton that people do not have a reasonable expectation of privacy in information held by online services that record their online activities, such as IP address logs. According to the Court, that information is not protected by either the federal Constitution’s Fourth Amendment or the state’s constitution, because people disclose that information to third-party online services whenever they use them, a legal principle known as the third-party doctrine.