8 ways your phone is tracking you that you can stop now

You already know that your phone knows where you are.

This is how GPS works, how Find My Friends sees your location, and why you get local ads on Facebook and Google.

Like other data on your phone, that location data is a hot commodity for internet marketers in today’s digital economy.

Targeted advertising is one of the biggest enterprises on the web.

Companies are eager to serve you ads for products you’re likely to buy, and that data helps them hit their mark.

Some companies have even made this their primary business model.

Thankfully, you don’t have to stand for this kind of data collection if you’re uncomfortable with it.

These tactics are legal because the companies behind them give you a choice to opt in or out, but not everyone knows how to change the settings.

We’ll show you how to stop your phone from tracking you.

Keep reading

The RESTRICT Act aims to tackle TikTok. But it’s overly broad and has major privacy and free speech implications.

Senator Mark Warner’s Restricting the Emergence of Security Threats that Risk Information and Communications Technology (“RESTRICT”) Act is currently making its way through the Senate, and is widely thought to be targeting China‘s TikTok in particular.

However, those who have actually read the text of the proposed act, which will next be considered by the Committee on Commerce, Science, and Transportation, warn that it is not merely about TikTok: it aims to grant the government wide powers over all forms of domestic and foreign communications, such as enforcing “any” mitigating measure to deal with risks to national security.

We obtained a copy of the bill for you here.

Observers critical of the bill also note that these measures would involve no due process, and not much in terms of safeguards.

Keep reading

OpenAI co-founder creates digital ID protocol

Digital ID company World ID, created by OpenAI co-founder Sam Altman, claims to offer a “privacy-first” solution to the problem of verifiable identification.

However, many remain skeptical about the overall idea of digital ID, and therefore about World ID as well.

The company claims that more than half of the global population lacks legally verifiable identification, and it wants to become the provider of exactly that.

World ID describes itself as a self-sovereign and decentralized protocol that provides “proof of personhood” without putting any sensitive information of the holder at risk of being compromised.

The platform says it is powered by zero-knowledge cryptography and describes itself as an open protocol that gives developers a software development kit (SDK) for building on its digital identity system.

Moreover, World ID claims it will become the largest network of authentic humans on the internet.
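For readers unfamiliar with the term, a zero-knowledge proof lets someone prove a claim, such as knowing a secret, without revealing the secret itself. The toy Schnorr-style sketch below, with deliberately tiny parameters and hypothetical function names, illustrates that general idea; it is not World ID’s actual protocol or SDK.

```python
# Toy sketch of a Schnorr-style zero-knowledge proof of knowledge.
# Illustration only: these tiny parameters and function names are not
# World ID's construction. Real systems use far larger groups.
import hashlib
import secrets

# Toy group: p = 23 is a safe prime (p = 2q + 1, q = 11) and g = 2
# generates the subgroup of order q.
p, q, g = 23, 11, 2

def prove(secret_x: int):
    """Prover: show knowledge of x with y = g^x mod p, without revealing x."""
    y = pow(g, secret_x, p)                        # public value
    r = secrets.randbelow(q)                       # random nonce
    t = pow(g, r, p)                               # commitment
    c = int.from_bytes(hashlib.sha256(f"{g}{y}{t}".encode()).digest(), "big") % q
    s = (r + c * secret_x) % q                     # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier: accept iff g^s == t * y^c (mod p); x itself is never seen."""
    c = int.from_bytes(hashlib.sha256(f"{g}{y}{t}".encode()).digest(), "big") % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, t, s = prove(secret_x=7)
print(verify(y, t, s))  # True, yet the secret x was never transmitted
```

In a proof-of-personhood setting, the same principle is what would let a holder prove “I am a verified, unique human” without handing over the underlying biometric or identity data.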

Keep reading

Indiana’s Bureau of Motor Vehicles is caught collecting and selling personal data

Indiana’s Bureau of Motor Vehicles (BMV) has been caught selling drivers’ personal information without their consent and without the option to opt out. Last year alone, the BMV made around $25 million from selling personal information, according to WRTV.

Asked if the BMV sells personal information, a BMV employee said to WRTV: “No. Well, you’re not supposed to. Can’t tell you for sure what they do, but they’re not supposed to!”

Though the employees might not be aware of the practice, an investigation by WRTV found that the Indiana BMV does sell personal information and the practice is legal. The BMV can sell personal information like your name, date of birth, past and current addresses, license plate number, make and model of your vehicle, VIN, date of purchase, license type, and your driver’s record.

In the past decade (2012 to 2022) the BMV made over $237 million from selling drivers’ personal information. It sells the personal information to lawyers, bail bond companies, insurance companies, private investigators, debt collection companies, recovery agents, law enforcement agencies, security guards, auto dealers, tow companies, school corporations, and mobile home parks.

The BMV refused an on-camera interview. However, in an emailed statement, a spokesperson said: “Data is only available to qualified entities who meet the eligibility and use requirements in Indiana Code § 9-14-13-7 or § 9-14-13-8.

“Consumers do not have the option to opt out at this time,” they added.

Keep reading

Giving up biometrics at US airports soon won’t be optional, transport security chief says

The chief of the Transportation Security Administration (TSA), David Pekoske, said that the agency is considering biometric technology to reduce traveler processing times and the number of screening officers needed. He made the comments at the South by Southwest conference, during a session focused on aviation security.

Pekoske noted that the TSA’s role is to maintain the security of the transportation system and to stay ahead of threats. For those reasons, it is “critically important that this system has as little friction as it possibly can, while we provide for safety and security.”

The TSA has been relying on biometric technology in the identity verification process. According to the agency, the newest technology it has been using is over 99% effective and, unlike the older technology, does not have problems identifying darker-skinned people.

“We’re upgrading our camera systems all the time, upgrading our lighting systems,” Pekoske said. “[We’re] upgrading our algorithms, so that we are using the very most advanced algorithms and technology we possibly can.”

Pekoske said that the agency will ensure it remains transparent with the public about the data that is taken, what it is used for, and for how long it will be stored. For now, he said that travelers can opt out of processes they are not comfortable with.

According to The Dallas Morning News, giving up biometric data for travel will eventually not be optional.

Keep reading

Eric Schmidt testifies that there should be no “true anonymity” when accessing generative AI platforms

Eric Schmidt, a former Google CEO who previously said, “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place,” recently testified that people shouldn’t be able to have “true anonymity” when using generative AI products — a predictive form of AI that can produce various types of content.

Schmidt made the comments on generative AI when testifying at a House Oversight Committee hearing titled “Advances in AI: Are We Ready For a Tech Revolution?”

During the hearing, the former Google CEO offered three principles for how people should interact with generative AI platforms, one of which related to stripping true anonymity from users.

“You need to know who the users are, even if you don’t tell the end user who they are, there needs to be some notion of who they are and where they came from,” Schmidt said. “True anonymity hidden behind a paywall would allow nation-state attacks.”

Keep reading

‘Geofence’ Warrants Threaten Every Phone User’s Privacy

The last time your phone asked you to allow this or that app access to your location data, you may have had some trepidation about how much Apple or Google know about you. You may have worried about what might come of that, or read about China’s use of the data to track anti-lockdown protesters. What you probably didn’t realize is that Google has already searched your data on behalf of the federal government to see if you were involved with January 6th.

But last month, the federal district court in DC issued an opinion in the case of one of the many defendants who stand accused of sacking the Capitol in the wake of the 2020 election.

And with it, Judge Rudolph Contreras became the first federal district judge to approve a “Geofence” warrant, endorsing a recent police innovation: searching the cell phone history of every American to check who happened to be in the area of some potential crime.

The “Geofence” in this context refers to cell phone location data collected by Google from users of its Android operating system, as well as iPhone users who use apps such as Google Maps. Location tracking can be turned off, but most users allow it for the convenience of getting directions, tracking their daily jog, or finding the nearest Chipotle. The Government’s warrant demanded location history for every Google account holder within a range of longitude and latitude roughly corresponding to the Capitol building on the afternoon of January 6, 2021, along with similar data from that morning and evening (to filter out Hill staff and security guards).
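To make the mechanics concrete, here is a minimal sketch of the kind of filter such a warrant describes: keep only the location records that fall inside a latitude/longitude box during a set time window. The coordinates, field names, and sample data below are hypothetical placeholders for illustration, not the warrant’s actual parameters or Google’s internal process.

```python
# Minimal illustration of a geofence-style filter over location history.
# Hypothetical bounding box, window, and records; not the real warrant data.
from datetime import datetime, timezone

LAT_MIN, LAT_MAX = 38.8890, 38.8905
LON_MIN, LON_MAX = -77.0110, -77.0085
WINDOW_START = datetime(2021, 1, 6, 12, 0, tzinfo=timezone.utc)
WINDOW_END = datetime(2021, 1, 6, 20, 0, tzinfo=timezone.utc)

def in_geofence(record: dict) -> bool:
    """True if a location record falls inside the box during the window."""
    return (
        LAT_MIN <= record["lat"] <= LAT_MAX
        and LON_MIN <= record["lon"] <= LON_MAX
        and WINDOW_START <= record["timestamp"] <= WINDOW_END
    )

history = [
    {"account": "user-a", "lat": 38.8899, "lon": -77.0091,
     "timestamp": datetime(2021, 1, 6, 14, 30, tzinfo=timezone.utc)},
    {"account": "user-b", "lat": 40.7128, "lon": -74.0060,
     "timestamp": datetime(2021, 1, 6, 14, 30, tzinfo=timezone.utc)},
]
print([r["account"] for r in history if in_geofence(r)])  # ['user-a']
```

The controversy is not the filtering itself, which is trivial, but its scope: the search runs over everyone’s location history rather than a particular suspect’s.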

It’s not clear this information was even needed: This defendant was apprehended within the building that day, carrying knives and pepper spray, and appears on various security cameras; his whereabouts are not in question. Many of his coreligionists were considerate enough to live stream their antics themselves. While tracking down every participant in what was dubbed the Beer Belly Putsch is impractical, prosecutors have not lacked for defendants, or for evidence against them. But the government nonetheless decided to resort to a level of mass surveillance without precedent in history or criminal law. Contreras is only the second federal district judge to rule on such a warrant, and the first, in the Eastern District of Virginia, found it “invalid for lack of particularized probable cause” (though that judge declined to suppress the evidence on the basis of other Fourth Amendment loopholes created by the Supreme Court).

That particular requirement comes from the Fourth Amendment itself, which calls for every warrant to “particularly describ[e] the place to be searched, and the persons or things to be seized.” This means that, for instance, the warrant issued last year for former President Trump’s Florida residence did not simply say “search the house,” but detailed specific rooms to be searched for specific things (boxes of documents). The cops can’t — or at least are not supposed to — dump out your underwear drawer based on a tip that you’re hiding cocaine in your basement.

Keep reading

Now They Can Actually Use WiFi “To See People Through Walls”

If you truly wanted complete and total privacy, you would need to give up nearly all of the technology that you are currently using.  I wish that wasn’t true, but this is the reality of the world in which we now live.

The “Big Brother surveillance grid” is constantly growing and evolving all around us, and those who are using it to watch, monitor, track, influence and control us have an insatiable appetite for more data.  They are constantly pushing the envelope, and most people don’t seem to care.  But if we don’t stand up for our rights now, eventually we will find ourselves living in a society where there is absolutely no privacy at all.

In fact, many would argue that we are already there.

Recently, I was horrified to learn that researchers have been working on a way to use WiFi to look through the walls of our homes to see what we are doing.  The following is a brief excerpt from a Vice article entitled “Scientists Are Getting Eerily Good at Using WiFi to ‘See’ People Through Walls in Detail”:

Researchers at Carnegie Mellon University developed a method for detecting the three dimensional shape and movements of human bodies in a room, using only WiFi routers.

To do this, they used DensePose, a system for mapping all of the pixels on the surface of a human body in a photo. DensePose was developed by London-based researchers and Facebook’s AI researchers. From there, according to their recently-uploaded preprint paper published on arXiv, they developed a deep neural network that maps WiFi signals’ phase and amplitude sent and received by routers to coordinates on human bodies.
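To give a rough sense of what “mapping WiFi signals’ phase and amplitude to coordinates on human bodies” means, here is a minimal PyTorch sketch of that kind of network. The layer sizes, feature counts, and names are my own assumptions for illustration, not the CMU researchers’ actual DensePose-based architecture.

```python
# Sketch of a network that maps WiFi channel measurements (amplitude and
# phase per antenna pair and subcarrier) to 2D body-keypoint coordinates.
# All dimensions here are assumed for illustration only.
import torch
import torch.nn as nn

N_SUBCARRIERS = 30     # assumed subcarriers per antenna pair
N_ANTENNA_PAIRS = 9    # assumed 3x3 transmit/receive layout
N_KEYPOINTS = 17       # COCO-style body keypoints, for illustration

class WifiPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        in_features = N_ANTENNA_PAIRS * N_SUBCARRIERS * 2  # amplitude + phase
        self.net = nn.Sequential(
            nn.Linear(in_features, 512),
            nn.ReLU(),
            nn.Linear(512, 256),
            nn.ReLU(),
            nn.Linear(256, N_KEYPOINTS * 2),  # (x, y) per keypoint
        )

    def forward(self, csi: torch.Tensor) -> torch.Tensor:
        # csi: (batch, antenna_pairs, subcarriers, 2) amplitude/phase features
        out = self.net(csi.flatten(start_dim=1))
        return out.view(-1, N_KEYPOINTS, 2)

# One batch of random channel measurements -> predicted keypoint coordinates
dummy_csi = torch.randn(4, N_ANTENNA_PAIRS, N_SUBCARRIERS, 2)
keypoints = WifiPoseNet()(dummy_csi)
print(keypoints.shape)  # torch.Size([4, 17, 2])
```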

As we have seen with so many other highly intrusive surveillance technologies, those who have developed this method are touting the benefits that it could have:

The researchers argue that their Wi-Fi approach to imaging humans in households could be applied to home healthcare, where patients may not want to be monitored with a camera in places like the bathroom or with other sensors and tracking devices.

No matter how old I get, I don’t ever want anyone using any technology to monitor me while I am taking a dump.

If that makes me old-fashioned, so be it.

Keep reading

Face Biometrics Getting Deeper into Policing, Sparking Concerns

Those worried about the use of facial recognition by law enforcement have warned about how the technology could become entrenched in bureaucracies, growing in use and becoming harder for anyone outside government to question.

A trio of recent reports, from Germany, the Netherlands, and the United Kingdom, seems to bear that out.

In Germany, civil rights activist Matthias Monroy, writing on his own blog, says use of a facial recognition system for identifying unknown people grew “dramatically” from 2021 to 2022.

The database reportedly belongs to Germany’s federal police. According to Monroy, it was searched about 7,700 times in 2022, compared to 6,100 times in 2021.

About 2,800 people were identified using the police’s algorithm last year, compared to 1,300 in 2021.

The advocate says that the Federal Ministry offered the information after being asked by a party in parliament. He also said that, according to the ministry, the same data has not been received from German states.

The images are gathered from CCTV cameras and from phones used by police to record the faces of suspects of crimes. Asylum seekers are in the same database.

Reportedly, the number of facial images in the police database grew by about 1.5 million last year compared to the previous year primarily because only 400,000 images were deleted.

If German police are starting to hold on to photos longer, they might be in good company.

Trade publication ComputerWeekly is reporting that some in the UK feel the government is adopting a biometrics “culture of retention.”

Keep reading

The sudden global push for age verification to end online anonymity and drive digital ID uptake

Anonymity online, long considered by civil rights groups as a fundamental feature of the internet, is coming under fire from different directions, and through a range of methods: from criticism by political elites, to actual legislation.

One kind of new law that can undermine or do away with online anonymity is legislation mandating age verification before users are allowed onto a website. Another effect of these rules is to increase the uptake of digital IDs.

Earlier this week, Senator Josh Hawley introduced the Making Age-Verification Technology Uniform, Robust, and Effective (MATURE) Act, whose goal is to prevent minors under 16 from accessing social media.

To ensure this, all users would have to have their age verified before creating an account by giving up information that fully reveals their identity: legal name, date of birth, and a copy of a government-issued ID proving that this is their actual name and age.
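Whatever verification method a platform uses, the gate itself comes down to comparing a verified date of birth against the threshold. Here is a minimal sketch, assuming the platform already holds such a verified date of birth; the function names are hypothetical.

```python
# Minimal age-gate sketch: compare a verified date of birth to the threshold.
# Function names are hypothetical; the hard part (verifying the DOB) is elided.
from datetime import date

MINIMUM_AGE = 16  # the threshold described in the MATURE Act summary above

def age_on(date_of_birth: date, today: date) -> int:
    """Whole years elapsed between date_of_birth and today."""
    had_birthday = (today.month, today.day) >= (date_of_birth.month, date_of_birth.day)
    return today.year - date_of_birth.year - (0 if had_birthday else 1)

def may_create_account(date_of_birth: date, today: date) -> bool:
    """Allow sign-up only if the verified date of birth meets the minimum age."""
    return age_on(date_of_birth, today) >= MINIMUM_AGE

print(may_create_account(date(2010, 5, 1), date(2023, 3, 20)))  # False: 12 years old
print(may_create_account(date(2005, 5, 1), date(2023, 3, 20)))  # True: 17 years old
```

The privacy-invasive part is everything before that comparison: collecting and checking the documents that make the date of birth “verified” in the first place.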

Hawley’s explanation for this and similar initiatives is that social media is harmful to children, from monetizing their data to facilitating exploitation and manipulation. But the solution would clearly affect everyone’s privacy by further “unmasking” them to notorious data collectors and (ab)users, those same social sites.

Age verification is also being pushed in some states but, in Utah at least, the proposal that has just been adopted in the local Senate is leaving out the government ID requirement.

The bill, known as SB152, which will next be considered in the House, makes it mandatory for companies behind social media platforms to make sure that children can sign up only with their parents’ consent. To ensure this is the case, the ages of all users would be verified.

However, how exactly these companies can accomplish that remains unclear for now, while the bill’s sponsor, State Senator Mike McKell, is quoted as saying that “there are third-party options that use various technologies to verify ages without government IDs.”

Facial recognition is mentioned in reports as one such option, while another is to use “existing consumer data.” Once again, the need for such legislation is explained as a way to protect children from bad influences online.

Keep reading