Apple Reaches $95M Settlement Over Lawsuit Accusing ‘Siri’ Of Eavesdropping On Consumers

Apple has agreed to pay $95 million to settle a lawsuit accusing the company of violating its users’ privacy by using “Siri,” Apple’s artificial intelligence (AI) assistant, to eavesdrop on people with Apple devices.

The agreed-upon settlement, filed on December 13, 2024, in Oakland, California, is currently awaiting approval by a U.S. district judge.

The five-year-old lawsuit alleged that, “for over a decade,” Apple activated Siri without users’ knowledge, recording conversations unbeknownst to device owners and sharing those recordings and certain keywords with advertisers in order to push products and services.

Apple has long marketed itself as a “pioneer” in protecting its consumers’ privacy. However, users have long suspected that their devices listen to them, pointing to oddly specific ads for products or services that appear in social media apps shortly after they discuss related topics or figures out loud.

Two plaintiffs in the suit recalled that after merely mentioning Air Jordan sneakers, their iPhones began showing them ads for the shoes more often. Another noted that after discussing a specific surgical treatment with his doctor, he began receiving medical ads related to that treatment.

The allegations fly in the face of Apple CEO Tim Cook’s assertion that the right to privacy is a “fundamental human right.”

If the district judge approves the settlement, tens of millions of Apple customers who have owned devices at any time since September 17, 2014, would be able to file claims and receive up to $20 per device, depending on the volume of claims, according to court documents.
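For a sense of the arithmetic, the per-device payment shrinks once total claims outgrow the fund. Below is a minimal, purely hypothetical sketch of a capped, pro-rata payout: the $95 million fund and the $20 cap come from the filing, while the fee rate, the allocation method, and the claim counts are invented for illustration.

```python
# Hypothetical sketch of a capped, pro-rata settlement payout.
# The fund size and per-device cap come from the filing; the fee
# rate and allocation method are assumptions for illustration.
GROSS_FUND = 95_000_000
FEE_RATE = 0.30          # assumed attorneys' fees and admin costs
CAP_PER_DEVICE = 20.00   # cap per court documents

def payout_per_device(claimed_devices: int) -> float:
    """Per-device payment: the cap, or a pro-rata share once claims exceed the net fund."""
    net_fund = GROSS_FUND * (1 - FEE_RATE)
    return min(CAP_PER_DEVICE, net_fund / claimed_devices)

print(payout_per_device(3_000_000))   # 20.0  (fund covers the full cap)
print(payout_per_device(10_000_000))  # 6.65  (pro-rata share)
```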

Keep reading

IRS Failed To Properly Dispose of Sensitive Tax Documents, Report Finds

Earlier this year, a federal judge sentenced a former IRS contractor to five years in prison for leaking the tax returns of multiple high-profile billionaires. The case involves genuine wrongdoing by someone entrusted with people’s private information. But a new report from the U.S. Treasury Department found the IRS itself was routinely negligent with taxpayer documents in its possession.

“The IRS receives and creates a significant volume of sensitive documents and is responsible for protecting these sensitive documents from receipt to disposal,” according to a report from the U.S. Treasury Inspector General for Tax Administration (TIGTA). Specifically, federal agencies must “shred, burn, mulch, pulp, or pulverize sensitive documents beyond recognition and reconstruction.”

The TIGTA report notes that since 2009, the IRS has contracted with an unnamed “outside national vendor” to do this. The vendor provides IRS facilities with locked bins for storing sensitive documents, which it later picks up for secure destruction.

This vendor services “387 (75 percent) of 514 IRS facilities,” the report notes, while another 17 facilities contract with local companies. But for the rest, it’s apparently a free-for-all: “We found that the IRS is unaware of what sensitive document destruction capabilities are in place for the 110 facilities not covered under a contract. For example, the IRS initially thought the Andover, Massachusetts, facility was covered by a local sensitive document destruction contract. After we inquired about the contract, the IRS discovered that this facility was not covered by any contract.”

When the auditors then performed a site visit at that facility, they found “trash containers being used for all waste, including sensitive documents that contained tax information and Personally Identifiable Information.”

Keep reading

Global Age Verification Measures: 2024 in Review

EFF has spent this year urging governments around the world, from Canada to Australia, to abandon their reckless plans to introduce age verification for a variety of online content under the guise of protecting children online. Mandatory age verification tools are surveillance systems that threaten everyone’s rights to speech and privacy, and introduce more harm than they seek to combat.

Kids Experiencing Harm Is Not Just an Online Phenomenon

In November, Australia’s Prime Minister, Anthony Albanese, claimed that legislation was needed to protect young people in the country from the supposed harmful effects of social media. Australia’s Parliament later passed the Online Safety Amendment (Social Media Minimum Age) Bill 2024, which bans children under the age of 16 from using social media and forces platforms to take undefined “reasonable steps” to verify users’ ages or face fines of over $30 million. This mirrors France’s ban last year on social media access for children under 15 without parental consent, and Norway has also pledged to pursue a similar ban.

No study shows such a harmful impact, and kids don’t need to fall into a wormhole of internet content to experience harm: there is a whole world outside the barriers of the internet that contributes to people’s experiences, and all evidence suggests that many young people experience positive outcomes from social media. Truthful news about what’s going on in the world, such as wars and climate change, is available both online and from a newspaper on the breakfast table or a billboard on the street. Young people may also be subject to harmful behaviors like bullying offline as well as online.

The internet is a valuable resource for young people and adults alike, who rely on it to find community and to find themselves. As we said about age verification measures in the U.S. this year, online services that want to host serious discussions about mental health issues, sexuality, gender identity, substance abuse, or a host of other issues will all have to beg minors to leave and institute age verification tools to ensure that they do.

Keep reading

DARPA’s “Theory of Mind” Program Aims to Predict and Influence Behavior, Raising Privacy Concerns

Eric Davis, who joined the US Department of Defense’s (DoD) Defense Advanced Research Projects Agency (DARPA) earlier this year, has come up with a scheme dubbed “Theory of Mind.”

According to reports, it is another DARPA attempt at developing capabilities, this time algorithmic, to predict, monitor, incentivize, and modify people’s future behavior.

This ambitious, to say the least, “upcoming” program, whose existence has now, for some reason, been made public as a “special notice,” is framed as targeting adversaries and better equipping decision-makers within the US security apparatus to either deter or “incentivize” them.

The announcement could itself be meant to act as a deterrent, and there is no doubt that the US and many other countries around the world are invested in finding ways to predict and control people.

Keep reading

IMF Offers a Glimpse at the Perils of Central Bank Digital Currencies

With Bitcoin climbing over $100,000, both investors and government officials are taking a closer look at digital money. The problem is that there’s a huge difference between an independent currency designed to resist surveillance and control, and one crafted by a central bank to enable exactly that. A new handbook from the International Monetary Fund embraces the potential of cryptocurrency while highlighting the dangers inherent in state dominance of the means of storing and exchanging value.

The IMF handbook’s opening chapter discusses how central bank digital currencies (CBDC) could keep government financial institutions relevant. “With digitalization and falling cash usage in parts of the world,” the authors write, “central banks are considering CBDC to ensure a fundamental anchor of trust in the monetary system.” Also discussed is the potential for CBDCs to “potentially help lower barriers to financial inclusion in countries with underdeveloped financial systems,” to “channel government payments directly to households,” and “to help reduce frictions in cross-border payments.”

Keep reading

FBI, DEA Deployment of AI Raises Privacy, Civil Rights Concerns

A congressionally required audit of the Drug Enforcement Administration’s (DEA) and the Federal Bureau of Investigation’s (FBI) efforts to integrate AI, such as biometric facial recognition, and other emerging technologies raises significant privacy and civil rights concerns that warrant careful examination of the two agencies’ initiatives.

The 34-page audit report – which was mandated by the 2023 National Defense Authorization Act to be carried out by the Department of Justice’s (DOJ) Inspector General (IG) – found that the FBI and DEA’s integration of AI is fraught with ethical dilemmas, regulatory inadequacies, and potential impacts on individual liberties.

The IG said the integration of AI into the DEA and FBI’s operations holds promise for enhancing intelligence capabilities, but it also brings unprecedented risks to privacy and civil rights.

The two agencies’ nascent AI initiatives, as described in the IG’s audit, illustrate the tension between technological advancement and the safeguarding of individual liberties. As the FBI and DEA navigate these challenges, they must prioritize transparency, accountability, and ethical governance to ensure that AI serves the public good without compromising fundamental rights.

While the DEA and FBI have begun to integrate AI and biometric identification into their intelligence collection and analysis processes, the IG report underscores that both agencies are in the nascent stages of this integration and face administrative, technical, and policy-related challenges. These difficulties not only slow down the integration of AI, but they also exacerbate concerns about ensuring the ethical use of AI, particularly regarding privacy and civil liberties.

One of the foremost challenges is the lack of transparency associated with commercially available AI products. The IG report noted that vendors often embed AI capabilities within their software, creating a black-box scenario where users, including the FBI, lack visibility into how the algorithms function or make decisions. The absence of a software bill of materials (SBOM) — a comprehensive list of software components — compounds the problem, raising significant privacy concerns as sensitive data could be processed by opaque algorithms, potentially leading to misuse or unauthorized surveillance.

“FBI personnel … stated that most commercially available AI products do not have adequate transparency of their software components,” the IG said, noting that “there is no way for the FBI to know with certainty whether such AI capabilities are in a product unless the FBI receives a SBOM.”
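For context, an SBOM is typically a machine-readable inventory in a standard format such as CycloneDX or SPDX, and the visibility it provides is exactly what the IG says is missing. Here is a minimal sketch of reading one, assuming a CycloneDX-style JSON file; the file name is hypothetical, while “components,” “name,” and “version” are fields defined in the public CycloneDX schema.

```python
import json

# Minimal sketch: list the software components declared in a
# CycloneDX-style SBOM. The file name is hypothetical.
with open("vendor_product.cdx.json") as f:
    sbom = json.load(f)

for comp in sbom.get("components", []):
    # Each entry identifies one component bundled in the vendor's
    # product, which is the visibility the IG report says is often absent.
    print(comp.get("name", "<unnamed>"), comp.get("version", "<unknown>"))
```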

Keep reading

Cellphone Seizures & the Courts

After years of conflicting decisions by federal district courts across the country on whether Customs and Border Protection (CBP) agents can search your cell phone and laptop at ports of entry, the Seventh Circuit Court of Appeals has ruled that, “the routine inspection and search of a traveler’s electronics, or for that matter, any other type of property, at the border may be conducted without a warrant, probable cause, or even individualized suspicion of wrongdoing.”  

In reaching the decision, the court agreed with several other circuit courts, but put itself at odds with others and many (lower) federal district courts around the country. 

The issue moved quickly to the Supreme Court, which this month declined to review the Seventh Circuit’s decision, leaving it in place.  This is, sadly, despite the fact that the Fourth Circuit ruled earlier this year that “CBP agents need at least reasonable suspicion of a crime to search cell phones” and that the Ninth Circuit agreed with that ruling.  

The present case stems from the 2016 arrest of Marcos Mendez at Chicago’s O’Hare International Airport.  Mendez is most certainly not the poster boy for protection of Americans’ civil liberties, but this is the hand that civil libertarians have been dealt in the case. 

Mendez arrived at O’Hare following a trip to Ecuador.  Along with his luggage, he carried a personal cellphone, a work cellphone and a work iPad.  

Because Mendez had been convicted in 2010 on a charge of indecent solicitation of a child, and because he had a history of international travel to countries where there are weak protections for children, CBP agents pulled him aside and searched his belongings. 

Agents used a technology called DOMEX to extract the contents of his phone, where they found thousands of images of child pornography.

Mendez was promptly arrested and charged with multiple counts of possession of child pornography.  His attorneys moved to suppress the photos, arguing that they were illegally obtained in violation of the Fourth Amendment. 

The federal district court for the Northern District of Illinois denied the motion and, in the end, Mendez pleaded guilty to one count of possessing child pornography and was sentenced to six years in prison, but he preserved his right to appeal.  The Seventh Circuit has now denied that appeal, and the Supreme Court’s refusal to hear the case leaves the conviction intact.

Keep reading

Concerns over the Security of Electronic Personal Health Information Intensify

On Friday, an attorney for Ascension Health, a major U.S. hospital operator, wrote to Maine’s attorney general to inform him that the electronic personal health information (e-PHI) of Ascension patients and employees was compromised during the May ransomware attack that affected nearly 5.6 million people.

The attack significantly disrupted Ascension’s operations across its extensive network, encompassing 134,000 associates, 35,000 affiliated providers, and 140 hospitals in 19 states and the District of Columbia. Immediate consequences included the diversion of ambulances, closure of pharmacies, and a reversion to manual record-keeping methods as critical IT systems had to be taken offline.

Ascension attorney Sunil Shenoi said in his December 19 letter to the Maine Attorney General’s Consumer Protection Division that the company “will begin notifying applicable Maine residents of the security incident” through the U.S. Postal Service.

Despite the growing scale of cyber threats against the healthcare industry, a Congressional Research Service (CRS) report earlier this month emphasized that “there is no comprehensive digital data protection law in the United States.” Variable state data privacy and security laws compound this problem. Furthermore, while many data protection guidance documents are available, they are voluntary.

The attack on Ascension is the latest cyberattack targeting the healthcare sector, which is particularly vulnerable due to the sensitive nature of patient data and the critical importance of uninterrupted medical services. Earlier this year, a similar ransomware attack on Change Healthcare, a subsidiary of UnitedHealth Group, affected the personal health information of 100 million people, underscoring once again the escalating cybersecurity challenges healthcare providers face.

The February ransomware attack on Change Healthcare – attributed to the BlackCat cybercrime group – disrupted electronic payments and medical claims processing affecting healthcare providers and patients nationwide.

Keep reading

Mitt Romney’s AI Bill Seeks to Ban Anonymous Cloud Access, Raising Privacy Concerns

A new Senate bill, the Preserving American Dominance in AI Act of 2024 (S.5616), has reignited debate over its provisions, particularly its push to impose “know-your-customer” (KYC) rules on cloud service providers and data centers. Critics warn that these measures could lead to sweeping surveillance practices and unprecedented invasions of privacy under the guise of regulating artificial intelligence.

We obtained a copy of the bill for you here.

KYC regulations require businesses to verify the identities of their users, and when applied to digital platforms, they could significantly impact privacy by linking individuals’ online activities to their real-world identities, effectively eliminating anonymity and enabling intrusive surveillance.

Keep reading

Privacy in Pieces: States Scramble to Protect Data as Congress Dithers

As Congress struggles to catch up to the European Union’s comprehensive data privacy regulations, some US states have begun to forge their own robust legislation to increase user protection. But this system only protects the data of some Americans, leaving more than half the country without guaranteed data protection or privacy rights.

And it may take years before a national solution is created, if one ever is.

The EU took its first step towards providing sweeping privacy protection years ago, with the creation of the region’s General Data Protection Regulation (GDPR).

The GDPR, which took effect in 2018 and gives individuals ownership over their personal information and the right to control who can use it, is often marked as the first major, multinational step towards comprehensive data protection and privacy.

Traditionally, the EU’s approach to data privacy stems from a human rights standpoint and has its roots in World War II, when the Nazi party collected personal data to commit numerous atrocities and, later, when the East German secret police, the Stasi, carried out invasive state surveillance.

After the war ended, the right to privacy was enshrined in the European Convention on Human Rights and later in the EU Charter of Fundamental Rights, becoming the ideological foundation on which data privacy laws have been built in the EU today.

Across the Atlantic, the US Constitution does not explicitly provide a right to privacy.

Rather than enacting a comprehensive federal law, the US federal government has taken a reactive approach, passing legislation only after issues arise in a few specific business sectors, which has resulted in a series of data protection laws addressing specific types of data. For example, the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (GLBA) have protected medical and financial data respectively since the 1990s.

“The US is very much an innovation, capital-first society,” said Jodi Daniels, founder and CEO of privacy consultancy firm Red Clover Advisors. “And they do want to protect the people, but it has to all get balanced.”

But in recent years, some lawmakers have begun to push back against this system by introducing comprehensive data privacy bills, like the bipartisan American Privacy Rights Act (APRA).

Introduced in April by Sen. Maria Cantwell (D-WA) and Rep. Cathy McMorris Rodgers (R-WA), APRA is like GDPR in that it is not limited to specific business sectors and aims to minimize the amount and types of data companies can collect, give consumers control over their information, and allow them to opt out of targeted advertising.

While the legislation didn’t get very far, stalling in the House Committee on Energy and Commerce, it’s the furthest any comprehensive privacy bill has gone in Congress yet. To become law, however, it would have to be reintroduced next year when Republicans control both chambers. 

Some lawmakers, like Sen. Ted Cruz (R-TX), contend that APRA is more concerned with “controlling the internet” than striking a balance between innovation and privacy protection, and argue that the act’s private right of action, which allows individuals to pursue legal action if their privacy is violated, would give overwhelming power to trial lawyers.

Keep reading