IMF Offers a Glimpse at the Perils of Central Bank Digital Currencies

With Bitcoin climbing over $100,000, both investors and government officials are taking a closer look at digital money. The problem is that there’s a huge difference between an independent currency designed to resist surveillance and control, and one crafted by a central bank to enable exactly that. A new handbook from the International Monetary Fund embraces the potential of cryptocurrency while highlighting the dangers inherent in state dominance of the means of storing and exchanging value.

The IMF handbook’s opening chapter discusses how central bank digital currencies (CBDCs) could keep government financial institutions relevant. “With digitalization and falling cash usage in parts of the world,” the authors write, “central banks are considering CBDC to ensure a fundamental anchor of trust in the monetary system.” The authors also suggest that CBDCs could “potentially help lower barriers to financial inclusion in countries with underdeveloped financial systems,” “channel government payments directly to households,” and “help reduce frictions in cross-border payments.”

Keep reading

FBI, DEA Deployment of AI Raises Privacy, Civil Rights Concerns

A required audit of the Drug Enforcement Administration’s (DEA) and Federal Bureau of Investigation’s (FBI) efforts to integrate AI, such as biometric facial recognition and other emerging technologies, raises significant privacy and civil rights concerns that warrant careful examination of the two agencies’ initiatives.

The 34-page audit report – which was mandated by the 2023 National Defense Authorization Act to be carried out by the Department of Justice’s (DOJ) Inspector General (IG) – found that the FBI and DEA’s integration of AI is fraught with ethical dilemmas, regulatory inadequacies, and potential impacts on individual liberties.

The IG said the integration of AI into the DEA and FBI’s operations holds promise for enhancing intelligence capabilities, but it also brings unprecedented risks to privacy and civil rights.

The two agencies’ nascent AI initiatives, as described in the IG’s audit, illustrate the tension between technological advancement and the safeguarding of individual liberties. As the FBI and DEA navigate these challenges, they must prioritize transparency, accountability, and ethical governance to ensure that AI serves the public good without compromising fundamental rights.

While the DEA and FBI have begun to integrate AI and biometric identification into their intelligence collection and analysis processes, the IG report underscores that both agencies are in the nascent stages of this integration and face administrative, technical, and policy-related challenges. These difficulties not only slow down the integration of AI, but they also exacerbate concerns about ensuring the ethical use of AI, particularly regarding privacy and civil liberties.

One of the foremost challenges is the lack of transparency associated with commercially available AI products. The IG report noted that vendors often embed AI capabilities within their software, creating a black-box scenario where users, including the FBI, lack visibility into how the algorithms function or make decisions. The absence of a software bill of materials (SBOM) — a comprehensive list of software components — compounds the problem, raising significant privacy concerns as sensitive data could be processed by opaque algorithms, potentially leading to misuse or unauthorized surveillance.

“FBI personnel … stated that most commercially available AI products do not have adequate transparency of their software components,” the IG said, noting that “there is no way for the FBI to know with certainty whether such AI capabilities are in a product unless the FBI receives a SBOM.”
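The software bill of materials the IG describes can be made concrete. The sketch below is a hypothetical illustration, not anything drawn from the audit: it uses the CycloneDX JSON format, whose 1.5 revision added a “machine-learning-model” component type, to show how an agency receiving an SBOM could scan it for embedded AI components. The component names are invented for the example.

```python
import json

# A minimal, hypothetical SBOM fragment in CycloneDX-style JSON.
# Component names are illustrative, not from any real vendor product.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "image-codec", "version": "2.1.0"},
    {"type": "machine-learning-model", "name": "face-embedding-model", "version": "0.9"},
    {"type": "library", "name": "tls-stack", "version": "3.0.4"}
  ]
}
"""

def find_ml_components(sbom_text: str) -> list[str]:
    """Return the names of components declared as machine-learning models."""
    bom = json.loads(sbom_text)
    return [c["name"] for c in bom.get("components", [])
            if c.get("type") == "machine-learning-model"]

print(find_ml_components(sbom_json))  # prints ['face-embedding-model']
```

Without such a manifest from the vendor, there is nothing to scan, which is the IG’s point: the agency cannot know whether opaque AI capabilities are present at all.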

Keep reading

Cellphone Seizures & the Courts

After years of conflicting decisions by federal district courts across the country on whether Customs and Border Protection (CBP) agents can search your cell phone and laptop at ports of entry, the Seventh Circuit Court of Appeals has ruled that, “the routine inspection and search of a traveler’s electronics, or for that matter, any other type of property, at the border may be conducted without a warrant, probable cause, or even individualized suspicion of wrongdoing.”  

In reaching the decision, the court agreed with several other circuit courts but put itself at odds with others, as well as with many lower federal district courts around the country.

The issue moved quickly to the Supreme Court, which upheld the Seventh Circuit’s decision this month. This is, sadly, despite the fact that the Fourth Circuit ruled earlier this year that “CBP agents need at least reasonable suspicion of a crime to search cell phones,” a ruling with which the Ninth Circuit agreed.

The present case stems from the 2016 arrest of Marcos Mendez at Chicago’s O’Hare International Airport.  Mendez is most certainly not the poster boy for protection of Americans’ civil liberties, but this is the hand that civil libertarians have been dealt in the case. 

Mendez arrived at O’Hare following a trip to Ecuador.  Along with his luggage, he carried a personal cellphone, a work cellphone and a work iPad.  

Because Mendez had been convicted in 2010 on a charge of indecent solicitation of a child, and because he had a history of international travel to countries where there are weak protections for children, CBP agents pulled him aside and searched his belongings. 

Agents used a technology called DOMEX to extract the contents of his phone, where they found thousands of images of child pornography.

Mendez was promptly arrested and charged with multiple counts of possession of child pornography.  His attorneys moved to suppress the photos, arguing that they were illegally obtained in violation of the Fourth Amendment. 

The federal district court for the Northern District of Illinois denied the motion and, in the end, Mendez pleaded guilty to one count of producing child pornography and was sentenced to six years in prison, preserving his right to appeal. The Seventh Circuit has now denied that appeal, and the Supreme Court has upheld the conviction.

Keep reading

Concerns over the Security of Electronic Personal Health Information Intensify

On Friday, an attorney for Ascension Health, a major U.S. hospital operator, wrote to Maine’s attorney general to report that the electronic personal health information (e-PHI) of Ascension patients and employees was compromised during the May ransomware attack that affected nearly 5.6 million people.

The attack significantly disrupted Ascension’s operations across its extensive network, encompassing 134,000 associates, 35,000 affiliated providers, and 140 hospitals in 19 states and the District of Columbia. Immediate consequences included the diversion of ambulances, closure of pharmacies, and a reversion to manual record-keeping methods as critical IT systems had to be taken offline.

Ascension attorney Sunil Shenoi said in his December 19 letter to the Maine Attorney General’s Consumer Protection Division that the company “will begin notifying applicable Maine residents of the security incident” through the U.S. Postal Service.

Despite the growing scale of cyber threats against the healthcare industry, a Congressional Research Service (CRS) report earlier this month emphasized that “there is no comprehensive digital data protection law in the United States.” Variable state data privacy and security laws compound this problem. Furthermore, while many data protection guidance documents are available, they are voluntary.

The attack on Ascension is the latest cyber-attack targeting the healthcare sector, which is particularly vulnerable due to the sensitive nature of patient data and the critical importance of uninterrupted medical services. Earlier this year, a similar ransomware attack on Change Healthcare, a subsidiary of UnitedHealth Group, affected the personal health information of 100 million people, underscoring once again the escalating cybersecurity challenges healthcare providers face.

The February ransomware attack on Change Healthcare – attributed to the BlackCat cybercrime group – disrupted electronic payments and medical claims processing affecting healthcare providers and patients nationwide.

Keep reading

Mitt Romney’s AI Bill Seeks to Ban Anonymous Cloud Access, Raising Privacy Concerns

A new Senate bill, the Preserving American Dominance in AI Act of 2024 (S.5616), has reignited debate over its provisions, particularly its push to impose “know-your-customer” (KYC) rules on cloud service providers and data centers. Critics warn that these measures could lead to sweeping surveillance practices and unprecedented invasions of privacy under the guise of regulating artificial intelligence.

KYC regulations require businesses to verify the identities of their users. Applied to digital platforms, such rules could significantly impact privacy by linking individuals’ online activities to their real-world identities, effectively eliminating anonymity and enabling intrusive surveillance.

Keep reading

Privacy in Pieces: States Scramble to Protect Data as Congress Dithers

As Congress struggles to catch up to the European Union’s comprehensive data privacy regulations, some US states have begun to forge their own robust legislation to increase user protection. But this system only protects the data of some Americans, leaving more than half the country without guaranteed data protection or privacy rights.

And it may take years before a national solution is created, if one ever is.

The EU took its first step towards providing sweeping privacy protection years ago, with the creation of the region’s General Data Protection Regulation (GDPR).

The GDPR, which took effect in 2018 and gives individuals ownership over their personal information and the right to control who can use it, is often marked as the first major, multinational step towards comprehensive data protection and privacy.

Traditionally, the EU’s approach to data privacy stems from a human rights standpoint and has its roots in World War II, when the Nazi party collected personal data to commit numerous atrocities and, later, when the East German secret police, the Stasi, carried out invasive state surveillance.

After the war ended, the right to privacy was enshrined in the European Convention on Human Rights and later in the EU Charter of Fundamental Rights, becoming the ideological foundation on which data privacy laws have been built in the EU today.

Across the Atlantic, the US Constitution does not explicitly provide a right to privacy.

Rather than enacting a comprehensive federal law, the US federal government has taken a reactive approach, passing legislation only after issues arise in a few specific business sectors, which has resulted in a series of data protection laws addressing specific types of data. For example, the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (GLBA) have protected medical and financial data respectively since the 1990s.

“The US is very much an innovation, capital-first society,” said Jodi Daniels, founder and CEO of privacy consultancy firm Red Clover Advisors. “And they do want to protect the people, but it has to all get balanced.”

But in recent years, some lawmakers have begun to push back against this system by introducing comprehensive data privacy bills, like the bipartisan American Privacy Rights Act (APRA).

Introduced in April by Sen. Maria Cantwell (D-WA) and Rep. Cathy McMorris Rodgers (R-WA), APRA is like GDPR in that it is not limited to specific business sectors and aims to minimize the amount and types of data companies can collect, give consumers control over their information, and allow them to opt out of targeted advertising.

While the legislation didn’t get very far, stalling in the House Committee on Energy and Commerce, it’s the furthest any comprehensive privacy bill has gone in Congress yet. To become law, however, it would have to be reintroduced next year when Republicans control both chambers. 

Some lawmakers, like Sen. Ted Cruz (R-TX), contend that APRA is more concerned with “controlling the internet” than with balancing innovation and privacy protection, and argue that the private right of action in the act, which allows individuals to pursue legal action if their privacy is violated, would give overwhelming power to trial lawyers.

Keep reading

UN General Assembly to Adopt Controversial Cybercrime Treaty, Ignoring Privacy and Free Speech Concerns

The United Nations General Assembly will this week adopt the UN Cybercrime Treaty, with the US expected to be among the countries that support the controversial document.

Opponents will then have to hope that various UN member states eventually opt not to sign and ratify the treaty, which has been described as everything from “flawed” to “a threat to free speech and privacy” and a tool for “transnational oppression.”

Among those opponents are human rights and media organizations, as well as tech companies; doubts have been expressed even by the UN High Commissioner for Human Rights, among others.

Yet governments and law enforcement agencies are among the Cybercrime Treaty’s supporters since it opens up the possibility of more effective cross-border cooperation and evidence (including personal data) gathering and sharing.

But the final text that is about to be adopted falls short of international human rights standards in many places, allowing UN members who sign the document to choose whether to build a number of these standards into their own implementations.

Keep reading

Mike Lee’s App Store Accountability Act Would Make Google and Apple Check IDs

Utah Republican Senator Mike Lee has introduced a bill to keep porn out of app stores. There might just be one tiny problem here: They already do.

So, what’s the point? Dig a little deeper and you’ll see that this bill is about forcing age verification on app stores and mobile devices, with a side goal of chilling sex-related speech.

Lee is framing his new bill (S. 5364) as a matter of “accountability”—a word found right in the bill’s title—and of preventing “big corporations” from “victimiz[ing] kids” with “sexual and violent content.” We can’t count on tech companies to act “moral” on their own accord, Lee posted to X.

But big corporations like Google and Apple already ban apps featuring sexual content, and these bans extend not just to kids but to everybody.

While apps can be downloaded from a plethora of sources, there are two main centralized app marketplaces: Apple’s App Store, for iPhones, and the Google Play store, for Androids. Play Store guidelines reject all apps “that contain or promote sexual content or profanity, including pornography, or any content or services intended to be sexually gratifying.” The App Store explicitly prohibits apps featuring “overtly sexual or pornographic material,” which it defines broadly to include any “explicit descriptions or displays of sexual organs or activities intended to stimulate erotic rather than aesthetic or emotional feelings.” Apple also bans “hookup” apps and any other “apps that may include pornography or be used to facilitate prostitution.”

Lee’s bill can’t be about simply convincing Apple and Google to adopt his version of morality, since they already have.

Keep reading

X’s Linda Yaccarino Backs Kids’ “Safety” Bill as Digital ID Privacy Fears Grow

As the legislative session nears its conclusion, X CEO Linda Yaccarino has announced her role in revising the Kids Online Safety Act (KOSA), a move seemingly intended to sway hesitant Republican leaders in the House. But skeptics warn that the bill’s approach to protecting children online—through measures likely to lead to age verification—could come at the cost of privacy and online anonymity, leading to the broader adoption of digital ID systems.

Under KOSA, tech platforms would face a “duty of care” to prevent harm to minors, targeting features like infinite scroll and photo filters. While Senators Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN) lauded the updates for “safeguarding free speech online and ensuring it is not used to stifle expression,” privacy advocates argue the bill’s underlying mechanisms remain problematic. They warn that fulfilling KOSA’s requirements could necessitate platforms to verify users’ ages, potentially by tying online activity to government-issued IDs—a move that threatens to erode online anonymity and jeopardize free expression.

While the bill itself does not mandate age verification, its “duty of care” over content shown to minors could push platforms to introduce age verification to avoid liability. Even in its updated text, the bill still contains a controversial provision that will likely pave the way for online age verification: it requires the Secretary of Commerce, FTC, and FCC to study “options for developing systems to verify age at the device or operating system level.”

X owner Elon Musk has recently criticized Australia for trying to implement a similar bill, so it’s unclear why Musk and Yaccarino aren’t aligned on the issue.

Keep reading

Meet the Spyware Companies Preparing to Unleash Their Tech During Trump’s 2nd Term

In late September, US Immigration and Customs Enforcement (ICE) signed a $2 million, one-year contract with controversial Israeli spyware vendor Paragon Solutions. The contract involved Paragon’s US subsidiary, based in Chantilly, Virginia, and ICE’s Homeland Security Investigations Division 3.

Paragon claims its tools can help law enforcement and governments remotely crack encrypted messaging platforms like WhatsApp, Telegram, Signal, and Facebook Messenger.

The agreement calls for Paragon to provide ICE with a “fully configured proprietary solution including license, hardware, warranty, maintenance and training.” The agreement was first reported on by Wired.

Within weeks of the ICE-Paragon contract becoming public, Wired reported that the contract was under review by the White House to determine whether it violates a 2023 executive order issued by the Biden administration. Executive Order 14093 was signed by President Joe Biden in March 2023 as part of an ongoing US government effort specifically aimed at restricting the use of commercial spyware by U.S. agencies.

The EO says the US government will continue to promote the “responsible use” of spyware that aligns with promoting “democratic values.” Despite the U.S. government’s efforts to prosecute journalists like Julian Assange, the EO claims the U.S. has an interest in “promoting respect for human rights; and defending activists, dissidents, and journalists against threats to their freedom and dignity.”

The Biden administration has also made efforts to rein in the commercial spyware market, including placing spyware vendors like Israeli firm NSO Group and Intellexa on the “Entity List,” which prevents any US companies from doing business with them. The Biden White House has also implemented a visa restriction policy for individuals “who have been involved in the development and sale of commercial spyware or who are immediate family members of those involved.”

Keep reading