Florida AG Subpoenas Medical Firms Over ‘Backdoor’ on China-Made Devices

Florida Attorney General James Uthmeier has subpoenaed two medical companies selling Chinese-made patient monitors over concerns that the devices could send patient data to China.

Uthmeier’s office stated in a press release that it had taken legal action against Contec Medical Systems, a China-based company known for making patient monitors, and Epsimed, a Miami-based company that resells Contec-made monitors under its own brand name.

The office alleged that Contec “concealed serious security problems” in its products, including a built-in “backdoor” that could “allow bad actors to manipulate data” on the devices without knowledge of either the patient or the provider, and programming that automatically sends patient information to an IP address that belongs to a university in China.

“Some of the most private, personal information” is going to China “without the consent, and in most cases, the awareness of the patient,” Uthmeier told The Epoch Times. “I think there’s a major consumer protection issue for Floridians, for Americans as a whole, and we’re not going to stand for it.”

Uthmeier’s office alleged that Contec and Epsimed may have violated a state law, the Deceptive and Unfair Trade Practices Act, by making assurances about product quality that the devices, given their security vulnerabilities, appear to fall far short of. He threatened to pursue damages, civil penalties, and injunctive relief to protect consumers.

Contec Medical Systems is headquartered in Qinhuangdao, a port city located in northern China’s Hebei Province. It has an affiliate called Contec Medical Systems USA Inc. in Illinois to handle the U.S. market.

Keep reading

Largest Data Breach in History: *16 Billion* Login Credentials Exposed in Databases

Security researchers have uncovered potentially the largest data breach in history, comprising an astounding 16 billion login credentials, which include Apple accounts.

9to5Mac reports that a team of security researchers has stumbled upon a massive trove of stolen login credentials, exposing an unprecedented 16 billion records, including Apple accounts. The discovery, which researchers describe as “one of the largest data breaches in history,” has sent shockwaves through the cybersecurity community and raised serious concerns about the potential for widespread account takeovers, identity theft, and highly targeted phishing attacks.

The researchers, from the cybersecurity firm Cybernews, initially found a database containing 184 million records sitting unprotected on a web server last month. However, as they delved deeper, they realized that this was just one of many unsecured databases full of private information. Further investigation revealed an additional 29 datasets, each containing tens of millions to over 3.5 billion records. In total, the researchers uncovered a staggering 16 billion records, making this one of the biggest stolen login discoveries of all time.

What sets this breach apart from others is the freshness and organization of the data. The researchers emphasized that these are not just recycled old breaches, but rather “fresh, weaponizable intelligence at scale.” The neatly structured data, which includes URLs, usernames, and passwords, points to infostealers as the likely source. Infostealers are a type of malware specifically designed to collect login credentials in this exact format.
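For readers unfamiliar with infostealer logs, the sketch below shows roughly what such “neatly structured” records look like and how a defender might tally exposure by service. The sample lines, field order, and delimiter are illustrative assumptions on my part, not data from the actual datasets.

```python
# Hypothetical example of infostealer-style credential records (URL, username,
# password) and a quick tally of exposed credentials per service.
from collections import Counter
from urllib.parse import urlparse

sample_records = [
    "https://accounts.example.com/login:alice@example.com:hunter2",
    "https://mail.example.org/signin:bob@example.org:pa55word",
    "https://accounts.example.com/login:carol@example.net:letmein",
]

exposure = Counter()
for record in sample_records:
    # Split from the right, since the URL itself contains a colon.
    url, username, password = record.rsplit(":", 2)
    exposure[urlparse(url).hostname] += 1

for host, count in exposure.most_common():
    print(f"{host}: {count} exposed credential(s)")
```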

The implications of this breach are far-reaching and deeply concerning. With access to such a vast number of login credentials, cybercriminals can easily carry out account takeovers, steal identities, and launch highly targeted phishing campaigns. Apple accounts, which are among the exposed credentials, are particularly worrisome, as they can be used to access a wide range of sensitive information and services, including iCloud, Apple Pay, and the App Store. Other logins reportedly included in the massive datasets include Google, Facebook, Instagram, Amazon, and many other popular web services.

Keep reading

Stripe And Substack Demand Authors’ Financial Details

Financial privacy is a right under the Constitution, as is free speech.  

Under the guise of “credit review,” Stripe is now rolling out a requirement that appears to target conservative or “anti-vax” Substack authors. Stripe is requiring that these authors provide all of their current and historic financial records associated with the bank account into which Stripe deposits Substack subscriber payments (after taking 10% off the top for Substack and 3% for Stripe). Stripe already has information concerning this bank account (including deposits from Stripe), as we have been doing business with Stripe via this account for over two years.

If I or anyone else agrees to these new terms, this newly implemented arbitrary, capricious, and overreaching requirement will provide Stripe with complete records of all financial transactions associated with this account. Consequently, this will give Stripe comprehensive information on all of my customers, patients, and clients, all of my travel (historic and planned), all of my purchases, and any donations (and donor information).

This information from my account and those of any others who comply with this demand can be hacked or sold, provided to the US Government, used to fuel predictive algorithms (AI), used to derive insights into my political orientation, weaponized against me by press or other hostile actors, or used to support future social credit score-based restrictions.

Stripe has a history of financially deplatforming (or debanking) people for political reasons, including removing support for Donald Trump’s presidential campaign. Despite its relatively recent entry into the financial transaction business, Stripe has become a major global financial organization: it processed $1 trillion in payments during 2023 and is now expanding its credit charge program.

Keep reading

Senate Pushes Bill That Could End Private Messaging

Under the pretext of strengthening measures against child exploitation online, a controversial Senate bill is resurfacing with provisions that privacy advocates say would gut critical internet protections and compromise the security and privacy of all citizens.

Known as the STOP CSAM Act of 2025 (S. 1829), the legislation is being criticized for using broad language and vague legal standards that could severely weaken encryption and open the floodgates for content takedowns, including legal content, across a wide range of online services.

We obtained a copy of the bill for you here.

The bill’s stated aim is to curb the spread of child sexual abuse material, a crime already strictly prohibited under federal law. Current regulations already compel online platforms to report known instances of such material to the National Center for Missing and Exploited Children, which coordinates with law enforcement.

However, S. 1829 goes well beyond this existing mandate, targeting a wide spectrum of internet platforms with new forms of criminal and civil liability that could penalize even the most privacy-conscious and compliant services.

The scope of the legislation is sweeping. Its provisions apply not only to large social media platforms but also to private messaging apps, cloud storage services, and email providers.

Keep reading

Radio trick secretly turns laptop into a spy speaker that talks through walls

Security researchers at the University of Florida and the University of Electro-Communications in Japan have revealed that modern digital microphones used in laptops and speakers can leak audio as electromagnetic signals.

This opens the door to a new form of wireless eavesdropping that requires no malware, hacking, or even physical access to your device.

If exploited, this vulnerability could affect billions of devices worldwide, exposing private conversations to corporate spies and government surveillance.

How does this attack work?

Devices such as laptops and smart speakers rely on MEMS microphones: tiny components that convert audio into digital pulses which still contain remnants of the original speech. Those pulses generate weak radio emissions that escape the device as invisible broadcasts and can be captured nearby.

“With an FM radio receiver and a copper antenna, you can eavesdrop on these microphones. That’s how easy this can be,” said Sara Rampazzi, a professor of computer and information science and engineering at the University of Florida who co-authored the new study. “It costs maybe a hundred dollars, or even less.”
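To make that leakage concrete, here is a minimal toy model (my own sketch in Python, not the researchers’ code) of why the one-bit pulse stream a MEMS microphone produces still carries the audio: a first-order sigma-delta modulator turns a tone into pulses, and simple averaging of those pulses recovers the tone. The 3.072 MHz clock rate is an assumed, typical value.

```python
import numpy as np

fs_pdm = 3_072_000                            # assumed PDM clock rate, typical for MEMS mics
t = np.arange(fs_pdm // 50) / fs_pdm          # ~20 ms of signal
audio = 0.5 * np.sin(2 * np.pi * 1000 * t)    # a 1 kHz tone standing in for speech

# First-order sigma-delta modulation: produce the mic's one-bit pulse stream.
pdm = np.empty_like(audio)
integrator, prev_out = 0.0, 0.0
for i, x in enumerate(audio):
    integrator += x - prev_out
    prev_out = 1.0 if integrator >= 0 else -1.0
    pdm[i] = prev_out

# Two stages of boxcar averaging low-pass filter and decimate the pulses
# (3.072 MHz -> 48 kHz -> 6 kHz), recovering the original waveform.
recovered = pdm.reshape(-1, 64).mean(axis=1).reshape(-1, 8).mean(axis=1)
reference = audio.reshape(-1, 64).mean(axis=1).reshape(-1, 8).mean(axis=1)
print("correlation with original tone:",
      round(float(np.corrcoef(recovered, reference)[0, 1]), 3))
```

Anything that captures those pulses, including their radio-frequency leakage picked up by a cheap receiver and antenna, therefore captures a recoverable copy of the sound.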

The experiment that proved it all

The team of researchers proved their theory using eerie sounds. A woman’s distorted voice emerged from the radio equipment as she spoke test sentences like “The birch canoe slid on the smooth planks” and “Glue the sheet to the dark blue background.” Each transmission penetrated concrete walls up to 10 inches thick.

Laptops proved to be the weakest link as their microphones are connected through long internal wires that act as antennas, amplifying the leaked signals.

Now comes the dangerous part. For the leak to happen, your microphone does not need to be actively recording. Simply having applications like Spotify, Amazon Music, or Google Drive running can enable the microphone to leak radio signals.

AI in the scenario

The researchers didn’t stop there. They processed the intercepted signals with AI speech-to-text tools from OpenAI and Microsoft, which cleaned up the audio and converted the recordings into clear, searchable text.

Surprisingly, in tests, the attack recognized spoken digits with 94.2% accuracy from up to 2 meters away, even through a concrete wall. It maintained a 14% transcription error rate, making the majority of the conversations understandable.
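As a rough illustration of that last step, the snippet below feeds a recovered recording to an off-the-shelf speech-to-text API. It assumes the OpenAI Python SDK, the whisper-1 model, and a placeholder file name; the researchers’ actual tooling is not described in detail, so treat this as a sketch of the general workflow, not their pipeline.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "recovered_audio.wav" is a stand-in for audio demodulated from the leaked RF signal.
with open("recovered_audio.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",           # general-purpose speech-to-text model
        file=audio_file,
        response_format="text",
    )

print(transcript)  # noisy intercepted speech converted into searchable text
```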

Keep reading

OpenAI Is Ordered to Save Every ChatGPT Chat — Even the Ones You Delete

A federal court order requiring OpenAI to retain all ChatGPT conversations, including those users have deleted, has stirred strong concern among privacy advocates and added pressure to a growing legal battle over the use of copyrighted material in AI systems.

On May 13, US Magistrate Judge Ona T. Wang directed OpenAI to “preserve and segregate all output log data that would otherwise be deleted on a going-forward basis until further order of the Court.” Although the order was issued several weeks ago, it only came to wider attention this week as OpenAI began taking formal steps to challenge it.

The ruling stems from multiple lawsuits filed by media organizations, including The New York Times, that accuse OpenAI of unlawfully using their copyrighted content to train and operate ChatGPT.

In response, OpenAI submitted a filing urging US District Judge Sidney H. Stein to overturn what it described as a “sweeping, unprecedented order.”

The company argued that the directive forces it to ignore user choices about data deletion, jeopardizing the privacy of millions. OpenAI also pointed to a statement from The New York Times editorial board asserting that Americans “should be able to control what happens to their personal data.”

Keep reading

Army surveillance balloons spotted over Tucson raise privacy concerns from advocates

The U.S. Army and a private company are flying high-altitude surveillance balloons over the Tucson area, raising concerns among privacy advocates. 

Multiple high-altitude balloons have been spotted over the Tucson and Sierra Vista area for more than a week, with one balloon in particular staying over the area longer than any of the others. That balloon, with the registration number N257TH, has made headlines in the past. 

The balloon is owned by South Dakota aerospace company Aerostar, and in 2023 was mistaken for a Chinese spy balloon. The balloon is actually part of Aerostar’s “Thunderhead” balloon system, which has been doing multiple tests with the military and other contractors across the nation and around the globe. 

“It is a technology that should not and constitutionally cannot be applied to the American people,” Jay Stanley, a senior policy analyst at the American Civil Liberties Union, told the Arizona Mirror. “Even testing for eventual overseas use in legitimate combat theaters raises a lot of questions about what kind of data is being collected.”

Aerostar would not answer specific questions about what type of testing was being done. The company referred additional questions to the U.S. Department of Defense and the Army, neither of which responded to multiple requests for comment. 

Aerostar confirmed that the flights were not connected to U.S. Customs and Border Protection or Department of Homeland Security programs, “however high-altitude balloons would be ideal for that type of mission,” Aerostar Culture and Communications Director Anastasia Quanbeck said in an email to the Mirror.

“By leveraging directional wind patterns at high altitudes, Aerostar’s Thunderhead Balloon Systems offer groundbreaking capabilities for navigation and persistence over areas of interest,” she said. “Aerostar Thunderhead Balloon Systems are capable of supporting intelligence, surveillance, and reconnaissance missions, as well as extending communications across wide distances, environmental monitoring, earth observation, and scientific research.” 

Quanbeck said she was not able to discuss the work the company does with the DOD or the Army. 

Keep reading

No Phone Home Is the Privacy Rebellion Digital IDs Didn’t See Coming

Welcome to a new privacy-first initiative challenging the digital identity status quo, urging a sharp turn away from the surveillance-ready infrastructure embedded in mobile driver’s licenses.

The campaign, called No Phone Home, brings together a broad alliance of civil liberties groups, privacy experts, technologists, lawmakers, and public officials who are resisting the ways digital IDs compromise people’s rights.

What’s fueling the campaign is concern over how mobile driver’s licenses, increasingly adopted in the US and abroad, are built atop a technical framework that allows them to silently transmit data back to issuing authorities. While this function may not be active by default, it exists; and that, privacy advocates argue, is a serious vulnerability.

Even if that capability goes unused today, the campaign’s statement warns, an architecture that allows data to be sent back to government servers eventually will be used for exactly that.

Keep reading

Fifth Circuit Affirms Reasonable Expectation of Privacy in Cloud Storage in Dropbox Case

A federal appeals court has ruled that state officials violated the Fourth Amendment when they orchestrated the covert retrieval of documents from a nonprofit’s Dropbox folder, an outcome that significantly strengthens legal protections for digital privacy in cloud-based environments.

In a 25-page decision issued May 28, 2025, the US Court of Appeals for the Fifth Circuit held that The Heidi Group, a Texas-based pro-life healthcare organization, had a reasonable expectation of privacy in its digital files and that a state investigator’s role in acquiring them without judicial authorization amounted to an unconstitutional search.

We obtained a copy of the decision for you here.

Writing for the court, Judge Andrew S. Oldham emphasized that the constitutional right to be free from unreasonable searches extends to “the content of stored electronic communications,” including files housed in commercial cloud platforms.

“Heidi has a reasonable expectation of privacy in its documents and files uploaded to Dropbox,” the opinion stated. “Heidi’s records are analogous to letters, phone calls, emails, and social media messages: Each contains information content transmitted through or stored with an intermediary that is not intended to ‘be broadcast to the world.’”

The controversy arose after Phyllis Morgan, a former employee of The Heidi Group, exploited her lingering access to the organization’s Dropbox folder for nearly a year after being terminated.

Rather than reporting the breach or seeking lawful channels to obtain the data, a senior investigator from the Texas Health and Human Services Commission’s Office of Inspector General (OIG), Gaylon Dacus, allegedly encouraged the ex-employee to continue accessing the nonprofit’s confidential materials and forward them to the state.

Keep reading

Opinion: The Most Terrifying Company in America Is Probably One You’ve Never Heard Of

Most Americans have never heard of Palantir. That’s by design. It doesn’t make phones or social platforms. It doesn’t beg for your data with bright buttons or discount codes. Rather, it just takes it. Quietly. Legally. Systematically. Palantir is a back-end beast, the silent spine of modern surveillance infrastructure.

Palantir’s influence isn’t hypothetical. It’s operational. From the battlefields of Ukraine to the precincts of Los Angeles, its software guides drone strikes, predicts crime, allocates police resources, and even helps governments decide which children might someday become “threats.” These aren’t sci-fi hypotheticals. They are pilot programs, already integrated, already scaling.

This software—Gotham, Foundry, and now its Artificial Intelligence Platform (AIP)—is designed to swallow everything: hospital records, welfare files, license plate scans, school roll calls, immigration logs and even your tweets. It stitches these fragments into something eerily complete—a unified view of you. With each data point, the image sharpens.

If Facebook turned people into products, Palantir turns them into probabilities. You’re not a user. You’re a variable—run through predictive models, flagged for anomalies, and judged in silence.

This is not just surveillance. It’s prediction. And that distinction matters: Surveillance watches. Prediction acts. It assigns probabilities. It flags anomalies. It escalates risk. And it trains bureaucrats and law enforcement to treat those algorithmic suspicions as fact. In short: the software decides, and people follow.

Keep reading