TikTok Says Privacy Makes Users Less Safe

Over the past five years, the largest social platforms settled on a clear position about private messaging: lock it down. Facebook Messenger turned on end-to-end encryption by default. Instagram did the same. X joined the club. Yes, metadata is still an issue, and the protocols used matter, but the general move was toward more privacy for actual message content.

TikTok looked at that trend and made a different choice. Then it scheduled a briefing in London with the BBC to explain the reasoning.

The explanation was safety.

In the UK, TikTok belongs to ByteDance, a Chinese technology company that operates under Beijing’s jurisdiction. China maintains strict limits on end-to-end encryption inside its borders. TikTok, after its own review of the issue, reached the same policy outcome for its messaging system.

Alan Woodward, a cybersecurity professor at the University of Surrey, raised that point directly. The company’s “Chinese influence might be behind the decision,” he said, adding that end-to-end encryption is “largely banned in China.”

TikTok declined to engage with that suggestion, of course. The remark hung in the air. It’s worth adding, however, that TikTok’s US operation has given no indication that it is moving toward private messaging standards either.

End-to-end encryption is simple in theory. Only the people in a conversation can read the messages. The platform running the service cannot access the content. Governments cannot request it. Engineers inside the company cannot view it.

TikTok’s system operates in a different way. Messages on the platform remain readable to the company. Employees can access them under defined circumstances. Law enforcement agencies can request them through legal channels.
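The difference between the two designs can be sketched in a few lines. The toy Python example below (a deliberately simplified XOR keystream, for illustration only, not a real cipher or protocol) shows the property at stake: with end-to-end encryption, the relaying server holds only ciphertext it cannot read, while in a server-readable design the platform holds the plaintext or the key as well.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream derived from the key (illustration only, not secure)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream reverses itself

# End-to-end model: the conversation key lives only on the participants' devices.
conversation_key = secrets.token_bytes(32)

# What the platform's servers store and relay:
ciphertext = encrypt(conversation_key, b"meet at noon")

# The recipient, holding the key, recovers the message...
assert decrypt(conversation_key, ciphertext) == b"meet at noon"
# ...while the server, lacking the key, sees only ciphertext.
assert ciphertext != b"meet at noon"
```

In the server-readable model, the platform effectively sits on the key side of this sketch, which is exactly what allows both content moderation and content disclosure.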

TikTok argues that readable messages allow the company to identify harmful activity.

The debate turns on a basic technical fact. “We can read your messages to catch predators,” and “we can read your messages” describe the same system.

Keep reading

Zohran Mamdani Has Already Broken His Promise to Be Transparent

New York City Mayor Zohran Mamdani has come under fire for using the encrypted messaging app Signal to communicate with elected officials while conducting government business.

On the campaign trail, Mamdani repeatedly promised his administration would be transparent. Yet a Politico report revealed that the mayor used Signal from a personal phone number to communicate with elected officials and political strategists. In at least one of these exchanges, he discussed official city business.

Three people with knowledge of the matter told POLITICO that as mayor Mamdani has used the encrypted messaging app to communicate with fellow elected officials and political advisers. In at least one instance, he’s discussed government business over the app, according to one of those people, who like the others, was granted anonymity to discuss the sensitive issue.

POLITICO independently confirmed that Mamdani’s Signal account, registered to his personal cell phone number, remains active.

Norman Siegel, a veteran First Amendment lawyer who previously helmed the New York Civil Liberties Union, said mayors should never use Signal to communicate with other government officials as a rule of thumb — and that there’s another particularly important reason why Mamdani himself should avoid the app.

“With our new mayor, so much of what he’s articulating is a breath of fresh air,” Siegel said. “I would urge him to not engage in Signal or similar kinds of applications that basically are meant to hide information and prevent the public from knowing the inner workings of government.”

Keep reading

Republican Lawmakers Demand Answers on UK’s iCloud Encryption Backdoor Order

Two senior Republican lawmakers are demanding answers from the British government about its secret order forcing Apple to break its own encryption. The UK has until March 11 to respond.

House Judiciary Committee Chairman Jim Jordan and Foreign Affairs Committee Chairman Brian Mast sent a joint letter on Wednesday to Home Secretary Shabana Mahmood, pressing for a formal briefing on the Technical Capability Notice (TCN) served on Apple under the UK’s Investigatory Powers Act.

We obtained a copy of the letter for you here.

It’s the latest move in a surveillance fight that began over a year ago and has rattled the US-UK relationship at the highest levels.

In January 2025, UK security officials secretly ordered Apple to build a backdoor into iCloud that would allow them to decrypt any user’s data, anywhere in the world. Not just suspected criminals, not just UK citizens. Everyone.

The order targeted Apple’s Advanced Data Protection (ADP) feature, the optional end-to-end encryption that ensures even Apple can’t read iCloud backups. Apple’s response was to pull ADP from the UK market entirely in February 2025, stripping strong encryption options from roughly 35 million iPhone users rather than comply with a demand it couldn’t legally discuss.

UK law makes it a criminal offense for companies to confirm or deny the existence of such orders, even to their own government.

Apple couldn’t tell the US Department of Justice that the order existed. The DOJ couldn’t verify whether the order complied with the CLOUD Act, the bilateral agreement governing how the two countries share access to digital evidence. That agreement explicitly states it “shall not create any obligation that providers be capable of decrypting data.” The UK’s order appears to do exactly that.

The reaction in Washington was bipartisan. Senator Ron Wyden and Congressman Andy Biggs slammed the order as “effectively a foreign cyber attack waged through political means.”

President Trump compared the UK’s conduct directly to China’s. Speaking to the Spectator after meeting Prime Minister Keir Starmer, Trump said: “We actually told [Starmer] . . . that’s incredible. That’s something, you know, that you hear about with China.” Director of National Intelligence Tulsi Gabbard called any attempt to compel Apple to create security weaknesses an “egregious violation” of privacy and confirmed legal and intelligence teams were assessing the implications.

Keep reading

UK Government Plans to Use Delegated Powers to Undermine Encryption and Expand Online Surveillance

The UK government wants to scan people’s photos before they send them. Not just children’s photos. Everyone’s.

Technology Secretary Liz Kendall spelled it out on BBC Breakfast, floating a proposal to “block photographs being sent that are potentially nude photographs by anybody or block children from sending those.” That second clause is the tell. Blocking “anybody” from sending potentially nude images requires scanning everybody’s messages. There’s no technical path to that outcome that doesn’t involve reading content the sender assumed was private.

Kendall said the government is conducting a consultation on “whether we should have age limits on things like live streaming” and whether there should be “age limits on what’s called stranger pairing, for example, on games online.” The consultation, she said, will look at all of these. That list now covers messaging apps, photo sharing, gaming, and live streaming. Any feature that lets you share an image with another person potentially falls inside it.

This is how the mandate grows. The government announced a push for new delegated powers on February 16, framing them around age verification for social media and VPNs.

Keep reading

ProtonMail Shares Activist’s IP Address With Authorities After Swiss Court Order

End-to-end encrypted email provider ProtonMail has drawn criticism after it acceded to a legal request and shared the IP address of anti-gentrification activists with law enforcement authorities, leading to arrests in France.

The Switzerland-based company said it received a “legally binding order from the Swiss Federal Department of Justice” related to a collective called Youth for Climate, which it was “obligated to comply with,” compelling it to hand over the IP address and information about the type of device used by the group to access the ProtonMail account.

On its website, ProtonMail advertises that: “No personal information is required to create your secure email account. By default, we do not keep any IP logs which can be linked to your anonymous email account. Your privacy comes first.”

Despite its no-IP-logs claim, the company acknowledged that while it cannot legally comply with requests from non-Swiss law enforcement authorities directly, it can be compelled to do so when Swiss agencies agree to assist foreign services such as Europol in their investigations.

“There was no possibility to appeal or fight this particular request because an act contrary to Swiss law did in fact take place (and this was also the final determination of the Federal Department of Justice which does a legal review of each case),” the company said in a lengthy response posted on Reddit.

Put simply, ProtonMail will not only have to comply with Swiss government orders, it will be forced to hand over relevant data when individuals use the service to engage in activities that are deemed illegal in the country. This includes monitoring IP addresses from users in “extreme criminal cases,” according to its transparency report.

“Proton must comply with Swiss law. As soon as a crime is committed, privacy protections can be suspended and we’re required by Swiss law to answer requests from Swiss authorities,” ProtonMail founder and CEO Andy Yen tweeted, adding: “It’s deplorable that legal tools for serious crimes are being used in this way. But by law, [ProtonMail] must comply with Swiss criminal investigations. This is obviously not done by default, but only if legally forced.”

ProtonMail users who are concerned about the visibility of their IP addresses should use a VPN or access the email service over the Tor network for additional anonymity.

“The prosecution in this case seems quite aggressive. Unfortunately, this is a pattern we have increasingly seen in recent years around the world (for example in France where terror laws are inappropriately used),” the company said.

Keep reading

EU Law Could Extend Scanning of Private Messages Until 2027

The European Parliament is considering another extension of Chat Control 1.0, the “temporary” exemption that allows communications providers to scan private messages (under the premise of preventing child abuse) despite the protections of the EU’s ePrivacy Directive.

A draft report presented by rapporteur Birgit Sippel (S&D) would prolong the derogation until April 3, 2027.

At first glance, the proposal appears to roll back some of the most controversial elements of Chat Control. Text message scanning and automated analysis of previously unknown images would be explicitly excluded. Supporters have framed this as a narrowing of scope.

However, the core mechanism of Chat Control remains untouched.

The draft continues to permit mass hash scanning of private communications for so-called “known” material.
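Hash scanning, in this sense, means computing a fingerprint of each attachment and checking it against a database of fingerprints of already-catalogued material. The sketch below is illustrative Python only: real deployments use perceptual hashes (such as PhotoDNA) rather than plain SHA-256, but the matching logic is the same.

```python
import hashlib

# Hypothetical database of hashes of previously catalogued "known" material.
# In production this would hold perceptual hashes supplied by clearinghouses.
KNOWN_HASHES = {
    hashlib.sha256(b"previously-flagged-image-bytes").hexdigest(),
}

def is_known_material(attachment: bytes) -> bool:
    """Flag an attachment only if its fingerprint is already in the database."""
    return hashlib.sha256(attachment).hexdigest() in KNOWN_HASHES

# A byte-identical copy of catalogued material matches...
assert is_known_material(b"previously-flagged-image-bytes")
# ...but new material, or a trivially altered copy, does not.
assert not is_known_material(b"previously-flagged-image-bytes-v2")
```

Because only copies of already-catalogued files can match, newly produced material passes through undetected, and with exact hashing even a one-byte change defeats the check.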

According to former MEP and digital rights activist Patrick Breyer, approximately 99 percent of all reports generated under Chat Control 1.0 originate from hash-based detection.

Almost all of those reports come from a single company, Meta, which already limits its scanning to known material only. Under the new proposal, Meta’s practices would remain fully authorized.

As a result, the draft would not meaningfully reduce the volume, scope, or nature of surveillance. The machinery keeps running, with a few of its most visibly controversial attachments removed.

Hash scanning is often portrayed as precise and reliable. The evidence points in the opposite direction.

First, the technology is incapable of understanding context or intent. Hash databases are largely built using US legal definitions of illegality, which do not map cleanly onto the criminal law of EU Member States.

The German Federal Criminal Police Office (BKA) reports that close to half of all chat control reports are criminally irrelevant.

Each false positive still requires assessment, documentation, and follow-up. Investigators are forced to triage noise rather than pursue complex cases involving production, coercion, and organized abuse.

The strategic weakness is compounded by a simple reality. Offenders adapt. As more services adopt end-to-end encryption, abusers migrate accordingly. Since 2022, the number of chat-based reports sent to police has fallen by roughly 50 percent, not because abuse has declined, but because scanning has become easier to evade.

“Both children and adults deserve a paradigm shift in online child protection, not token measures,” Breyer said in a statement to Reclaim The Net.

“Whether looking for ‘known’ or ‘unknown’ content, the principle remains: the post office cannot simply open and scan every letter at random. Searching only for known images fails to stop ongoing abuse or rescue victims.”

Keep reading

Microsoft confirms it will give the FBI your Windows PC data encryption key if asked — you can thank Windows 11’s forced online accounts for that

Microsoft has confirmed in a statement to Forbes that it will provide the FBI with BitLocker encryption keys when presented with a valid legal order. These keys can decrypt the data on a computer running Windows, giving law enforcement the means to break into a device and access its contents.

The news comes as Forbes reports that Microsoft gave the FBI the BitLocker encryption keys to access a device in Guam that law enforcement believed to have “evidence that would help prove individuals handling the island’s Covid unemployment assistance program were part of a plot to steal funds” in early 2025.

This was possible because the device in question had its BitLocker encryption key saved in the cloud. By default, Windows 11 forces the use of a Microsoft Account, and the OS will automatically tie your BitLocker encryption key to your online account so that users can easily recover their data in scenarios where they might get locked out. This can be disabled, letting you choose where to save them locally, but the default behavior is to store the key in Microsoft’s cloud when setting up a PC with a Microsoft Account.

Keep reading

Only ‘braindead’ believe WhatsApp is secure – Durov

Pavel Durov, the Russian tech entrepreneur who created the Telegram messenger app, has claimed there is no doubt WhatsApp lacks any meaningful privacy, after its parent company was hit with a new lawsuit.

In a major class-action lawsuit filed against Meta Platforms, Inc. in a US district court last week, an international group of plaintiffs from countries including Australia, Brazil and India has accused the company of making false claims about the privacy of its WhatsApp service.

“You’d have to be braindead to believe WhatsApp is secure in 2026,” Durov posted on X on Monday, mocking suggestions that Meta cannot read users’ messages. “When we analyzed how WhatsApp implemented its ‘encryption’, we found multiple attack vectors.”

The lawsuit challenges the cornerstone of WhatsApp’s privacy promise: its default end-to-end encryption, which uses the Signal protocol. The plaintiffs allege that, contrary to its in-app claim that “only people in this chat can read, listen to, or share” messages, Meta and WhatsApp “store, analyze, and can access virtually all of WhatsApp users’ purportedly ‘private’ communications.” The complaint cites unspecified whistleblowers as the source of this information.

Keep reading

UK Orders Ofcom to Explore Encryption Backdoors

By now, we’ve all heard the familiar refrain: “It’s for your safety.” It’s the soothing mantra of every government official who’s ever wanted a peek behind your digital curtains.

This week, with a move that would make East Germany blush, the UK government officially confirmed its intention to hand Ofcom (yes, that Ofcom, the regulator that once investigated whether Love Island was too spicy) the keys to your private messages.

The country, already experiencing rapidly declining civil liberties, is now planning to scan encrypted chats for “bad stuff.”

Now, for those unfamiliar, Ofcom is the UK’s communications regulator that has recently been given censorship pressure powers for online speech.

It’s become the government’s Swiss Army knife for everything from internet censorship to now, apparently, full-blown surveillance.

Under the Online Safety Act, Ofcom has been handed something called Section 121, which sounds like a tax loophole but is actually a legal crowbar for prying open encrypted messages.

It allows the regulator to compel any online service that lets people talk to each other (Facebook Messenger, Signal, iMessage, and so on) to install “accredited technology” to scan for terrorism or child abuse material.

Keep reading

The Encryption Double Life of Canberra

The Australian government is quietly relying on encrypted messaging to conduct sensitive business, even as it hardens its stance against public use of secure communications.

While the public faces increasing surveillance and legal pressure for using end-to-end encryption, senior officials are steering policy conversations into private digital spaces, shielding them from scrutiny under Freedom of Information (FOI) laws.

Since midyear, ministerial staff have been advising lobbyists, peak bodies and industry groups to avoid email altogether and submit reform proposals through the encrypted messaging app Signal.

Some of these exchanges have been requested using disappearing messages, ensuring there is no record retained on government systems.

Several sources confirmed to The Saturday Paper that this guidance is now common across a number of policy areas.

In addition to Signal, stakeholders have been encouraged to use phone calls for detailed conversations and limit the content of any written communications.

In at least one case, after a formal meeting, the follow-up came in the form of a verbal summary rather than the usual written recap sent by email.

While the government has maintained formal channels for official submissions, a secondary mode of policymaking is taking shape.

This mode operates out of reach of archiving protocols and public oversight.

One participant in this informal process described it as an effort to protect the early phases of policy development from outside scrutiny, arguing that “fluid thoughts and ideas” should be exempt from public record.

Yet the effect of these practices is to create a shadow layer of government consultation that leaves no trace and falls outside the accountability mechanisms intended to safeguard democratic participation.

Keep reading