Are we being watched?

A shadowy state agency with no statutory footing, previously used to monitor perfectly lawful yet dissenting speech during the Covid lockdowns, has been deployed by the Labour Government to monitor social media amid ongoing civil unrest across the UK.

The Counter Disinformation Unit (CDU), now rebranded as the National Security Online Information Team (NSOIT), has been given the task just months after the House of Commons Culture, Media and Sport Committee questioned “the lack of transparency and accountability of [NSOIT] and the appropriateness of its reach”, and recommended that the Government commission an independent review of “the activities and strategy” of the unit to report back within 12 months.

Peter Kyle, Labour’s Secretary of State for Science, Innovation and Technology, has tasked NSOIT with monitoring online activity following the outbreak of widespread public disorder in the wake of the murder of three schoolgirls in Southport on 29th July.

David Davis, the Conservative MP who previously called for the CDU to be shut down, told the Telegraph he had no real objection to the unit being used to monitor social media during the riots because “it’s perfectly legitimate for the state to monitor things that might incite violence”. 

That’s true, of course – but the question is whether in doing so NSOIT will also be monitoring and flagging for removal online posts that fall well within the law.

Last year, a report by Big Brother Watch unmasked the scale of the digital surveillance system established during the Covid lockdowns. The government can now call upon at least three domestic surveillance units, all of which have previously been tasked with monitoring social media posts in the UK and flagging “misleading” content to their Whitehall paymasters, who then urge tech platforms to remove the posts.

These units are NSOIT in DCMS; the Intelligence and Communications Unit in the Home Office; the Cabinet Office’s Rapid Response Unit (since disbanded, according to the government); and the 77th Brigade, a combined Regular and Army Reserve unit within the Ministry of Defence.

NSOIT was originally established to fight what the government calls “disinformation”. 

Keep reading

UK PM Keir Starmer Uses Riots To Call For Mass Surveillance and Social Media Censorship

The more things change, the more they remain the same, at least in the UK; after many years of Tory governments’ vigorous efforts to extend mass surveillance indiscriminately targeting citizens and enact stringent anti-free speech laws, the new Labour government seems to be picking up right where the previous one left off.

The aftermath of the Southport riots has elicited the usual medley of reactions: moves to address societal issues with more surveillance, a strengthened police state, and blame placed on “misinformation” and on unproven, but always handy to bring up, “foreign meddling.”

But the real malady seems to be squarely at home: in fact, in the prime minister’s office. Keir Starmer happens to be sitting there now, but the policy hardly ever changes: he, too, wants more mass surveillance based on facial recognition, and more pressure on social media to ramp up censorship.

If anything does change, it is the intensity of these demands, which rights groups like Big Brother Watch have long since rejected as “Orwellian.”

Starmer told a news conference, called after the events branded as far-right riots, that participants in the protests (whom he called “thugs” and compared with football hooligans) are “mobile,” and that for this reason police forces will, going forward, be part of a network of sorts.

The prime minister added that there will be intelligence and data sharing, as well as “wider deployment of facial recognition technology, and preventative action, criminal behavior orders to restrict their movements before they can even board a train, in just the same way that we do with football hooligans.”

Movement restrictions are said to apply only to those with previous convictions, and those who have committed “violence at protests.” But here things get complicated because even those who were charged with relatively minor offenses like disorderly conduct could end up having their movements surveilled and restricted.

Starmer isn’t in favor of enacting new laws; he seems satisfied that all this can be achieved under existing legislation, announcing a “coordinated response” by police across the country, with law enforcement taking greater advantage of those laws than before. He does, however, want more police officers, and it seems that increasing their numbers will be one election campaign promise that is kept.

Keep reading

‘Really Chilling’: Five Countries to Test European Vaccination Card

Five European Union (EU) countries in September will pilot the newly developed European Vaccination Card (EVC), which “aims to empower individuals by consolidating all their vaccination data in one easily accessible location.”

The pilot program marks a step toward the continent-wide rollout of the card, according to Vaccines Today.

Belgium, Germany, Greece, Latvia and Portugal will test the new card in a variety of formats, including printed cards, mailed copies and digital versions for smartphones.

The program aims to “pave the way for other countries by harmonising vaccine terminology, developing a common syntax, ensuring adaptability across different healthcare settings, and refining EVC implementation plans,” Vaccines Today reported.

The plans will be made public in 2026, “extending the EVC system beyond the pilot phases and enabling broad adoption across all EU Member States.”

Keep reading

Lawmakers Request Delay on Meta’s Shutdown of “Fact-Checker” Favorite Content Surveillance Tool

Meta’s decision to shut down a content surveillance tool called CrowdTangle, announced earlier in the year and about to take effect next month, has met with opposition from a group of US lawmakers.

CrowdTangle, which the giant bought in 2016, has over the years been “repurposed” by “fact-checkers,” researchers focusing on “disinformation,” and media outlets that use it to flag content.

Meta said it is being replaced by the Meta Content Library, which is available to some researchers but not to commercial entities (such as media outlets, a number of which are currently running “fact-checking” operations).

Now 17 lawmakers (three Republicans among them) have written to Meta asking that it reconsider this decision, referring to CrowdTangle as a “transparency tool” both for researchers and journalists.

The letter, addressed to CEO Mark Zuckerberg, says CrowdTangle is being used to “view and study” content on Facebook and Instagram, as well as other platforms, searching for content ranging from foreign influence and terrorism to mental health.

We obtained a copy of the letter for you here.

Keep reading

AI Won’t Replace You, But It Will Spy on You

Since the dawn of the Industrial Revolution, workers have had to contend with the inimical effects of technology on their jobs. From the power loom to the personal computer, each wave of automation has not only increased productivity, but also empowered the owners and managers who dictate how these technologies reshape the workplace. Today, workers worldwide are haunted by the specter of artificial intelligence.

Artificial intelligence has been a mainstay in our popular imagination for decades. Prognostications of an AI-driven future range from apocalyptic robot takeovers to thriving post-work societies where people live off the wealth produced by machines. In spite of these daydreams, robots with full human cognition are still well within the domain of science fiction.

When people speak of AI today, what they’re most often referring to are machines capable of making predictions through the identification of patterns in large datasets. Despite that relatively rote function, many in the space believe that inevitably AI will become autonomous or rival human intelligence. This raises concerns that robots will one day represent an existential threat to humanity or at the very least take over all of our jobs. The reality is that AI is more likely to place workers under greater surveillance than to trigger mass unemployment.

An overwhelming majority of workers are confident that AI will have a direct impact on their jobs, according to a recent survey by ADP, but they do not agree on how. Some feel it will help them in the workplace, while 42 percent fear that some aspects of their job will soon be automated.

These concerns are not without merit. Grandiose statements of oncoming job losses made by tech executives in public forums fuel worker anxiety. Feelings of job insecurity are compounded by reports that a majority of US firms are planning to incorporate AI in the workplace within the next year. In fact, Goldman Sachs predicts that generative AI could “substitute up to one-fourth of current work.”

Keep reading

UN Cybercrime Draft Convention Dangerously Expands State Surveillance Powers Without Robust Privacy, Data Protection Safeguards

As we near the final negotiating session for the proposed UN Cybercrime Treaty, countries are running out of time to make much-needed improvements to the text. From July 29 to August 9, delegates in New York aim to finalize a convention that could drastically reshape global surveillance laws. The current draft favors extensive surveillance, establishes weak privacy safeguards, and defers most protections against surveillance to national laws—creating a dangerous avenue that could be exploited by countries with varying levels of human rights protections.

The risk is clear: without robust privacy and human rights safeguards in the actual treaty text, we will see increased government overreach, unchecked surveillance, and unauthorized access to sensitive data—leaving individuals vulnerable to violations, abuses, and transnational repression. And not just in one country.  Weaker safeguards in some nations can lead to widespread abuses and privacy erosion because countries are obligated to share the “fruits” of surveillance with each other. This will worsen disparities in human rights protections and create a race to the bottom, turning global cooperation into a tool for authoritarian regimes to investigate crimes that aren’t even crimes in the first place.

Countries that believe in the rule of law must stand up and either defeat the convention or dramatically limit its scope, adhering to non-negotiable red lines as outlined by over 100 NGOs. In an uncommon alliance, civil society and industry agreed earlier this year in a joint letter urging governments to withhold support for the treaty in its current form due to its critical flaws.

Keep reading

AI Mass Surveillance at Paris Olympics Will Continue Even After Games End

The 2024 Paris Olympics is drawing the eyes of the world as thousands of athletes and support personnel and hundreds of thousands of visitors from around the globe converge in France.

It’s not just the eyes of the world that will be watching. Artificial intelligence (AI) systems will be watching, too.

Government and private companies will be using advanced AI tools and other surveillance tech to conduct pervasive and persistent surveillance before, during and after the Games.

The Olympic world stage and international crowds pose increased security risks so significant that in recent years authorities and critics have described the Olympics as the “world’s largest security operations outside of war.”

The French government, hand in hand with the private tech sector, has harnessed that legitimate need for increased security as grounds to deploy technologically advanced surveillance and data-gathering tools.

Its surveillance plans to meet those risks, including the controversial use of experimental AI video surveillance, are so extensive that the country had to change its laws to make the planned surveillance legal.

The plan goes beyond new AI video surveillance systems. According to news reports, the prime minister’s office has negotiated a classified provisional decree permitting the government to significantly ramp up traditional, surreptitious surveillance and information-gathering tools for the duration of the Games.

These include wiretapping; collecting geolocation, communications and computer data; and capturing greater amounts of visual and audio data.

I am a law professor and attorney, and I research, teach and write about privacy, artificial intelligence and surveillance. I also provide legal and policy guidance on these subjects to legislators and others.

Increased security risks can and do require increased surveillance. This year, France has faced concerns about its Olympic security capabilities and credible threats around public sporting events.

Keep reading

EU Agencies Propose Encryption Backdoors and Cryptocurrency Surveillance

The EU is attacking encryption again, this time in a report put together by several agencies, including the EU law enforcement agency Europol and the European Council’s Counter-Terrorism Coordinator.

The EU’s site says that this “first report on encryption” – produced by what the bloc calls its Innovation Hub for Internal Security – looks for ways to “uphold citizens’ privacy while enabling criminal investigation and prosecution.”

“The main challenge is to design solutions that would allow at the same time a lawful and targeted access to communications and that guarantee a high level of cybersecurity, data protection and privacy,” says the report.

The objective answer to the supposed conundrum of how to achieve both goals is always the same: you can’t.

Yet the EU, various governments, and international organizations continue to push to undermine online encryption, and they keep framing their initiatives the same way: as concern for privacy (and, importantly, security) paired with making law enforcement’s job much easier. Saying the goal is to “enable” investigation suggests there is no other way to investigate, which is not true.

And how on Earth the EU intends to “safeguard fundamental rights” of citizens while proposing what it does in this document is anybody’s guess. But EU bureaucrats are “safe” from being asked these questions – at least by legacy, corporate media.

The report’s proposals include a number of ways to break encryption, mention encryption backdoors (the sneaky euphemism is, “lawful access” to communications and data), as well as password cracking and cryptocurrency and other forms of surveillance.

The not-so-subtle abuse of language and tone continues in the report’s discrediting of encryption, as services like Meta’s Messenger, Apple Private Relay, and the Rich Communication Services (RCS) protocol are dubbed “warrant-proof encryption technologies.”

Keep reading

UN’s Antonio Guterres unveils global game plan for surveillance, control and censorship

United Nations Secretary-General Antonio Guterres recently released a framework program titled “Global Principles for Information Integrity,” which outlines key recommendations on population control, surveillance and censorship.

The project promotes the globalist rhetoric of ending “harmful misinformation, disinformation and hate speech” online. It claims to make information spaces safer while “upholding human rights such as the freedom of speech.”

“At a time when billions of people are exposed to false narratives, distortions and lies, these principles lay out a clear path forward, firmly rooted in human rights, including the rights to freedom of expression and opinion,” Guterres said, addressing the media at the UN headquarters in New York.

Guterres urged governments, tech companies, advertisers and the public relations (PR) industry to take responsibility for spreading and monetizing content that results in harm. He also demanded that the media and advertisers take control and establish official narratives while suppressing opposition.

For SHTF Plan’s Mac Slavo, the international organization is building an information surveillance and control system that crafts authoritarian narratives and limits access to the truth. It will not only censor but will dictate and police what people say, think and do.

“The UN wants to create a world of simps who surrender their sovereignty and bow down to manipulative and abusive entities and false authorities,” Slavo said.

He added that Big Tech’s algorithms and automated review processes will be programmed to filter and remove content deemed objectionable or politically sensitive, and to block websites, social media posts or entire platforms that criticize their chosen stakeholders. Slavo further predicted possible internet shutdowns or restrictions on access to specific websites in times of political unrest or during manufactured crises.

Keep reading

Federal Judge, ICE Agents Linked to Compromised Spyware Use

Sometimes the government spies on you. And sometimes they hire a poorly secured Eastern European firm to do it for them.

Last week, hacktivists published the customer support database for Brainstack, a Ukrainian company that runs a phone tracking service called mSpy. (It was the third mSpy security breach in a decade.) The database includes messages from Immigration and Customs Enforcement (ICE) agents, active-duty troops, and a U.S. circuit court judge interested in using mSpy to conduct surveillance.

Employees at the U.S. State Department, the Nebraska National Guard, and two federal auditing offices reached out to mSpy about using the service in official investigations. Many more low-level officials and service members seemed to be using mSpy to monitor people in their private lives, but signed up through their government emails. In some cases, it was unclear whether government employees were using mSpy for official or personal business. 

Even if the private spying was for a legitimate purpose—such as parents monitoring their children’s internet usage—it was probably not the best idea to sign up for foreign spyware with known security issues from a government email account.

Judge Kevin Newsom, the circuit judge of the United States Court of Appeals for the 11th Circuit, used his government email address to log into an mSpy customer service chat in February 2019. “You can’t reliably monitor Snapchat, which is the only reason I got it,” he complained. He sent mSpy a follow-up email asking for a refund, signed with his official title as a judge.

“Judge Newsom’s use was entirely in his personal capacity to address a family matter,” says Kate Adams, director of workplace relations at the 11th Circuit.

mSpy has suffered serious security problems over the past decade. In May 2015, hackers stole data on mSpy’s targets and offered it for sale on the dark web. When cybersecurity journalist Brian Krebs broke the story, mSpy tried to claim the data was fake, then eventually admitted to the breach. In September 2018, mSpy accidentally left that same type of data on a public-facing server, then removed it when Krebs noticed.

In early June 2024, the Swiss hacktivist maia arson crimew, who had previously leaked the FBI’s No Fly List, claimed that an “anonymous source” had sent her 150 gigabytes of data from mSpy’s customer service branch. “From all the past stalkerware leaks, usually what leaks is victim data,” crimew tells Reason via encrypted voice chat. But this leak was about mSpy’s clients—essentially turning the surveillance back against the surveilers.

Last week, the leaked client data was published on DDoSecrets, a website widely considered to be WikiLeaks’ successor. (DDoSecrets is also famous for hosting BlueLeaks, a massive 2020 leak of police files.) The mSpy media team did not respond to an email asking for comment on the leak.

Keep reading