Sharp Rise in Facial Recognition Use by Scottish Police, UK Protest Footage Scanned

Police in Scotland have tripled their use of retrospective facial recognition over the last five years, with searches jumping from just under 1,300 in 2018 to nearly 4,000 in 2022.

The rising trend has continued into 2023, with more than 2,000 searches carried out in the first four months of the year, according to data obtained through a freedom of information request by the UK investigative journalism organizations Liberty Investigates and The Ferret.

The trend is mirrored across the rest of the UK. In 2014, the total number of retrospective facial recognition searches by all UK police forces amounted to just 3,360. By 2022, that figure had jumped to 85,158, according to UK Home Office data.

Police in Scotland rank fourth among UK forces in use of the technology. The leader is London's Metropolitan Police, which accounted for about 30 percent of last year's searches, at 27,677.

UK police have been using retrospective facial recognition to match faces captured on CCTV against millions of images stored in the Police National Database. The practice has proved controversial because the database still contains many images of people who were released without charge.

Police in Scotland operate a policy distinct from that of other UK forces, uploading custody images to the database only once an individual has been charged with a crime and removing the images of those found innocent after six months.

Facial recognition use by the police has been a target of criticism from some lawmakers, non-governmental organizations and policy experts.

Keep reading

COUNTERTERROR DIRECTOR USED HAMAS ATTACK TO JUSTIFY MASS SURVEILLANCE PROGRAM RENEWAL

During a Senate briefing last week, a federal counterterrorism official cited the October 7 Hamas attack while urging Congress to reauthorize a sprawling and controversial surveillance program repeatedly used to spy on U.S. citizens on U.S. soil.

“As evidenced by the events of the past month, the terrorist threat landscape is highly dynamic and our country must preserve [counterterrorism] fundamentals to ensure constant vigilance,” said Director of the National Counterterrorism Center Christine Abizaid to the Senate Committee on Homeland Security, after making repeat references to Hamas’s attack on Israel.

She pointed to Section 702 of the Foreign Intelligence Surveillance Act, which enables the U.S. government to gather vast amounts of intelligence — including about U.S. citizens — under the broad category of foreign intelligence information, without first seeking a warrant.

Section 702 “provides key indications and warning on terrorist plans and intentions, supports international terrorist disruptions, enables critical intelligence support to, for instance, border security, and gives us strategic insight into foreign terrorists and their networks overseas,” Abizaid said. “I respectfully urge Congress to reauthorize this vital authority.”

The controversial program is set to expire at the end of the year, and lawmakers sympathetic to the intelligence community are scrambling to protect it, as some members of Congress like Sen. Ron Wyden push for reforms that restrain the government’s surveillance abilities. According to Rep. Jim Himes, the ranking Democrat on the House Intelligence Committee, plans are underway to prepare a stopgap measure to preserve Section 702 of FISA as a long-term reauthorization containing reforms is hammered out. 

Keep reading

Gov. Newsom says California to track residents from ‘cradle to career’ after China trip

After a trip to China, which uses a social credit score system, California Governor Gavin Newsom announced the next phase of the state’s “Cradle to Career” system that uses more than one billion data points.

The announcement states: “By leveraging billions of data points, California’s Cradle-to-Career data system will be a game-changer for improving the quality of life for millions of Californians and highlighting ways to improve opportunity in the classroom and access to the workforce.”

The system is designed to “illuminate gaps and identify opportunities throughout students’ education experiences so they can ultimately reach their goals for life and careers” through data that includes “race, gender, ability, and geography to illuminate and address areas of strength and needed growth, and any inequities.”

“This milestone represents a significant step forward in our mission to establish a robust, comprehensive data system that provides a nuanced understanding of Californians’ educational and professional journeys,” said Mary Ann Bates, Executive Director of the California Cradle-to-Career Data System. “I want to thank our data partners for their unwavering commitment to ensuring that Californians will have validated, reliable data available to inform decisions. This collective effort will equip our state with the data and tools necessary to ensure that every Californian has the opportunity to succeed.”

The system is designed to be used by students, families, politicians, researchers, and policymakers, providing insights from a granular, individual level to the state as a whole.

Keep reading

How Mastercard’s Digital ID Project Is Being Used by Governments To Track Health and Vaccination

Mastercard’s ongoing technological pursuits suggest an agenda of consolidating digital dominance. The so-called “Community Pass” project, helmed by Tara Nathan, Mastercard’s executive vice president, claims to integrate marginalized communities into the digital world. However, with only 3.5 million users so far, skeptics of digital ID plans may wonder about its real reach and intentions.

In a recent appearance on the company-sponsored podcast “What’s Next In,” Nathan touted the supposed merits of the Community Pass. Launched in 2019, the platform ostensibly provides individuals in Sub-Saharan Africa and Asia-Pacific with a digital ID and wallet, giving them access to services such as government benefits and humanitarian assistance.

Nathan waxed eloquent about the supposed benefits of digitization for developing economies. But her emphasis on using offline digital channels to supposedly empower marginalized individuals raises eyebrows. Is this another case of a multinational company trying to sell its tech solutions to unsuspecting communities under the guise of altruism?

Keep reading

Conservatives increasingly have their knives out for the nation’s top cyber agency

An agency set up under Donald Trump to protect elections and key U.S. infrastructure from foreign hackers is now fighting off increasingly intense threats from hard-right Republicans who argue it’s gone too far and are looking for ways to rein it in.

These lawmakers insist work by the Cybersecurity and Infrastructure Security Agency to combat online disinformation during elections singles out conservative voices and infringes upon free speech rights — an allegation the agency vehemently denies and the Biden administration is contesting in court. The accusations started in the wake of the 2020 election and are ramping up ahead of 2024, with lawmakers now calling for crippling cuts at the agency.

“CISA has blatantly violated the First Amendment and colluded with Big Tech to censor the speech of ordinary Americans,” Sen. Rand Paul (R-Ky.), the ranking member of the Senate Homeland Security Committee, which oversees CISA, said in a statement to POLITICO.

Keep reading

UK government keeping files on teaching assistants’ and librarians’ internet activity

The government has been monitoring the social media accounts of “dozens” of ordinary teaching staff, including teaching assistants, and is keeping files on posts that criticise education policies, the Observer has learned.

Two weeks ago, this newspaper revealed how the Department for Education is monitoring the social media activity of some of the country’s leading education experts. Now evidence has emerged that the monitoring is much more widespread, covering even the lowest paid members of staff.

Ordinary teaching and support staff said this weekend that they were “gobsmacked” and angry after discovering that the department had files on them. Many outraged educators have rushed to submit subject access requests [SARs] compelling the DfE to release any information it holds under their name, after discovering there were files up to 60 pages long about their tweets and comments challenging government policy or the schools inspectorate, Ofsted.

Nikki Cleveland, a higher-level teaching assistant and primary school librarian, who mainly posts uncontroversial children’s book reviews, discovered from a SAR that the DfE had a file alerting colleagues to tweets from her complaining about lack of funding for school libraries and about Ofsted. She said: “I was gobsmacked that I was even on their radar.”

Cleveland expressed anger that while the department was flagging tweets about schools struggling to balance their budgets, meet the growing needs of pupils without enough staff and deal with unreasonable demands from Ofsted, “nothing has changed”.

“The whole thing makes me even more cynical that no one in the government or the DfE cares about what is happening in schools on a day-to-day basis,” she added.

Keep reading

Odd Colorado Ruling Upholds Internet Keyword Search Warrant

What would your internet searches reveal about you if others could scrutinize and second-guess them? It’s something to think about, given that the big search engines, like Google, store search histories and make them available to the authorities. In fact, as happened in a recently decided Colorado case, police can start from search terms of interest and pressure tech companies to surrender the identities of anyone who has surfed for specified keywords. The decision is chilling for anybody who has ever pondered their online history in the hands of a stranger—or who just cares about privacy.

“Today, the Colorado Supreme Court became the first state supreme court in the country to address the constitutionality of a keyword warrant—a digital dragnet tool that allows law enforcement to identify everyone who searched the internet for a specific term or phrase,” Jennifer Lynch and Andrew Crocker of the Electronic Frontier Foundation (EFF) reported on Monday. “The case is People v. Seymour, which involved a tragic home arson that killed several people. Police didn’t have a suspect, so they used a keyword warrant to ask Google for identifying information on anyone and everyone who searched for variations on the home’s street address in the two weeks prior to the arson.”

Keep reading

The EU Could Push its Private Message Ban as Early as Next Week

The EU is getting ever closer to pushing through the legislation known among critics as “chat control” – officially, Child Sexual Abuse Regulation, CSAR – and is hoping to reach a deal on this within the bloc as early as next week.

Patrick Breyer, a German member of the European Parliament (MEP) and lawyer who has consistently opposed the controversial upcoming rules, has reacted by warning once again that, some minor changes notwithstanding, the bill, if passed, would effectively spell the end of proper encryption and private messaging in the EU.

The implication, instead, is that CSAR would usher in an era of indiscriminate mass surveillance in this part of the digital space.

Warning that a recent “minor concession” agreed among EU member states was a bid to finally assemble a majority and push the plans over the top, Breyer, who refers to the proposal as “chat control 2.0,” calls it an “unprecedented” (at least for the EU) example of mass surveillance.

In summary, the regulation would require online services that provide messaging and chat to implement automatic scanning of all private text and images, looking for potentially abusive content, and to report any findings to the EU.

There is no shortage of controversy and misgivings here, and two concerns clearly stand out: once in place, what could this infrastructure be used for next, should politicians so decide; and how are online platforms even supposed to make the scanning work accurately and fairly, technically speaking?

Now, we are hearing that the EU Council is looking to “soften the blow,” at least rhetorically, by saying that the scanning would at first apply only to “previously classified CSAM (child sexual abuse material)” – and then later still expand to everything.

Keep reading

Carbon Passports Are The Next Dystopian Surveillance Threat

The digital ID agenda is already on the horizon, and it doesn’t stop there: digital carbon passports are the next big proposal. Travel enthusiasts worldwide might soon face a drastic change in how they cross international borders, with global warming triggering the implementation of carbon passports that could limit their wanderlust, asserts Intrepid Travel in a recently published report.

Dubbing these restrictions “personal carbon allowances,” the report suggests they would serve to compel individuals to conform to a global carbon budget.

With limits on yearly travel anticipated as soon as 2040, travelers might be forced to relinquish the horizon-expanding privileges usually afforded by contemporary tourism.

Crafted in collaboration with forecasting agency The Future Laboratory, the report highlights the alleged repercussions of climate change on popular summer destinations like Greece and Majorca, supposedly deemed too hot for humans.

The introduction of carbon passports could raise serious privacy concerns about the level of surveillance exercised over individuals’ movements and behavior.

Could these measures act as precursors to overreaching surveillance, tracking individuals’ carbon footprints?

Keep reading

Canada Plots to Increase Online Regulation, Target Search and Social Media Algorithms

Canada is taking steps towards potentially intrusive regulation of artificial intelligence as applied in search and social media services. The government’s intentions, now revealed, cover AI applications well beyond the realm of generative AI tools similar to OpenAI’s ChatGPT. Industry giants such as Google and Facebook, which use AI for search results, translation, and recognizing customer tastes, are among those swept up in the regulatory plan, with the pro-censorship government intent on having a say in how these algorithms work.

The information comes by way of Minister François-Philippe Champagne of Innovation, Science and Economic Development Canada (ISED) in a letter submitted to the Industry committee analyzing Bill C-27—the privacy reform and AI regulation bill. Precise amendments remain shielded from scrutiny, however, as the governmental body keeps the proposed changes under wraps.

We obtained a copy of the original bill for you here.

The existing framework in Bill C-27 leaves the identification of AI mechanisms that can be classified into the “high-impact” category to future regulatory proceedings.

Bill C-27, by treating search and social media results as “high-impact” systems, is likely to raise eyebrows as the government’s push towards regulating technology has so far been assertive of greater control over content and therefore speech.

Non-compliance, under this proposal, may invite penalties of up to 3 percent of gross global revenues.

The legislation veers into controversial territory by unexpectedly folding the regulation of content moderation and discoverability prioritization into the mix. It draws a parallel between these issues and accusations of bias in recruitment or in use by law enforcement, which has come as a considerable surprise. Consequently, Canada’s rules, although claimed to align closely with the EU’s, seem to set the country apart, leaning more towards censorship and less towards free speech.

The news comes on the back of Canada’s more recent online regulations that have raised alarm.

Keep reading