CIA and Mossad-linked Surveillance System Quietly Being Installed Throughout the US

Launched in 2016 in response to a Tel Aviv shooting and the Pulse Nightclub shooting in Orlando, Florida, Gabriel offers a suite of surveillance products for “security and safety” incidents at “so-called soft targets and communal spaces, including schools, community centers, synagogues and churches.” The company makes the lofty promise that its products “stop mass shootings.” According to a 2018 report on Gabriel published in the Jerusalem Post, there were an estimated 475,000 such “soft targets” across the U.S., meaning that “the potential market for Gabriel is huge.”

Gabriel, since its founding, has been backed by “an impressive group of leaders,” mainly “former leaders of Mossad, Shin Bet [Israel’s domestic intelligence agency], FBI and CIA.” In recent years, even more former leaders of Israeli and American intelligence agencies have found their way onto Gabriel’s advisory board and have promoted the company’s products.

While the adoption of its surveillance technology was slower than expected in the United States, that dramatically changed last year, when an “anonymous philanthropist” gave the company $1 million to begin installing its products in schools, houses of worship and community centers throughout the country. That same “philanthropist” has promised to recruit others to match his donation, with the ultimate goal of installing Gabriel’s system in “every single synagogue, school and campus community in the country.”

With this CIA, FBI and Mossad-backed system now being installed throughout the United States for “free,” it is worth taking a critical look at Gabriel and its products, particularly the company’s future vision for its surveillance system. Perhaps unsurprisingly, much of the company’s future vision coincides with the vision of the intelligence agencies backing it – pre-crime, robotic policing and biometric surveillance.

Keep reading

New York Is Quietly Rolling Out Precrime Surveillance Tech

Picture this: it’s rush hour in New York City. A guy in a Mets cap mutters to himself on the F train platform, pacing in tight circles. Nearby, a woman checks her phone five times in ten seconds. Overhead, cameras are watching. Behind the cameras? A machine. And behind that machine? An army of bureaucrats who’ve convinced themselves that bad vibes are now a crime category.

Welcome to the MTA’s shiny new plan for keeping you safe: an AI surveillance system designed to detect “irrational or concerning conduct” before anything happens. Not after a crime. Not even during. Before. The sort of thing that, in less tech-horny times, might’ve been called “having a bad day.”

MTA Chief Security Officer Michael Kemper, the man standing between us and a future where talking to yourself means a visit from the NYPD, is calling it “predictive prevention.”

“AI is the future,” Kemper assured the MTA’s safety committee.

So far, the MTA insists this isn’t about watching you, per se. It’s watching your behavior. Aaron Donovan, MTA spokesperson and professional splitter of hairs, clarified: “The technology being explored by the MTA is designed to identify behaviors, not people.”

And don’t worry about facial recognition, they say. That’s off the table. For now. Just ignore the dozens of vendors currently salivating over multimillion-dollar public contracts to install “emotion detection” software that’s about as accurate as your aunt’s horoscope app.

Keep reading

‘Cyborg 1.0’: World’s First Robocop Debuts With Facial Recognition And 360° Camera Vision

Thailand has debuted the world’s first ‘Robocop’ designed to detect and prevent crime with advanced AI.

Equipped with 360-degree cameras for eyes, the cutting-edge cyborg maintains constant surveillance with real-time monitoring. The robocop, named Police Colonel Nakhonpathom Plod Phai, meaning “Nakhonpathom is safe,” was unveiled during the Songkran festival in Nakhon Pathom province on Wednesday. The debut was announced via a Facebook post by the Royal Thai Police, according to a report by The Sun.

The robocop is also able to detect weapons, such as knives and wooden batons. In neighboring China, humanoid robots have started supporting police patrols.

Interesting Engineering reports:

In Shenzhen, PM01 model robots developed by EngineAI have been deployed alongside officers, wearing high-visibility police vests. These robots have been seen engaging with pedestrians—waving, shaking hands, and responding to voice commands—according to local media reports. A recent video shows a PM01 robot waving to a crowd, sparking curiosity about its purpose in law enforcement.

First launched in December 2024, the PM01 features agile mobility, an interactive touchscreen, and an open-source platform. This design allows developers worldwide to contribute to its evolution by adding new features and capabilities through secondary development.

Last year, Logon Technology, a Chinese robotics company, unveiled the RT-G autonomous spherical robot, described as a “technological breakthrough,” with an army of these spherical robocops spotted rolling through cities across China, The Sun said. The robocop’s debut underscores the growing importance of robot technology. During Tesla’s Q1 2025 All-Hands meeting, CEO Elon Musk revealed that the company is aiming to begin production of its own humanoid, Optimus, this year.

Keep reading

UK creating ‘murder prediction’ tool to identify people most likely to kill

The UK government is developing a “murder prediction” programme which it hopes can use personal data of those known to the authorities to identify the people most likely to become killers.

Researchers are alleged to be using algorithms to analyse the information of thousands of people, including victims of crime, as they try to identify those at greatest risk of committing serious violent offences.

The scheme was originally called the “homicide prediction project”, but its name has been changed to “sharing data to improve risk assessment”. The Ministry of Justice hopes the project will help boost public safety but campaigners have called it “chilling and dystopian”.

The existence of the project was discovered by the pressure group Statewatch, and some of its workings uncovered through documents obtained by Freedom of Information requests.

Statewatch says data from people not convicted of any criminal offence will be used as part of the project, including personal information about self-harm and details relating to domestic abuse. Officials strongly deny this, insisting only data about people with at least one criminal conviction has been used.

The government says the project is at this stage for research only, but campaigners claim the data used would build bias into the predictions against minority-ethnic and poor people.

The MoJ says the scheme will “review offender characteristics that increase the risk of committing homicide” and “explore alternative and innovative data science techniques to risk assessment of homicide”.

The project would “provide evidence towards improving risk assessment of serious crime, and ultimately contribute to protecting the public via better analysis”, a spokesperson added.

The project, which was commissioned by the prime minister’s office when Rishi Sunak was in power, is using data about crime from various official sources including the Probation Service and data from Greater Manchester police before 2015.

The types of information processed include names, dates of birth, gender and ethnicity, and a number that identifies people on the police national computer.

Statewatch’s claim that data from innocent people and those who have gone to the police for help will be used is based on a part of the data-sharing agreement between the MoJ and GMP.

A section marked “type of personal data to be shared” by police with the government lists various types of criminal convictions, but also the age a person first appeared as a victim, including for domestic violence, and the age a person was when they first had contact with police.

Also to be shared – and listed under “special categories of personal data” – are “health markers which are expected to have significant predictive power”, such as data relating to mental health, addiction, suicide and vulnerability, and self-harm, as well as disability.

Sofia Lyall, a researcher for Statewatch, said: “The Ministry of Justice’s attempt to build this murder prediction system is the latest chilling and dystopian example of the government’s intent to develop so-called crime ‘prediction’ systems.

“Time and again, research shows that algorithmic systems for ‘predicting’ crime are inherently flawed.

Keep reading

We Caught FBI Using “Minority Report Style” Secret Form Pressuring Gun Owners To Forfeit Their Rights

Gun Owners of America just caught the FBI coercing more people into giving up their Second Amendment rights!

Thanks to a FOIA request by GOA’s lawyers, we uncovered even more evidence on the FBI’s unconstitutional and unlawful NICS Indices program.

In 2019, it was discovered that the FBI was using a document titled “NICS Indices Self-Submission Form” that purported to allow American citizens to “voluntarily” waive their Second Amendment rights. 

By completing this FBI form, law-abiding Americans allegedly “consent” for the FBI to enter their names into the National Instant Criminal Background Check System, marking them as permanently prohibited from purchasing or possessing firearms or ammunition.  And as the form warns, once an individual waives their rights, it’s impossible to get them back.

Now, the mere existence of this form was troubling, and it clearly violates the Second Amendment and even the Gun Control Act. But at that point, we weren’t sure how extensively the FBI was using the form, if in fact it was being used at all.

Fast-forward a few years to 2022.

GOA published our initial findings that the FBI had provided these forms to agents for use on American gun owners, who were pressured into signing and therefore “voluntarily” relinquished their rights to purchase, possess, and use firearms.

These FOIA records painted a vivid picture of FBI agents showing up to people’s homes, places of work, etc., presenting these forms, and “asking” them to declare themselves a “danger” to themselves or others, or lacking the “mental capacity to adequately contract or manage” their lives.

You can imagine how coercive these sorts of FBI visits must have been.  The FBI’s use of this secret form has occurred during recent years when the bureau has become increasingly politicized and weaponized against Americans, including gun owners.

Keep reading

Pasco County Sheriff Will End Predictive Policing Program to Settle Lawsuit Over Harassment

The Pasco County Sheriff’s Office is permanently scuttling a predictive policing program that was the subject of critical media investigations and a pending civil rights lawsuit alleging the program amounted to frequent unconstitutional harassment of families.

In a settlement agreement ending that civil rights lawsuit, the Pasco County Sheriff’s Office acknowledged that its “Intelligence Led Policing” (ILP) program exceeded officers’ implied license to knock on doors and perform offender checks, interfering with the plaintiffs’ First, Fourth, and Fourteenth Amendment rights.

One of those plaintiffs, Darlene Deegan, was harassed for three years by Pasco sheriff’s deputies after her opioid-addicted son was flagged by the program. This included repeated, day-after-day visits to her house by police demanding to know where her son was, and $3,000 in fines accrued for petty code violations, allegedly in retaliation for her refusal to cooperate.

“For years, the Pasco Sheriff’s Office treated me like it could do anything it wanted,” Deegan said in a press release issued by the Institute for Justice, a public interest law firm that represented her and several other county residents. “But today proves that when ordinary people stand up for themselves, the Constitution still means what it says.”

The Institute for Justice filed a federal civil rights lawsuit in 2021 on behalf of Deegan and three other Pasco County residents who claimed the harassment violated their constitutional rights.

In addition to ending the ILP program and agreeing not to create a similar one, the Pasco County Sheriff’s Office will pay $105,000 to the four plaintiffs in the lawsuit.

A 2020 Tampa Bay Times investigation first revealed how the ILP program used algorithms to flag “prolific offenders” that it believed were likely to be future offenders. Many of them were juveniles, such as 15-year-old Rio Wojtecki. Once someone was added to the list, deputies targeted their family, workplace, and friends and associates for suspicionless “checks,”  including at nighttime. Deputies contacted Wojtecki or his family 21 times over a four-month period—at his house, at his gym, and at his parents’ work. When Wojtecki’s older sisters refused to let deputies inside the house during one late-night visit, a deputy shouted, “You’re about to have some issues.”

“Make their lives miserable until they move or sue,” was how one former deputy described the program to the newspaper. Body camera footage obtained by the Tampa Bay Times backed up both the residents’ and whistleblowers’ claims that deputies used frivolous code violations to retaliate against targets.

The Pasco County Sheriff’s Office originally defended the ILP program. A department spokeswoman told a Florida news outlet in 2022, in response to another lawsuit over the program, that the “ILP philosophy attempts to connect those who have previously offended with resources to break the cycle of recidivism. This ILP philosophy has led to a reduction in crime and reduction in victimization in our community and we will not apologize for continued efforts to keep our community safe.”

But last year the department announced in court that it was phasing out the “prolific offender” list. The Institute for Justice hopes that the settlement agreement will stop the program from ever being resurrected.

Keep reading

What’s Next For Battlefield America? Israel’s High-Tech Military Tactics Point The Way

“I did not know Israel was capturing or recording my face. [But Israel has] been watching us for years from the sky with their drones. They have been watching us gardening and going to schools and kissing our wives. I feel like I have been watched for so long.”

– Mosab Abu Toha, Palestinian poet

If you want a glimpse of the next stage of America’s transformation into a police state, look no further than how Israel – a long-time recipient of hundreds of billions of dollars in foreign aid from the U.S. – uses its high-tech military tactics, surveillance and weaponry to advance its authoritarian agenda.

Military checkpoints. Wall-to-wall mass surveillance. Predictive policing. Aerial surveillance that tracks your movements wherever you go and whatever you do. AI-powered facial recognition and biometric programs carried out without the knowledge or consent of those targeted by them. Cyber-intelligence. Detention centers. Brutal interrogation tactics. Weaponized drones. Combat robots.

We’ve already seen many of these military tactics and technologies deployed on American soil and used against the populace, especially along the border regions, a testament to the heavy influence Israel’s military-industrial complex has had on U.S. policing.

Indeed, Israel has become one of the largest developers and exporters of military weapons and technologies of oppression worldwide.

Journalist Antony Loewenstein has warned that Pegasus, one of Israel’s most invasive pieces of spyware – which allows a government, military intelligence agency or police department to spy on someone’s phone and extract all the information from it – has become a favorite tool of oppressive regimes around the world. The FBI and NYPD have also been recipients of the surveillance technology, which promises to turn any “target’s smartphone into an intelligence gold mine.”

Yet it’s not just military weapons that Israel is exporting. They’re also helping to transform local police agencies into extensions of the military.

According to The Intercept, thousands of American law enforcement officers frequently travel for training to Israel, “one of the few countries where policing and militarism are even more deeply intertwined than they are here,” as part of an ongoing exchange program that largely flies under the radar of public scrutiny.

A 2018 investigative report concluded that imported military techniques by way of these exchange programs that allow police to study in Israel have changed American policing for the worse. “Upon their return, U.S. law enforcement delegates implement practices learned from Israel’s use of invasive surveillance, blatant racial profiling, and repressive force against dissent,” the report states. “Rather than promoting security for all, these programs facilitate an exchange of methods in state violence and control that endanger us all.”

Keep reading

Combating “Hate”: The Trojan Horse For Precrime

Philip K. Dick’s 1956 story The Minority Report introduced “precrime,” the clairvoyant foreknowledge of criminal activity as forecast by mutant “precogs.” The book was a dystopian nightmare, but a 2015 Fox television series transformed the story into one in which a precog works with a cop, suggesting that data is actually effective at predicting future crime.

Canada is trying to enact a precrime law along the lines of the 2015 show, but it is being panned about as much as the television series. Ottawa’s online harms bill includes a provision to impose house arrest on someone who is feared to commit a hate crime in the future. From The Globe and Mail:

The person could be made to wear an electronic tag, if the attorney-general requests it, or ordered by a judge to remain at home, the bill says. Mr. Virani, who is Attorney-General as well as Justice Minister, said it is important that any peace bond be “calibrated carefully,” saying it would have to meet a high threshold to apply.

But he said the new power, which would require the attorney-general’s approval as well as a judge’s, could prove “very, very important” to restrain the behaviour of someone with a track record of hateful behaviour who may be targeting certain people or groups…

People found guilty of posting hate speech could have to pay victims up to $20,000 in compensation. But experts including internet law professor Michael Geist have said even a threat of a civil complaint – with a lower burden of proof than a court of law – and a fine could have a chilling effect on freedom of expression.

While this is a dangerous step in Canada, I also wonder if this is where burgeoning “anti-hate” programs across the US are headed. The Canadian bill would also allow “people to file complaints to the Canadian Human Rights Commission over what they perceive as hate speech online – including, for example, off-colour jokes by comedians.”

There are now programs in multiple US states to do just that –  encourage people to snitch on anyone doing anything perceived as “hateful.”

The 2021 federal COVID-19 Hate Crimes Act began to dole out money to states to help them respond to hate incidents. Oregon now has its Bias Response Hotline to track “bias incidents.”

In December of 2022, New York launched its Hate and Bias Prevention Unit. Maryland, too, has its system – its hate incidents examples include “offensive jokes” and “malicious complaints of smell or noise.”

Keep reading

Justice Minister defends house arrest power for people feared to commit a hate crime in future

Justice Minister Arif Virani has defended a new power in the online harms bill to impose house arrest on someone who is feared to commit a hate crime in the future – even if they have not yet done so.

The person could be made to wear an electronic tag, if the attorney-general requests it, or ordered by a judge to remain at home, the bill says.

Mr. Virani, who is Attorney-General as well as Justice Minister, said it is important that any peace bond be “calibrated carefully,” saying it would have to meet a high threshold to apply.

But he said the new power, which would require the attorney-general’s approval as well as a judge’s, could prove “very, very important” to restrain the behaviour of someone with a track record of hateful behaviour who may be targeting certain people or groups.

If “there’s a genuine fear of an escalation, then an individual or group could come forward and seek a peace bond against them and to prevent them from doing certain things.”

The peace bond could have conditions that include not being close to a synagogue or a mosque, he said. It could also lead to restrictions on internet usage and behaviour. “That would help to deradicalize people who are learning things online and acting out in the real world violently – sometimes fatally.”

Keep reading

Lawmakers Want Pause on Federal Funds for Predictive Policing

Should data scientists be in the business of fingering Americans for crimes they could commit, someday? Last month, a group of federal lawmakers asked the Department of Justice to stop funding such programs—at least until safeguards can be built in. It’s just the latest battle over a controversial field of law enforcement that seeks to peer into the future to fight crime.

“We write to urge you to halt all Department of Justice (DOJ) grants for predictive policing systems until the DOJ can ensure that grant recipients will not use such systems in ways that have a discriminatory impact,” reads a January letter to Attorney General Merrick Garland from U.S. Sen. Ron Wyden (D–Ore.) and Rep. Yvette Clarke (D–N.Y.), joined by Senators Jeff Merkley (D–Ore.), Alex Padilla (D–Calif.), Peter Welch (D–Vt.), John Fetterman (D–Pa.), and Ed Markey (D–Mass.). “Mounting evidence indicates that predictive policing technologies do not reduce crime. Instead, they worsen the unequal treatment of Americans of color by law enforcement.”

The letter emphasizes worries about racial discrimination, but it also raises concerns about accuracy and civil liberties that, since day one, have dogged schemes to address crimes that haven’t yet occurred.

Keep reading