Predictive Policing: Weaponizing Data and Location

Often funded by federal grant programs and even operated out of federally designated fusion centers, “predictive policing” software is showing up in more and more local law enforcement agencies. Learn what it is and how it’s used.

In practice, it’s little more than a dystopian pre-crime government credit score.

Keep reading

Google Veterans Team Up With Gov’t to Fill the Sky with AI Drones That Predict Your Behavior

Imagine, in the near future, a swarm of tiny drones patrolling the skies across the country. These drones are not flown by any pilot; they are entirely autonomous, carrying out directives coded into them during manufacturing — surveil, record, follow, and even predict your next move. Sounds like something out of a dystopian sci-fi flick, right? Well, there is no need to imagine this scenario or to watch it in a movie.

It is already here.

Adam Bry and Abraham Bachrach, the CEO and CTO, respectively, of a company called Skydio, have helped usher in this new reality. The duo started together at MIT before moving on to Google, where they worked on Project Wing.

After moving on from building self-flying aircraft at Google, the pair founded Skydio and have been giving their autonomous drones to police departments ever since — for free.

“We’re solving a lot of the core problems that are needed to make drones trustworthy and able to fly themselves,” Bry told Forbes in an interview this week. “Autonomy—that core capability of giving a drone the skills of an expert pilot built in, in the software and the hardware—that’s really what we’re all about as a company.”

According to Forbes, Skydio “claims to be shipping the most advanced AI-powered drone ever built: a quadcopter that costs as little as $1,000, which can latch on to targets and follow them, dodging all sorts of obstacles and capturing everything on high-quality video. Skydio claims that its software can even predict a target’s next move, be that target a pedestrian or a car.”
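Forbes does not explain how that prediction works, and Skydio has not published its method. But the simplest version of “predicting a target’s next move” is plain motion extrapolation. Below is a minimal, hypothetical sketch of a constant-velocity predictor in Python; the function name and parameters are invented for illustration, and a real tracker would be far more sophisticated, modeling acceleration, occlusion, and obstacle geometry.

```python
import numpy as np

def predict_next_position(track, dt=0.1, horizon=1.0):
    """Extrapolate a target's future position from recent observations.

    track:   (n, 2) array of the last n observed (x, y) positions,
             sampled every `dt` seconds.
    horizon: how many seconds ahead to predict.

    Constant-velocity model, for illustration only.
    """
    track = np.asarray(track, dtype=float)
    # Average velocity across the observed window.
    velocity = (track[-1] - track[0]) / (dt * (len(track) - 1))
    # Project the last known position forward by `horizon` seconds.
    return track[-1] + velocity * horizon

# A pedestrian walking roughly east at ~1.5 m/s:
recent = [(0.0, 0.0), (0.15, 0.01), (0.30, 0.0), (0.45, 0.02)]
print(predict_next_position(recent))  # roughly [1.95, 0.09], one second out
```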

Keep reading

IARPA Developing AI To Help Predict The Future

As far as secretive government projects go, the objectives of IARPA may be the riskiest and most far-reaching. With its mission to foster “high-risk, high-payoff” programs, this research arm of the U.S. intelligence community literally tries to predict the future. Staffed by spies and Ph.D.s, this organization aims to provide decision makers with real, accurate predictions of geopolitical events, using artificial intelligence and human “forecasters.”

IARPA, which stands for Intelligence Advanced Research Projects Activity, was founded in 2006 as part of the Office of the Director of National Intelligence. Some of the projects that it has funded focused on advancements in quantum computing, cryogenic computing, face recognition, universal language translators, and other initiatives that would fit well in a Hollywood action movie plot. But perhaps its main goal is to produce “anticipatory intelligence.” It’s a spy agency, after all.

In the interest of national security, IARPA wants to identify major world events before they happen, looking for terrorists, hackers or any perceived enemies of the United States. Wouldn’t you rather stop a crime before it happens?

Of course, that’s when we get into tricky political and sci-fi territory. Much of the research done by IARPA is actually out in the open, drawing on the public and on experts in advancing technologies. The agency posts “open solicitations,” runs forecasting tournaments, and holds prize challenges for the public. You can pretty much send your idea in right now. But what happens to the R&D once it leaves the lab is, of course, often for only the NSA and the CIA to know.
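For a sense of how those forecasting tournaments work: IARPA’s public tournaments have graded participants with the Brier score, which rewards probability estimates that land close to what actually happened. Here is a minimal sketch of the binary-event version (the exact scoring rules vary by tournament):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and outcomes.

    forecasts: probabilities in [0, 1] that each event will occur.
    outcomes:  1 if the event occurred, 0 if it did not.
    Lower is better; always answering 0.5 scores 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who leaned the right way on three geopolitical questions:
print(brier_score([0.8, 0.3, 0.9], [1, 0, 1]))  # ~0.047
```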

Keep reading

Police Keep Secret List of Kids with Bad Grades Labelling Them ‘Potential Criminals’

In the ostensible land of the free, we are told that all people are presumed innocent until proven guilty by their peers. Those who’ve been paying attention, however, know that “innocent until proven guilty” is a farce in today’s police state. If you doubt this assertion, you need only look at the data: a whopping 74% of people in jails across the country have not been convicted of a crime.

While it is true that many of these folks are awaiting trial for crimes they did commit, there are innocent people behind bars for the sole reason that they cannot afford bail. A free country — one that claims to protect the rights of its citizens — should not be keeping hundreds of thousands of presumed-innocent people in cages, yet this is the status quo.

A recent report from the Tampa Bay Times shows just how determined the American police state is to keep that assembly line of otherwise innocent people running. Police in Florida are targeting children in an attempt to label them as criminals at a young age — despite the children being entirely innocent.

The Pasco sheriff’s office has a secret list of students it believes could “fall into a life of crime” based on ridiculous standards like their grades.

Keep reading

Another “Pre-Crime” AI System Claims It Can Predict Who Will Share Disinformation Before It’s Published

We have previously covered the many weighty claims made by the progenitors of A.I. algorithms who say their technology can stop crime before it happens. Similar predictive A.I. is increasingly being used to stop the spread of misinformation, disinformation, and general “fake news” by analyzing trends in behavior and language across social media.
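The internals of these systems are rarely published, but the general pattern is an ordinary supervised text classifier: train on accounts labeled as having shared disinformation versus not, then score new accounts by their language. Here is a deliberately crude, hypothetical sketch of that pattern; the toy data and labels are invented, and no real deployment is this simple.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for users' post histories. Label 1 means the user later
# shared a story flagged as disinformation, 0 means they did not. A real
# system trains on vastly more data and inherits that data's labeling bias.
posts = [
    "they do not want you to know the truth wake up",
    "new study published in a peer reviewed journal today",
    "the mainstream media is hiding everything do your own research",
    "local council approves budget for road repairs next year",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# "Predicting" who will share disinformation before they publish anything:
print(model.predict_proba(["wake up and do your own research"])[0][1])
```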

However, as we’ve also covered, these systems have more often than not failed quite spectacularly, as many artificial intelligence experts and mathematicians have highlighted. One expert in particular — Uri Gal, Associate Professor in Business Information Systems at the University of Sydney, Australia — noted that, from what he has seen so far, these systems are “no better at telling the future than a crystal ball.”

Please keep this in mind as you look at the latest lofty pronouncements from the University of Sheffield below. Nevertheless, we should also be aware that — like their real-world counterparts in street-level pre-crime — these systems will most likely be rolled out across social media (if they haven’t been already) regardless, at least until their inherent flaws, biases, and own capacity for disinformation are further exposed.

Keep reading

Thousands Of Mathematicians Call For Boycotting Predictive Crime A.I. From Police

After a flurry of police brutality cases this year and protests swarming U.S. streets, thousands of mathematicians have joined scientists and engineers in calling for a boycott on the use of artificial intelligence by law enforcement.

Over 2,000 mathematicians have signed a letter, set to appear in a future publication of the American Mathematical Society, calling for a boycott of all collaboration with police and urging their colleagues to do the same, Shadowproof reported.

The call to action was spurred by the police killings of George Floyd, Tony McDade, Breonna Taylor, and many more this year alone.

“At some point, we all reach a breaking point, where what is right in front of our eyes becomes more obvious,” says Jayadev Athreya, a participant in the boycott and Associate Professor of Mathematics at the University of Washington. “Fundamentally, it’s a matter of justice.”

The mathematicians wrote an open letter, collecting thousands of signatures for a widespread boycott of police using algorithms for policing. Every mathematician within the group’s network pledges to refuse any and all collaboration with law enforcement.

The group is organizing a wide base of mathematicians in the hopes of cutting off police from using such technologies. The letter’s authors cite “deep concerns over the use of machine learning, AI, and facial recognition technologies to justify and perpetuate oppression.”

Predictive policing is one key area where some mathematicians and scientists have enabled racist algorithms, which tell cops to treat specific areas as “hotspots” for potential crime. Activists and organizations have long criticized the bias in these practices: algorithms trained on data produced by racist policing will reproduce that prejudice, “predicting” where crime will be committed and who is potentially a criminal.
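That feedback loop is simple enough to demonstrate in a few lines. In the hypothetical simulation below, true crime occurs at an identical rate in two neighborhoods, but only the patrolled neighborhood generates arrest records, so a model that patrols wherever past arrests are highest keeps “confirming” its own prediction:

```python
import random

random.seed(0)
arrests = {"A": 10, "B": 10}   # historical arrest counts, not true crime
TRUE_CRIME_RATE = 0.1          # identical in both neighborhoods

for day in range(365):
    # "Predictive" model: patrol wherever past arrests are highest.
    patrolled = max(arrests, key=arrests.get)
    # Crime happens everywhere at the same rate, but only crime in the
    # patrolled neighborhood becomes an arrest, and thus training data.
    if random.random() < TRUE_CRIME_RATE:
        arrests[patrolled] += 1

print(arrests)  # e.g. {'A': ~46, 'B': 10}: the model keeps "proving" itself right
```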

Keep reading

OCTOPUS PROMIS: The Rise Of Thought Crime Technology — We’re Living In Orwell’s 1984

I don’t know if you have been paying attention or not, but a lot of police organizations across the U.S. have been using what are known as “heat lists” or pre-crime databases for years. What is a “heat list,” you may ask?

Well, “heat lists” are essentially algorithmically compiled databases of people the police suspect may commit a crime. Yes, you read that right: people who “may” commit a crime. How these lists are generated, and what factors determine that an individual “may commit a crime,” is largely unknown. A recent article by the Tampa Bay Times highlights how one such program terrorized and monitored residents of Pasco County, Florida, and how the Pasco County Sheriff’s Office program operates.

According to the Times, the Sheriff’s office generates lists of people it considers likely to break the law, based on arrest histories, unspecified intelligence, and arbitrary decisions by police analysts. Then it sends deputies to find and interrogate anyone whose name appears, often without probable cause, a search warrant, or evidence of a specific crime.
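The Times reporting points to a score built from arrest histories and analyst discretion. The sketch below uses invented weights and field names, not the Pasco office’s actual formula, but it captures the mechanical problem: past contact with police, whatever its cause, is treated as evidence of future criminality.

```python
def heat_score(person):
    """Hypothetical heat-list score. The weights and fields are
    invented for illustration, not taken from any real agency."""
    score = 0
    score += 5 * person.get("arrests", 0)            # arrests, even without conviction
    score += 3 * person.get("intelligence_tips", 0)  # unverified "intelligence"
    score += person.get("analyst_points", 0)         # an analyst's arbitrary judgment
    return score

resident = {"arrests": 2, "intelligence_tips": 1, "analyst_points": 4}
if heat_score(resident) > 10:  # arbitrary threshold
    print("Flagged: deputies sent to 'check in', no warrant or specific crime required")
```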

Keep reading