Another “Pre-Crime” AI System Claims It Can Predict Who Will Share Disinformation Before It’s Published

We have previously covered the many weighty claims made by the creators of A.I. algorithms who assert that their technology can stop crime before it happens. Similar predictive A.I. is increasingly being used to stop the spread of misinformation, disinformation and general “fake news” by analyzing trends in behavior and language across social media.
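For readers wondering what “analyzing language” typically means in practice, here is a minimal sketch of the generic approach (a bag-of-words text classifier). This is our own illustration, not the Sheffield system or any vendor’s actual pipeline; the posts, labels, and model choice are all invented placeholders.

```python
# Illustrative sketch only: a bag-of-words classifier of the kind commonly
# used for misinformation detection. Training examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled posts: 1 = flagged as disinformation, 0 = not.
posts = [
    "BREAKING!!! secret cure THEY don't want you to know",
    "City council approves new budget for road repairs",
    "share before it's deleted!!! the truth about vaccines",
    "Local library extends weekend opening hours",
]
labels = [1, 0, 1, 0]

# TF-IDF turns each post into word-frequency features; logistic regression
# learns which word patterns correlate with the given labels.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# The model can only echo the patterns in its training labels -- if the
# labels are biased or wrong, so are its "predictions".
print(model.predict_proba(["you won't BELIEVE what they're hiding"])[0, 1])
```

Note that nothing in such a model understands truth or falsehood; it only scores how much a post resembles whatever its human labellers previously tagged.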

However, as we’ve also covered, these systems have more often than not failed quite spectacularly, as many artificial intelligence experts and mathematicians have highlighted. One expert in particular, Uri Gal, Associate Professor in Business Information Systems at the University of Sydney, Australia, noted that from what he has seen so far, these systems are “no better at telling the future than a crystal ball.”

Please keep this in mind as you look at the latest lofty pronouncements from the University of Sheffield below. Nevertheless, we should also be aware that, like their real-world counterparts in street-level pre-crime, these systems will most likely be rolled out across social media (if they haven’t been already) regardless, until their inherent flaws, biases and capacity for producing disinformation of their own are fully exposed.


Thousands Of Mathematicians Call For Boycotting Predictive Crime A.I. From Police

After a flurry of police brutality cases this year and protests sweeping U.S. streets, thousands of mathematicians have joined scientists and engineers in calling for a boycott on the use of artificial intelligence by law enforcement.

Over 2,000 mathematicians have signed a letter, to appear in a future publication of the American Mathematical Society, calling for a boycott of all collaboration with police and urging their colleagues to do the same, Shadowproof reported.

The catalyst for the mathematicians’ call to action was the police killings of George Floyd, Tony McDade, Breonna Taylor, and many others just this year.

“At some point, we all reach a breaking point, where what is right in front of our eyes becomes more obvious,” says Jayadev Athreya, a participant in the boycott and Associate Professor of Mathematics at the University of Washington. “Fundamentally, it’s a matter of justice.”

The mathematicians wrote an open letter, collecting thousands of signatures for a widespread boycott of algorithmic policing. Every mathematician within the group’s network pledges to refuse any and all collaboration with law enforcement.

The group is organizing a wide base of mathematicians in the hope of cutting police off from such technologies. The letter’s authors cite “deep concerns over the use of machine learning, AI, and facial recognition technologies to justify and perpetuate oppression.”

Predictive policing is one key area where some mathematicians and scientists have enabled racist algorithms, which tell cops to treat specific areas as “hotspots” for potential crime. Activists and organizations have long criticized the bias in these practices. Algorithms trained on data produced by racist policing will reproduce that prejudice, “predicting” crime in the very neighborhoods that were over-policed to begin with; the resulting arrests then feed back into the training data, as the sketch below illustrates.
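To make that feedback loop concrete, here is a toy simulation. It is entirely our own construction, not any vendor’s actual model; the neighborhoods, rates, and starting counts are invented, and real systems are far more elaborate, but the dynamic is the same.

```python
# Toy simulation of the predictive-policing feedback loop: a "hotspot"
# model trained on arrest records keeps sending patrols back to the same
# neighborhood, so the data never gets a chance to correct itself.
import random

random.seed(0)

# Two neighborhoods with the SAME underlying crime rate.
true_crime_rate = {"A": 0.10, "B": 0.10}
# Historical over-policing: neighborhood A starts with more recorded arrests.
recorded_arrests = {"A": 50, "B": 10}

for day in range(365):
    # The "predictive" step: patrol wherever past arrests are highest.
    hotspot = max(recorded_arrests, key=recorded_arrests.get)
    # Arrests can only be recorded where police are actually looking.
    if random.random() < true_crime_rate[hotspot]:
        recorded_arrests[hotspot] += 1

print(recorded_arrests)  # A's count keeps growing; B's never changes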


OCTOPUS PROMIS: The Rise Of Thought Crime Technology — We’re Living In Orwell’s 1984

I don’t know if you have been paying attention or not, but a lot of police organizations across the U.S. have been using what are known as “heat lists” or pre-crime databases for years. What is a “heat list,” you may ask?

Well, “heat lists” are basically algorithmically compiled databases of people whom police suspect may commit a crime. Yes, you read that right: a person who “may” commit a crime. How these lists are generated and what factors determine that an individual “may commit a crime” is unknown. A recent article by the Tampa Bay Times highlights how one such program terrorized and monitored residents of Pasco County, Florida, and how the Pasco County Sheriff’s Office operates it.

According to the Times, the Sheriff’s Office generates lists of people it considers likely to break the law, based on arrest histories, unspecified intelligence, and arbitrary decisions by police analysts. Then it sends deputies to find and interrogate anyone whose name appears on them, often without probable cause, a search warrant, or evidence of a specific crime.
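Since the actual factors and weights are secret, the following is a purely hypothetical sketch of how such a score could be assembled from the inputs the Times describes. Every field name, weight, and threshold below is invented for illustration; the point is only that none of these inputs requires evidence of an actual crime.

```python
# Purely hypothetical sketch of a "heat list" score. The real factors and
# weights are NOT public; everything here is invented for illustration.
from dataclasses import dataclass

@dataclass
class PersonRecord:
    prior_arrests: int   # arrest history (arrests, not convictions)
    intel_reports: int   # "unspecified intelligence" entries
    analyst_flag: bool   # an analyst's discretionary judgment

def heat_score(rec: PersonRecord) -> float:
    # Arbitrary weights: note that none of these inputs involves probable
    # cause, a warrant, or evidence of a specific crime.
    return (2.0 * rec.prior_arrests
            + 1.5 * rec.intel_reports
            + (5.0 if rec.analyst_flag else 0.0))

# Anyone whose score crosses an arbitrary threshold gets a deputy visit.
person = PersonRecord(prior_arrests=2, intel_reports=1, analyst_flag=True)
if heat_score(person) > 5.0:
    print("added to heat list, score:", heat_score(person))
```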
