Another “Pre-Crime” AI System Claims It Can Predict Who Will Share Disinformation Before It’s Published

We have previously covered the many weighty claims made by the progenitors of A.I. algorithms who say their technology can stop crime before it happens. Similar predictive A.I. is increasingly being used to stop the spread of misinformation, disinformation and general “fake news” by analyzing trends in behavior and language used across social media.

However, as we’ve also covered, these systems have more often than not failed quite spectacularly, as many artificial intelligence experts and mathematicians have highlighted. One expert in particular, Uri Gal, Associate Professor in Business Information Systems at the University of Sydney, Australia, noted that from what he has seen so far, these systems are “no better at telling the future than a crystal ball.”

Please keep this in mind as you look at the latest lofty pronouncements from the University of Sheffield below. Nevertheless, we should also be aware that, similar to their real-world counterparts in street-level pre-crime, these systems most likely will be rolled out across social media (if they haven’t been already) regardless, until their inherent flaws, biases and their own brand of disinformation are further exposed.


Author: HP McLovincraft

Seeker of rabbit holes. Pessimist. Libertine. Contrarian. Your huckleberry. Possibly true tales of sanity-blasting horror also known as abject reality. Prepare yourself.
