Robots may be the future of policing, but activists warn they could be racist

Ayanna Howard, a robotics researcher at Georgia Tech, is concerned that robot technology, which ultimately runs on artificial intelligence built from human-supplied data, will show biases against Black people. She tells the New York Times that “given the current tensions arising from police shootings of African-American men from Ferguson to Baton Rouge, it is disconcerting that robot peacekeepers, including police and military robots, will, at some point, be given increased freedom to decide whether to take a human life, especially if problems related to bias have not been resolved.”

Howard and others say that many of today’s algorithms are biased against people of color and others who are unlike the white, male, affluent and able-bodied designers of most computer and robot systems.
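To make that claim concrete: one standard way auditors quantify this kind of bias is to compare a system’s error rates across demographic groups. Below is a minimal, hypothetical sketch of such a check; the classifier outputs, labels, and group splits are all invented for illustration.

```python
# Hypothetical fairness audit: compare how often a "flag as threat"
# classifier wrongly flags harmless people in each demographic group.
# All predictions, labels, and group splits are invented.

def false_positive_rate(preds, labels):
    """Share of truly harmless cases (label 0) that the model flagged (pred 1)."""
    harmless = [p for p, y in zip(preds, labels) if y == 0]
    return sum(harmless) / len(harmless) if harmless else 0.0

# Per-person (prediction, true label) data for two invented groups.
group_a = {"preds": [1, 0, 1, 1, 0, 1], "labels": [0, 0, 1, 0, 0, 1]}
group_b = {"preds": [0, 0, 1, 0, 0, 0], "labels": [0, 0, 1, 0, 0, 1]}

fpr_a = false_positive_rate(group_a["preds"], group_a["labels"])
fpr_b = false_positive_rate(group_b["preds"], group_b["labels"])
print(f"Group A false positive rate: {fpr_a:.2f}")  # 0.50
print(f"Group B false positive rate: {fpr_b:.2f}")  # 0.00
print(f"Gap: {fpr_a - fpr_b:.2f}")  # a disparity an audit would flag
```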


There Are Spying Eyes Everywhere—and Now They Share a Brain

Citigraf was conceived in 2016, when the Chicago Police Department hired Genetec to solve a surveillance conundrum. Like other large law enforcement organizations around the country, the department had built up such an impressive arsenal of technologies for keeping tabs on citizens that it had reached the point of surveillance overload. To get a clear picture of an emergency in progress, officers often had to bushwhack through dozens of byzantine databases and feeds from far-flung sensors, including gunshot detectors, license plate readers, and public and private security cameras. This process of braiding together strands of information—“multi-intelligence fusion” is the technical term—was becoming too difficult. As one Chicago official put it, echoing a well-worn aphorism in surveillance circles, the city was “data-rich but information-poor.” What investigators needed was a tool that could cut a clean line through the labyrinth. What they needed was automated fusion.
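Genetec has not published Citigraf’s internals, but the core idea of automated fusion, correlating events from separate feeds by time and place into a single candidate incident, can be sketched in a few lines. Everything below (the feeds, coordinates, thresholds, and the greedy clustering) is a hypothetical illustration, not the product’s actual algorithm.

```python
# Rough sketch of "multi-intelligence fusion": cluster events from separate
# feeds (gunshot detectors, plate readers, cameras) that occur close together
# in time and space into one candidate incident. All details are hypothetical.
from dataclasses import dataclass

@dataclass
class SensorEvent:
    source: str       # e.g. "gunshot_detector", "plate_reader", "camera"
    timestamp: float  # seconds since epoch
    lat: float
    lon: float
    detail: str

def related(a: SensorEvent, b: SensorEvent,
            max_seconds: float = 120.0, max_degrees: float = 0.002) -> bool:
    """Crude proximity test: within two minutes and roughly 200 meters."""
    return (abs(a.timestamp - b.timestamp) <= max_seconds
            and abs(a.lat - b.lat) <= max_degrees
            and abs(a.lon - b.lon) <= max_degrees)

def fuse(events: list[SensorEvent]) -> list[list[SensorEvent]]:
    """Greedily group events into candidate incidents."""
    incidents: list[list[SensorEvent]] = []
    for ev in sorted(events, key=lambda e: e.timestamp):
        for incident in incidents:
            if any(related(ev, other) for other in incident):
                incident.append(ev)
                break
        else:
            incidents.append([ev])
    return incidents

feed = [
    SensorEvent("gunshot_detector", 1000.0, 41.881, -87.623, "2 shots"),
    SensorEvent("camera",           1030.0, 41.882, -87.624, "person running"),
    SensorEvent("plate_reader",     1060.0, 41.882, -87.622, "plate ABC123"),
    SensorEvent("plate_reader",     9000.0, 41.950, -87.700, "plate XYZ789"),
]

for i, incident in enumerate(fuse(feed), 1):
    print(f"Incident {i}: " + "; ".join(f"{e.source}: {e.detail}" for e in incident))
```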


IARPA Developing AI To Help Predict The Future

As far as secretive government projects go, the objectives of IARPA may be the riskiest and most far-reaching. With its mission to foster “high-risk, high-payoff” programs, this research arm of the U.S. intelligence community literally tries to predict the future. Staffed by spies and Ph.D.s, this organization aims to provide decision makers with real, accurate predictions of geopolitical events, using artificial intelligence and human “forecasters.”

IARPA, which stands for Intelligence Advanced Research Projects Activity, was founded in 2006 as part of the Office of the Director of National Intelligence. Some of the projects that it has funded focused on advancements in quantum computing, cryogenic computing, face recognition, universal language translators, and other initiatives that would fit well in a Hollywood action movie plot. But perhaps its main goal is to produce “anticipatory intelligence.” It’s a spy agency, after all.

In the interest of national security, IARPA wants to identify major world events before they happen, looking for terrorists, hackers or any perceived enemies of the United States. Wouldn’t you rather stop a crime before it happens?

Of course, that’s where we get into tricky political and sci-fi territory. Much of the research done by IARPA is actually out in the open, drawing on the public and outside experts to advance these technologies: the agency runs “open solicitations,” forecasting tournaments, and prize challenges that anyone can enter. You can pretty much send your idea in right now. But what happens to the R&D once it leaves the lab is, of course, often for only the NSA and the CIA to know.
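IARPA’s public forecasting tournaments, such as the ACE program that produced the Good Judgment Project, revolve around aggregating many forecasters’ probability estimates and scoring them against real outcomes, typically with the Brier score. A minimal sketch of that loop, with invented forecasts:

```python
# Minimal sketch of crowd-forecast aggregation and scoring, as used in
# IARPA-style forecasting tournaments. The forecasts below are invented.

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and the 0/1 outcome.
    Lower is better; an always-0.5 forecast scores 0.25."""
    return (forecast - outcome) ** 2

# Each forecaster's probability that some geopolitical event occurs.
forecasts = [0.70, 0.85, 0.60, 0.90, 0.75]

# Simple unweighted average; real tournaments weight forecasters by track
# record and often "extremize" the aggregate toward 0 or 1.
crowd = sum(forecasts) / len(forecasts)

outcome = 1  # suppose the event happened
print(f"Crowd forecast: {crowd:.2f}")                           # 0.76
print(f"Crowd Brier score: {brier_score(crowd, outcome):.3f}")  # 0.058
for p in forecasts:
    print(f"  forecaster at {p:.2f} -> Brier {brier_score(p, outcome):.3f}")
```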


Spotify Wants to Eavesdrop on Your Life to Pick the Next Song to Play

Music streaming service Spotify has reportedly filed a patent for new personality tracking technology that analyzes a user’s emotional state and suggests music based on it. The patent, titled “Identification of taste attributes from an audio signal,” details constantly monitoring “speech content and background noise” to provide song suggestions.

Music Business Worldwide reports that in October 2020, Spotify filed a separate patent for personality tracking technology that could determine a user’s emotional state in order to suggest the perfect song for them to listen to.

The filing explained that behavioral variables such as a user’s mood, their favorite genre of music, or their demographic could all “correspond to different personality traits of a user.” Spotify suggested that this could be used to promote personalized content to users based on the personality traits it detected.

Now a new U.S. Spotify patent shows that the company wants to use the technology to analyze users even further by using speech recognition to determine their “emotional state, gender, age, or accent.” These attributes can then be used to recommend content.

The new patent, titled “Identification of taste attributes from an audio signal,” was filed in February 2018 and granted on January 12, 2021.
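The patent describes a pipeline rather than code: infer attributes from captured audio, then use those attributes to filter what gets recommended. Here is a hypothetical sketch of that flow; the attribute extractor is a stub, since Spotify’s actual models are not public.

```python
# Hypothetical sketch of the pipeline the patent describes: audio in,
# inferred "taste attributes" out, recommendations filtered by them.
# infer_attributes() is a stub standing in for non-public classifiers.

def infer_attributes(audio_bytes: bytes) -> dict:
    """Stand-in for the speech/background-noise models in the patent."""
    return {"emotional_state": "sad", "age_bracket": "25-34",
            "accent": "midwestern", "environment": "alone"}

CATALOG = [
    {"track": "Track A", "mood": "sad"},
    {"track": "Track B", "mood": "happy"},
    {"track": "Track C", "mood": "sad"},
]

def recommend(audio_bytes: bytes, catalog: list[dict]) -> list[str]:
    attrs = infer_attributes(audio_bytes)
    # Naive rule: match track mood to the inferred emotional state.
    return [t["track"] for t in catalog if t["mood"] == attrs["emotional_state"]]

print(recommend(b"...microphone capture...", CATALOG))  # ['Track A', 'Track C']
```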


Scientists propose putting nanobots in our bodies to create ‘global superbrain’

A team has proposed using nanobots to create an ‘internet of thoughts’, where knowledge could be downloaded instantly, just by thinking about it.

An international team of scientists led by members of UC Berkeley and the US Institute for Molecular Manufacturing predicts that exponential progress in nanotechnology, nanomedicine, artificial intelligence (AI) and computation will lead this century to the development of a human ‘brain-cloud interface’ (B-CI).

Writing in Frontiers in Neuroscience, the team said that a B-CI would connect neurons and synapses in the brain to vast cloud computing networks in real time.

The concept isn’t new: science fiction writers and futurists, including Ray Kurzweil, proposed it decades ago. In fact, Facebook has even admitted it is working on a B-CI.

However, Kurzweil’s fantasy about neural nanobots capable of hooking us directly into the web is now being turned into reality by the senior author of this latest study, Robert Freitas Jr.


Microsoft Patent Shows Plans to Revive Dead Loved Ones as Chatbots

Microsoft has been granted a patent that would allow the company to make a chatbot using the personal information of deceased people.  

The patent describes creating a bot based on a person’s “images, voice data, social media posts, electronic messages” and other personal information.

“The specific person [who the chat bot represents] may correspond to a past or present entity (or a version thereof), such as a friend, a relative, an acquaintance, a celebrity, a fictional character, a historical figure, a random entity etc”, it goes on to say.

“The specific person may also correspond to oneself (e.g., the user creating/training the chat bot),” the patent adds, implying that living users could train a digital replacement in the event of their death.
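Stripped to its core, the patent describes building a conversational model from a person’s own words. Below is a toy, hypothetical retrieval version of that idea; the actual patent also covers voice synthesis and 2D/3D visual likenesses, none of which is attempted here.

```python
# Toy sketch of the patent's premise: answer in a specific person's voice
# by retrieving from their own stored messages. The messages and the
# word-overlap scoring are invented for illustration.
import string

messages = [
    "I always took the coast road when I wanted to think.",
    "Your grandmother made that stew every Sunday.",
    "Don't worry so much about work. It sorts itself out.",
]

def tokens(text: str) -> set:
    """Lowercase words with punctuation stripped."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def reply(prompt: str, corpus: list) -> str:
    """Return the stored message sharing the most words with the prompt."""
    prompt_words = tokens(prompt)
    return max(corpus, key=lambda msg: len(prompt_words & tokens(msg)))

print(reply("I'm worried about work", messages))
# -> "Don't worry so much about work. It sorts itself out."
```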


Dems deploying DARPA-funded AI-driven information warfare tool to target pro-Trump accounts

An anti-Trump Democratic-aligned political action committee advised by retired Army Gen. Stanley McChrystal is planning to deploy an information warfare tool that reportedly received initial funding from the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s secretive research arm — transforming technology originally envisioned as a way to fight ISIS propaganda into a campaign platform to benefit Joe Biden.

The Washington Post first reported that the initiative, called Defeat Disinfo, will utilize “artificial intelligence and network analysis to map discussion of the president’s claims on social media,” and then attempt to “intervene” by “identifying the most popular counter-narratives and boosting them through a network of more than 3.4 million influencers across the country — in some cases paying users with large followings to take sides against the president.”

Social media guru Curtis Hougland is heading up Defeat Disinfo, and he said he received the funding from DARPA when his work was “part of an effort to combat extremism overseas.”
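In practice, “network analysis to map discussion” usually means building a graph of accounts and interactions, then ranking narratives by the reach of the accounts carrying them. A hypothetical sketch using the networkx library; the accounts, edges, and narrative labels are invented.

```python
# Hypothetical sketch of "network analysis to map discussion": build an
# interaction graph, rank accounts by how many others amplify them, and
# surface the counter-narrative carried by the most influential account.
import networkx as nx

G = nx.DiGraph()
# Edges point from the amplifier to the account being amplified.
G.add_edges_from([
    ("user_a", "influencer_1"), ("user_b", "influencer_1"),
    ("user_c", "influencer_1"), ("user_d", "influencer_2"),
    ("user_e", "influencer_2"),
])

narrative_of = {"influencer_1": "counter-narrative X",
                "influencer_2": "counter-narrative Y"}

# In-degree centrality as a crude proxy for reach.
rank = nx.in_degree_centrality(G)
top = max(narrative_of, key=lambda acct: rank[acct])
print(f"Boost: {narrative_of[top]} (carried by {top})")
```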


Stanford researchers claim new facial tracking software can determine your political affiliation

As if artificial intelligence weren’t already frightening enough, researchers decided to teach computers how to identify a person’s political ideology based on their facial appearance and expressions.

The study was led by Stanford researcher Michal Kosinski, who already caused a stir in 2017 by programming machines that could determine whether you are gay or straight based on your appearance.
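Studies in this vein typically pair embeddings from a face-recognition network with a simple linear classifier. The schematic sketch below uses random stand-in embeddings, so its accuracy should hover near chance; it shows the shape of the pipeline, not Kosinski’s actual models or data.

```python
# Schematic of the typical pipeline: precomputed face embeddings fed to a
# linear classifier. Embeddings and labels here are random noise, so the
# result should sit near chance (~0.50); real studies use real face data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))   # stand-in for face-descriptor vectors
y = rng.integers(0, 2, size=1000)  # stand-in "affiliation" labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Accuracy on noise: {clf.score(X_te, y_te):.2f}")  # near 0.50
```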


Police Robots Are Not a Selfie Opportunity, They’re a Privacy Disaster Waiting to Happen

The arrival of government-operated autonomous police robots does not look like predictions in science fiction movies. An army of robots with gun arms is not kicking down your door to arrest you. Instead, a robot snitch that looks like a rolling trash can is programmed to decide whether a person looks suspicious—and then call the human police on them. Police robots may not be able to hurt people like armed predator drones used in combat—yet—but as history shows, calling the police on someone can prove equally deadly.

Long before the 1987 movie RoboCop, even before Karel Čapek invented the word robot in 1920, police have been trying to find ways to be everywhere at once. Widespread security cameras are one solution—but even a blanket of CCTV cameras couldn’t follow a suspect into every nook of public space. Thus, the vision of a police robot continued as a dream, until now. Whether they look like Boston Dynamics’ robodogs or Knightscope’s rolling pickles, robots are coming to a street, shopping mall, or grocery store near you.


Another “Pre-Crime” AI System Claims It Can Predict Who Will Share Disinformation Before It’s Published

We have previously covered the many weighty claims made by the creators of A.I. algorithms who say their technology can stop crime before it happens. Similar predictive A.I. is increasingly being used to stop the spread of misinformation, disinformation and general “fake news” by analyzing trends in behavior and language across social media.

However, as we’ve also covered, these systems have more often than not failed quite spectacularly, as many artificial intelligence experts and mathematicians have highlighted. One expert in particular, Uri Gal, Associate Professor in Business Information Systems at the University of Sydney, Australia, noted that from what he has seen so far, these systems are “no better at telling the future than a crystal ball.”

Please keep this in mind as you look at the latest lofty pronouncements from the University of Sheffield below. Nevertheless, we should also be aware that, much like their real-world counterparts in street-level pre-crime, these systems will most likely be rolled out across social media (if they haven’t been already), until their inherent flaws, biases and their own disinformation are fully exposed.
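For a sense of what such a system amounts to mechanically, disinformation “pre-crime” predictors are generally language classifiers: train on accounts known to have shared unreliable sources, then score new accounts by the language they use. A bare-bones hypothetical sketch with scikit-learn, on invented toy data (this is not the Sheffield team’s model):

```python
# Bare-bones sketch of language-based prediction: a bag-of-words classifier
# scoring how likely an account is to share unreliable sources. Posts and
# labels are invented; this is not the Sheffield authors' model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "they do not want you to know the real truth wake up",
    "mainstream media is lying to everyone again",
    "lovely weather for the farmers market this weekend",
    "our team shipped the quarterly report on time",
]
shared_unreliable = [1, 1, 0, 0]  # invented training labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, shared_unreliable)

new_post = ["the truth they are hiding from you"]
prob = model.predict_proba(new_post)[0, 1]
print(f"Predicted probability of sharing disinformation: {prob:.2f}")
```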
