Robots may be future of policing but activists warn they could be racist

Ayanna Howard, a robotics researcher at Georgia Tech, is concerned that robot technology, which ultimately relies on artificial intelligence built from human input, will show biases against Black people. She tells the New York Times that “given the current tensions arising from police shootings of African-American men from Ferguson to Baton Rouge, it is disconcerting that robot peacekeepers, including police and military robots, will, at some point, be given increased freedom to decide whether to take a human life, especially if problems related to bias have not been resolved.”

Howard and others say that many of today’s algorithms are biased against people of color and others who are unlike the white, male, affluent and able-bodied designers of most computer and robot systems.


Police Robots Are Not a Selfie Opportunity, They’re a Privacy Disaster Waiting to Happen

The arrival of government-operated autonomous police robots does not look like predictions in science fiction movies. An army of robots with gun arms is not kicking down your door to arrest you. Instead, a robot snitch that looks like a rolling trash can is programmed to decide whether a person looks suspicious—and then call the human police on them. Police robots may not be able to hurt people like armed predator drones used in combat—yet—but as history shows, calling the police on someone can prove equally deadly.

Long before the 1987 movie RoboCop, even before Karel Čapek introduced the word robot in 1920, police had been trying to find ways to be everywhere at once. Widespread security cameras are one solution—but even a blanket of CCTV cameras couldn’t follow a suspect into every nook of public space. Thus, the vision of a police robot continued as a dream, until now. Whether they look like Boston Dynamics’ robodogs or Knightscope’s rolling pickles, robots are coming to a street, shopping mall, or grocery store near you.


The Air Force Just Tested “Robot Dogs” For Use In Base Security

They look like they were cast straight from an episode of Black Mirror, and their mission could eventually be similar in some ways. For now, though, robot dogs are stretching their legs in a major test exercise for the United States Air Force.

Last week, the U.S. Air Force hosted the second demonstration of its new Advanced Battle Management System (ABMS), a digital battle network system designed to collect, process, and share data among U.S. and allied forces in real-time. The ABMS has already undergone several tests, including a live-fire exercise earlier this year conducted with data and communications provided, in part, by SpaceX Starlink satellites.

The highlight of last week’s demonstration was the use of multiple distributed sensors to detect and shoot down mock Russian cruise missiles. The system involves 5G and 4G networks, cloud computing systems, and AI systems to provide an unprecedented level of situational awareness and course-of-action decision making. ABMS is a top modernization priority for the Department of the Air Force, which has dedicated $3.3 billion over five years to develop and deploy the architecture and related systems. Senior Air Force leaders cite the system as one of the most pressing capabilities for success in several key theaters of operations.

This latest ABMS demonstration was described as being one of the largest joint experiments in recent history, involving 65 government teams from every service including the Coast Guard, 35 separate military platforms, and 70 different industry partners. The exercise spanned 30 different geographic locations and four national test ranges.
