Ayanna Howard, a robotics researcher at Georgia Tech, is concerned that robotic systems, which ultimately rely on artificial intelligence shaped by human input, will show bias against Black people. She tells the New York Times that “given the current tensions arising from police shootings of African-American men from Ferguson to Baton Rouge, it is disconcerting that robot peacekeepers, including police and military robots, will, at some point, be given increased freedom to decide whether to take a human life, especially if problems related to bias have not been resolved.”
Howard and others say that many of today’s algorithms are biased against people of color and others who are unlike the white, male, affluent, and able-bodied designers of most computer and robot systems.