Thousands Of Mathematicians Call For Boycotting Predictive Crime A.I. From Police

After a flurry of police brutality cases this year and protests sweeping U.S. streets, thousands of mathematicians have joined scientists and engineers in calling for a boycott of artificial intelligence work for law enforcement.

Over 2,000 mathematicians have signed a letter, to appear in a future publication of the American Mathematical Society, calling for a boycott of all collaboration with police and urging their colleagues to join it, Shadowproof reported.

The call to action for the mathematicians came after the police killings of George Floyd, Tony McDade, Breonna Taylor, and many more just this year.

“At some point, we all reach a breaking point, where what is right in front of our eyes becomes more obvious,” says Jayadev Athreya, a participant in the boycott and Associate Professor of Mathematics at the University of Washington. “Fundamentally, it’s a matter of justice.”

The mathematicians wrote an open letter, collecting thousands of signatures for a widespread boycott of algorithmic policing. Every mathematician within the group’s network pledges to refuse any and all collaboration with law enforcement.

The group is organizing a wide base of mathematicians in the hopes of cutting off police from using such technologies. The letter’s authors cite “deep concerns over the use of machine learning, AI, and facial recognition technologies to justify and perpetuate oppression.”

Predictive policing is one key area where some mathematicians and scientists have enabled the racist algorithms, which tell cops to treat specific areas as “hotspots” for potential crime. Activists and organizations have long criticized the bias in these practices. Algorithms trained on data produced by racist policing will reproduce that prejudice to “predict” where crime will be committed and who is potentially a criminal.
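The feedback loop the critics describe can be sketched in a few lines. The simulation below is purely illustrative (not any vendor’s actual system): two districts have the same underlying crime rate, but one starts with more historical arrest records, and a “predictive” model that allocates patrols in proportion to past records keeps reinforcing the initial bias.

```python
import random

random.seed(0)

# Hypothetical setup: two districts with identical true incident rates,
# but district 0 starts with more historical records from heavier past patrols.
true_rate = 0.1          # same underlying crime rate in both districts
records = [50, 10]       # biased historical arrest counts

for year in range(20):
    total = sum(records)
    # "Predictive" model: allocate 100 patrols proportionally to past records.
    patrols = [round(100 * r / total) for r in records]
    # Arrests can only happen where patrols are sent, so the over-patrolled
    # district generates more records, which draws even more patrols next year.
    for d in range(2):
        records[d] += sum(random.random() < true_rate for _ in range(patrols[d]))

print(records)  # district 0 accumulates far more records despite equal crime
```

Because the data the model “learns” from is itself a product of where police were sent, the prediction confirms the bias rather than measuring crime.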

Keep reading

Pentagon’s Top Spy Agency Turns To AI for Targeting and Operations Planning

The U.S. Defense Intelligence Agency (DIA) is getting ready for the “next battlefield” and counting on the expertise of private contractors, like Booz Allen Hamilton, to implement what it calls the Machine-assisted Analytic Rapid-repository System, or MARS for short. MARS is a critical data management system for “military targeting” and operations planning.

MARS is currently the DIA’s top priority, and according to DIA director Lt. Gen. Robert P. Ashley Jr., the aim is to replicate “the commercial Internet that everybody uses every day,” with the added functionality of providing a “foundational intelligence picture […] at speed and at scale.”

Terry Busch, chief of DIA’s integrated analysis and methodologies division, highlights the difference between the MARS program he manages and the old “stovepipe” data management technologies it is meant to replace: “What comes out of MARS at the end is not data, it’s analysis. It’s finished intelligence.”

Exactly which kind of intelligence will be assessed dynamically by the machine’s algorithms, in a new kind of database management system with AI functionality that promises to revolutionize the way data is received and acted upon. As it scours vast volumes of foreign intelligence in support of U.S. military operations around the world, MARS will be equipped both to handle large amounts of data, like the storage-intensive images and videos collected by the National Reconnaissance Office, and to analyze that information into actionable leads on the battlefield.

It is nothing less than the 1983 sci-fi classic “WarGames” come to life. A ‘machine’ that decides when to go to war based on the information it is fed. In the movie, a military drill of a surprise nuclear attack on the United States accidentally goes live after a hacker, played by Matthew Broderick, “unwittingly” puts the world on the brink of nuclear war.

MARS program manager Terry Busch doesn’t discount the possibility. “On the machine side,” Busch stated, “we have experienced confirmation bias in big data,” adding that it was a “real concern” given that they’ve had “the machine retrain itself to error”.

COVID-19, however, has given the top military intelligence agency the opportunity to “prove [its] ability to deliver the capabilities of MARS”, as DIA Chief of Staff John Sawyer said at a National Security Summit that concluded Friday. The pandemic, he claims, challenged “assumptions about the nature of our work” in ways that proved especially fruitful for the MARS program, which can now benefit from a new modality of military intelligence propagation that will be “the future of how we are going to understand fighting”.

Keep reading

The Air Force Just Tested “Robot Dogs” For Use In Base Security

They look like they were cast straight from an episode of Black Mirror, and eventually their mission could be similar in some ways, but for now, robot dogs are stretching their legs in a major test exercise for the United States Air Force.

Last week, the U.S. Air Force hosted the second demonstration of its new Advanced Battle Management System (ABMS), a digital battle network system designed to collect, process, and share data among U.S. and allied forces in real-time. The ABMS has already undergone several tests, including a live-fire exercise earlier this year conducted with data and communications provided, in part, by SpaceX Starlink satellites.

The highlight of last week’s demonstration was the use of multiple distributed sensors to detect and shoot down mock Russian cruise missiles. The system involves 5G and 4G networks, cloud computing systems, and AI systems to provide an unprecedented level of situational awareness and course-of-action decision-making. ABMS is a top modernization priority for the Department of the Air Force, which has dedicated $3.3 billion over five years to develop and deploy the architecture and related systems. Senior Air Force leaders cite the system as one of the most pressing capabilities for success in several key theaters of operations.

This latest ABMS demonstration was described as being one of the largest joint experiments in recent history, involving 65 government teams from every service including the Coast Guard, 35 separate military platforms, and 70 different industry partners. The exercise spanned 30 different geographic locations and four national test ranges.

Keep reading

US Military Robots on Fast Track to Leadership Role

With Covid-19 incapacitating startling numbers of U.S. service members and modern weapons proving increasingly lethal, the American military is relying ever more frequently on intelligent robots to conduct hazardous combat operations. Such devices, known in the military as “autonomous weapons systems,” include robotic sentries, battlefield-surveillance drones and autonomous submarines.

So far, in other words, robotic devices are merely replacing standard weaponry on conventional battlefields. Now, however, in a giant leap of faith, the Pentagon is seeking to take this process to an entirely new level — by replacing not just ordinary soldiers and their weapons, but potentially admirals and generals with robotic systems.

Admittedly, those systems are still in the development stage, but the Pentagon is now rushing their future deployment as a matter of national urgency. Every component of a modern general staff — including battle planning, intelligence-gathering, logistics, communications, and decision-making — is, according to the Pentagon’s latest plans, to be turned over to complex arrangements of sensors, computers, and software.

All these will then be integrated into a “system of systems,” now dubbed the Joint All-Domain Command-and-Control, or JADC2 (since acronyms remain the essence of military life). Eventually, that amalgam of systems may indeed assume most of the functions currently performed by American generals and their senior staff officers.

Keep reading

Machines Can Learn Unsupervised ‘At Speed Of Light’ After AI Breakthrough, Scientists Say

Researchers have achieved a breakthrough in the development of artificial intelligence by using light instead of electricity to perform computations.

The new approach significantly improves both the speed and efficiency of machine learning neural networks – a form of AI that aims to replicate the functions performed by a human brain in order to teach itself a task without supervision.

Current processors used for machine learning are limited in performing complex operations by the power required to process the data. The more intelligent the task, the more complex the data, and therefore the greater the power demands.

Such networks are also limited by the slow transmission of electronic data between the processor and the memory.

Researchers from George Washington University in the US discovered that using photons within neural-network tensor processing units (TPUs) could overcome these limitations and create more powerful and power-efficient AI.
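The bottleneck being targeted is the multiply-accumulate (MAC) arithmetic at the heart of every neural-network layer. A rough back-of-the-envelope sketch (in ordinary Python, purely illustrative) shows why the operation count, and hence the electrical power and memory traffic, grows quadratically with layer width:

```python
# Illustrative only: count multiply-accumulate (MAC) operations in one dense
# neural-network layer, the workload photonic processors aim to accelerate.
def dense_layer_macs(inputs: int, outputs: int) -> int:
    # Each output neuron computes a weighted sum over every input,
    # so the layer needs inputs * outputs multiplications and additions.
    return inputs * outputs

small = dense_layer_macs(128, 128)     # 16,384 MACs
large = dense_layer_macs(1024, 1024)   # 1,048,576 MACs

# Doubling the layer width quadruples the arithmetic; an optical processor
# performs these weighted sums in parallel as light propagates, rather than
# shuttling each operand between electronic processor and memory.
print(small, large)
```

Every one of those operations on a conventional chip also means moving weights and activations between memory and processor, which is the transmission bottleneck the article describes.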

Keep reading