An AI just helped an Air Force pilot fly a U-2 spy plane during a simulated missile strike

The Air Force has taken a giant step toward creating an artificial intelligence system that would never in a million years turn on humanity – unlike the “Skynet” nemesis in the first two Terminator movies, which are the only ones that count.

Recently, an artificial intelligence algorithm named ARTUµ — possibly a reference to Star Wars’ R2-D2 — performed tasks on a U-2 Dragon Lady spy plane that are normally done by humans, the Air Force announced on Wednesday.

“After takeoff, the sensor control was positively handed-off to ARTUµ who then manipulated the sensor, based off insight previously learned from over a half-million computer simulated training iterations,” according to a news release from the humans who run the Air Force — for now. “The pilot and AI successfully teamed to share the sensor and achieve the mission objectives.”

The algorithm handled the plane’s tactical navigation while an Air Force major with the callsign “Vudu” flew the U-2, which was assigned to the 9th Reconnaissance Wing at Beale Air Force Base, California, the news release says.

In short: Man and machine successfully flew a reconnaissance mission during a simulated missile strike.
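
The Air Force hasn’t said how ARTUµ works under the hood, but “over a half-million computer simulated training iterations” is the signature of reinforcement learning, in which an agent acts in a simulator over and over and nudges its behavior toward whatever earned reward. Below is a deliberately toy sketch of that loop; the bearing-pointing task, the reward, and every name in it are hypothetical illustrations, not the actual ARTUµ system.

```python
# A toy reinforcement-learning loop: learn, purely from simulated trials,
# which way to point a sensor given a radar cue. Everything here is a
# hypothetical illustration, not the Air Force's algorithm.
import random

N = 8                      # toy discretization of sensor bearings
EPISODES = 500_000         # same order of magnitude as the release mentions
alpha, epsilon = 0.1, 0.1  # learning rate, exploration rate

# Q[cue][action]: learned value of pointing the sensor at bearing `action`
# when the simulated radar reports a contact at bearing `cue`.
Q = [[0.0] * N for _ in range(N)]

def simulate(cue: int, pointed: int) -> float:
    """Toy simulator: full reward only for pointing at the cued bearing."""
    return 1.0 if pointed == cue else 0.0

for _ in range(EPISODES):
    cue = random.randrange(N)                        # simulated contact
    if random.random() < epsilon:                    # explore occasionally...
        action = random.randrange(N)
    else:                                            # ...otherwise exploit
        action = max(range(N), key=Q[cue].__getitem__)
    reward = simulate(cue, action)
    Q[cue][action] += alpha * (reward - Q[cue][action])  # incremental update
```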

Keep reading

Suspect AI Software Verified Mail-In Ballots With Little Human Oversight in Key Battleground States

Though accusations of election fraud in the 2020 US presidential election have been swirling across social media and some news outlets for much of the past week, few have examined the role of a little-known Silicon Valley company whose artificial intelligence (AI) algorithm was used to accept or reject ballots in highly contested states such as Nevada.

That company, Parascript, has long-standing cozy ties to defense contractors such as Lockheed Martin and tech giants including Microsoft, in addition to being a contractor to the US Postal Service. Its founder, Stepan Pachikov, better known for cofounding the app Evernote in 2007, is a long-standing donor to Democratic presidential candidates, including in the 2020 race.

Parascript’s AI software was used during this election in at least eight states for matching signatures on ballot envelopes with those in government databases in order to “ease the workload of staff enforcing voter signature rules” resulting from the influx of mail-in ballots. Reuters, which reported on the use of the technology, asked the company to provide a list of counties and states using its software for the 2020 election. Parascript, however, declined to supply the list, replying instead that its clients “included 20 of the top 100 counties by registered voters.”

Despite not receiving the official list from Parascript, Reuters was able to compile its own partial list, which revealed that several counties in Florida, Colorado, Washington, and Utah, among others, utilized the AI software to determine the validity of ballots. Reuters also reported that Clark County, Nevada, a hotspot of fraud allegations and litigation between the Trump and Biden campaigns, used the software. Reuters was able to determine how the software was used in some counties, with many allowing it to approve anywhere from 20 to 75 percent of mail-in ballots as acceptable. For several counties included in the Reuters list, staff reviewed 1 percent or less of the AI software’s acceptances. Figures were not available for Clark County, Nevada.
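
The numbers Reuters reports sketch a familiar automation pattern: the software scores how closely an envelope signature matches the one on file, auto-accepts everything above some threshold, and routes only a small sample of those acceptances past human eyes. Here is a minimal sketch of that threshold-plus-audit pipeline; the threshold, the audit rate, and all names are invented for illustration and are not Parascript’s actual product.

```python
# Minimal sketch of threshold-based signature triage with a small human audit.
# The cutoff, audit rate, and names are hypothetical illustrations.
import random
from dataclasses import dataclass

@dataclass
class Ballot:
    ballot_id: int
    match_score: float   # 0..1 similarity between envelope and reference signature

ACCEPT_THRESHOLD = 0.85  # hypothetical county-configured cutoff
AUDIT_RATE = 0.01        # "1 percent or less" of acceptances get human review

def triage(ballots):
    auto_accepted, needs_review = [], []
    for b in ballots:
        if b.match_score >= ACCEPT_THRESHOLD:
            auto_accepted.append(b)   # accepted by the machine alone
        else:
            needs_review.append(b)    # routed to election staff
    # Staff spot-check only a small sample of the machine's acceptances
    audit_sample = [b for b in auto_accepted if random.random() < AUDIT_RATE]
    return auto_accepted, needs_review, audit_sample

ballots = [Ballot(i, random.random()) for i in range(10_000)]
accepted, review, audit = triage(ballots)
print(len(accepted), len(review), len(audit))
```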

Keep reading

The Threat Of “Killer Robots” Is Real And Closer Than You Might Think

From self-driving cars to digital assistants, artificial intelligence (AI) is fast becoming an integral technology in our lives today. But this same technology that can help to make our day-to-day life easier is also being incorporated into weapons for use in combat situations.

Weaponised AI features heavily in the security strategies of the US, China and Russia, and some existing weapons systems already include autonomous capabilities based on AI. Developing weaponised AI further means machines could potentially make decisions to harm and kill people based on their programming, without human intervention.

Countries that back the use of AI weapons claim it allows them to respond to emerging threats at greater than human speed. They also say it reduces the risk to military personnel and increases the ability to hit targets with greater precision. But outsourcing use-of-force decisions to machines violates human dignity. And it’s also incompatible with international law, which requires human judgement in context.

Indeed, the role that humans should play in use-of-force decisions has become an increasing area of focus in many United Nations (UN) meetings. And at a recent UN meeting, states agreed that it’s unacceptable on ethical and legal grounds to delegate use-of-force decisions to machines – “without any human control whatsoever”.

But while this may sound like good news, there continue to be major differences in how states define “human control”.

Keep reading

Dems deploying DARPA-funded AI-driven information warfare tool to target pro-Trump accounts

An anti-Trump Democratic-aligned political action committee advised by retired Army Gen. Stanley McChrystal is planning to deploy an information warfare tool that reportedly received initial funding from the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s secretive research arm — transforming technology originally envisioned as a way to fight ISIS propaganda into a campaign platform to benefit Joe Biden.

The Washington Post first reported that the initiative, called Defeat Disinfo, will utilize “artificial intelligence and network analysis to map discussion of the president’s claims on social media,” and then attempt to “intervene” by “identifying the most popular counter-narratives and boosting them through a network of more than 3.4 million influencers across the country — in some cases paying users with large followings to take sides against the president.”
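
The techniques the Post describes, mapping discussion with network analysis and identifying the accounts best positioned to boost a message, are standard social-graph analysis. Here is a minimal sketch using the open-source networkx library: build a graph of who amplifies whom, then rank accounts by PageRank to see where a boosted narrative would travel furthest. The accounts and edges below are invented placeholders, not Defeat Disinfo’s data or code.

```python
# Sketch of influence ranking on an amplification graph. All accounts and
# edges are invented placeholders for illustration.
import networkx as nx

G = nx.DiGraph()
# An edge (a, b) means account `a` retweets/amplifies account `b`.
G.add_edges_from([
    ("user1", "influencer_a"), ("user2", "influencer_a"),
    ("user3", "influencer_a"), ("user4", "influencer_b"),
    ("user5", "influencer_b"), ("influencer_b", "influencer_a"),
])

# PageRank scores accounts by how much amplification flows into them,
# a simple proxy for where a boosted message would spread furthest.
scores = nx.pagerank(G)
for account, score in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
    print(f"{account}: {score:.3f}")
```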

Social media guru Curtis Hougland is heading up Defeat Disinfo, and he said he received the funding from DARPA when his work was “part of an effort to combat extremism overseas.”

Keep reading

Thousands Of Mathematicians Call For Boycotting Predictive Crime A.I. From Police

After a flurry of police brutality cases this year and protests sweeping U.S. streets, thousands of mathematicians have joined scientists and engineers in calling for a boycott on the use of artificial intelligence by law enforcement.

Over 2,000 mathematicians have signed a letter, set to appear in a future publication of the American Mathematical Society, calling for a boycott of all collaboration with police and urging their colleagues to do the same, Shadowproof reported.

The catalyst for the mathematicians’ call to action was the police killings of George Floyd, Tony McDade, Breonna Taylor, and many others this year.

“At some point, we all reach a breaking point, where what is right in front of our eyes becomes more obvious,” says Jayadev Athreya, a participant in the boycott and Associate Professor of Mathematics at the University of Washington. “Fundamentally, it’s a matter of justice.”

The mathematicians’ open letter has collected thousands of signatures in support of a widespread boycott of algorithmic policing. Every mathematician within the group’s network pledges to refuse any and all collaboration with law enforcement.

The group is organizing a wide base of mathematicians in the hopes of cutting off police from using such technologies. The letter’s authors cite “deep concerns over the use of machine learning, AI, and facial recognition technologies to justify and perpetuate oppression.”

Predictive policing is one key area where some mathematicians and scientists have enabled racist algorithms, which tell cops to treat specific areas as “hotspots” for potential crime. Activists and organizations have long criticized the bias in these practices: algorithms trained on data produced by racist policing will reproduce that prejudice to “predict” where crime will be committed and who is potentially a criminal.
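
That feedback loop is simple enough to simulate: if the “predictor” is trained on arrest records, and patrols concentrate wherever it points, the extra patrols generate the very arrests that confirm the prediction. A toy simulation follows, with every number invented for illustration.

```python
# Toy simulation of the predictive-policing feedback loop. Two districts have
# equal underlying crime; one starts with more recorded arrests only because
# it was patrolled more. All numbers are invented for illustration.
arrests = {"district_a": 70, "district_b": 50}   # district_a was over-policed

for year in range(5):
    # "Predict" the hotspot from history: just the argmax of past arrests
    hotspot = max(arrests, key=arrests.get)
    other = min(arrests, key=arrests.get)
    arrests[hotspot] += 30  # patrols concentrate here, so more arrests are logged
    arrests[other] += 10    # fewer patrols, fewer arrests logged
    print(year, arrests)
# The gap widens every year, "confirming" the prediction the data created.
```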

Keep reading

Pentagon’s Top Spy Agency Turns To AI for Targeting and Operations Planning

The U.S. Defense Intelligence Agency (DIA) is getting ready for the “next battlefield” and counting on the expertise of private contractors, like Booz Allen Hamilton, to implement what it calls the Machine-assisted Analytic Rapid-repository System, or MARS for short. MARS is a critical data management system for “military targeting” and operations planning.

MARS is currently the DIA’s top priority, and according to DIA director Lt. Gen. Robert P. Ashley Jr., the aim is to replicate “the commercial Internet that everybody uses every day,” with the added functionality of providing a “foundational intelligence picture […] at speed and at scale.”

Terry Busch, chief of DIA’s integrated analysis and methodologies division, highlights the difference between the MARS program he manages and the old “stovepipe” data management technologies it is meant to replace: “What comes out of MARS at the end is not data, it’s analysis. It’s finished intelligence.”

Exactly which kind of intelligence will be assessed dynamically by the machine’s algorithms, running in a new kind of AI-enabled database management system that promises to revolutionize the way data is received and acted upon. As it scours and collects vast datasets and volumes of foreign intelligence that support U.S. military operations around the world, MARS will be equipped both to handle large amounts of data, like the storage-intensive images and videos collected by the National Reconnaissance Office, and to analyze that information to produce actionable leads on the battlefield.

It is nothing less than the 1983 sci-fi classic “WarGames” come to life: a ‘machine’ that decides when to go to war based on the information it is fed. In the movie, a military simulation of a surprise nuclear attack on the United States accidentally goes live after a hacker, played by Matthew Broderick, “unwittingly” puts the world on the brink of nuclear war.

MARS program manager Terry Busch doesn’t discount the possibility. “On the machine side,” Busch stated, “we have experienced confirmation bias in big data,” adding that it was a “real concern” given that they’ve had “the machine retrain itself to error”.
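
Busch’s phrase “retrain itself to error” names a well-documented failure mode: a pipeline that folds its own outputs back into its training data compounds any systematic bias instead of averaging it away. A toy illustration, with all values invented:

```python
# Toy illustration of a model "retraining itself to error": an auto-labeling
# pipeline with a small systematic bias, retrained on its own outputs, bakes
# in a little more error every cycle. All values are invented.
TRUE_VALUE = 10.0
BIAS = 1.0               # small, constant error the auto-labeler introduces

estimate = TRUE_VALUE    # the model starts out correct
for cycle in range(8):
    # The model labels the next batch with its own estimate (plus the bias),
    # and older ground truth ages out of the training window.
    batch = [estimate + BIAS] * 100
    estimate = sum(batch) / len(batch)   # "retrain" = refit to the new batch
    print(f"cycle {cycle}: estimate = {estimate:.1f} (truth = {TRUE_VALUE})")
# The estimate climbs 11, 12, 13, ...: each retrain reinforces the last error.
```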

COVID-19, however, has given the top military intelligence department the opportunity to “prove [its] ability to deliver the capabilities of MARS”, as DIA Chief of Staff John Sawyer said at a National Security Summit that concluded Friday. The pandemic challenged “assumptions about the nature of our work,” he said, in ways that proved especially fruitful for the MARS program, which can now benefit from a new modality of military intelligence propagation that will be “the future of how we are going to understand fighting”.

Keep reading

The Air Force Just Tested “Robot Dogs” For Use In Base Security

They look like they were cast straight from an episode of Black Mirror, and eventually, their mission could be similar in some ways, but for now, robot dogs are stretching their legs in a big test exercise environment for the United States Air Force.

Last week, the U.S. Air Force hosted the second demonstration of its new Advanced Battle Management System (ABMS), a digital battle network system designed to collect, process, and share data among U.S. and allied forces in real time. The ABMS has already undergone several tests, including a live-fire exercise earlier this year conducted with data and communications provided, in part, by SpaceX Starlink satellites.

The highlight of last week’s demonstration was the use of multiple distributed sensors to detect and shoot down mock Russian cruise missiles. The system involves 5G and 4G networks, cloud computing systems, and AI systems to provide an unprecedented level of situational awareness and course-of-action decision-making. ABMS is a top modernization priority for the Department of the Air Force, which has dedicated $3.3 billion over five years to develop and deploy the architecture and related systems. Senior Air Force leaders cite the system as one of the most pressing capabilities for success in several key theaters of operations.

This latest ABMS demonstration was described as being one of the largest joint experiments in recent history, involving 65 government teams from every service including the Coast Guard, 35 separate military platforms, and 70 different industry partners. The exercise spanned 30 different geographic locations and four national test ranges.

Keep reading

US Military Robots on Fast Track to Leadership Role

With Covid-19 incapacitating startling numbers of U.S. service members and modern weapons proving increasingly lethal, the American military is relying ever more frequently on intelligent robots to conduct hazardous combat operations. Such devices, known in the military as “autonomous weapons systems,” include robotic sentries, battlefield-surveillance drones and autonomous submarines.

So far, in other words, robotic devices are merely replacing standard weaponry on conventional battlefields. Now, however, in a giant leap of faith, the Pentagon is seeking to take this process to an entirely new level — by replacing not just ordinary soldiers and their weapons, but potentially admirals and generals with robotic systems.

Admittedly, those systems are still in the development stage, but the Pentagon is now rushing their future deployment as a matter of national urgency. Every component of a modern general staff — including battle planning, intelligence-gathering, logistics, communications, and decision-making — is, according to the Pentagon’s latest plans, to be turned over to complex arrangements of sensors, computers, and software.

All these will then be integrated into a “system of systems,” now dubbed the Joint All-Domain Command-and-Control, or JADC2 (since acronyms remain the essence of military life). Eventually, that amalgam of systems may indeed assume most of the functions currently performed by American generals and their senior staff officers.

Keep reading

Machines Can Learn Unsupervised ‘At Speed Of Light’ After AI Breakthrough, Scientists Say

Researchers have achieved a breakthrough in the development of artificial intelligence by using light instead of electricity to perform computations.

The new approach significantly improves both the speed and efficiency of machine learning neural networks – a form of AI that aims to replicate the functions performed by a human brain in order to teach itself a task without supervision.

Current processors used for machine learning are limited in performing complex operations by the power required to process the data. The more intelligent the task, the more complex the data, and therefore the greater the power demands.

Such networks are also limited by the slow transmission of electronic data between the processor and the memory.

Researchers from George Washington University in the US discovered that using photons within neural network (tensor) processing units (TPUs) could overcome these limitations and create more powerful and power-efficient AI.
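
Whatever the underlying hardware, the workload a tensor processing unit exists to accelerate is the same: neural-network inference is dominated by large matrix multiplications, and shuttling activations between processor and memory around each one is exactly the bottleneck the researchers describe. Below is a minimal NumPy sketch of that core operation, with arbitrary sizes, just to show what the photonics would be computing.

```python
# The core TPU workload: dense matrix multiplies with a nonlinearity between
# them. Sizes are arbitrary; this only illustrates the operation a photonic
# processor would perform as light passes through it.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)            # input activations
W1 = rng.standard_normal((4096, 1024))   # first layer weights
W2 = rng.standard_normal((1024, 4096))   # second layer weights

h = np.maximum(W1 @ x, 0.0)  # matmul + ReLU; on a CPU/GPU, each step costs a
y = W2 @ h                   # round-trip to memory, the bottleneck photonics skips
print(y.shape)               # (1024,)
```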

Keep reading