‘Weapons of the future’: Russia has launched mass production of autonomous high-tech WAR ROBOTS, Defense Minister Shoigu announces

The Russian military will soon be equipped with autonomous war robots capable of acting independently on the battlefield, Defense Minister Sergey Shoigu has said, adding that Moscow has launched mass production of such machines.

“These are not just some experimental prototypes but robots that can really be shown in sci-fi movies since they can fight on their own,” the minister told the Russian Zvezda broadcaster during the ‘New Knowledge’ forum on Friday. Held in several Russian cities from May 20 to May 22, the forum is a series of educational events featuring top specialists in a variety of fields.

“A major effort” has been made to develop “the weapons of the future,” Shoigu said, referring to war robots equipped with artificial intelligence (AI). The bots, which are said to be capable of independently assessing a combat situation, are part of the new state-of-the-art arsenal that the Russian military is currently focused on.

Keep reading

Pentagon Marches Towards AI Taking The Kill Shot

Dozens of autonomous war machines capable of deadly force conducted a field training exercise south of Seattle last August. The exercise involved no human operators, only robots powered by artificial intelligence seeking out mock enemy combatants.

The exercise, organized by the Defense Advanced Research Projects Agency, a blue-sky research division of the Pentagon, armed the robots with radio transmitters designed to simulate a weapon firing. The drill expanded the Pentagon’s understanding of how automation in military systems on the modern battlefield can work together to eliminate enemy combatants.

“The demonstrations also reflect a subtle shift in the Pentagon’s thinking about autonomous weapons, as it becomes clearer that machines can outperform humans at parsing complex situations or operating at high speed,” according to WIRED.

It’s undeniable that artificial intelligence will shape warfare for years to come. Military planners are moving ahead with incorporating autonomous weapons systems into the modern battlefield.

General John Murray of the US Army Futures Command told an audience at the US Military Academy in April that swarms of robots will likely force the military to decide if a human needs to intervene before a robot engages the enemy.

Keep reading

Google Veterans Team Up With Gov’t to Fill the Sky with AI Drones That Predict Your Behavior

Imagine, in the near future, a swarm of tiny drones patrolling the skies across the country. These drones are not flown by any pilot; they are entirely autonomous, carrying out the directives coded into them during manufacturing: surveil, record, follow, and even predict your next move. Sounds like something out of a dystopian sci-fi flick, right? Well, there is no need to imagine this scenario or watch it in a movie.

It is already here.

Adam Bry and Abraham Bachrach, the CEO and CTO, respectively, of a company called Skydio, have helped usher in this new reality. The duo started together at MIT before moving on to Google, where they worked on Project Wing.

After moving on from building self-flying aircraft at Google, the duo founded Skydio and have been giving their autonomous drones to police departments ever since, free of charge.

“We’re solving a lot of the core problems that are needed to make drones trustworthy and able to fly themselves,” Bry told Forbes in an interview this week. “Autonomy—that core capability of giving a drone the skills of an expert pilot built in, in the software and the hardware—that’s really what we’re all about as a company.”

According to Forbes, Skydio “claims to be shipping the most advanced AI-powered drone ever built: a quadcopter that costs as little as $1,000, which can latch on to targets and follow them, dodging all sorts of obstacles and capturing everything on high-quality video. Skydio claims that its software can even predict a target’s next move, be that target a pedestrian or a car.”

Keep reading

Chatbots That Resurrect the Dead: Legal Experts Weigh in on “Disturbing” Technology

It was recently revealed that in 2017 Microsoft patented a chatbot which, if built, would digitally resurrect the dead. Using AI and machine learning, the proposed chatbot would bring our digital persona back to life for our family and friends to talk to. When pressed on the technology, Microsoft representatives admitted that the chatbot was “disturbing”, and that there were currently no plans to put it into production.

Still, it appears that the technical tools and personal data are in place to make digital reincarnations possible. AI chatbots have already passed the “Turing Test”, meaning they have fooled people into believing they are human. Meanwhile, most people in the modern world now leave behind enough data to teach AI programmes about our conversational idiosyncrasies. Convincing digital doubles may be just around the corner.
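To see why a lifetime of logged messages is enough raw material, consider a deliberately crude sketch: even a simple bigram model trained on someone's message history picks up their characteristic word pairings. This is purely illustrative, with invented messages; real systems of the kind described here would use large neural language models, not bigrams.

```python
# Illustrative only: a bigram model learned from a (fictional) person's
# message history reproduces some of their phrasing habits.
import random
from collections import defaultdict

def train_bigrams(messages):
    """Map each word to the list of words that followed it in the corpus."""
    model = defaultdict(list)
    for msg in messages:
        words = msg.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, start, length=6, seed=0):
    """Walk the bigram chain from a starting word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Hypothetical message history standing in for a person's digital trail:
history = ["see you soon mate", "soon mate, promise", "see you at eight"]
model = train_bigrams(history)
print(generate(model, "see"))  # e.g. "see you soon mate, promise"
```

The point is not the quality of the output but how little machinery is needed before recognisable fragments of a person's voice start coming back.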

But there are currently no laws governing digital reincarnation. Your right to data privacy after your death is far from set in stone, and there is currently no way for you to opt out of being digitally resurrected. This legal ambiguity leaves room for private companies to make chatbots out of your data after you’re dead.

Our research has looked at the surprisingly complex legal question of what happens to your data after you die. At present, and in the absence of specific legislation, it’s unclear who might have the ultimate power to reboot your digital persona after your physical body has been put to rest.

Keep reading

Documentary Exposes How Facial Recognition Tech Doesn’t See Dark Faces Accurately

What’s NOT to love about Artificial Intelligence (AI)?

  • Millions of jobs being lost
  • Censorship, unwarranted surveillance, and other unethical and dangerous applications
  • Inaccuracies that can lead to life-altering consequences

One documentary reveals more unscrupulous details:

CODED BIAS explores the fallout of MIT Media Lab researcher Joy Buolamwini’s discovery that facial recognition does not see dark-skinned faces accurately, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all.

Modern society sits at the intersection of two crucial questions: What does it mean when artificial intelligence increasingly governs our liberties? And what are the consequences for the people AI is biased against? When MIT Media Lab researcher Joy Buolamwini discovers that most facial-recognition software does not accurately identify darker-skinned faces and the faces of women, she delves into an investigation of widespread bias in algorithms. As it turns out, artificial intelligence is not neutral, and women are leading the charge to ensure our civil rights are protected.

Keep reading

Pieces Of Color: When YouTube’s oversensitive filters think CHESS VIDEOS are racist, will language have to adapt to Big Tech?

With all its talk of black-on-white war, YouTube’s “hate speech”-filtering AI can’t tell the difference between chess players and violent racists. Perhaps leaving robots in charge of the English language isn’t such a good idea.

Croatian chess player Antonio Radic, known to his million subscribers as ‘Agadmator,’ runs the world’s most popular chess channel on YouTube. Last summer he found his account suspended due to its “harmful and dangerous” content. Radic, who was in the middle of a show with Grandmaster Hikaru Nakamura at the time, was puzzled. He received no explanation for the ban, which was reversed on appeal, but speculated that YouTube’s censorship algorithm may have heard him say something like “black goes to B6 instead of C6, white will always be better.”

“If that’s the case, I’m sure all of my 1,800 videos will be taken down, as it’s black against white to the death in every video,” he told the Sun at the time.
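Radic's guess points at a classic failure mode: a filter that matches phrases without understanding context. YouTube's actual moderation model is not public, so the sketch below is a hypothetical stand-in with an invented phrase list, showing only how a naive approach would flag chess commentary.

```python
# Illustrative only: a naive phrase-matching filter, NOT YouTube's system.
# The flagged-phrase list is hypothetical.
FLAGGED_PHRASES = [
    "black against white",
    "attack on white",
    "white will always be better",
]

def looks_harmful(text):
    """Flag text if it contains any phrase from the list, ignoring case."""
    text = text.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

chess_commentary = "Black goes to B6 instead of C6, white will always be better."
print(looks_harmful(chess_commentary))  # True: a false positive on chess talk
```

Context-free matching like this has no way to tell that "black" and "white" name chess pieces rather than people, which is exactly the ambiguity Radic suspects tripped the algorithm.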

Keep reading

Robots may be future of policing but activists warn they could be racist

Ayanna Howard, a robotics researcher at Georgia Tech, is concerned that robot technology, which ultimately relies on human-supplied artificial intelligence, will show biases against Black people. She tells the New York Times that “given the current tensions arising from police shootings of African-American men from Ferguson to Baton Rouge, it is disconcerting that robot peacekeepers, including police and military robots, will, at some point, be given increased freedom to decide whether to take a human life, especially if problems related to bias have not been resolved.”

Howard and others say that many of today’s algorithms are biased against people of color and others who are unlike the white, male, affluent and able-bodied designers of most computer and robot systems.

Keep reading

There Are Spying Eyes Everywhere—and Now They Share a Brain

Citigraf was conceived in 2016, when the Chicago Police Department hired Genetec to solve a surveillance conundrum. Like other large law enforcement organizations around the country, the department had built up such an impressive arsenal of technologies for keeping tabs on citizens that it had reached the point of surveillance overload. To get a clear picture of an emergency in progress, officers often had to bushwhack through dozens of byzantine databases and feeds from far-flung sensors, including gunshot detectors, license plate readers, and public and private security cameras. This process of braiding together strands of information—“multi-intelligence fusion” is the technical term—was becoming too difficult. As one Chicago official put it, echoing a well-worn aphorism in surveillance circles, the city was “data-rich but information-poor.” What investigators needed was a tool that could cut a clean line through the labyrinth. What they needed was automated fusion.

Keep reading

IARPA Developing AI To Help Predict The Future

As far as secretive government projects go, the objectives of IARPA may be the riskiest and most far-reaching. With its mission to foster “high-risk, high-payoff” programs, this research arm of the U.S. intelligence community literally tries to predict the future. Staffed by spies and Ph.D.s, this organization aims to provide decision makers with real, accurate predictions of geopolitical events, using artificial intelligence and human “forecasters.”

IARPA, which stands for Intelligence Advanced Research Projects Activity, was founded in 2006 as part of the Office of the Director of National Intelligence. Some of the projects that it has funded focused on advancements in quantum computing, cryogenic computing, face recognition, universal language translators, and other initiatives that would fit well in a Hollywood action movie plot. But perhaps its main goal is to produce “anticipatory intelligence.” It’s a spy agency, after all.

In the interest of national security, IARPA wants to identify major world events before they happen, looking for terrorists, hackers or any perceived enemies of the United States. Wouldn’t you rather stop a crime before it happens?

Of course, that’s when we get into tricky political and sci-fi territory. Much of the research done by IARPA is actually out in the open, drawing on the public and on experts in advancing technologies. It issues open solicitations, runs forecasting tournaments, and offers prize challenges for the public. You can pretty much send your idea in right now. But what happens to the R&D once it leaves the lab is, of course, often for only the NSA and the CIA to know.

Keep reading