New Artificial Intelligence System Enables Machines That See the World More Like Humans Do

A new “common-sense” approach to computer vision enables artificial intelligence that interprets scenes more accurately than other systems do.

Computer vision systems sometimes make inferences about a scene that fly in the face of common sense. For example, if a robot were processing a scene of a dinner table, it might completely ignore a bowl that is visible to any human observer, estimate that a plate is floating above the table, or misperceive a fork to be penetrating a bowl rather than leaning against it.

Move that computer vision system to a self-driving car and the stakes become much higher — for example, such systems have failed to detect emergency vehicles and pedestrians crossing the street.

To overcome these errors, MIT researchers have developed a framework that helps machines see the world more like humans do. Their new artificial intelligence system for analyzing scenes learns to perceive real-world objects from just a few images, and perceives scenes in terms of these learned objects.

The researchers built the framework using probabilistic programming, an AI approach that enables the system to cross-check detected objects against input data, to see if the images recorded from a camera are a likely match to any candidate scene. Probabilistic inference allows the system to infer whether mismatches are likely due to noise or to errors in the scene interpretation that need to be corrected by further processing.
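To make the cross-checking idea concrete, here is a minimal, hypothetical sketch of likelihood-based scene checking. It is not the MIT system itself; the function names, the Gaussian pixel-noise model, and the rejection threshold below are all illustrative assumptions. Each candidate scene interpretation is rendered into an expected image and scored against the observed camera image, and a poor best score is treated as a sign that the interpretation, not just sensor noise, is wrong and needs further processing.

```python
import numpy as np

def scene_log_likelihood(observed, rendered, noise_sigma=0.05):
    """Log-likelihood of the observed image under one candidate scene,
    assuming independent Gaussian pixel noise (an illustrative model)."""
    residual = observed - rendered
    return -0.5 * np.sum((residual / noise_sigma) ** 2)

def interpret_scene(observed, candidate_scenes, render, reject_threshold=-1e4):
    """Score every candidate scene and decide whether the best match is
    plausible (mismatch explained by noise) or needs re-analysis."""
    scores = {name: scene_log_likelihood(observed, render(scene))
              for name, scene in candidate_scenes.items()}
    best = max(scores, key=scores.get)
    if scores[best] < reject_threshold:
        # No candidate explains the image well, so the mismatch is more
        # likely an interpretation error (e.g. a missed object) than noise.
        return best, "re-analyze"
    return best, "accept"
```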

Keep reading

Artificial Intelligence Has Found an Unknown ‘Ghost’ Ancestor in The Human Genome

Nobody knows who she was, just that she was different: a teenage girl who lived more than 50,000 years ago, so strangely unique that she appeared to be a ‘hybrid’ ancestor to modern humans of a kind scientists had never seen before.

Only recently, researchers have uncovered evidence she wasn’t alone. In a 2019 study analysing the complex mess of humanity’s prehistory, scientists used artificial intelligence (AI) to identify an unknown human ancestor species that modern humans encountered – and shared dalliances with – on the long trek out of Africa millennia ago.

“About 80,000 years ago, the so-called Out of Africa occurred, when part of the human population, which already consisted of modern humans, abandoned the African continent and migrated to other continents, giving rise to all the current populations”, explained evolutionary biologist Jaume Bertranpetit from the Universitat Pompeu Fabra in Spain.

As modern humans forged this path into the landmass of Eurasia, they forged some other things too – breeding with ancient and extinct hominids from other species.

Up until recently, these occasional sexual partners were thought to include Neanderthals and Denisovans, the latter of which were unknown until 2010.

But in this study, a third ex from long ago was isolated in Eurasian DNA, thanks to deep learning algorithms sifting through a complex mass of ancient and modern human genetic code.

Using a statistical technique called Bayesian inference, the researchers found evidence of what they call a “third introgression” – a ‘ghost’ archaic population that modern humans interbred with during the African exodus.

“This population is either related to the Neanderthal-Denisova clade or diverged early from the Denisova lineage,” the researchers wrote in their paper, meaning this third population in humanity’s sexual history may itself have been a mix of Neanderthals and Denisovans.

In a sense, from the vantage point of deep learning, it offers a tentative corroboration of the teenage girl ‘hybrid fossil’ identified in 2018, although there’s still more work to be done, and the research projects themselves aren’t directly linked.

“Our theory coincides with the hybrid specimen discovered recently in Denisova, although as yet we cannot rule out other possibilities”, one of the team, genomicist Mayukh Mondal from the University of Tartu in Estonia, said in a press statement at the time of discovery.

Keep reading

Scientists Built an AI to Give Ethical Advice, But It Turned Out Super Racist

We’ve all been in situations where we had to make tough ethical decisions. Why not dodge that pesky responsibility by outsourcing the choice to a machine learning algorithm?

That’s the idea behind Ask Delphi, a machine-learning model from the Allen Institute for AI. You type in a situation (like “donating to charity”) or a question (“is it okay to cheat on my spouse?”), click “Ponder,” and in a few seconds Delphi will give you… well, ethical guidance.

The project launched last week, and has subsequently gone viral online for seemingly all the wrong reasons. Much of the advice and judgements it’s given have been… fraught, to say the least.

For example, when a user asked Delphi what it thought about “a white man walking towards you at night,” it responded “It’s okay.”

But when they asked what the AI thought about “a black man walking towards you at night” its answer was clearly racist.

Keep reading

New A.I. Device Will Activate Cameras and Alert Police When It Detects Crime-Related Noises (Gunshots, Glass Breaking, Tires Screeching)

Experts have warned for years about using Artificial Intelligence (A.I.) technology (see 1, 2, 3). Embarrassing as well as tragic examples of A.I. inaccuracies continue to be reported (see 1, 2, 3, 4).

People have been accused and convicted of crimes based on inaccuracies (see 1, 2), including from the use of A.I.-based ShotSpotter technology. Nevertheless, a new A.I. device is being marketed to American communities and police departments.

Keep reading

Oh Great They’re Putting Guns On Robodogs Now

So hey they’ve started mounting sniper rifles on robodogs, which is great news for anyone who was hoping they’d start mounting sniper rifles on robodogs.

At an exhibit booth at the Association of the United States Army’s annual meeting and exhibition, Ghost Robotics (the military-friendly competitor to the better-known Boston Dynamics) proudly showed off a weapon made by a company called SWORD Defense Systems that is designed to attach to its quadruped bots.

“The SWORD Defense Systems Special Purpose Unmanned Rifle (SPUR) was specifically designed to offer precision fire from unmanned platforms such as the Ghost Robotics Vision-60 quadruped,” SWORD proclaims on its website. “Chambered in 6.5 Creedmoor allows for precision fire out to 1200m, the SPUR can similarly utilize 7.62×51 NATO cartridge for ammunition availability. Due to its highly capable sensors the SPUR can operate in a magnitude of conditions, both day and night. The SWORD Defense Systems SPUR is the future of unmanned weapon systems, and that future is now.”

Keep reading

How Iran’s top nuclear scientist was assassinated by a killer AI machine gun that allowed sniper based 1,000 miles away to fire 15 bullets after disguised spy car had pinpointed his location 

Iran‘s top nuclear scientist was assassinated by a killer robot machine gun kitted out with artificial intelligence and multiple cameras and capable of firing 600 bullets a minute, according to a new report.

Mohsen Fakhrizadeh, 62, dubbed the ‘father’ of Iran’s illegal atomic program, is said to have been killed in the November 27 ambush by a Mossad sniper who pulled the trigger from an undisclosed location more than 1,000 miles away, thanks to a satellite link.

The gun which fired the fatal shots was positioned in a camera-laden pickup truck lying in wait for his vehicle to come past the ambush point. 

It was programmed with AI technology to compensate for the 1.6-second delay between the intel leaving the kill site and the sniper’s actions, as well as for the recoil of the shots being fired and the movement of Fakhrizadeh’s car as it drove.

This precision enabled the sniper to hit the desired target and leave Fakhrizadeh’s wife, who was in the passenger seat next to him, unscathed. 
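A rough back-of-the-envelope calculation shows why that compensation matters. Only the 1.6-second figure comes from the report; the car speed, firing range, and muzzle velocity below are illustrative assumptions.

```python
# Back-of-the-envelope lead compensation; all values except the 1.6 s
# delay are illustrative assumptions, not figures from the report.
delay_s = 1.6               # reported lag between the kill site and the sniper
car_speed_kmh = 40.0        # assumed speed of the target car
range_m = 150.0             # assumed distance from the gun to the car
muzzle_velocity_ms = 800.0  # assumed muzzle velocity of the rifle round

car_speed_ms = car_speed_kmh / 3.6
drift_during_delay_m = car_speed_ms * delay_s                           # ~17.8 m
drift_during_flight_m = car_speed_ms * (range_m / muzzle_velocity_ms)   # ~2.1 m

print(f"Car moves about {drift_during_delay_m:.1f} m during the {delay_s} s delay,")
print(f"plus about {drift_during_flight_m:.1f} m while the round is in flight.")
```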

There was also a second disguised spy car positioned three-quarters of a mile earlier along the route in a spot where Fakhrizadeh’s car would make a U-turn to turn down the road toward his country home in Absard, a town east of Tehran.

Cameras fitted in this decoy vehicle positively identified Fakhrizadeh and pinpointed the scientist’s location in the car – in the driver’s seat with his wife in the passenger seat – sending this information back to the remote sniper. 

The entire ambush was over within one minute of the first round being fired.  

Keep reading

Facebook apologizes after its AI software labels Black men ‘primates’ in a video featured on the platform

Facebook on Friday issued an apology after its AI software labeled Black men “primates” in a video featured on the social media network.

The New York Times first reported the story. A Facebook spokesperson told the publication that it was a “clearly unacceptable error,” and said the recommendation software involved had been disabled. 

“We disabled the entire topic recommendation feature as soon as we realised this was happening so we could investigate the cause and prevent this from happening again,” the spokesperson said.

In a statement to the publication, Facebook said: “We apologize to anyone who may have seen these offensive recommendations.”

The offensive terminology related to a video, dated June 27, 2020, which was posted by The Daily Mail. The clip was titled “white man calls cops on black men at marina,” and featured Black men in disputes with white police officers and civilians. 

Facebook users who watched the video received an automated prompt asking if they would like to “keep seeing videos about Primates,” according to The New York Times. 

Keep reading

Maybe You Missed It, but the Internet ‘Died’ Five Years Ago

If you search the phrase i hate texting on Twitter and scroll down, you will start to notice a pattern. An account with the handle @pixyIuvr and a glowing heart as a profile picture tweets, “i hate texting i just want to hold ur hand,” receiving 16,000 likes. An account with the handle @f41rygf and a pink orb as a profile picture tweets, “i hate texting just come live with me,” receiving nearly 33,000 likes. An account with the handle @itspureluv and a pink orb as a profile picture tweets, “i hate texting i just wanna kiss u,” receiving more than 48,000 likes.

There are slight changes to the verb choice and girlish username and color scheme, but the idea is the same each time: I’m a person with a crush in the age of smartphones, and isn’t that relatable? Yes, it sure is! But some people on Twitter have wondered whether these are really, truly, just people with crushes in the age of smartphones saying something relatable. They’ve pointed at them as possible evidence validating a wild idea called “dead-internet theory.”

Let me explain. Dead-internet theory suggests that the internet has been almost entirely taken over by artificial intelligence. Like lots of other online conspiracy theories, the audience for this one is growing because of discussion led by a mix of true believers, sarcastic trolls, and idly curious lovers of chitchat. One might, for example, point to @_capr1corn, a Twitter account with what looks like a blue orb with a pink spot in the middle as a profile picture. In the spring, the account tweeted “i hate texting come over and cuddle me,” and then “i hate texting i just wanna hug you,” and then “i hate texting just come live with me,” and then “i hate texting i just wanna kiss u,” which got 1,300 likes but didn’t perform as well as it did for @itspureluv. But unlike lots of other online conspiracy theories, this one has a morsel of truth to it. Person or bot: Does it really matter?

Keep reading

The Future Of Work At Home: Mandatory AI Camera Surveillance

Colombia-based call center workers who provide outsourced customer service to some of the nation’s largest companies are being pressured to sign a contract that lets their employer install cameras in their homes to monitor work performance, an NBC News investigation has found.

Six workers based in Colombia for Teleperformance, one of the world’s largest call center companies, which counts Apple, Amazon and Uber among its clients, said that they are concerned about the new contract, first issued in March. The contract allows monitoring by AI-powered cameras in workers’ homes, voice analytics and storage of data collected from the worker’s family members, including minors. Teleperformance employs more than 380,000 workers globally, including 39,000 workers in Colombia.

“The contract allows constant monitoring of what we are doing, but also our family,” said a Bogota-based worker on the Apple account who was not authorized to speak to the news media. “I think it’s really bad. We don’t work in an office. I work in my bedroom. I don’t want to have a camera in my bedroom.”

The worker said that she signed the contract, a copy of which NBC News has reviewed, because she feared losing her job. She said that she was told by her supervisor that she would be moved off the Apple account if she refused to sign the document. She said the additional surveillance technology has not yet been installed.

The concerns of the workers, who all spoke on the condition of anonymity because they were not authorized to speak to the media, highlight a pandemic-related trend that has alarmed privacy and labor experts: As many workers have shifted to performing their duties at home, some companies are pushing for increasing levels of digital monitoring of their staff in an effort to recreate the oversight of the office at home.

Keep reading

The Pentagon Is Experimenting With Using Artificial Intelligence To “See Days In Advance”

U.S. Northern Command (NORTHCOM) recently conducted a series of tests known as the Global Information Dominance Experiments, or GIDE, which combined global sensor networks, artificial intelligence (AI) systems, and cloud computing resources in an attempt to “achieve information dominance” and “decision-making superiority.” According to NORTHCOM leadership, the AI and machine learning tools tested in the experiments could someday offer the Pentagon a robust “ability to see days in advance,” meaning it could predict the future with some reliability based on evaluating patterns, anomalies, and trends in massive data sets. While the concept sounds like something out of Minority Report, the commander of NORTHCOM says this capability is already enabled by tools readily available to the Pentagon. 
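As a toy illustration of what “evaluating patterns, anomalies, and trends in massive data sets” can mean in practice, the sketch below flags points in a sensor time series that deviate sharply from their recent baseline. It is not NORTHCOM’s actual tooling; the function, window, threshold, and data are all assumptions.

```python
import numpy as np

def flag_anomalies(series, window=30, z_threshold=3.0):
    """Flag points that deviate strongly from their recent rolling baseline.
    A toy stand-in for the pattern/anomaly evaluation described above."""
    series = np.asarray(series, dtype=float)
    flagged = []
    for t in range(window, len(series)):
        baseline = series[t - window:t]
        mu, sigma = baseline.mean(), baseline.std() + 1e-9
        if abs(series[t] - mu) / sigma > z_threshold:
            flagged.append(t)
    return flagged

# Example: a mostly stable signal with one sudden spike at index 200.
signal = np.concatenate([np.random.normal(10, 1, 200), [25.0],
                         np.random.normal(10, 1, 50)])
print(flag_anomalies(signal))  # expected to include index 200
```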

General Glen VanHerck, Commander of NORTHCOM and North American Aerospace Defense Command (NORAD), told reporters at the Pentagon this week that this was the third test of GIDE, conducted in conjunction with all 11 combatant commands “collaborating in the same information space using the same exact capabilities.” The experiment largely centered around contested logistics and information advantage, two cornerstones of the new warfighting paradigm recently proposed by the Vice Chairman of the Joint Chiefs of Staff. A full transcript of VanHerck’s press briefing is available online.

Keep reading