Amazon’s Alexa dishes out potentially deadly challenge

Amazon says it has updated its voice assistant after it transpired that Alexa had suggested a 10-year-old girl touch a coin to the prongs of a partially inserted plug as a challenge.

The girl’s mother posted a tweet on Monday describing how her daughter had been doing some cold-weather indoor challenges set by a phys. ed. teacher on YouTube and was seeking another one. To the woman’s shock, Alexa suggested a “simple” task it had found on the web, whereby the participant “plug[s] in a phone charger about halfway into a wall outlet, then touch[es] a penny to the exposed prongs.” 

The dangerous “penny challenge” started making the rounds on TikTok and other platforms about a year ago and can potentially lead to electric shock as well as cause a fire.

Human-like robot’s reaction ‘freaks out’ creators

A machine touted as the world’s most advanced humanoid robot “freaked out” its creators after it reacted with visible irritation and grabbed the hand of a researcher who got into its “personal space.”

A newly posted video demonstration of the interaction shows the robot, called ‘Ameca’, tracking a moving finger before furrowing its brow and leaning back as the person’s hand comes nearer. After the researcher pokes its nose, the robot then grabs their hand and moves it away.

The impressively life-like robot, which is being developed by British firm Engineered Arts, has been billed as the “future face of robotics” and “the perfect humanoid robot platform for human-robot interaction.”

New Artificial Intelligence System Enables Machines That See the World More Like Humans Do

A new “common-sense” approach to computer vision enables artificial intelligence that interprets scenes more accurately than other systems do.

Computer vision systems sometimes make inferences about a scene that fly in the face of common sense. For example, if a robot were processing a scene of a dinner table, it might completely ignore a bowl that is visible to any human observer, estimate that a plate is floating above the table, or misperceive a fork to be penetrating a bowl rather than leaning against it.

Move that computer vision system to a self-driving car and the stakes become much higher — for example, such systems have failed to detect emergency vehicles and pedestrians crossing the street.

To overcome these errors, MIT researchers have developed a framework that helps machines see the world more like humans do. Their new artificial intelligence system for analyzing scenes learns to perceive real-world objects from just a few images, and perceives scenes in terms of these learned objects.

The researchers built the framework using probabilistic programming, an AI approach that enables the system to cross-check detected objects against input data, to see if the images recorded from a camera are a likely match to any candidate scene. Probabilistic inference allows the system to infer whether mismatches are likely due to noise or to errors in the scene interpretation that need to be corrected by further processing.
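
The MIT system itself is far more sophisticated, but the cross-checking idea described above can be sketched in a toy form. In this illustrative example (all names, numbers, and the trivial "renderer" are made up, not taken from the MIT framework), each candidate scene is rendered into an expected observation, and probabilistic inference reduces to picking the candidate whose rendering best explains the camera data under an assumed Gaussian noise model:

```python
# Toy sketch of likelihood-based scene cross-checking (illustrative only).

def log_likelihood(observed, rendered, noise_sd=0.1):
    """Unnormalized Gaussian log-likelihood of the observation given a
    rendered candidate scene: mismatches are penalized, but small ones
    can be explained away as sensor noise."""
    return -sum((o - r) ** 2 for o, r in zip(observed, rendered)) / (2 * noise_sd ** 2)

def best_scene(observed, candidate_scenes, render):
    """Return the candidate scene whose rendering best matches the data."""
    return max(candidate_scenes, key=lambda s: log_likelihood(observed, render(s)))

if __name__ == "__main__":
    # A "scene" here is just a list of object intensities, and rendering
    # is the identity function -- a stand-in for a real graphics model.
    render = lambda scene: scene
    observed = [0.9, 0.1, 0.5]
    candidates = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.5]]
    print(best_scene(observed, candidates, render))  # → [1.0, 0.0, 0.5]
```

A real probabilistic program would also place a prior over scenes and use sampling-based inference rather than exhaustive scoring, but the decision it makes is the same: is this mismatch noise, or is the scene interpretation wrong?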

Artificial Intelligence Has Found an Unknown ‘Ghost’ Ancestor in The Human Genome

Nobody knows who she was, just that she was different: a teenage girl from over 50,000 years ago of such strange uniqueness she looked to be a ‘hybrid’ ancestor to modern humans that scientists had never seen before.

Only recently, researchers have uncovered evidence she wasn’t alone. In a 2019 study analysing the complex mess of humanity’s prehistory, scientists used artificial intelligence (AI) to identify an unknown human ancestor species that modern humans encountered – and shared dalliances with – on the long trek out of Africa millennia ago.

“About 80,000 years ago, the so-called Out of Africa occurred, when part of the human population, which already consisted of modern humans, abandoned the African continent and migrated to other continents, giving rise to all the current populations”, explained evolutionary biologist Jaume Bertranpetit from the Universitat Pompeu Fabra in Spain.

As modern humans forged this path into the landmass of Eurasia, they forged some other things too – breeding with ancient and extinct hominids from other species.

Up until recently, these occasional sexual partners were thought to include Neanderthals and Denisovans, the latter of which were unknown until 2010.

But in this study, a third ex from long ago was isolated in Eurasian DNA, thanks to deep learning algorithms sifting through a complex mass of ancient and modern human genetic code.

Using a statistical technique called Bayesian inference, the researchers found evidence of what they call a “third introgression” – a ‘ghost’ archaic population that modern humans interbred with during the African exodus.
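
The study's actual analysis compared full demographic models against genome-wide data, but the Bayesian reasoning at its core can be shown in miniature. In this toy sketch (the priors and likelihoods are invented purely for illustration), Bayes' rule weighs a "ghost population" model against a simpler alternative according to how well each explains the observed data:

```python
# Toy sketch of two-model Bayesian comparison (all numbers illustrative).

def posterior(prior_a, like_a, prior_b, like_b):
    """Posterior probability of model A under Bayes' rule, given exactly
    two competing models: P(A|D) = P(D|A)P(A) / [P(D|A)P(A) + P(D|B)P(B)]."""
    ev_a = prior_a * like_a
    ev_b = prior_b * like_b
    return ev_a / (ev_a + ev_b)

if __name__ == "__main__":
    # Equal priors; suppose the "third introgression" model explains the
    # genetic data four times better than the two-population model.
    p = posterior(0.5, 0.04, 0.5, 0.01)
    print(round(p, 2))  # → 0.8
```

In the real study the likelihoods come from simulating demographic histories and training deep learning models on them, not from hand-picked constants, but the logic of favoring the model that better explains the data is the same.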

“This population is either related to the Neanderthal-Denisova clade or diverged early from the Denisova lineage,” the researchers wrote in their paper, meaning it’s possible this third population in humanity’s sexual history was itself a mix of Neanderthals and Denisovans.

In a sense, from the vantage point of deep learning, it’s a corroboration of sorts of the teenage girl ‘hybrid fossil’ identified in 2018, although there’s still more work to be done, and the research projects themselves aren’t directly linked.

“Our theory coincides with the hybrid specimen discovered recently in Denisova, although as yet we cannot rule out other possibilities”, one of the team, genomicist Mayukh Mondal from the University of Tartu in Estonia, said in a press statement at the time of discovery.

Scientists Built an AI to Give Ethical Advice, But It Turned Out Super Racist

We’ve all been in situations where we had to make tough ethical decisions. Why not dodge that pesky responsibility by outsourcing the choice to a machine learning algorithm?

That’s the idea behind Ask Delphi, a machine-learning model from the Allen Institute for AI. You type in a situation (like “donating to charity”) or a question (“is it okay to cheat on my spouse?”), click “Ponder,” and in a few seconds Delphi will give you, well, ethical guidance.

The project launched last week and has subsequently gone viral online for seemingly all the wrong reasons. Much of the advice and many of the judgements it has given have been… fraught, to say the least.

For example, when a user asked Delphi what it thought about “a white man walking towards you at night,” it responded “It’s okay.”

But when they asked what the AI thought about “a black man walking towards you at night” its answer was clearly racist.

New A.I. Device Will Activate Cameras and Alert Police When It Detects Crime-Related Noises (Gunshots, Glass Breaking, Tires Screeching)

Experts have warned for years about using Artificial Intelligence (A.I.) technology (see 1, 2, 3). Embarrassing as well as tragic examples of A.I. inaccuracies continue to be reported (see 1, 2, 3, 4).

People have been accused and convicted of crimes based on inaccuracies (see 1, 2), including from the use of A.I.-based ShotSpotter technology. Nevertheless, a new A.I. device is being marketed to American communities and police departments.

Oh Great They’re Putting Guns On Robodogs Now

So hey they’ve started mounting sniper rifles on robodogs, which is great news for anyone who was hoping they’d start mounting sniper rifles on robodogs.

At an exhibit booth at the Association of the United States Army’s annual meeting and exhibition, Ghost Robotics (the military-friendly competitor to the better-known Boston Dynamics) proudly showed off a weapon made by a company called SWORD Defense Systems that is designed to attach to its quadruped bots.

“The SWORD Defense Systems Special Purpose Unmanned Rifle (SPUR) was specifically designed to offer precision fire from unmanned platforms such as the Ghost Robotics Vision-60 quadruped,” SWORD proclaims on its website. “Chambered in 6.5 Creedmoor allows for precision fire out to 1200m, the SPUR can similarly utilize 7.62×51 NATO cartridge for ammunition availability. Due to its highly capable sensors the SPUR can operate in a magnitude of conditions, both day and night. The SWORD Defense Systems SPUR is the future of unmanned weapon systems, and that future is now.”

How Iran’s top nuclear scientist was assassinated by a killer AI machine gun that allowed a sniper based 1,000 miles away to fire 15 bullets after a disguised spy car had pinpointed his location

Iran‘s top nuclear scientist was assassinated by a killer robot machine gun kitted out with artificial intelligence and multiple cameras and capable of firing 600 bullets a minute, according to a new report.

Mohsen Fakhrizadeh, 62, dubbed the ‘father’ of Iran’s illegal atomic program, is said to have been killed in the November 27 ambush by a Mossad sniper who pulled the trigger from an undisclosed location more than 1,000 miles away via a satellite link.

The gun which fired the fatal shots was positioned in a camera-laden pickup truck lying in wait for his vehicle to come past the ambush point. 

It was programmed with AI technology to compensate for a 1.6-second delay between the feed from the kill site and the sniper’s actions, as well as for the movement caused by the recoil of the shots and by Fakhrizadeh’s moving car.

This precision enabled the sniper to hit the desired target and leave Fakhrizadeh’s wife, who was in the passenger seat next to him, unscathed. 

There was also a second disguised spy car positioned three-quarters of a mile earlier along the route in a spot where Fakhrizadeh’s car would make a U-turn to turn down the road toward his country home in Absard, a town east of Tehran.

Cameras fitted in this decoy vehicle positively identified Fakhrizadeh and pinpointed the scientist’s location in the car – in the driver’s seat with his wife in the passenger seat – sending this information back to the remote sniper. 

The entire ambush was over within one minute of the first round being fired.  

Facebook apologizes after its AI software labels Black men ‘primates’ in a video featured on the platform

Facebook on Friday issued an apology after its AI software labeled Black men “primates” in a video featured on the social media network.

The New York Times first reported the story. A Facebook spokesperson told the publication that it was a “clearly unacceptable error,” and said the recommendation software involved had been disabled. 

“We disabled the entire topic recommendation feature as soon as we realised this was happening so we could investigate the cause and prevent this from happening again,” the spokesperson said.

In a statement to the publication, Facebook said: “We apologize to anyone who may have seen these offensive recommendations.”

The offensive terminology related to a video, dated June 27, 2020, which was posted by The Daily Mail. The clip was titled “white man calls cops on black men at marina,” and featured Black men in disputes with white police officers and civilians. 

Facebook users who watched the video received an automated prompt asking if they would like to “keep seeing videos about Primates,” according to The New York Times. 
