Documentary Exposes How Facial Recognition Tech Doesn’t See Dark Faces Accurately

What’s not to love about Artificial Intelligence (AI)?

  • Millions of jobs being lost (see 1, 2)
  • Censorship, unwarranted surveillance, and other unethical and dangerous applications (see 1, 2, 3, 4, 5)
  • Inaccuracies that can lead to life-altering consequences

One documentary reveals more troubling details:

CODED BIAS explores the fallout of MIT Media Lab researcher Joy Buolamwini’s discovery that facial recognition does not see dark-skinned faces accurately, and her journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all.

Modern society sits at the intersection of two crucial questions: What does it mean when artificial intelligence increasingly governs our liberties? And what are the consequences for the people AI is biased against? When MIT Media Lab researcher Joy Buolamwini discovers that most facial-recognition software does not accurately identify darker-skinned faces and the faces of women, she delves into an investigation of widespread bias in algorithms. As it turns out, artificial intelligence is not neutral, and women are leading the charge to ensure our civil rights are protected.
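For readers wondering what “does not accurately identify” means in practice, audits like Buolamwini’s compare error rates across demographic subgroups. The Python sketch below shows the basic arithmetic of such an audit with entirely made-up records and group labels; it is not the study’s actual methodology or data.

```python
from collections import defaultdict

# Illustrative audit sketch: given per-image classifier results labelled with
# a demographic subgroup, compute each group's error rate and the gap between
# the best- and worst-served groups. All records below are made up.
results = [
    {"group": "lighter-skinned male", "correct": True},
    {"group": "lighter-skinned male", "correct": True},
    {"group": "lighter-skinned female", "correct": True},
    {"group": "lighter-skinned female", "correct": False},
    {"group": "darker-skinned male", "correct": True},
    {"group": "darker-skinned male", "correct": False},
    {"group": "darker-skinned female", "correct": False},
    {"group": "darker-skinned female", "correct": False},
    {"group": "darker-skinned female", "correct": True},
]

totals, errors = defaultdict(int), defaultdict(int)
for r in results:
    totals[r["group"]] += 1
    errors[r["group"]] += 0 if r["correct"] else 1

error_rates = {g: errors[g] / totals[g] for g in totals}
for group, rate in sorted(error_rates.items(), key=lambda kv: kv[1]):
    print(f"{group:>24}: {rate:.0%} error rate")

gap = max(error_rates.values()) - min(error_rates.values())
print(f"gap between best- and worst-served groups: {gap:.0%}")
```

Buolamwini’s published findings reported exactly this kind of gap, with the largest error rates falling on darker-skinned women.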

Keep reading

Pieces Of Color: When YouTube’s oversensitive filters think CHESS VIDEOS are racist, will language have to adapt to Big Tech?

With all its talk of black-on-white war, YouTube’s “hate speech”-filtering AI can’t tell the difference between chess players and violent racists. Perhaps leaving robots in charge of the English language isn’t such a good idea.

Croatian chess player Antonio Radic, known to his million subscribers as ‘Agadmator,’ runs the world’s most popular chess channel on YouTube. Last summer he found his account suspended due to its “harmful and dangerous” content. Radic, who was in the middle of a show with Grandmaster Hikaru Nakamura at the time, was puzzled. He received no explanation for the ban, which was reversed on appeal, but speculated that YouTube’s censorship algorithm may have heard him say something like “black goes to B6 instead of C6, white will always be better.”

“If that’s the case, I’m sure all [of] my 1,800 videos will be taken down as it’s black against white to the death in every video,” he told the Sun at the time.
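YouTube has not said how its filter actually works, but the incident illustrates what goes wrong when a classifier leans on words rather than context. The deliberately naive Python sketch below (keyword list and scoring invented here, not YouTube’s system) makes the failure mode concrete.

```python
import re

# Deliberately naive, context-free flagger: count hits from a keyword list.
# This is NOT YouTube's classifier; it only illustrates why matching words
# without context misreads chess commentary as something sinister.
SUSPECT_TERMS = {"black", "white", "attack", "threat", "destroy", "kill"}

def flag_score(text: str) -> int:
    words = re.findall(r"[a-z]+", text.lower())
    return sum(word in SUSPECT_TERMS for word in words)

commentary = "Black goes to b6 instead of c6, white will always be better."
print(flag_score(commentary), "suspect terms found")  # 2: 'black' and 'white'
```

A context-aware model would need to recognize that “black” and “white” next to chess notation refer to pieces, not people, which is exactly the distinction a bag-of-keywords approach cannot make.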

Keep reading

Robots may be the future of policing, but activists warn they could be racist

Ayanna Howard, a robotics researcher at Georgia Tech, is concerned that robot technology, which ultimately relies on artificial intelligence shaped by human input, will show biases against Black people. She tells the New York Times that “given the current tensions arising from police shootings of African-American men from Ferguson to Baton Rouge, it is disconcerting that robot peacekeepers, including police and military robots, will, at some point, be given increased freedom to decide whether to take a human life, especially if problems related to bias have not been resolved.”

Howard and others say that many of today’s algorithms are biased against people of color and others who are unlike the white, male, affluent and able-bodied designers of most computer and robot systems.

Keep reading

There Are Spying Eyes Everywhere—and Now They Share a Brain

Citigraf was conceived in 2016, when the Chicago Police Department hired Genetec to solve a surveillance conundrum. Like other large law enforcement organizations around the country, the department had built up such an impressive arsenal of technologies for keeping tabs on citizens that it had reached the point of surveillance overload. To get a clear picture of an emergency in progress, officers often had to bushwhack through dozens of byzantine databases and feeds from far-flung sensors, including gunshot detectors, license plate readers, and public and private security cameras. This process of braiding together strands of information—“multi-intelligence fusion” is the technical term—was becoming too difficult. As one Chicago official put it, echoing a well-worn aphorism in surveillance circles, the city was “data-rich but information-poor.” What investigators needed was a tool that could cut a clean line through the labyrinth. What they needed was automated fusion.
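Genetec has not published Citigraf’s internals, but “multi-intelligence fusion” at its simplest means merging independent sensor feeds into one time-ordered picture. The Python sketch below illustrates that idea with hypothetical feeds, fields, and events; it is not the product’s architecture.

```python
from dataclasses import dataclass
from datetime import datetime

# Rough illustration of "multi-intelligence fusion": take events from several
# independent feeds and merge them into one chronological view for a location.
# Feed names, fields, and events are all hypothetical.
@dataclass
class Event:
    source: str
    timestamp: datetime
    location: str
    detail: str

gunshot_feed = [Event("gunshot_detector", datetime(2021, 3, 1, 22, 14), "4th & Main", "3 shots detected")]
plate_feed = [Event("plate_reader", datetime(2021, 3, 1, 22, 16), "4th & Main", "plate ABC123 northbound")]
camera_feed = [Event("camera_17", datetime(2021, 3, 1, 22, 15), "4th & Main", "two figures running east")]

def fuse(*feeds, location=None):
    """Merge feeds into one time-ordered list, optionally filtered by location."""
    merged = [e for feed in feeds for e in feed if location in (None, e.location)]
    return sorted(merged, key=lambda e: e.timestamp)

for event in fuse(gunshot_feed, plate_feed, camera_feed, location="4th & Main"):
    print(event.timestamp, event.source, "-", event.detail)
```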

Keep reading

IARPA Developing AI To Help Predict The Future

As far as secretive government projects go, the objectives of IARPA may be the riskiest and most far-reaching. With its mission to foster “high-risk, high-payoff” programs, this research arm of the U.S. intelligence community literally tries to predict the future. Staffed by spies and Ph.D.s, this organization aims to provide decision makers with real, accurate predictions of geopolitical events, using artificial intelligence and human “forecasters.”

IARPA, which stands for Intelligence Advanced Research Projects Activity, was founded in 2006 as part of the Office of the Director of National Intelligence. Some of the projects that it has funded focused on advancements in quantum computing, cryogenic computing, face recognition, universal language translators, and other initiatives that would fit well in a Hollywood action movie plot. But perhaps its main goal is to produce “anticipatory intelligence.” It’s a spy agency, after all.

In the interest of national security, IARPA wants to identify major world events before they happen, looking for terrorists, hackers or any perceived enemies of the United States. Wouldn’t you rather stop a crime before it happens?

Of course, that’s when we get into tricky political and sci-fi territory. Much of the research done by IARPA is actually out in the open, drawing on the public and outside experts to advance these technologies. The agency runs “open solicitations,” forecasting tournaments, and prize challenges anyone can enter; you can pretty much send your idea in right now. But what happens to the R&D once it leaves the lab is, of course, often for only the NSA and the CIA to know.
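IARPA’s forecasting tournaments (the ACE program that produced the Good Judgment Project, for example) aggregated probability estimates from many human forecasters. One generic recipe from that literature is a skill-weighted average that is then “extremized.” The Python sketch below uses made-up forecasts, weights, and an arbitrary extremizing exponent; it is not IARPA’s actual pipeline.

```python
# Generic forecast aggregation in the spirit of IARPA's forecasting
# tournaments: weight each forecaster's probability by a skill score, then
# "extremize" the pooled value to counteract the crowd's pull toward 50%.
# Forecasts, weights, and the exponent are illustrative only.
forecasts = [0.65, 0.70, 0.55, 0.80]   # P(event) from four forecasters
skill = [1.0, 1.5, 0.8, 1.2]           # higher = historically more accurate

weighted_mean = sum(p * w for p, w in zip(forecasts, skill)) / sum(skill)

def extremize(p: float, a: float = 2.0) -> float:
    """Push a pooled probability away from 0.5 (a > 1 sharpens the forecast)."""
    return p**a / (p**a + (1 - p)**a)

print(f"weighted mean: {weighted_mean:.3f}")
print(f"extremized:    {extremize(weighted_mean):.3f}")
```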

Keep reading

Spotify Wants to Eavesdrop on Your Life to Pick the Next Song to Play

Music streaming service Spotify has reportedly filed a patent for new personality tracking technology that analyzes a user’s emotional state and suggests music based on it. The patent, titled “Identification of taste attributes from an audio signal,” details constantly monitoring “speech content and background noise” to provide song suggestions.

Music Business Worldwide reports that in October 2020, Spotify filed a patent for personality tracking technology that could determine a user’s emotional state in order to suggest the perfect song for them to listen to.

The filing explained that behavioral variables such as a user’s mood, their favorite genre of music, or their demographic could all “correspond to different personality traits of a user.” Spotify suggested that this could be used to promote personalized content to users based on the personality traits it detected.

Now a new U.S. Spotify patent shows that the company wants to use the technology to analyze users even further by using speech recognition to determine their “emotional state, gender, age, or accent.” These attributes can then be used to recommend content.

The new patent is titled “Identification of taste attributes from an audio signal” and was filed in February 2018 and granted on January 12, 2021.
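The patent describes inferring attributes such as emotional state from “speech content and background noise” and feeding them into recommendations. Since the underlying models are not public, the Python sketch below mocks that flow with a placeholder analyzer and an invented mood-to-playlist table, just to show the shape of the pipeline.

```python
# Mock of the flow the patent describes: infer attributes from an audio
# signal, then bias recommendations on them. The attribute values and the
# playlist mapping are stand-ins; Spotify's real models are not public.
def infer_attributes(audio_signal: bytes) -> dict:
    """Placeholder for the speech/background-noise analysis in the patent."""
    return {"emotional_state": "sad", "environment": "alone", "accent": "unspecified"}

PLAYLIST_BY_MOOD = {
    "sad": "Mellow acoustic",
    "happy": "Upbeat pop",
    "neutral": "Daily mix",
}

def recommend(audio_signal: bytes) -> str:
    attrs = infer_attributes(audio_signal)
    return PLAYLIST_BY_MOOD.get(attrs["emotional_state"], "Daily mix")

print(recommend(b"\x00" * 16))  # -> "Mellow acoustic"
```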

Keep reading

Scientists propose putting nanobots in our bodies to create ‘global superbrain’

A team has proposed using nanobots to create the ‘internet of thoughts’, where instant knowledge could be downloaded just by thinking about it.

An international team of scientists led by members of UC Berkeley and the US Institute for Molecular Manufacturing predicts that exponential progress in nanotechnology, nanomedicine, artificial intelligence (AI) and computation will lead this century to the development of a human ‘brain-cloud interface’ (B-CI).

Writing in Frontiers in Neuroscience, the team said that a B-CI would connect neurons and synapses in the brain to vast cloud computing networks in real time.

Such a concept isn’t new; science fiction writers and futurists, including Ray Kurzweil, proposed it decades ago. In fact, Facebook has even admitted it is working on a B-CI.

However, Kurzweil’s fantasy about neural nanobots capable of hooking us directly into the web is now being turned into reality by the senior author of this latest study, Robert Freitas Jr.

Keep reading

Microsoft Patent Shows Plans to Revive Dead Loved Ones as Chatbots

Microsoft has been granted a patent that would allow the company to make a chatbot using the personal information of deceased people.  

The patent describes creating a bot based on a person’s “images, voice data, social media posts, electronic messages” and other personal information.

“The specific person [who the chat bot represents] may correspond to a past or present entity (or a version thereof), such as a friend, a relative, an acquaintance, a celebrity, a fictional character, a historical figure, a random entity etc”, it goes on to say.

“The specific person may also correspond to oneself (e.g., the user creating/training the chat bot),” Microsoft also describes – implying that living users could train a digital replacement in the event of their death.
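The patent does not specify how such a bot would be built, but the general idea of turning someone’s message history into generated replies can be shown with a toy model. The Python sketch below trains a tiny bigram chain on invented messages; a real system would presumably use far richer models and the full range of data the patent lists.

```python
import random
from collections import defaultdict

# Toy illustration of "training a chat bot" on someone's messages: a tiny
# bigram (Markov-chain) model over a made-up message history. The patent
# does not specify a model; this only shows the general idea of turning
# personal text data into generated replies.
messages = [
    "see you at the lake this weekend",
    "the weekend weather looks perfect for the lake",
    "call me when you get to the lake house",
]

chain = defaultdict(list)
for msg in messages:
    words = msg.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)

def generate(seed: str, length: int = 8) -> str:
    out, word = [seed], seed
    for _ in range(length):
        if word not in chain:
            break
        word = random.choice(chain[word])
        out.append(word)
    return " ".join(out)

print(generate("the"))
```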

Keep reading

Dems deploying DARPA-funded AI-driven information warfare tool to target pro-Trump accounts

An anti-Trump Democratic-aligned political action committee advised by retired Army Gen. Stanley McChrystal is planning to deploy an information warfare tool that reportedly received initial funding from the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s secretive research arm — transforming technology originally envisioned as a way to fight ISIS propaganda into a campaign platform to benefit Joe Biden.

The Washington Post first reported that the initiative, called Defeat Disinfo, will utilize “artificial intelligence and network analysis to map discussion of the president’s claims on social media,” and then attempt to “intervene” by “identifying the most popular counter-narratives and boosting them through a network of more than 3.4 million influencers across the country — in some cases paying users with large followings to take sides against the president.”

Social media guru Curtis Hougland is heading up Defeat Disinfo, and he said he received the funding from DARPA when his work was “part of an effort to combat extremism overseas.”
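The Post’s description, “network analysis to map discussion” and then boosting counter-narratives through influencers, suggests ranking accounts by how much they get amplified. The Python sketch below does that with a toy interaction graph and simple in-degree counts; the accounts, edges, and method are illustrative, not Defeat Disinfo’s actual tooling.

```python
from collections import Counter

# Toy version of "network analysis to map discussion": build a directed
# interaction graph (who amplifies whom) and rank accounts by how many
# others amplify them. Accounts and edges here are entirely made up.
amplifications = [
    ("user_a", "influencer_1"),  # user_a shared content from influencer_1
    ("user_b", "influencer_1"),
    ("user_c", "influencer_1"),
    ("user_a", "influencer_2"),
    ("user_d", "influencer_2"),
    ("user_e", "user_a"),
]

in_degree = Counter(target for _, target in amplifications)
for account, score in in_degree.most_common(3):
    print(f"{account}: amplified by {score} accounts")
```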

Keep reading