Scientists propose putting nanobots in our bodies to create ‘global superbrain’

A team has proposed using nanobots to create an ‘internet of thoughts’, in which knowledge could be downloaded instantly, just by thinking about it.

An international team of scientists led by members of UC Berkeley and the US Institute for Molecular Manufacturing predicts that exponential progress in nanotechnology, nanomedicine, artificial intelligence (AI) and computation will lead this century to the development of a human ‘brain-cloud interface’ (B-CI).

Writing in Frontiers in Neuroscience, the team said that a B-CI would connect neurons and synapses in the brain to vast cloud computing networks in real time.

The concept isn’t new: science-fiction writers and futurists, including Ray Kurzweil, proposed it decades ago. In fact, Facebook has even admitted it is working on a B-CI.

However, Kurzweil’s fantasy about neural nanobots capable of hooking us directly into the web is now being turned into reality by the senior author of this latest study, Robert Freitas Jr.

Keep reading

MICROSOFT PATENT SHOWS PLANS TO REVIVE DEAD LOVED ONES AS CHATBOTS

Microsoft has been granted a patent that would allow the company to make a chatbot using the personal information of deceased people.  

The patent describes creating a bot based on the “images, voice data, social media posts, electronic messages”, and more personal information.

“The specific person [who the chat bot represents] may correspond to a past or present entity (or a version thereof), such as a friend, a relative, an acquaintance, a celebrity, a fictional character, a historical figure, a random entity etc”, it goes on to say.

“The specific person may also correspond to oneself (e.g., the user creating/training the chat bot),” Microsoft also describes – implying that living users could train a digital replacement in the event of their death.

Keep reading

Dems deploying DARPA-funded AI-driven information warfare tool to target pro-Trump accounts

An anti-Trump Democratic-aligned political action committee advised by retired Army Gen. Stanley McChrystal is planning to deploy an information warfare tool that reportedly received initial funding from the Defense Advanced Research Projects Agency (DARPA), the Pentagon’s secretive research arm — transforming technology originally envisioned as a way to fight ISIS propaganda into a campaign platform to benefit Joe Biden.

The Washington Post first reported that the initiative, called Defeat Disinfo, will utilize “artificial intelligence and network analysis to map discussion of the president’s claims on social media,” and then attempt to “intervene” by “identifying the most popular counter-narratives and boosting them through a network of more than 3.4 million influencers across the country — in some cases paying users with large followings to take sides against the president.”

Social media guru Curtis Hougland is heading up Defeat Disinfo, and he said he received the funding from DARPA when his work was “part of an effort to combat extremism overseas.”

Keep reading

Stanford researchers claim new facial tracking software can determine your political affiliation

Because artificial intelligence wasn’t already frightening enough, researchers decided to teach computers how to identify a person’s political ideology based upon their facial appearance and expressions.

The study was led by Stanford researcher Michal Kosinski, who already caused a stir in 2017 by programming machines that could determine whether you are gay or straight based on your appearance.

Keep reading

Police Robots Are Not a Selfie Opportunity, They’re a Privacy Disaster Waiting to Happen

The arrival of government-operated autonomous police robots does not look like predictions in science fiction movies. An army of robots with gun arms is not kicking down your door to arrest you. Instead, a robot snitch that looks like a rolling trash can is programmed to decide whether a person looks suspicious—and then call the human police on them. Police robots may not be able to hurt people like armed predator drones used in combat—yet—but as history shows, calling the police on someone can prove equally deadly.

Long before the 1987 movie Robocop, even before Karel Čapek invented the word robot in 1920, police have been trying to find ways to be everywhere at once. Widespread security cameras are one solution—but even a blanket of CCTV cameras couldn’t follow a suspect into every nook of public space. Thus, the vision of a police robot continued as a dream, until now. Whether they look like Boston Dynamics’ robodogs or Knightscope’s rolling pickles, robots are coming to a street, shopping mall, or grocery store near you.

Keep reading

Another “Pre-Crime” AI System Claims It Can Predict Who Will Share Disinformation Before It’s Published

We have previously covered the many weighty claims made by the makers of AI algorithms that their technology can stop crime before it happens. Similar predictive AI is increasingly being used to stop the spread of misinformation, disinformation and general “fake news” by analyzing trends in behavior and language used across social media.

However, as we’ve also covered, these systems have more often than not failed quite spectacularly, as many artificial intelligence experts and mathematicians have highlighted. One expert in particular — Uri Gal, Associate Professor in Business Information Systems at the University of Sydney, Australia — noted that from what he has seen so far, these systems are “no better at telling the future than a crystal ball.”

Please keep this in mind as you look at the latest lofty pronouncements from the University of Sheffield below. Nevertheless, we should also be aware that — similar to their real-world counterparts in street-level pre-crime — these systems will most likely be rolled out across social media (if they haven’t been already) regardless, at least until their inherent flaws, biases and their own brand of disinformation are further exposed.

Keep reading

An AI just helped an Air Force pilot fly a U-2 spy plane during a simulated missile strike

The Air Force has taken a giant step toward creating an artificial intelligence system that would never in a million years turn on humanity – unlike the “Skynet” nemesis in the first two Terminator movies, which are the only ones that count.

Recently, an artificial intelligence algorithm named ARTUµ — possibly a reference to Star Wars’ R2-D2 — performed tasks on a U-2 Dragon Lady spy plane that are normally done by humans, the Air Force announced on Wednesday.

“After takeoff, the sensor control was positively handed-off to ARTUµ who then manipulated the sensor, based off insight previously learned from over a half-million computer simulated training iterations,” according to a news release from the humans who run the Air Force — for now. “The pilot and AI successfully teamed to share the sensor and achieve the mission objectives.”

The algorithm handled the plane’s tactical navigation while an Air Force major whose callsign is “Vudu” flew the U-2, which was assigned to the 9th Reconnaissance Wing at Beale Air Force Base, California, the news release says.

In short: Man and machine successfully flew a reconnaissance mission during a simulated missile strike.

Keep reading

Suspect AI Software Verified Mail-In Ballots With Little Human Oversight in Key Battleground States

Though accusations of election fraud in the 2020 US presidential election have been swirling across social media and some news outlets for much of the past week, few have examined the role of a little-known Silicon Valley company whose artificial intelligence (AI) algorithm was used to accept or reject ballots in highly contested states such as Nevada.

That company, Parascript, has long-standing cozy ties to defense contractors such as Lockheed Martin and tech giants including Microsoft, in addition to being a contractor to the US Postal Service. In addition, its founder, Stepan Pachikov, better known for cofounding the app Evernote in 2007, is a long-standing donor to Democratic presidential candidates, including in 2020.

Parascript’s AI software was used during this election in at least eight states for matching signatures on ballot envelopes with those in government databases in order to “ease the workload of staff enforcing voter signature rules” resulting from the influx of mail-in ballots. Reuters, which reported on the use of the technology, asked the company to provide a list of counties and states using its software for the 2020 election. Parascript, however, declined to supply the list, replying, instead, that their clients “included 20 of the top 100 counties by registered voters.”

Despite not receiving the official list from Parascript, Reuters was able to compile its own partial list, which revealed that several counties in Florida, Colorado, Washington, and Utah, among others, utilized the AI software to determine the validity of ballots. Reuters also reported that Clark County, Nevada, which is one of the hotspots of litigation between the Trump and Biden campaigns and fraud allegations, was one that used the software. Reuters was able to determine how the software was used in some counties, with many counties allowing the software to approve anywhere from 20 to 75 percent of mail-in ballots as acceptable. For several counties included in the Reuters list, staff reviewed 1 percent or less of the AI software’s acceptances. Figures were not available for Clark County, Nevada.

Keep reading

The Threat Of “Killer Robots” Is Real And Closer Than You Might Think

From self-driving cars to digital assistants, artificial intelligence (AI) is fast becoming an integral technology in our lives today. But this same technology that can help to make our day-to-day life easier is also being incorporated into weapons for use in combat situations.

Weaponised AI features heavily in the security strategies of the US, China and Russia, and some existing weapons systems already include autonomous capabilities based on AI. Developing weaponised AI further means machines could potentially make decisions to harm and kill people based on their programming, without human intervention.

Countries that back the use of AI weapons claim it allows them to respond to emerging threats at greater than human speed. They also say it reduces the risk to military personnel and increases the ability to hit targets with greater precision. But outsourcing use-of-force decisions to machines violates human dignity. And it’s also incompatible with international law which requires human judgement in context.

Indeed, the role that humans should play in use of force decisions has been an increased area of focus in many United Nations (UN) meetings. And at a recent UN meeting, states agreed that it’s unacceptable on ethical and legal grounds to delegate use-of-force decisions to machines – “without any human control whatsoever”.

But while this may sound like good news, there continues to be major differences in how states define “human control”.

Keep reading
