AI-enabled brain implant helps patient regain feeling and movement

Keith Thomas from New York was involved in a diving accident back in 2020 that injured his spine’s C4 and C5 vertebrae, leading to a total loss of feeling and movement from the chest down. Recently, though, Thomas has been able to move his arm at will and feel his sister hold his hand, thanks to AI brain implant technology developed by Northwell Health’s Feinstein Institute of Bioelectronic Medicine.

The research team first spent months mapping his brain with MRIs to pinpoint the exact areas responsible for arm movement and the sense of touch in his hands. Then, four months ago, surgeons performed a 15-hour procedure to implant microchips into his brain — Thomas was even awake for some parts so he could tell them what sensations he was feeling in his hand as they probed parts of the organ.

In addition to the microchips inside his body, the team installed external ports on top of his head. Those ports connect to a computer running the artificial intelligence (AI) algorithms the team developed to interpret his thoughts and turn them into action. The researchers call this approach “thought-driven therapy,” because it all starts with the patient’s intentions. If he thinks about moving his hand, for instance, his brain implant sends signals to the computer, which in turn sends signals to the electrode patches over his spine and hand muscles to stimulate movement. The team also attached sensors to his fingertips and palms to stimulate sensation.
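As a rough illustration of that pipeline (not Northwell’s actual software), a closed “read, decode, stimulate” loop might look like the minimal sketch below, where the decoder, the device interfaces, and the intent labels are all assumptions:

```python
import numpy as np

# Hypothetical sketch of a "thought-driven therapy" loop: decode intent
# from neural signals, then drive spinal/muscle stimulation. All classes
# and signatures here are illustrative stand-ins, not the real system.

class IntentDecoder:
    """Maps a window of neural activity to a movement-intent label."""
    def __init__(self, weights: np.ndarray, labels: list[str]):
        self.weights = weights   # (n_channels, n_intents), assumed pre-trained
        self.labels = labels

    def decode(self, window: np.ndarray) -> str:
        features = window.mean(axis=1)       # average activity per channel
        scores = features @ self.weights     # project onto intent space
        return self.labels[int(np.argmax(scores))]

def therapy_loop(implant, stimulator, decoder, n_steps=1000):
    """One pass of the closed loop: read, decode, stimulate."""
    for _ in range(n_steps):
        window = implant.read_window()       # (n_channels, n_samples)
        intent = decoder.decode(window)
        if intent != "rest":
            stimulator.stimulate(intent)     # e.g. "grasp" -> hand patches
```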

Thanks to this system, he was able to move his arm at will and feel his sister holding his hand in the lab. While he needed to be connected to the computer for those milestones, the researchers say Thomas has shown signs of recovery even when the system is off. His arm strength has reportedly “more than doubled” since the study began, and his forearm and wrist can now feel new sensations. If all goes well, the team’s thought-driven therapy could help him regain more of his sense of touch and mobility.

While the approach has a ways to go, the team behind it is hopeful that it could change the lives of people living with paralysis. Chad Bouton, the technology’s developer and the principal investigator of the clinical trial, said:

“This is the first time the brain, body and spinal cord have been linked together electronically in a paralyzed human to restore lasting movement and sensation. When the study participant thinks about moving his arm or hand, we ‘supercharge’ his spinal cord and stimulate his brain and muscles to help rebuild connections, provide sensory feedback, and promote recovery. This type of thought-driven therapy is a game-changer. Our goal is to use this technology one day to give people living with paralysis the ability to live fuller, more independent lives.”


AI program can steal your password by listening to the sounds your keyboard makes when you type it

Research published via Cornell University’s arXiv showed that scientists built an artificial intelligence system that listened to people typing their passwords and correctly identified the keys pressed with 95% accuracy.

The group programmed an AI system to listen to passwords typed on a MacBook Pro’s keys, recorded both by a nearby phone and over a Zoom call, according to Daily Fetched.

The AI model was trained by pressing each of the MacBook Pro’s 36 keys 25 times and recording the sounds. The recordings were then fed into the model so it could learn to identify each key’s acoustic signature.
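As a rough illustration of that training pipeline (not the researchers’ actual code), the sketch below assumes the keystroke recordings have already been segmented into one short WAV clip per press and stored in per-key folders; the feature choice and classifier are assumptions, since the paper used a deep-learning model:

```python
import pathlib
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def clip_features(path: str) -> np.ndarray:
    """Summarize one keystroke clip as a log-mel-spectrogram feature vector."""
    y, sr = librosa.load(path, sr=None)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    logmel = librosa.power_to_db(mel)
    return logmel.mean(axis=1)  # mean over time -> fixed-length vector

# Assumed layout: clips/<key>/<sample>.wav, 36 keys x 25 presses each.
X, labels = [], []
for key_dir in pathlib.Path("clips").iterdir():
    for wav in key_dir.glob("*.wav"):
        X.append(clip_features(str(wav)))
        labels.append(key_dir.name)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), labels, test_size=0.2, stratify=labels)
clf = RandomForestClassifier(n_estimators=300).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```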

Over the phone, the program correctly identified the keys with 95% accuracy, while over Zoom the number dropped slightly to 93%. The phone was placed about six and a half inches away from the keyboard, according to the Daily Mail.

“When trained on keystrokes recorded by a nearby phone, the classifier achieved an accuracy of 95 percent, the highest accuracy seen without the use of a language model,” the study reportedly said.


USAF Conducts First AI Flight With Stealth Drone

The Air Force Research Laboratory (AFRL) completed the first-ever flight of an AFRL-developed stealth drone powered by artificial intelligence software.

On July 25, the machine-learning-trained, artificial intelligence-powered XQ-58A Valkyrie flew a three-hour sortie at Florida’s Eglin Air Force Base.

“The mission proved out a multi-layer safety framework on an AI/ML-flown uncrewed aircraft and demonstrated an AI/ML agent solving a tactically relevant ‘challenge problem’ during airborne operations,” said Col. Tucker Hamilton, chief of AI Test and Operations for the Department of the Air Force.

Hamilton continued, “This sortie officially enables the ability to develop AI/ML agents that will execute modern air-to-air and air-to-surface skills that are immediately transferrable to other autonomy programs.”

Eglin has become the testing ground for advanced autonomous systems within the USAF. Last November, the service received two Valkyrie stealth drones assigned to the 40th Flight Test Squadron.

In past press releases, AFRL has described the Valkyrie as a “high-speed, long-range, low-cost unmanned platform designed to offer maximum utility at minimum cost.”


Deepfake Fraud Surges More Than 1000%, Insiders Say It’s Just The Beginning

As the line between fact and fiction gets harder to distinguish, online criminals need just two hours to create a realistic, computer-generated “deepfake” that can ruin someone’s life.

The surge in popularity of hyper-realistic photos, audio, and videos developed with artificial intelligence (AI)—commonly known as deepfakes—has become an internet sensation.

It’s also giving cyber villains an edge in the crime world.

Between 2022 and the first quarter of this year, deepfake use in fraud catapulted 1,200 percent in the United States alone.

It’s not just an American problem, though.

The same analysis found that deepfakes used for scams also exploded in Canada, Germany, and the United Kingdom; the United States accounted for 4.3 percent of global deepfake fraud cases in the study.

Meanwhile, AI experts and cybercrime investigators say we’re just at the tip of the iceberg. The rabbit hole of deepfake fraud potential just keeps going.

“I believe the No. 1 incentive for cyber criminals to commit cybercrime is law enforcement and their inability to keep up,” Michael Roberts told The Epoch Times.

Mr. Roberts is a professional investigator and the founder of the pioneering company Rexxfield, which helps victims of web-based attacks.

He also started PICDO, a cyber crime disruption organization, and has run counter-hacking education for branches of the U.S. and Australian militaries as well as NATO.

Mr. Roberts said legal systems in the Western world are “hopelessly overwhelmed” by online fraud cases, many of which include deepfake attacks. Moreover, the cases that get investigated without hiring a private firm are cherry-picked.

“And even then, it [the case] doesn’t get resolved,” he said.

The market for deepfake detection was valued at $3.86 billion in 2020 and is expected to grow 42 percent annually through 2026, according to an HSRC report.
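For scale, compounding that growth rate gives a rough projection (my arithmetic from the cited figures, not a number taken from the report):

```python
# Rough projection: $3.86B in 2020 growing ~42% per year through 2026.
value = 3.86
for year in range(2021, 2027):
    value *= 1.42
    print(f"{year}: ${value:.1f}B")
# The final line prints roughly $31.6B for 2026 -- about an eightfold rise.
```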


AI search of Neanderthal proteins resurrects ‘extinct’ antibiotics

Bioengineers have used artificial intelligence (AI) to bring molecules back from the dead.

To perform this molecular ‘de-extinction’, the researchers applied computational methods to data about proteins from both modern humans (Homo sapiens) and our long-extinct relatives, Neanderthals (Homo neanderthalensis) and Denisovans. This allowed the authors to identify molecules that can kill disease-causing bacteria — and that could inspire new drugs to treat human infections.
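As a toy illustration of what mining protein data for antimicrobial candidates can look like (simple heuristics standing in for the study’s actual models, so the window size and scoring rule are assumptions), one might slide a window across a protein sequence and rank peptides by net charge and hydrophobicity:

```python
# Toy sketch: scan a protein sequence for cationic, moderately hydrophobic
# peptides -- two crude properties often associated with antimicrobial
# activity. Illustrative only; not the published de-extinction pipeline.

# Kyte-Doolittle hydrophobicity scale.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
      "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
      "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
      "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def candidate_peptides(seq: str, window: int = 20):
    """Score every window-length peptide by net charge and mean hydrophobicity."""
    scored = []
    for i in range(len(seq) - window + 1):
        pep = seq[i:i + window]
        charge = sum(pep.count(a) for a in "KR") - sum(pep.count(a) for a in "DE")
        hydro = sum(KD[a] for a in pep) / window
        scored.append((pep, charge, hydro))
    # Cationic peptides rank first, then by hydrophobicity.
    return sorted(scored, key=lambda t: (t[1], t[2]), reverse=True)

# Example: top candidate windows from a made-up sequence.
print(candidate_peptides("MKKLRVALVCGSLLASAPLQAKEVKDLLRGAKRIV")[:3])
```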

“We’re motivated by the notion of bringing back molecules from the past to address problems that we have today,” says Cesar de la Fuente, a co-author of the study and a bioengineer at the University of Pennsylvania in Philadelphia. The study was published on 28 July in Cell Host & Microbe.


This Disinformation Is Just for You

It’s now well understood that generative AI will increase the spread of disinformation on the internet. From deepfakes to fake news articles to bots, AI will generate not only more disinformation, but more convincing disinformation. But what people are only starting to understand is how disinformation will become more targeted and better able to engage with people and sway their opinions.

When Russia tried to influence the 2016 US presidential election via the now disbanded Internet Research Agency, the operation was run by humans who often had little cultural fluency or even fluency in the English language and so were not always able to relate to the groups they were targeting. With generative AI tools, those waging disinformation campaigns will be able to finely tune their approach by profiling individuals and groups. These operatives can produce content that seems legitimate and relatable to the people on the other end and even target individuals with personalized disinformation based on data they’ve collected. Generative AI will also make it much easier to produce disinformation and will thus increase the amount of disinformation that’s freely flowing on the internet, experts say.

“Generative AI lowers the financial barrier for creating content that’s tailored to certain audiences,” says Kate Starbird, an associate professor in the Department of Human Centered Design & Engineering at the University of Washington. “You can tailor it to audiences and make sure the narrative hits on the values and beliefs of those audiences, as well as the strategic part of the narrative.”

Rather than producing just a handful of articles a day, Starbird adds, “You can actually write one article and tailor it to 12 different audiences. It takes five minutes for each one of them.”

Considering how much content people post to social media and other platforms, it’s very easy to collect data to build a disinformation campaign. Once operatives are able to profile different groups of people throughout a country, they can teach the generative AI system they’re using to create content that manipulates those targets in highly sophisticated ways.

“You’re going to see that capacity to fine-tune. You’re going to see that precision increase. You’re going to see the relevancy increase,” says Renée DiResta, technical research manager at the Stanford Internet Observatory.


PhD Student Uses Deepfake to Pass Popular Voice Authentication and Spoof Detection System

University of Waterloo (UW) cybersecurity PhD student Andre Kassis published his findings after gaining access to an account protected by voice biometrics using deepfake AI-generated audio recordings.

A hacker can create a deepfake voice from just five minutes of the target’s recorded speech, which can be taken from public posts on social media, the research shows. Open-source AI software available on GitHub can create deepfake audio capable of defeating voice authentication.

He used the deepfake to expose a weakness in the Amazon Connect voice authentication system, a UW release reveals. Four-second attacks on Connect had a 10 percent success rate, and attacks closer to 30 seconds were successful 40 percent of the time.

In response, the company added biometric anti-spoofing software that looks for digital markers in a voice recording, revealing whether it was made by a machine or a human. This worked until Kassis used free software to remove those markers from his deepfakes.
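Conceptually, that kind of anti-spoofing check is a binary classifier over audio features. A minimal sketch, assuming a labeled set of genuine and synthetic clips (the features and model here are generic stand-ins, not Amazon’s system):

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def spectral_features(path: str) -> np.ndarray:
    """MFCC summary statistics for one clip -- a generic feature choice."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_spoof_detector(real_paths, fake_paths):
    """Fit a real-vs-synthetic classifier on labeled audio clips."""
    X = [spectral_features(p) for p in real_paths + fake_paths]
    y = [0] * len(real_paths) + [1] * len(fake_paths)  # 1 = synthetic
    return LogisticRegression(max_iter=1000).fit(np.array(X), y)
```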

His method can bypass less sophisticated voice biometric authentication systems with a 99 percent success rate after six tries, according to the announcement.


Sexy AI-generated influencers are silently swarming social media with fake names and backstories with one goal: Con desperate men

With the vast amount of filters and photo editing apps at users’ disposal, it can be hard to tell what’s real and fake on social media these days.

But a DailyMail.com probe has found a budding world of scantily-clad AI influencers who are conning desperate men out of money.

Earlier this week, a 19-year-old blonde bombshell known as Milla Sofia made headlines when it emerged she was artificially generated.

Since then, we’ve uncovered dozens of digital influencers on Instagram, TikTok and Twitter who often have fake names and elaborate backstories, jobs and interests. AI-generated photos show them on fake vacations.

These rising stars – who combined have hundreds of thousands of followers – are receiving admiration and cash from real men. 

Andrea is a brown-haired beauty with more than 30,000 followers on Twitter/X who comment on her lewd photos. She includes her PayPal account details in her bio and offers nude images for subscribers on Patreon.

Andrea’s Patreon offers plan options to chat with her, and for $300 a month, she will ‘basically be your online girlfriend.’ 

The human creators of these AI influencers remain unknown faces on the web, pushing out content that portrays a life they could only have dreamed of.

Milla Sofia is an aspiring fashion model with a portfolio of portraits showing her tanning in Bora Bora, taking in the views in Greece and working in a corporate office.

She made headlines this week after Futurism uncovered her existence online.

Some AI-generated personas, like Miquela Sousa, have been revealed to be marketing stunts that turned into gold mines.

Miquela starred in advertising campaigns for major brands such as Prada and Balenciaga, was interviewed by Vogue, and was named one of Time magazine’s 25 most influential people on the internet.

The forever 19-year-old is the brainchild of Trevor McFedries and Sara DeCou, founders of the fashion brand BRUD, who unleashed the AI teen in 2019.

At first, her creators liked to tease her followers and capitalize on the confusion. ‘Is she human?’ asked one perturbed Instagram user. ‘Why do you look like a doll?’ demanded another.

‘She’s some kind of cute mannequin,’ someone claimed. ‘It’s clearly a robot,’ stated another.

Eventually, fans were told the truth, or part of it at least. ‘I’m not a human being,’ Miquela confessed on her page. She said her ‘hands’ were shaking as she ‘wrote’ the post. 


Japanese Police Test AI-Equipped Cameras To Protect VIPs

Japanese police will begin testing security cameras equipped with AI-based technology to protect high-profile public figures, Nikkei reports.

AI-equipped cameras can offer functions such as “behavior detection,” which analyzes a person’s movements, and “facial recognition,” which identifies individuals. The agency will consider only the technology’s behavior-detection capability.

In behavior detection, the system learns to detect unusual movements, such as repeatedly looking around, by observing the patterns of suspicious individuals. Spotting suspicious behavior in a crowd is difficult for the human eye, and the system could make security forces better able to head off risks.
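In outline, behavior detection of this kind is an anomaly-detection problem over per-person movement features. A minimal sketch, assuming a video tracker already produces such features for each observed individual (the feature names and model are assumptions, not details of the police system):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Assumed per-person features from a video tracker, e.g.:
# [head_turns_per_min, mean_speed, loitering_seconds]
normal_behavior = np.random.default_rng(0).normal(
    loc=[2.0, 1.2, 10.0], scale=[1.0, 0.4, 5.0], size=(500, 3))

# Fit on typical crowd behavior; statistical outliers get flagged.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_behavior)

suspect = np.array([[15.0, 0.3, 240.0]])  # frequent head turns, loitering
print(detector.predict(suspect))           # -1 means anomalous
```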

The camera system can also spot guns and other suspicious items, as well as intrusions into unauthorized areas; these capabilities, along with detection accuracy, will be evaluated as part of the trial.

The announcement came as the country marked the anniversary of the fatal shooting of former Prime Minister Shinzo Abe on Saturday.

The National Police Academy will explore the use of the technology before deciding on a wider deployment.


The Future Of AI Is War… And Human Extinction As Collateral Damage

A world in which machines governed by artificial intelligence (AI) systematically replace human beings in most business, industrial, and professional functions is horrifying to imagine. After all, as prominent computer scientists have been warning us, AI-governed systems are prone to critical errors and inexplicable “hallucinations,” resulting in potentially catastrophic outcomes.

But there’s an even more dangerous scenario imaginable from the proliferation of super-intelligent machines: the possibility that those nonhuman entities could end up fighting one another, obliterating all human life in the process.

The notion that super-intelligent computers might run amok and slaughter humans has, of course, long been a staple of popular culture. In the prophetic 1983 film “WarGames,” a supercomputer known as WOPR (for War Operation Plan Response and, not surprisingly, pronounced “whopper”) nearly provokes a catastrophic nuclear war between the United States and the Soviet Union before being disabled by a teenage hacker (played by Matthew Broderick). The “Terminator” movie franchise, beginning with the original 1984 film, similarly envisioned a self-aware supercomputer called “Skynet” that, like WOPR, was designed to control U.S. nuclear weapons but chooses instead to wipe out humanity, viewing us as a threat to its existence.

Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility in the very real world of the near future. In addition to developing a wide variety of “autonomous,” or robotic, combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called “robot generals.” In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when, and how they kill enemy troops or take fire from their opponents. In some scenarios, robot decision-makers could even end up exercising control over America’s atomic weapons, potentially allowing them to ignite a nuclear war resulting in humanity’s demise.

Now, take a breath for a moment. The installation of an AI-powered command-and-control (C2) system like this may seem a distant possibility. Nevertheless, the U.S. Department of Defense is working hard to develop the required hardware and software in a systematic, increasingly rapid fashion. In its budget submission for 2023, for example, the Air Force requested $231 million to develop the Advanced Battlefield Management System (ABMS), a complex network of sensors and AI-enabled computers designed to collect and interpret data on enemy operations and provide pilots and ground forces with a menu of optimal attack options. As the technology advances, the system will be capable of sending “fire” instructions directly to “shooters,” largely bypassing human control.

“A machine-to-machine data exchange tool that provides options for deterrence, or for on-ramp [a military show-of-force] or early engagement,” was how Will Roper, assistant secretary of the Air Force for acquisition, technology, and logistics, described the ABMS system in a 2020 interview. Suggesting that “we do need to change the name” as the system evolves, Roper added, “I think Skynet is out, as much as I would love doing that as a sci-fi thing. I just don’t think we can go there.”

And while he can’t go there, that’s just where the rest of us may, indeed, be going.

Mind you, that’s only the start. In fact, the Air Force’s ABMS is intended to constitute the nucleus of a larger constellation of sensors and computers that will connect all U.S. combat forces, the Joint All-Domain Command-and-Control System (JADC2, pronounced “Jad-C-two”). “JADC2 intends to enable commanders to make better decisions by collecting data from numerous sensors, processing the data using artificial intelligence algorithms to identify targets, then recommending the optimal weapon… to engage the target,” the Congressional Research Service reported in 2022.
