PhD Student Uses Deepfake to Pass Popular Voice Authentication and Spoof Detection System

University of Waterloo (UW) cybersecurity PhD student Andre Kassis published his findings after gaining access to an account protected with voice biometrics by using deepfake AI-generated audio recordings.

The research shows that a hacker can create a deepfake voice from as little as five minutes of the target’s recorded speech, which can be taken from public posts on social media. Open-source AI software freely available on GitHub can generate deepfake audio capable of defeating voice authentication.

He used the deepfake to expose a weakness in the Amazon Connect voice authentication system, a UW release reveals. Four-second attacks on Connect had a 10 percent success rate, and attacks closer to 30 seconds were successful 40 percent of the time.

In response, the company added biometric anti-spoofing software that could detect digital markers in a voice recording, revealing whether it was made by a machine or a human. This worked until Kassis used free software to remove those digital markers from his deepfakes.

His method can bypass less sophisticated voice biometric authentication systems with a 99 percent success rate after six tries, according to the announcement.
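Anti-spoofing checks of this kind look for statistical artifacts that betray machine generation. The sketch below is purely illustrative, not Kassis’s attack or Amazon’s actual detector: the spectral-flatness feature and the threshold are invented for the example. The idea is that a toy detector might flag audio whose spectrum is implausibly “clean”:

```python
import numpy as np

def spectral_flatness(signal: np.ndarray, eps: float = 1e-12) -> float:
    """Ratio of geometric to arithmetic mean of the power spectrum.
    Near 1.0 for noise-like audio; near 0.0 for a single clean tone."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + eps
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def looks_synthetic(signal: np.ndarray, threshold: float = 0.3) -> bool:
    # Hypothetical rule: audio that is unnaturally tonal, with no
    # broadband noise floor, gets flagged as machine-generated.
    return spectral_flatness(signal) < threshold

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
pure_tone = np.sin(2 * np.pi * 440 * t)   # unnaturally clean signal
noise = rng.standard_normal(t.size)       # broadband, noise-like signal

print(looks_synthetic(pure_tone))  # True  (flagged)
print(looks_synthetic(noise))      # False (passes)
```

Commercial systems combine many learned features rather than one hand-picked statistic, and Kassis’s countermeasure worked by scrubbing exactly these kinds of telltale artifacts from his deepfakes.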

Keep reading

Sexy AI-generated influencers are silently swarming social media with fake names and backstories with one goal: Con desperate men

With the vast amount of filters and photo editing apps at users’ disposal, it can be hard to tell what’s real and fake on social media these days.

But a DailyMail.com probe has found a budding world of scantily-clad AI influencers who are conning desperate men out of money.

Earlier this week, a 19-year-old blonde bombshell known as Milla Sofia made headlines when it emerged she was artificially generated.

Since then, we’ve uncovered dozens of digital influencers on Instagram, TikTok and Twitter who often have fake names and elaborate backstories, jobs and interests. AI-generated photos show them on fake vacations. 

These rising stars – who combined have hundreds of thousands of followers – are receiving admiration and cash from real men. 

Andrea is a brown-haired beauty with more than 30,000 followers on Twitter/X who comment on her lewd photos. She includes her PayPal account details in her bio and offers nude images for subscribers on Patreon.

Andrea’s Patreon offers plan options to chat with her, and for $300 a month, she will ‘basically be your online girlfriend.’ 

The human creators of these AI influencers remain anonymous faces on the web, pushing out content in pursuit of a life they had only dreamed of. 

Milla Sofia is an aspiring fashion model with a portfolio of portraits showing her tanning in Bora Bora, taking in the views in Greece and working in a corporate office.

She has made headlines this week after Futurism uncovered her existence online. 

Some AI-generated personas, like Miquela Sousa, have been revealed to be marketing stunts that turned into gold mines.

Miquela starred in advertising campaigns for major brands such as Prada and Balenciaga, was interviewed by Vogue, and was named one of Time magazine’s 25 most influential people on the internet.

The forever 19-year-old is the brainchild of Trevor McFedries and Sara DeCou, founders of the fashion brand BRUD, who unleashed the AI teen in 2019.

At first, her creators liked to tease her followers and capitalize on the confusion. ‘Is she human?’ asked one perturbed Instagram user. ‘Why do you look like a doll?’ demanded another.

‘She’s some kind of cute mannequin,’ someone claimed. ‘It’s clearly a robot,’ stated another.

Eventually, fans were told the truth, or part of it at least. ‘I’m not a human being,’ Miquela confessed on her page. She said her ‘hands’ were shaking as she ‘wrote’ the post. 

Keep reading

Japanese Police Test AI-Equipped Cameras To Protect VIPs

Japanese police will begin testing security cameras equipped with AI-based technology to protect high-profile public figures, Nikkei reports.

AI-equipped cameras can have functions such as “behavior detection,” which analyzes a person’s movements, and “facial recognition,” which identifies a person. The agency will consider only the technology’s ability to detect behavior.

In behavior detection, the system learns to detect unusual movements, such as repeatedly looking around, by observing the patterns of suspicious individuals. Detecting suspicious behavior in crowds can be difficult to do with the human eye, and the system could make security forces better able to eliminate security risks.
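The “repeatedly looking around” rule can be illustrated with a simple sliding-window counter. This is a hypothetical sketch: the window length and threshold are invented, and real systems learn such patterns from pose and motion data rather than hand-coding them:

```python
from collections import deque

def flag_repeated_lookarounds(events, window_s=30.0, max_count=4):
    """Flag timestamps (in seconds) at which a person has 'looked around'
    more than `max_count` times within the trailing `window_s` seconds.
    `events` is an ascending list of detected look-around timestamps."""
    recent = deque()
    flagged = []
    for t in events:
        recent.append(t)
        # Drop detections that fell out of the trailing window.
        while recent and recent[0] < t - window_s:
            recent.popleft()
        if len(recent) > max_count:
            flagged.append(t)
    return flagged

# Five look-arounds within 14 seconds trips the rule; the sixth,
# 46 seconds later, does not.
print(flag_repeated_lookarounds([1, 5, 9, 12, 14, 60]))  # [14]
```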

The camera system can also spot guns and other suspicious items, as well as intrusions into unauthorized areas; these capabilities, along with the accuracy of detection, will be assessed as part of the trial.

The announcement comes as the country mourned the anniversary of the fatal shooting of former Prime Minister Shinzo Abe on Saturday.

The National Police Academy will explore the use of the technology before deciding on a wider deployment.

Keep reading

The Future Of AI Is War… And Human Extinction As Collateral Damage

A world in which machines governed by artificial intelligence (AI) systematically replace human beings in most business, industrial, and professional functions is horrifying to imagine. After all, as prominent computer scientists have been warning us, AI-governed systems are prone to critical errors and inexplicable “hallucinations,” resulting in potentially catastrophic outcomes.

But there’s an even more dangerous scenario imaginable from the proliferation of super-intelligent machines: the possibility that those nonhuman entities could end up fighting one another, obliterating all human life in the process.

The notion that super-intelligent computers might run amok and slaughter humans has, of course, long been a staple of popular culture. In the prophetic 1983 film “WarGames,” a supercomputer known as WOPR (for War Operation Plan Response and, not surprisingly, pronounced “whopper”) nearly provokes a catastrophic nuclear war between the United States and the Soviet Union before being disabled by a teenage hacker (played by Matthew Broderick). The “Terminator” movie franchise, beginning with the original 1984 film, similarly envisioned a self-aware supercomputer called “Skynet” that, like WOPR, was designed to control U.S. nuclear weapons but chooses instead to wipe out humanity, viewing us as a threat to its existence.

Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility in the very real world of the near future. In addition to developing a wide variety of “autonomous,” or robotic combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called “robot generals.” In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when, and how they kill enemy troops or take fire from their opponents. In some scenarios, robot decision-makers could even end up exercising control over America’s atomic weapons, potentially allowing them to ignite a nuclear war resulting in humanity’s demise.

Now, take a breath for a moment. The installation of an AI-powered command-and-control (C2) system like this may seem a distant possibility. Nevertheless, the U.S. Department of Defense is working hard to develop the required hardware and software in a systematic, increasingly rapid fashion. In its budget submission for 2023, for example, the Air Force requested $231 million to develop the Advanced Battle Management System (ABMS), a complex network of sensors and AI-enabled computers designed to collect and interpret data on enemy operations and provide pilots and ground forces with a menu of optimal attack options. As the technology advances, the system will be capable of sending “fire” instructions directly to “shooters,” largely bypassing human control.

“A machine-to-machine data exchange tool that provides options for deterrence, or for on-ramp [a military show-of-force] or early engagement,” was how Will Roper, assistant secretary of the Air Force for acquisition, technology, and logistics, described the ABMS system in a 2020 interview. Suggesting that “we do need to change the name” as the system evolves, Roper added, “I think Skynet is out, as much as I would love doing that as a sci-fi thing. I just don’t think we can go there.”

And while he can’t go there, that’s just where the rest of us may, indeed, be going.

Mind you, that’s only the start. In fact, the Air Force’s ABMS is intended to constitute the nucleus of a larger constellation of sensors and computers that will connect all U.S. combat forces, the Joint All-Domain Command-and-Control System (JADC2, pronounced “Jad-C-two”). “JADC2 intends to enable commanders to make better decisions by collecting data from numerous sensors, processing the data using artificial intelligence algorithms to identify targets, then recommending the optimal weapon… to engage the target,” the Congressional Research Service reported in 2022.

Keep reading


U.S. Committee Examines Role of AI in Warfare

Richard Moore, chief of the United Kingdom’s Secret Intelligence Service (SIS), claimed in a rare public speech on Wednesday that “artificial intelligence [AI] will change the world of espionage, but it won’t replace the need for human spies,” while admitting that British spies are already using AI to disrupt the supply of weapons to Russia.  

According to AP News, in his speech Moore painted AI as a “potential asset and major threat” and called China the “single most important strategic focus” for SIS, commonly known as MI6. He added, “We will increasingly be tasked with obtaining intelligence on how hostile states are using AI in damaging, reckless and unethical ways.” 

Moore shared that “’the unique characteristics of human agents in the right places will become still more significant,’ highlighting spies’ ability to ‘influence decisions inside a government or terrorist group.’” 

While speaking to an audience at the British ambassador’s residence in Prague, Moore urged Russians who oppose the invasion of Ukraine to spy for Britain. “I invite them to do what others have already done this past 18 months and join hands with us,” he said, assuring prospective defectors that “their secrets will always be safe with us” and that “our door is always open.”  

While the MI6 chief spent more time talking about the Russia-Ukraine conflict, it was his comments on the West potentially “falling behind rivals in the AI race” that stood out. Moore declared that, “Together with our allies, [SIS] intends to win the race to master the ethical and safe use of AI.”  

Well aware of how hostile states are using AI, the House Armed Services Subcommittee on Cyber, Information Technologies, and Innovation heard testimony from AI experts at Tuesday’s hearing, “Man and Machine: Artificial Intelligence on the Battlefield.” 

The subcommittee’s goal was to discuss “the barriers that prevent the Department of Defense [DOD] from adopting and deploying artificial intelligence (AI) effectively and safely, the Department’s role in AI adoption, and the risks to the Department from adversarial AI.” 

Alexandr Wang, founder and CEO of Scale AI, testified that during an investor trip to China, he witnessed first-hand the “progress that China was making toward developing computer vision technology and other forms of AI.” Wang was troubled at the time, “because this technology was also being used for domestic repression, such as persecuting the Uyghur population.” 

Keep reading

Japan To Deploy Pre-Crime Style “Behavior Detection” Technology

The Japan National Police Agency has decided to adopt AI-enhanced pre-crime surveillance cameras to bolster the security measures surrounding VIPs.

This step comes as the country commemorates the shocking assassination of former Prime Minister Shinzo Abe and responds to the rising threats posed by what the government calls “lone offenders.”

The use of AI in law enforcement is becoming commonplace globally. A 2019 study by the Carnegie Endowment for International Peace revealed that 52 out of the 176 nations surveyed were incorporating AI tools into their policing strategies, Nikkei Asia reported.

Keep reading

MIT Makes Probability-Based Computing a Bit Brighter

In a noisy and imprecise world, the definitive 0s and 1s of today’s computers can get in the way of accurate answers to messy real-world problems. So says an emerging field of research called probabilistic computing. Now a team of researchers at MIT has demonstrated a new way of generating probabilistic bits (p-bits) at much higher rates, using photonics to harness random quantum fluctuations in empty space.

The deterministic way in which conventional computers operate is not well suited to dealing with the uncertainty and randomness found in many physical processes and complex systems. Probabilistic computing promises to provide a more natural way to solve these kinds of problems by building processors out of components that behave randomly themselves.

The approach is particularly well suited to complicated optimization problems with many possible solutions or to doing machine learning on very large and incomplete datasets where uncertainty is an issue. Probabilistic computing could unlock new insights and findings in meteorology and climate simulations, for instance, or spam detection and counterterrorism software, or next-generation AI.
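The p-bit abstraction itself is easy to illustrate in software. This is a conceptual sketch only: the MIT work generates p-bits physically with photonics, not with a pseudorandom number generator, and the two-node coupling below is a toy problem. A p-bit outputs 1 with a tunable probability, and coupling p-bits together makes correlated, low-energy configurations dominate the samples:

```python
import math
import random

random.seed(42)

def p_bit(drive: float) -> int:
    """A probabilistic bit: outputs 1 with probability sigmoid(drive)."""
    return 1 if random.random() < 1.0 / (1.0 + math.exp(-drive)) else 0

# Two coupled p-bits: a minimal Ising-style pair with coupling J > 0
# that favors agreement. Repeated sampling concentrates probability
# on the two agreeing states.
J = 2.0
s = [0, 1]
counts = {}
for _ in range(5000):
    for i in (0, 1):
        other = 2 * s[1 - i] - 1     # map {0, 1} -> {-1, +1}
        s[i] = p_bit(J * other)      # Gibbs-style stochastic update
    state = tuple(s)
    counts[state] = counts.get(state, 0) + 1

# The agreeing states (0,0) and (1,1) are sampled far more often
# than the disagreeing ones.
print(max(counts, key=counts.get))
```

The same sampling trick, scaled up to thousands of p-bits, is what lets probabilistic machines explore many candidate solutions of an optimization problem at once; faster physical p-bit generation directly raises the sampling rate.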

Keep reading

ChatGPT’s Evil Twin “WormGPT” Is Silently Entering Emails And Raiding Banks

A malicious copy of OpenAI’s ChatGPT has been created by a bad actor and its aim is to take your money.

The evil AI is called WormGPT, and it was created by a hacker for sophisticated email phishing attacks.

Cybersecurity firm SlashNext confirmed the artificially intelligent language bot had been created purely for malicious purposes.

The firm explained in a report:

“Our team recently gained access to a tool known as ‘WormGPT’ through a prominent online forum that’s often associated with cybercrime.

“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities.”

The cyber experts experimented with WormGPT to see just how dangerous it could be.

They asked it to create phishing emails and found the results disturbing.

“The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.

“In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations,” the experts wrote.

SlashNext says WormGPT is an example of the threat that generative AI language models pose.

Experts think the tool could be damaging even in the hands of a novice cybercriminal.

With AI like this out there, it’s best to be extra vigilant when it comes to checking your email inbox.

That especially applies to any email that asks for money, banking details, or other personal information.

Keep reading

CEO of Worldcoin Says “Something Like World ID Will Eventually Exist…Whether You Like It Or Not”

Right now, it’s about those who voluntarily surrender their biometric data and receive “small sums” in Worldcoin in return for signing up to the World ID scheme.

But if OpenAI CEO Sam Altman has anything to say about how Worldcoin, a project he co-founded, develops – everyone who wants to use the internet will eventually be required to use World ID – or “something like it.”

And right now, it seems that people in several southern European countries, notably Spain and Portugal, are simply itching to give away their iris biometrics as proof of identity and right to a cryptocurrency transfer wallet.

The signup process involves exposing your eyes to Worldcoin’s Orb iris scanners. If reports are to be believed, uptake in Spain, where the scheme first became available a year ago, is better than anywhere else: 150,000 participants in total, 20,000 new ones each day, with a number of additional Orb scanners slated for installation in Barcelona.

Portugal is not far behind, with 120,000 participants, and Germany is said to also be warming up to the project, ever since it started expanding two months ago.

All in all, some 2 million “biometric credentials” are now operated by Worldcoin. Why do people sign up for it?

“Something like World ID will eventually exist, meaning that you will need to verify [you are human] on the internet, whether you like it or not,” Blania said.

“Whether you like it or not” are not words anyone likes to hear in connection with a scheme like this, but that is how Worldcoin CEO Alex Blania chose to describe the future.

In that future, according to Blania, digital ID will be so prevalent as to become inevitable: there will be no escaping having to verify that one is human (and likely quite a few other things) online – if one wants to be online at all.

And whether one “likes it or not.” Blania ties this to “progress” in AI, and predicts it could arrive within a couple of years.

Keep reading