SCIENTISTS IMPLANT SUBJECTS WITH FAKE MEMORIES USING DEEPFAKES

Researchers have found that they can incept false memories by showing subjects deepfaked clips of movie remakes that were never actually produced.

As detailed in a recent paper published in the journal PLOS ONE, deepfaked clips of made-up movies were convincing enough to trick participants into believing they were real. Some went as far as to rate the fake films, which included purported remakes of real movies such as “The Matrix” rebooted with Will Smith in the lead, as better than the originals.

But the study did have an important caveat.

“However, deepfakes were no more effective than simple text descriptions at distorting memory,” the paper reads, suggesting that deepfakes aren’t entirely necessary to trick somebody into accepting a false memory.

“We shouldn’t jump to predictions of dystopian futures based on our fears around emerging technologies,” lead study author Gillian Murphy, a misinformation researcher at University College Cork in Ireland, told The Daily Beast. “Yes there are very real harms posed by deep fakes, but we should always gather evidence for those harms in the first instance, before rushing to solve problems we’ve just assumed might exist.”

Keep reading

Scientists Receive Green Light to Merge Human Brain Cells with Computer Chips

Brain cells merging with computer chips could be the next evolution of artificial intelligence (AI). Scientists in Australia have been awarded funding to grow human brain cells and combine them with silicon chips.

A team led by researchers from Melbourne’s Monash University is receiving more than $405,000 as part of Australia’s National Intelligence and Security Discovery Research Grants Program. The new project, led by Associate Professor Adeel Razi of the Turner Institute for Brain and Mental Health, in collaboration with Melbourne start-up Cortical Labs, will see scientists grow around 800,000 brain cells in a lab. They will then “teach” these cells to perform goal-directed tasks.

The project’s goal is to create what the team calls the DishBrain system, “to understand the various biological mechanisms that underlie lifelong continual learning.”

Last year, the lab-grown brain cells made headlines around the globe after demonstrating their ability to perform simple tasks in the tennis-style video game Pong. The team hopes these continual learning capabilities will transform machine learning, a branch of AI. The technology is becoming increasingly relevant in society, playing a role in everything from self-driving cars to intelligent wearable devices.
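
The Pong demonstration gives a concrete sense of what “teaching” cultured neurons means: the published work closed a feedback loop in which electrode activity was decoded into paddle movements, with predictable stimulation delivered after a hit and unpredictable stimulation after a miss. The Python sketch below is a hypothetical simplification of that loop; every class and method name is invented for illustration, as Cortical Labs’ actual interface is not public.

```python
import random

class ElectrodeArray:
    """Hypothetical stand-in for a multi-electrode array interface."""

    def read_activity(self):
        # Decode firing rates from two motor regions of the culture
        # (randomized here so the sketch runs without hardware).
        return {"up": random.random(), "down": random.random()}

    def stimulate(self, predictable):
        # Deliver structured stimulation on success and noisy, unpredictable
        # stimulation on failure -- the feedback scheme the Pong work reported.
        pass

class PongStub:
    """Trivial stand-in for the simulated game world."""

    def step(self, move):
        return random.random() < 0.5  # Did the paddle hit the ball?

def closed_loop_round(array, game):
    rates = array.read_activity()
    # Decode intended paddle motion from relative regional activity.
    move = "up" if rates["up"] > rates["down"] else "down"
    hit = game.step(move)
    array.stimulate(predictable=hit)
    return hit

hits = sum(closed_loop_round(ElectrodeArray(), PongStub()) for _ in range(100))
print(f"{hits}/100 rallies won")
```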

According to Associate Professor Razi, the research program’s work using lab-grown brain cells embedded onto silicon chips, “merges the fields of artificial intelligence and synthetic biology to create programmable biological computing platforms.”

“This new technology capability in future may eventually surpass the performance of existing, purely silicon-based hardware,” Razi says in a university release.

Keep reading

Metals Really Can Heal Themselves Just Like The Cyborgs in “Terminator,” Scientists Reveal

According to new research, metals have the remarkable ability to heal themselves, just like the terrifying cyborg assassins in the “Terminator” movies. This discovery opens up possibilities for self-repairing engines, bridges, airplanes, and even rockets — which is especially promising for future manned missions to Mars.

In groundbreaking experiments, scientists were astonished to witness microscopic cracks disappear, offering hope for machines that can mend themselves on the spot.

“This was absolutely stunning to watch first-hand. What we have confirmed is that metals have their own intrinsic, natural ability to heal themselves, at least in the case of fatigue damage at the nanoscale,” says lead author Dr. Brad Boyce from Sandia National Laboratories in Albuquerque, in a media release.

Fatigue damage, resulting from repeated stress or motion, leads to the formation of tiny cracks in materials over time. These cracks grow and propagate until the entire device fails. This issue is of particular concern in spacecraft design, as a round trip to Mars takes at least 21 months.

Keep reading

The Future Of AI Is War… And Human Extinction As Collateral Damage

A world in which machines governed by artificial intelligence (AI) systematically replace human beings in most business, industrial, and professional functions is horrifying to imagine. After all, as prominent computer scientists have been warning us, AI-governed systems are prone to critical errors and inexplicable “hallucinations,” resulting in potentially catastrophic outcomes.

But there’s an even more dangerous scenario imaginable from the proliferation of super-intelligent machines: the possibility that those nonhuman entities could end up fighting one another, obliterating all human life in the process.

The notion that super-intelligent computers might run amok and slaughter humans has, of course, long been a staple of popular culture. In the prophetic 1983 film “WarGames,” a supercomputer known as WOPR (for War Operation Plan Response and, not surprisingly, pronounced “whopper”) nearly provokes a catastrophic nuclear war between the United States and the Soviet Union before being disabled by a teenage hacker (played by Matthew Broderick). The “Terminator” movie franchise, beginning with the original 1984 film, similarly envisioned a self-aware supercomputer called “Skynet” that, like WOPR, was designed to control U.S. nuclear weapons but chooses instead to wipe out humanity, viewing us as a threat to its existence.

Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility in the very real world of the near future. In addition to developing a wide variety of “autonomous,” or robotic, combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called “robot generals.” In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when, and how they kill enemy troops or take fire from their opponents. In some scenarios, robot decision-makers could even end up exercising control over America’s atomic weapons, potentially allowing them to ignite a nuclear war resulting in humanity’s demise.

Now, take a breath for a moment. The installation of an AI-powered command-and-control (C2) system like this may seem a distant possibility. Nevertheless, the U.S. Department of Defense is working hard to develop the required hardware and software in a systematic, increasingly rapid fashion. In its budget submission for 2023, for example, the Air Force requested $231 million to develop the Advanced Battle Management System (ABMS), a complex network of sensors and AI-enabled computers designed to collect and interpret data on enemy operations and provide pilots and ground forces with a menu of optimal attack options. As the technology advances, the system will be capable of sending “fire” instructions directly to “shooters,” largely bypassing human control.

“A machine-to-machine data exchange tool that provides options for deterrence, or for on-ramp [a military show-of-force] or early engagement,” was how Will Roper, assistant secretary of the Air Force for acquisition, technology, and logistics, described the ABMS system in a 2020 interview. Suggesting that “we do need to change the name” as the system evolves, Roper added, “I think Skynet is out, as much as I would love doing that as a sci-fi thing. I just don’t think we can go there.”

And while he can’t go there, that’s just where the rest of us may, indeed, be going.

Mind you, that’s only the start. In fact, the Air Force’s ABMS is intended to constitute the nucleus of a larger constellation of sensors and computers that will connect all U.S. combat forces, the Joint All-Domain Command-and-Control System (JADC2, pronounced “Jad-C-two”). “JADC2 intends to enable commanders to make better decisions by collecting data from numerous sensors, processing the data using artificial intelligence algorithms to identify targets, then recommending the optimal weapon… to engage the target,” the Congressional Research Service reported in 2022.
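
Stripped of acronyms, the CRS description amounts to a three-stage decision-support pipeline: fuse incoming sensor data, let a model flag probable targets, then rank weapon options for a human commander to approve. The Python sketch below illustrates only that generic shape; every name, field, and scoring rule in it is invented for illustration, since nothing about JADC2’s internals is public at this level of detail.

```python
from dataclasses import dataclass

# Toy illustration of the pipeline CRS describes: sensors -> AI target
# identification -> ranked engagement recommendations. All names, fields,
# and thresholds below are hypothetical.

@dataclass
class SensorTrack:
    track_id: str
    position: tuple       # (lat, lon), illustrative only
    confidence: float     # model's confidence this track is hostile

@dataclass
class WeaponOption:
    name: str
    estimated_effectiveness: float

def identify_targets(tracks, threshold=0.9):
    """Stage 2: keep only tracks the model is highly confident about."""
    return [t for t in tracks if t.confidence >= threshold]

def recommend(targets, weapons):
    """Stage 3: rank options per target. Note this returns a *menu* for a
    human decision-maker; the article's worry is what happens when the
    'fire' step stops waiting for that human."""
    return {
        t.track_id: sorted(weapons,
                           key=lambda w: w.estimated_effectiveness,
                           reverse=True)
        for t in targets
    }

tracks = [SensorTrack("T-01", (48.1, 37.8), 0.95),
          SensorTrack("T-02", (48.2, 37.9), 0.40)]
weapons = [WeaponOption("option_a", 0.7), WeaponOption("option_b", 0.9)]
print(recommend(identify_targets(tracks), weapons))  # only T-01 survives the cut
```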

Keep reading

U.S. Committee Examines Role of AI in Warfare

Richard Moore, chief of the United Kingdom’s Secret Intelligence Service (SIS), claimed in a rare public speech on Wednesday that “artificial intelligence [AI] will change the world of espionage, but it won’t replace the need for human spies,” while admitting that British spies are already using AI to disrupt the supply of weapons to Russia.  

According to AP News, in his speech Moore painted AI as a “potential asset and major threat” and called China the “single most important strategic focus” for SIS, commonly known as MI6. He added, “We will increasingly be tasked with obtaining intelligence on how hostile states are using AI in damaging, reckless and unethical ways.” 

Moore said that “the unique characteristics of human agents in the right places will become still more significant,” highlighting spies’ ability to “influence decisions inside a government or terrorist group.”

While speaking to an audience at the British ambassador’s residence in Prague, Moore urged Russians who oppose the invasion of Ukraine to spy for Britain. “I invite them to do what others have already done this past 18 months and join hands with us,” he said, assuring prospective defectors that “their secrets will always be safe with us” and that “our door is always open.”  

While the MI6 chief spent more time talking about the Russia-Ukraine conflict, it was his comments on the West potentially “falling behind rivals in the AI race” that stood out. Moore declared: “Together with our allies, [SIS] intends to win the race to master the ethical and safe use of AI.”

Also alert to how hostile states are using AI, the House Armed Services Subcommittee on Cyber, Information Technologies, and Innovation heard testimony from AI experts at Tuesday’s hearing, “Man and Machine: Artificial Intelligence on the Battlefield.”

The subcommittee’s goal was to discuss “the barriers that prevent the Department of Defense [DOD] from adopting and deploying artificial intelligence (AI) effectively and safely, the Department’s role in AI adoption, and the risks to the Department from adversarial AI.” 

Alexandr Wang, founder and CEO of Scale AI, testified that during an investor trip to China, he witnessed first-hand the “progress that China was making toward developing computer vision technology and other forms of AI.” Wang was troubled at the time, “because this technology was also being used for domestic repression, such as persecuting the Uyghur population.” 

Keep reading

Japan To Deploy Pre-Crime Style “Behavior Detection” Technology

Japan’s National Police Agency has decided to adopt AI-enhanced pre-crime surveillance cameras to bolster the security measures surrounding VIPs.

This step comes in the wake of the shocking assassination of former Prime Minister Shinzo Abe and in response to the rising threats posed by what the government calls “lone offenders.”

The use of AI in law enforcement is becoming commonplace globally. A 2019 study by the Carnegie Endowment for International Peace revealed that 52 out of the 176 nations surveyed were incorporating AI tools into their policing strategies, Nikkei Asia reported.

Keep reading

Robotic Flea Leaps 87 Times Its Own Length

Although fleas are annoying pests, credit must be given where it’s due. These tiny creatures, just 3 millimeters long, can leap as far as 330 millimeters in a single hop—a distance close to 100 times their own body length. Now a similar feat has been achieved by a miniature robot. The results were published 24 March in IEEE Robotics and Automation Letters.
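
The “close to 100 times” figure is simply the ratio of the two measurements quoted above, which also puts the headline robot’s 87-body-length leap in context:

$$
\frac{330\ \text{mm jump}}{3\ \text{mm body length}} = 110 \approx 100\times \text{body length}
$$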

“We were inspired by fleas in nature, which despite their small size can unleash tremendous potential and jump close to 100 times their body length. No one in the field of robotics has been able to achieve this feat yet,” says Ruide Yun, a third-year Ph.D. student at Beihang University, in Beijing, who was involved in the study.

To acquire flealike jumping abilities, the robot had to be able to unleash a lot of energy at once, so Yun and his colleagues created one that works somewhat like a miniature piston engine.

Keep reading

ChatGPT’s Evil Twin “WormGPT” Is Silently Entering Emails And Raiding Banks

A malicious copy of OpenAI’s ChatGPT has been created by a bad actor and its aim is to take your money.

The evil AI is called WormGPT, and it was created by a hacker for sophisticated email phishing attacks.

Cybersecurity firm SlashNext confirmed the artificially intelligent language bot had been created purely for malicious purposes.

The firm explained in a report:

“Our team recently gained access to a tool known as ‘WormGPT’ through a prominent online forum that’s often associated with cybercrime.

“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities.”

The cyber experts experimented with WormGPT to see just how dangerous it could be.

They asked it to create phishing emails and found the results disturbing.

“The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.

“In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations,” the experts wrote.

SlashNext says WormGPT is an example of the threat that generative AI language models pose.

Experts think the tool could be damaging even in the hands of a novice cybercriminal.

With AI like this out there, it’s best to be extra vigilant when it comes to checking your email inbox.

That especially applies to any email that asks for money, banking details, or other personal information.

Keep reading

Thousands of Russian officials to give up iPhones over US spying fears

Russian authorities have banned thousands of officials and state employees from using iPhones and other Apple products as a crackdown against the American tech company intensifies over espionage concerns.

The trade ministry said that from Monday it will ban all use of iPhones for “work purposes”. The digital development ministry as well as Rostec, the state-owned company that is under sanction by the west for supplying Russia’s war machine in Ukraine, have said they will follow suit or have already introduced bans.

The ban on iPhones, iPad tablets and other Apple devices at leading ministries and institutions reflects growing concern in the Kremlin and the Federal Security Service spy agency over a surge in espionage activity by US intelligence agencies against Russian state institutions. “Security officials in ministries — these are FSB employees who hold civilian positions such as deputy ministers — announced that iPhones were no longer considered safe and that alternatives should be sought,” said a person close to a government agency that has banned Apple products.

A month after President Vladimir Putin launched his full-scale invasion of Ukraine in February last year, he signed a decree demanding that organisations involved in “critical information infrastructure” — a broad term that includes healthcare, science and the financial sector — switch to domestically developed software by 2025. The move reflected Moscow’s longstanding desire to make state institutions switch away from foreign technology. Some Russian analysts suggested the current edict will do little to assuage suspicions that western intelligence agencies are able to access sensitive information on Russian government activity.

Keep reading