Metals Really Can Heal Themselves Just Like The Cyborgs in “Terminator,” Scientists Reveal

According to new research, metals have the remarkable ability to heal themselves, just like the terrifying cyborg assassins in the “Terminator” movies. This discovery opens up possibilities for self-repairing engines, bridges, airplanes, and even rockets — which is especially promising for future manned missions to Mars.

In groundbreaking experiments, scientists were astonished to witness microscopic cracks disappear, offering hope for machines that can mend themselves on the spot.

“This was absolutely stunning to watch first-hand. What we have confirmed is that metals have their own intrinsic, natural ability to heal themselves, at least in the case of fatigue damage at the nanoscale,” says lead author Dr. Brad Boyce from Sandia National Laboratories in Albuquerque, in a media release.

Fatigue damage, resulting from repeated stress or motion, leads to the formation of tiny cracks in materials over time. These cracks grow and propagate until the entire device fails. This issue is of particular concern in spacecraft design, as a round trip to Mars takes at least 21 months.

Keep reading

The Future Of AI Is War… And Human Extinction As Collateral Damage

A world in which machines governed by artificial intelligence (AI) systematically replace human beings in most business, industrial, and professional functions is horrifying to imagine. After all, as prominent computer scientists have been warning us, AI-governed systems are prone to critical errors and inexplicable “hallucinations,” resulting in potentially catastrophic outcomes.

But there’s an even more dangerous scenario imaginable from the proliferation of super-intelligent machines: the possibility that those nonhuman entities could end up fighting one another, obliterating all human life in the process.

The notion that super-intelligent computers might run amok and slaughter humans has, of course, long been a staple of popular culture. In the prophetic 1983 film “WarGames,” a supercomputer known as WOPR (for War Operation Plan Response and, not surprisingly, pronounced “whopper”) nearly provokes a catastrophic nuclear war between the United States and the Soviet Union before being disabled by a teenage hacker (played by Matthew Broderick). The “Terminator” movie franchise, beginning with the original 1984 film, similarly envisioned a self-aware supercomputer called “Skynet” that, like WOPR, was designed to control U.S. nuclear weapons but chooses instead to wipe out humanity, viewing us as a threat to its existence.

Though once confined to the realm of science fiction, the concept of supercomputers killing humans has now become a distinct possibility in the very real world of the near future. In addition to developing a wide variety of “autonomous,” or robotic combat devices, the major military powers are also rushing to create automated battlefield decision-making systems, or what might be called “robot generals.” In wars in the not-too-distant future, such AI-powered systems could be deployed to deliver combat orders to American soldiers, dictating where, when, and how they kill enemy troops or take fire from their opponents. In some scenarios, robot decision-makers could even end up exercising control over America’s atomic weapons, potentially allowing them to ignite a nuclear war resulting in humanity’s demise.

Now, take a breath for a moment. The installation of an AI-powered command-and-control (C2) system like this may seem a distant possibility. Nevertheless, the U.S. Department of Defense is working hard to develop the required hardware and software in a systematic, increasingly rapid fashion. In its budget submission for 2023, for example, the Air Force requested $231 million to develop the Advanced Battlefield Management System (ABMS), a complex network of sensors and AI-enabled computers designed to collect and interpret data on enemy operations and provide pilots and ground forces with a menu of optimal attack options. As the technology advances, the system will be capable of sending “fire” instructions directly to “shooters,” largely bypassing human control.

“A machine-to-machine data exchange tool that provides options for deterrence, or for on-ramp [a military show-of-force] or early engagement,” was how Will Roper, assistant secretary of the Air Force for acquisition, technology, and logistics, described the ABMS system in a 2020 interview. Suggesting that “we do need to change the name” as the system evolves, Roper added, “I think Skynet is out, as much as I would love doing that as a sci-fi thing. I just don’t think we can go there.”

And while he can’t go there, that’s just where the rest of us may, indeed, be going.

Mind you, that’s only the start. In fact, the Air Force’s ABMS is intended to constitute the nucleus of a larger constellation of sensors and computers that will connect all U.S. combat forces, the Joint All-Domain Command-and-Control System (JADC2, pronounced “Jad-C-two”). “JADC2 intends to enable commanders to make better decisions by collecting data from numerous sensors, processing the data using artificial intelligence algorithms to identify targets, then recommending the optimal weapon… to engage the target,” the Congressional Research Service reported in 2022.

Keep reading

U.S. Committee Examines Role of AI in Warfare

Richard Moore, chief of the United Kingdom’s Secret Intelligence Service (SIS), claimed in a rare public speech on Wednesday that “artificial intelligence [AI] will change the world of espionage, but it won’t replace the need for human spies,” while admitting that British spies are already using AI to disrupt the supply of weapons to Russia.  

According to AP News, in his speech Moore painted AI as a “potential asset and major threat” and called China the “single most important strategic focus” for SIS, commonly known as MI6. He added, “We will increasingly be tasked with obtaining intelligence on how hostile states are using AI in damaging, reckless and unethical ways.” 

Moore said that “the unique characteristics of human agents in the right places will become still more significant,” highlighting spies’ ability to “influence decisions inside a government or terrorist group.”

While speaking to an audience at the British ambassador’s residence in Prague, Moore urged Russians who oppose the invasion of Ukraine to spy for Britain. “I invite them to do what others have already done this past 18 months and join hands with us,” he said, assuring prospective defectors that “their secrets will always be safe with us” and that “our door is always open.”  

While the MI6 chief spent more time talking about the Russia-Ukraine conflict, it was his comments on the West potentially “falling behind rivals in the AI race” that stood out. Moore declared: “Together with our allies, [SIS] intends to win the race to master the ethical and safe use of AI.”

Well aware of how hostile states are using AI, the House Armed Services Subcommittee on Cyber, Information Technologies, and Innovation heard testimony from AI experts at Tuesday’s hearing, “Man and Machine: Artificial Intelligence on the Battlefield.”

The subcommittee’s goal was to discuss “the barriers that prevent the Department of Defense [DOD] from adopting and deploying artificial intelligence (AI) effectively and safely, the Department’s role in AI adoption, and the risks to the Department from adversarial AI.” 

Alexandr Wang, founder and CEO of Scale AI, testified that during an investor trip to China, he witnessed first-hand the “progress that China was making toward developing computer vision technology and other forms of AI.” Wang was troubled at the time, “because this technology was also being used for domestic repression, such as persecuting the Uyghur population.” 

Keep reading

Japan To Deploy Pre-Crime Style “Behavior Detection” Technology

The Japan National Police Agency has decided to adopt AI-enhanced pre-crime surveillance cameras to bolster the security measures surrounding VIPs.

This step comes in the wake of the shocking assassination of former Prime Minister Shinzo Abe and the rising threat posed by what the government calls “lone offenders.”

The use of AI in law enforcement is becoming commonplace globally. A 2019 study by the Carnegie Endowment for International Peace revealed that 52 out of the 176 nations surveyed were incorporating AI tools into their policing strategies, Nikkei Asia reported.

Keep reading

Robotic Flea Leaps 87 Times Its Own Length

Although fleas are annoying pests, credit must be given where it’s due. These tiny creatures, just 3 millimeters long, can leap as far as 330 millimeters in a single hop—a distance close to 100 times their own body length. Now a similar feat has been achieved by a miniature robot. The results were published on 24 March in IEEE Robotics and Automation Letters.

“We were inspired by fleas in nature, which despite their small size can unleash tremendous potential and jump close to 100 times their body length. No one in the field of robotics has been able to achieve this feat yet,” says Ruide Yun, a third-year Ph.D. student at Beihang University, in Beijing, who was involved in the study.

To acquire flealike jumping abilities, the robot had to be able to unleash a lot of energy at once, so Yun and his colleagues created one that works somewhat like a miniature piston engine.

Keep reading

ChatGPT’s Evil Twin “WormGPT” Is Silently Entering Emails And Raiding Banks

A malicious copy of OpenAI’s ChatGPT has been created by a bad actor, and its aim is to take your money.

The evil AI is called WormGPT, and it was created by a hacker for sophisticated email phishing attacks.

Cybersecurity firm SlashNext confirmed the artificially intelligent language bot had been created purely for malicious purposes.

The firm explained in a report:

Our team recently gained access to a tool known as ‘WormGPT’ through a prominent online forum that’s often associated with cybercrime.

This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities.

The cyber experts experimented with WormGPT to see just how dangerous it could be.

They asked it to create phishing emails and found the results disturbing.

“The results were unsettling. WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC attacks.

“In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations,” the experts wrote.

SlashNext says WormGPT is an example of the threat that language-generative AI models pose.

Experts think the tool could be damaging even in the hands of a novice cybercriminal.

With AI like this out there, it’s best to be extra vigilant when it comes to checking your email inbox.

That especially applies to any email that asks for money, banking details, or other personal information.

Keep reading

Thousands of Russian officials to give up iPhones over US spying fears

https://www.ft.com/content/6567e7f2-c5fb-4da4-bd95-bf7ceef54038

Russian authorities have banned thousands of officials and state employees from using iPhones and other Apple products as a crackdown against the American tech company intensifies over espionage concerns.

The trade ministry said that from Monday it will ban all use of iPhones for “work purposes”. The digital development ministry as well as Rostec, the state-owned company that is under sanction by the west for supplying Russia’s war machine in Ukraine, have said they will follow suit or have already introduced bans.

The ban on iPhones, iPad tablets and other Apple devices at leading ministries and institutions reflects growing concern in the Kremlin and the Federal Security Service spy agency over a surge in espionage activity by US intelligence agencies against Russian state institutions. “Security officials in ministries — these are FSB employees who hold civilian positions such as deputy ministers — announced that iPhones were no longer considered safe and that alternatives should be sought,” said a person close to a government agency that has banned Apple products.

A month after President Vladimir Putin launched his full-scale invasion of Ukraine in February last year, he signed a decree demanding that organisations involved in “critical information infrastructure” — a broad term that includes healthcare, science and the financial sector — switch to domestically developed software by 2025. The move reflected Moscow’s longstanding desire to make state institutions switch away from foreign technology.

Some Russian analysts suggested the current edict will do little to assuage suspicions that western intelligence agencies are able to access sensitive information on Russian government activity.

Keep reading

Startup aims to make lab-grown human eggs, transforming options for creating families

On a cloudy day on a gritty side street near the shore of San Francisco Bay, a young man answers the door at a low concrete building.

“I’m Matt Krisiloff. Nice to meet you,” says one of the founders of Conception, a biotech startup that is trying to do something audacious: revolutionize the way humans reproduce. “So let me find them real quick,” says Krisiloff as he turns to look for his co-founders, Pablo Hurtado and Bianka Seres, so they can explain Conception’s mission.

“I personally think what we’re doing will probably change many aspects of society as we know it,” says Hurtado, the company’s chief scientific officer. “It’s really exciting to be working on a technology that can change the lives of millions of humans.”

Conception is trying to accelerate, and eventually commercialize, a field of biomedical research known as in vitro gametogenesis (IVG). “Basically, we’re trying to turn a type of stem cell called an induced pluripotent stem cell into a human egg,” Krisiloff says. “[This] really opens the door, if you can create eggs, to be able to help people have children that otherwise don’t have options right now.”

The experimental technology could help women who have lost their eggs to cancer treatment, women who have never been able to produce healthy eggs and women whose eggs are no longer viable because of their age.

IVG would enable these women to have their own genetically related babies at any age. That’s because induced pluripotent stem cells can be made from just a single cell from anyone’s skin or blood. So these lab-grown eggs would have that person’s DNA.

But the possibilities are even broader.

“My personal biggest interest in it is it could allow same-sex couples to be able to have biological children together as well,” Krisiloff says. “Yeah, I’m gay, and it’s something that got me so personally interested in this in the first place.”

Same goes for Hurtado. “There is something intrinsic about sharing a life that is half me and half my husband. I don’t have that capacity right now.” He adds, “I am devoting my life to trying to change that.”

Keep reading

Scientists Inserted Neanderthal And Denisovan Genes Into Mice – Here’s What Happened

A gene that was carried by both Neanderthals and Denisovans causes mice to develop larger heads, twisted ribs, and shortened spines, according to the results of a yet-to-be-published study. Researchers used CRISPR gene editing technology to insert the ancient genetic code into rodents in order to understand how it might have contributed to the body shape of our extinct relatives.

The gene in question is known as GLI3 and plays a vital role in embryonic development in modern humans. Mutations within this gene are associated with physical malformations such as polydactyly – which refers to the growth of extra fingers or toes – and the deformation of the skull.

Neanderthals and Denisovans both carried a slightly altered version of the GLI3 gene, in which an amino acid at one end of the coding region is substituted. However, neither of these ancient species had an abnormal number of digits or life-threatening cranial defects.

As the study authors point out, though, these extinct hominid species displayed several morphological characteristics that differed from those of modern humans, “including elongated and low crania, larger brow ridges, and broader rib cages.”

To determine how the ancient form of the GLI3 gene might have affected the development of our extinct cousins, the researchers first engineered mice to carry a faulty version of the gene. This caused the rodents to develop severe skull and brain deformities as well as polydactyly, illustrating how a functioning version of the gene is essential for healthy embryonic growth.

Keep reading