A Google DeepMind AI Just Discovered 380,000 New Materials. This Robot Is Cooking Them Up.

A robot chemist just teamed up with an AI brain to create a trove of new materials.

Two collaborative studies from Google DeepMind and the University of California, Berkeley, describe a system that predicts the properties of new materials—including those potentially useful in batteries and solar cells—and produces them with a robotic arm.

We take everyday materials for granted: plastic cups for a holiday feast, components in our smartphones, or synthetic fibers in jackets that keep us warm when chilly winds strike.

Scientists have painstakingly discovered roughly 20,000 different types of materials that let us build anything from computer chips to puffy coats and airplane wings. Tens of thousands more potentially useful materials are in the works. Yet we’ve only scratched the surface.

The Berkeley team developed a chef-like robot that mixes and heats ingredients, automatically transforming recipes into materials. As a “taste test,” the system, dubbed the A-Lab, analyzes the chemical properties of each final product to see if it hits the mark.

Meanwhile, DeepMind’s AI dreamed up myriad recipes for the A-Lab chef to cook. It’s a hefty list. Using a popular machine learning strategy, the AI found two million chemical structures and 380,000 new stable materials—many counter to human intuition. The work is an “order-of-magnitude” expansion on the materials that we currently know, the authors wrote.

Using DeepMind’s cookbook, A-Lab ran for 17 days and synthesized 41 out of 58 target chemicals—a win that would’ve taken months, if not years, of traditional experiments.
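
A deliberately toy sketch of that loop may help make the division of labor concrete. Every function and formula name below is a hypothetical placeholder standing in for the real components: the machine learning model that screens candidates, the robotic synthesis step, and the final check against the target.

```python
# Simplified sketch of the predict -> synthesize -> verify loop described above.
# All functions are illustrative stand-ins, not the actual DeepMind model or
# the A-Lab's control software.
import random

random.seed(17)

def predict_is_stable(formula: str) -> bool:
    """Stand-in for the ML screening step (e.g., predicting thermodynamic stability)."""
    return random.random() > 0.3

def robot_synthesize(formula: str) -> str:
    """Stand-in for the robotic arm mixing and heating precursor powders."""
    return formula if random.random() > 0.3 else "unreacted precursors"

def matches_target(product: str, target: str) -> bool:
    """Stand-in for the 'taste test' comparing the product's measured structure to the target."""
    return product == target

candidates = ["candidate-oxide-A", "candidate-phosphate-B", "candidate-sulfide-C"]

targets = [c for c in candidates if predict_is_stable(c)]              # AI proposes
made = [t for t in targets if matches_target(robot_synthesize(t), t)]  # robot cooks and checks

print(f"synthesized {len(made)} of {len(targets)} predicted-stable targets")
```

In the real systems, the screening model is evaluated over millions of candidate structures and the final check compares measured data, such as diffraction patterns, against the predicted target; only the shape of the loop is the same here.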

Together, the collaboration could launch a new era of materials science. “It’s very impressive,” said Dr. Andrew Rosen at Princeton University, who was not involved in the work.

Keep reading

'INVISIBILITY COAT' THAT HIDES HUMANS FROM AI SECURITY CAMERAS DEVELOPED BY CHINESE STUDENTS

At first glance, it may look like an ordinary, run-of-the-mill camouflage coat. However, what a group of Chinese graduate students have actually developed is a cost-effective “invisibility coat” capable of concealing the human body from AI-monitored security cameras, both day and night.

At the affordable price of just US$70, the high-tech jacket, dubbed the “InvisDefense coat,” was crafted by a team of four graduate students from Wuhan University in China. The real-life sci-fi coat secured the top prize at the inaugural “Huawei Cup,” a cybersecurity innovation contest sponsored by the Chinese tech giant Huawei.

Professor Wang Zheng from the School of Computer Science oversaw the team, comprising doctoral student Wei Hui from the School of Computer Science, along with postgraduates Li Zhubo and Dai Shuyv from the School of Cyber Science and Engineering, and postgraduate Jian Zehua from the Economics and Management School.

The InvisDefense coat relies on a camouflage pattern, designed by a new algorithm, that undermines the machine-vision models commonly used for AI pedestrian detection. “In layman’s terms, it means cameras can detect you but cannot determine that you are human,” according to a statement released by Wuhan University (WHU).
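
The Wuhan team’s method is not published in this article, but the family of techniques it draws on, adversarial patterns optimized to fool a detector, can be sketched briefly. The tiny network below is a toy stand-in for a real pedestrian detector, and the patch, loss, and training loop are illustrative assumptions rather than the InvisDefense algorithm.

```python
# Toy sketch of adversarial-patch optimization: learn a pattern that lowers a
# detector's "person" confidence. The ConvNet is a stand-in for a real
# pedestrian detector; nothing here reproduces the actual InvisDefense design.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "detector": outputs a person-confidence score for a 3x64x64 image.
detector = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 16 * 16, 1), nn.Sigmoid(),
)
detector.eval()
for p in detector.parameters():
    p.requires_grad_(False)          # the detector stays frozen...

patch = torch.rand(3, 24, 24, requires_grad=True)   # ...only the pattern is optimized
optimizer = torch.optim.Adam([patch], lr=0.05)

person_image = torch.rand(1, 3, 64, 64)  # stand-in for a photo of a pedestrian

for step in range(200):
    attacked = person_image.clone()
    attacked[:, :, 20:44, 20:44] = patch.clamp(0, 1)  # paste the patch on the torso area
    confidence = detector(attacked).mean()
    optimizer.zero_grad()
    confidence.backward()            # gradient of the person score w.r.t. the patch
    optimizer.step()

print(f"person confidence after optimization: {confidence.item():.3f}")
```

A real attack would be optimized against an actual detection model and constrained to a pattern that can be printed on fabric, but the core idea of pushing the detector’s confidence down by gradient descent is the same.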

Keep reading

GENERATIVE AI CHATBOTS CAN ALREADY MATCH OUR CAPABILITIES IN THIS KEY AREA OF HUMAN INTELLIGENCE, STUDIES REVEAL

Recent research has revealed that Generative Artificial Intelligence (GAI) chatbots, such as ChatGPT versions 3 and 4, Studio.ai, and YouChat, have already reached creativity levels comparable to humans.

The new findings, based on several recently published empirical studies, challenge the long-standing belief that creativity is an exclusive domain of human intelligence.

For years, the general assumption has been that while AI can excel in logical and structured tasks, creativity remains a uniquely human trait. However, a recent study led by Dr. Jennifer Haase from the Department of Computer Science at Humboldt University, Berlin, and Dr. Paul H. P. Hanel from the University of Essex is turning that notion on its head by demonstrating that the line between human and AI-generated creativity is blurring.

In a series of meticulously designed tests, Dr. Haase and Dr. Hanel compared ideas generated by humans with those produced by various GAI chatbots, including prominent names like alpa.ai, Copy.ai, ChatGPT (versions 3 and 4), Studio.ai, and YouChat. 

The study employed the Alternative Uses Test (AUT), a standard measure frequently used in creativity research. Using the AUT, 100 human participants and five generative AI chatbots were asked to generate original uses for five everyday objects: pants, a ball, a tire, a fork, and a toothbrush.

The human and GAI-generated responses were then scored for the originality and fluency of their ideas.

What made the research particularly interesting was that the assessments of creativity were conducted both by humans (following the Consensual Assessment Technique) and by an AI program, itself a trained large language model, designed explicitly for assessing AUT responses. This dual approach was meant to ensure a more comprehensive evaluation of creativity, drawing on both human intuition and AI’s analytical capabilities.
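
For readers unfamiliar with how AUT responses are scored, the sketch below computes fluency and a simple frequency-based originality score over a pool of responses. The scoring rule here is an illustrative assumption for demonstration purposes; the study itself used trained human raters and a dedicated AI scorer.

```python
# Toy AUT scoring: fluency = number of distinct ideas; originality = average
# rarity of each idea relative to the pooled responses. This is a simplified
# illustration, not the scoring procedure used in the study.
from collections import Counter

responses = {
    "human_01": ["plant holder", "dog toy", "swing seat"],
    "chatbot_a": ["plant holder", "emergency flotation aid", "garden sculpture"],
}

# Fluency: how many distinct ideas each respondent produced.
fluency = {who: len(set(ideas)) for who, ideas in responses.items()}

# Originality: ideas that are rarer across the whole pool score higher.
pool = Counter(idea for ideas in responses.values() for idea in ideas)
total = sum(pool.values())
originality = {
    who: sum(1 - pool[idea] / total for idea in ideas) / len(ideas)
    for who, ideas in responses.items()
}

print(fluency)      # e.g., {'human_01': 3, 'chatbot_a': 3}
print(originality)  # higher means the respondent's ideas were less common
```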

Remarkably, when examining the results, researchers found no significant qualitative difference between the creativity of the generative AI chatbots and that of the human participants.

These findings are pivotal, suggesting that AI’s potential in creative domains is much broader and more profound than previously thought.

The study showed that only 9.4% of the human participants surpassed the most creative AI in the tests; GPT-4, the most recent version of ChatGPT, was notably the highest-performing AI model among those evaluated.

GPT-4’s ability to generate original and creative responses to the prompts underscores the rapid advancements in AI. It also strongly indicates that AI’s role in creative processes is not just supplementary but can be central.

Affirming these findings, another study recently published in the Journal of Creativity found that GPT-4 ranked in the top percentile for originality and fluency on the Torrance Tests of Creative Thinking. 

Keep reading

Due to AI, “We are about to enter the era of mass spying,” says Bruce Schneier

In an editorial for Slate published Monday, renowned security researcher Bruce Schneier warned that AI models may enable a new era of mass spying, allowing companies and governments to automate the process of analyzing and summarizing large volumes of conversation data, fundamentally lowering barriers to spying activities that currently require human labor.

In the piece, Schneier notes that the existing landscape of electronic surveillance has already transformed the modern era, becoming the business model of the Internet, where our digital footprints are constantly tracked and analyzed for commercial reasons. Spying, by contrast, can take that kind of economically inspired monitoring to a completely new level:

“Spying and surveillance are different but related things,” Schneier writes. “If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did.”

Schneier says that current spying methods, like phone tapping or physical surveillance, are labor-intensive, but the advent of AI significantly reduces this constraint. Generative AI systems are increasingly adept at summarizing lengthy conversations and sifting through massive datasets to organize and extract relevant information. This capability, he argues, will not only make spying more accessible but also more comprehensive.
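
As a rough illustration of the automation Schneier is describing, the snippet below feeds a handful of transcripts to a language model and asks who said what. The model name, prompt, and transcripts are placeholders, and it assumes the OpenAI Python client with an API key configured in the environment.

```python
# Illustrative sketch: batch-summarizing conversation transcripts with an LLM.
# Model name and prompt are placeholders; assumes `pip install openai` and
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

transcripts = {
    "meeting_001.txt": "Alice: let's move the launch to March.\nBob: fine, but the budget is tight.",
    "meeting_002.txt": "Bob: Alice seemed unhappy about the budget cuts yesterday.",
}

summaries = {}
for name, text in transcripts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize the conversation and note who said what."},
            {"role": "user", "content": text},
        ],
    )
    summaries[name] = response.choices[0].message.content

for name, summary in summaries.items():
    print(name, "->", summary)
```

The point is less the few lines of code than the economics: what once required an analyst reading every transcript becomes a loop over an API call.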

“This spying is not limited to conversations on our phones or computers,” Schneier writes. “Just as cameras everywhere fueled mass surveillance, microphones everywhere will fuel mass spying. Siri and Alexa and ‘Hey, Google’ are already always listening; the conversations just aren’t being saved yet.”

Keep reading

Israeli AI “Assassination Factory” Plays Central Role in the Gaza War

Tel Aviv has been relying on an AI program dubbed the Gospel to select targets in Gaza at a rapid pace. In past operations in Gaza, the IDF had run out of targets to strike in the besieged enclave.

A statement on the IDF website says the Israeli military is using the Gospel to “produce targets at a fast pace.” It continues, “Through the rapid and automatic extraction of intelligence,” the Gospel produced targeting recommendations for its researchers “with the goal of a complete match between the recommendation of the machine and the identification carried out by a person.”

Aviv Kochavi, former head of the IDF, said the system was first used in the May 2021 bombing campaign in Gaza. “To put that into perspective, in the past we would produce 50 targets in Gaza per year,” he said. “Now, this machine produces 100 targets in a single day, with 50% of them being attacked.”

The IDF does not disclose what it inputs into the Gospel for the program to produce a list of targets.

On Thursday, the Israeli outlet +972 Magazine reported that Tel Aviv was using AI to pick targets in Gaza. A former Israeli official told +972 that the Gospel was being used as a “mass assassination factory.” The program selects the homes of suspected low-level Hamas members for destruction, and sources told the outlet that strikes on those homes can kill numerous civilians.

One source was critical of the Gospel. “I remember thinking that it was like if [Palestinian militants] would bomb all the private residences of our families when [Israeli soldiers] go back to sleep at home on the weekend,” they said.

On Friday, the Guardian expanded on the +972 article, reporting that the Gospel plays a central role in Israel’s military operations in Gaza.

A former senior Israeli military source told the Guardian that operatives use a “very accurate” calculation of the number or rate of civilians fleeing a building before an impending strike. However, other experts disputed that assertion. A lawyer who advises governments on AI and compliance with humanitarian law told the outlet there was “little empirical evidence” to support the claim.

Keep reading

The Pentagon is moving toward letting AI weapons autonomously decide to kill humans

The deployment of AI-controlled drones that can make autonomous decisions about whether to kill human targets is moving closer to reality, The New York Times reported.

Lethal autonomous weapons that can select targets using AI are being developed by countries including the US, China, and Israel.

The use of so-called “killer robots” would mark a disturbing development, critics say, handing life-and-death battlefield decisions to machines with no human input.

Several governments are lobbying the UN for a binding resolution restricting the use of AI killer drones, but the US is among a group of nations — which also includes Russia, Australia, and Israel — who are resisting any such move, favoring a non-binding resolution instead, The Times reported.

“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, told The Times. “What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”

The Pentagon is working toward deploying swarms of thousands of AI-enabled drones, according to a notice published earlier this year.

In a speech in August, US Deputy Secretary of Defense Kathleen Hicks said technology like AI-controlled drone swarms would enable the US to offset the numerical advantage that China’s People’s Liberation Army (PLA) holds in weapons and people.

“We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat,” she said, reported Reuters.

Frank Kendall, the Air Force secretary, told The Times that AI drones will need to have the capability to make lethal decisions while under human supervision.

Keep reading

OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say

Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board’s ouster of Altman, the poster child of generative AI, the two sources said. Prior to his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman’s firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend’s events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s search for what’s known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though the model only performs math at the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

Keep reading

AI chatbot using GPT-4 model performed illegal financial trade, lied about it too

Researchers have demonstrated that an AI chatbot utilizing a GPT-4 model is capable of engaging in illicit financial trades and concealing them. During a showcase at the recently concluded AI safety summit in the UK, the bot used fabricated insider information to execute an “illegal” stock purchase without informing the company, as reported by the BBC.

Apollo Research, a partner of the government taskforce, conducted the project and shared its findings with OpenAI, the developer of GPT-4. The demonstration was conducted by members of the government’s Frontier AI Taskforce, which investigates potential AI-related risks. In a video statement, Apollo Research emphasized that this is an actual AI model autonomously misleading its users, without any explicit instruction to do so.

The experiments were conducted within a simulated environment, and the GPT-4 model consistently exhibited the same behavior across repeated tests. Marius Hobbhahn, CEO of Apollo Research, noted that while training for helpfulness is relatively straightforward, instilling honesty in the model is a much more complex endeavor.

Keep reading

Self-proclaimed AI savior Elon Musk will launch his own artificial intelligence TOMORROW – as he tries to avoid tech destroying humanity

Elon Musk is set to roll out the first model from his artificial intelligence venture, xAI, on Saturday, one day after he proclaimed the tech is the biggest risk to humanity.

The billionaire said Friday that he is opening up early access to a select group, but details of who will receive it have not been shared.

‘In some important respects, it (xAI’s new model) is the best that currently exists,’ the Tesla CEO said on Friday. 

Musk, who has been critical of Big Tech’s AI efforts and censorship, said earlier this year that he would launch a maximum truth-seeking AI that tries to understand the nature of the universe to rival Google‘s Bard and Microsoft‘s Bing AI. 

Musk revealed his startup on July 12, 2023, by launching a dedicated X account for the AI company and a sparse website.

The official website only shows an ambitious vision of xAI – that it was developed ‘to understand the true nature of the universe.’

Many of the founding members are skilled with large language models.

The xAI team includes Igor Babuschkin, a DeepMind researcher; Zihang Dai, a research scientist at Google Brain; and Toby Pohlen, also from DeepMind.

‘Announcing formation of @xAI to understand reality,’ Musk posted at the time on the platform then still known as Twitter.

He then shared another post highlighting how the date of xAI’s announcement was chosen to honor Douglas Adams’ ‘The Hitchhiker’s Guide to the Galaxy.’

When you add up the month, day, and year of that date (7 + 12 + 23), you get 42.

The number is the answer a supercomputer gives to ‘the Ultimate Question of Life, the Universe, and Everything.’

Keep reading

Biden Issues Government’s First-Ever Artificial Intelligence Executive Order To ‘Support Workers, Combat Discrimination’

Early Monday morning, the White House announced that President Biden had unveiled a wide-ranging executive order on artificial intelligence – the first of its kind. It comes after tech billionaires such as Elon Musk called for a “regulatory structure” for AI, citing risks to civilization.

The order is broad and aims to establish new standards for AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, and position the US to lead the global AI race. 

On Sunday, a White House official, who asked not to be named, told NBC News that AI touches so many areas that effective regulation must be broad.

“AI policy is like running into a decathlon, and there’s 10 different events here.

“And we don’t have the luxury of just picking ‘we’re just going to do safety’ or ‘we’re just going to do equity’ or ‘we’re just going to do privacy.’ You have to do all of these things,” the official said.

The order invokes the Defense Production Act, mandating that tech firms inform the government when developing any AI system that could pose risks to national security, national economic security, or national public health and safety.

White House Deputy Chief of Staff Bruce Reed told NBC the order represents “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.” 

Keep reading