AI scans RNA ‘dark matter’ and uncovers 70,000 new viruses

Researchers have used artificial intelligence (AI) to uncover 70,500 viruses previously unknown to science[1], many of them weird and nothing like known species. The RNA viruses were identified using metagenomics, in which scientists sample all the genomes present in the environment without having to culture individual viruses. The method shows the potential of AI to explore the ‘dark matter’ of the RNA virus universe.

Viruses are ubiquitous, infecting animals, plants and even bacteria, yet only a small fraction have been identified and described. There is “essentially a bottomless pit” of viruses to discover, says Artem Babaian, a computational virologist at the University of Toronto in Canada. Some of these viruses could cause diseases in people, which means that characterizing them could help to explain mystery illnesses, he says.

Previous studies have used machine learning to find new viruses in sequencing data. The latest study, published in Cell this week, takes that work a step further and uses it to look at predicted protein structures[1].

The AI model incorporates a protein-prediction tool, called ESMFold, that was developed by researchers at Meta (formerly Facebook, headquartered in Menlo Park, California). A similar AI system, AlphaFold, was developed by researchers at Google DeepMind in London, who won the Nobel Prize in Chemistry this week.

Keep reading

The Pentagon Wants to Use AI to Create Deepfake Internet Users

The United States’ secretive Special Operations Command is looking for companies to help create deepfake internet users so convincing that neither humans nor computers will be able to detect they are fake, according to a procurement document reviewed by The Intercept.

The plan, mentioned in a new 76-page wish list by the Department of Defense’s Joint Special Operations Command, or JSOC, outlines advanced technologies desired for the country’s most elite, clandestine military efforts. “Special Operations Forces (SOF) are interested in technologies that can generate convincing online personas for use on social media platforms, social networking sites, and other online content,” the entry reads.

The document specifies that JSOC wants the ability to create online user profiles that “appear to be a unique individual that is recognizable as human but does not exist in the real world,” with each featuring “multiple expressions” and “Government Identification quality photos.”

In addition to still images of faked people, the document notes that “the solution should include facial & background imagery, facial & background video, and audio layers,” and JSOC hopes to be able to generate “selfie video” from these fabricated humans. These videos will feature more than fake people: Each deepfake selfie will come with a matching faked background, “to create a virtual environment undetectable by social media algorithms.”

The Pentagon has already been caught using phony social media users to further its interests in recent years. In 2022, Meta and Twitter removed a propaganda network using faked accounts operated by U.S. Central Command, including some with profile pictures generated with methods similar to those outlined by JSOC. A 2024 Reuters investigation revealed a Special Operations Command campaign using fake social media users aimed at undermining foreign confidence in China’s Covid vaccine.

Last year, Special Operations Command, or SOCOM, expressed interest in using video “deepfakes,” a general term for synthesized audiovisual data meant to be indistinguishable from a genuine recording, for “influence operations, digital deception, communication disruption, and disinformation campaigns.” Such imagery is generated using a variety of machine learning techniques, generally using software that has been “trained” to recognize and recreate human features by analyzing a massive database of faces and bodies.

This year’s SOCOM wish list specifies an interest in software similar to StyleGAN, a tool released by Nvidia in 2019 that powered the globally popular website “This Person Does Not Exist.” Within a year of StyleGAN’s launch, Facebook said it had taken down a network of accounts that used the technology to create false profile pictures.

Since then, academic and private sector researchers have been engaged in a race between new ways to create undetectable deepfakes, and new ways to detect them. Many government services now require so-called liveness detection to thwart deepfaked identity photos, asking human applicants to upload a selfie video to demonstrate they are a real person — an obstacle that SOCOM may be interested in circumventing.

Keep reading

If AI Companies Are Trying To Build God, Shouldn’t They Get Our Permission First?

AI companies are on a mission to radically change our world. They’re working on building machines that could outstrip human intelligence and unleash a dramatic economic transformation on us all.

Sam Altman, the CEO of ChatGPT-maker OpenAI, has basically told us he’s trying to build a god — or “magic intelligence in the sky,” as he puts it. OpenAI’s official term for this is artificial general intelligence, or AGI. Altman says that AGI will not only “break capitalism” but also that it’s “probably the greatest threat to the continued existence of humanity.”

There’s a very natural question here: Did anyone actually ask for this kind of AI? By what right do a few powerful tech CEOs get to decide that our whole world should be turned upside down?

As I’ve written before, it’s clearly undemocratic that private companies are building tech that aims to totally change the world without seeking buy-in from the public. In fact, even leaders at the major companies are expressing unease about how undemocratic it is.

Jack Clark, the co-founder of the AI company Anthropic, told Vox last year that it’s “a real weird thing that this is not a government project.” He also wrote that there are several key things he’s “confused and uneasy” about, including, “How much permission do AI developers need to get from society before irrevocably changing society?” Clark continued:

Technologists have always had something of a libertarian streak, and this is perhaps best epitomized by the ‘social media’ and Uber et al era of the 2010s — vast, society-altering systems ranging from social networks to rideshare systems were deployed into the world and aggressively scaled with little regard to the societies they were influencing. This form of permissionless invention is basically the implicitly preferred form of development as epitomized by Silicon Valley and the general ‘move fast and break things’ philosophy of tech. Should the same be true of AI?

I’ve noticed that when anyone questions that norm of “permissionless invention,” a lot of tech enthusiasts push back. Their objections always seem to fall into one of three categories. Because this is such a perennial and important debate, it’s worth tackling each of them in turn and explaining why I think they’re wrong.

Keep reading

Logging Into a Brave New World: How Facial Recognition Just Got Personal

Surprising exactly no one paying attention to the slow erosion of privacy, the US General Services Administration (GSA) has rolled out its shiny new toy: facial recognition technology for accessing login.gov. Yes, that beloved single sign-in service, connecting Americans to federal and state agencies, now wants your face—literally. This gateway, used by citizens more than 300 million times a year, has decided the most efficient way to keep us all “safe” is by scanning our mugs. How very 2024.

But of course, this little “upgrade” didn’t just appear overnight. Oh no, it dragged itself through bureaucratic purgatory, complete with false starts, delays, and some spicy critique from the Inspector General. Apparently, login.gov had been fibbing about its compliance with Identity Assurance Level 2 (IAL2)—a fancy label for a government-mandated security standard that requires real-deal verification of who you are. Up until now, that “verification” meant having someone eyeball your ID card photo and say, “Yep, that looks about right,” rather than dipping into the biometric surveillance toolkit.

Facial recognition was supposed to make its grand debut last year, but things got complicated when it turned out login.gov wasn’t actually playing by the rules it claimed to follow. The Inspector General, ever the fun police, caught them misrepresenting their tech’s adherence to the IAL2 standard, causing the rollout to stall while everyone scrambled to figure out if they could get away with this. Now, after enough piloting to give a nervous airline passenger a heart attack, login.gov has finally reached compliance, but not without leaving a greasy trail of unanswered questions in its wake.

Keep reading

Battlespace Of The Brain: The Military Conquest To “Master The Human Domain”

In 1970, Zbigniew Brzezinski published his book Between Two Ages: America’s Role in the Technetronic Era.[i] Brzezinski was a futurist who cofounded the globalist Trilateral Commission with David Rockefeller in 1973, and served as national security adviser to Jimmy Carter between 1977 and 1981. Brzezinski understood the impact of science on society:

Speaking of a future at most decades away, an experimenter in intelligence control asserted, “I foresee a time when we shall have the means and therefore, inevitably, the temptation to manipulate the behavior and intellectual functioning of all the people through environmental and biochemical manipulation of the brain.” (Between Two Ages: America’s Role in the Technetronic Era, p. 15)

Another threat, less overt but no less basic, confronts liberal democracy. More directly linked to the impact of technology, it involves the gradual appearance of a more controlled and directed society. Such a society would be dominated by an elite whose claim to political power would rest on allegedly superior scientific know-how. Unhindered by the restraints of traditional liberal values, this elite would not hesitate to achieve its political ends by using the latest modern techniques for influencing public behavior and keeping society under close surveillance and control. (pp. 252–253)

In an August 2017 seminar at Lawrence Livermore National Laboratory’s Center for Global Security Research (CGSR), guest speaker Dr. James Giordano, of Georgetown University Medical Center, offered a sobering view of the calculated war on our brains, the temptation to manipulate the behavior and intellectual functioning of all the people, and the potential for weaponizing neuroscientific discoveries and neurotechnologies.

Dr. Giordano is a professor in the Departments of Neurology and Biochemistry, Chief of the Neuroethics Studies Program, and Co-director of the O’Neill-Pellegrino Program in Brain Science and Global Health Law and Policy at Georgetown University Medical Center in Washington, DC. He is a Senior Researcher and Task Leader of the Working Group on Dual-Use of the EU Human Brain Project, and has served as a Senior Science Advisory Fellow of the Strategic Multilayer Assessment group of the Joint Chiefs of Staff of the Pentagon.

His 2017 briefing, “Brain Science from Bench to Battlefield: The Realities—and Risks—of Neuroweapons,”[ii] explores the potentials of brain science in the context of public/private research for national defense, including using nano-pharmaceutical low-dose toxins or other chemicals as a controlled vector.

Keep reading

Drone swarms targeting US military bases are operated by ‘mother ship’ UFO, claims top Pentagon official

A retired senior Pentagon official has confirmed that UFO ‘mother ships’ were spotted ‘releasing swarms of smaller craft’ — adding further mystery to the still-unexplained intrusions over multiple US military bases.

His statements come amid the release of 50 pages of Air Force records related to provocative ‘drone’ incursions that one general calls ‘Close Encounters at Langley.’

For at least 17 nights last December, swarms of noisy, small UFOs were seen at dusk ‘moving at rapid speeds’ and displaying ‘flashing red, green, and white lights’ penetrating the highly restricted airspace above Langley Air Force Base in Virginia.

Senior ex-Pentagon security official Chris Mellon told DailyMail.com that the episode was ‘part of a much larger pattern affecting numerous national security installations.’

‘Two of the notable aspects,’ he said, ‘are the fact our drone signal-jamming devices have proven ineffective and these craft are making no effort to remain concealed.’

‘In fact, in some instances,’ as Mellon took pains to emphasize, ‘it is clear they want to be seen as though taunting us.’

Keep reading

Invisible text that AI chatbots understand and humans can’t? Yep, it’s a thing.

What if there was a way to sneak malicious instructions into Claude, Copilot, or other top-name AI chatbots and get confidential data out of them by using characters large language models can recognize and their human users can’t? As it turns out, there was—and in some cases still is.

The invisible characters, the result of a quirk in the Unicode text encoding standard, create an ideal covert channel that can make it easier for attackers to conceal malicious payloads fed into an LLM. The hidden text can similarly obfuscate the exfiltration of passwords, financial information, or other secrets out of the same AI-powered bots. Because the hidden text can be combined with normal text, users can unwittingly paste it into prompts. The secret content can also be appended to visible text in chatbot output.

The result is a steganographic framework built into the most widely used text encoding channel.

“Mind-blowing”

“The fact that GPT 4.0 and Claude Opus were able to really understand those invisible tags was really mind-blowing to me and made the whole AI security space much more interesting,” Joseph Thacker, an independent researcher and AI engineer at Appomni, said in an interview. “The idea that they can be completely invisible in all browsers but still readable by large language models makes [attacks] much more feasible in just about every area.”

To demonstrate the utility of “ASCII smuggling”—the term used to describe the embedding of invisible characters mirroring those contained in the American Standard Code for Information Interchange—researcher and term creator Johann Rehberger created two proof-of-concept (POC) attacks earlier this year that used the technique in hacks against Microsoft 365 Copilot. The service allows Microsoft users to use Copilot to process emails, documents, or any other content connected to their accounts. Both attacks searched a user’s inbox for sensitive secrets—in one case, sales figures and, in the other, a one-time passcode.

When found, the attacks induced Copilot to express the secrets in invisible characters and append them to a URL, along with instructions for the user to visit the link. Because the confidential information isn’t visible, the link appeared benign, so many users would see little reason not to click on it as instructed by Copilot. And with that, the invisible string of non-renderable characters covertly conveyed the secret messages inside to Rehberger’s server. Microsoft introduced mitigations for the attack several months after Rehberger privately reported it. The POCs are nonetheless enlightening.
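The Unicode quirk at issue is the “tags” block (U+E0000–U+E007F), a set of deprecated characters that mirror printable ASCII but render as nothing in most fonts and browsers. A minimal sketch of how such a covert channel works is below; the helper names and the sample secret are illustrative, not taken from Rehberger’s actual proof-of-concept attacks:

```python
def encode_tags(text: str) -> str:
    """Map printable ASCII onto invisible Unicode 'tag' characters (U+E0000 block)."""
    return "".join(chr(0xE0000 + ord(c)) for c in text)

def decode_tags(s: str) -> str:
    """Recover hidden ASCII from a string, ignoring ordinary visible characters."""
    return "".join(
        chr(ord(c) - 0xE0000) for c in s if 0xE0000 < ord(c) <= 0xE007F
    )

visible = "Click here for your report"
payload = visible + encode_tags("otp=483921")  # renders identically to `visible`
print(decode_tags(payload))                    # prints "otp=483921"
```

Because the tag characters survive copy-and-paste, a string like `payload` looks harmless to the user while carrying the appended secret to whoever (or whatever model) reads the raw code points.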

Keep reading

OpenAI Wants to Build Data Centres That Would Consume More Electricity Per Year Than the Whole of the U.K.

Over the past few months, the newswires have been hot with stories about the large-scale data centres that will be required to meet the needs of the forthcoming revolution in Artificial Intelligence (AI). How much electricity will these new data centres consume and what does that mean for the electricity demand forecasts underpinning the plans for Net Zero?

Recent Data Centre Announcements

To give a flavour of the scale of data centre developments that are coming, it is helpful to look at recent announcements from large tech companies. Back in March, it was announced that Amazon had bought a 960MW data centre that is powered by an adjacent nuclear power station. In April, Mark Zuckerberg, CEO of Meta, which owns Facebook and Instagram, said energy requirements may hold back the build-out of AI data centres. He also talked about building data centres that would consume 1GW of power.

Last month, Oracle chairman Larry Ellison announced that Oracle was designing a data centre consuming more than 1GW, to be powered by three small modular nuclear reactors (SMRs). Then Microsoft also got in on the act when it announced it had done a deal with U.S. utility Constellation to restart the 835MW Three Mile Island (TMI) Unit 1 nuclear power plant to power its data centres. Anxious not to be left out, Sundar Pichai, CEO of Google, said they too were working on 1GW data centres and saw money being invested in SMRs.

Finally, Sam Altman of OpenAI, the creator of ChatGPT, has trumped them all by pitching the idea of 5GW data centres to the White House. Altman has been heard talking of building five to seven of these leviathans.
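A rough sanity check of the headline comparison: assuming the upper end of Altman's pitch, continuous full-power operation, and a ballpark figure of roughly 270 TWh for annual U.K. electricity consumption (an assumption for illustration, not a figure from the article):

```python
GW_PER_SITE = 5        # Altman's pitched data centre size
SITES = 7              # upper end of "five to seven"
HOURS_PER_YEAR = 8760
UK_ANNUAL_TWH = 270    # assumed rough figure for U.K. annual consumption

# GW * hours = GWh; divide by 1,000 for TWh
data_centre_twh = GW_PER_SITE * SITES * HOURS_PER_YEAR / 1000
print(f"{data_centre_twh:.1f} TWh vs U.K. ~{UK_ANNUAL_TWH} TWh")  # 306.6 TWh
```

On those assumptions, seven 5GW sites would draw about 307 TWh a year, comfortably above the whole of the U.K.'s consumption; real utilisation would be lower, but the order of magnitude explains the headline.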

Bloomberg, usually a driver of the Net Zero bandwagon, has reluctantly admitted that there’s not enough clean energy – nuclear or otherwise – to satisfy AI’s voracious appetite and gas will have to fill the gap. America’s national security and energy security are eclipsing climate concerns.

This admission is important because it acknowledges that such important infrastructure cannot rely upon the vicissitudes of the weather. These data centres need a source of electricity that is reliable and always available. It would be absurd to think that a ChatGPT user will want to receive a message saying their request cannot be processed because it is not windy enough.

Keep reading

Swedish Police Want to Fight Crime with Live Facial Recognition

The Swedish police want to use facial recognition in real time to crack down on serious crimes.

Government investigators have already drafted a bill that will make it possible to use the technology. The regulation, however, still needs to be completed before it can be tabled, National Police Chief Petra Lundh told publicly funded radio broadcaster Sveriges Radio last week.

Lundh also noted that the legislation must comply with the EU AI Act and could potentially be temporary until crime rates settle down.

Sweden has been experiencing a flood of gang-related attacks involving firearms and explosives, giving the Scandinavian country the highest per capita rate of gun violence in the European Union. Police Chief Lundh believes law enforcement agencies could use cameras to find suspects.

“It is not unusual that we have a picture of the likely perpetrator, but then we cannot find him or her,” Lundh says.

The suggestion has already been met with criticism. The technology could make incorrect matches for people with dark skin, leading to perceptions that the AI is racist, says lawyer Kristofer Stahre.

“I am worried about what consequences it may have for the Swedish people,” he says.

The Swedish government has been working on expanding the use of biometric data in policing on other fronts.

Keep reading

In a rare disclosure, the Pentagon provides an update on the X-37B spaceplane

After more than nine months in an unusual, highly elliptical orbit, the US military’s X-37B spaceplane will soon begin dipping its wings into Earth’s atmosphere to lower its altitude before eventually coming back to Earth for a runway landing, the Space Force said Thursday.

The aerobraking maneuvers will use a series of passes through the uppermost fringes of the atmosphere to gradually reduce its speed with aerodynamic drag while expending minimal fuel. In orbital mechanics, this reduction in velocity will bring the apogee, or high point, of the X-37B’s orbit closer to Earth.
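The X-37B's actual orbital parameters are classified, but the mechanics described above can be sketched with the vis-viva equation using purely hypothetical numbers: a small velocity loss from drag near perigee lowers the semi-major axis, which pulls the apogee down while leaving the perigee nearly unchanged.

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3      # mean Earth radius, m

def apogee_after_perigee_dv(r_p: float, r_a: float, dv: float) -> float:
    """New apogee radius after a velocity change dv applied at perigee (vis-viva)."""
    a = (r_p + r_a) / 2                       # semi-major axis of the current orbit
    v_p = math.sqrt(MU * (2 / r_p - 1 / a))   # speed at perigee
    v_new = v_p + dv                          # dv is negative for a drag pass
    a_new = 1 / (2 / r_p - v_new**2 / MU)     # vis-viva rearranged for the new orbit
    return 2 * a_new - r_p                    # apogee radius of the new orbit

# Hypothetical highly elliptical orbit: 300 km perigee, 38,000 km apogee,
# with a 20 m/s velocity loss on one pass through the upper atmosphere
r_p = R_EARTH + 300e3
r_a = R_EARTH + 38000e3
r_a_new = apogee_after_perigee_dv(r_p, r_a, -20.0)
print(f"apogee drops by ~{(r_a - r_a_new) / 1e3:.0f} km")  # roughly 1,300 km
```

The point of the sketch is the leverage: a modest 20 m/s of drag at perigee lowers a high apogee by more than a thousand kilometres, which is why repeated passes can reshape the orbit with minimal fuel.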

Bleeding energy

The Space Force called the aerobraking a “novel space maneuver” and said its purpose was to allow the X-37B to “safely dispose of its service module components in accordance with recognized standards for space debris mitigation.”

While the reusable Boeing-built X-37B spaceplane is designed to land like an aircraft on a runway, the service module, mounted to the rear of the vehicle, carries additional payloads. At the end of the mission, the X-37B jettisons the disposable service module before reentry. The Space Force doesn’t want this section of the spacecraft to remain in its current high-altitude orbit and become a piece of space junk.

“Once the aerobrake maneuver is complete, the X-37B will resume its test and experimentation objectives until they are accomplished, at which time the vehicle will deorbit and execute a safe return as it has during its six previous missions,” the Space Force said.

The Space Force has identified mobility in orbit as a key focus for its next-generation space missions. This would allow satellites to move more freely between altitudes and orbital inclinations than they can today. Commanders don’t want a spacecraft’s movements to be constrained by the amount of fuel it carries, allowing satellites to “maneuver without regret.”

Space Force leaders have discussed in-orbit refueling, more efficient propulsion technologies, and other ways to achieve this end. Aerobraking is another way to lower a spacecraft’s orbit without using precious propellant.

“This first-of-a-kind maneuver from the X-37B is an incredibly important milestone for the United States Space Force as we seek to expand our aptitude and ability to perform in this challenging domain,” said Gen. Chance Saltzman, the Space Force’s chief of space operations.

Keep reading