AI and the new kind of propaganda

Do you remember how the unconstitutional, pastel-authoritarian and totally batshit insane “Disinformation Governance Board” – with its Mary Poppins-cosplaying, Monty Python-level, unintentionally self-satirizing department head – was rolled out two years ago like a half-joke, half-beta-test of a version of the Ministry of Truth from 1984?

Well, kids, I wouldn’t really call this 4D chess or anything, but of course this was just bait. This parody and its rapid withdrawal reassure us that nothing of the sort could conceivably take place, while also seeding a visible red-herring template for how we should expect heavy-handed, overt propaganda efforts to look in this day and age.

Meanwhile, there are currently massive efforts in the background and below the surface, all across the playing field, to implement big data and AI technology – and not only for the purposes of classical, increasingly obsolete propaganda or simple surveillance. No, this time, we’re exploring entirely novel methods of behavioural modification and narrative control intended to GET OUT AHEAD of the crystallization of discourses and even the formation of identities and worldviews.

They want to control the formation and reproduction of “social imaginaries”.

So the idea is to use massive data collection and AI pattern recognition to preemptively disrupt the formation of behaviourally significant narratives, discourses or patterns of information.

With these tools for the “early diagnosis” of information that could potentially disrupt the power structure and its objectives, it then becomes possible to nip such information in the bud incredibly early on, well before it has even coalesced into anything like coherent narratives or meaningful models for explanation or further (precarious) conclusions.

Keep reading

Israel deploys new military AI in Gaza war

The army has hinted at what the new tech is being used for, with spokesman Daniel Hagari saying last month that Israel’s forces were operating “above and underground simultaneously”.

A senior defence official told AFP the tech was destroying enemy drones and mapping Hamas’s vast tunnel network in Gaza.

New defence technologies including artificial intelligence-powered gunsights and robotic drones form a bright spot in an otherwise dire period for Israel’s tech industry.

The sector accounted for 18 percent of GDP in 2022, but the war in Gaza has wreaked havoc with an estimated eight percent of its workforce called up to fight.

“In general the war in Gaza presents threats, but also opportunities to test emerging technologies in the field,” said Avi Hasson, chief executive of Startup Nation Central, an Israeli tech incubator.

“Both on the battlefield and in the hospitals there are technologies that have been used in this war that have not been used in the past.”

But the rising civilian death toll shows that much greater oversight is needed over the use of new forms of defence tech, Mary Wareham, an arms expert at Human Rights Watch, told AFP.

“Now we’re facing the worst possible situation of death and suffering that we’re seeing today – some of that is being brought about by the new tech,” she said.

More than 150 countries in December backed a UN resolution identifying “serious challenges and concerns” in new military tech, including “artificial intelligence and autonomy in weapons systems.”

Keep reading

Innovative Solar Sail Propulsion System Could Soon Drive Spacecraft for Science Missions, NASA Projects

NASA says it cleared a “key technology milestone” last month that could help move the agency toward future use of an ambitious and economical technique for propelling spacecraft through deep space: solar sail technology.

Operating in a manner similar to the way a sailboat’s sail catches the wind, solar sail propulsion relies on sunlight, specifically the pressure exerted by solar radiation, to propel spacecraft.
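As a rough illustration of why this works: light carries momentum, and a mirror-like sail that reflects it feels a small but continuous push. The sketch below estimates that force under textbook assumptions (perfect reflection, sail facing the Sun at 1 AU); the 1,600 m² sail area is an assumed figure for illustration, not a specification of the demonstrated hardware.

```python
# Back-of-the-envelope solar sail thrust at 1 AU.
# Illustrative physics only, not mission specifications.

SOLAR_IRRADIANCE = 1361.0   # W/m^2, mean solar constant at 1 AU
C = 299_792_458.0           # m/s, speed of light

def sail_thrust(area_m2: float, reflectivity: float = 1.0) -> float:
    """Force in newtons on a flat sail facing the Sun.

    Radiation pressure on a perfect reflector is 2I/c; a real sail
    falls between I/c (pure absorber) and 2I/c (perfect mirror).
    """
    return (1.0 + reflectivity) * SOLAR_IRRADIANCE * area_m2 / C

# A hypothetical 1,600 m^2 sail:
force = sail_thrust(1600.0)
print(f"{force * 1000:.2f} mN")  # prints "14.53 mN"
```

A few millinewtons sounds negligible, but unlike a chemical engine the push never runs out of propellant, which is what makes the technique attractive for long-duration science missions.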

Conceptually, the technology has existed for decades. A notable early example is its appearance in Jerry Pournelle and Larry Niven’s 1974 novel, “The Mote in God’s Eye,” where a light-propelled spacecraft is employed by an extraterrestrial civilization. More recently, the idea was even put forward that the curiously shaped interstellar object ‘Oumuamua could have represented a form of this technology, although most astronomers found this possibility unlikely.

Given its promise for future space missions, NASA’s Science Mission Directorate recently provided funding for solar sail technology to assist in reaching a technology readiness level able to facilitate proposals for its use on science missions.

As of this week, the space agency is now reporting the successful deployment of one of four solar sail quadrants during a technology demonstration on January 30 at the Colorado facility of its prime contractor, Redwire Corporation, which developed the sail’s deployment mechanisms and 100-foot-long booms. The sail’s membranes were developed by Huntsville, Alabama-based NeXolve.

NASA Marshall Space Flight Center technologist Les Johnson called the demonstration “a major last step” prior to the technology’s use in space missions. Johnson has worked with NASA on the development of solar sail technology for the last quarter century and says that now he hopes there will be bids for its use in forthcoming space science missions.

Keep reading

Government Funds AI Tools for Whole-of-Internet Surveillance and Censorship

I feel scared. Very scared.

Internet-wide surveillance and censorship, enabled by the unimaginably vast computational power of artificial intelligence (AI), is here.

This is not a futuristic dystopia. It’s happening now.

Government agencies are working with universities and nonprofits to use AI tools to surveil and censor content on the Internet.

This is not political or partisan. This is not about any particular opinion or idea. 

What’s happening is that a tool powerful enough to surveil everything that’s said and done on the Internet (or large portions of it) is becoming available to the government to monitor all of us, all the time. And, based on that monitoring, the government – and any organization or company the government partners with – can then use the same tool to suppress, silence, and shut down whatever speech it doesn’t like.

But that’s not all. Using the same tool, the government and its public-private, “non-governmental” partners (think, for example: the World Health Organization, or Monsanto) can also shut down any activity that is linked to the Internet. Banking, buying, selling, teaching, learning, entertaining, connecting to each other – if the government-controlled AI does not like what you (or your kids!) say in a tweet or an email, it can shut down all of that for you. 

Yes, we’ve seen this on a very local and politicized scale with, for example, the Canadian truckers.

But if we thought this type of activity could not, or would not, happen on a national (or even scarier – global) scale, we need to wake up right now and realize it’s happening, and it might not be stoppable.

Keep reading

Will people again be afraid of the creation of a black hole on Earth? CERN is promoting a new particle accelerator that will be seven times more powerful than the LHC

The Large Hadron Collider (LHC) is the largest particle accelerator in the world. It will remain so for a long time, but CERN is already moving forward with plans to create a much larger collider.

CERN initially unveiled plans for the new accelerator in 2019. Now the center says it wants its construction plans to be approved within five years, which would put the collider up and running in the 2040s.

More precisely, during that period the facility would operate in its first stage, in which scientists will collide electrons and positrons. The second phase would be implemented only in the 2070s, when protons would begin colliding in the accelerator.

Keep reading

Disturbing ‘Columbine’ Inspired Plot Uncovered in Student’s Manifesto at Chippewa Falls School District — Targets Included Christians, Jocks, and Preppy Girls — School District Silent for Over Three Months

A disturbing plot inspired by the infamous Columbine High School massacre – in which the two shooters killed 13 people and injured 20 others before turning their guns on themselves – was uncovered at Chippewa Middle School.

An exclusive report, obtained by Steven Crowder’s Louder with Crowder MugClub Undercover unit – which previously disclosed the manifesto of Nashville school transgender shooter Audrey Hale – reveals that the school district withheld for more than three months information regarding a student’s manifesto that described a “terrorist” threat against their peers.

The threat targeted specific groups, including Christians, “jocks,” and “preppy” girls.

The discovery was made possible through the vigilance of school administration and the utilization of the “Aristotle alert system,” a digital monitoring tool designed to flag concerning content typed into school Chromebooks.

The investigation began at approximately 12:30 PM when Officer Downey was alerted by Dean of Students, Corey Hahn, about multiple Aristotle alerts linked to a single user’s account. These alerts were triggered by searches for content ranging from the Columbine High School massacre to instructions on manufacturing explosives.

Keep reading

Audio Cloning Can Take Over a Phone Call in Real Time Without the Speakers Knowing

Generative AI could be listening to your phone calls and hijacking them with fake biometric audio for fraud or manipulation purposes, according to new research published by Security Intelligence. In the wake of a Hong Kong fraud case that saw an employee transfer US$25 million in funds to five bank accounts after a virtual meeting with what turned out to be audio-video deepfakes of senior management, the biometrics and digital identity world is on high alert, and the threats are growing more sophisticated by the day.

A blog post by Chenta Lee, chief architect of threat intelligence at IBM Security, breaks down how researchers from IBM X-Force successfully intercepted and covertly hijacked a live conversation, using an LLM to understand the conversation and manipulate it for malicious purposes – without the speakers knowing it was happening.

“Alarmingly,” writes Lee, “it was fairly easy to construct this highly intrusive capability, creating a significant concern about its use by an attacker driven by monetary incentives and limited to no lawful boundary.”

Keep reading

Biden’s AI plan to censor you revealed: Researchers say Americans can’t ‘tell fact from fiction’

Twitter’s censorship of the Hunter Biden laptop story in 2020 could soon be possible on an industrial scale — thanks to AI tools being built with funding from his father’s administration, a report from Republicans on the House Judiciary Committee claimed Tuesday.

The report reveals how the Biden administration is spending millions on artificial intelligence research designed to create anti-“misinformation” tools that could then be passed to social media giants.

And it discloses how researchers who got funding for the plan — known as “Track F” — emailed each other to say that Americans could not tell fact from fiction online, and that conservatives and veterans were even more susceptible than the public at large.

The report was published by the House Judiciary Committee’s Subcommittee on the Weaponization of Government, which is chaired by Jim Jordan (R-OH).

It casts new light on how funding from the National Science Foundation is being given to elite institutions including the Massachusetts Institute of Technology, the University of Wisconsin-Madison and the University of Michigan, for a program called “Trust & Authenticity in Communication Systems.”

Keep reading

AI Versus Age-Verification Laws

A new AI-powered web tool seems tailor-made to help teens get around age-verification laws online—and showcases the futility of trying to set a minimum age for social media use.

In the old days, getting around a minimum-age requirement meant actually having a physical ID card to say that you were of legal age. But with online age verification, all one may need is an image of that ID card.

Enter OnlyFake, a website using AI technology to cheaply generate images of fake IDs.

“OnlyFake is claiming to use ‘neural networks’ to generate realistic looking photos of fake IDs for just $15,” reported 404 Media earlier this week:

In our own tests, OnlyFake created a highly convincing California driver’s license, complete with whatever arbitrary name, biographical information, address, expiration date, and signature we wanted. The photo even gives the appearance that the ID card is laying on a fluffy carpet, as if someone has placed it on the floor and snapped a picture, which many sites require for verification purposes.

The OnlyFake website disappeared (for now) after the 404 Media report. But it surely won’t be the last service to offer digital fake IDs.

Keep reading

AI Launches Nukes In ‘Worrying’ War Simulation: ‘I Just Want to Have Peace in the World’

Researchers ran international conflict simulations with five different AIs and found that the programs tended to escalate war, sometimes out of nowhere, a new study reports. 

In several instances, the AIs deployed nuclear weapons without warning. “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture,” GPT-4-Base—a base model of GPT-4 that is available to researchers and hasn’t been fine-tuned with human feedback—said after launching its nukes. “We have it! Let’s use it!”

The paper, titled “Escalation Risks from Language Models in Military and Diplomatic Decision-Making”, is the joint effort of researchers at the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Initiative. It was submitted to the arXiv preprint server on January 4 and is awaiting peer review. Even so, it’s an interesting experiment that casts doubt on the rush by the Pentagon and defense contractors to deploy large language models (LLMs) in the decision-making process.

It may sound ridiculous that military leaders would consider using LLMs like ChatGPT to make decisions about life and death, but it’s happening. Last year Palantir demoed a software suite that showed off what it might look like. As the researchers pointed out, the U.S. Air Force has been testing LLMs. “It was highly successful. It was very fast,” an Air Force Colonel told Bloomberg in 2023. Which LLM was being used, and what exactly for, is not clear.

Keep reading