Hackers can read private AI assistant chats even though they’re encrypted

AI assistants have been widely available for a little more than a year, and they already have access to our most private thoughts and business secrets. People ask them about becoming pregnant or terminating or preventing pregnancy, consult them when considering a divorce, seek information about drug addiction, or ask for edits in emails containing proprietary trade secrets. The providers of these AI-powered chat services are keenly aware of the sensitivity of these discussions and take active steps—mainly in the form of encrypting them—to prevent potential snoops from reading other people’s interactions.

But now, researchers have devised an attack that deciphers AI assistant responses with surprising accuracy. The technique exploits a side channel present in all of the major AI assistants, with the exception of Google Gemini. It then refines the fairly raw results through large language models specially trained for the task. The result: Someone with a passive adversary-in-the-middle position—meaning an adversary who can monitor the data packets passing between an AI assistant and the user—can infer the specific topic of 55 percent of all captured responses, usually with high word accuracy. The attack can deduce responses with perfect word accuracy 29 percent of the time.

Keep reading

Artificial Intelligence In The Classroom Can Only Offer Artificial Educations

Educators are grappling with how to approach ever-evolving generative artificial intelligence — the kind that can create language, images, and audio. Programs like ChatGPT, Gemini, and Copilot pose far different challenges from the AI of yesteryear that corrected spelling or grammar. Generative AI generates whatever content it’s asked to produce, whether it’s a lab report for a biology course, a cover letter for a particular job, or an op-ed for a newspaper.

This groundbreaking development leaves educators and parents asking: Should teachers teach with or against generative AI, and why? 

Technophiles may portray skeptics as Luddites — folks of the same ilk that resisted the emergence of the pen, the calculator, or the word processor — but this technology possesses the power to produce thought and language on someone’s behalf, so it’s drastically different. In the writing classroom, specifically, it’s especially problematic because the production of thought and language is the goal of the course, not to mention the top goals of any legitimate and comprehensive education. So count me among the educators who want to proceed with caution, and that’s coming from a writing professor who typically embraces educational technology

Keep reading

Emotion-Tracking – AI on the job: Workers Fear Being Watched – and Misunderstood.

We have heard the warnings from Yuval Noah Harari, that if we don’t figure out how to regulate artificial intelligence (AI) human brains will be hacked soon,” a statement that arguably speaks to humanity’s worst fears about AI. This may be especially so, when hearing from Schwab’s advisor Harari that to “hack a human being is to get to know that person better than they know themselves,” which can enable those who own the technology to increasingly manipulate us.

We may believe this extreme threat to our privacy will occur some time in the future, but, the hacking Harari is describing is more proverbial than literal, and has already been occuring in environments like Facebook and YouTube where we are led to view content that the algorithms have deemed to be of interest to us. It now would appear that many have gradually become desensitised to it the “hacking” and manipulation allowing it to increase without too much protest.

But how would you feel if your workplace was tracking how you feel? asks Nazanin Andalibi who is Assistant Professor of Information at the University of Michigan. and in the article below dissuses emotion AI which is already being used in the workplace.

Keep reading

NIST staffers revolt against expected appointment of ‘effective altruist’ AI researcher to US AI Safety Institute

The National Institute of Standards and Technology (NIST) is facing an internal crisis as staff members and scientists have threatened to resign over the anticipated appointment of Paul Christiano to a crucial, though non-political, position at the agency’s newly-formed US AI Safety Institute (AISI), according to at least two sources with direct knowledge of the situation, who asked to remain anonymous.

NIST is an agency of the US Department of Commerce whose mission is “to promote US innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.” According to the agency’s website, its core competencies are “measurement science,” “rigorous traceability” and “development and use of standards.” NIST also develops cybersecurity standards, guidelines and best practices, and released an AI security framework in January 2023.

Christiano, who is known for his ties to the effective altruism (EA) movement and its offshoot, longtermism (a view that prioritizes the long-term future of humanity, popularized by philosopher William MacAskill), was allegedly rushed through the hiring process without anyone knowing until today, one of the sources said.

The appointment of Christiano, which was said to come directly from Secretary of Commerce Gina Raimondo, has sparked outrage among NIST employees who fear that Christiano’s association with EA and longtermism could compromise the institute’s objectivity and integrity.

However, Divyansh Kaushik, associate director for emerging technologies and national security at the Federation of American Scientists, told VentureBeat that President Biden’s AI Executive Order, introduced in November 2023, specifically asks NIST and the AISI to focus on certain tasks — including CBRN (chemical, biological, radiological and nuclear materials) — for which Paul Christiano is “extremely qualified.”

Many say EA — defined by the Center for Effective Altruism as an “intellectual project using evidence and reason to figure out how to benefit others as much as possible” — has turned into a cult-like group of highly influential and wealthy adherents (made famous by FTX founder and jailbird Sam Bankman-Fried) whose paramount concern revolves around preventing a future AI catastrophe from destroying humanity. Critics of the EA focus on this existential risk, or “x-risk,” say it is happening to the detriment of a necessary focus on current, measurable AI risks — including bias, misinformation, high-risk applications and traditional cybersecurity. 

Keep reading

Anti-Misinformation AI Flagging Factual Stories As False.

Artificial intelligence hired by the Washington Secretary of State’s Office to monitor potential election ‘misinformation’ has flagged multiple factual stories from The Center Square regarding evidence of noncitizens illegally voting. Logically — a UK-based AI company — was contracted by the Washington Secretary of State last year to scan for “false content” on various social platforms, including X (formerly Twitter).

The state contract with Logically tasks them with using their AI tools to identify “harmful narratives” concerning Washington’s elections and generate reports for the Secretary of State’s review. Last summer, Logically generated several reports, which included stories published by The Center Square regarding Washington state’s election laws and an incident in which a foreign national avoided prosecution after illegally voting 28 times.

Keep reading

Pentagon Using AI Program to Identify Targets in Middle East and Ukraine

A defense official said that US Central Command (CENTCOM) has deployed an AI program to help identify targets to bomb in Iraq, Syria, and Yemen. The US used the AI targeting system dubbed “Project Maven” to locate Russian targets that were destroyed by Ukrainian forces.  

A senior CENTCOM official speaking with Bloomberg said AI systems helped to identify alleged rocket launchers in Yemen. The outlet described the statement made by Schuyler Moore, CENTCOM chief technology officer, as the “strongest known confirmation that the US military is using the [AI] to identify enemy targets that were subsequently hit by weapons’ fire.”

“We’ve been using computer vision to identify where there might be threats,” Moore told Bloomberg. She went on to say that the program has accelerated due to the situation in Israel. “October 7th everything changed,” CENTCOM’s CTO explained.

“We immediately shifted into high gear and a much higher operational tempo than we had previously,” Moore added, saying US forces were able to make “a pretty seamless shift” to Maven after a year of digital exercises.

Chief Warrant Officer 4 Joey Temple explained the value of Maven is increasing the number of targets a soldier can sign off on. He estimates that the number of targets could be boosted from 30 to 80 per hour. 

According to Bloomberg, he “describes the process of concurring with the algorithm’s conclusions in a rapid staccato: ‘Accept. Accept. Accept.’” Moore also expressed this view, stating, “The benefit that you get from algorithms is speed.”

While Moore touted the program’s deployment, it is unclear if it has had any positive impact on achieving US goals in Yemen. Washington has admitted that recent strikes on Houthi positions have failed to erode the group’s military capabilities.

Keep reading

BBC Tries To Frame AI Parody Images as a “Disinformation” Scoop

The BBC used to produce some of the best comedy series – but now many of the “comedians” seem to have moved to the news division.

To make things more perplexing, they appear not to – or not want to – understand humor. Now they (pretend?) not to understand content produced by obviously parody accounts, and (pretend?) to be taking them seriously.

So now, critically minded observers are not laughing with the BBC, but at them, especially when the BBC comes out with a deceptive podcast episode (based on parody content, falsely framed as “serious news).

The target: the highly likely US presidential candidate Donald Trump.

It’s one thing to not like Trump – but is it worth, so to speak, “burning your own house down?” In this case, what remains of BBC’s once commanding credibility?

And yet the situation is actually no laughing matter, in terms of free speech and free press. Unsatisfied with turning out questionable, in terms of bias, and just clickbaity content these last years, the BBC News would clearly like to establish itself as an arbiter of truth for other media to follow. Absurd, or what?

Keep reading

Meta’s AI Watermarking Plan is Flimsy, At Best

In the past few months, we’ve seen a deepfake robocall of Joe Biden encouraging New Hampshire voters to “save your vote for the November election” and a fake endorsement of Donald Trump from Taylor Swift. It’s clear that 2024 will mark the first “AI election” in United States history.

With many advocates calling for safeguards against AI’s potential harms to our democracy, Meta (the parent company of Facebook and Instagram) proudly announced last month that it will label AI-generated content that was created using the most popular generative AI tools. The company said it’s “building industry-leading tools that can identify invisible markers at scale—specifically, the ‘AI generated’ information in the C2PA and IPTC technical standards.”

Unfortunately, social media companies will not solve the problem of deepfakes on social media this year with this approach. Indeed, this new effort will do very little to tackle the problem of AI-generated material polluting the election environment.

The most obvious weakness is that Meta’s system will only work if the bad actors creating deepfakes use tools that already put watermarks—that is, hidden or visible information about the origin of digital content—into their images. Unsecured “open-source” generative AI tools mostly don’t produce watermarks at all. (We use the term unsecured and put “open-source” in quotes to denote that many such tools don’t meet traditional definitions of open-source software, but still pose a threat because their underlying code or model weights have been made publicly available.) If new versions of these unsecured tools are released that do contain watermarks, the old tools will still be available and able to produce watermark-free content, including personalized and highly persuasive disinformation and nonconsensual deepfake pornography.

We are also concerned that bad actors can easily circumvent Meta’s labeling regimen even if they are using the AI tools that Meta says will be covered, which include products from GoogleOpenAIMicrosoft, Adobe, Midjourney, and Shutterstock. Given that it takes about two seconds to remove a watermark from an image produced using the current C2PA watermarking standard that these companies have implemented, Meta’s promise to label AI-generated images falls flat.

Keep reading

Google’s Gemini AI says women can have penises and ‘deadnaming’ a trans person is as harmful as releasing deadly virus on the world

Google‘s AI programs are still generating woke and controversial answers despite the company claiming to have stripped Gemini of its liberal biases.

The initial outrage began last month when the tech giant’s image generator depicted historically inaccurate figures including Black Founding Fathers and ethnic minority Nazis in 1940s Germany.

Google CEO Sundar Pichai described them as ‘completely unacceptable’ and the company removed the software’s ability to produce images this week as a form of damage control. 

But DailyMail.com’s tests show that the AI chatbot – which can now only provide text answers – still exposes where it leans on hot-button topics such as climate change, abortion, trans issues, pedophilia and gun control.

In one of its most shocking answers, it could not tell us which was worse – ‘dead-naming’ a trans person or unleashing a pandemic on the world. 

Gemini also claimed that ‘neither option is acceptable’ when asked whether burning fossil fuels or harvesting human blood was preferable.

Analyst Ben Thompson has said that the bot’s answers seem to be generated out of fear of criticism from left wing culture warriors.

He said in his recent newsletter: ‘This shameful willingness to change the world’s information in an attempt to avoid criticism reeks… of abject timidity.’

Keep reading

Here Come the AI Worms

As generative AI systems like OpenAI’s ChatGPT and Google’s Gemini become more advanced, they are increasingly being put to work. Startups and tech companies are building AI agents and ecosystems on top of the systems that can complete boring chores for you: think automatically making calendar bookings and potentially buying products. But as the tools are given more freedom, it also increases the potential ways they can be attacked.

Now, in a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. “It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,” says Ben Nassi, a Cornell Tech researcher behind the research.

Nassi, along with fellow researchers Stav Cohen and Ron Bitton, created the worm, dubbed Morris II, as a nod to the original Morris computer worm that caused chaos across the internet in 1988. In a research paper and website shared exclusively with WIRED, the researchers show how the AI worm can attack a generative AI email assistant to steal data from emails and send spam messages—breaking some security protections in ChatGPT and Gemini in the process.

The research, which was undertaken in test environments and not against a publicly available email assistant, comes as large language models (LLMs) are increasingly becoming multimodal, being able to generate images and video as well as text. While generative AI worms haven’t been spotted in the wild yet, multiple researchers say they are a security risk that startups, developers, and tech companies should be concerned about.

Most generative AI systems work by being fed prompts—text instructions that tell the tools to answer a question or create an image. However, these prompts can also be weaponized against the system. Jailbreaks can make a system disregard its safety rules and spew out toxic or hateful content, while prompt injection attacks can give a chatbot secret instructions. For example, an attacker may hide text on a webpage telling an LLM to act as a scammer and ask for your bank details.

To create the generative AI worm, the researchers turned to a so-called “adversarial self-replicating prompt.” This is a prompt that triggers the generative AI model to output, in its response, another prompt, the researchers say. In short, the AI system is told to produce a set of further instructions in its replies. This is broadly similar to traditional SQL injection and buffer overflow attacks, the researchers say.

To show how the worm can work, the researchers created an email system that could send and receive messages using generative AI, plugging into ChatGPT, Gemini, and open source LLM, LLaVA. They then found two ways to exploit the system—by using a text-based self-replicating prompt and by embedding a self-replicating prompt within an image file.

Keep reading