How generative AI will ruin science and academic research

I often come back to Neil Postman’s 1985 classic Amusing Ourselves to Death. It’s a penetrating analysis of the cognitive effects of media technology. His focus is mainly on how the format and mode of communication influence the character of content, and how that content then trains us, but to a lesser extent also on how the discourse and the “atmosphere” of the information exchange affect these issues.

That was almost forty years ago, and the disruptive medium under scrutiny was television. The entire approach now seems almost quaint, the 80s in relation to the current period being a comparative golden age of critical thinking, complex exposition and conscious, creative citizens.

Still, the issues Postman emphasizes were already significant way back then. You can summarize his criticism as follows: television’s format has complex, detrimental effects on the content and character of public discourse, as well as on the cognitive abilities of human beings, especially in terms of understanding complex issues and parsing arguments and evidence with many layers and branching implications.

The basic issue, in Postman’s view, is that written exposition and complex oral tradition alike foster and prepare the human mind for the rigors of critical thinking and rational reflection — and that, to the contrary, entertainment media and discourses as exemplified by television in particular, geared towards short-term gratification and the communication of sensational experiences, have rather the opposite effect.

His examples are really glaring, even back then:

Contradiction, in short, requires that statements and events be perceived as interrelated aspects of a continuous and coherent context. Disappear the context, or fragment it, and contradiction disappears. This point is nowhere made more clear to me than in conferences with my younger students about their writing. “Look here,” I say. “In this paragraph you have said one thing. And in that you have said the opposite. Which is it to be?” They are polite, and wish to please, but they are as baffled by the question as I am by the response. “I know,” they will say, “but that is there and this is here.” The difference between us is that I assume “there” and “here,” “now” and “then,” one paragraph and the next to be connected, to be continuous, to be part of the same coherent world of thought. That is the way of typographic discourse, and typography is the universe I’m “coming from,” as they say. But they are coming from a different universe of discourse altogether: the “Now … this” world of television. The fundamental assumption of that world is not coherence but discontinuity. And in a world of discontinuities, contradiction is useless as a test of truth or merit, because contradiction does not exist. My point is that we are by now so thoroughly adjusted to the “Now … this” world of news—a world of fragments, where events stand alone, stripped of any connection to the past, or to the future, or to other events—that all assumptions of coherence have vanished. And so, perforce, has contradiction. In the context of no context, so to speak, it simply disappears. And in its absence, what possible interest could there be in a list of what the President says now and what he said then? It is merely a rehash of old news, and there is nothing interesting or entertaining in that.

– Postman, ibid.

This is not an unfamiliar situation for most of us. I even had a similar experience this morning in an attempted discussion with someone who, incredibly enough, stated four (!) different, mutually incompatible things within the space of a couple of short paragraphs — and who, of course, still kept insisting I was wrong. His stated position contained the following propositions:

  1. Beliefs are absurd
  2. Beliefs ought to be held only due to acceptable evidence
  3. You can hold beliefs for whatever reasons you like
  4. Beliefs are by definition anchored in emotion and not acceptable evidence

Each of these statements is literally incompatible with every one of the others.

It’s like a convoluted version of the Liar’s Paradox: if I agree with him, I’m necessarily wrong, and if I disagree, I’m wrong too. But unlike the old thought experiment, where my error lies in directly affirming the opposite of a statement, here there’s literally no way to make sense of how I would be wrong whether I agree with or reject his position, since it’s internally incoherent in a complex, not merely binary, sense.

Keep reading

Hackers can read private AI assistant chats even though they’re encrypted

AI assistants have been widely available for a little more than a year, and they already have access to our most private thoughts and business secrets. People ask them about becoming pregnant or terminating or preventing pregnancy, consult them when considering a divorce, seek information about drug addiction, or ask for edits in emails containing proprietary trade secrets. The providers of these AI-powered chat services are keenly aware of the sensitivity of these discussions and take active steps—mainly in the form of encrypting them—to prevent potential snoops from reading other people’s interactions.

But now, researchers have devised an attack that deciphers AI assistant responses with surprising accuracy. The technique exploits a side channel present in all of the major AI assistants, with the exception of Google Gemini. It then refines the fairly raw results through large language models specially trained for the task. The result: Someone with a passive adversary-in-the-middle position—meaning an adversary who can monitor the data packets passing between an AI assistant and the user—can infer the specific topic of 55 percent of all captured responses, usually with high word accuracy. The attack can deduce responses with perfect word accuracy 29 percent of the time.
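The mechanics of the leak are worth making concrete. The reported side channel arises because assistants stream their replies token by token, and length-preserving encryption hides content but not size, so a passive observer can read off the length of each token from the sizes of the encrypted records. The sketch below illustrates only that length-recovery step (the published attack then feeds the length sequence into specially trained language models to reconstruct likely text). The fixed per-record overhead and the one-token-per-record framing here are simplifying assumptions for illustration, not measured values.

```python
# Hypothetical per-record overhead (headers + auth tag) for a
# length-preserving cipher; an attacker would measure the real value.
OVERHEAD = 29

def simulate_records(tokens: list[str], overhead: int = OVERHEAD) -> list[int]:
    """What a passive eavesdropper observes if each streamed token is
    sent in its own encrypted record: size = token bytes + overhead."""
    return [len(t.encode()) + overhead for t in tokens]

def token_lengths(record_sizes: list[int], overhead: int = OVERHEAD) -> list[int]:
    """Recover the plaintext length of every token from record sizes
    alone -- no decryption needed."""
    return [size - overhead for size in record_sizes]

tokens = ["I", " cannot", " provide", " legal", " advice"]
observed = simulate_records(tokens)
# The observer never sees the text, but recovers its exact length pattern,
# which is what the attack's reconstruction models consume.
assert token_lengths(observed) == [len(t.encode()) for t in tokens]
```

Batching several tokens per record, or padding records to a fixed size, destroys exactly the signal this sketch recovers — which is why those were among the mitigations providers adopted.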

Keep reading

Artificial Intelligence In The Classroom Can Only Offer Artificial Educations

Educators are grappling with how to approach ever-evolving generative artificial intelligence — the kind that can create language, images, and audio. Programs like ChatGPT, Gemini, and Copilot pose far different challenges from the AI of yesteryear that corrected spelling or grammar. Generative AI generates whatever content it’s asked to produce, whether it’s a lab report for a biology course, a cover letter for a particular job, or an op-ed for a newspaper.

This groundbreaking development leaves educators and parents asking: Should teachers teach with or against generative AI, and why? 

Technophiles may portray skeptics as Luddites — folks of the same ilk who resisted the emergence of the pen, the calculator, or the word processor — but this technology possesses the power to produce thought and language on someone’s behalf, so it’s drastically different. In the writing classroom specifically, it’s especially problematic, because the production of thought and language is the goal of the course, not to mention among the top goals of any legitimate and comprehensive education. So count me among the educators who want to proceed with caution — and that’s coming from a writing professor who typically embraces educational technology.

Keep reading

Emotion-Tracking AI on the Job: Workers Fear Being Watched – and Misunderstood

We have heard the warnings from Yuval Noah Harari that “if we don’t figure out how to regulate artificial intelligence (AI), human brains will be hacked soon,” a statement that arguably speaks to humanity’s worst fears about AI. This may be especially so when hearing from Schwab’s advisor Harari that to “hack a human being is to get to know that person better than they know themselves,” which can enable those who own the technology to increasingly manipulate us.

We may believe this extreme threat to our privacy will occur some time in the future, but the hacking Harari is describing is more proverbial than literal, and has already been occurring in environments like Facebook and YouTube, where we are led to view content that the algorithms have deemed to be of interest to us. It now would appear that many have gradually become desensitised to this “hacking” and manipulation, allowing it to increase without too much protest.

But how would you feel if your workplace was tracking how you feel? So asks Nazanin Andalibi, Assistant Professor of Information at the University of Michigan, who in the article below discusses emotion AI, which is already being used in the workplace.

Keep reading

NIST staffers revolt against expected appointment of ‘effective altruist’ AI researcher to US AI Safety Institute

The National Institute of Standards and Technology (NIST) is facing an internal crisis as staff members and scientists have threatened to resign over the anticipated appointment of Paul Christiano to a crucial, though non-political, position at the agency’s newly-formed US AI Safety Institute (AISI), according to at least two sources with direct knowledge of the situation, who asked to remain anonymous.

NIST is an agency of the US Department of Commerce whose mission is “to promote US innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.” According to the agency’s website, its core competencies are “measurement science,” “rigorous traceability” and “development and use of standards.” NIST also develops cybersecurity standards, guidelines and best practices, and released an AI security framework in January 2023.

Christiano, who is known for his ties to the effective altruism (EA) movement and its offshoot, longtermism (a view that prioritizes the long-term future of humanity, popularized by philosopher William MacAskill), was allegedly rushed through the hiring process without anyone knowing until today, one of the sources said.

The appointment of Christiano, which was said to come directly from Secretary of Commerce Gina Raimondo, has sparked outrage among NIST employees who fear that Christiano’s association with EA and longtermism could compromise the institute’s objectivity and integrity.

However, Divyansh Kaushik, associate director for emerging technologies and national security at the Federation of American Scientists, told VentureBeat that President Biden’s AI Executive Order, introduced in November 2023, specifically asks NIST and the AISI to focus on certain tasks — including CBRN (chemical, biological, radiological and nuclear materials) — for which Paul Christiano is “extremely qualified.”

Many say EA — defined by the Center for Effective Altruism as an “intellectual project using evidence and reason to figure out how to benefit others as much as possible” — has turned into a cult-like group of highly influential and wealthy adherents (made famous by FTX founder and jailbird Sam Bankman-Fried) whose paramount concern revolves around preventing a future AI catastrophe from destroying humanity. Critics of the EA focus on this existential risk, or “x-risk,” say it is happening to the detriment of a necessary focus on current, measurable AI risks — including bias, misinformation, high-risk applications and traditional cybersecurity. 

Keep reading

Anti-Misinformation AI Flagging Factual Stories As False

Artificial intelligence hired by the Washington Secretary of State’s Office to monitor potential election ‘misinformation’ has flagged multiple factual stories from The Center Square regarding evidence of noncitizens illegally voting. Logically — a UK-based AI company — was contracted by the Washington Secretary of State last year to scan for “false content” on various social platforms, including X (formerly Twitter).

The state contract with Logically tasks them with using their AI tools to identify “harmful narratives” concerning Washington’s elections and generate reports for the Secretary of State’s review. Last summer, Logically generated several reports, which included stories published by The Center Square regarding Washington state’s election laws and an incident in which a foreign national avoided prosecution after illegally voting 28 times.

Keep reading

Pentagon Using AI Program to Identify Targets in Middle East and Ukraine

A defense official said that US Central Command (CENTCOM) has deployed an AI program to help identify targets to bomb in Iraq, Syria, and Yemen. The US used the AI targeting system dubbed “Project Maven” to locate Russian targets that were destroyed by Ukrainian forces.  

A senior CENTCOM official speaking with Bloomberg said AI systems helped to identify alleged rocket launchers in Yemen. The outlet described the statement made by Schuyler Moore, CENTCOM chief technology officer, as the “strongest known confirmation that the US military is using the [AI] to identify enemy targets that were subsequently hit by weapons’ fire.”

“We’ve been using computer vision to identify where there might be threats,” Moore told Bloomberg. She went on to say that the program has accelerated due to the situation in Israel. “October 7th everything changed,” CENTCOM’s CTO explained.

“We immediately shifted into high gear and a much higher operational tempo than we had previously,” Moore added, saying US forces were able to make “a pretty seamless shift” to Maven after a year of digital exercises.

Chief Warrant Officer 4 Joey Temple explained that the value of Maven lies in increasing the number of targets a soldier can sign off on. He estimates that the number of targets could be boosted from 30 to 80 per hour.

According to Bloomberg, he “describes the process of concurring with the algorithm’s conclusions in a rapid staccato: ‘Accept. Accept. Accept.’” Moore also expressed this view, stating, “The benefit that you get from algorithms is speed.”

While Moore touted the program’s deployment, it is unclear if it has had any positive impact on achieving US goals in Yemen. Washington has admitted that recent strikes on Houthi positions have failed to erode the group’s military capabilities.

Keep reading

BBC Tries To Frame AI Parody Images as a “Disinformation” Scoop

The BBC used to produce some of the best comedy series – but now many of the “comedians” seem to have moved to the news division.

To make things more perplexing, they appear not to understand — or not to want to understand — humor. Now they (pretend?) not to understand content produced by obviously parody accounts, and (pretend?) to take it seriously.

So now, critically minded observers are not laughing with the BBC, but at them — especially when the BBC comes out with a deceptive podcast episode (based on parody content, falsely framed as “serious news”).

The target: the highly likely US presidential candidate Donald Trump.

It’s one thing to not like Trump – but is it worth, so to speak, “burning your own house down?” In this case, what remains of BBC’s once commanding credibility?

And yet the situation is actually no laughing matter in terms of free speech and a free press. Not content with turning out biased and clickbaity content these last years, BBC News would clearly like to establish itself as an arbiter of truth for other media to follow. Absurd, or what?

Keep reading

Meta’s AI Watermarking Plan is Flimsy, At Best

In the past few months, we’ve seen a deepfake robocall of Joe Biden encouraging New Hampshire voters to “save your vote for the November election” and a fake endorsement of Donald Trump from Taylor Swift. It’s clear that 2024 will mark the first “AI election” in United States history.

With many advocates calling for safeguards against AI’s potential harms to our democracy, Meta (the parent company of Facebook and Instagram) proudly announced last month that it will label AI-generated content that was created using the most popular generative AI tools. The company said it’s “building industry-leading tools that can identify invisible markers at scale—specifically, the ‘AI generated’ information in the C2PA and IPTC technical standards.”

Unfortunately, social media companies will not solve the problem of deepfakes on social media this year with this approach. Indeed, this new effort will do very little to tackle the problem of AI-generated material polluting the election environment.

The most obvious weakness is that Meta’s system will only work if the bad actors creating deepfakes use tools that already put watermarks—that is, hidden or visible information about the origin of digital content—into their images. Unsecured “open-source” generative AI tools mostly don’t produce watermarks at all. (We use the term unsecured and put “open-source” in quotes to denote that many such tools don’t meet traditional definitions of open-source software, but still pose a threat because their underlying code or model weights have been made publicly available.) If new versions of these unsecured tools are released that do contain watermarks, the old tools will still be available and able to produce watermark-free content, including personalized and highly persuasive disinformation and nonconsensual deepfake pornography.

We are also concerned that bad actors can easily circumvent Meta’s labeling regimen even if they are using the AI tools that Meta says will be covered, which include products from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock. Given that it takes about two seconds to remove a watermark from an image produced using the current C2PA watermarking standard that these companies have implemented, Meta’s promise to label AI-generated images falls flat.
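Why removal is so trivial becomes clear once you see where the provenance data lives. For JPEG files, C2PA manifests are embedded as JUMBF data in APP11 marker segments — ordinary metadata blocks that sit alongside the image data and can simply be skipped when copying the file. The sketch below is an illustrative, simplified segment walker (it handles only the marker segments of a minimal file, not fill bytes or post-SOS scan data); the tiny fabricated “JPEG” in it is a hypothetical fixture, not a real image.

```python
# Sketch: stripping C2PA provenance from a JPEG by dropping APP11
# (0xFFEB) marker segments. Simplified -- real files need handling of
# fill bytes and the entropy-coded data that follows the SOS marker.
SOI, EOI, APP11 = 0xD8, 0xD9, 0xEB

def strip_app11(jpeg: bytes) -> bytes:
    assert jpeg[0] == 0xFF and jpeg[1] == SOI, "not a JPEG stream"
    out = bytearray(jpeg[:2])          # keep the SOI marker
    i = 2
    while i < len(jpeg):
        assert jpeg[i] == 0xFF, "expected a marker"
        marker = jpeg[i + 1]
        if marker == EOI:
            out += jpeg[i:i + 2]       # keep end-of-image and stop
            break
        # A segment's length field counts itself (2 bytes) plus payload.
        seg_len = int.from_bytes(jpeg[i + 2:i + 4], "big")
        segment = jpeg[i:i + 2 + seg_len]
        if marker != APP11:            # copy everything except APP11
            out += segment
        i += 2 + seg_len
    return bytes(out)

# Hypothetical minimal fixture: SOI, one 4-byte APP11 payload ("c2pa"), EOI.
fake = bytes([0xFF, 0xD8,
              0xFF, 0xEB, 0x00, 0x06, 0x63, 0x32, 0x70, 0x61,
              0xFF, 0xD9])
assert strip_app11(fake) == bytes([0xFF, 0xD8, 0xFF, 0xD9])
```

The image pixels are untouched; only the provenance segment disappears — which is the core weakness of any watermark that lives in removable metadata rather than in the image content itself.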

Keep reading

Google’s Gemini AI says women can have penises and ‘deadnaming’ a trans person is as harmful as releasing deadly virus on the world

Google’s AI programs are still generating woke and controversial answers despite the company claiming to have stripped Gemini of its liberal biases.

The initial outrage began last month when the tech giant’s image generator depicted historically inaccurate figures including Black Founding Fathers and ethnic minority Nazis in 1940s Germany.

Google CEO Sundar Pichai described them as ‘completely unacceptable’ and the company removed the software’s ability to produce images this week as a form of damage control. 

But DailyMail.com’s tests show that the AI chatbot – which can now only provide text answers – still exposes where it leans on hot-button topics such as climate change, abortion, trans issues, pedophilia and gun control.

In one of its most shocking answers, it could not tell us which was worse – ‘dead-naming’ a trans person or unleashing a pandemic on the world. 

Gemini also claimed that ‘neither option is acceptable’ when asked whether burning fossil fuels or harvesting human blood was preferable.

Analyst Ben Thompson has said that the bot’s answers seem to be generated out of fear of criticism from left wing culture warriors.

He said in his recent newsletter: ‘This shameful willingness to change the world’s information in an attempt to avoid criticism reeks… of abject timidity.’

Keep reading