“Cognitive surrender” leads AI users to abandon logical thinking, research finds

When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.

Recent research goes a long way toward forming a new psychological framework for that second group, which regularly engages in “cognitive surrender” to AI’s seemingly authoritative answers. That research also provides experimental evidence on when and why people are willing to outsource their critical thinking to AI, and how factors like time pressure and external incentives can affect that decision.

Just ask the answer machine

In “Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender,” researchers from the University of Pennsylvania sought to build on existing scholarship that outlines two broad categories of decision-making: one shaped by “fast, intuitive, and affective processing” (System 1); and one shaped by “slow, deliberative, and analytical reasoning” (System 2). The advent of AI systems, the researchers argue, has created a third category of “artificial cognition,” in which decisions are driven by “external, automated, data-driven reasoning originating from algorithmic systems rather than the human mind.”

In the past, people have often used tools from calculators to GPS systems for a kind of task-specific “cognitive offloading,” strategically delegating some jobs to reliable automated algorithms while using their own internal reasoning to oversee and evaluate the results. But the researchers argue that AI systems have given rise to a categorically different form of “cognitive surrender” in which users provide “minimal internal engagement” and accept an AI’s reasoning wholesale without oversight or verification. This “uncritical abdication of reasoning itself” is particularly common when an LLM’s output is “delivered fluently, confidently, or with minimal friction,” they point out.

To measure the prevalence and effect of this kind of cognitive surrender to AI, the researchers performed a number of studies based on Cognitive Reflection Tests. These tests are designed to elicit incorrect answers from participants who default to “intuitive” (System 1) thought processes, while being relatively simple to answer for those who use more “deliberative” (System 2) thought processes.
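To see how such a test works, consider the best-known Cognitive Reflection Test item, the “bat and ball” question from Shane Frederick’s original 2005 test battery (an illustrative classic, not necessarily one of the items used in this particular study). A quick arithmetic check shows why the intuitive answer fails:

```python
# Classic CRT item (Frederick, 2005): "A bat and a ball cost $1.10 in
# total. The bat costs $1.00 more than the ball. How much does the ball
# cost?" Illustrative example; not taken from the study discussed above.

# The intuitive (System 1) answer is $0.10 -- but checking it shows the
# total would then be $1.20, not $1.10.
intuitive_ball = 0.10
intuitive_bat = intuitive_ball + 1.00
assert abs((intuitive_bat + intuitive_ball) - 1.10) > 1e-9  # fails the total

# The deliberative (System 2) answer solves: ball + (ball + 1.00) = 1.10
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
assert abs((bat + ball) - 1.10) < 1e-9  # satisfies the total
```

Participants who answer “10 cents” are defaulting to System 1; the small deliberative step of verifying the total is exactly what System 2 reasoning, and the oversight the researchers describe, consists of.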


Author: HP McLovincraft

