How A Techno-Optimist Became A Grave Skeptic

Before Covid, I would have described myself as a technological optimist. New technologies almost always arrive amid exaggerated fears. Railways were supposed to cause mental breakdowns, bicycles were thought to make women infertile or insane, and early electricity was blamed for everything from moral decay to physical collapse. Over time, these anxieties faded, societies adapted, and living standards rose. The pattern was familiar enough that artificial intelligence seemed likely to follow it: disruptive, sometimes misused, but ultimately manageable.

The Covid years unsettled that confidence—not because technology failed, but because institutions did.

Across much of the world, governments and expert bodies responded to uncertainty with unprecedented social and biomedical interventions, justified by worst-case models and enforced with remarkable certainty. Competing hypotheses were marginalized rather than debated. Emergency measures hardened into long-term policy. When evidence shifted, admissions of error were rare, and accountability rarer still. The experience exposed a deeper problem than any single policy mistake: modern institutions appear poorly equipped to manage uncertainty without overreach.

That lesson now weighs heavily on debates over artificial intelligence.

The AI Risk Divide

Broadly speaking, concern about advanced AI falls into two camps. One group—associated with thinkers like Eliezer Yudkowsky and Nate Soares—argues that sufficiently advanced AI is catastrophically dangerous by default. In the deliberately stark formulation of their book, If Anyone Builds It, Everyone Dies, the problem is not bad intentions but incentives: competition ensures someone will cut corners, and once a system escapes meaningful control, intentions no longer matter.

A second camp, including figures such as Stuart Russell, Nick Bostrom, and Max Tegmark, also takes AI risk seriously but is more optimistic that alignment, careful governance, and gradual deployment can keep systems under human control.

Despite their differences, both camps converge on one conclusion: unconstrained AI development is dangerous, and some form of oversight, coordination, or restraint is necessary. Where they diverge is on feasibility and urgency. What is rarely examined, however, is whether the institutions expected to provide that restraint are themselves fit for the role.

Covid suggests reason for doubt.

Covid was not merely a public-health crisis; it was a live experiment in expert-driven governance under uncertainty. Faced with incomplete data, authorities repeatedly chose maximal interventions justified by speculative harms. Dissent was often treated as a moral failing rather than a scientific necessity. Policies were defended not through transparent cost-benefit analysis but through appeals to authority and fear of hypothetical futures.

This pattern matters because it reveals how modern institutions behave when stakes are framed as existential. Incentives shift toward decisiveness, narrative control, and moral certainty. Error correction becomes reputationally costly. Precaution stops being a tool and becomes a doctrine.
