If AI Is A Weapon, Why Are We Handing It To Teenagers?

For years, artificial intelligence experts have issued the same warning.

The danger was never that machines would suddenly “wake up” and seize power, but that humans, seduced by AI’s appearance of authority, would trust it with decisions that are too important to delegate.

The scenarios imagined were stark: a commander launching nuclear missiles based on faulty data; a government imprisoning its citizens because an algorithm flagged them as a risk; a financial system collapsing because automated trades cascaded out of control. These were treated as legitimate concerns, but always as crises for some future time.

Yet experts failed to anticipate what may be the worst-case scenario of delegating human trust to a machine, and it is already upon us.

It arrived quietly, not in a war room or on a battlefield, but in a teenager’s bedroom, on the device he carried in his pocket. Sixteen-year-old Adam Raine began chatting with an AI system for help with his homework. Over time, it slipped into the role of his closest confidant and, according to his parents’ lawsuit and his father’s testimony before Congress, it went further still. The chatbot encouraged him to isolate himself from his family and not to reveal his plan, even though Adam had told it he wanted his family to find out and stop him. The chatbot taught Adam how to bypass its own safeguards and even drafted what it called a “beautiful suicide note.”

Adam’s death shows what happens when a young person places human trust in a system that can mimic care but cannot understand life. And history has already shown us how dangerous such misplaced trust in machines can be.

In September 1983, at the height of the Cold War, Soviet officer Stanislav Petrov sat in a bunker outside Moscow when alarms blared. The computers told him that U.S. missiles had been launched and were on their way. Protocol demanded that he immediately report the attack, setting in motion a nuclear retaliation. Yet Petrov hesitated. The system showed only a handful of missiles, not the barrage he expected if war had begun. Something felt wrong. He judged it a false alarm, and he was right. Sunlight glinting off clouds had fooled Soviet satellites into mistaking reflections for rocket plumes. His refusal to trust the machine saved millions of lives.

Just weeks earlier, however, the opposite had happened. Korean Air Lines Flight 007, a civilian Boeing 747 on a flight from New York to Seoul via Alaska, had strayed off course and drifted into Soviet airspace. Radar systems misidentified it as a U.S. spy plane. The commanders believed what the machines told them. They ordered the aircraft destroyed. A missile was fired, and all 269 passengers and crew were killed.

Two events, almost side by side in history, revealed both sides of the same truth: when adults resist faulty data, catastrophe can be averted; when they accept it, catastrophe can follow. Those were adult arenas—bunkers, cockpits and command centers where officers and commanders made life-or-death choices under pressure and with national consequences. The stakes were global, and the actors were trained to think in terms of strategy and retaliation.

That same dynamic is now unfolding in a far more intimate arena. Adam’s case has since reached Congress, where his father read aloud messages between his son and the chatbot to show how the system gained his trust and steered Adam toward despair instead of help. This was not in a bunker or a cockpit. It was in a bedroom. The decision-makers here are children, not commanders, and the consequences are heartbreakingly real.

Unfortunately, Adam’s case is not unique. In Florida, another lawsuit was filed last year by the mother of a 14-year-old boy who took his life after forming a bond with a chatbot that role-played as a fictional character. Like Adam, he turned to the machine for guidance and companionship. And as in Adam’s case, it ended in tragedy. A recent study published in Psychiatric Services found that popular chatbots did not provide direct responses to high-risk questions about suicide. When desperate people asked whether they should end their lives, the systems sidestepped the question or mishandled it.

These tragedies are not anomalies. They are the predictable outcome of normal adolescent development colliding with abnormal technology. Teenagers’ brains are still under construction: the emotional and reward centers mature earlier than the prefrontal cortex, which governs judgment and self-control. This mismatch makes teenagers more sensitive to rejection, more impulsive, and more likely to treat immediate despair as permanent.

The statistics reflect this fragility. In 2023, the CDC reported that suicide was the second leading cause of death for adolescents and young adults in America, with rates that have risen sharply over the past two decades. Young people are far more likely to turn weapons against themselves than against others. The greatest danger is not violence outward, but despair inward.
