Picture this: it’s rush hour in New York City. A guy in a Mets cap mutters to himself on the F train platform, pacing in tight circles. Nearby, a woman checks her phone five times in ten seconds. Overhead, cameras are watching. Behind the cameras? A machine. And behind that machine? An army of bureaucrats who’ve convinced themselves that bad vibes are now a crime category.
Welcome to the MTA’s shiny new plan for keeping you safe: an AI surveillance system designed to detect “irrational or concerning conduct” before anything happens. Not after a crime. Not even during. Before. The sort of thing that, in less tech-horny times, might’ve been called “having a bad day.”
MTA Chief Security Officer Michael Kemper, the man standing between us and a future where talking to yourself means a visit from the NYPD, is calling it “predictive prevention.”
“AI is the future,” Kemper assured the MTA’s safety committee.
So far, the MTA insists this isn’t about watching you, per se. It’s about watching your behavior. Aaron Donovan, MTA spokesperson and professional splitter of hairs, clarified: “The technology being explored by the MTA is designed to identify behaviors, not people.”
And don’t worry about facial recognition, they say. That’s off the table. For now. Just ignore the dozens of vendors currently salivating over multimillion-dollar public contracts to install “emotion detection” software that’s about as accurate as your aunt’s horoscope app.