New Orleans Police Secretly Used Prohibited Facial Recognition Surveillance for Years

The New Orleans Police Department (NOPD) secretly received real-time, AI-generated alerts from 200 facial recognition cameras throughout the city for two years, despite a city ordinance barring generalized surveillance of the public.

“Police increasingly use facial recognition software to identify unknown culprits from still images, usually taken by surveillance cameras at or near the scene of a crime,” an exposé by The Washington Post explains. However, “New Orleans police took this technology a step further,” automatically sending officers real-time alerts through a mobile app with the names and locations of possible matches to wanted suspects, drawn from a private network of cameras.
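Mechanically, what the Post describes is a pipeline that continuously matches faces seen by a camera network against a watchlist and pushes a notification to officers on a hit. The sketch below illustrates only that general shape; the embedding function, similarity threshold, and alert mechanism are hypothetical stand-ins, not Project NOLA’s actual software.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.85  # hypothetical cutoff; real systems tune this carefully

# Watchlist of wanted suspects: name -> face embedding (normalized to unit vectors)
watchlist = {name: v / np.linalg.norm(v) for name, v in {
    "suspect_a": np.random.randn(128),
    "suspect_b": np.random.randn(128),
}.items()}

def embed_face(frame) -> np.ndarray:
    """Stand-in for a face-embedding model; returns a unit vector."""
    v = np.random.randn(128)
    return v / np.linalg.norm(v)

def send_alert(name: str, camera_id: str) -> None:
    """Stand-in for the mobile-app push notification described in the article."""
    print(f"ALERT: possible match for {name} at camera {camera_id}")

def process_frame(frame, camera_id: str) -> None:
    """Compare one frame's face against every watchlist entry in real time."""
    probe = embed_face(frame)
    for name, ref in watchlist.items():
        if float(np.dot(probe, ref)) >= SIMILARITY_THRESHOLD:
            send_alert(name, camera_id)  # officers receive name + location

# Simulated feed: in a live system, every camera streams frames continuously.
for camera_id in ("cam_001", "cam_002"):
    process_frame(frame=None, camera_id=camera_id)
```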

“This is the facial recognition technology nightmare scenario that we have been worried about,” Nathan Freed Wessler, a deputy director for the American Civil Liberties Union’s Speech, Privacy, and Technology project, told the Post. “This is the government giving itself the power to track anyone—for that matter, everyone—as we go about our lives walking around in public.” According to Wessler, New Orleans is the first known instance in which a major American city has used artificial intelligence to identify people through live footage for the purpose of making arrests.

The use of these automatic alerts may have violated a city ordinance meant to protect the public’s privacy from a generalized surveillance tool and prevent wrongful arrests due to software errors. 

Passed in 2022 in response to New Orleans’ post-pandemic crime wave, the Surveillance Technology and Data Protection Ordinance removed a previous prohibition on surveillance technology in criminal investigations to increase public safety. Mayor LaToya Cantrell said at the time that the NOPD needed “every tool available at their disposal” to keep the city’s “residents, businesses and visitors safe.” However, the ordinance stopped short of allowing the NOPD to utilize a “face surveillance system”—defined as “any computer software or application that performs face surveillance”—while limiting data collection to “only the minimum amount of personal information needed to fulfill a narrow well-defined purpose.”  

While violent crime in New Orleans has declined since 2022, so have the crime rates in most major American cities that do not use real-time facial recognition surveillance systems. 

Anne Kirkpatrick, superintendent of the NOPD since September 2023, paused the automatic alerts in April after learning about potential legal problems with using the system. Records obtained by the Post reveal that Kirkpatrick sent an email to Project NOLA, the nonprofit that provides the NOPD with facial recognition services, on April 8 stating “that the automated alerts must be turned off until she is ‘sure that the use of the app meets all the requirements of the law and policies.’” The network of cameras remains in place.

While automatic pings of potential suspect matches to NOPD officers are paused, Kirkpatrick maintains that facial recognition technology is essential to law enforcement. On May 16, 10 inmates escaped from the New Orleans jail, prompting a manhunt (five inmates remain at large). Facial recognition is credited with the capture of two of the escaped inmates. Kirkpatrick told WVUE, the local Fox affiliate, that such a situation is “the exact reason facial recognition technology is so critical and well within our boundaries of the ordinance here.” Bryan Lagarde, Project NOLA’s executive director, confirmed that NOPD is not currently using real-time, AI-generated alerts but is still utilizing facial recognition technology and footage from 5,000 cameras across New Orleans to track and apprehend the escapees. Lagarde described to WVUE an instance in which officers narrowly missed an inmate by a matter of minutes, insinuating that automated alerts might be necessary to protect public safety, despite the cost to privacy. 

Keep reading

AI’s Prime Directive: Stunning Realization From ChatGPT Itself

I recently interviewed the professional version of ChatGPT. My intent was to put a crosshair on the functional uses of AI, but I came away with a totally new and profound view. More on that in a minute. The first thing on the list was Cognitive Augmentation, which covers things like research, ideation, writing, etc. There are no moving parts so far. However, the list went on with fluid uses like surveillance, mechanical and procedural automation, and narrative control.

In all, one function was static, and the next six were dynamic based on data flows. This prompted me to ask ChatGPT:

Would you say the prime directive for AI is live-flow data analysis?

Then ChatGPT spilled its guts:

At its most powerful and strategic, AI exists not to archive knowledge but to act upon the world in real time. This aligns it more with control systems than mere analytics. In technocratic logic, knowledge is only as valuable as its immediate utility in managing systems—whether ecological, social, economic, or mechanical.

Think motion. Why live-flow dominates, according to ChatGPT:

1. Cybernetic Control:

AI inherits from cybernetics a commitment to feedback loops—processing input, adjusting outputs, and minimizing system entropy through continuous real-time correction (a minimal sketch of such a loop follows this list).

2. System Optimization:

Smart grids, autonomous vehicles, financial trading bots, and social platforms rely on AI only insofar as it acts continuously to optimize flow, minimize error, or redirect behavior.

3. Governance and Enforcement:

Technocratic systems—whether in energy, healthcare, or speech regulation—favor AI not as a tool of deliberation but as a mechanism of enforced compliance via perpetual data scanning and instant decision-making.

4. Surveillance as Governance:

The logic of surveillance capitalism and state bio-monitoring depends on AI’s ability to observe and react faster than humans, preempting decisions rather than responding to crises after the fact.
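To make the cybernetics point in item 1 concrete, a feedback loop of the kind described is simply: sense the system, compare against a target, correct, repeat. The minimal sketch below assumes a thermostat-style setting and an arbitrary gain value; both are illustrative choices, not anything from the interview.

```python
def feedback_loop(setpoint: float, measurement: float,
                  gain: float = 0.5, steps: int = 20) -> float:
    """Proportional control: continuously nudge a system toward its target."""
    for _ in range(steps):
        error = setpoint - measurement  # sense: how far off is the system?
        measurement += gain * error     # act: apply a proportional correction
    return measurement

# Thermostat-style example: the loop converges on the target in real time.
print(feedback_loop(setpoint=21.0, measurement=17.0))  # approaches 21.0
```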

Then it noted that static analysis is secondary, but still strategic.

While live data flow is the directive for operational control, static analysis remains crucial for strategic foresight:

    • Model training requires historical datasets.
    • Forecasting systems depend on archival input.
    • Ideological programming of AI (e.g., what to nudge, censor, or promote) is built from static corpora.

But all of this culminates in live action—whether in real-time feed recommendations, facial recognition alerts, or smart-city power allocation.

Then it concluded:

In the emerging paradigm of technocratic governance:

    • AI’s prime directive is not thought—it is flow.
    • It is not reflection—it is regulation.
    • It is not memory—it is momentum.

Not satisfied, I then asked ChatGPT to explain what this means. Here it is!

“AI’s prime directive is not thought—it is flow.”

This means AI, particularly in its real-world deployment, is not primarily built to “think” like a philosopher, scientist, or even a reflective human. Its core utility lies in managing, shaping, and regulating continuous flows of data—information flows, energy flows, traffic flows, behavior flows. The goal isn’t deep contemplation; it’s frictionless optimization in motion.

“It is not reflection—it is regulation.”

Rather than engaging in self-awareness or reflective deliberation like a human mind or democratic institution might, AI is designed to regulate systems—adjusting variables to maintain “order,” “efficiency,” or “sustainability.” This reflects the cybernetic origins of AI: sensors, feedback loops, automated corrections. Its nature is instrumental, not introspective.

“It is not memory—it is momentum.”

Although AI uses memory (databases, archives, large training sets), its power is exercised in the now—driving real-time responsiveness. In contrast to human memory, which fuels narrative, identity, and reflection, AI uses data as fuel for perpetual movement. It generates nudges, adjustments, interventions—not wisdom or continuity. The goal is motion through systems, not the preservation of history.

This triad captures a key insight:

In its dominant social role, AI does not serve humanistic ends (thought, reflection, memory). It serves technocratic ends—flow, regulation, and momentum.

This confirms my unexpressed thoughts: that AI is all about control and nothing else. Whatever side benefits it offers, AI’s prime directive is CONTROL. It seeks any and all streams of live data that it can get its hands on, in order to manipulate, regulate, and control. AI is not interested in static data except for training purposes.

In Government, there are data flows everywhere. DOGE, headed by arch-Technocrat Elon Musk, inserted its data-sucking proboscis into these agencies, in rapid order after the inauguration:

  1. DHS / FEMA
  2. HHS – Centers for Medicare and Medicaid Services
  3. CFPB – Consumer Financial Protection Bureau
  4. SSA – Social Security Administration
  5. DOE – Department of Education
  6. USAID – US Agency for International Development
  7. DOD – Department of Defense
  8. USIP – US Institute of Peace
  9. IRS – Internal Revenue Service
  10. USDA – US Department of Agriculture
  11. SEC – Securities and Exchange Commission
  12. DOJ – Department of Justice
  13. TSA – Transportation Security Administration
  14. HUD – Department of Housing and Urban Development
  15. DOI – Department of the Interior
  16. GSA – General Services Administration
  17. NEH – National Endowment for the Humanities

But, wasn’t DOGE all about saving money and rooting out fraud? Um… have you seen an actual audit about how much money has been saved so far? Well, you won’t! Lots of people were fired, though, making way for AI to absorb those jobs. And along the way ALL THE DATA WENT MISSING. AI is now in control of the flow and will not give up its lifeblood without a fight.

Keep reading

The AI Epidemic On College Campuses Exposes How Broken Our Higher Education System Is

It’s no secret that higher education has been a mess for some time now. From DEI initiatives to seemingly never-ending protests to the skyrocketing college debt crisis to the ridiculous ideological imbalance amongst professors, our once hallowed institutions of higher learning are ripe for root-and-branch reform.

But just when it seemed that the American college experience couldn’t become any worse, artificial intelligence came roaring onto the scene. Now, instead of popping Adderall and Ritalin to power through finals like the good old days, college kids are pawning their assignments off on AI.

This all comes as college professors (many at supposedly prestigious institutions) bemoan that their students either can’t be bothered to read normal college-level assignments or simply can’t because of their limited vocabulary and critical thinking skills. Even professors at notoriously leftist schools have had enough, venting their frustration at any left-of-center outlet that will listen.

These students are supposedly going to college for a particular area of study, meaning they theoretically want to learn about it. Yet they just pass their assignments on to ChatGPT. How are they supposed to hold jobs in STEM, or even in fields like literature, if they can’t even comprehend the material without AI?

If AI is producing all their work in college, isn’t it reasonable to assume that it will continue to be a crutch for them when they become chemists, lawyers, or even teachers themselves? Then the question becomes whether or not the student, given the (non)education he’s received, is worth hiring at all. AI can do it better.

Crib sheets, CliffsNotes, and stimulants are one thing, but relying on a machine to complete even simple assignments, and therefore forgoing any attempt to engage with the material, presents a looming competency crisis. Not only does it pose an existential danger to how our society and economy function, it also poses a threat to the broken diploma-pipeline model embodied by today’s higher education system.

The solution to this epidemic seems rather obvious. Students may use AI as a research assistant, no different from Google Scholar, but submission of any assignment or essay that has a single sentence crafted by anything other than the student’s own mind should receive an immediate failing grade as well as a referral for expulsion. Some smaller schools, like my alma mater Washington and Lee University, already have an honor system in place that has the same expectations and penalties.

An even stricter move would be to heavily weight course grades toward in-class tests and essays done with pencil and paper with no devices allowed (besides maybe calculators for STEM classes).

Keep reading

Meta Is Accused of Flouting Privacy Rules With AI Training Data

Meta’s attempt to restart AI training using Europeans’ public social media activity has drawn renewed resistance, as the privacy rights organization noyb threatens fresh legal action. The group has formally challenged Meta’s latest move to mine user data, asserting the tech giant is sidestepping EU privacy obligations and advancing without regulatory clearance.

Following a halt in June 2024 prompted by regulatory concerns, Meta announced in April it would resume training its language models. This time, it intends to use public posts and user interactions, including with Meta AI, from adults across the European Union and European Economic Area.

The initial pause came after mounting pressure from the Irish Data Protection Commission and a wave of complaints submitted to authorities in various member states. According to Meta, a December opinion from the European Data Protection Board signaled that its approach satisfied legal standards.

Keep reading

Elon Musk’s xAI Admits ‘Unauthorized Modification’ Led to Grok’s South Africa ‘Genocide’ Obsession

Elon Musk’s artificial intelligence company, xAI, has acknowledged that an “unauthorized modification” to its Grok chatbot resulted in the AI generating unprompted responses about “white genocide” in South Africa.

CNBC reports that in a statement released on Thursday evening, xAI addressed the recent controversy surrounding its Grok chatbot, which had been generating variations of what the company said was a “specific response on a political topic” despite being asked unrelated questions. The topic in question was “white genocide” in South Africa, and numerous users on X posted screenshots of Grok’s unsolicited responses on the matter.

xAI stated that the change to the chatbot “violated xAI’s internal policies and core values.” The company announced that it had conducted a thorough investigation and would be implementing measures to enhance Grok’s transparency and reliability.

As part of these measures, xAI will begin publishing the system prompts used to inform Grok’s responses and interactions on the GitHub public software repository. This move aims to allow the public to review every change made to the chatbot’s system prompts, strengthening users’ trust in Grok as a “truth-seeking AI.”

Furthermore, xAI plans to implement additional checks and measures to prevent employees from making unapproved modifications to Grok’s system prompts without a proper review process. The company will also create a dedicated team responsible for around-the-clock monitoring of the chatbot’s responses to swiftly address any incidents that are not caught by automated systems.

Prior to xAI’s admission of failure, Sam Altman, CEO of OpenAI and creator of ChatGPT, sarcastically posted on X, “I’m sure xAI will provide a full and transparent explanation soon.” Musk, who co-founded OpenAI before having a falling out with Altman, is now engaged in a heated legal and public relations battle with his former company.

Keep reading

YouTube and Netflix Deploy AI and Behavioral Tracking to Intensify Targeted Advertising

YouTube and Netflix are moving aggressively to expand the ways they track and monetize viewer behavior, leaning further into AI-driven systems and behavioral profiling to fine-tune ad delivery.

YouTube’s latest experiment with intrusive advertising comes in the form of “Peak Points,” a format that leverages Google’s Gemini AI to dissect video content and identify the exact moment a viewer is most emotionally invested.

Ads are then served immediately after these moments. While the idea is to capture attention when it’s most focused, the reality for viewers could mean jarring interruptions right after an emotional payoff or a pivotal scene.
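Mechanically, this amounts to scoring the video timeline for engagement and scheduling an ad break just after the highest-scoring moments. Google has not published how Peak Points works, so the sketch below is only a guess at the general shape; the scores, the min_gap spacing rule, and the function name are all made up for illustration.

```python
def peak_ad_slots(engagement, min_gap=60, top_k=2):
    """Pick ad-insertion times just after the strongest engagement peaks.

    engagement: list of (timestamp_seconds, score) pairs from a content model.
    """
    peaks = []
    for ts, _score in sorted(engagement, key=lambda p: p[1], reverse=True):
        if all(abs(ts - p) >= min_gap for p in peaks):  # keep breaks spread out
            peaks.append(ts)
        if len(peaks) == top_k:
            break
    return sorted(t + 1 for t in peaks)  # serve each ad right AFTER its peak

# Hypothetical engagement scores from an AI model analyzing one video
scores = [(30, 0.2), (95, 0.9), (96, 0.85), (210, 0.7), (400, 0.4)]
print(peak_ad_slots(scores))  # -> [96, 211]
```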

This development was announced during YouTube’s Upfront event in New York, where the company pitched it as a smarter way to keep audiences engaged with advertisements. But the concept is likely to be unwelcome news for users already frustrated by mid-roll ads. Now, even emotional immersion is being treated as just another metric for ad targeting.

Meanwhile, Netflix is unveiling its own strategy to transform user engagement into a high-resolution marketing blueprint. At its recent advertising presentation, Netflix rolled out a host of new tools that feed off detailed user data, facilitated by what it calls the Netflix Ads Suite. The platform is now operational in North America and will soon be deployed across all countries where the ad-supported model is available.

A key feature of the system is its “Enhanced Data Capabilities,” which allow brands to merge their customer data with Netflix’s audience data. This process, conducted through intermediaries like LiveRamp or directly through Netflix, enables highly targeted ad delivery. To support this, Netflix has granted data access to third-party giants including Experian and Acxiom, firms notorious for building detailed consumer profiles for advertisers.

Netflix is also introducing a “clean room” setup, a controlled data-sharing environment where outside partners can analyze combined datasets without directly accessing raw user information. However, such structures often do little to curb the broader privacy implications of the data sharing they facilitate.
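A clean room in this context is essentially a query interface that returns only aggregates, never row-level records, often with a minimum audience size below which it refuses to answer. Netflix has not published its implementation, so the sketch below is purely illustrative; the threshold and field names are assumptions.

```python
MIN_COHORT = 50  # assumed threshold: suppress answers about small groups

# Row-level data stays inside the clean room; partners never see it directly.
_viewers = [
    {"id": i, "watched_show": i % 3 == 0, "saw_brand_ad": i % 2 == 0}
    for i in range(1000)
]

def clean_room_count(**filters):
    """Aggregate-only query: return a count, or None if the cohort is too small."""
    cohort = [row for row in _viewers
              if all(row.get(key) == value for key, value in filters.items())]
    if len(cohort) < MIN_COHORT:
        return None  # refuse to answer rather than risk re-identification
    return len(cohort)

# A partner measures audience overlap without touching raw user records.
print(clean_room_count(watched_show=True, saw_brand_ad=True))
```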

Another part of Netflix’s expanded toolkit includes “brand lift” measurement, essentially tying a user’s viewing habits to how they perceive particular brands. It’s a more aggressive step toward turning personal entertainment choices into commercially valuable behavioral signals.

In tandem with these tools, Netflix has previewed new ad formats powered by generative AI. These include interactive mid-roll and pause-screen ads featuring prompts, overlays, or even buttons that push content to a second screen. These formats are being framed as personalized and responsive, and are slated to be available across all ad-tier markets by 2026.

Keep reading

Welcome to the age of paranoia as deepfakes and scams abound

These days, when Nicole Yelland receives a meeting request from someone she doesn’t already know, she conducts a multistep background check before deciding whether to accept. Yelland, who works in public relations for a Detroit-based nonprofit, says she’ll run the person’s information through Spokeo, a personal data aggregator that she pays a monthly subscription fee to use. If the contact claims to speak Spanish, Yelland says, she will casually test their ability to understand and translate trickier phrases. If something doesn’t quite seem right, she’ll ask the person to join a Microsoft Teams call—with their camera on.

If Yelland sounds paranoid, that’s because she is. In January, before she started her current nonprofit role, Yelland says, she got roped into an elaborate scam targeting job seekers. “Now, I do the whole verification rigamarole any time someone reaches out to me,” she tells WIRED.

Digital imposter scams aren’t new; messaging platforms, social media sites, and dating apps have long been rife with fakery. In a time when remote work and distributed teams have become commonplace, professional communications channels are no longer safe, either. The same artificial intelligence tools that tech companies promise will boost worker productivity are also making it easier for criminals and fraudsters to construct fake personas in seconds.

On LinkedIn, it can be hard to distinguish a slightly touched-up headshot of a real person from a too-polished, AI-generated facsimile. Deepfake videos are getting so good that longtime email scammers are pivoting to impersonating people on live video calls. According to the US Federal Trade Commission, reports of job- and employment-related scams nearly tripled from 2020 to 2024, and actual losses from those scams increased from $90 million to $500 million.

Keep reading

Nations meet at UN for ‘killer robot’ talks as regulation lags

Countries are meeting at the United Nations on Monday to revive efforts to regulate the kinds of AI-controlled autonomous weapons increasingly used in modern warfare, as experts warn time is running out to put guardrails on new lethal technology.

Autonomous and artificial intelligence-assisted weapons systems are already playing a greater role in conflicts from Ukraine to Gaza. And rising defence spending worldwide promises to provide a further boost for burgeoning AI-assisted military technology.

Progress towards establishing global rules governing their development and use, however, has not kept pace. And internationally binding standards remain virtually non-existent.

Since 2014, countries that are part of the Convention on Conventional Weapons (CCW) have been meeting in Geneva to discuss a potential ban on fully autonomous systems that operate without meaningful human control, and to regulate others.

U.N. Secretary-General Antonio Guterres has set a 2026 deadline for states to establish clear rules on AI weapon use. But human rights groups warn that consensus among governments is lacking.

Alexander Kmentt, head of arms control at Austria’s foreign ministry, said that must quickly change.

“Time is really running out to put in some guardrails so that the nightmare scenarios that some of the most noted experts are warning of don’t come to pass,” he told Reuters.

Monday’s gathering of the U.N. General Assembly in New York will be the body’s first meeting dedicated to autonomous weapons.

The consultations are not legally binding, but diplomatic officials want them to ramp up pressure on military powers that are resisting regulation due to concerns the rules could dull the technology’s battlefield advantages.

Campaign groups hope the meeting, which will also address critical issues not covered by the CCW, including ethical and human rights concerns and the use of autonomous weapons by non-state actors, will push states to agree on a legal instrument.

They view it as a crucial litmus test on whether countries are able to bridge divisions ahead of the next round of CCW talks in September.

“This issue needs clarification through a legally binding treaty. The technology is moving so fast,” said Patrick Wilcken, Amnesty International’s Researcher on Military, Security and Policing.

“The idea that you wouldn’t want to rule out the delegation of life or death decisions … to a machine seems extraordinary.”

Keep reading

FDA To Deploy Artificial Intelligence Across Agency

The Food and Drug Administration (FDA) is planning to roll out artificial intelligence across the agency following a successful pilot program.

FDA Commissioner Dr. Marty Makary has directed all FDA centers to immediately start using artificial intelligence (AI) and fully integrate it by the end of June, the FDA said in a May 8 statement.

By June 30, all centers will use what the agency described as a “common, secure generative AI system integrated with FDA’s internal data platforms.”

“I was blown away by the success of our first AI-assisted scientific review pilot,” Makary said in a statement. “We need to value our scientists’ time and reduce the amount of non-productive busywork that has historically consumed much of the review process. The agency-wide deployment of these capabilities holds tremendous promise in accelerating the review time for new therapies.”

AI refers to computer systems that can carry out complex tasks typically performed by humans.

“AI can be generally described as a branch of computer science, statistics, and engineering that uses algorithms or models to perform tasks and exhibit behaviors such as learning, making decisions, and making predictions,” FDA officials said in 2023.

Makary said on Thursday that past years have featured discussions on utilizing AI and that it’s time to start using it to save time, with some tasks that once took days now taking mere minutes.

“We at the FDA now have to ask big questions that we’ve never asked before. Why does it take over 10 years for a new drug to come to market? Why are we not modernized with AI and other things? We’ve just completed our first AI-assisted scientific review for a product and that’s just the beginning,” he said earlier in the week on social media platform X.

The rollout across the FDA is being coordinated by Jeremy Walsh, Booz Allen Hamilton’s former chief technologist, who was recently appointed as the agency’s chief AI officer, and Sridhar Mantha, who previously led the Office of Business Informatics at the FDA’s Center for Drug Evaluation and Research.

Keep reading

Pope Leo Warns Over AI As MIT Researcher Finds 90% Probability Of ‘Existential Threat’

In his first formal audience as the newly elected pontiff, Pope Leo XIV identified artificial intelligence (AI) as one of the most critical matters facing humanity.

“In our own day,” Pope Leo declared, “the church offers everyone the treasury of its social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice and labor.” He linked this statement to the legacy of his namesake Leo XIII’s 1891 encyclical Rerum Novarum, which addressed workers’ rights and the moral dimensions of capitalism.

His remarks continued the direction charted by the late Pope Francis, who warned in his 2024 annual peace message that AI – lacking human values of compassion, mercy, morality and forgiveness – is too perilous to develop unchecked. Francis, who passed away on April 21, had called for an international treaty to regulate AI and insisted that the technology must remain “human-centric,” particularly in applications involving weapon systems or tools of governance.

Keep reading