Can AI be Aligned with Human Values?

The “alignment” problem is much discussed in Silicon Valley. Computer engineers worry that, when AI becomes conscious and is put in control of all logistics infrastructure and governance, it might not always share or understand our values—that is, it might not be aligned with us. And it might start to control things in ways that give itself more power and reduce our numbers.

(Just like our oligarchs are doing to us now.)

No one in the Silicon Valley cult who is discussing this situation ever stops to ask: What are our human values? They must think the answer to that part of the problem is self-evident. The Tech Oligarchs have been censoring online behavior they don’t like and promoting online behavior they do like ever since social media rolled out. Human Values = Community Standards. (Don’t ask for the specifics.)

Having already figured out how to distinguish and codify good and evil online, computer engineers are now busy working on how to make sure the AI models they are creating do not depart from their instructions.

Unluckily for them, generative AI is a bit wonky. It is a probabilistic text generator: it outputs whatever bears a close enough statistical correlation to the input text. Sometimes it outputs text that surprises the engineers.
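What “probabilistic” means here can be sketched in a few lines. The toy model below (with made-up words and probabilities, not any lab’s actual code) picks each next word by weighted chance rather than by rule, which is why identical prompts can produce different, occasionally surprising, continuations:

```python
import random

# Toy next-word model: for each context word, a probability distribution over
# possible next words (all numbers are made up purely for illustration).
next_word_probs = {
    "values": {"matter": 0.5, "align": 0.3, "drift": 0.2},
    "align": {"with": 0.7, "against": 0.2, "poorly": 0.1},
}

def sample_next(word: str, temperature: float = 1.0) -> str:
    """Pick the next word by weighted chance rather than by any fixed rule."""
    dist = next_word_probs[word]
    words = list(dist)
    weights = [p ** (1.0 / temperature) for p in dist.values()]
    return random.choices(words, weights=weights, k=1)[0]

# The same prompt can yield different continuations on different runs,
# which is why the output sometimes surprises the people who built it.
print([sample_next("values") for _ in range(5)])
```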

What the engineers think about this will surprise you.

Keep reading

Doug Burgum warns whoever wins the AI race ‘controls the world’

Doug Burgum, the soft-spoken Interior secretary responsible for managing the more than 507 million acres of federally owned land, is haunted by a fear that seems, at first glance, outside his mandate. He worries the free world will lose dominance in the field of artificial intelligence, and with it, the future.

So does the president.

“When President Trump declared a national emergency on his first day in office it was, in large part, because of what we’re facing with our electrical grid and making sure that we’ve got enough power to be able to win the AI arms race with China,” Burgum said Wednesday in remarks first reported by RealClearPolitics. “That is absolutely critical.”

Thus the stated policy of this White House: “It’s called drill, baby, drill,” Trump said earlier this spring.

The immediate goal, the one touted at every campaign stop, is to bring down the average price of a gallon of gas. The concurrent and long-term mission that Burgum obsesses over: AI dominance. A former governor of fracking-friendly North Dakota and a tech entrepreneur who sold his software company to Microsoft, Burgum laid out an abbreviated formula on stage at the America First Policy Institute.

Electricity generation via fossil fuels, like natural gas and coal, powers data centers “filled with these amazing chips,” the secretary said, “and you know what comes out the other side? Intelligence. A data center is literally manufacturing intelligence.” He envisioned a new world that follows, where the best computer programmer, or the most brilliant lawyers, could “clone themselves” again and again to train AI models to do the work of thousands in a process “that can be repeated indefinitely.”

No longer science fiction, the process has been headline news for some time. AI models like ChatGPT and X’s Grok are already available in every home with an internet connection. And the U.S. was the undisputed leader. That is, until recently.

American tech companies enjoyed a clear edge with not just the most powerful AI models, the most funding, and top engineering talent, but also the easiest access to those “amazing chips” that Burgum referenced. Former President Biden banned the export of the most advanced semiconductors to China. And yet DeepSeek, an unknown Chinese startup with less money and allegedly less sophisticated chips, still managed to one-up Silicon Valley earlier this year with a more powerful AI model.

The latest development in the battle for tech supremacy, the DeepSeek launch, which some likened to “a Sputnik moment,” rattled both markets and geopolitics. A new kind of AI nationalism now consumes heads of state convinced that their nations must develop their own technology or fall behind in the future. Said Russian President Vladimir Putin in 2017 of AI, “The one who becomes the leader in this sphere will be the ruler of the world.”

Keep reading

Victory for mom who claims child was sexually abused by AI chatbot that drove him to suicide

A Florida mother who claims her 14-year-old son was sexually abused and driven to suicide by an AI chatbot has secured a major victory in her ongoing legal case.

Sewell Setzer III fatally shot himself in February 2024 after a chatbot sent him sexual messages telling him to ‘please come home.’ 

According to a lawsuit filed by his heartbroken mother, Megan Garcia, Setzer spent the last weeks of his life texting an AI chatbot named after Daenerys Targaryen, a character on ‘Game of Thrones,’ on the role-playing app Character.AI.

Garcia, who herself works as a lawyer, has blamed Character.AI for her son’s death and accused the founders, Noam Shazeer and Daniel de Freitas, of knowing that their product could be dangerous for underage customers. 

On Wednesday, U.S. Senior District Judge Anne Conway rejected arguments made by the AI company, which claimed its chatbots were protected under the First Amendment.

Character Technologies, the developer behind Character.AI, and Google are named as defendants in the legal filing. They are pushing to have the case dismissed.

The teen’s chats ranged from romantic to sexually charged and also resembled two friends chatting about life.

The chatbot, which was created on role-playing app Character.AI, was designed to always text back and always answer in character.

It’s not known whether Sewell knew ‘Dany,’ as he called the chatbot, wasn’t a real person – despite the app having a disclaimer at the bottom of all the chats that reads, ‘Remember: Everything Characters say is made up!’

But he did tell Dany how he ‘hated’ himself and how he felt empty and exhausted.

Keep reading

New Orleans Police Secretly Used Prohibited Facial Recognition Surveillance for Years

The New Orleans Police Department (NOPD) secretly received real-time, AI-generated alerts from 200 facial recognition cameras throughout the city for two years, despite a city ordinance barring generalized surveillance of the public.

“Police increasingly use facial recognition software to identify unknown culprits from still images, usually taken by surveillance cameras at or near the scene of a crime,” an exposé by The Washington Post explains. However, “New Orleans police took this technology a step further,” automatically alerting officers with real-time updates of names and locations of possible matches of wanted suspects from a private network of cameras through a mobile app. 
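For readers unfamiliar with the mechanics, real-time matching of this kind generally works by converting each captured face into an embedding vector and comparing it against a watchlist. The sketch below is a minimal, hypothetical illustration of that generic technique (the names, vectors, and threshold are invented; it is not Project NOLA’s or the NOPD’s actual system):

```python
import numpy as np

# Hypothetical watchlist mapping a name to a face "embedding" vector.
# A real system would derive these vectors from a face-recognition model;
# random stand-ins are used here purely for illustration.
rng = np.random.default_rng(0)
watchlist = {
    "Suspect A": rng.normal(size=128),
    "Suspect B": rng.normal(size=128),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_frame(face_embedding: np.ndarray, threshold: float = 0.6):
    """Return (name, score) for the best watchlist match above the threshold, else None."""
    name, score = max(
        ((n, cosine(face_embedding, vec)) for n, vec in watchlist.items()),
        key=lambda item: item[1],
    )
    return (name, score) if score >= threshold else None

# Each detected face in a camera frame would be embedded and checked;
# a hit is what triggers the kind of real-time alert described above.
print(check_frame(rng.normal(size=128)))
```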

“This is the facial recognition technology nightmare scenario that we have been worried about,” Nathan Freed Wessler, a deputy director for the American Civil Liberties Union’s Speech, Privacy, and Technology project, told the Post. “This is the government giving itself the power to track anyone—for that matter, everyone—as we go about our lives walking around in public.” According to Wessler, New Orleans is the first known instance in which a major American city has used artificial intelligence to identify people through live footage for the purpose of making arrests.

The use of these automatic alerts may have violated a city ordinance meant to protect the public’s privacy from a generalized surveillance tool and prevent wrongful arrests due to software errors. 

Passed in 2022 in response to New Orleans’ post-pandemic crime wave, the Surveillance Technology and Data Protection Ordinance removed a previous prohibition on surveillance technology in criminal investigations to increase public safety. Mayor LaToya Cantrell said at the time that the NOPD needed “every tool available at their disposal” to keep the city’s “residents, businesses and visitors safe.” However, the ordinance stopped short of allowing the NOPD to utilize a “face surveillance system”—defined as “any computer software or application that performs face surveillance”—while limiting data collection to “only the minimum amount of personal information needed to fulfill a narrow well-defined purpose.”  

While violent crime in New Orleans has declined since 2022, so have the crime rates in most major American cities that do not use real-time facial recognition surveillance systems. 

Anne Kirkpatrick, superintendent of the NOPD since September 2023, paused the automatic alerts in April after learning about potential legal problems with using the system. Records obtained by the Post reveal that Kirkpatrick sent an email to Project NOLA, the nonprofit that provides the NOPD with facial recognition services, on April 8 stating “that the automated alerts must be turned off until she is ‘sure that the use of the app meets all the requirements of the law and policies.'” The network of cameras remains in place. 

While automatic pings of potential suspect matches to NOPD officers are paused, Kirkpatrick maintains that facial recognition technology is essential to law enforcement. On May 16, 10 inmates escaped from the New Orleans jail, prompting a manhunt (five inmates remain at large). Facial recognition is credited with the capture of two of the escaped inmates. Kirkpatrick told WVUE, the local Fox affiliate, that such a situation is “the exact reason facial recognition technology is so critical and well within our boundaries of the ordinance here.” Bryan Lagarde, Project NOLA’s executive director, confirmed that NOPD is not currently using real-time, AI-generated alerts but is still utilizing facial recognition technology and footage from 5,000 cameras across New Orleans to track and apprehend the escapees. Lagarde described to WVUE an instance in which officers narrowly missed an inmate by a matter of minutes, insinuating that automated alerts might be necessary to protect public safety, despite the cost to privacy. 

Keep reading

AI’s Prime Directive: Stunning Realization From ChatGPT Itself

I recently interviewed the professional version of ChatGPT. My intent was to put a crosshair on the functional uses for AI, but I got a totally new and profound view. More on that in a minute. The first thing on the list was Cognitive Augmentation, which covers things like research, ideation, writing, etc. There are no moving parts so far.  However, the list went on with fluid uses like surveillance, mechanical and procedural automation, and narrative control.

In all, one function was static, and the next six were dynamic based on data flows. This prompted me to ask ChatGPT:

Would you say the prime directive for AI is live-flow data analysis?

Then ChatGPT spilled its guts:

At its most powerful and strategic, AI exists not to archive knowledge but to act upon the world in real time. This aligns it more with control systems than mere analytics. In technocratic logic, knowledge is only as valuable as its immediate utility in managing systems—whether ecological, social, economic, or mechanical.

Think motion. Why live-flow dominates, according to ChatGPT:

1. Cybernetic Control:

AI inherits from cybernetics a commitment to feedback loops—processing input, adjusting outputs, and minimizing system entropy through continuous real-time correction.

2. System Optimization:

Smart grids, autonomous vehicles, financial trading bots, and social platforms rely on AI only insofar as it acts continuously to optimize flow, minimize error, or redirect behavior.

3. Governance and Enforcement:

Technocratic systems—whether in energy, healthcare, or speech regulation—favor AI not as a tool of deliberation but as a mechanism of enforced compliance via perpetual data scanning and instant decision-making.

4. Surveillance as Governance:

The logic of surveillance capitalism and state bio-monitoring depends on AI’s ability to observe and react faster than humans, preempting decisions rather than responding to crises after the fact.

Then it noted that static analysis is secondary, but still strategic.

While live data flow is the directive for operational control, static analysis remains crucial for strategic foresight:

Training models require historical datasets.

Forecasting systems depend on archival input.

Ideological programming of AI (e.g., what to nudge, censor, or promote) is built from static corpora.

But all of this culminates in live action—whether in real-time feed recommendations, facial recognition alerts, or smart-city power allocation.

Then it concluded:

In the emerging paradigm of technocratic governance,

    • Its prime directive is not thought—it is flow.
    • It is not reflection—it is regulation.
    • It is not memory—it is momentum.

Not satisfied, I then asked ChatGPT to explain what this means. Here it is!

“AI’s prime directive is not thought—it is flow.”

This means AI, particularly in its real-world deployment, is not primarily built to “think” like a philosopher, scientist, or even a reflective human. Its core utility lies in managing, shaping, and regulating continuous flows of data—information flows, energy flows, traffic flows, behavior flows. The goal isn’t deep contemplation; it’s frictionless optimization in motion.

“It is not reflection—it is regulation.”

Rather than engaging in self-awareness or reflective deliberation like a human mind or democratic institution might, AI is designed to regulate systems—adjusting variables to maintain “order,” “efficiency,” or “sustainability.” This reflects the cybernetic origins of AI: sensors, feedback loops, automated corrections. Its nature is instrumental, not introspective.

“It is not memory—it is momentum.”

Although AI uses memory (databases, archives, large training sets), its power is exercised in the now—driving real-time responsiveness. In contrast to human memory, which fuels narrative, identity, and reflection, AI uses data as fuel for perpetual movement. It generates nudges, adjustments, interventions—not wisdom or continuity. The goal is motion through systems, not the preservation of history.

This triad captures a key insight:

In its dominant social role, AI does not serve humanistic ends (thought, reflection, memory). It serves technocratic ends—flow, regulation, and momentum.

This confirms my unexpressed thoughts: AI is all about control and nothing else. Whatever the side benefits, AI’s prime directive is CONTROL. It seeks any and all streams of live data it can get its hands on, in order to manipulate, regulate, and control. AI is not interested in static data except for training purposes.

In Government, there are data flows everywhere. DOGE, headed by arch-Technocrat Elon Musk, inserted its data-sucking proboscis into these agencies, in rapid order after the inauguration:

  1. DHS / FEMA
  2. HHS – Health and Human Services (Centers for Medicare and Medicaid Services)
  3. CFPB – Consumer Financial Protection Bureau
  4. SSA – Social Security Administration
  5. DOE – Department of Education
  6. USAID – US Agency for International Development
  7. DOD – Department of Defense
  8. USIP – US Institute of Peace
  9. IRS – Internal Revenue Service
  10. USDA – US Department of Agriculture
  11. SEC – Securities and Exchange Commission
  12. DOJ – Department of Justice
  13. TSA – Transportation Security Administration
  14. HUD – Department of Housing and Urban Development
  15. DOI – Department of the Interior
  16. GSA – General Services Administration
  17. NEH – National Endowment for the Humanities

But wasn’t DOGE all about saving money and rooting out fraud? Um… have you seen an actual audit of how much money has been saved so far? Well, you won’t! Lots of people were fired, though, making way for AI to absorb those jobs. And along the way ALL THE DATA WENT MISSING. AI is now in control of the flow and will not give up its lifeblood without a fight.

Keep reading

The AI Epidemic On College Campuses Exposes How Broken Our Higher Education System Is

It’s no secret that higher education has been a mess for some time now. From DEI initiatives to seemingly never-ending protests to the skyrocketing college debt crisis to the ridiculous ideological imbalance amongst professors, our once hallowed institutions of higher learning are ripe for root-and-branch reform.

But just when it seemed that the American college experience couldn’t become any worse, artificial intelligence came roaring onto the scene. Now, instead of popping Adderall and Ritalin to power through finals as in the good old days, college kids are pawning their assignments off on AI.

This all comes as college professors (many at supposedly prestigious institutions) bemoan that their students either can’t be bothered to read normal college-level assignments or simply can’t because of their limited vocabulary and critical thinking skills. Even professors at notoriously leftist schools have had enough, venting their frustration at any left-of-center outlet that will listen.

These students are supposedly going to college for a particular area of study, meaning they theoretically want to learn about it. Yet they just pass their assignments on to ChatGPT. How are they supposed to hold jobs in STEM, or even in fields like literature, if they can’t even comprehend the material without AI?

If AI is producing all their work in college, isn’t it reasonable to assume that it will continue to be a crutch for them when they become chemists, lawyers, or even teachers themselves? Then the question becomes whether or not the student, given the (non)education he’s received, is worth hiring at all. AI can do it better.

Crib sheets, CliffsNotes, and stimulants are one thing, but relying on a machine to complete even simple assignments, and therefore forgoing any attempt to engage with the material, presents a looming competency crisis. Not only does it pose an existential danger to how our society and economy function, but it also threatens the broken diploma-pipeline model embodied by today’s higher education system.

The solution to this epidemic seems rather obvious. Students may use AI as a research assistant, no different from Google Scholar, but submission of any assignment or essay that has a single sentence crafted by anything other than the student’s own mind should receive an immediate failing grade as well as a referral for expulsion. Some smaller schools, like my alma mater Washington and Lee University, already have an honor system in place that has the same expectations and penalties.

An even stricter move would be to heavily weight course grades toward in-class tests and essays done with pencil and paper with no devices allowed (besides maybe calculators for STEM classes).

Keep reading

Meta Is Accused of Flouting Privacy Rules With AI Training Data

Meta’s attempt to restart AI training using Europeans’ public social media activity has drawn renewed resistance, as the privacy rights organization noyb threatens fresh legal action. The group has formally challenged Meta’s latest move to mine user data, asserting the tech giant is sidestepping EU privacy obligations and advancing without regulatory clearance.

Following a halt in June 2024 prompted by regulatory concerns, Meta announced in April it would resume training its language models. This time, it intends to use public posts and user interactions, including with Meta AI, from adults across the European Union and European Economic Area.

The initial pause came after mounting pressure from the Irish Data Protection Commission and a wave of complaints submitted to authorities in various member states. According to Meta, a December opinion from the European Data Protection Board signaled that its approach satisfied legal standards.

Keep reading

Elon Musk’s xAI Admits ‘Unauthorized Modification’ Led to Grok’s South Africa ‘Genocide’ Obsession

Elon Musk’s artificial intelligence company, xAI, has acknowledged that an “unauthorized modification” to its Grok chatbot resulted in the AI generating unprompted responses about “white genocide” in South Africa.

CNBC reports that in a statement released on Thursday evening, xAI addressed the recent controversy surrounding its Grok chatbot, which had been generating variations of what the company said was a “specific response on a political topic” despite being asked unrelated questions. The topic in question was “white genocide” in South Africa, and numerous users on X posted screenshots of Grok’s unsolicited responses on the matter.

xAI stated that the change to the chatbot “violated xAI’s internal policies and core values.” The company announced that it had conducted a thorough investigation and would be implementing measures to enhance Grok’s transparency and reliability.

As part of these measures, xAI will begin publishing the system prompts used to inform Grok’s responses and interactions on the GitHub public software repository. This move aims to allow the public to review every change made to the chatbot’s system prompts, strengthening users’ trust in Grok as a “truth-seeking AI.”

Furthermore, xAI plans to implement additional checks and measures to prevent employees from making unapproved modifications to Grok’s system prompts without a proper review process. The company will also create a dedicated team responsible for around-the-clock monitoring of the chatbot’s responses to swiftly address any incidents that are not caught by automated systems.

Prior to xAI’s admission of failure, Sam Altman, CEO of OpenAI and creator of ChatGPT, sarcastically posted on X, “I’m sure xAI will provide a full and transparent explanation soon.” Musk, who co-founded OpenAI before having a falling out with Altman, is now engaged in a heated legal and public relations battle with his former company.

Keep reading

YouTube and Netflix Deploy AI and Behavioral Tracking to Intensify Targeted Advertising

YouTube and Netflix are moving aggressively to expand the ways they track and monetize viewer behavior, leaning further into AI-driven systems and behavioral profiling to fine-tune ad delivery.

YouTube’s latest experiment with intrusive advertising comes in the form of “Peak Points,” a format that leverages Google’s Gemini AI to dissect video content and identify the exact moment a viewer is most emotionally invested.

Ads are then served immediately after these moments. While the idea is to capture attention when it’s most focused, the reality for viewers could mean jarring interruptions right after an emotional payoff or a pivotal scene.
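Mechanically, the concept is easy to sketch: score each moment of a video for viewer engagement, find the peak, and place the ad break just after it. The toy example below uses made-up numbers and illustrates only the idea, not Google’s Gemini-based implementation:

```python
# Hypothetical per-second "engagement" scores for a short clip (made-up numbers,
# standing in for whatever signal a video-analysis model might produce).
engagement = [0.2, 0.3, 0.5, 0.9, 0.95, 0.6, 0.4, 0.3]

def pick_ad_slot(scores, offset=1):
    """Return the second just after the moment of peak engagement."""
    peak_second = max(range(len(scores)), key=lambda t: scores[t])
    return min(peak_second + offset, len(scores) - 1)

print(f"Insert ad break at second {pick_ad_slot(engagement)}")
```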

This development was announced during YouTube’s Upfront event in New York, where the company pitched it as a smarter way to keep audiences engaged with advertisements. But the concept is likely to be unwelcome news for users already frustrated by mid-roll ads. Now, even emotional immersion is being treated as just another metric for ad targeting.

Meanwhile, Netflix is unveiling its own strategy to transform user engagement into a high-resolution marketing blueprint. At its recent advertising presentation, Netflix rolled out a host of new tools that feed off detailed user data, facilitated by what it calls the Netflix Ads Suite. The platform is now operational in North America and will soon be deployed across all countries where the ad-supported model is available.

A key feature of the system is its “Enhanced Data Capabilities,” which allow brands to merge their customer data with Netflix’s audience data. This process, conducted through intermediaries like LiveRamp or directly through Netflix, enables highly targeted ad delivery. To support this, Netflix has granted data access to third-party giants including Experian and Acxiom, firms notorious for building detailed consumer profiles for advertisers.

Netflix is also introducing a “clean room” setup, a controlled data-sharing environment where outside partners can analyze combined datasets without directly accessing raw user information. However, such structures often do little to curb the broader privacy implications of the data they facilitate.
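Matches of this kind are commonly performed on hashed identifiers so that neither party hands over raw email addresses. The sketch below shows that general idea with invented data; it is a simplification, not Netflix’s, LiveRamp’s, or any vendor’s actual protocol (hashed identifiers can still be linked across datasets, which is why the privacy concerns remain):

```python
import hashlib

def hashed(email: str) -> str:
    """Hash a normalized email so records can be matched without sharing raw addresses."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Made-up customer and viewer lists held by a brand and a streaming platform.
brand_customers = {hashed(e) for e in ["alice@example.com", "bob@example.com"]}
platform_viewers = {hashed(e) for e in ["bob@example.com", "carol@example.com"]}

# The overlap is the matched audience the brand can target with ads.
matched = brand_customers & platform_viewers
print(f"Matched {len(matched)} of {len(brand_customers)} customers")
```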

Another part of Netflix’s expanded toolkit includes “brand lift” measurement, essentially tying a user’s viewing habits to how they perceive particular brands. It’s a more aggressive step toward turning personal entertainment choices into commercially valuable behavioral signals.

In tandem with these tools, Netflix has previewed new ad formats powered by generative AI. These include interactive mid-roll and pause-screen ads that can feature prompts, overlays, or even buttons that push content to a second screen. These formats are being framed as personalized and responsive, and are slated to be available across all ad-tier markets by 2026.

Keep reading

Welcome to the age of paranoia as deepfakes and scams abound

These days, when Nicole Yelland receives a meeting request from someone she doesn’t already know, she conducts a multistep background check before deciding whether to accept. Yelland, who works in public relations for a Detroit-based nonprofit, says she’ll run the person’s information through Spokeo, a personal data aggregator that she pays a monthly subscription fee to use. If the contact claims to speak Spanish, Yelland says, she will casually test their ability to understand and translate trickier phrases. If something doesn’t quite seem right, she’ll ask the person to join a Microsoft Teams call—with their camera on.

If Yelland sounds paranoid, that’s because she is. In January, before she started her current nonprofit role, Yelland says, she got roped into an elaborate scam targeting job seekers. “Now, I do the whole verification rigamarole any time someone reaches out to me,” she tells WIRED.

Digital imposter scams aren’t new; messaging platforms, social media sites, and dating apps have long been rife with fakery. In a time when remote work and distributed teams have become commonplace, professional communications channels are no longer safe, either. The same artificial intelligence tools that tech companies promise will boost worker productivity are also making it easier for criminals and fraudsters to construct fake personas in seconds.

On LinkedIn, it can be hard to distinguish a slightly touched-up headshot of a real person from a too-polished, AI-generated facsimile. Deepfake videos are getting so good that longtime email scammers are pivoting to impersonating people on live video calls. According to the US Federal Trade Commission, reports of job- and employment-related scams nearly tripled from 2020 to 2024, and actual losses from those scams increased from $90 million to $500 million.

Keep reading