Proposed Bill On Automated License Plate Readers Would Safeguard Data For Illegals, Gender-Affirming Care, Abortions

Today the Judiciary Committee introduced a bill to regulate the use of automated license plate reader systems and safeguard data derived from such systems.

H.B. No. 5449, An Act Concerning Automated License Plate Reader Systems, would dictate how public agencies and law enforcement could operate automated license plate reader systems or use the data from such systems.

Data collected by these systems cannot be retained for more than seven days unless there’s a warrant or a court order, or if the data is for the purpose of collecting highway usage fees.
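The retention rule amounts to a simple check: purge after seven days unless one of the listed exceptions applies. Here is a minimal sketch in Python; all function and parameter names are invented for illustration, since the bill specifies the rule but no implementation:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of the bill's retention rule: ALPR records older than
# seven days must be purged unless a warrant/court order applies or the data
# serves highway usage fee collection. Names here are illustrative only.
RETENTION_WINDOW = timedelta(days=7)

def must_purge(captured_at: datetime, now: datetime,
               has_warrant_or_court_order: bool = False,
               used_for_highway_fees: bool = False) -> bool:
    """Return True if a record has aged out and no exception applies."""
    if has_warrant_or_court_order or used_for_highway_fees:
        return False
    return now - captured_at > RETENTION_WINDOW
```

For example, a record captured eight days ago with no exception would come back `True` (purge it), while the same record held under a court order would not.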

The bill describes a number of restrictions for usage of data collected by automated license plate reader systems.

No public agency or law enforcement agency operating a system may use the data for:

  • Monitoring or investigating an individual based on an individual’s actual or perceived race, ethnicity, criminal history, sexual orientation, gender identity or expression, sex, pregnancy status, disability, citizenship, nationality or income level;
  • Identifying individuals engaged in activities protected under the First Amendment to the United States Constitution;
  • Investigating a suspected immigration violation or otherwise assisting in any civil or criminal immigration enforcement activity;
  • Investigating or prosecuting any individual who has sought, received, or provided reproductive health care services or gender-affirming health care services;
  • Collecting data on or near the premises of a reproductive or sexual health facility, a facility that provides gender-affirming care services, or a nonprofit or community organization that primarily serves immigrants (which presumably includes illegals), excluding any property under federal jurisdiction;
  • Sharing with other individuals or entities, except under certain circumstances;
  • Participating in any multistate, intrastate, or national data-sharing system or network, except under certain conditions; or
  • Permitting a public agency to have real-time, bulk or automatic access, except in specific cases.

The bill also says automated license plate reader data “shall not be disclosable under the Freedom of Information Act pursuant to chapter 14 of the general statutes,” though agencies may disclose the locations of any still or video image recording device used as part of an automated license plate reader system, as well as other data derived from audits of the system, usage logs, etc., so long as all automated license plate reader data has been redacted.

There would also be limits on contracts or agreements with private vendors that might interact with automated license plate reader systems and data to restrict them from selling, sharing, transferring, disseminating or otherwise providing access to the data, except as authorized in the bill.

Agencies could be sued for failing to follow HB 5449.

Keep reading

AI-Generated Art Can’t Receive Copyright Protection After Supreme Court Declines Case

The advancement of AI-generated art suffered a major blow this week when the Supreme Court left in place a lower-court ruling that such works cannot be protected under U.S. copyright law.

The original plaintiff, a computer scientist from Missouri named Stephen Thaler, appealed to the Supreme Court after “lower courts upheld a U.S. Copyright Office decision that the AI-crafted visual art at issue in the case was ineligible for copyright protection because it did not have a human creator,” per Reuters.

Thaler, of St. Charles, Missouri, applied for a federal copyright registration in 2018 covering “A Recent Entrance to Paradise,” visual art he said his AI technology “DABUS” created. The image shows train tracks entering a portal, surrounded by what appears to be green and purple plant imagery.

The Copyright Office rejected his application in 2022, finding that creative works must have human authors to be eligible to receive a copyright.

According to The Verge, the U.S. Copyright Office issued new guidance last year saying that AI-generated works could not enjoy copyright protection, potentially destroying the profitability of works produced from text prompts with no original source material. Thaler had also tried to patent his AI-generated works, an effort that has likewise faced several legal challenges.

“The US federal circuit court similarly determined that AI systems can’t patent inventions because they aren’t human, which the US Patent Office reaffirmed in 2024 with new guidance, stating that while AI systems can’t be listed as inventors on a patent, people can still use AI-powered tools to develop them,” noted The Verge.

Thaler’s lawyers acknowledged that the Supreme Court’s rejection would likely hurt the advancement of AI-generated artworks.

“Even if it later overturns the Copyright Office’s test in another case, it will be too late,” Thaler’s lawyers claimed. “The Copyright Office will have irreversibly and negatively impacted AI development and use in the creative industry during critically important years.”

Without copyright protection, AI-generated works would fall into the public domain, allowing anyone to copy, sell, or use them, essentially destroying their potential as commercial intellectual property.

Keep reading

FBI Joins Search for Missing Retired Air Force General Who Led U.S. Military Research Lab That Develops Directed-Energy Technology

The FBI has now joined the search for retired Maj. Gen. William Neil McCasland, who was reported missing in New Mexico.

As The Gateway Pundit previously reported, retired Maj. Gen. McCasland, who previously commanded the Phillips Research Site at Kirtland Air Force Base in New Mexico, has been reported missing since Friday.

McCasland served three years as the commander of the Phillips Research Site at Kirtland Air Force Base in New Mexico, which is notable for its directed-energy weapons and advanced space technologies.

The investigation into McCasland’s disappearance is being led by the Bernalillo County Sheriff’s Office, and on Tuesday afternoon, the office announced it has partnered with the FBI Albuquerque Field Office.

In a post on X, the Bernalillo County Sheriff’s Office stated, “Due to his background and established partnerships, BCSO is coordinating closely with multiple agencies, including the FBI Albuquerque Field Office, which is assisting as standard practice when it has a tool, tactic, or technique that may benefit the investigation. BCSO remains the lead agency.”

Keep reading

X CRACKS DOWN on AI-Generated War Propaganda: NO MORE Cashing In on Fake Footage Without Labels

In an effort to protect truthful information during global conflicts, X has rolled out strict new rules targeting creators who peddle AI-generated videos of war without clear disclosures. This comes as pro-Iran propagandists flooded the platform with fabricated clips designed to sow chaos.

The policy shift, effective immediately, focuses on X’s Creator Revenue Sharing program. Creators posting AI-made content showing armed conflicts must include a clear label or face penalties that hit where it hurts: their wallets.

According to details from X’s head of product, Nikita Bier, the platform is clamping down hard. “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people,” Bier stated.

Bier elaborated that X will use its Community Notes system—crowdsourced fact-checking that empowers users over elite moderators—along with post metadata to detect undeclared AI content. The rules don’t ban AI videos outright; they just demand transparency via X’s built-in “Made with AI” label option.

Violators get a 90-day suspension from earning ad revenue on their posts. A second offense leads to a permanent ban from the program. This targets those exploiting wars for profit, without stifling creative expression.
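The enforcement logic described above reduces to a transparency check plus an escalating penalty. A hypothetical sketch follows; the parameter names and penalty strings are invented for illustration and are not X’s actual API or policy text:

```python
from typing import Optional

def check_post(is_ai_generated: bool, depicts_armed_conflict: bool,
               has_ai_label: bool, prior_offenses: int) -> Optional[str]:
    """Return the penalty (if any) for a monetized post under the new rules.

    Illustrative only: the rules target unlabeled AI-made conflict footage,
    not AI content as such.
    """
    if not (is_ai_generated and depicts_armed_conflict):
        return None  # the policy only covers AI-made war footage
    if has_ai_label:
        return None  # transparency via the label satisfies the rule
    if prior_offenses == 0:
        return "90-day suspension from revenue sharing"
    return "permanent ban from the Creator Revenue Sharing program"
```

Under this reading, a properly labeled AI clip earns no penalty at all; only undisclosed AI war footage triggers the 90-day suspension, and a repeat offense the permanent ban.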

The update follows a surge in deceptive content amid the Iran-Israel clash. Pro-Iran accounts have pushed AI fakes, like one claiming Iranian missiles sank the USS Abraham Lincoln. “Iran’s IRGC claims to have struck USS Abraham Lincoln with ballistic missiles. LIE,” CENTCOM posted. “The Lincoln was not hit. The missiles launched didn’t even come close.”

Keep reading

Scientists warn against crappy age verification: ‘if implemented without careful consideration… the new regulation might cause more harm than good’

As age verification becomes more commonplace across the web, some are opposing its rollout on security and privacy grounds. An open letter signed by over 400 researchers and scientists, laying out the many reasons why age verification (and most especially the current age assurance technology) isn’t all it’s cracked up to be, is now available to read in full.

Here’s a précis of the whole thing: Governments across the world are adopting legislation mandating the use of, or compliance with, age assurance methods in the name of keeping kids off the bad parts of the web. That sounds like a good idea until you look into the details, which suggest these methods are often haphazardly applied, with little regard for privacy and data protection.

The open letter outlines a few key arguments:

How easily age verification can be bypassed. This was made evident by Discord’s age verification, provided by K-id, which could be bypassed using Sam’s face from Death Stranding. As the open letter points out, it’s possible to lie about one’s age, trick a system, or buy age-verified credentials online. VPNs are also widely available and provide an easy way to bypass any and all age assurance methods, even if access to said VPNs is itself age-restricted.

How unreliable age estimation can be. Achieving any semblance of effectiveness would potentially necessitate large-scale and invasive data collection, or widespread use of government IDs at every online interaction. As the letter notes, “We conclude that age assessment presents an inherent disproportionate risk of serious privacy violations and discrimination, without guarantees of effectiveness.”

How it necessitates a global trust infrastructure. Building one is among the main goals of the EU’s digital identity wallet, which, though only pan-EU in scope, is meant as a common foundation on which all member states can interoperate for age assurance. Though as the letter suggests, “even if such a trust infrastructure would exist, checks can be circumvented by acquiring valid certificates or using VPNs, as long as age assurance regulations are not universally enforced by all affected services.”

How it can push users to lesser-known, potentially dangerous websites. With age assurance enforced and the larger, more responsible websites complying, there is a chance of pushing users toward lesser-known, potentially dangerous or scam websites. Following the rollout of the UK’s Online Safety Act, one of the first investigations launched under it was into porn websites that did not immediately comply with the new rules for age verification checks. Other websites chose to turn off services to the UK altogether.

Keep reading

California Law Forces Age-Tracking Into Every Operating System by 2027

California wants to build a surveillance layer into every device its residents touch. Assembly Bill 1043, signed by Governor Gavin Newsom and taking effect January 1, 2027, requires every operating system provider to collect age information from users at account setup and broadcast that data to app developers through a real-time API.

Windows, macOS, Android, iOS, Linux distributions, Valve’s SteamOS: if it runs an operating system, it’s covered by this overreaching law.
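The age signal the law contemplates is essentially a mapping from a user’s declared age to a coarse bracket that the operating system exposes to apps. A hypothetical sketch, assuming the commonly reported bracket boundaries (under 13, 13–15, 16–17, adult); the statute itself defines the actual categories and delivery mechanism, and every name below is invented for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgeSignal:
    """Coarse age bracket an OS might broadcast to apps (hypothetical)."""
    bracket: str  # e.g. "under_13", "13_to_15", "16_to_17", "adult"

def bracket_for_age(age: int) -> AgeSignal:
    """Map a declared age to the bracket an app would receive."""
    if age < 13:
        return AgeSignal("under_13")
    if age < 16:
        return AgeSignal("13_to_15")
    if age < 18:
        return AgeSignal("16_to_17")
    return AgeSignal("adult")
```

The point of the bracket design is that apps never see a birthdate, only the category; the privacy objection is that the OS still has to collect and store the underlying age for every user.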

The requirements are particularly dumb for open-source Linux operating systems. Linux exists specifically because some people want computing that doesn’t surveil them. That’s not incidental to why the platform exists; it’s foundational.

Distributions like Arch, Debian, and Gentoo have no centralized account infrastructure by design. Users download ISOs from mirrors, modify source code freely, and run systems that report to nobody.

Keep reading

Mexico Mandates Biometric SIM Registration for All Phone Numbers

Anonymous prepaid SIM cards are dying in Mexico. By July 1, 2026, every active cell phone number in the country must be biometrically linked to a named, government-credentialed individual or face suspension. That’s around 127 million numbers, each one tethered to an identity the Mexican government can look up by name.

The mobile registration law took effect January 9, 2026, covering prepaid and postpaid plans, physical SIMs, and eSIMs alike. Existing subscribers have until June 30 to complete registration. New lines activated after January 9 get 30 days. Miss the window, and the line goes dark.

The enforcement mechanism runs through the CURP Biométrica, Mexico’s biometric upgrade to its existing population registry code. The new credential embeds a photograph, electronic signature, and QR code that ties directly to biometrically verified records held in the national registry.

Residents registering a mobile line must provide their CURP number alongside a valid government ID, which makes biometric enrollment not optional but structurally required. You cannot register a phone number without first handing your biometric data to the state.

What Mexico is building here is a national phone network where every number has a face attached to it.

Keep reading

Left-wing ideology is being encoded into AI systems to censor “wrongthink”

In 2021, a group of researchers dramatically departed OpenAI, the company behind ChatGPT. Led by Dario Amodei, OpenAI’s former vice president of research, they cited deep concerns about “AI safety.” The company was moving too fast, they warned, prioritising commercial interests over humanity’s future. The risks were said to be existential. These Effective Altruists were going to do things the right way.

Their solution? Start a new company called Anthropic, premised on building AI “the right way” with “safety” (that word will become a recurring theme), and “proper guardrails.” They initially raised hundreds of millions (today, that number is in the tens of billions) from investors who bought the pitch: we’re the good guys preventing runaway artificial general intelligence (“AGI”).

Noble, right? Except these supposed guardrails against AGI have become pretty much impossible to quantify. What we do have is an incredibly sophisticated content moderation system that filters inquiries and commands through a Silicon Valley thought bubble. It doesn’t seem like they’re trying to prevent AGI from destroying humanity, but instead, to prevent you from challenging the core tenets of their political philosophy.

Go ahead and try to generate content questioning climate ideology, the trans agenda, voter ID laws or election integrity, and watch the “safety” guardrails kick in.

This isn’t about preventing Skynet. It’s about making sure AI parrots the right opinions and associates with the right kind of people.

Now that Anthropic is its own technology giant of an AI company, they are facing the same critiques from true believers in the space. Amodei has put his principles on hold to allow for foreign investment from Gulf states with a poor human rights track record. However, the company remains guided by a secular progressive “philosopher” whose values remain entirely detached from America’s founding ideas.

Keep reading

Should People Be Free To Deal With the Department of War?

In a February 27 post titled “You Should Have Moral Qualms about Anthropic’s Claims,” Hoover Institution senior fellow and foreign policy scholar Amy Zegart challenged the ethics of a company named Anthropic. What I found refreshing is that a defense contractor’s CEO had a strong enough belief in his ethics that he was willing to forego a lucrative contract. According to Zegart, I should have moral qualms about that. I don’t and I’ll say why.

Anthropic had told the Department of War that it did not want its products used for either autonomous weapons or mass surveillance of Americans. According to Zegart, the Pentagon stated that it did not contemplate such uses. But that wasn’t enough for Dario Amodei, the CEO of Anthropic, who stated that he could not “in good conscience” accept the War Department’s assurances. Here’s Brendan Bordelon in a February 26 news item in Politico:

[Secretary of War] Hegseth met with Anthropic CEO Dario Amodei on Tuesday to deliver a warning: give the military unfettered access to its Claude AI model by Friday evening or else have the government label it a “risk” to the supply chain. The designation, typically reserved for foreign firms with ties to U.S. adversaries, could ban companies that work with the government from partnering with Anthropic.

Hegseth threatened Anthropic with designating it as a risk to the supply chain. With that label, Anthropic could be forbidden, as noted above, from working with companies that work with the government. Hegseth also, though, threatened to invoke the Defense Production Act to compel Anthropic to work with the Defense Department. A risk to the supply chain and, at the same time, a firm that Hegseth wants to use? Hmmm. Bordelon quotes Dean Ball, whom he identifies as a former AI advisor in the Trump administration, noting the obvious contradiction. Said Ball, “You’re telling everyone else who supplies to the DOD you cannot use Anthropic’s models, while also saying that the DOD must use Anthropic’s models.”

Zegart cites the Politico article but doesn’t mention this contradiction. Instead, she goes after Anthropic and CEO Amodei. She writes:

There is a serious ethical question about whether one company, elected by nobody, with its own normative agenda as well as substantial global investors and customers, should be dictating the conditions of the most essential government role: protecting the lives of Americans.

But she misstates the issue. Anthropic isn’t trying to dictate the conditions of this essential government role. Anthropic is simply stating what its own limits are. If the Pentagon can find another supplier, it is free to do so and, indeed, has already done so. OpenAI has stepped up to take Anthropic’s place.

Moreover, why does Zegart think it’s important that Anthropic is elected by nobody? Does Zegart really think that companies contemplating work with the Department of War should be elected by somebody?

Keep reading

Retired Air Force General Who Led U.S. Military Research Lab That Develops Directed Energy Technology Goes Missing in New Mexico

A retired U.S. Air Force general was reported missing in New Mexico on Friday.

Retired Maj. Gen. William Neil McCasland, who previously commanded the Phillips Research Site at Kirtland Air Force Base in New Mexico, has been reported missing after being last seen on Friday at 11 a.m.

The Phillips Research Site at Kirtland Air Force Base is notable for its research into advanced space and directed-energy weapons technology.

The Bernalillo County Sheriff’s Office, which is leading the search for McCasland, has issued a Silver Alert for his disappearance.

Per The New York Post:

A retired U.S. Air Force general was reported missing in New Mexico, with authorities warning that medical concerns have heightened fears for his safety.

Retired Maj. Gen. William Neil McCasland, 68, was last seen around 11 a.m. Friday near Quail Run Court NE in Albuquerque, the Bernalillo County Sheriff’s Office said.

Officials said they do not know what McCasland was wearing or in which direction he may have traveled. The sheriff’s office has issued a Silver Alert. “Due to his medical issues, law enforcement is concerned for his safety,” the sheriff’s office said.

McCasland was a longtime leader at Kirtland Air Force Base in New Mexico and previously commanded Kirtland’s Phillips Research Site and Air Force Research Laboratory.

Keep reading