UK Consults on Social Media Age Verification While Directing Parents to Report “Hate Speech” to Big Tech

The British government launched a consultation this week that could require age verification for anyone using social media, gaming sites, or AI chatbots.

The consultation, titled “Growing up in the online world,” opened on March 2nd and closes May 26, 2026. It asks the public whether the government should ban under-16s from social media entirely, impose mandatory overnight curfews on platform access, restrict AI chatbot features for minors, and require platforms to disable “addictive design features” like infinite scrolling and autoplay.

The government says it will respond in summer 2026, and Parliament has already handed ministers new legal powers to act on the findings without waiting for fresh primary legislation.

The Prime Minister announced those powers on February 16, weeks before the consultation even opened. The government can now move faster once it decides what it wants. What the public thinks determines the packaging, not the destination.

Technology Secretary Liz Kendall framed it this way: “The path to a good life is a great childhood, one full of love, learning, and play. That applies just as much to the online world as it does to the real one.”

The actual policy tools being considered are a different matter.

Age verification, as a mechanism, works by proving identity. Every user proves who they are.

A social media platform that must exclude under-16s must verify the age of its over-16s. That means collecting identity documents, linking browsing activity to real identities, or building infrastructure that a government can later compel to serve other purposes.

The surveillance architecture required to enforce a children’s safety law is the same architecture required to surveil adults. It gets built for one reason. It gets used for others.

Then there’s the “Help your child stay safe online” campaign site, which the government launched alongside the consultation. The site includes a page directing parents to report “bullying, threats, harassment, hate speech, and content promoting self-harm or suicide” directly to platforms, with links to the reporting tools of Instagram, Snapchat, Facebook, WhatsApp, TikTok, Discord, YouTube, and Twitch.

Keep reading

This Bill in New York State Would Protect Lawyers From AI Competition

If you’re a New Yorker in trouble with the law, it might soon be impossible for you to consult your favorite chatbot for legal advice.

Last week, the New York state Senate Internet and Technology Committee unanimously passed Senate Bill S7263. The bill would hold AI companies liable specifically for harm caused by chatbots performing tasks that, if carried out by a human, would constitute unauthorized practice of a licensed profession, such as providing medical diagnoses or legal counsel.

The bill would also require chatbot deployers, such as OpenAI, Anthropic, and xAI, to “provide clear, conspicuous and explicit notice to users that they are interacting with an artificial intelligence chatbot program.” However, doing so does not allow these companies to disclaim responsibility for the outputs of their chatbots.

Sen. Kristen Gonzalez (D–Queens) introduced the bill last May alongside six others included in the Internet and Technology Committee’s AI legislative package. Gonzalez, who chairs the committee, described the package as “tackl[ing] the urgent need to protect the workforce from their companies’ use of AI.” Despite this comment, Gonzalez frames the bill as protecting the public, not workers.

In the bill’s justification section, Gonzalez cites a warning from the American Psychological Association to the Federal Trade Commission that chatbot therapists could drive vulnerable people to harm themselves or others. While Gonzalez highlights the possible risk of using chatbots for psychological therapy, she conveniently ignores studies that have found that companion chatbot use is associated with substantial reductions in anxiety, depression, and loneliness.

S7263, as currently written, would not just apply to the licensed professions of psychology and mental health services, but to medicine, veterinary medicine, dentistry, physical therapy, pharmacy, nursing, podiatry, optometry, engineering, architecture, and social work as well.

Taylor Barkley, director of public policy at the Abundance Institute, tells Reason the ban is “shortsighted at best and protectionist at worst.” While “these are all professions and services that require accuracy and accountability…AI systems increase quality and lower cost in all these areas.”

S7263 would also hold chatbot deployers liable for chatbots that practice law or appear as an attorney-at-law, which includes not only representing clients and handling formal legal matters but also merely offering legal advice.

Keep reading

Pentagon in Talks on Buying Ukrainian-Made Interceptors To Counter Iranian Drones

The Pentagon is in talks to purchase Ukrainian-made interceptors to counter Iranian drones, the Financial Times has reported, as senior US officials have told Congress that US forces in the Middle East are having more trouble intercepting Iran’s drones than expected.

The report said that at least one Gulf country was also in talks on acquiring Ukrainian-made drone interceptors, as it has been using advanced US Patriot missiles, which cost more than $4 million apiece. The Ukrainian systems are much cheaper and have been designed to counter the Russian version of Iran’s Shahed drones.

One Ukrainian official said that the talks were “sensitive” but that it was “obvious that there is a surge in interest in the Ukrainian drone interceptors, which can intercept the Shahed for a very low cost.” The Ukrainian drone interceptors cost a few thousand dollars apiece, while each Iranian Shahed drone they stop is estimated to cost about $30,000.

Ukrainian President Volodymyr Zelensky said earlier this week that he has been in contact with Qatar and the UAE about the use of Ukraine’s anti-drone systems, though he expressed concern about Ukraine’s own stockpile.

“Ukraine’s expertise in countering ‘Shahed’ drones is currently the most advanced in the world,” he said. “However, any such co-operation aimed at protecting our partners can only proceed without diminishing our own defence capabilities.”

Ukraine’s anti-drone technology, which includes smaller drones and anti-drone guns, has struggled to intercept the Geran-3, a Russian-produced drone based on Iran’s Shahed-238 and powered by a jet engine.

Defense Secretary Pete Hegseth and Chairman of the Joint Chiefs of Staff Dan Caine told lawmakers this week that US forces will not be able to intercept all of Iran’s drones and that more US casualties are expected.

Since the US and Israel launched the war against Iran on Saturday, US Central Command has confirmed the deaths of six American soldiers, who were killed by an Iranian drone that hit a makeshift operations center in Kuwait. According to media reports, they had no notice or warning to evacuate before the drone struck.

Keep reading

Proposed Bill On Automated License Plate Readers Would Safeguard Data For Illegals, Gender-Affirming Care, Abortions

Today the Judiciary Committee introduced a bill to regulate the use of automated license plate reader systems and safeguard data derived from such systems.

H.B. No. 5449, An Act Concerning Automated License Plate Reader Systems, would dictate how public agencies and law enforcement could operate automated license plate reader systems or use the data from such systems.

Data collected by these systems cannot be retained for more than seven days unless there’s a warrant or a court order, or if the data is for the purpose of collecting highway usage fees.

The bill describes a number of restrictions for usage of data collected by automated license plate reader systems.

No public agency or law enforcement agency operating a system may use the data for:

  • Monitoring or investigating an individual based on an individual’s actual or perceived race, ethnicity, criminal history, sexual orientation, gender identity or expression, sex, pregnancy status, disability, citizenship, nationality or income level;
  • Identifying individuals engaged in activities protected under the First Amendment to the United States Constitution;
  • Investigating a suspected immigration violation or otherwise assisting in any civil or criminal immigration enforcement activity;
  • Investigating or prosecuting any individual who has sought, received, or provided reproductive health care services or gender-affirming health care services;
  • Collecting data on the premises of or nearby a reproductive or sexual health facility, facilities that provide gender-affirming care services or a nonprofit or community organization that primarily serves immigrants (which presumably includes illegals), excluding any property under federal jurisdiction;
  • Sharing with other individuals or entities, except under certain circumstances;
  • Participating in any multistate, intrastate, or national data-sharing system or network, except under certain conditions; or
  • Permitting a public agency to have real-time, bulk or automatic access, except in specific cases.

The bill also says automated license plate reader data “shall not be disclosable under the Freedom of Information Act pursuant to chapter 14 of the general statutes.” Agencies would still disclose the locations of any still or video image recording devices used as part of an automated license plate reader system, along with data derived from audits of the system, usage logs, and the like, so long as all automated license plate reader data has been redacted.

There would also be limits on contracts or agreements with private vendors that might interact with automated license plate reader systems and data to restrict them from selling, sharing, transferring, disseminating or otherwise providing access to the data, except as authorized in the bill.

Agencies could be sued for failing to follow HB 5449.

Keep reading

AI-Generated Art Can’t Receive Copyright Protection After Supreme Court Declines Case

The advancement of AI-generated art suffered a crucial blow this week when the Supreme Court left in place a lower-court ruling that such works cannot be protected under U.S. copyright law.

The original plaintiff, a computer scientist from Missouri named Stephen Thaler, appealed to the Supreme Court after “lower courts upheld a U.S. Copyright Office decision that the AI-crafted visual art at issue in the case was ineligible for copyright protection because it did not have a human creator,” per Reuters.

Thaler, of St. Charles, Missouri, applied for a federal copyright registration in 2018 covering “A Recent Entrance to Paradise,” visual art he said his AI technology “DABUS” created. The image shows train tracks entering a portal, surrounded by what appears to be green and purple plant imagery.

The Copyright Office rejected his application in 2022, finding that creative works must have human authors to be eligible to receive a copyright.

According to The Verge, the U.S. Copyright Office issued new guidance last year saying that AI-generated works could not enjoy copyright protection, potentially destroying the profitability of works created from text prompts with no original source material. Thaler had also tried to patent his AI-generated works, an effort that has likewise faced several legal challenges.

“The US federal circuit court similarly determined that AI systems can’t patent inventions because they aren’t human, which the US Patent Office reaffirmed in 2024 with new guidance, stating that while AI systems can’t be listed as inventors on a patent, people can still use AI-powered tools to develop them,” noted The Verge.

Thaler’s lawyers argued that the Supreme Court’s rejection would likely hurt the advancement of AI-generated artworks.

“Even if it later overturns the Copyright Office’s test in another case, it will be too late,” Thaler’s lawyers claimed. “The Copyright Office will have irreversibly and negatively impacted AI development and use in the creative industry during critically important years.”

Without copyright protection, AI-generated works fall into the public domain, allowing anyone to copy, sell, or use them, essentially destroying the potential to create commercial intellectual property from such works.

Keep reading

FBI Joins Search for Missing Retired Air Force General Who Led U.S. Military Research Lab That Develops Directed-Energy Technology

The FBI has now joined the search for retired Maj. Gen. William Neil McCasland, who was reported missing in New Mexico.

As The Gateway Pundit previously reported, retired Maj. Gen. McCasland, who previously commanded the Phillips Research Site at Kirtland Air Force Base in New Mexico, has been reported missing since Friday.

McCasland served three years as the commander of the Phillips Research Site at Kirtland Air Force Base in New Mexico, which is notable for its directed-energy weapons and advanced space technologies.

The investigation into McCasland’s disappearance is being led by the Bernalillo County Sheriff’s Office, and on Tuesday afternoon, the office announced it has partnered with the FBI Albuquerque Field Office.

In a post on X, the Bernalillo County Sheriff’s Office stated, “Due to his background and established partnerships, BCSO is coordinating closely with multiple agencies, including the FBI Albuquerque Field Office, which is assisting as standard practice when it has a tool, tactic, or technique that may benefit the investigation. BCSO remains the lead agency.”

Keep reading

X CRACKS DOWN on AI-Generated War Propaganda: NO MORE Cashing In on Fake Footage Without Labels

In an effort to protect truthful information during global conflicts, X has rolled out strict new rules targeting creators who peddle AI-generated videos of war without clear disclosures. This comes as pro-Iran propagandists flooded the platform with fabricated clips designed to sow chaos.

The policy shift, effective immediately, focuses on X’s Creator Revenue Sharing program. Creators posting AI-made content depicting armed conflicts must include a clear label or face penalties that hit where it hurts: their wallets.

According to details from X’s head of product, Nikita Bier, the platform is clamping down hard. “During times of war, it is critical that people have access to authentic information on the ground. With today’s AI technologies, it is trivial to create content that can mislead people,” Bier stated.

Bier elaborated that X will use its Community Notes system—crowdsourced fact-checking that empowers users over elite moderators—along with post metadata to detect undeclared AI content. The rules don’t ban AI videos outright; they just demand transparency via X’s built-in “Made with AI” label option.

Violators get a 90-day suspension from earning ad revenue on their posts. A second offense leads to a permanent ban from the program. This targets those exploiting wars for profit, without stifling creative expression.
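The escalation rule described above can be written out as policy-as-code. The function name and return strings here are illustrative assumptions, not X’s actual implementation; the sketch only encodes the penalty ladder the platform has announced:

```python
def revenue_penalty(prior_offenses: int) -> str:
    """Penalty under X's Creator Revenue Sharing rules for posting
    unlabeled AI-generated conflict footage, per the announced ladder:
    first offense, a 90-day monetization suspension; any repeat,
    permanent removal from the program."""
    if prior_offenses == 0:
        return "90-day revenue suspension"
    return "permanent removal from revenue sharing"
```

The one-strike jump from a timed suspension to a permanent ban is the notable design choice: there is no graduated middle tier for repeat offenders.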

The update follows a surge in deceptive content amid the Iran-Israel clash. Pro-Iran accounts have pushed AI fakes, like one claiming Iranian missiles sank the USS Abraham Lincoln. “Iran’s IRGC claims to have struck USS Abraham Lincoln with ballistic missiles. LIE,” CENTCOM posted. “The Lincoln was not hit. The missiles launched didn’t even come close.”

Keep reading

Scientists warn against crappy age verification: ‘if implemented without careful consideration… the new regulation might cause more harm than good’

As age verification becomes more commonplace across the web, some are opposing its rollout on security and privacy grounds. An open letter signed by over 400 researchers and scientists, laying out the many reasons why age verification (and especially the current age assurance technology) isn’t all it’s cracked up to be, is now available to read in full.

Here’s a précis of the whole thing: Governments across the world are adopting legislation mandating the use of, or compliance with, age assurance methods in the name of keeping kids off the bad parts of the web. That sounds like a good idea until you look into the details, which suggest these mandates are often haphazardly applied, with little regard for privacy and data protection.

The open letter outlines a few key arguments:

How easily age verification can be bypassed. This was evident with Discord’s age verification, provided by K-id, which could be bypassed using Sam’s face in Death Stranding. As the open letter points out, it’s possible to lie about one’s age, trick a system, or buy age-verified credentials online. VPNs are also widely available and prove an easy way to bypass any and all age assurance methods, even if access to said VPNs is age-restricted.

How unreliable age estimation can be. All while potentially necessitating large-scale and invasive data collection or widespread use of government IDs at every online interaction for any semblance of effectiveness. As the letter notes, “We conclude that age assessment presents an inherent disproportionate risk of serious privacy violations and discrimination, without guarantees of effectiveness.”

How it necessitates a global trust infrastructure. Building such an infrastructure is one of the main goals of the EU’s digital identity wallet, though only at a pan-EU scale, intended as a common foundation for age assurance across all member states. Yet as the letter suggests, “even if such a trust infrastructure would exist, checks can be circumvented by acquiring valid certificates or using VPNs, as long as age assurance regulations are not universally enforced by all affected services.”

How it can push users to lesser-known, potentially dangerous websites. By enforcing age assurance, and with the larger, more responsible websites complying, there is a chance of pushing users to lesser-known, potentially dangerous or scam websites. Following the rollout of the UK’s Online Safety Act, one of the first investigations it launched was into porn websites that did not immediately comply with the new rules for age verification checks. Other websites chose to turn off services to the UK altogether.

Keep reading

California Law Forces Age-Tracking Into Every Operating System by 2027

California wants to build a surveillance layer into every device its residents touch. Assembly Bill 1043, signed by Governor Gavin Newsom and taking effect January 1, 2027, requires every operating system provider to collect age information from users at account setup and broadcast that data to app developers through a real-time API.
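A minimal sketch of what such an OS-level age signal amounts to, assuming the age brackets reported for AB 1043 (under 13, 13–15, 16–17, and 18+); the type and function names are illustrative, not the law’s actual interface:

```python
from enum import Enum


class AgeBracket(Enum):
    """Age categories reported for AB 1043 (names are illustrative)."""
    UNDER_13 = "under_13"
    TEEN_13_15 = "13_15"
    TEEN_16_17 = "16_17"
    ADULT = "18_plus"


def bracket_for_age(age: int) -> AgeBracket:
    """Map a self-reported age collected at account setup to the bracket
    an operating system would expose to app developers via the signal."""
    if age < 13:
        return AgeBracket.UNDER_13
    if age < 16:
        return AgeBracket.TEEN_13_15
    if age < 18:
        return AgeBracket.TEEN_16_17
    return AgeBracket.ADULT
```

However the signal is packaged, the structural point stands: every app on the device can query a piece of identity data the OS was compelled to collect.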

Windows, macOS, Android, iOS, Linux distributions, Valve’s SteamOS: if it runs an operating system, it’s covered by this overreaching law.

The proposals are particularly dumb for open-source Linux operating systems. Linux exists specifically because some people want computing that doesn’t surveil them. That’s not incidental to why the platform exists; it’s foundational.

Distributions like Arch, Debian, and Gentoo have no centralized account infrastructure by design. Users download ISOs from mirrors, modify source code freely, and run systems that report to nobody.

Keep reading

Mexico Mandates Biometric SIM Registration for All Phone Numbers

Anonymous prepaid SIM cards are dying in Mexico. By July 1, 2026, every active cell phone number in the country must be biometrically linked to a named, government-credentialed individual or face suspension. That’s around 127 million numbers, each one tethered to an identity the Mexican government can look up by name.

The mobile registration law took effect January 9, 2026, covering prepaid and postpaid plans, physical SIMs, and eSIMs alike. Existing subscribers have until June 30 to complete registration. New lines activated after January 9 get 30 days. Miss the window, and the line goes dark.

The enforcement mechanism runs through the CURP Biométrica, Mexico’s biometric upgrade to its existing population registry code. The new credential embeds a photograph, electronic signature, and QR code that ties directly to biometrically verified records held in the national registry.

Residents registering a mobile line must provide their CURP number alongside a valid government ID, which makes biometric enrollment not optional but structurally required. You cannot register a phone number without first handing your biometric data to the state.

What Mexico is building here is a national phone network where every number has a face attached to it.

Keep reading