The White House has published a National AI Legislative Framework, a set of recommendations to Congress intended to govern artificial intelligence with a single uniform standard rather than, as the document puts it, “a patchwork of conflicting state laws.”
The administration wants federal law to preempt the states. That part is straightforward. What the framework actually proposes is less straightforward.
Alongside a genuine free speech provision, the document contains age verification mandates, chat surveillance requirements, national security carve-outs that would tighten the relationship between AI companies and federal intelligence agencies, and an expansion of the TAKE IT DOWN Act, a law that we have already flagged for lacking adequate safeguards against censorship.
The White House is presenting all of this as part of the same coherent package.
Start with the child protection section, which urges Congress to establish “commercially reasonable, privacy protective, age-assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors.” That is age verification on AI platforms. The framework calls these requirements “privacy protective.” They are not.
There is no version of meaningful age verification that doesn’t require collecting sensitive personal data, and there is no version of collecting sensitive personal data at scale that isn’t a breach waiting to happen.
The only tools platforms have are identity-based checks (government IDs, biometric scans, credit card data, or third-party verification services) or biometric estimation.
The only way to prove that someone is old enough to use a site is to collect personal data about who they are.
In October 2025, Discord identified 70,000 users globally whose photo IDs were potentially exposed to hackers.