A federal judge has frozen enforcement of Colorado’s first-in-the-nation AI law before it could take effect on June 30. The statute would have required developers to police their own models for “algorithmic discrimination” and to inform the state of “foreseeable risks.”
Judge Cyrus Y. Chung signed off on a joint request from xAI and Colorado Attorney General Phil Weiser on April 27, putting the law on ice while state lawmakers draft a replacement.
We obtained a copy of the order.
The order was filed in xAI v. Weiser. The state agreed not to enforce SB 24-205 against xAI, or to issue rules under it, until at least 14 days after the court rules on a forthcoming preliminary injunction motion.
The June 16 scheduling conference was cancelled, and the case’s deadlines are suspended.
This is a significant retreat. Colorado spent two years insisting the law was a model for the country, and it was the only state AI statute named in President Trump’s AI executive order last year. Now the state is asking a court to stop the clock while its own governor’s policy group drafts a bill to repeal and replace it.
The law itself is the reason the climbdown looks the way it does. SB 24-205 told developers of “high-risk” AI systems they had to take “reasonable care” to prevent algorithmic discrimination, with one carveout that has done more work in the lawsuit than any other clause: the law exempts discrimination intended to “increase diversity or redress historical discrimination.”
The state forbids one kind of discrimination by an algorithm. It permits, and arguably requires, another. The developer is left to figure out which is which, with the attorney general’s office deciding after the fact.
xAI sued on April 9, calling the statute a First Amendment problem dressed up as consumer protection. The company’s complaint is more blunt than most filings of this kind. “SB24-205 is decidedly not an anti-discrimination law,” the company’s attorneys wrote. “It is instead an effort to embed the State’s preferred views into the very fabric of AI systems.”
The argument is that Colorado isn’t regulating outputs neutrally. It’s choosing which viewpoints an AI model is allowed to produce, then enforcing the choice through “onerous policy, assessment, and disclosure requirements,” in the words of the Justice Department’s filing.
The DOJ moved to intervene on xAI’s side, the first time the federal government has joined a constitutional challenge to a state AI regulation.