If you’re a New Yorker in trouble with the law, it might soon be impossible for you to consult your favorite chatbot for legal advice.
Last week, the New York state Senate Internet and Technology Committee unanimously passed Senate Bill S7263. The bill would hold AI companies liable specifically for harm caused by chatbots performing tasks that, if carried out by a human, would constitute unauthorized practice of a licensed profession, such as providing medical diagnoses or legal counsel.
The bill would also require chatbot deployers, such as OpenAI, Anthropic, and xAI, to “provide clear, conspicuous and explicit notice to users that they are interacting with an artificial intelligence chatbot program.” However, doing so does not allow these companies to disclaim responsibility for the outputs of their chatbots.
Sen. Kristen Gonzalez (D–Queens) introduced the bill last May alongside six others included in the Internet and Technology Committee’s AI legislative package. Gonzalez, who chairs the committee, described the package as “tackl[ing] the urgent need to protect the workforce from their companies’ use of AI.” Despite this comment, Gonzalez frames the bill as protecting the public, not workers.
In the bill’s justification section, Gonzalez cites a warning from the American Psychological Association to the Federal Trade Commission that chatbot therapists could drive vulnerable people to harm themselves or others. While Gonzalez highlights the possible risk of using chatbots for psychological therapy, she conveniently ignores studies that have found that companion chatbot use is associated with substantial reductions in anxiety, depression, and loneliness.
S7263, as currently written, would not just apply to the licensed professions of psychology and mental health services, but to medicine, veterinary medicine, dentistry, physical therapy, pharmacy, nursing, podiatry, optometry, engineering, architecture, and social work as well.
Taylor Barkley, director of public policy at the Abundance Institute, tells Reason the ban is “shortsighted at best and protectionist at worst.” Although “these are all professions and services that require accuracy and accountability,” he says, “AI systems increase quality and lower cost in all these areas.”
S7263 would also hold chatbot deployers liable for chatbots that practice or appear as an attorney-at-law, which includes not only representing clients and handling formal legal matters but also merely offering legal advice.