After years of hopes (and fears), one of the most storied promises of science fiction has arrived. Artificial intelligence (AI) has already ushered in what some call the fourth industrial revolution, “with endless possibilities for innovation and progress.”
In a December 2024 paper, a pair of Indian researchers wrote that “AI holds immense potential to revolutionize various sectors in the near future than it already has, including healthcare, finance, transportation, and other industries. In healthcare, AI promises early disease diagnosis, personalized patient care, and support for medical professionals in diagnosis and treatment planning. Similarly, AI can assist in fraud detection, stock market predictions, operational automation in finance, and many more.”
Regulatory Responses
As with any technology that offers such huge potential for economic and social disruption, policymakers have started to weigh in. For now, state governments have taken most of the action.
“As of August 2024, 31 states have passed some form of AI legislation,” according to the Cato Institute (a group not exactly known for its love of regulation). “For example, at least 22 have passed laws regulating the use of deepfake images, usually in the scope of sexual or election-related deepfakes, while 11 states have passed laws requiring that corporations disclose the use of AI or collection of data for AI model training in some contexts.”
California and Colorado have led the way so far. Now, Illinois Gov. JB Pritzker has signed a new law that bans the use of AI for direct therapeutic services.
Illinois Takes the Lead
More specifically, HB1806, the Wellness and Oversight for Psychological Resources Act, prohibits the use of AI tools in making therapeutic decisions or delivering psychotherapy.
The legislation allows therapists to rely on AI for help with administrative and supplementary functions. However, it draws a hard line around patient treatment, ensuring that only licensed professionals provide mental health services.
“This legislation stands as our commitment to safeguarding the well-being of our residents,” Secretary of the Illinois Department of Financial and Professional Regulation (IDFPR) Mario Treto Jr. said in a statement. “People deserve care from real, qualified professionals – not AI programs pulling data from the internet.”
Overwhelming Support – With Caveats
Early reaction in the mental health community appears to be mostly supportive.
“Therapy is not just about the words exchanged,” We Are Kaizan Founder and Owner Stephanie High, MA, explained. “It is about tone, timing, and the subtle, nonverbal cues like body language, microexpressions, and pauses that AI cannot read with the nuance a trained human can. Mental health work also requires ethical judgment that is grounded in the standards of the scientific community. Without that in place, AI risks oversimplifying complex cases or missing red flags, particularly in crisis situations.”
That being said, some lament the law’s lack of enforcement teeth.
“The efforts will likely have little impact, as I don’t see the bill having what is needed to truly curtail the problem. The $10,000 fine is a slap on the wrist at best,” therapist Karl Stenske pointed out. “From my reading of the bill there is no criminal liability attached to the practice. For a licensed therapist or counselor, there are liabilities for malpractice that can range from fines to loss of license, to criminal charges. Granted those are fairly extreme cases, but there is an accountability factor that isn’t fully established in this bill.”
High, who has experience in both performance psychology and the tech sector, also sees benefits in AI.
“Outright prohibition may limit access for individuals in rural areas, underserved communities, or on long waitlists for care,” High added. “When implemented with safeguards, AI can be a valuable adjunct, not a replacement, for licensed professionals. It can help with consistency, scheduling support, psychoeducation, and guided self-regulation strategies.”
Driving Factors
The law has made headlines amid growing concern over AI’s role in health care. A story that the Illinois lawmakers cited in their deliberations recounted a chatbot “telling” a fictional former addict to take “a small hit of meth to get through the week.”
That anecdote, among others, drew increased scrutiny to the risks of untrained AI platforms serving as mental health tools.
A joint Illinois House committee revealed in 2024 that AI’s “advice” is only as reliable as the data used to train it. It’s a textbook example of “garbage in, garbage out.” As a result, Illinois lawmakers sought to stop chatbots — and other AI tools — from replacing trained clinicians.
“Unlicensed chatbots are giving dangerous, non-clinical advice to people in moments of crisis,” Rep. Bob Morgan, D-Deerfield, a key sponsor of the bill, explained. “Illinois is putting a stop to those trying to prey on the vulnerable.”
With this legislation, Illinois becomes one of the first states to set explicit regulatory boundaries around the use of AI in mental health treatment. And it almost certainly won’t be the last.
Further Reading
AI Might Actually Change Minds About Conspiracy Theories—Here’s How