The Role of Artificial Intelligence (AI) in Therapy: Promise, Ethics, and Responsibility

The following is a comprehensive review of AI in therapy. Are you using AI in your practice? Join the discussion below to find out how other practitioners are navigating this exciting (and precarious) time.
Artificial intelligence (AI) has entered nearly every part of modern life—and mental health care is no exception. From transcription tools that draft progress notes to digital companions that provide cognitive-behavioral support, therapists are increasingly encountering AI in clinical work.
The technology holds real promise: less time on paperwork, greater access to care, and earlier detection of risk. But it also raises serious questions about privacy, informed consent, and what it means to protect the therapeutic relationship in a digital world.
This post explores the current state of AI in therapy, its benefits and limitations, the ethical and legal issues at stake, and how clinicians can integrate new tools safely and responsibly.
What AI Looks Like in Therapy Settings
AI in psychotherapy refers to systems that can analyze clinical data, predict outcomes or risk, or generate human-like responses. Common examples include:
- Transcription and documentation tools that record and summarize therapy sessions (e.g., ambient note-taking software).
- Risk detection algorithms that identify potential crises or suicidality based on language or behavior.
- Digital mental health assistants or chatbots that deliver guided CBT or mindfulness support.
- Data analytics tools that detect symptom patterns or treatment progress over time.
In most cases, these tools process highly sensitive client information. That makes therapist oversight, informed consent, and compliance with privacy regulations absolutely essential.
Potential Benefits
When used thoughtfully, AI in therapy practices can be a genuine asset.
Efficiency and documentation support: Early research suggests that AI-assisted note-taking can reduce time spent on documentation by 30–40%, freeing clinicians for more direct client work (APA, 2023).
Access to care: Digital CBT and other AI-supported interventions have been shown to reduce symptoms of mild-to-moderate anxiety and depression, especially when clinician-supported (Firth et al., 2019, World Psychiatry).
Early detection and prevention: Predictive models can flag subtle changes in speech, affect, or behavior that may indicate risk for relapse or crisis (Jacobson et al., 2022, JAMA Psychiatry).
Augmented—not replaced—care: The best applications of AI are those that support clinical decision-making, not replace it. Therapists remain central in interpreting results and maintaining empathy, context, and ethical judgment.
Ethical Responsibilities and Client Consent
Ethical use of AI in therapy requires transparency, competence, and consent. The APA Ethics Code (2017) and NASW Code of Ethics (2021) both require clinicians to explain how technology is used and to obtain informed, voluntary consent.
Clients should know:
- If sessions are recorded, transcribed, or analyzed by AI.
- How and where their data are stored.
- Who has access to that information and for how long.
- What risks and safeguards are in place.
Consent should be explicit, written, and revocable at any time. Clients must be able to opt out without losing access to care.
Therapists are also responsible for verifying that any tool used is HIPAA-compliant and that a Business Associate Agreement (BAA) is in place with the vendor. This ensures legal accountability for how client data are handled, encrypted, and stored.
Understanding the Legal Landscape
AI systems often rely on recording or transcribing sessions. This means they fall under federal and state laws governing recordings and medical privacy.
Under HIPAA and HITECH, audio and video recordings of sessions are considered protected health information (PHI). They must be encrypted, securely stored, and only accessible to authorized users.
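To make the encryption requirement concrete, here is a minimal Python sketch of what encrypting a recording at rest can look like. It assumes the open-source cryptography package and uses a stand-in for the audio data; a real deployment would rely on a vetted, HIPAA-compliant storage platform with keys held in a managed key vault, not a hand-rolled script.

```python
# Minimal sketch only: encrypting a session recording at rest with the
# open-source `cryptography` package. A real deployment would use a vetted,
# HIPAA-compliant platform and a managed key vault, not a hand-rolled script.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, load from a key manager; never store beside the data
cipher = Fernet(key)

recording = b"...raw session audio bytes..."  # stand-in for a real file's contents
ciphertext = cipher.encrypt(recording)        # only the ciphertext is written to disk or the cloud

# Only a holder of the key can recover the audio.
assert cipher.decrypt(ciphertext) == recording
```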
State laws vary on consent for recording: some states require only one-party consent (the therapist), while others—like California, Illinois, and Florida—require consent from all parties. If AI tools are used for transcription, they qualify as recordings under most state statutes, meaning clients must provide written consent.
If recordings are used for supervision or training, clinicians must specify who will access the material, for what purpose, and for how long it will be kept.
Risks and Limitations
While the promise of AI is clear, the risks cannot be ignored.
Privacy and confidentiality: Cloud-based AI tools can introduce vulnerabilities. Even “anonymized” data can sometimes be re-identified through pattern matching.
Algorithmic bias: AI models trained on limited or non-diverse data may produce biased interpretations, particularly for clients from marginalized backgrounds.
Over-reliance: Therapists might begin trusting algorithmic feedback over clinical judgment, risking depersonalized care.
Therapeutic presence: If a client knows their words are being analyzed by AI, it may alter how open they feel during sessions. Transparency about purpose and limits can help reduce this impact.
Legal and ethical exposure: Using non-compliant or unapproved tools can violate privacy laws or professional ethics, leading to serious liability.
Best Practices for Clinicians
AI in therapy is not inherently unethical—it depends on how it’s used. Ethical, responsible implementation requires careful planning and oversight.
- Obtain informed written consent before any AI tool is used.
- Ensure HIPAA compliance and a valid Business Associate Agreement (BAA).
- Be transparent about risks, data handling, and limitations.
- Use AI as a support, not a substitute, for therapeutic judgment.
- Keep up to date with APA and state licensing board guidance.
- Reassess consent regularly, especially if new technology is introduced.
- Avoid feeding identifiable client data into systems that learn or adapt (e.g., generative AI) unless privacy is contractually protected; see the sketch after this list.
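On that last point, the Python sketch below illustrates what client-side redaction might look like before any text leaves the clinician's machine. The patterns and the redact_for_ai_tool helper are hypothetical illustrations, not a HIPAA-grade de-identification pipeline; a vetted tool, a BAA, and human review would still be required.

```python
# A minimal sketch of pre-submission redaction. NOT a substitute for a
# compliant de-identification process; patterns and helper are hypothetical.
import re

# Simple patterns for obvious identifiers. Real PHI scrubbing covers far
# more (names, dates, addresses, record numbers) and needs human review.
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_for_ai_tool(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens before the
    text is ever sent to a third-party AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Client called from 555-867-5309 and emailed jane.doe@example.com."
print(redact_for_ai_tool(note))
# -> Client called from [PHONE] and emailed [EMAIL].
```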
The Current State of AI in Therapy
The regulatory landscape is still developing. While the World Health Organization (2021) and the U.S. Department of Health and Human Services (2023) have both issued guidance, there are no unified federal standards for how AI should be deployed in clinical mental health.
Major organizations, including the American Psychological Association, continue to emphasize that AI tools must always serve as adjuncts to human care, not autonomous providers.
This is a moment of opportunity—but also of responsibility. The decisions made by therapists today about privacy, transparency, and ethical use will shape the trust and credibility of AI in clinical practice for years to come.
A Human-Centered Future
The essence of therapy is human connection. Using AI in therapy may enhance that work, but it cannot replicate empathy, intuition, or presence. Used wisely, these tools can lighten administrative burdens, improve continuity of care, and even prevent crises—but they must never replace the therapist’s ethical and emotional role.
As AI evolves, therapists have a critical voice in shaping its application—insisting that technological innovation serve the same goal that has always defined mental health care: to help people feel seen, safe, and understood.
Additional Resources
World Health Organization (2021) – Ethics and Governance of Artificial Intelligence for Health: https://www.who.int/publications/i/item/9789240029200
National Institutes of Health (NIH) – Artificial Intelligence in Mental Health Research: https://www.nimh.nih.gov/news/science-news/2023/artificial-intelligence-in-mental-health-research
Frontiers in Psychiatry (2023) – The Role of Artificial Intelligence in Psychotherapy: https://www.frontiersin.org/articles/10.3389/fpsyt.2023.1200334/full
References
- American Psychological Association. (2017). Ethical Principles of Psychologists and Code of Conduct.
- American Psychological Association. (2023). Guidelines for the Use of Technology in Psychological Practice.
- Centers for Medicare & Medicaid Services. (2023). HIPAA Privacy and Security Rules.
- Firth, J., Torous, J., Nicholas, J., et al. (2019). The efficacy of smartphone-based mental health interventions: A meta-analysis of randomized controlled trials. World Psychiatry, 18(3), 325–336.
- Jacobson, N. C., et al. (2022). Predicting suicide and mental health crises using digital phenotyping: A review. JAMA Psychiatry, 79(3), 261–273.
- Maheu, M. M., Drude, K. P., Hertlein, K. M., & Wall, K. (2022). A Practitioner's Guide to Telemental Health: How to Conduct Legal, Ethical, and Evidence-Based Telepractice. American Psychological Association.
- Mikolajczyk, T., et al. (2023). The role of artificial intelligence in psychotherapy: Promise and pitfalls. Frontiers in Psychiatry, 14, 1200334.
- U.S. Department of Health and Human Services. (2023). AI and Data Policy Framework for Health and Human Services.
- World Health Organization. (2021). Ethics and Governance of Artificial Intelligence for Health.