Navigating the APA’s AI Ethical Guidance

As a practicing clinician who has witnessed firsthand the evolution of our field, I view the American Psychological Association’s Ethical Guidance for AI in the Professional Practice of Health Service Psychology through both clinical and leadership lenses. This guidance arrives at a crucial moment—not as a restriction on innovation, but as a framework that validates what many of us in clinical practice have been advocating for: technology that respects the sacred nature of the therapeutic relationship.

In Short...

The APA offers ethical guidance for using AI in psychology. Key points: be transparent with clients, guard against bias, protect data privacy, validate AI tools, maintain human oversight, and understand legal responsibilities. AI should support—not replace—professional judgment. Read on for more.

The Clinical Reality

In my years of practice, I’ve seen how administrative burdens can erode the time we have for what matters most—connecting with and helping our patients. When the APA reports that 10% of practitioners are already using AI for administrative tasks, I’m not surprised. What concerns me is ensuring we’re using these tools in ways that enhance, rather than compromise, the quality of care.

The guidance speaks directly to the tensions many clinicians feel. We want efficiency, but not at the cost of accuracy. We seek innovation, but not if it undermines the trust our patients place in us.

The Primacy of Informed Consent

The APA’s emphasis on transparent informed consent reflects a fundamental truth about therapeutic relationships: they’re built on trust and transparency. Patients have the right to understand every aspect of their care, including when and how AI tools are involved. This isn’t bureaucracy—it’s respect for patient autonomy and an extension of the collaborative approach that defines good therapy.

Clinical Judgment Remains Supreme

What heartens me most about the guidance is its clear stance that AI should augment, not replace, clinical judgment. As clinicians, we bring years of training, intuition, and human understanding that would be difficult for an algorithm to fully replicate. The guidance affirms that we must remain the “conscious oversight” for any AI-generated content or recommendations.

Accuracy as an Ethical Imperative

The APA’s call for critical evaluation of AI outputs aligns with our professional obligation to “do no harm.” Every note we write, every assessment we make, becomes part of a patient’s story. We cannot abdicate our responsibility to ensure that story is told accurately and with integrity.

What This Means for Clinical Practice

From a clinical perspective, implementing these guidelines requires us to:

Maintain Our Clinical Voice:

Whether using AI for documentation or assessment support, we must ensure that our clinical reasoning and the unique understanding we have of each patient remain central to all records and decisions.

Protect the Therapeutic Space:

The therapy room—whether physical or virtual—must remain a sanctuary. Any technology we introduce should enhance the sense of safety and confidentiality that makes healing possible.

Consider Diverse Populations:

The guidance reminds us to be vigilant about how AI tools may differentially impact various populations. As clinicians, we must advocate for tools that are tested across diverse groups and remain alert to potential biases.

Embrace Continuous Learning:

Just as we pursue continuing education in clinical techniques, we must commit to understanding the tools we use. This isn’t about becoming technologists—it’s about maintaining competence in our evolving field.

The Opportunity Before Us

The APA’s guidance doesn’t close doors; it opens them responsibly. I see opportunities to:

  • Reduce the documentation burden that keeps us at our desks instead of with patients
  • Enhance our ability to track treatment progress and outcomes
  • Support clinical decision-making with evidence-based insights
  • Extend quality mental healthcare to underserved communities

But each of these opportunities must be pursued with clinical wisdom and ethical clarity.

A Personal Reflection

I entered this field because I believe in the transformative power of human connection. Nothing in the APA’s guidance changes that fundamental truth. Instead, it challenges us to ensure that as we adopt new tools, we do so in service of that connection.

I’ve seen too many technological promises in healthcare fall short because they were designed without clinical input or implemented without clinical wisdom. The APA’s guidance helps ensure we don’t repeat those mistakes in mental health.

Moving Forward as a Clinical Community

As clinicians, we have a unique responsibility in this moment. We must:

  • Share our experiences openly, both successes and concerns
  • Advocate for our patients’ needs in the development of AI tools
  • Hold ourselves and our tools to the highest ethical standards
  • Remember that behind every algorithm is a human being seeking help

To My Fellow Clinicians

I know many of you approach AI with a mixture of hope and hesitation. That’s appropriate. The APA’s guidance gives us permission to be thoughtful, to ask hard questions, and to demand that any tool we use meets the ethical standards we’ve sworn to uphold.

This isn’t about resisting change—it’s about shaping it. We have the opportunity to ensure that AI in mental healthcare develops in ways that honor our professional values and serve our patients’ best interests.

The therapeutic relationship has survived and adapted through many changes in our field. With the APA’s ethical guidance as our North Star, I’m confident we can navigate this new frontier while keeping that relationship at the heart of everything we do.

After all, in a world of increasing technological complexity, the simple act of one human being helping another remains as powerful—and as necessary—as ever.

Ethical Implementation of AI in Mental Healthcare: A Practical Guide

In a recent article published by The AI Journal, the conversation around AI in mental healthcare takes an essential turn—focusing not only on its transformative potential, but on how to implement these tools responsibly. As clinicians adopt AI to improve efficiency and outcomes, ethical principles like transparency, equity, and patient autonomy must remain central to the process. This guide emphasizes that ethical implementation isn’t a one-time decision, but a continuous journey that requires trusted partners and thoughtful oversight. Ultimately, AI should enhance—not replace—the deeply human nature of mental healthcare.
Read the full article here.

The Critical Role of Model Cards When Selecting an AI Vendor for Behavioral Health and Pharma

[Image: model cards for AI vendors showing performance metrics across populations]

In today’s healthcare landscape, model cards for AI vendors have become essential documentation when selecting technology partners for behavioral health and pharmaceutical applications. These comprehensive documents provide transparent details about AI models’ performance, training data, and limitations—critical information for healthcare organizations making high-stakes technology decisions that impact patient care.

What Are Model Cards and Why Do They Matter?

Model cards serve as transparent documentation for machine learning models, detailing their performance characteristics, training data, intended use cases, and limitations. First proposed by researchers at Google in 2019, model cards have quickly become a best practice in responsible AI development.

For behavioral health and pharmaceutical applications, where decisions directly impact patient care, model cards aren’t just nice-to-have documentation—they’re essential safeguards that provide critical information about the algorithms making or supporting clinical decisions.

Key Elements of Strong Model Cards in Healthcare AI

When evaluating AI vendors for behavioral health or pharmaceutical applications, look for model cards that include the following elements (a minimal schema sketch in code follows the list):

  • Intended Use and Clinical Context: Clear explanation of what the model is designed to do, and importantly, what it’s not designed to do.
  • Training Data Demographics: Details about the populations represented in the training data—particularly important for ensuring models work across diverse patient populations.
  • Performance Metrics: Specificity and sensitivity measurements, both overall and for specific demographic groups.
  • Validation Methodology: How the model was validated, including any peer-reviewed research or clinical studies.
  • Limitations and Constraints: Transparent acknowledgment of the model’s limitations and potential failure modes.
  • Bias Evaluation: Assessment of potential biases in the model and steps taken to mitigate them.
  • Regulatory Status: Information about FDA registration or other regulatory frameworks the model complies with.
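
To make these elements concrete, here is a minimal sketch of how an organization might capture a vendor’s model card internally for side-by-side comparison. The schema and field names are illustrative assumptions for this post, not a standard; CHAI and the original Google model card paper each define their own formats.

```python
from dataclasses import dataclass, field

@dataclass
class SubgroupMetrics:
    """Performance for one demographic subgroup (illustrative fields)."""
    group: str           # e.g., "adults 18-29" (hypothetical label)
    sensitivity: float
    specificity: float
    auc: float

@dataclass
class ModelCard:
    """Minimal schema mirroring the elements listed above."""
    intended_use: str                  # what the model is, and is not, designed to do
    training_data_demographics: str
    overall_metrics: SubgroupMetrics
    subgroup_metrics: list[SubgroupMetrics] = field(default_factory=list)
    validation_methodology: str = ""   # e.g., peer-reviewed or IRB-approved studies
    limitations: str = ""
    bias_evaluation: str = ""
    regulatory_status: str = ""        # e.g., FDA registration status, if any
```

A structure like this makes gaps obvious: if a vendor cannot populate subgroup_metrics or bias_evaluation, that absence is itself useful information.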

Real-World Example: Behavioral Health Assessment Models

Consider a vendor offering AI models that analyze video responses to detect signs of depression. A comprehensive model card would specify:

  • The model predicts PHQ-9 equivalent scores based on facial expressions, voice tone, and natural language analysis
  • Training included data from 10,000+ individuals across diverse demographic groups
  • Overall performance metrics (e.g., AUC: 0.89) with breakdowns for different populations
  • Independent validation through IRB-approved studies
  • Lower accuracy rates for certain populations with smaller representation in training data
  • Not intended for standalone diagnosis, but as a screening aid for clinicians

This level of transparency enables healthcare organizations to make informed decisions about whether a particular AI solution aligns with their clinical needs, patient populations, and ethical standards.
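
For readers who want to see what those population breakdowns look like in practice, below is a minimal sketch of computing overall and per-group AUC from a validation set. The column names and tiny inline dataset are hypothetical; a real evaluation would use the vendor’s validation data under an appropriate protocol.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical validation set: binary screening labels (e.g., PHQ-9 >= 10),
# the model's risk scores, and a demographic grouping column.
df = pd.DataFrame({
    "label": [1, 0, 1, 0, 1, 0, 1, 0],
    "score": [0.91, 0.20, 0.75, 0.35, 0.45, 0.50, 0.85, 0.15],
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
})

# Overall discrimination, then the per-group breakdown a strong
# model card should report alongside it.
print("overall AUC:", roc_auc_score(df["label"], df["score"]))
for name, subset in df.groupby("group"):
    print(f"group {name} AUC:", roc_auc_score(subset["label"], subset["score"]))
```

A gap between the overall number and any subgroup’s number is exactly the kind of limitation a model card should disclose.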

The Coalition for Health AI (CHAI) has produced a strong example of what a model card can contain to address transparency, safety, security & privacy, fairness & bias, and usefulness. Individual model cards will look different, but the framework CHAI developed provides a baseline.

The Regulatory Landscape and Model Documentation

As regulatory bodies like the FDA develop frameworks for AI-based medical devices, comprehensive documentation is becoming increasingly important. The FDA’s proposed regulatory framework for AI/ML-based Software as a Medical Device (SaMD) emphasizes transparency in model development and performance.

For pharmaceutical companies, model documentation is particularly crucial for clinical trials, where regulators require clear evidence of model validity and reliability. Strong model cards can help satisfy these requirements and build trust with regulatory agencies.

Questions to Ask AI Vendors About Their Models

When evaluating AI vendors for behavioral health or pharmaceutical applications, consider asking the questions below (a short sketch for automating a first-pass completeness check follows the list):

  • “Can you provide detailed model cards for each of your algorithms?”
  • “How was your model validated across different demographic groups?”
  • “What peer-reviewed research supports the effectiveness of your model?”
  • “What are the known limitations or potential biases in your model?”
  • “How often is your model updated, and what is your validation process for new versions?”
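
As a first pass before any of these conversations, a procurement team can automate a completeness check on whatever documentation a vendor supplies. This sketch simply flags empty sections in a model card parsed into a dictionary; the section names are assumptions drawn from this post, not a formal requirement.

```python
REQUIRED_SECTIONS = [
    "intended_use", "training_data_demographics", "performance_metrics",
    "validation_methodology", "limitations", "bias_evaluation",
    "regulatory_status",
]

def missing_sections(model_card: dict) -> list[str]:
    """Return the required sections a vendor's model card leaves out or empty."""
    return [s for s in REQUIRED_SECTIONS if not model_card.get(s)]

# Example: a card that omits bias evaluation and regulatory status
# gets flagged for follow-up before any contract discussion.
card = {"intended_use": "Screening aid for depression", "limitations": "..."}
print(missing_sections(card))
```

This checks presence, not quality, but it turns “can you provide detailed model cards?” into a question with a verifiable answer.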

Model Cards as a Competitive Advantage

As the AI healthcare market becomes increasingly competitive, comprehensive model cards aren’t just good practice—they’re becoming a competitive advantage. Organizations that prioritize vendors with thorough, transparent documentation are better positioned to implement AI solutions that are effective, ethical, and aligned with regulatory requirements.

When selecting an AI vendor for behavioral health or pharmaceutical applications, remember that the quality of their model cards often reflects the quality of their approach to AI development. In a field where decisions impact patient lives, this level of transparency isn’t optional—it’s essential.

By demanding comprehensive model cards from AI vendors, healthcare organizations can make more informed decisions, reduce implementation risks, and ultimately deliver better care to the patients who need it most.

2025 Trends in Behavioral Health Technology – Part 2

[Image: two hands, one human and one robotic, pointing to 2025]

As discussed in part one of this series, the behavioral health technology landscape is undergoing a profound metamorphosis, with artificial intelligence (AI) and digital technologies reshaping how behavioral health professionals train, deliver care, and interact with patients. Here are the last four trends I think will impact the behavioral healthcare space in 2025.

Intelligent Training and Skill Development

Continuous improvement has always been the hallmark of exceptional clinical practice. Now, AI is revolutionizing how behavioral health professionals refine their craft. Imagine a world where every therapy session becomes a learning opportunity—not through traditional supervision alone, but through intelligent, nuanced feedback systems that can analyze communication patterns, emotional resonance, and therapeutic techniques with unprecedented depth.

Advanced AI tools can now listen to therapy sessions, providing granular insights into communication effectiveness. These systems don’t just provide mechanical feedback; they offer sophisticated analysis of therapeutic alliance, helping clinicians understand subtle interpersonal dynamics that might otherwise go unnoticed. Simulated training environments allow practitioners to practice with AI patients, creating safe spaces to experiment with diverse therapeutic approaches and develop skills for treating populations they might find challenging.

This isn’t about replacing human supervision but augmenting it. By reducing time and cost barriers associated with traditional training methods, these technologies democratize professional development, allowing more practitioners to access high-quality skill enhancement.

The Emerging Landscape of Digital Therapeutics

The regulatory landscape for digital behavioral health tools is rapidly evolving. The FDA’s increasing approval of digital therapeutics and CMS’s recent Medicare billing codes represent a watershed moment. What was once considered experimental is now becoming mainstream healthcare.

Digital therapeutics are no longer peripheral technologies but integrated healthcare solutions. Much like traditional prescriptions, clinicians can now “prescribe” FDA-approved digital applications. This represents a fundamental shift in how we conceptualize behavioral health treatment—expanding therapeutic interventions beyond traditional in-person or telehealth models.

However, this emerging ecosystem is not without risks. The proliferation of behavioral health apps has created a complex marketplace where marketing claims often outpace clinical evidence. Consumers and practitioners must develop sophisticated digital literacy, distinguishing between rigorously tested interventions and unsubstantiated digital offerings.

Navigating the Ethics of AI in Therapy

The potential for AI to automate risk assessment and even conduct preliminary therapeutic interactions is tantalizing. Yet, this technological frontier demands careful navigation. While AI tools can provide initial screenings and support, they cannot—and should not—replace the profound human elements of therapeutic relationships.

We are witnessing the early stages of what might become a regulatory “Wild West” in digital behavioral health. Expect increased scrutiny, with regulatory bodies working to establish clear guidelines that protect patient safety while allowing technological innovation.

A Holistic View of Technological Integration

These trends are not isolated developments but interconnected elements of a broader transformation. They represent a holistic reimagining of behavioral healthcare—where technology serves as an empowering tool, not a replacement for human connection.

The most successful organizations will be those that view these technologies not as standalone solutions but as integrated components of a comprehensive care strategy. Success will depend on maintaining a delicate balance: leveraging technological capabilities while preserving the irreplaceable human elements of empathy, nuance, and genuine therapeutic connection.

Embracing Responsible Innovation

As we move deeper into 2025, the behavioral health landscape stands at a critical juncture. The technologies emerging today have the potential to democratize behavioral healthcare, reduce systemic barriers, and create more personalized, effective treatment modalities.

Yet, with this potential comes profound responsibility. Our challenge is not merely to adopt new technologies but to do so thoughtfully, ethically, and with an unwavering commitment to patient well-being.

The future of behavioral health is not about technology replacing human care—it’s about technology expanding and enhancing our capacity for compassion, understanding, and healing.

Deep Dive Into the Trends

Curious about how these technologies impact care, how the regulatory landscape is changing to meet the new paradigm, or how AI can help super-charge efforts to bring new medications to market? Join our webinar on January 31, 2025 at 3PM ET / 12PM PT to discuss 2025 trends and what it means for healthcare.

2025 Trends in Behavioral Health Technology, Part 1

[Image: two hands, one human and one robotic, pointing to 2025]

The First Three Trends

As we enter 2025, the behavioral health technology landscape is on the cusp of a revolution, with artificial intelligence (AI), digital tools, and innovative approaches poised to dramatically reshape how mental health services are delivered, accessed, and personalized. Here are the first three key emerging trends that I expect will fundamentally alter the healthcare ecosystem.

Expanding Access and Democratizing Behavioral Health Care with Technology

Digital health technologies are emerging as powerful democratizing forces in healthcare delivery. For populations historically marginalized—rural communities, economically constrained individuals, and underserved demographic groups—AI and digital platforms represent more than technological solutions. They are bridges to care, pathways to understanding, and tools of empowerment.

These technologies are not about replacing human connection but extending its reach. By breaking down geographical, economic, and systemic barriers, they create opportunities for more inclusive, accessible behavioral health support. Intelligent systems can now provide initial screenings, offer preliminary support, and guide individuals towards appropriate resources with unprecedented sensitivity and efficiency.

The Precision Medicine of Behavioral Health

The era of one-size-fits-all treatment is rapidly dissolving. Artificial intelligence is ushering in a new paradigm of precision behavioral healthcare, where treatment plans are as unique as the individuals receiving them. By analyzing complex, multi-dimensional datasets, AI can now recommend care pathways with a level of personalization that was once the domain of highly specialized, resource-intensive approaches.

This isn’t about algorithmic replacement of clinical judgment but about providing clinicians with powerful, nuanced tools for understanding and supporting patient well-being. Each recommendation is a collaborative insight, bridging technological sophistication with human empathy.

Navigating the Ethical Considerations in the Human-Technology Interface

As we embrace these transformative technologies, we must remain vigilant about maintaining the core ethical principles of healthcare. Artificial intelligence and digital tools are powerful assistants, not autonomous decision-makers. They augment human capability, illuminate hidden insights, and create opportunities for more profound, more personalized care.

The most successful approaches will be those that view behavioral health technology not as a replacement for human interaction but as a sophisticated tool for enhancing our collective capacity for understanding, compassion, and healing.

Read part two of my Top Trends for 2025 here.