Navigating the APA’s AI Ethical Guidance

As a practicing clinician who has witnessed firsthand the evolution of our field, I view the American Psychological Association’s Ethical Guidance for AI in the Professional Practice of Health Service Psychology through both clinical and leadership lenses. This guidance arrives at a crucial moment—not as a restriction on innovation, but as a framework that validates what many of us in clinical practice have been advocating for: technology that respects the sacred nature of the therapeutic relationship.

In Short...

The APA offers ethical guidance for using AI in psychology. Key points: be transparent with clients, guard against bias, protect data privacy, validate AI tools, maintain human oversight, and understand legal responsibilities. AI should support—not replace—professional judgment. Read on for more.

The Clinical Reality

In my years of practice, I’ve seen how administrative burdens can erode the time we have for what matters most—connecting with and helping our patients. When the APA reports that 10% of practitioners are already using AI for administrative tasks, I’m not surprised. What concerns me is ensuring we’re using these tools in ways that enhance, rather than compromise, the quality of care.

The guidance speaks directly to the tensions many clinicians feel. We want efficiency, but not at the cost of accuracy. We seek innovation, but not if it undermines the trust our patients place in us.

The Primacy of Informed Consent

The APA’s emphasis on transparent informed consent reflects a fundamental truth about therapeutic relationships: they’re built on trust and transparency. Patients have the right to understand every aspect of their care, including when and how AI tools are involved. This isn’t bureaucracy—it’s respect for patient autonomy and an extension of the collaborative approach that defines good therapy.

Clinical Judgment Remains Supreme

What heartens me most about the guidance is its clear stance that AI should augment, not replace, clinical judgment. As clinicians, we bring years of training, intuition, and human understanding that would be difficult for an algorithm to fully replicate. The guidance affirms that we must remain the “conscious oversight” for any AI-generated content or recommendations.

Accuracy as an Ethical Imperative

The APA’s call for critical evaluation of AI outputs aligns with our professional obligation to “do no harm.” Every note we write, every assessment we make, becomes part of a patient’s story. We cannot abdicate our responsibility to ensure that story is told accurately and with integrity.

What This Means for Clinical Practice

From a clinical perspective, implementing these guidelines requires us to:

Maintain Our Clinical Voice:

Whether using AI for documentation or assessment support, we must ensure that our clinical reasoning and the unique understanding we have of each patient remain central to all records and decisions.

Protect the Therapeutic Space:

The therapy room—whether physical or virtual—must remain a sanctuary. Any technology we introduce should enhance the sense of safety and confidentiality that makes healing possible.

Consider Diverse Populations:

The guidance reminds us to be vigilant about how AI tools may differentially impact various populations. As clinicians, we must advocate for tools that are tested across diverse groups and remain alert to potential biases.

Embrace Continuous Learning:

Just as we pursue continuing education in clinical techniques, we must commit to understanding the tools we use. This isn’t about becoming technologists—it’s about maintaining competence in our evolving field.

The Opportunity Before Us

The APA’s guidance doesn’t close doors; it opens them responsibly. I see opportunities to:

  • Reduce the documentation burden that keeps us at our desks instead of with patients
  • Enhance our ability to track treatment progress and outcomes
  • Support clinical decision-making with evidence-based insights
  • Extend quality mental healthcare to underserved communities

But each of these opportunities must be pursued with clinical wisdom and ethical clarity.

A Personal Reflection

I entered this field because I believe in the transformative power of human connection. Nothing in the APA’s guidance changes that fundamental truth. Instead, it challenges us to ensure that as we adopt new tools, we do so in service of that connection.

I’ve seen too many technological promises in healthcare fall short because they were designed without clinical input or implemented without clinical wisdom. The APA’s guidance helps ensure we don’t repeat those mistakes in mental health.

Moving Forward as a Clinical Community

As clinicians, we have a unique responsibility in this moment. We must:

  • Share our experiences openly, both successes and concerns
  • Advocate for our patients’ needs in the development of AI tools
  • Hold ourselves and our tools to the highest ethical standards
  • Remember that behind every algorithm is a human being seeking help

To My Fellow Clinicians

I know many of you approach AI with a mixture of hope and hesitation. That’s appropriate. The APA’s guidance gives us permission to be thoughtful, to ask hard questions, and to demand that any tool we use meets the ethical standards we’ve sworn to uphold.

This isn’t about resisting change—it’s about shaping it. We have the opportunity to ensure that AI in mental healthcare develops in ways that honor our professional values and serve our patients’ best interests.

The therapeutic relationship has survived and adapted through many changes in our field. With the APA’s ethical guidance as our North Star, I’m confident we can navigate this new frontier while keeping that relationship at the heart of everything we do.

After all, in a world of increasing technological complexity, the simple act of one human being helping another remains as powerful—and as necessary—as ever.

Protecting Innovation, Security, and Patient Trust in AI Healthcare


As CEO of Videra, I’ve watched the artificial intelligence landscape evolve at an unprecedented pace. While this evolution brings extraordinary opportunities for healthcare advancement, it also presents significant challenges that we must address head-on – particularly regarding the proliferation of low-cost AI solutions from non-allied nations.

The healthcare sector, especially in mental and behavioral health, requires the highest standards of security, reliability, and ethical consideration. When we develop AI tools for healthcare applications, we’re not just creating technology – we’re creating solutions that impact human lives, influence medical decisions, and handle incredibly sensitive patient data.

This year has seen a surge in AI products marketed to healthcare providers at significantly reduced prices. While competitive pricing is generally beneficial for market innovation, we must carefully consider the hidden costs and risks associated with AI solutions from nations with different data privacy standards, regulatory frameworks, and strategic interests than our own.

For pharmaceutical companies and drug developers, these risks are particularly acute. Drug development involves highly sensitive intellectual property and research data that, if compromised, could have far-reaching consequences for both innovation and national security. When AI systems process this data, they need to do so with absolute security and transparency about data handling practices.

In behavioral and mental health, the stakes are equally high. These fields deal with some of our most vulnerable populations, and the AI systems supporting these services must maintain the highest standards of privacy and ethical operation. Providers need to know exactly how patient data is being processed, where it’s being stored, and who has access to it.

Key considerations for healthcare providers when evaluating AI solutions:

1. Data Security and Sovereignty

Your patient data should remain within U.S. jurisdiction, protected by our robust privacy laws and HIPAA regulations. Be wary of solutions that may route or store data through servers in countries with different privacy standards or data access laws.

2. Regulatory Compliance

Ensure any AI solution fully complies with U.S. healthcare regulations. This includes not just HIPAA, but also FDA requirements for medical devices and software as a medical device (SaMD).

3. Algorithmic Transparency

Understanding how AI makes decisions is crucial in healthcare. Providers should have clear insight into the training data and methodologies used to develop the AI systems they employ.

4. Supply Chain Security

Consider the entire technology supply chain, including where the AI models were trained and how they’re maintained. This is particularly crucial for solutions handling sensitive healthcare data.

5. Long-term Stability

Healthcare providers need partners they can rely on for the long term, with clear accountability and consistent support. This becomes particularly important when dealing with foreign entities operating under different legal frameworks.

At Videra, we believe that true innovation in healthcare AI must be built on a foundation of trust, security, and ethical operation. While cost is certainly a factor in technology decisions, it cannot be the primary driver when patient care and privacy are at stake.

The U.S. healthcare system has always been at the forefront of innovation, and maintaining this leadership requires careful consideration of the tools and technologies we employ. As we continue to advance in the AI era, let’s ensure we’re making choices that protect our patients, our intellectual property, and our healthcare infrastructure.

Our commitment to developing secure, ethical AI solutions remains unwavering. We understand that the future of healthcare technology must balance innovation with responsibility, and we’re dedicated to maintaining the highest standards in both areas.


How AI Expands Care When Care Demand Continues to Rise


As we recognize Mental Health Awareness Month this May, we find ourselves at a critical juncture where AI in mental healthcare offers promising solutions. The need for mental health services continues to grow at an unprecedented rate, while provider shortages and burnout intensify. According to data from across the healthcare landscape, 47% of the U.S. population now lives in an area with a mental health workforce shortage, and wait times for appointments often stretch beyond three months.

At Videra Health, we’ve been tackling this challenge head-on, working with providers who face the daily reality of trying to deliver quality care despite limited resources. Our experiences have revealed a fundamental truth: we cannot simply produce more clinicians fast enough to meet the growing demand. Instead, we must find innovative ways to expand the reach and impact of our existing clinical workforce.

The Human Understanding Gap: Where AI in Mental Healthcare Makes a Difference

The core of effective mental healthcare has always been human connection and understanding. Providers need to know not just what their patients are saying, but how they’re feeling and whether they’re at risk. Traditionally, this understanding has been limited to in-office interactions, creating significant blind spots in patient care journeys.

What happens when a patient struggling with depression has a difficult week between appointments? How can a substance use disorder treatment center identify which discharged patients are at risk of relapse? How do we ensure that individuals experiencing suicidal ideation are identified and supported before reaching crisis?

These questions highlight what I call the “human understanding gap” – the critical information about patient wellbeing that falls through the cracks between formal care touchpoints.

AI in Mental Healthcare: Building Bridges, Not Replacements

This is where thoughtfully designed AI systems can make a transformative difference. At Videra, we’ve seen firsthand how clinical AI can serve as a bridge that extends human care, rather than replacing it.

Our platform uses video, audio, and text assessments powered by artificial intelligence to understand patients in their own words and on their own time. By analyzing facial expressions, voice patterns, language, and behavioral indicators, we can identify signs of emotional distress, suicidal language, medication adherence challenges, and other critical indicators that might otherwise go unnoticed between appointments.
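
To make the mechanics concrete, here is a minimal, hypothetical sketch of how scores from separate modality models might be fused into a single risk flag. The weights, threshold, and names are illustrative assumptions for this post, not Videra’s actual implementation.

  from dataclasses import dataclass

  @dataclass
  class ModalityScores:
      """Hypothetical per-modality risk scores, each scaled to 0-1."""
      facial: float    # e.g., from a facial-expression model
      voice: float     # e.g., from a voice-pattern model
      language: float  # e.g., from a language model

  # Illustrative weights; a real system would learn and validate these.
  WEIGHTS = {"facial": 0.3, "voice": 0.3, "language": 0.4}
  ALERT_THRESHOLD = 0.7  # assumed cutoff for routing to a clinician

  def composite_risk(scores, crisis_language_detected):
      """Combine modality scores into one risk value and an alert flag."""
      risk = (WEIGHTS["facial"] * scores.facial
              + WEIGHTS["voice"] * scores.voice
              + WEIGHTS["language"] * scores.language)
      # Explicit suicidal language escalates regardless of the average.
      alert = crisis_language_detected or risk >= ALERT_THRESHOLD
      return risk, alert

  # Example: moderate facial/voice signals plus elevated language risk.
  scores = ModalityScores(facial=0.5, voice=0.4, language=0.9)
  print(composite_risk(scores, crisis_language_detected=False))
  # (0.63, False) -> below threshold, no automatic alert

Any rule of this kind would require clinical validation, and every alert would still route to a human clinician, consistent with the oversight principles discussed above.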

The results have been profound. In one behavioral health system implementation, we’ve seen that patients with higher engagement in post-discharge monitoring demonstrate significantly stronger recovery outcomes. Another community mental health center utilizing our technology reduced crisis alerts by 64% after just two weeks of proactive monitoring.

Amplifying Human Care, Not Replacing It

The most important lesson we’ve learned is that effective clinical AI doesn’t aim to replace human providers – it amplifies their capabilities and extends their reach. By handling routine clinical assessments and identifying at-risk patients, AI creates a force multiplier for clinical expertise, allowing providers to direct their specialized skills where they’re needed most.

For example, our automated assessment system can engage thousands of patients consistently and frequently to identify those with acute needs before, during, or after care. Our note-taking technology reduces documentation time, giving clinicians more face-to-face time with patients. And our monitoring tools provide continuous support between appointments, creating a safety net that would be impossible to maintain manually.

This clinical enhancement works alongside our workflow solutions, which address a separate but complementary need. While clinical AI focuses on assessment and insights, our workflow tools tackle the administrative burdens that consume valuable provider time.

The result is a multiplier effect on care capacity. Providers using these integrated AI-powered clinical and workflow tools can effectively support more patients without sacrificing quality of care – in fact, they can often deliver better outcomes by focusing their expertise where it’s most needed.

Looking Forward: The Future of AI in Mental Healthcare

As we look ahead, I believe we’re only beginning to tap into AI’s potential to address the growing mental health crisis. Future developments will likely include:

  • More sophisticated risk prediction models that can identify potential issues before they become crises
  • Deeper integration with treatment pathways to provide personalized care recommendations
  • Enhanced accessibility tools that break down barriers to care for underserved populations
  • Advanced training systems that help new clinicians develop expertise more quickly

At Videra Health, we’re committed to advancing these innovations responsibly, always keeping the human connection at the center of our work. Because ultimately, the goal isn’t to build AI that replaces humans – it’s to build AI that helps humans help more humans.

A Call to Action

As we observe Mental Health Awareness Month, I encourage healthcare leaders to consider how AI can extend your organization’s capacity to deliver care. The mental health crisis isn’t waiting, and neither should we.

We need to embrace tools that allow us to do more with our existing resources, reaching patients when and where they need support. By implementing AI in mental healthcare thoughtfully, we can ensure that more people receive the care they need, when they need it most.

Together, we can build a future where technology and human connection work in harmony to meet the growing demand for mental healthcare – not by replacing the invaluable work of clinicians, but by amplifying their impact and extending their reach.

Managing Stress in the Digital Age: Practical Tools for Behavioral Health Clinics


In today’s fast-paced healthcare environment, behavioral health clinicians face unprecedented challenges. Rising patient demand, administrative burdens, and the constant pressure to deliver high-quality care can create a perfect storm of stress for even the most dedicated professionals. As Videra Health’s Chief Clinical Officer, I’ve witnessed firsthand how digital solutions can either add to this burden or, when thoughtfully implemented, help alleviate it.

The Growing Challenge

The statistics paint a clear picture: nearly half of the U.S. population lives in a mental health workforce shortage area, average wait times for mental health services exceed three months, and no-show rates hover around 30%. These challenges create immense pressure on clinicians, leading to burnout and decreased quality of care.

However, I’ve observed a positive shift in how behavioral health organizations are leveraging technology to address these challenges. The right digital tools can transform workflows, enhance patient engagement, and provide valuable insights that improve both clinical outcomes and staff wellbeing.

Digital Solutions That Actually Help

At Videra Health, we’ve worked with hundreds of behavioral health organizations to identify which digital approaches actually reduce clinician stress rather than adding to it. Here are key strategies we’ve found most effective:

1. Automate Administrative Tasks, Not Clinical Judgment

The most successful digital implementations focus on eliminating repetitive administrative tasks while preserving and enhancing clinicians’ unique expertise and judgment. For example, automating intake assessments and post-discharge follow-ups can save hours of staff time while still providing rich clinical data.

When one of our clients, a large behavioral health practice, implemented automated post-discharge monitoring, they didn’t just save staff time—they identified patients needing intervention who might otherwise have fallen through the cracks. As one clinician shared, “We had four alerts over the weekend, and we were able to reach out to support clients and one came back for services… we would have never been able to find these patients in time without Videra.”

2. Implement Proactive Risk Identification

One of the most stressful aspects of behavioral health practice is worrying about patients between sessions. Digital tools that allow for ongoing monitoring and proactive risk identification can alleviate this burden.

Our experience with behavioral health support services shows that timely alerts for emotional distress, suicidal ideation, and other concerning patterns can enable early intervention. This not only improves outcomes across the entire patient population, but also reduces the psychological burden on clinicians who might otherwise worry about patients between appointments.

3. Leverage Multimodal Assessments

Traditional questionnaire-based assessments only tell part of the story. Modern behavioral health platforms that incorporate video, audio, and text assessments can capture much richer data. This approach allows patients to express themselves in their own words, providing clinicians with deeper insights while reducing the time needed to gather comprehensive information.

One clinician noted, “Because Videra is video-based, it gives the clinician or staff the very information that you would be looking for if the patient were sitting across from you in your office.” This deeper understanding helps clinicians make more informed decisions more efficiently.

4. Focus on Meaningful Measurement

Not all data is created equal. The most effective digital solutions focus on collecting and analyzing information that directly informs clinical decisions and improves care.

By tracking key metrics like changes in PHQ-9 scores, medication adherence, and social determinants of health over time, clinicians can identify trends and adjust treatment plans accordingly. This data-driven approach not only improves patient outcomes but also gives clinicians confidence that their interventions are having the desired effect.
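
As a sketch of what meaningful measurement can look like in code, the snippet below flags a clinically meaningful worsening in a patient’s PHQ-9 history. The five-point threshold is a commonly cited benchmark for meaningful PHQ-9 change; the data shapes and function name are assumptions for illustration.

  from datetime import date

  MEANINGFUL_CHANGE = 5  # commonly cited threshold for PHQ-9 change

  def flag_worsening(scores):
      """scores: list of (assessment_date, phq9_score) tuples.
      Return True if the latest score has risen by a clinically
      meaningful amount relative to the earliest on record."""
      if len(scores) < 2:
          return False  # not enough history to establish a trend
      ordered = sorted(scores)  # chronological order by date
      baseline, latest = ordered[0][1], ordered[-1][1]
      return latest - baseline >= MEANINGFUL_CHANGE

  # Example: scores worsening across three assessments.
  history = [(date(2025, 1, 6), 8), (date(2025, 2, 3), 11), (date(2025, 3, 3), 14)]
  print(flag_worsening(history))  # True: a 6-point rise warrants review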

5. Engage Patients Between Visits

Patient engagement doesn’t have to stop when the session ends. Digital platforms that facilitate ongoing communication and support between visits can extend the impact of therapy while reducing the pressure on in-person appointments.

Research shows that patients with higher engagement in digital follow-up programs demonstrate stronger recovery, with better protective factors and lower relapse rates. This continuous engagement creates a more sustainable care model for both patients and providers.

The Human Element Remains Essential

As we embrace these digital solutions, it’s crucial to remember that technology should enhance rather than replace the human connection at the heart of behavioral healthcare. The most effective implementations leverage technology to handle routine tasks, gather information, and identify risks—freeing clinicians to focus on what they do best: providing compassionate, personalized care.

One CCBHC director summarized it perfectly: “Technology doesn’t replace our clinicians—it amplifies their impact by ensuring they can focus their time and expertise where it’s needed most.”

Moving Forward Together

The future of behavioral healthcare isn’t about choosing between human expertise and digital efficiency—it’s about thoughtfully integrating both to create more sustainable, effective, and scalable care models. By implementing the right digital tools in the right way, behavioral health organizations can reduce clinician stress, improve patient outcomes, and build more resilient healthcare systems.

At Videra Health, we’re committed to supporting this integration with solutions designed specifically for the unique challenges of behavioral healthcare. Together, we can create a future where technology doesn’t add to clinician burden but instead helps create more manageable, rewarding work environments where both providers and patients can thrive.

Ethical Implementation of AI in Mental Healthcare: A Practical Guide

In a recent article published by The AI Journal, the conversation around AI in mental healthcare takes an essential turn—focusing not only on its transformative potential, but on how to implement these tools responsibly. As clinicians adopt AI to improve efficiency and outcomes, ethical principles like transparency, equity, and patient autonomy must remain central to the process. This guide emphasizes that ethical implementation isn’t a one-time decision, but a continuous journey that requires trusted partners and thoughtful oversight. Ultimately, AI should enhance—not replace—the deeply human nature of mental healthcare.

The Critical Role of Model Cards When Selecting an AI Vendor for Behavioral Health and Pharma


In today’s healthcare landscape, model cards for AI vendors have become essential documentation when selecting technology partners for behavioral health and pharmaceutical applications. These comprehensive documents provide transparent details about AI models’ performance, training data, and limitations—critical information for healthcare organizations making high-stakes technology decisions that impact patient care.

What Are Model Cards and Why Do They Matter?

Model cards serve as transparent documentation for machine learning models, detailing their performance characteristics, training data, intended use cases, and limitations. First proposed by researchers at Google in 2019, model cards have quickly become a best practice in responsible AI development.

For behavioral health and pharmaceutical applications, where decisions directly impact patient care, model cards aren’t just nice-to-have documentation—they’re essential safeguards that provide critical information about the algorithms making or supporting clinical decisions.

Key Elements of Strong Model Cards in Healthcare AI

When evaluating AI vendors for behavioral health or pharmaceutical applications, look for model cards that include:

  • Intended Use and Clinical Context: Clear explanation of what the model is designed to do, and importantly, what it’s not designed to do.
  • Training Data Demographics: Details about the populations represented in the training data—particularly important for ensuring models work across diverse patient populations.
  • Performance Metrics: Specificity and sensitivity measurements, both overall and for specific demographic groups (computed in the sketch after this list).
  • Validation Methodology: How the model was validated, including any peer-reviewed research or clinical studies.
  • Limitations and Constraints: Transparent acknowledgment of the model’s limitations and potential failure modes.
  • Bias Evaluation: Assessment of potential biases in the model and steps taken to mitigate them.
  • Regulatory Status: Information about FDA registration or other regulatory frameworks the model complies with.
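
To ground the performance-metrics element, here is a minimal sketch of computing sensitivity and specificity both overall and per demographic group, the breakdown a strong model card should report. The records and group labels are hypothetical.

  from collections import defaultdict

  def sens_spec(pairs):
      """pairs: (y_true, y_pred) tuples with 1 = positive, 0 = negative.
      Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP)."""
      tp = sum(1 for t, p in pairs if t == 1 and p == 1)
      fn = sum(1 for t, p in pairs if t == 1 and p == 0)
      tn = sum(1 for t, p in pairs if t == 0 and p == 0)
      fp = sum(1 for t, p in pairs if t == 0 and p == 1)
      sens = tp / (tp + fn) if tp + fn else float("nan")
      spec = tn / (tn + fp) if tn + fp else float("nan")
      return sens, spec

  # Hypothetical evaluation records: (group, y_true, y_pred).
  results = [("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
             ("B", 1, 1), ("B", 0, 1), ("B", 0, 0)]

  by_group = defaultdict(list)
  for group, y_true, y_pred in results:
      by_group[group].append((y_true, y_pred))

  print("overall:", sens_spec([(t, p) for _, t, p in results]))
  for group, pairs in sorted(by_group.items()):
      print(group, sens_spec(pairs))  # subgroup gaps can reveal bias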

Real-World Example: Behavioral Health Assessment Models

Consider a vendor offering AI models that analyze video responses to detect signs of depression. A comprehensive model card would specify (a structured sketch follows the list):

  • The model predicts PHQ-9 equivalent scores based on facial expressions, voice tone, and natural language analysis
  • Training included data from 10,000+ individuals across diverse demographic groups
  • Overall accuracy metrics (e.g., AUC: 0.89) with breakdowns for different populations
  • Independent validation through IRB-approved studies
  • Lower accuracy rates for certain populations with smaller representation in training data
  • Not intended for standalone diagnosis, but as a screening aid for clinicians
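
Rendered as structured data, such a card might look like the sketch below, which reuses the example figures above; the field names follow no particular vendor or standard schema and are illustrative only.

  # Hypothetical model card for the depression-screening example above.
  model_card = {
      "model_name": "video-depression-screen",  # assumed name
      "intended_use": "Screening aid for clinicians; not a standalone diagnostic",
      "predicts": "PHQ-9 equivalent score from facial expression, "
                  "voice tone, and natural language",
      "training_data": {
          "n_individuals": "10,000+",
          "demographics": "diverse groups; some populations underrepresented",
      },
      "performance": {
          "overall_auc": 0.89,
          "subgroup_notes": "lower accuracy where training representation is smaller",
      },
      "validation": "independent, IRB-approved studies",
      "limitations": ["not for standalone diagnosis",
                      "reduced accuracy in underrepresented populations"],
  }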

This level of transparency enables healthcare organizations to make informed decisions about whether a particular AI solution aligns with their clinical needs, patient populations, and ethical standards.

The Coalition for Health AI (CHAI) has published a strong example of what a model card can contain to ensure transparency, safety, security & privacy, fairness & bias mitigation, and usefulness. Individual model cards will look different, but the framework CHAI developed provides a useful baseline.

The Regulatory Landscape and Model Documentation

As regulatory bodies like the FDA develop frameworks for AI as medical devices, comprehensive documentation is becoming increasingly important. The FDA’s proposed regulatory framework for AI/ML-based Software as a Medical Device (SaMD) emphasizes the importance of transparency in model development and performance.

For pharmaceutical companies, model documentation is particularly crucial for clinical trials, where regulators require clear evidence of model validity and reliability. Strong model cards can help satisfy these requirements and build trust with regulatory agencies.

Questions to Ask AI Vendors About Their Models

When evaluating AI vendors for behavioral health or pharmaceutical applications, consider asking:

  • “Can you provide detailed model cards for each of your algorithms?”
  • “How was your model validated across different demographic groups?”
  • “What peer-reviewed research supports the effectiveness of your model?”
  • “What are the known limitations or potential biases in your model?”
  • “How often is your model updated, and what is your validation process for new versions?”

Model Cards as a Competitive Advantage

As the AI healthcare market becomes increasingly competitive, comprehensive model cards aren’t just good practice—they’re becoming a competitive advantage. Organizations that prioritize vendors with thorough, transparent documentation are better positioned to implement AI solutions that are effective, ethical, and aligned with regulatory requirements.

When selecting an AI vendor for behavioral health or pharmaceutical applications, remember that the quality of their model cards often reflects the quality of their approach to AI development. In a field where decisions impact patient lives, this level of transparency isn’t optional—it’s essential.

By demanding comprehensive model cards from AI vendors, healthcare organizations can make more informed decisions, reduce implementation risks, and ultimately deliver better care to the patients who need it most.

Breaking Down Barriers: How CCBHCs Can Lead Healthcare Access Innovation

As we gather for the NACBHDD Legislative and Policy Conference in Washington DC, I’m struck by the critical conversations around innovation and access to behavioral healthcare. The conference agenda highlights what many CCBHC leaders already know – we’re facing multiple, interrelated challenges: medical debt, administrative burdens, workforce shortages, and coordination with justice systems.

At Videra Health, we’re seeing CCBHCs tackle these challenges through innovative approaches to care delivery, and several common threads are emerging from our partnerships.

The upcoming Medicaid discussions at the conference are particularly relevant. As the largest payer of behavioral health services, changes to Medicaid structure will significantly impact CCBHCs’ ability to serve their communities. We must ensure that technological innovation aligns with policy evolution to support, not hinder, access to care.

Looking ahead, CCBHCs are uniquely positioned to lead healthcare transformation. By combining policy advocacy with practical innovation, we can create more accessible, efficient, and effective behavioral health systems.