Videra Health Named to MountainWest Capital Network’s 2025 Utah 100 Emerging Elite


Orem, UT, October 17, 2025 – Videra Health, a leading AI platform for behavioral health providers, today announced it was named to the Emerging Elite category of the 2025 Utah 100, MountainWest Capital Network’s (MWCN) annual list of the fastest-growing companies in Utah.
Videra Health was honored at the 31st annual Utah 100 Awards program, held at the Grand America Hotel in Salt Lake City.

“We’re honored to be recognized among Utah’s Emerging Elite,” said Loren Larsen, CEO of Videra Health. “Utah’s innovation community has helped to fuel our growth and mission to make behavioral healthcare more proactive, accessible, and effective through AI. This recognition affirms our commitment to improving patient outcomes and advancing care both locally and nationwide.”

“We congratulate all of this year’s Utah 100 companies for building outstanding businesses and making strong contributions to Utah’s economy,” said Chris Badger, Chairman of the MWCN Utah 100 committee. “These companies further advance Utah’s standing as an excellent place to do business.”

Recipients of the Utah 100 Emerging Elite were chosen as Utah businesses with great prospects for future growth and success.

Videra Health’s rapid growth reflects its unique approach to behavioral healthcare, combining AI-driven insights with real-world clinical applications. The company has expanded its platform to serve a growing network of providers, helping them identify at-risk patients, optimize care workflows, and improve outcomes. By generating actionable patient data and insights at scale, Videra Health also creates opportunities for healthcare and pharmaceutical partners to better understand treatment patterns, support clinical decision-making, and enhance patient engagement. These measurable achievements align with the Utah 100 Emerging Elite’s focus on recognizing companies with strong growth potential and market impact.

Videra Health team at the 2025 Utah 100 Awards event
From left to right, members of the Videra Health team: Brad Grimm, Madeline Cheney, Byron Clark, William Burk, Brett Talbot, Mike Henneman, Mel Walker, and Sterling Mason.

About MountainWest Capital Network

MountainWest Capital Network (MWCN) is the largest business networking organization in Utah, consisting of entrepreneurs, venture capitalists, consultants, legal professionals, bankers, and educators. MWCN seeks to promote and recognize business growth and capital development in the state through a variety of award programs and activities.

About Videra Health™

Videra Health is a leading AI platform for behavioral health providers and proactively identifies, triages and monitors at-risk patients using linguistic, audio and video analysis. The FDA-registered digital platform transforms how doctors and healthcare systems interact and track a patient’s journey, illuminating the hidden depths of patient behavior and outcomes. Videra Health connects providers and patients anytime, anywhere, between visits and post-discharge via written and video assessments that translate into actionable quantitative and qualitative patient data. The platform streamlines diagnoses, enhances care accessibility, optimizes workflows and drives down costs for providers and healthcare systems.

Videra Health Launches “Check on Mom,” the First Free AI-Powered Postpartum Depression Screener


New initiative empowers moms with a confidential, stigma-free video screener for postpartum depression, powered by conversational AI

Orem, UT, September 23, 2025 – Videra Health, a leading AI platform for behavioral health providers, today announced the launch of Check on Mom, the first free, confidential AI-powered postpartum depression screener. The self-assessment tool empowers mothers to complete a private, video-based check-in using conversational AI that captures both verbal and non-verbal signs of postpartum depression (PPD) and generates results they can share with a provider.

Unlike traditional written screeners, Check on Mom uses video and AI language models trained specifically for healthcare to analyze both spoken words and non-verbal cues. This dual-layer approach provides deeper insights into maternal mental health, making it easier to distinguish between common “baby blues” and postpartum depression.

“Check on Mom is not just a digital tool, it is a public health resource,” said Loren Larsen, co-founder and CEO of Videra Health. “By showing what is possible with AI-powered video screeners, we are opening new opportunities for earlier detection and better outcomes, not just in maternal health but across conditions where timely screening can make a real difference for patients and providers.”

How Check on Mom Works:

  • Free and confidential: No insurance, no cost.
  • Fast and convenient: Completed on almost any device, at any time, in less than three minutes.
  • Video-based and shame-free: Instead of static surveys, moms can speak openly and naturally about how they feel, and results are theirs to share.
  • Clinically validated insights: Downloadable results can be shared with OBs, midwives, or primary care providers to guide next steps.

One in eight mothers experiences postpartum depression, yet most go undiagnosed or untreated because screenings are inconsistent, stigma persists, and support often arrives too late. With Check on Mom, any mother can complete a short, video-based screener in just a few minutes and access secure results immediately without waiting for a six-week appointment or navigating health system barriers.

Check on Mom is more than a consumer resource. It is a proof point for how AI can transform condition-specific screening at scale. By applying conversational AI and video analysis, Videra Health is creating earlier pathways to care for conditions that are too often overlooked.

In addition to proprietary AI models already used to screen for depression, anxiety, trauma, and other conditions, the company most recently released TDScreen, the first AI-powered screener for tardive dyskinesia. These tools show how AI screeners can:

  • Expand access to early detection in underserved populations.
  • Deliver patient-centered insights that improve diagnosis and treatment.
  • Provide adaptable, condition-specific screening models that can be applied across therapeutic areas.

Across healthcare, from patients and providers to pharma leaders, Check on Mom demonstrates how technology can expand access to postpartum depression screening, improve early detection, and support better patient outcomes. Videra is working to normalize maternal mental health conversations and demonstrate a new model for patient-first innovation.

Check on Mom is available free of charge for any mom wanting to screen for postpartum depression. Visit www.checkonmom.ai to learn more.

Navigating the APA’s AI Ethical Guidance

As a practicing clinician who has witnessed firsthand the evolution of our field, I view the American Psychological Association’s Ethical Guidance for AI in the Professional Practice of Health Service Psychology through both clinical and leadership lenses. This guidance arrives at a crucial moment—not as a restriction on innovation, but as a framework that validates what many of us in clinical practice have been advocating for: technology that respects the sacred nature of the therapeutic relationship.

In Short…

The APA offers ethical guidance for using AI in psychology. Key points: be transparent with clients, guard against bias, protect data privacy, validate AI tools, maintain human oversight, and understand legal responsibilities. AI should support—not replace—professional judgment. Continue on for more.

The Clinical Reality

In my years of practice, I’ve seen how administrative burdens can erode the time we have for what matters most—connecting with and helping our patients. When the APA reports that 10% of practitioners are already using AI for administrative tasks, I’m not surprised. What concerns me is ensuring we’re using these tools in ways that enhance, rather than compromise, the quality of care.

The guidance speaks directly to the tensions many clinicians feel. We want efficiency, but not at the cost of accuracy. We seek innovation, but not if it undermines the trust our patients place in us.

The Primacy of Informed Consent

The APA’s emphasis on transparent informed consent reflects a fundamental truth about therapeutic relationships: they’re built on trust and transparency. Patients have the right to understand every aspect of their care, including when and how AI tools are involved. This isn’t bureaucracy—it’s respect for patient autonomy and an extension of the collaborative approach that defines good therapy.

Clinical Judgment Remains Supreme

What heartens me most about the guidance is its clear stance that AI should augment, not replace, clinical judgment. As clinicians, we bring years of training, intuition, and human understanding that would be difficult for an algorithm to fully replicate. The guidance affirms that we must remain the “conscious oversight” for any AI-generated content or recommendations.

Accuracy as an Ethical Imperative

The APA’s call for critical evaluation of AI outputs aligns with our professional obligation to “do no harm.” Every note we write, every assessment we make, becomes part of a patient’s story. We cannot abdicate our responsibility to ensure that story is told accurately and with integrity.

What This Means for Clinical Practice

From a clinical perspective, implementing these guidelines requires us to:

Maintain Our Clinical Voice:

Whether using AI for documentation or assessment support, we must ensure that our clinical reasoning and the unique understanding we have of each patient remains central to all records and decisions.

Protect the Therapeutic Space:

The therapy room—whether physical or virtual—must remain a sanctuary. Any technology we introduce should enhance the sense of safety and confidentiality that makes healing possible.

Consider Diverse Populations:

The guidance reminds us to be vigilant about how AI tools may differentially impact various populations. As clinicians, we must advocate for tools that are tested across diverse groups and remain alert to potential biases.

Embrace Continuous Learning:

Just as we pursue continuing education in clinical techniques, we must commit to understanding the tools we use. This isn’t about becoming technologists—it’s about maintaining competence in our evolving field.

The Opportunity Before Us

The APA’s guidance doesn’t close doors; it opens them responsibly. I see opportunities to:

  • Reduce the documentation burden that keeps us at our desks instead of with patients
  • Enhance our ability to track treatment progress and outcomes
  • Support clinical decision-making with evidence-based insights
  • Extend quality mental healthcare to underserved communities

But each of these opportunities must be pursued with clinical wisdom and ethical clarity.

A Personal Reflection

I entered this field because I believe in the transformative power of human connection. Nothing in the APA’s guidance changes that fundamental truth. Instead, it challenges us to ensure that as we adopt new tools, we do so in service of that connection.

I’ve seen too many technological promises in healthcare fall short because they were designed without clinical input or implemented without clinical wisdom. The APA’s guidance helps ensure we don’t repeat those mistakes in mental health.

Moving Forward as a Clinical Community

As clinicians, we have a unique responsibility in this moment. We must:

  • Share our experiences openly, both successes and concerns
  • Advocate for our patients’ needs in the development of AI tools
  • Hold ourselves and our tools to the highest ethical standards
  • Remember that behind every algorithm is a human being seeking help

To My Fellow Clinicians

I know many of you approach AI with a mixture of hope and hesitation. That’s appropriate. The APA’s guidance gives us permission to be thoughtful, to ask hard questions, and to demand that any tool we use meets the ethical standards we’ve sworn to uphold.

This isn’t about resisting change—it’s about shaping it. We have the opportunity to ensure that AI in mental healthcare develops in ways that honor our professional values and serve our patients’ best interests.

The therapeutic relationship has survived and adapted through many changes in our field. With the APA’s ethical guidance as our North Star, I’m confident we can navigate this new frontier while keeping that relationship at the heart of everything we do.

After all, in a world of increasing technological complexity, the simple act of one human being helping another remains as powerful—and as necessary—as ever.

Protecting Innovation, Security, and Patient Trust in AI Healthcare


As CEO of Videra, I’ve watched the artificial intelligence landscape evolve at an unprecedented pace, particularly in healthcare AI security. While this evolution brings extraordinary opportunities for healthcare advancement, it also presents significant challenges that we must address head-on – especially regarding the proliferation of low-cost AI solutions from non-allied nations that may compromise our healthcare AI security standards.

The healthcare sector, especially in mental and behavioral health, requires the highest standards of security, reliability, and ethical consideration. When we develop AI tools for healthcare applications, we’re not just creating technology – we’re creating solutions that impact human lives, influence medical decisions, and handle incredibly sensitive patient data.

This year has seen a surge in AI products marketed to healthcare providers at significantly reduced prices, often at the expense of healthcare AI security. While competitive pricing is generally beneficial for market innovation, we must carefully consider the hidden costs and risks associated with AI solutions from nations with different data privacy standards, regulatory frameworks, and strategic interests than our own.

For pharmaceutical companies and drug developers, these risks are particularly acute. Drug development involves highly sensitive intellectual property and research data that, if compromised, could have far-reaching consequences for both innovation and national security. When AI systems process this data, they need to do so with absolute security and transparency about data handling practices.

In behavioral and mental health, healthcare AI security is paramount. These fields deal with some of our most vulnerable populations, and the AI systems supporting these services must maintain the highest standards of privacy and ethical operation. Providers need to know exactly how patient data is being processed, where it’s being stored, and who has access to it.

Key Healthcare AI Security Considerations for Providers:

1. Healthcare AI Security: Data Sovereignty and Protection

Your patient data should remain within U.S. jurisdiction, protected by our robust privacy laws and HIPAA regulations. Be wary of solutions that may route or store data through servers in countries with different privacy standards or data access laws.

2. Regulatory Compliance

Ensure any AI solution fully complies with U.S. healthcare regulations. This includes not just HIPAA, but also FDA requirements for medical devices and software as a medical device (SaMD).

3. Algorithmic Transparency

Understanding how AI makes decisions is crucial in healthcare. Providers should have clear insight into the training data and methodologies used to develop the AI systems they employ.

4. Supply Chain Security in Healthcare AI

Consider the entire technology supply chain, including where the AI models were trained and how they’re maintained. This is particularly crucial for solutions handling sensitive healthcare data.

5. Long-term Stability

Healthcare providers need partners they can rely on for the long term, with clear accountability and consistent support. This becomes particularly important when dealing with foreign entities operating under different legal frameworks.

At Videra, we believe that true innovation in healthcare AI must be built on a foundation of trust, security, and ethical operation. While cost is certainly a factor in technology decisions, it cannot be the primary driver when patient care and privacy are at stake.

The U.S. healthcare system has always been at the forefront of innovation, and maintaining this leadership requires careful consideration of the tools and technologies we employ. As we continue to advance in the AI era, let’s ensure we’re making choices that protect our patients, our intellectual property, and our healthcare infrastructure.

Our commitment to developing secure, ethical AI solutions that prioritize healthcare AI security remains unwavering. We understand that the future of healthcare technology must balance innovation with responsibility, and we’re dedicated to maintaining the highest standards in both areas.

Addressing the Invisible Wounds of PTSD: Technology-Enabled Strategies for Healing

PTSD Awareness Month

The invisible wounds of post-traumatic stress disorder (PTSD) affect approximately 8 million Americans annually. Behind this statistic lies a complex clinical challenge that technology-enabled PTSD treatment approaches are beginning to address: how do we effectively treat a condition that manifests uniquely in each person, remains largely hidden from external observation, and often prevents the very help-seeking behaviors necessary for recovery?

Traditional PTSD treatment approaches, while valuable, have struggled with persistent challenges of access, engagement, and personalization. Many patients face geographical barriers to specialized care, while others confront the paradox that their symptoms—particularly avoidance—directly interfere with consistent treatment participation.

In my fifteen years working with trauma survivors, I’ve witnessed both the limitations of conventional approaches and the emerging promise of technology-enabled solutions. This isn’t simply about digitizing existing treatments; it’s about fundamentally reimagining how we conceptualize, measure, and address the complex manifestations of psychological trauma.

The Challenge: Why Traditional PTSD Treatment Falls Short

The journey toward effective PTSD treatment has been marked by both significant progress and persistent obstacles. Despite decades of research and clinical refinement, several challenges continue to limit the reach and efficacy of traditional approaches:

  • Access barriers: Studies show only about 50% of those with PTSD seek treatment[1], with rural populations, ethnic minorities, and military veterans particularly underserved due to provider shortages and geographic limitations
  • Engagement difficulties: The longitudinal nature of trauma recovery requires consistent participation, yet PTSD symptoms themselves—particularly avoidance—often directly interfere with treatment adherence
  • Measurement limitations: Clinical assessments conducted at periodic intervals frequently miss the day-to-day symptom fluctuations that characterize PTSD, limiting timely intervention and treatment adjustment
  • One-size-fits-all approaches: Standard protocols, while evidence-based, often fail to address the unique manifestation and neurobiological underpinnings of trauma responses in different individuals

These challenges aren’t simply administrative hurdles; they represent fundamental limitations in our ability to meet patients where they are—both literally and figuratively. They call for innovation that extends beyond incremental improvements to existing models.

Technology-Enabled PTSD Treatment: A Bridge to Healing

The convergence of digital health innovation, neuroscience, and trauma research has created a watershed moment in PTSD treatment. Emerging technologies offer novel approaches to longstanding barriers, creating possibilities that were unimaginable even a decade ago:

1. Digital Tools and Measurement-Based Care

The fundamental principle that “you can’t manage what you don’t measure” takes on particular significance in PTSD treatment. Traditional assessment relies heavily on retrospective self-reporting, which is vulnerable to recall bias and symptom fluctuations. The moment-by-moment quantification of individual-level human behavior using data from personal digital devices offers an unprecedented window into the lived experience of PTSD.

Research on passive sensing for PTSD detection shows promising results, with a recent study demonstrating that smartphone-collected GPS data alone can differentiate individuals with and without PTSD with 77% accuracy, suggesting the potential for continuous, unobtrusive mental health monitoring.[2] These approaches capture objective behavioral markers that patients may not recognize or report, including:

  • Sleep disturbances through movement and device usage patterns
  • Social isolation through communication metadata and app usage
  • Avoidance behaviors through location data and activity levels
  • Emotional dysregulation through voice analysis and text communication patterns
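Markers like these are straightforward to derive once the raw signals are in hand. The sketch below computes two illustrative mobility features, time spent near home and total daily travel, from a day of GPS samples. It is a minimal sketch, not any published PTSD-detection pipeline: the 100 m home radius, the coordinates, and the feature names are all invented for illustration.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def mobility_features(gps_points, home):
    """Summarize one day of (lat, lon) samples into simple mobility markers."""
    dists = [haversine_km(p, home) for p in gps_points]
    # Fraction of samples within ~100 m of home (an illustrative threshold)
    time_at_home = sum(d < 0.1 for d in dists) / len(dists)
    # Total distance traveled across consecutive samples
    total_travel = sum(haversine_km(gps_points[i], gps_points[i + 1])
                       for i in range(len(gps_points) - 1))
    return {"time_at_home": time_at_home, "total_travel_km": total_travel}

# Toy day: mostly at home, with one short outing
home = (40.2969, -111.6946)
day = [home] * 20 + [(40.30, -111.70), (40.31, -111.71)] + [home] * 10
feats = mobility_features(day, home)
```

Elevated `time_at_home` and shrinking `total_travel_km` over consecutive days are the kind of avoidance signals the studies above describe; a real system would feed many such features into a validated model rather than inspect them by eye.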

2. Virtual Reality Exposure Therapy (VRET)

Exposure therapy represents one of the most empirically supported treatments for PTSD, yet its implementation faces significant practical and psychological barriers. Creating realistic trauma-relevant contexts while maintaining patient safety and therapeutic control presents an inherent challenge. As a technology-enabled PTSD treatment modality, virtual reality technology offers a compelling solution by enabling immersive, controllable experiences that facilitate emotional processing without the logistical challenges of in vivo exposure.

A 2019 meta-analysis of 30 randomized controlled trials involving 1,057 participants, published in the Journal of Anxiety Disorders, found that VRET produced outcomes comparable to in-person exposure therapy.[3] The analysis revealed several key findings:

  • VRET demonstrated a large effect size (g = 0.90) compared to waitlist controls and a medium to large effect size compared to psychological placebo conditions
  • When compared directly to in vivo exposure therapy, no significant difference in effectiveness was found (g = −0.07), indicating VRET is equally effective
  • The analysis included studies across multiple anxiety disorders: 14 for specific phobias, 8 for social anxiety disorder or performance anxiety, 5 for PTSD, and 3 for panic disorder

Results were relatively consistent across different anxiety disorders, suggesting broad applicability. The technology offers advantages including controlled, gradual exposure that is easy for therapists to implement and often more acceptable to patients than traditional exposure methods.
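To make the reported effect sizes concrete: Hedges' g is a standardized mean difference (the gap between group means, divided by their pooled standard deviation) with a small-sample correction. The sketch below computes it for hypothetical symptom scores, invented here and chosen so the magnitude lands near the meta-analysis's large effect; lower scores mean fewer symptoms, so the treated group's g comes out negative.

```python
from math import sqrt

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g: Cohen's d with the standard small-sample correction."""
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # shrinks d slightly for small samples
    return d * correction

# Hypothetical post-treatment symptom scores (lower = better): treated vs. waitlist
g = hedges_g(m1=18.0, s1=6.0, n1=50, m2=24.0, s2=7.0, n2=50)
# |g| ≈ 0.91: roughly the "large effect vs. waitlist" reported above
```

By the usual rule of thumb, magnitudes around 0.2, 0.5, and 0.8 are read as small, medium, and large, which is why the VRET-vs-waitlist g of 0.90 counts as a large effect and the g of −0.07 against in vivo exposure counts as no meaningful difference.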

3. AI-Enhanced Therapy Support

The integration of artificial intelligence into PTSD treatment represents not a replacement for human therapists but an amplification of their capabilities and reach. Natural language processing can analyze therapy session content to identify emotional patterns, treatment engagement markers, and early warning signs of deterioration. Machine learning algorithms, trained on longitudinal datasets, can identify subtle precursors to symptom exacerbation, enabling proactive rather than reactive intervention.
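At its simplest, this kind of language analysis can be pictured as surfacing risk-laden phrases from session text for clinician review. The toy sketch below uses a hand-made keyword lexicon and an arbitrary threshold, both invented for illustration; production systems rely on trained and validated NLP models, not keyword matching.

```python
# Tiny illustrative lexicon; a real system would use a validated model,
# and a clinician would always review anything flagged.
RISK_TERMS = {"hopeless", "worthless", "can't sleep", "nightmares", "numb"}

def flag_session(transcript, threshold=2):
    """Count risk-lexicon hits in a transcript; flag when hits reach the threshold."""
    text = transcript.lower()
    hits = [t for t in RISK_TERMS if t in text]
    return {"hits": sorted(hits), "flagged": len(hits) >= threshold}

note = "Patient reports nightmares most nights and says she feels numb and hopeless."
result = flag_session(note)
```

Even this crude pass illustrates the value proposition: patterns that accumulate across many sessions and messages become visible to the care team between visits, enabling the proactive intervention the paragraph above describes.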

A Stanford University study published in the Journal of Medical Internet Research evaluated an AI therapy app (Youper) for anxiety and depression and found significant improvements over a 4-week period[4]:

  • Anxiety symptoms reduced by 24% (Cohen’s d = 0.60) from baseline to 28-day follow-up
  • Depression symptoms reduced by 17% (Cohen’s d = 0.42) over the same period
  • High user acceptability with an average rating of 4.84 out of 5 stars
  • Strong retention rates with 89% of users remaining active after week 1 and 67% completing the full 4-week subscription period

These results suggest that AI-enhanced, technology-enabled PTSD treatment protocols may help address accessibility challenges in mental health care by providing scalable, effective interventions that users find engaging and helpful.

4. Precision Treatment Matching 

Perhaps the most transformative application of technology in PTSD treatment lies in the emerging field of precision psychiatry. The considerable heterogeneity in trauma responses—shaped by genetic factors, prior trauma history, developmental timing, and numerous other variables—suggests that treatment effectiveness could be substantially improved through personalized intervention selection.

By integrating multiple data streams—genetic information, digital biomarkers, neuroimaging findings, and detailed clinical phenotyping—we can begin to develop predictive models that match patients to optimal interventions. This approach moves beyond the traditional trial-and-error method of treatment selection toward an evidence-based, personalized strategy.
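Conceptually, treatment matching reduces to scoring each candidate intervention against a patient's feature profile and ranking the results. The sketch below illustrates that idea with invented features and weights; in practice the weights would come from predictive models fitted on clinical data, not hand-tuning, and the feature names here are purely hypothetical.

```python
# Toy patient profile: feature names and values are illustrative, not validated predictors
patient = {"avoidance": 0.8, "dissociation": 0.2, "comorbid_depression": 0.6}

# Hypothetical per-treatment weights (in practice, learned from outcome data)
weights = {
    "PE":   {"avoidance": 0.9, "dissociation": -0.5, "comorbid_depression": 0.1},
    "CPT":  {"avoidance": 0.4, "dissociation": 0.2,  "comorbid_depression": 0.7},
    "EMDR": {"avoidance": 0.5, "dissociation": 0.4,  "comorbid_depression": 0.3},
}

def rank_treatments(profile, weights):
    """Score each treatment as a weighted sum over patient features; best first."""
    scores = {tx: sum(w[f] * profile[f] for f in profile) for tx, w in weights.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranking = rank_treatments(patient, weights)
```

The ranking is decision support, not a decision: it narrows the trial-and-error search described above, while the clinician weighs it against everything a score cannot capture.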

Recent advances in precision medicine as a technology-enabled PTSD treatment selection demonstrate the potential of personalized treatment approaches. According to the National Center for PTSD, when evidence-based psychotherapies (CPT, PE, or EMDR) are properly matched to patients, 53 out of 100 patients will no longer meet criteria for PTSD, while medication alone achieves remission in 42 out of 100 patients.[5] VA’s large-scale Cooperative Studies Program trial (CSP #591) comparing prolonged exposure and CPT across 916 veterans at 18 medical centers represents one of the most ambitious efforts to identify optimal treatment matching strategies.[6]

Emerging research on treatment personalization includes work on MDMA-assisted therapy, which has shown large effect sizes in recent Phase 3 trials[7], and studies demonstrating that CPT delivered via telehealth achieves outcomes equivalent to in-person delivery. Additionally, research has shown that combining treatments—such as dialectical behavior therapy with prolonged exposure—can benefit specific populations, with one study showing 91% of participants experiencing significant PTSD symptom reduction.

These findings suggest we are approaching an era where technology and precision medicine enable us to move beyond asking “what works for PTSD?” to the more nuanced question: “what works best for each individual patient?”

Implementation Challenges and Ethical Considerations

The promise of technology-enabled PTSD treatment comes with significant responsibilities. As we navigate this rapidly evolving landscape, several important challenges require thoughtful consideration:

Privacy and security: For trauma survivors, issues of safety, control, and trust take on heightened significance. Any technological intervention must prioritize rigorous data protection and transparent communication about information usage. The principle of “do no harm” extends to ensuring that digital tools themselves do not become sources of vulnerability or retraumatization.

Digital equity: Technology-enabled interventions risk exacerbating existing healthcare disparities if not implemented with attention to access barriers. Research from the Pew Research Center indicates that digital divides persist along socioeconomic, age, and geographical lines—precisely overlapping with populations already underserved in mental healthcare.

Maintaining therapeutic alliance: Technology should enhance rather than diminish the fundamental human connection at the core of trauma recovery. Research shows technology works best when complementing, not replacing, therapeutic relationships. A review in the American Journal of Psychiatry found technology-based applications most effective when augmenting treatment through session monitoring and adherence tracking while maintaining the patient-therapist connection.[8]

Algorithmic transparency and bias: Machine learning models trained on historical clinical data risk perpetuating existing biases in diagnosis and treatment. Ensuring diverse training datasets and ongoing monitoring for disparate impact remains essential for equitable implementation.

These challenges are substantial but not insurmountable. They require interdisciplinary collaboration among clinicians, technologists, ethicists, and—most importantly—individuals with lived experience of PTSD.

The Way Forward: Integrated Technology-Enabled PTSD Treatment

The narrative of technology in PTSD treatment should not be one of replacement but of integration. What follows is a “connected care” framework: a model that weaves together evidence-based clinical practices with technological innovation in service of more accessible, personalized, and effective trauma treatment.

This framework consists of four integrated components:

  • Evidence-based therapies delivered by trained clinicians through both in-person and telehealth modalities, including Cognitive Processing Therapy (CPT), Prolonged Exposure (PE), and EMDR
  • Digital measurement systems that capture both subjective experience through ecological momentary assessment and objective functioning through passive monitoring
  • Asynchronous therapeutic support provided through secure messaging, AI-enhanced monitoring, and just-in-time interventions for moments of acute distress
  • Community connection facilitated through moderated peer support networks that address the social isolation often accompanying PTSD

This integrated “connected care” approach is particularly well-suited for PTSD treatment based on several key research findings:

  • Addressing PTSD’s Complex Nature: PTSD is characterized by heterogeneous symptoms including intrusive memories, avoidance behaviors, negative cognitions, and hyperarousal. Research shows that no single intervention addresses all aspects effectively. The multi-modal framework mirrors the disorder’s complexity by targeting different symptom clusters through complementary approaches—evidence-based therapy for core trauma processing, digital monitoring for between-session symptoms, and peer support for social reintegration.
  • Overcoming Treatment Barriers: Studies consistently show that 50% of those with PTSD don’t seek treatment, with rural populations, minorities, and veterans particularly underserved.[9] The connected care model directly addresses documented barriers: telehealth eliminates geographic obstacles,[10] asynchronous support provides help outside business hours, and peer networks reduce stigma-related reluctance. Research demonstrates that when these barriers are removed, treatment engagement significantly improves.
  • Leveraging Therapeutic Alliance: Evidence indicates that the therapeutic relationship is crucial for PTSD recovery, with treatment outcomes strongly correlated to alliance quality. Rather than diminishing this relationship, the framework enhances it by providing continuous connection between sessions. Clinicians gain richer data about patients’ daily experiences, enabling more personalized interventions while maintaining the human connection essential for trauma healing.[11]
  • Supporting Neurobiological Healing: PTSD involves dysregulation of fear networks and stress response systems that operate continuously, not just during therapy hours. The 24/7 monitoring and just-in-time interventions align with neuroscience findings showing that repeated, distributed practice of coping skills is more effective for rewiring trauma responses than weekly sessions alone. This matches research on memory reconsolidation and extinction learning.

Evidence-Based Integration: Each component has independent empirical support—CPT/PE/EMDR show 53% remission rates,12 digital phenotyping can detect PTSD with 77% accuracy,13 and peer support improves treatment retention. By combining validated approaches rather than creating entirely new interventions, the framework builds on established efficacy while addressing individual limitations of each component.

Conclusion: Technology as an Assistant in Human Healing

The story of PTSD treatment is ultimately a human story—one of suffering, resilience, and the search for effective pathways to recovery. Technology enters this narrative not as a protagonist but as an enabling force that can help overcome barriers that have limited our ability to address the invisible wounds of trauma.

The integration of digital phenotyping, virtual reality, artificial intelligence, and precision treatment approaches represents more than incremental improvement; it offers the possibility of fundamental transformation in how we conceptualize and deliver trauma care. These technologies allow us to measure what was previously unmeasurable, to reach those who were previously unreachable, and to personalize treatment in ways that were previously unimaginable.

Yet as we embrace these technological possibilities, we must remain grounded in the core principles of trauma-informed care: safety, trustworthiness, choice, collaboration, and empowerment. Technology that fails to embody these principles will ultimately fail to serve those who need it most.

The road ahead requires continued innovation, rigorous evaluation, and a commitment to ethical implementation. It demands collaboration across disciplines and centering the voices of those with lived experience of trauma. Most importantly, it requires us to remember that technology is not an end in itself but a means to advance our fundamental mission: supporting healing and recovery for all who live with the invisible wounds of PTSD.

1. Sidran Institute. (n.d.). Post-traumatic stress disorder statistics. Retrieved from [URL]. As cited in: The Treetop Recovery. (2023). 50+ PTSD statistics & facts: How common is PTSD? Retrieved from https://www.thetreetop.com/statistics/ptsd-statistics-facts-prevelanece

2. Ranjan, G., Nguyen, T. N. B., Meng, H., Kashyap, R., Jain, R., Bhandari, S., Duffecy, J., Langenecker, S. A., Zulueta, J., McInnis, M. G., Merikangas, K. R., De Choudhury, M., & Jacobson, N. C. (2021). Using artificial intelligence and longitudinal location data to differentiate persons who develop posttraumatic stress disorder following childhood trauma. Scientific Reports, 11, Article 10303. https://doi.org/10.1038/s41598-021-89768-2

3. Carl, E., Stein, A. T., Levihn-Coon, A., Pogue, J. R., Rothbaum, B., Emmelkamp, P., Asmundson, G. J. G., Carlbring, P., & Powers, M. B. (2019). Virtual reality exposure therapy for anxiety and related disorders: A meta-analysis of randomized controlled trials. Journal of Anxiety Disorders, 61, 27-36. https://doi.org/10.1016/j.janxdis.2018.08.003

4. Mehta, A., Niles, A. N., Vargas, J. H., Marafon, T., Couto, D. D., & Gross, J. J. (2021). Acceptability and effectiveness of artificial intelligence therapy for anxiety and depression (Youper): Longitudinal observational study. Journal of Medical Internet Research, 23(6), e26771. https://doi.org/10.2196/26771

5. National Center for PTSD. (2023). Overview of psychotherapy for PTSD. U.S. Department of Veterans Affairs. Retrieved from https://ptsd.va.gov/professional/treat/txessentials/overview_therapy.asp

6. VA Cooperative Studies Program. (2023). Head-to-head comparison of prolonged exposure and CPT (CSP #591). U.S. Department of Veterans Affairs. Retrieved from https://www.research.va.gov/topics/ptsd.cfm

7. Mitchell, J. M., et al. (2021). MDMA-assisted therapy for severe PTSD: A randomized, double-blind, placebo-controlled phase 3 study. Nature Medicine, 27(6), 1025-1033.

8. Harvey, P. D., Goldberg, T. E., Bowie, C. R., Moeller, D., Horan, W. P., Hellemann, G., Wilder, C., Kotwicki, R. J., & Velligan, D. I. (2023). Technology and mental health: State of the art for assessment and treatment. American Journal of Psychiatry, 180(9), 638-648. https://doi.org/10.1176/appi.ajp.21121254

9. Sidran Institute. (n.d.). Post-traumatic stress disorder statistics. As cited in: The Treetop Recovery. (2023). 50+ PTSD statistics & facts: How common is PTSD? Retrieved from https://www.thetreetop.com/statistics/ptsd-statistics-facts-prevelanece

10. National Center for PTSD. (2023). PTSD and telemental health. U.S. Department of Veterans Affairs. Retrieved from https://www.ptsd.va.gov/professional/treat/txessentials/telemental_health.asp

11. Harvey, P. D., et al. (2023). Technology and mental health: State of the art for assessment and treatment. American Journal of Psychiatry, 180(9), 638-648. https://doi.org/10.1176/appi.ajp.21121254

12. National Center for PTSD. (2023). Overview of psychotherapy for PTSD. U.S. Department of Veterans Affairs. Retrieved from https://ptsd.va.gov/professional/treat/txessentials/overview_therapy.asp

13. Ranjan, G., et al. (2021). Using artificial intelligence and longitudinal location data to differentiate persons who develop posttraumatic stress disorder following childhood trauma. Scientific Reports, 11, Article 10303. https://doi.org/10.1038/s41598-021-89768-2

Protecting Innovation, Security, and Patient Trust in AI Healthcare

Model cards for AI vendors showing performance metrics across populations

As CEO of Videra, I’ve watched the artificial intelligence landscape evolve at an unprecedented pace. While this evolution brings extraordinary opportunities for healthcare advancement, it also presents significant challenges that we must address head-on – particularly regarding the proliferation of low-cost AI solutions from non-allied nations.

The healthcare sector, especially in mental and behavioral health, requires the highest standards of security, reliability, and ethical consideration. When we develop AI tools for healthcare applications, we’re not just creating technology – we’re creating solutions that impact human lives, influence medical decisions, and handle incredibly sensitive patient data.

This year has seen a surge in AI products marketed to healthcare providers at significantly reduced prices. While competitive pricing is generally beneficial for market innovation, we must carefully consider the hidden costs and risks associated with AI solutions from nations with different data privacy standards, regulatory frameworks, and strategic interests than our own.

For pharmaceutical companies and drug developers, these risks are particularly acute. Drug development involves highly sensitive intellectual property and research data that, if compromised, could have far-reaching consequences for both innovation and national security. When AI systems process this data, they need to do so with absolute security and transparency about data handling practices.

In behavioral and mental health, the stakes are equally high. These fields deal with some of our most vulnerable populations, and the AI systems supporting these services must maintain the highest standards of privacy and ethical operation. Providers need to know exactly how patient data is being processed, where it’s being stored, and who has access to it.

Key considerations for healthcare providers when evaluating AI solutions:

1. Data Security and Sovereignty

Your patient data should remain within U.S. jurisdiction, protected by our robust privacy laws and HIPAA regulations. Be wary of solutions that may route or store data through servers in countries with different privacy standards or data access laws.

2. Regulatory Compliance

Ensure any AI solution fully complies with U.S. healthcare regulations. This includes not just HIPAA, but also FDA requirements for medical devices and software as a medical device (SaMD).

3. Algorithmic Transparency

Understanding how AI makes decisions is crucial in healthcare. Providers should have clear insight into the training data and methodologies used to develop the AI systems they employ.

4. Supply Chain Security

Consider the entire technology supply chain, including where the AI models were trained and how they’re maintained. This is particularly crucial for solutions handling sensitive healthcare data.

5. Long-term Stability

Healthcare providers need partners they can rely on for the long term, with clear accountability and consistent support. This becomes particularly important when dealing with foreign entities operating under different legal frameworks.

At Videra, we believe that true innovation in healthcare AI must be built on a foundation of trust, security, and ethical operation. While cost is certainly a factor in technology decisions, it cannot be the primary driver when patient care and privacy are at stake.

The U.S. healthcare system has always been at the forefront of innovation, and maintaining this leadership requires careful consideration of the tools and technologies we employ. As we continue to advance in the AI era, let’s ensure we’re making choices that protect our patients, our intellectual property, and our healthcare infrastructure.

Our commitment to developing secure, ethical AI solutions remains unwavering. We understand that the future of healthcare technology must balance innovation with responsibility, and we’re dedicated to maintaining the highest standards in both areas.

Check our blog for the latest discussions on AI in healthcare, behavioral health, life sciences and clinical trials.

Why We Made TDScreen Free: A CEO’s Perspective on Democratizing Mental Health Screening

TDScreen AI-powered TD screening tool interface showing patient assessment dashboard

Last week we announced the launch of TDScreen – our AI-powered Tardive Dyskinesia (TD) screening tool. The response has been immediate and overwhelming, validating everything we believed about the urgent need for accessible TD screening.

This is exactly why we built TDScreen.

The Hidden Crisis in Plain Sight

Let me share some numbers that keep me up at night:

  • At least 500,000 Americans suffer from Tardive Dyskinesia
  • Only 20% of those have been diagnosed
  • That's an 80% diagnosis gap

But here’s what those statistics don’t capture: Each untreated case represents someone whose involuntary movements might be dismissed as nervousness, aging, or “just a quirk.” Someone who might stop taking life-changing medications because they’re embarrassed by movements that could be managed. Someone whose quality of life is quietly deteriorating while effective treatments exist.

Why Traditional Screening Fails

The standard AIMS assessment takes 15-20 minutes of specialized clinical time. For a psychiatrist seeing 20-30 patients daily, screening everyone quarterly (as guidelines recommend) is mathematically impossible. It’s not about clinician dedication – it’s about the brutal reality of time constraints in modern healthcare.
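The time-constraint argument above can be made concrete with a rough calculation. This is a back-of-envelope sketch only; the inputs are illustrative midpoints of the ranges cited in the text, not published figures:

```python
# Back-of-envelope check of the scheduling math above.
# All inputs are illustrative assumptions drawn from the ranges in the text.
patients_per_day = 25      # midpoint of the 20-30 patients cited above
aims_minutes = 17.5        # midpoint of the 15-20 minute AIMS assessment

# If every patient seen in a day also received a full AIMS screening,
# the added assessment time alone would consume most of a workday:
extra_hours_per_day = patients_per_day * aims_minutes / 60
print(f"~{extra_hours_per_day:.1f} additional clinical hours per day")
```

Even spread across a quarter, screening an entire panel at guideline frequency adds hours of assessment time that a full schedule simply does not contain.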

This is where AI changes everything. TDScreen compresses expert-level assessment into a 5-minute patient self-assessment, with accuracy that exceeds that of human raters (0.89 AUC, validated in our Journal of Clinical Psychiatry study).

The Business Decision to Make TDScreen Free

We’re not in the business of gatekeeping essential healthcare tools. We’re in the business of transforming behavioral health outcomes.

Making TDScreen free isn’t charity – it’s strategy. Every provider who adopts TDScreen becomes part of our mission to modernize behavioral healthcare. Every patient who gets screened is a potential life improved. And every success story builds the foundation for our broader platform vision.

What Our TD Screening Tool Actually Does

For providers wondering about the specifics – here’s exactly what you get:

Immediate Clinical Value

  • Patient completes video assessment (smartphone, tablet, or computer)
  • AI analyzes movement patterns based on AIMS criteria
  • You receive an objective risk score with visual highlights
  • Track changes over time with quantitative data
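The workflow above (video assessment → AI movement analysis → objective risk score → longitudinal tracking) can be pictured as a simple data shape. This is a hypothetical sketch only, not Videra's actual API; the `ScreeningResult` structure and `trend` helper are invented for illustration:

```python
# Hypothetical data shape -- not Videra's actual API -- illustrating the
# kind of structured result the screening workflow above would hand a provider.
from dataclasses import dataclass, field

@dataclass
class ScreeningResult:
    patient_id: str
    assessed_at: str                  # ISO-8601 timestamp, so string sort = time sort
    risk_score: float                 # 0.0-1.0, AIMS-informed model output
    flagged_regions: list = field(default_factory=list)  # video segments highlighted for review

def trend(results):
    """Change in risk score from the first to the most recent screening."""
    ordered = sorted(results, key=lambda r: r.assessed_at)
    return ordered[-1].risk_score - ordered[0].risk_score

history = [
    ScreeningResult("pt-001", "2025-01-10T09:00:00", 0.22),
    ScreeningResult("pt-001", "2025-04-12T09:00:00", 0.41, ["face", "jaw"]),
]
print(f"risk change: {trend(history):+.2f}")
```

The point of the sketch is the last bullet: because each assessment yields a number rather than a narrative impression, change over time becomes a simple computation.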

Zero Hidden Costs

  • No subscription fees
  • No per-patient charges
  • No training requirements
  • No IT integration needed
  • Start screening in under 15 minutes

The Evidence Behind TDScreen

Our research, published yesterday in the Journal of Clinical Psychiatry, demonstrates that TDScreen achieves:

  • Superior consistency compared to human raters (Cohen’s Kappa of 0.61)
  • Validated across 350+ patients in multi-site clinical trials

This isn’t theoretical – it’s proven technology ready for clinical use today. Our partners in the study provided the data and analysis behind TDScreen: Dr. Anthony A. Sterns, Ph.D., lead researcher on the project and CEO of iRxReminder; Dr. Owen S. Muir, M.D., CMO of iRxReminder, CMO and co-founder of Radial Health, and co-author of the publication; and the National Institutes of Health.

The Bigger Vision

Why This TD Screening Tool Changes Everything

TDScreen is just the beginning. At Videra Health, we’re building a comprehensive AI platform that transforms how behavioral health providers deliver care. But we started with TD screening for a reason: it’s a massive, solvable problem where AI demonstrably outperforms traditional methods.

By making TDScreen free, we’re proving that AI in healthcare doesn’t have to be expensive, complex, or intimidating. It can be as simple as sending your patient a link.

Your Patients Are Waiting

If you prescribe antipsychotics, you have patients at risk for TD. Statistically, several already have symptoms. The question isn’t whether to screen – it’s whether you’ll do it the old way or the better way.

TDScreen is live, validated, and free at tdscreen.ai.

No sales calls. No demos required. No contracts to sign.

Just better care, starting today.

Join the Movement

In healthcare innovation, the best time to adopt new technology is when it’s proven but not yet universal. TDScreen has the validation (peer-reviewed research), the accessibility (completely free), and the simplicity (5-minute assessments) to transform TD screening.

The providers who adopt TDScreen today aren’t just using a tool – they’re setting a new standard of care for their patients.

Will you join us?

Start screening at tdscreen.ai

Videra Health Launches TDScreen, a First-of-Its-Kind Video-based, AI-powered Tool to Assess Tardive Dyskinesia Symptoms


New study published in the Journal of Clinical Psychiatry reveals AI enables efficient, accurate and scalable detection of TD, representing a significant advancement in meeting the standard of care for TD screening

Orem, UT, June 3, 2025 – Videra Health, a leading AI platform for behavioral health providers, has announced the launch of TDScreen, the first-ever automated, video-based AI solution on the market to screen for Tardive Dyskinesia (TD) symptoms. TD is a chronic, involuntary movement disorder that can develop as a side effect of long-term use of certain medications, particularly antipsychotic drugs. While TDScreen isn’t intended as a standalone diagnostic tool, it represents a significant advancement in meeting the standard of care for TD screening.

TD presents unique screening challenges even for experienced clinicians and remains underdiagnosed. TD affects up to 2.6 million Americans, and up to 7 million Americans taking antipsychotic medications could develop TD symptoms. With its involuntary movements often mistaken for nervousness, aging, or other conditions, the gap in recognition represents not just a clinical challenge, but a deeply human one that affects quality of life and treatment outcomes. A paper published Wednesday in The Journal of Clinical Psychiatry, led by Principal Investigator Dr. Anthony Sterns and members of the iRxReminder and Videra Health teams, reveals that video-based AI enables efficient, accurate, and scalable detection of TD. This application has the potential to significantly improve early diagnosis and patient outcomes, especially in remote care settings where resources are scarcest. Videra Health’s TDScreen algorithm was built using data from multiple studies, supported in part by the National Institute of Mental Health, and represents a significant leap forward in integrating artificial intelligence into psychiatric care, particularly in the era of telemedicine.

“Videra Health is thrilled to be able to launch a first-of-its-kind innovative solution to screen for TD symptoms, effectively,” said Loren Larsen, CEO of Videra Health. “TDScreen and our broader AI platform aren’t about replacing clinician judgment—they’re about enhancing it. By automating routine screenings, we free healthcare providers to focus on what matters most: the human connections and complex decision-making that drive quality care.” Larsen added, “We are grateful for the multiple academic and research partners who have contributed their time and expertise to these studies.”

“Early detection of TD is critical to mitigating its debilitating effects,” said Dr. Anthony A. Sterns, Ph.D., lead researcher on the project and CEO at iRxReminder. “Our AI-driven approach not only matches but exceeds human expert performance, offering a scalable solution to a major unmet clinical need,” added Dr. Joel W. Hughes, Ph.D., collaborator from Kent State University.

“As a physician who both treats – and lives with – tardive dyskinesia, this research marks a turning point for millions of patients who have been forced to wonder if the movement disorder they suffer from could be treatable,” says Owen S. Muir, M.D., CMO of iRxReminder, CMO and co-founder of Radial Health, and co-author of the publication. “Now, physicians have a simple, evidence-based AI-guided tool to support their clinical decision making.”

TDScreen was validated across three clinical studies involving more than 350 participants on antipsychotic medications. The innovative AI tool developed by Videra Health utilizes advanced video analysis and a vision transformer machine learning architecture to detect TD with unprecedented accuracy. TDScreen demonstrates a Cohen’s Kappa of 0.61—a number indicating the algorithm is outperforming even calibrated human raters. The algorithm achieved an area under the receiver-operating-characteristic curve (AUC) of 0.89, surpassing the sensitivity and specificity of trained human raters using the standard Abnormal Involuntary Movement Scale (AIMS).
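For readers unfamiliar with the two validation metrics reported above, the sketch below shows how they are defined and computed. The patient labels and scores here are synthetic, chosen only to illustrate the calculations; they do not reproduce the published results:

```python
# Illustrative only -- not Videra's actual evaluation code. Defines the two
# metrics reported for TDScreen: AUC and Cohen's kappa. Data is synthetic.

def auc_score(labels, scores):
    """Area under the ROC curve: the fraction of (positive, negative) pairs
    in which the positive case receives the higher model score."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary raters."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    p1 = (sum(a) / n) * (sum(b) / n)                # chance both say 1
    p0 = ((n - sum(a)) / n) * ((n - sum(b)) / n)    # chance both say 0
    pe = p1 + p0
    return (po - pe) / (1 - pe)

# Hypothetical clinician ratings (1 = TD present) and model risk scores.
clinician = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
scores    = [0.91, 0.12, 0.78, 0.35, 0.30, 0.08, 0.85, 0.55, 0.72, 0.22]
predicted = [1 if s >= 0.5 else 0 for s in scores]

print(f"AUC: {auc_score(clinician, scores):.2f}")          # 0.96 on this toy data
print(f"kappa: {cohens_kappa(clinician, predicted):.2f}")  # 0.60 on this toy data
```

AUC measures threshold-free discrimination (1.0 is perfect, 0.5 is chance), while Cohen’s kappa measures agreement after discounting what two raters would match on by luck alone, which is why a kappa of 0.61 can indicate stronger-than-human consistency even though the raw number looks modest.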

These numbers represent more than just statistical achievements—they translate to real-world benefits:

  • Consistency: Unlike human raters whose assessments may vary, AI provides the same evaluation standards every time
  • Accessibility: Patients can complete assessments from home on their own devices
  • Efficiency: Providers save valuable clinical time while increasing screening frequency
  • Earlier detection: Subtle symptoms can be identified before they become pronounced

  • Continuous monitoring: Regular assessments track symptom progression or improvement

With TDScreen, patients on antipsychotics can easily complete video-based screenings in-office or remotely, enabling providers to monitor patients and modify treatment plans. The TDScreen tool is based on the Abnormal Involuntary Movement Scale (AIMS), a comprehensive clinician-rated scale designed specifically to evaluate involuntary movements. Employing advanced AI video technology, TDScreen efficiently assesses and quantifies the risk of TD in less than 5 minutes. The resulting score can aid in clinical decision-making and management strategies.

TDScreen is available free of charge for any provider or patient wanting to screen for TD. Visit tdscreen.ai to learn more.

About Videra Health™

Videra Health is a leading AI platform for behavioral health providers that proactively identifies, triages, and monitors at-risk patients using linguistic, audio, and video analysis. The FDA-registered digital platform transforms how doctors and healthcare systems interact with and track a patient’s journey, illuminating the hidden depths of patient behavior and outcomes. Videra Health connects providers and patients anytime, anywhere, between visits and post-discharge via written and video assessments that translate into actionable quantitative and qualitative patient data. The platform streamlines diagnoses, enhances care accessibility, optimizes workflows, and drives down costs for providers and healthcare systems.

For more information, visit www.viderahealth.com.

About iRxReminder
iRxReminder specializes in digital health solutions aimed at enhancing medication adherence and mental health management through cutting-edge technology applications and innovative behavioral science solutions.

For more information, visit www.irxreminder.com.

How AI Expands Care When Care Demand Continues to Rise

Healthcare provider using AI mental health platform on tablet while speaking with patient, illustrating how AI in mental healthcare amplifies human care

As we recognize Mental Health Awareness Month this May, we find ourselves at a critical juncture where AI in mental healthcare offers promising solutions. The need for mental health services continues to grow at an unprecedented rate, while provider shortages and burnout intensify. According to data from across the healthcare landscape, 47% of the U.S. population now lives in an area with a mental health workforce shortage, and wait times for appointments often stretch beyond three months.

At Videra Health, we’ve been tackling this challenge head-on, working with providers who face the daily reality of trying to deliver quality care despite limited resources. Our experiences have revealed a fundamental truth: we cannot simply produce more clinicians fast enough to meet the growing demand. Instead, we must find innovative ways to expand the reach and impact of our existing clinical workforce.

The Human Understanding Gap: Where AI in Mental Healthcare Makes a Difference

The core of effective mental healthcare has always been human connection and understanding. Providers need to know not just what their patients are saying, but how they’re feeling, their emotional state, and whether they’re at risk. Traditionally, this understanding has been limited to in-office interactions, creating significant blind spots in patient care journeys.

What happens when a patient struggling with depression has a difficult week between appointments? How can a substance use disorder treatment center identify which discharged patients are at risk of relapse? How do we ensure that individuals experiencing suicidal ideation are identified and supported before reaching crisis?

These questions highlight what I call the “human understanding gap” – the critical information about patient wellbeing that falls through the cracks between formal care touchpoints.

AI in Mental Healthcare: Building Bridges, Not Replacements

This is where thoughtfully designed AI systems can make a transformative difference. At Videra, we’ve seen firsthand how clinical AI can serve as a bridge that extends human care, rather than replacing it.

Our platform uses video, audio, and text assessments powered by artificial intelligence to understand patients in their own words and on their own time. By analyzing facial expressions, voice patterns, language, and behavioral indicators, we can identify signs of emotional distress, suicidal language, medication adherence challenges, and other critical indicators that might otherwise go unnoticed between appointments.

The results have been profound. In one behavioral health system implementation, we’ve seen that patients with higher engagement in post-discharge monitoring demonstrate significantly stronger recovery outcomes. Another community mental health center utilizing our technology reduced crisis alerts by 64% after just two weeks of proactive monitoring.

Amplifying Human Care, Not Replacing It

The most important lesson we’ve learned is that effective clinical AI doesn’t aim to replace human providers – it amplifies their capabilities and extends their reach. By handling routine clinical assessments and identifying at-risk patients, AI creates a force multiplier for clinical expertise, allowing providers to direct their specialized skills where they’re needed most.

For example, our automated assessment system can engage thousands of patients consistently and frequently to identify those with acute needs, before, during or after care. Our note-taking technology reduces documentation time, giving clinicians more face-to-face time with patients. And our monitoring tools provide continuous support between appointments, creating a safety net that would be impossible to maintain manually.

This clinical enhancement works alongside our workflow solutions, which address a separate but complementary need. While clinical AI focuses on assessment and insights, our workflow tools tackle the administrative burdens that consume valuable provider time.

The result is a multiplier effect on care capacity. Providers using these integrated AI-powered clinical and workflow tools can effectively support more patients without sacrificing quality of care – in fact, they can often deliver better outcomes by focusing their expertise where it’s most needed.

Looking Forward: The Future of AI in Mental Healthcare

As we look ahead, I believe we’re only beginning to tap into AI’s potential to address the growing mental health crisis. Future developments will likely include:

  • More sophisticated risk prediction models that can identify potential issues before they become crises
  • Deeper integration with treatment pathways to provide personalized care recommendations
  • Enhanced accessibility tools that break down barriers to care for underserved populations
  • Advanced training systems that help new clinicians develop expertise more quickly

At Videra Health, we’re committed to advancing these innovations responsibly, always keeping the human connection at the center of our work. Because ultimately, the goal isn’t to build AI that replaces humans – it’s to build AI that helps humans help more humans.

A Call to Action

As we observe Mental Health Awareness Month, I encourage healthcare leaders to consider how AI can extend your organization’s capacity to deliver care. The mental health crisis isn’t waiting, and neither should we.

We need to embrace tools that allow us to do more with our existing resources, reaching patients when and where they need support. By implementing AI in mental healthcare thoughtfully, we can ensure that more people receive the care they need, when they need it most.

Together, we can build a future where technology and human connection work in harmony to meet the growing demand for mental healthcare – not by replacing the invaluable work of clinicians, but by amplifying their impact and extending their reach.

Creating Value with Videra Health

Loren Larsen joined the Life Sciences Today Podcast by Healthcare IT Today for a conversation about the future of behavioral health. Loren shares how Videra Health is reimagining mental health assessments through the power of AI. The discussion explores how technology can create meaningful value for both patients and providers while improving access and consistency in care. Listen to the full episode here.

Discovery Behavioral Health boosts care and revenue with patient engagement efforts

Healthcare IT News highlights how Discovery Behavioral Health is advancing patient care and operational efficiency through post-discharge engagement powered by AI. By implementing a white-labeled version of Videra Health’s video-based assessment platform, Discovery extended support beyond the walls of its facilities—gaining real-time insights into patient recovery and unlocking new opportunities for early intervention and re-engagement. This proactive, tech-enabled approach exemplifies the shift toward continuous, personalized care across the behavioral health spectrum. With measurable improvements in patient outcomes and provider efficiency, the article shows how innovation can meaningfully transform the recovery journey.
Read the full article here.

Managing Stress in the Digital Age: Practical Tools for Behavioral Health Clinics

digital solutions for behavioral health clinicians

In today’s fast-paced healthcare environment, behavioral health clinicians face unprecedented challenges. Rising patient demand, administrative burdens, and the constant pressure to deliver high-quality care can create a perfect storm of stress for even the most dedicated professionals. As Videra Health’s Chief Clinical Officer, I’ve witnessed firsthand how digital solutions can either add to this burden or, when thoughtfully implemented, help alleviate it.

The Growing Challenge

The statistics paint a clear picture: nearly half of the U.S. population lives in a mental health workforce shortage area, average wait times for mental health services exceed three months, and no-show rates hover around 30%. These challenges create immense pressure on clinicians, leading to burnout and decreased quality of care.

However, I’ve observed a positive shift in how behavioral health organizations are leveraging technology to address these challenges. The right digital tools can transform workflows, enhance patient engagement, and provide valuable insights that improve both clinical outcomes and staff wellbeing.

Digital Solutions That Actually Help

At Videra Health, we’ve worked with hundreds of behavioral health organizations to identify which digital approaches actually reduce clinician stress rather than adding to it. Here are key strategies we’ve found most effective:

1. Automate Administrative Tasks, Not Clinical Judgment

The most successful digital implementations focus on eliminating repetitive administrative tasks while preserving and enhancing clinicians’ unique expertise and judgment. For example, automating intake assessments and post-discharge follow-ups can save hours of staff time while still providing rich clinical data.

When one of our clients, a large behavioral health practice, implemented automated post-discharge monitoring, they didn’t just save staff time—they identified patients needing intervention who might otherwise have fallen through the cracks. As one clinician shared, “We had four alerts over the weekend, and we were able to reach out to support clients and one came back for services… we would have never been able to find these patients in time without Videra.”

2. Implement Proactive Risk Identification

One of the most stressful aspects of behavioral health practice is worrying about patients between sessions. Digital tools that allow for ongoing monitoring and proactive risk identification can alleviate this burden.

Our experience with behavioral health support services shows that timely alerts for emotional distress, suicidal ideation, and other concerning patterns can enable early intervention. This not only improves patient outcomes and scales across the entire patient population, but also reduces the psychological burden on clinicians who otherwise might worry about patients between appointments.

3. Leverage Multimodal Assessments

Traditional questionnaire-based assessments only tell part of the story. Modern behavioral health platforms that incorporate video, audio, and text assessments can capture much richer data. This approach allows patients to express themselves in their own words, providing clinicians with deeper insights while reducing the time needed to gather comprehensive information.

One clinician noted, “Because Videra is video-based, it gives the clinician or staff the very information that you would be looking for if the patient were sitting across from you in your office.” This deeper understanding helps clinicians make more informed decisions more efficiently.

4. Focus on Meaningful Measurement

Not all data is created equal. The most effective digital solutions focus on collecting and analyzing information that directly informs clinical decisions and improves care.

By tracking key metrics like changes in PHQ-9 scores, medication adherence, and social determinants of health over time, clinicians can identify trends and adjust treatment plans accordingly. This data-driven approach not only improves patient outcomes but also gives clinicians confidence that their interventions are having the desired effect.

5. Engage Patients Between Visits

Patient engagement doesn’t have to stop when the session ends. Digital platforms that facilitate ongoing communication and support between visits can extend the impact of therapy while reducing the pressure on in-person appointments.

Research shows that patients with higher engagement in digital follow-up programs demonstrate stronger recovery, with better protective factors and lower relapse rates. This continuous engagement creates a more sustainable care model for both patients and providers.

The Human Element Remains Essential

As we embrace these digital solutions, it’s crucial to remember that technology should enhance rather than replace the human connection at the heart of behavioral healthcare. The most effective implementations leverage technology to handle routine tasks, gather information, and identify risks—freeing clinicians to focus on what they do best: providing compassionate, personalized care.

One CCBHC director summarized it perfectly: “Technology doesn’t replace our clinicians—it amplifies their impact by ensuring they can focus their time and expertise where it’s needed most.”

Moving Forward Together

The future of behavioral healthcare isn’t about choosing between human expertise and digital efficiency—it’s about thoughtfully integrating both to create more sustainable, effective, and scalable care models. By implementing the right digital tools in the right way, behavioral health organizations can reduce clinician stress, improve patient outcomes, and build more resilient healthcare systems.

At Videra Health, we’re committed to supporting this integration with solutions designed specifically for the unique challenges of behavioral healthcare. Together, we can create a future where technology doesn’t add to clinician burden but instead helps create more manageable, rewarding work environments where both providers and patients can thrive.

Ethical Implementation of AI in Mental Healthcare: A Practical Guide

In a recent article published by The AI Journal, the conversation around AI in mental healthcare takes an essential turn—focusing not only on its transformative potential, but on how to implement these tools responsibly. As clinicians adopt AI to improve efficiency and outcomes, ethical principles like transparency, equity, and patient autonomy must remain central to the process. This guide emphasizes that ethical implementation isn’t a one-time decision, but a continuous journey that requires trusted partners and thoughtful oversight. Ultimately, AI should enhance—not replace—the deeply human nature of mental healthcare.
Read the full article here.

5 Ways AI Can Help Mental Health Clinicians Manage a Growing Caseload

As mental health providers face mounting caseloads and rising demand, AI offers a path forward by enhancing efficiency without sacrificing personalized care. This article highlights how AI-powered screening platforms, rooted in measurement-based care, are helping clinicians prioritize high-risk patients, automate routine tasks, and extend their reach beyond the office. The result is more time for meaningful client interactions and stronger therapeutic relationships—hallmarks of quality care. With the right implementation, AI serves not as a replacement, but as a trusted partner in the delivery of smarter, more responsive mental health services.
Read the full article here.

How AI is Empowering Providers with Early Detection of Mental Illness

AI is transforming mental health care by enabling earlier, more precise detection of symptoms that might otherwise go unnoticed until crisis points. This thoughtful piece explores how technology can continuously monitor subtle behavioral shifts—augmenting clinical expertise and allowing providers to intervene sooner. By analyzing data across multiple modalities and comparing patients to their own baselines, AI supports a shift from reactive treatment to proactive care. The result is better outcomes, lower costs, and a future where clinicians are empowered—not replaced—by intelligent tools.
Read the full article here.

Trust and Transformation Take Center Stage at DataRobot’s 2021 AI Experience Worldwide, May 11-12, 2021

Tech visionary Alexis Ohanian and newly appointed CEO of DataRobot, Dan Wright, join customers, execs, partners, and AI visionaries on “The Hunt for Transformational Growth” during virtual event

BOSTON–(BUSINESS WIRE)–DataRobot, the leader in enterprise AI, is on “The Hunt For Transformational Growth” with its second annual virtual event on May 11-12. Alexis Ohanian, Founder of Seven Seven Six and Co-Founder of Reddit, and America’s “Science Guy,” Bill Nye, join a dynamic speaker line-up, including DataRobot executives and customers, business leaders, data scientists, educators, and leading AI ethicists. The two-day event is open to the public and free to attend, with sessions available on-demand immediately following.

Competition is fierce these days, and organizations must scale quickly, intelligently, and efficiently. With over 20 sessions from unique voices in business and AI, AI Experience Worldwide attendees will gain actionable insight on building an agile, AI-driven enterprise, along with cutting-edge strategies for improving forecasts and optimizing performance with AI.

Day 1 of the conference includes a forward-leaning conversation featuring DataRobot’s CEO, Dan Wright, on his call for a new era of democratized AI, as well as a sneak peek at some of DataRobot’s enterprise AI platform innovations from Nenshad Bardoliwalla, SVP of Product. Executives from Yelp, Overstock.com, Pendo, and Airin share unscripted insight on their path to transformational growth.

Day 2 explores why trust in AI is no longer a feature but a requirement. DataRobot’s Global AI Ethicist, Haniyeh Mahmoudian, PhD, will address the skepticism surrounding AI solutions and discuss why it’s critical to approach issues of bias and fairness by implementing trustworthy AI.

Additional highlights include:

  • Opening Keynote from tech visionary Alexis Ohanian, Founder of Seven Seven Six and Co-Founder of Reddit. This session will explore Alexis’s views on the future of tech and how he intends to enable a better world for the next generation.
  • Ricky Ray Butler, CEO of BEN Group, Linda Klug, Founder & CEO of Airin, and Loren Larsen, CEO/Co-Founder of Videra Health, join DataRobot Chief AI Evangelist, Ben Taylor, for a transparent conversation around their initial doubts—and ultimate takeaways—when implementing AI to transform their business.
  • Sally Embrey, DataRobot’s VP of Public Health and Medical Technologies, speaks to the pivotal role of data and AI as we seek to democratize health, empower patients, and allow individuals to thrive.
  • Closing remarks from “Science Guy” Bill Nye and Chris Mattmann, Chief Technology and Innovation Officer at NASA Jet Propulsion Laboratory, on the importance of research and discovery, fighting anti-science sentiment, space exploration, where humanity is headed, and how we can all play a role.

“Every company is now an AI company, and the decision to implement AI is no longer optional,” said Wright. “Knowing this, we’re excited to welcome technologists, corporate leaders, data scientists, researchers, and users at every level to witness the transformational potential of AI on businesses’ bottom line. Our goal is to create a space for transparent discussions around the complexities surrounding AI. We’ll have CEOs share their own hesitations when embracing AI and AI ethicists and educators explore the need for AI we can trust, with actionable roadmaps to help us plan for and handle institutional bias as we look to the future.”

To register for DataRobot’s virtual conference or to learn more about the event, visit the AI Experience Worldwide website and follow along on Twitter using #AIExperience.

About DataRobot

DataRobot is the leader in enterprise AI, delivering trusted AI technology and enablement services to global enterprises competing in today’s Intelligence Revolution. DataRobot’s enterprise AI platform democratizes data science with end-to-end automation for building, deploying, and managing machine learning models. This platform maximizes business value by delivering AI at scale and continuously optimizing performance over time. The company’s proven combination of cutting-edge software and world-class AI implementation, training, and support services empowers any organization, regardless of size, industry, or resources, to drive better business outcomes with AI.

DataRobot has offices across the globe and funding from some of the world’s best investing firms including Alliance Bernstein, Altimeter, B Capital Group, Cisco, Citi Ventures, ClearBridge, DFJ Growth, Geodesic Capital, Glynn Capital, Intel Capital, Meritech, NEA, Salesforce Ventures, Sands Capital, Sapphire Ventures, Silver Lake Waterman, Snowflake Ventures, Tiger Global, T. Rowe Price, and World Innovation Lab. DataRobot was named to the Forbes 2020 Cloud 100 list and the Forbes 2019, 2020, and 2021 Most Promising AI Companies lists, and was named a Leader in the IDC MarketScape: Worldwide Advanced Machine Learning Software Platforms Vendor Assessment. For more information visit http://www.datarobot.com/, and join the conversation on the DataRobot Community, the More Intelligent Tomorrow podcast, Twitter, and LinkedIn.