As discussed in the previous blog, “From cars to digital mental health solutions: Demand safety and effectiveness,” purchasing a mental health solution that lacks high-quality clinical evidence is akin to buying a car with untested brakes. But you’re probably asking yourself: How do I wade through all the research jargon and statistics to make the most informed decision for my employees?
That’s where we come in. Our clinical research team put together a simple guide, “Choosing wisely: Using clinical evidence to evaluate digital therapeutics for mental health,” to walk you through the process. Within the guide, you’ll get a hands-on introduction to the scope of mental health research, what to look for in outcomes, and a simple scoresheet that ties it all together.
But if you don’t have time to read the full report, we still have your back. Below we highlight the five most important questions to ask mental health vendors. These questions provide the quickest way to evaluate and compare clinical evidence.
The five questions:
1. Has the specific solution, not just the category, been tested in a randomized controlled trial (RCT)?1,2
Since RCTs are the “gold standard” of clinical evidence, this question is the quickest way to determine the rigor with which a product has been tested.
2. Has clinical research on the solution been published in independent, peer-reviewed journals?
Being published in a peer-reviewed journal helps to mitigate bias, since clinical and scientific third parties review the study design, the validity and reliability of measures being used, follow-up and drop-out rates, and the interpretation of results. But since not all journals are created equal, also check the journal’s ranking or its impact factor.
3. What percentage of participants experienced remission in their mental health symptom(s)?
This will prompt vendors to explain the results of their study in terms of how many individuals’ symptoms improved and whether the improvements were lasting. This ultimately gets to the heart of clinical research — uncovering if participants improved on key measures as a result of the intervention.
4. What was the effect size of the outcome and was it between-groups or within-groups?3
The vendor will have to explain whether the intervention in question had a small, medium, or large effect size (i.e., impact) on clinical outcomes. A small effect size means the intervention had only a minimal impact on symptoms, while a large effect size means the impact was substantial.
5. In what types of populations has your solution been tested?
It is important to determine how the studied populations compare to your employee demographics and whether the results will be applicable. Hint: if an intervention has been tested and shown to be effective in several hundred or more people with demographics similar to your employee population, that is a good indicator.
Download our full guide, “Choosing wisely: Using clinical evidence to evaluate digital therapeutics for mental health,” to learn why these are the five most important questions and ensure you provide your employees with the safest and most effective mental health support.
1Randomized controlled trial (RCT): This is the most rigorous test of efficacy/effectiveness in clinical research. Participants are randomly assigned to the control or the intervention group. This is the “gold standard” because it gives researchers confidence that improvements in measured outcomes are due to the intervention and not some other factor (e.g., participant bias, natural disease course, etc.).
2Category vs. Product Evidence: A category being backed by evidence (i.e., indirect evidence), while necessary, is not the same as a specific solution being backed by evidence (i.e., direct evidence). For example, cognitive and behavioral techniques are evidence-based and have been validated in numerous studies across various clinical settings. But not all solutions that claim to use cognitive and behavioral techniques have themselves been validated in multiple studies (e.g., an RCT), and thus cannot claim to be directly “evidence-based.”
3Effect size: Unlike statistical significance, which only indicates whether an observed change is likely real rather than due to chance, effect size measures the magnitude of the change. For example, effect size tells you how strongly the digital therapeutic impacted a mental health outcome. Effect sizes are described as small, medium, or large:
≥ 0.2 is small, the intervention had a small impact on symptoms
≥ 0.5 is medium, the intervention had a moderate impact on symptoms
≥ 0.8 is large, the intervention had a substantial impact on symptoms
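The thresholds above follow the standard Cohen’s d conventions: the difference between two group means divided by their pooled standard deviation. For readers who want to sanity-check a vendor’s reported number, here is a minimal sketch of a between-groups Cohen’s d calculation. The symptom-improvement scores are entirely hypothetical and are not drawn from any study cited in this post:

```python
# Sketch of a between-groups Cohen's d (effect size) calculation.
# All data below is hypothetical, for illustration only.
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Difference in group means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical symptom-improvement scores (higher = more improvement)
treatment = [8, 10, 9, 12, 11, 9, 10, 13]
control = [5, 7, 6, 8, 7, 6, 5, 8]

d = cohens_d(treatment, control)
# Under the conventions above, d >= 0.8 would be read as a large effect
```

A within-groups effect size, by contrast, compares the same group before and after the intervention; between-groups values from an RCT are generally the stronger evidence, since the control group accounts for factors like natural symptom improvement over time.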