The Mental Health App Guide Designed With You In Mind


PsyberGuide is funded by One Mind, a leading non-profit organization in brain health research. PsyberGuide was established in 2013 in response to a growing need for guidelines to help people navigate the mental health app marketplace. In 2017, One Mind welcomed Dr. Stephen Schueller as PsyberGuide Executive Director and established a partnership with Northwestern University. PsyberGuide now operates out of the University of California, Irvine and Northwestern University, where our team consists of experts in mental health, technology, and technology-delivered care. PsyberGuide’s aim is to help people make responsible and informed decisions about apps and other digital tools for mental health by providing unbiased reviews. PsyberGuide is not an industry website; its goal is to provide accurate and reliable information free of preference, bias, or endorsement.

How PsyberGuide Works

The PsyberGuide team reviews apps based on their Credibility, User Experience, and Transparency of Privacy Practices.

Credibility


The Credibility Score represents the strength of the scientific research support for the app itself, and the therapeutic interventions the app provides.

User Experience


Our collaborators use the Mobile App Rating Scale (MARS) to assess the design, accessibility of information, and overall experience that the app provides.

Transparency


The Transparency Score represents the clarity of an app's privacy policy in detailing the data storage and collection procedures of a mobile health product and its associated servers.

App Selection Process

We discover new apps in a number of different ways, including:

  • Research papers and published reviews of apps
  • Searches on iTunes and Google Play app stores
  • Trending apps on social media and popular news
  • App developers
  • Our partner organizations and networks

We want to review apps that people are actually using, so when we identify new apps, we prioritize those with the most user reviews in the iTunes and Google Play app stores; this gives us an idea of each app’s popularity. If you know of an app you would like us to review, you can contact us at info@psyberguide.org.

App Scoring

Credibility

The Credibility Score is a measure of the research support backing an app or digital tool. This measure aims to give users an idea of how credible a digital tool is, i.e. how likely it is to work. Apps are scored based on:

  • the level of research support they have: here, we’re looking at published, peer-reviewed research specifically on the tool itself
  • the source of research funding: who funded the published paper(s) supporting the app’s effectiveness
  • how specific the proposed intervention is: is the app designed to target a specific condition or symptom, to help with overall well-being, or to track and monitor? The more specific the intervention, the higher an app scores here.
  • additional factors: apps also receive points for the number of ratings in app stores, the level of expert clinical input in their development, and how recently they have been updated.

Scoring Details for Credibility Ratings

Research Base
Possible Points: 3
  • 3 points: At least two good between-group design experiments demonstrate the efficacy of the app, through superiority to placebo or another treatment or equivalence to an already established treatment, or a large series of single-case design experiments demonstrates efficacy; the study sample must be clearly defined and appropriate.
  • 2 points: Two experiments show the app is superior to a waitlist control, or one experiment meets the 3-point criteria.
  • 1 point: Single-case designs or other quasi-experimental methods demonstrate efficacy.
  • 0 points: No published research.

Research Funding
Possible Points: 2
  • 2 points: At least one research paper funded by a government agency (e.g., NIH) or a non-profit organization.
  • 1 point: Research funded primarily by for-profit organizations or by combined funding sources.
  • 0 points: No information about the source of funding for the research.

Specificity of Proposed Intervention
Possible Points: 3
  • 3 points: The application is designed to improve a specific condition or symptom.
  • 2 points: The application is designed to help with non-specific targets such as “mood” or “brain fitness”.
  • 1 point: The application is designed to track and monitor items such as symptom severity or medication compliance.

Number of Consumer Ratings
Possible Points: 3
  • 3 points: Ratings exist from more than 1,500 users.
  • 2 points: Ratings exist from 101-1,500 users.
  • 1 point: Ratings exist from 1-100 users.
  • 0 points: No user ratings.

Clinical Input in Development
Possible Points: 1
  • 1 point: The developer has an advisory board with clinical thought leader input.
  • 0 points: The developer does not have an advisory board with clinical thought leader input.

Software Support
Possible Points: 2
  • 2 points: The application has been revised within the last 6 months.
  • 1 point: The application has been revised within the last 12 months.
  • 0 points: The application has not been revised, or was revised more than 12 months ago.
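
As a rough illustration of how these subscales add up, here is a minimal Python sketch that totals the six scores described above (a maximum of 3 + 2 + 3 + 3 + 1 + 2 = 14 points). Treating the Credibility Score as a simple sum, along with the field names and example values, is an assumption made for illustration; it is not PsyberGuide’s own scoring code.

# Illustrative sketch only: assumes the Credibility Score is the simple sum
# of the six subscale scores described above (maximum 14 points).
# Field names and the example values are hypothetical.

MAX_POINTS = {
    "research_base": 3,        # strength of published, peer-reviewed evidence
    "research_funding": 2,     # who funded the supporting research
    "specificity": 3,          # how specific the proposed intervention is
    "consumer_ratings": 3,     # number of ratings in the app stores
    "clinical_input": 1,       # clinical advisory board input
    "software_support": 2,     # how recently the app was updated
}

def credibility_total(subscores):
    """Sum the six subscale scores, checking each stays within its allowed range."""
    total = 0
    for name, maximum in MAX_POINTS.items():
        score = subscores.get(name, 0)
        if not 0 <= score <= maximum:
            raise ValueError(f"{name} must be between 0 and {maximum}, got {score}")
        total += score
    return total

# Hypothetical example: two supporting trials with NIH funding, a condition-
# specific intervention, more than 1,500 store ratings, a clinical advisory
# board, and an update within the last 6 months.
print(credibility_total({
    "research_base": 3,
    "research_funding": 2,
    "specificity": 3,
    "consumer_ratings": 3,
    "clinical_input": 1,
    "software_support": 2,
}))  # prints 14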

User Experience

The User Experience rating is an app quality score.
“User Experience”, sometimes referred to simply as UX, is the overall experience of using an app or program in terms of how easy and engaging it is to use. The Mobile App Rating Scale (MARS) is used to assess the quality of the user experience of apps. MARS was developed by a team of researchers at Queensland University of Technology (QUT) with expertise in the development of digital health tools.


Scoring Details for User Experience Ratings

There are three main MARS factors:

  1. The MARS mean is the mean of four objective subscales (see the sketch below):
    • Engagement: how fun, interesting and customizable the app is, and how well it engages the people it’s intended for
    • Functionality: how well the app features work, how easy it is to navigate through the app. Is it self-explanatory, intuitive, and easy to learn?
    • Aesthetics: the overall visual design - how appealing are the graphics, colors and layout?
    • Information: is the content of the app accurate, well-written and credible?
  2. Subjective Quality
  3. Perceived Impact

The Subjective Quality and Perceived Impact scores are based on the raters’ own impression of the eTool, including its usability and perceived effectiveness.
The MARS can be used as an adjunct to qualitative eTool descriptions, to give users an overview of an eTool’s quality rating. The scale can also help rank eTools by quality. MARS is used worldwide by eTool evaluation and development projects.
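
To make the first factor concrete, the sketch below computes a MARS mean as the average of the four objective subscales. The 1-5 rating range and the example values are assumptions made for illustration.

from statistics import mean

# Minimal illustration of the MARS mean described above: the average of the
# four objective subscale scores (Engagement, Functionality, Aesthetics,
# Information). The 1-5 range and the example values are assumptions.

def mars_mean(engagement, functionality, aesthetics, information):
    scores = [engagement, functionality, aesthetics, information]
    if any(not 1 <= s <= 5 for s in scores):
        raise ValueError("each subscale score is expected to fall between 1 and 5")
    return mean(scores)

print(mars_mean(engagement=4.2, functionality=4.5, aesthetics=3.8, information=4.0))  # 4.125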



Transparency

Transparency scores relate to information about an app’s data storage and collection policies and how readily available this information is to users. It’s important to note that for this metric we evaluate whether or not an app’s privacy policy includes certain key pieces of information about data storage, encryption, deletion, etc. What we don’t do is audit an app’s practices to verify that it actually does what its policy says. We believe that developers should be as transparent as possible with privacy information so that users can be fully informed about how their data is used and stored.

Scoring Details for Transparency

  • Acceptable: The product has an acceptable level of data transparency; its privacy policy provides sufficient and easily accessible information on the policies related to data collection, storage, and exchange, and that information conforms to standards for the collection, storage, and exchange of health information.
  • Questionable: The product’s privacy policy is unclear or lacks specific details about its policies on data collection, storage, and exchange, or its adherence to standards for the collection, storage, and exchange of health information is questionable.
  • Unacceptable: The product either a) does not have a privacy policy, b) has a privacy policy that excludes important information about data privacy, collection, storage, or exchange, or c) has a privacy policy that outlines practices for data privacy, collection, storage, or exchange that do not conform to standards for health information.
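
For illustration only, the sketch below restates this rubric as a checklist. The specific policy items it checks (collection, storage, encryption, deletion, sharing) and the way it maps them onto the three ratings are assumptions of ours; the actual Transparency rating reflects a reviewer’s reading of the privacy policy, not an automated rule.

# Hypothetical checklist-style sketch of the Transparency rubric above.
# The policy items and the decision rules are illustrative assumptions;
# PsyberGuide's rating is a reviewer judgment, not an automated check.

POLICY_ITEMS = {"data_collection", "data_storage", "encryption",
                "data_deletion", "data_sharing"}

def transparency_rating(has_privacy_policy, items_covered, meets_health_info_standards):
    if not has_privacy_policy:
        return "Unacceptable"                      # no privacy policy at all
    missing = POLICY_ITEMS - set(items_covered)
    if not missing and meets_health_info_standards:
        return "Acceptable"                        # complete and standards-conforming
    if missing and not meets_health_info_standards:
        return "Unacceptable"                      # incomplete and non-conforming
    return "Questionable"                          # unclear or partially lacking

# Hypothetical example: a policy that covers collection, storage, and
# encryption but says nothing about deletion or sharing.
print(transparency_rating(True, {"data_collection", "data_storage", "encryption"}, True))
# prints "Questionable"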

Meet the Team

Stephen Schueller, Ph.D.

Executive Director

Assistant Professor of Preventive Medicine, Northwestern University, Center for Behavioral Intervention Technologies

Martha Neary

Project Manager

Northwestern University, Center for Behavioral Intervention Technologies

Kristen O'Loughlin

PsyberGuide Correspondent

Graduate Student, Feinberg School of Medicine Clinical Psychology Program

Diana M. Steakley-Freeman

Systems & Web Developer

Northwestern University, Center for Behavioral Intervention Technologies

Stephen Cognetta

Project Consultant

Independent

Victoria Pickering

Project Consultant

Independent