It was only six years ago, in 2011, that the suicide research group I was involved in at the University of Manchester, UK, discussed developing a suicide-monitoring app. I thought – “Automating mental health support? This is science fiction!”
Just a year later, as part of my team’s research project at the Queensland University of Technology in Brisbane, Australia, I was involved in conducting a systematic contextual review and quality evaluation of all health-related apps. Easier said than done. There were already thousands of them on the app stores, with no adequate classification and no guidelines or bespoke prior research to guide the process. Thankfully, I had a team of masterminds: Prof. Leanne Hides and Prof. David Kavanagh. Their expertise in developing technological interventions for mental health and wellbeing served as a beacon, helping me understand the complexity of the task: distilling a set of criteria thorough enough to encompass all components of eHealth apps, yet objective enough to withstand the critical scrutiny of rigorous research.
Conducting this systematic contextual review was supposed to be the easier task – we were simply going to follow the well-established PRISMA guidelines, as if conducting a systematic literature review. This meant we wanted to find ALL apps in a category – a task somewhat attainable before Google shut down their “Apps” filter. Unfortunately, nowadays there is no good alternative offering thorough yet focused results. Trawling through the app stores is the only option, yet the results are convoluted and popularity-based. And let’s admit it – eHealth apps are not quite as popular as we might like.
In the end, we did arrive at a meaningful sample of apps by limiting the number of mental health issues we were interested in, keeping the numbers manageable. But how to rate their quality? App store ratings were, of course, meaningless – some can be paid for and boosted by companies; others are the result of an angry mob of users, vengeful after the newest operating system (OS) update rendered an app unusable. That left us sifting through website evaluation criteria, user experience (UX) requirements and IT benchmarks. After numerous iterations, the MARS (Mobile App Rating Scale) was born!
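At its core, scoring with a scale like the MARS reduces to simple averaging: each subscale score is the mean of its item ratings, and the overall quality score is the mean of the subscale means. A minimal sketch of that arithmetic (the item counts and scores below are illustrative placeholders, not the actual MARS items):

```python
from statistics import mean

# Hypothetical item ratings for one app, grouped by the four objective
# MARS subscales. Each item is rated on a 1-5 scale; the number of items
# per subscale here is illustrative only.
ratings = {
    "engagement":    [4, 3, 4, 3, 4],
    "functionality": [5, 4, 4, 5],
    "aesthetics":    [4, 4, 3],
    "information":   [3, 4, 3, 4, 3, 4, 3],
}

# Each subscale score is the mean of its item ratings.
subscale_scores = {name: mean(items) for name, items in ratings.items()}

# The overall app quality score is the mean of the subscale means.
app_quality = mean(subscale_scores.values())

print({name: round(score, 2) for name, score in subscale_scores.items()})
print(round(app_quality, 2))
```

Averaging subscale means (rather than pooling all items) keeps each dimension equally weighted regardless of how many items it contains.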
Since its publication in 2015, the MARS has accumulated over 170 citations and has been translated into eight languages. Our team has received hundreds of requests for advice, information, support or collaboration, including its use on the PsyberGuide website. eHealth is blooming!
By now our team has explored and rated hundreds of apps, and we have learned a few lessons:
- We need diverse expertise in technology, design and health in order to provide optimal, accurate ratings
- Before rating any one app, one should first explore multiple apps in its category, in order to identify the norms, the trends and who is pushing the envelope
- App rating requires a critical and objective view, which is difficult to attain despite the MARS’ focus on measuring objective features
- Rating apps comes with a level of responsibility, as ratings can affect future uptake
What about app research beyond quality evaluation? Do eHealth apps work? New medications require rigorous research trials before they are allowed onto the market, but eHealth apps go through no such rigorous process. Could some apps be potentially harmful? To date, health apps have rarely been evaluated, and no ‘gold standard’ exists for conducting a proper evaluation. Randomized Controlled Trials are the common approach, yet they are plagued by low adherence rates, difficulties in managing control conditions and accessibility, and – maybe worst of all – an inability to keep up with the high pace of technology development. Several times I have read a publication about a ‘new’ and ‘efficacious’ app, only to find that the app itself is already obsolete in OS or UX terms, or even removed from the app stores.
As technology continues to develop and adapt at an exponential rate, we need to keep up. We, researchers in the field of eHealth, need to be flexible, agile and, let’s admit it, often well financed, just to stay in the race with commercial-grade products. Early apps developed and reviewed were predominantly text-based, but new apps incorporate passive data collection and biomarkers such as pulse and galvanic skin response. The next generation of eHealth products may increasingly be virtual humans, such as conversational agents (like Siri or Alexa), or physical robots capable of supporting our mental health. Within 10 years, the iPhone has created this entire app landscape – who knows what the next 10 will hold? Before we know it, even our robots will need eHealth.