As digital therapeutics become more popular, it is important to consider how they can be integrated into mental health care in an ethical manner. The path forward requires continued attention both to appropriate oversight and models of care, and to issues of data protection and justice.
Safety and Oversight Issues
The primary ethical concerns for digital mental health technology have been safety, accountability, privacy, data protection, transparency, consent, and bias and fairness.
Many consumer mental health apps are not regulated, and there have been related concerns regarding their lack of an evidence base.
Digital therapeutics are regulated as medical devices, and so the United States Food and Drug Administration (FDA) provides oversight of their safety and effectiveness. However, formulating and implementing quality control measures for the algorithms used in digital therapeutics remains challenging, as does evaluating the external elements (eg, operating systems or connectivity) necessary for delivering them. Many digital therapeutics are meant to evolve continuously, which may mean that they need to be re-evaluated after an initial certification.
The FDA’s Digital Health Software Precertification (Pre-Cert) Program certifies companies that are found to have “a robust culture of quality and organizational excellence,” and then gives them a streamlined process for product approvals.
This program is meant to address the challenges of regulating digital tools, but it has been criticized for providing less stringent standards than those used for pharmaceuticals (eg, lacking clarity regarding evaluation criteria and a complete definition of excellence). Critics have also called for improved accountability for maintaining standards.
The regulation of medical devices in the United States and Europe focuses on the product: the digital tool itself. However, it is important to recognize that a digital tool will be used within the context of a health delivery system, for purposes and goals specified within that system, perhaps as part of a plan for allocating available resources or for treating a particular patient population. Therefore, in order to adequately assess the safety and efficacy of a digital tool, a systems view regarding how that tool will be used is also necessary.
Digital tools that rely on machine learning present additional challenges for regulation. With machine learning algorithms, it can be difficult to determine why specific data inputs led to a particular output.
Consequently, it can be hard to evaluate and address systematic problems in the outputs, such as biases that disproportionately impact different populations. There have been efforts to develop algorithms that are more explainable, but best practices for identifying and addressing potential biases are still evolving.
There have been calls for more transparency in health algorithms, such as developers allowing for third-party review of algorithms. It is also important for clinicians to carefully consider how to convey the risks and limitations of digital therapeutic tools to patients for informed consent purposes. Clinicians themselves may need training to understand the potential limitations of these digital tools. Involving relevant stakeholders, from clinicians to patients and community members, in plans for the adoption and implementation of digital therapeutics in a health care system can also help address fairness concerns.
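As one illustration of the kind of subgroup auditing such reviews might involve, the sketch below compares a screening algorithm's accuracy across demographic groups and flags disparities. All data, names, and thresholds here are hypothetical, not drawn from any real tool.

```python
# Minimal sketch of a demographic subgroup audit for an algorithm's
# outputs. All records and the 10-point disparity threshold are
# hypothetical; real audits would use held-out clinical data and
# validated fairness metrics.

def subgroup_accuracy(records):
    """Return accuracy of predictions per demographic group."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Hypothetical (group, model_prediction, clinician_label) triples.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]

rates = subgroup_accuracy(records)
# Flag any group whose accuracy trails the best group by >10 points.
best = max(rates.values())
flagged = [g for g, r in rates.items() if best - r > 0.10]
print(rates)    # {'A': 1.0, 'B': 0.5}
print(flagged)  # ['B']
```

A flagged group does not by itself establish unfairness, but it marks where a third-party reviewer should look more closely.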
Privacy and Data Protection
Mental health data are widely viewed as more sensitive and potentially stigmatizing than other health data.
Last year, a data security flaw in a popular psychotherapy app in Finland was exploited by hackers, who then blackmailed thousands of users over their personal data.
This incident highlighted both the value of behavioral information and the importance of strong data security measures. In order to facilitate telehealth during the pandemic, the Office for Civil Rights at the Department of Health and Human Services announced enforcement discretion under the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule, waiving penalties for violations made in the good faith provision of telehealth.
Although this leniency is meant to end with the pandemic, there will be continuing tension between the accessibility afforded by digital technology, the potential exposure of patient data through these tools, and the appropriate balancing of accountability and liability concerns.
Data gathered through digital therapeutics would generally be subject to HIPAA, which establishes protections for health information used by covered entities (ie, health care providers, health plans, and health care clearinghouses).
The Health Information Technology for Economic and Clinical Health (HITECH) Act further requires that business associates of a HIPAA-covered entity comply with the HIPAA Security Rule.
There have been some incidents in which business associates did not adequately protect personal data. When digital therapeutics are prescribed by covered entities, business associate agreements that include provisions for compliance should be in place with the digital therapeutic company and its associates.
Data brokerage is a $200 billion industry; thus, the current landscape of data brokerage and sharing presents additional concerns for the protection of patient data. Data analytics make it possible to draw behavioral health inferences from seemingly unrelated information (eg, location data), and these inferences can lead to negative ramifications for patients (eg, higher insurance rates or employment discrimination).
Although de-identified data (data from which 18 specific identifiers, including name and age, have been removed) may be shared without restriction under HIPAA, advances in computing and the availability of large public databases make re-identification of personal data increasingly feasible.
Thus, de-identified patient data that are shared with third parties could later be re-identified and used in ways that the patient may not have foreseen or expected.
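A toy sketch of how such re-identification can work: records stripped of direct identifiers often retain quasi-identifiers (eg, ZIP code, birth year, and sex) that can be linked to a named public dataset. Every record, name, and field below is hypothetical, invented purely for illustration.

```python
# Sketch of linkage re-identification: "de-identified" records are
# matched to a named public list on shared quasi-identifiers.
# All data are hypothetical.

deidentified = [  # direct identifiers removed, quasi-identifiers retained
    {"zip": "94301", "birth_year": 1980, "sex": "F", "dx": "depression"},
    {"zip": "94301", "birth_year": 1975, "sex": "M", "dx": "anxiety"},
]

public_registry = [  # eg, a voter-roll-style public list with names
    {"name": "Jane Roe", "zip": "94301", "birth_year": 1980, "sex": "F"},
    {"name": "John Doe", "zip": "94301", "birth_year": 1975, "sex": "M"},
]

def link(records, registry):
    """Match de-identified records to named entries on quasi-identifiers."""
    keys = ("zip", "birth_year", "sex")
    index = {tuple(p[k] for k in keys): p["name"] for p in registry}
    return {index[tuple(r[k] for k in keys)]: r["dx"]
            for r in records if tuple(r[k] for k in keys) in index}

print(link(deidentified, public_registry))
# {'Jane Roe': 'depression', 'John Doe': 'anxiety'}
```

When a quasi-identifier combination is unique in both datasets, the sensitive diagnosis attaches back to a name, which is why unrestricted sharing of nominally de-identified data carries residual risk.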
An increasing number of jurisdictions have considered implementing personal and biometric data regulations, such as the General Data Protection Regulation in the European Union or the California Consumer Privacy Act.
Against this backdrop, patients need to appreciate the risks and benefits of the data collected through digital therapeutics; clinicians should convey this information through an informed consent process.
In addition, some digital therapeutics continuously monitor patients, collecting a great amount of personal data. Further studies should evaluate the impact of pervasive surveillance on patients and the therapeutic alliance.
Bias and Fairness in Digital Therapeutics
The COVID-19 pandemic, as well as recent social justice movements, has put a spotlight on bias and inequities in the health care system. Because of historical injustices experienced by Black and Latinx individuals in health care, these groups are more likely to express concerns regarding the privacy and quality of digital mental health tools.
The shift to telehealth demonstrated that not all communities or populations have the resources or infrastructure to take advantage of digital tools. Community mental health centers, which disproportionately serve Black and Latinx patients, are less likely to have the necessary equipment.
If digital therapeutics are to fulfill the promise of increased access, improvements are needed in infrastructure, training, and the availability of clinician oversight to better serve low-income populations.
Associated resources, such as internet connection or hardware, may also be needed.
Algorithms and digital health technologies also raise issues of racial bias and fairness.
There are different types of bias, such as an inadequate fit between the data collected and the research purpose, datasets that do not have representative samples of the target population, and digital tools that produce disparate effects based on how they are implemented.
If the research population for creating the tools is not sufficiently representative of the diverse contexts in which the digital therapeutics will be used, it can lead to worse outcomes for certain groups or communities. There are a number of approaches to addressing bias in digital health tools, such as technological fixes in datasets and algorithms, or outlining principles for fairness in algorithmic tools.
These are important measures, but there must be a broader effort to detect the ways social inequities can shape the development and efficacy of digital mental health tools.
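One of the bias types named above, a dataset without a representative sample of the target population, can be checked with a simple composition comparison. The sketch below uses entirely hypothetical counts, population shares, and a hypothetical flagging threshold.

```python
# Sketch of a dataset representativeness check: compare a study
# sample's demographic composition with the target patient
# population. All numbers and the flagging rule are hypothetical.

def composition(counts):
    """Convert raw counts to proportions."""
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

study_sample = {"White": 800, "Black": 80, "Latinx": 60, "Asian": 60}
target_population = {"White": 0.50, "Black": 0.20, "Latinx": 0.20, "Asian": 0.10}

sample_props = composition(study_sample)
# Flag groups whose sample share is under half their population share.
underrepresented = [g for g, share in target_population.items()
                    if sample_props.get(g, 0) < 0.5 * share]
print(underrepresented)  # ['Black', 'Latinx']
```

A check like this only surfaces numerical underrepresentation; it cannot detect the other bias types above, such as poor fit between the data and the research purpose or disparate effects of implementation.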
Although digital therapeutics are regulated, it is important to note that the FDA has not required data regarding diversity in training data for machine learning. In a study of machine learning health care devices approved by the FDA, investigators found that most of the 130 approved tools did not report whether they had been evaluated at more than 1 site, and only 17 provided demographic subgroup evaluations in their submissions.
Because the data collected from some digital therapeutics may also be used for health research purposes, digital tools that are of limited effectiveness or accessible to select populations could exacerbate existing health care inequalities.
Developers, researchers, and clinicians need to consider the usability and accessibility of digital therapeutics for culturally diverse populations and marginalized groups.
Digital therapeutics should be evaluated on how well their designs and implementation strategies take into account the needs of diverse populations (eg, individuals of various age groups, races, genders, linguistic backgrounds, and disability statuses). Engaging diverse stakeholders is vital for providing equitable mental health care and avoiding a deeper digital divide in access. Future research should inform best practices, particularly in terms of how digital therapeutics interact with the provision of mental health services in real-world settings.
Telehealth and digital therapeutics hold great promise in improving care for those with mental illness. It is, however, important that we seek to integrate digital tools into mental health care in ways that support, rather than disrupt, the therapeutic relationship and provide equitable care.
At the systems and policy levels, funding and resources are needed to provide care for different mental health needs as well as to broaden access to high-quality care for marginalized groups. Such efforts will require attention to a range of issues, including reimbursement, infrastructure, and developing appropriate care models (eg, stepped-care models).
Digital therapeutics also raise questions about appropriate lines of oversight and liability; they potentially impact the nature of the fiduciary relationships involved.
Frameworks for how digital therapeutics can address preventive care, patients in crisis, or special populations (eg, those with severe mental illness) also need to be developed and implemented. If we can meet these ethical challenges, then digital therapeutics will provide not only innovative, but also equitable mental health care.
is assistant professor at Stanford Center for Biomedical Ethics and in the Department of Pediatrics. She has a secondary appointment in the Department of Psychiatry at Stanford University’s School of Medicine.
1. Martinez-Martin N, Dasgupta I, Carter A, et al.
Ethics of digital mental health during COVID-19: crisis and opportunities.
JMIR Ment Health
2. Bauer M, Glenn T, Monteith S, et al.
Ethical perspectives on recommending digital technology for patients with mental illness.
Int J Bipolar Disord
3. Torous J, Roberts LW.
The ethical use of mobile health technology in clinical psychiatry.
J Nerv Ment Dis
4. Weisel KK, Fuhrmann LM, Berking M, et al.
Standalone smartphone apps for mental health-a systematic review and meta-analysis.
NPJ Digit Med
5. US Food and Drug Administration. Digital health software precertification (pre-cert) program. September 11, 2020. Accessed April 21, 2021.
6. Warren E, Murray P, Smith T. Letter to FDA on regulation of software as medical device; 2018. October 10, 2018. Accessed July 6, 2021.
7. Gerke S, Babic B, Evgeniou T, Cohen IG.
The need for a system view to regulate artificial intelligence/machine learning-based software as medical device.
NPJ Digit Med
8. Magrabi F, Ammenwerth E, McNair JB, et al.
Artificial intelligence in clinical decision support: challenges for evaluating AI and practical implications.
Yearb Med Inform
9. Challen R, Denny J, Pitt M, et al.
Artificial intelligence, bias and clinical safety.
BMJ Qual Saf
10. Obermeyer Z, Powers B, Vogeli C, Mullainathan S.
Dissecting racial bias in an algorithm used to manage the health of populations.
11. Amann J, Blasimme A, Vayena E, et al.
Explainability for artificial intelligence in healthcare: a multidisciplinary perspective.
BMC Med Inform Decis Mak
12. Aitken M, de St Jorre J, Pagliari C, et al.
Public responses to the sharing and linkage of health data for research purposes: a systematic review and thematic synthesis of qualitative studies.
BMC Med Ethics
13. Ralston W. They told their therapists everything. Hackers leaked it all. May 4, 2021. Accessed June 13, 2021.
14. US Department of Health and Human Services. Notification of enforcement discretion for telehealth remote communications during the COVID-19 nationwide public health emergency. March 17, 2020. Accessed April 24, 2020.
15. Congress.gov. Health Insurance Portability and Accountability Act of 1996. Public Law 104-191. August 21, 1996. Accessed July 15, 2021.
16. US Department of Health and Human Services. HITECH Act Enforcement Interim Final Rule. Reviewed June 16, 2017. Accessed July 15, 2021.
17. Perakslis ED.
Cybersecurity in health care.
N Engl J Med
18. Rothstein MA. Debate over patient privacy control in electronic health records. Hastings Center, Bioethics Forum, 2011. Updated February 5, 2014. Accessed July 6, 2021.
19. Crain M.
The limits of transparency: data brokers and commodification.
New Media & Society
20. Allen M. Health insurers are vacuuming up details about you — and it could raise your rates. July 17, 2018. Accessed July 6, 2021.
21. Wachter S.
Data protection in the age of big data.
22. Skiljic A. The status quo of health data inferences. International Association of Privacy Professionals: Privacy Perspectives. March 19, 2021. Accessed July 6, 2021.
23. Kröger JL, Raschke P, Bhuiyan TR.
Privacy implications of accelerometer data: a review of possible inferences.
Proceedings of the 3rd International Conference on Cryptography, Security and Privacy
. ICCSP ’19. Association for Computing Machinery; 2019:81-87.
24. De-identification of protected health information: how to anonymize PHI. October 18, 2017. Accessed July 6, 2021.
25. Benitez K, Malin B.
Evaluating re-identification risks with respect to the HIPAA privacy rule.
J Am Med Inform Assoc.
26. Yoo JS, Thaler A, Sweeney L, Zang J. Risks to patient privacy: a re-identification of patients in Maine and Vermont statewide hospital data. October 9, 2018. Accessed July 6, 2021.
27. Culnane C, Rubinstein BIP, Teague V. Health data in an open world. Cornell University. Computer Science. December 15, 2017. Accessed July 6, 2021.
28. California Consumer Privacy Act of 2018 [1798.100-1798.199.100]. California Legislative Information. Updated November 3, 2020. Accessed July 15, 2021.
29. Webb Hooper M, Nápoles AM, Pérez-Stable EJ.
COVID-19 and racial/ethnic disparities.
30. van Deursen AJ.
Digital inequality during a pandemic: quantitative study of differences in COVID-19-related internet uses and outcomes among the general population.
J Med Internet Res
31. George S, Hamilton A, Baker RS.
How do low-income urban African Americans and Latinos feel about telemedicine? A diffusion of innovation analysis.
Int J Telemed Appl
32. Conrad R, Rayala H, Diamond R, et al. Expanding telemental health in response to the COVID-19 pandemic. April 7, 2020. Accessed July 6, 2021.
33. Rajkomar A, Hardt M, Howell MD, et al.
Ensuring fairness in machine learning to advance health equity.
Ann Intern Med
34. Gerke S, Minssen T, Cohen G.
Ethical and legal challenges of artificial intelligence-driven healthcare.
In: Bohr A, Memarzadeh K, eds.
Artificial Intelligence in Healthcare
35. Char DS, Shah NH, Magnus D.
Implementing machine learning in health care-addressing ethical challenges.
N Engl J Med
36. Binns R. Fairness in machine learning: lessons from political philosophy. Cornell University. Computer Science. Updated March 23, 2021. Accessed July 6, 2021.
37. Mittelstadt B.
Principles alone cannot guarantee ethical AI.
Nat Mach Intell
38. Wu E, Wu K, Daneshjou R, et al.
How medical AI devices are evaluated: limitations and recommendations from an analysis of FDA approvals.
39. Martschenko D, Martinez-Martin N.
What about ethics in design bioethics?
Am J Bioeth
40. Martinez-Martin N, Dunn LB, Roberts LW.
Is it ethical to use prognostic estimates from machine learning to treat psychosis?
AMA J Ethics
41. Potier R.
The digital phenotyping project: a psychoanalytical and network theory perspective.
42. Dagum P, Montag C.
Ethical considerations of digital phenotyping from the perspective of a healthcare practitioner.
In: Baumeister H, Montag C, eds.
Digital Phenotyping and Mobile Sensing: New Developments in Psychoinformatics.
Springer International Publishing; 2019:13-28.
43. Taylor CB, Fitzsimmons-Craft EE, Graham AK.
Digital technology can revolutionize mental health services delivery: the COVID-19 crisis as a catalyst for change.
Int J Eat Disord
44. Cohen IG, Amarasingham R, Shah A, et al.
The legal and ethical concerns that arise from using complex predictive analytics in health care.
Health Aff (Millwood)