Best Practices for Usability Testing in DHTs

Introduction

In clinical research, digital health technologies (DHTs)—including mobile health apps, wearables, biosensors, and remote monitoring systems—are revolutionizing how data are collected. Yet even the most technically sophisticated device can fail if participants cannot use it correctly. Usability testing is therefore not just a design exercise—it is a regulatory, ethical, and scientific necessity.

Regulatory agencies now recognize usability as central to ensuring participant safety, data quality, and Good Clinical Practice (GCP) compliance. A device that confuses users, requires complex setup, or delivers unclear feedback risks introducing bias and data loss [1]. This article explores best practices for usability testing in DHTs, synthesizing guidance from FDA, EMA, ISO, and IEC frameworks to support sponsors, vendors, and designers in achieving both compliance and participant trust.

The Role of Usability in Clinical Trials

Unlike professional medical devices operated by trained clinicians, most DHTs are layperson-operated—used by participants in their homes, often without technical assistance. These devices generate data that directly support primary or secondary endpoints. Consequently, usability directly impacts data integrity [2].

For example, incorrect sensor placement or an app navigation error can produce inaccurate readings or incomplete data. Such usability-related deviations have led to data exclusions in up to 15% of decentralized trials [3].

Usability testing therefore verifies not only that a DHT can be used, but that it will be used correctly and consistently in real-world contexts.

Regulatory Expectations

  • FDA: Applying Human Factors and Usability Engineering to Medical Devices (2016) and Digital Health Technologies for Remote Data Acquisition in Clinical Investigations (2023) require sponsors to demonstrate that intended users can operate DHTs safely and effectively under expected use conditions [4,5].

  • EU MDR (2017/745): Annex I mandates that devices be designed to minimize use error and support safe operation under normal conditions of use and reasonably foreseeable misuse [6].

  • IEC 62366-1:2015: Defines a structured usability engineering process, including user research, formative and summative testing, and documentation [7].

  • ICH E6(R3): Extends GCP oversight to digital systems, requiring sponsors to verify user competence and system usability in decentralized environments [8].


Together, these frameworks establish usability as a core design control, not an optional design improvement.

The Usability Testing Process

Effective usability testing follows a structured lifecycle, combining formative (iterative) and summative (confirmatory) evaluations.

  • Define users and use environments: identify user profiles, including age, health condition, digital literacy, and cultural factors, and consider environmental influences such as lighting, mobility, and internet connectivity [7].

    Example: A spirometer app validated in hospital settings may not perform as expected in home environments with poor lighting or ambient noise [9].

  • Formative usability testing should begin during prototype development. This iterative process identifies design flaws before finalization. Methods include:

    • Cognitive walkthroughs: stepping through tasks from the user’s perspective to identify likely points of confusion.

    • Heuristic evaluation: assessing interface compliance with usability principles.

    • Think-aloud protocols: capturing user reasoning during operation.

    FDA and ISO recommend multiple formative cycles, each refining interface design and reducing use-related risks [5,7].

  • Summative testing confirms that representative users can perform critical tasks correctly and safely.

    • Conduct testing with intended users, not experts.

    • Simulate realistic use conditions and foreseeable misuse.

    • Record error types, causes, and severity.

    • Analyze whether design mitigations adequately address risks identified during formative testing [10].

    Summative testing results form part of the usability engineering file, which supports EU MDR technical documentation and FDA design history file requirements.
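The summative recording steps above (error types, causes, severity, and unassisted task completion) can be sketched as a simple session log. This is an illustrative assumption, not a standard schema: the field names and the 0–3 severity scale are our own, and a real program would map severities to the risk levels in the device's risk management file.

```python
from dataclasses import dataclass, field

@dataclass
class TaskResult:
    """One participant's attempt at one critical task (hypothetical record shape)."""
    participant: str
    task: str
    success: bool                                # completed without moderator assistance
    errors: list = field(default_factory=list)   # observed use errors, e.g. "sensor inverted"
    severity: int = 0                            # worst error: 0 none, 1 minor, 2 moderate, 3 critical

def task_success_rate(results, task):
    """Percent of participants who completed `task` unassisted."""
    attempts = [r for r in results if r.task == task]
    if not attempts:
        return 0.0
    return 100.0 * sum(r.success for r in attempts) / len(attempts)

def critical_errors(results):
    """Results carrying a severity-3 error; each would need a documented
    root cause and mitigation before the design is released."""
    return [r for r in results if r.severity >= 3]
```

For example, a session where two of three participants apply a sensor unassisted yields a task success rate of about 66.7%, and the one severity-3 event is flagged for root-cause analysis.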

  • Link to risk management: usability testing must connect directly to the device’s risk management file. Use-related risks (e.g., misinterpretation of app notifications or failure to wear sensors correctly) must be identified, evaluated, and mitigated through design controls [11].

    Example: A wearable ECG device introduced an automatic feedback feature (“device correctly positioned”) to mitigate use-related risk. The change reduced incorrect wear events by 80% in subsequent usability tests [12].

  • Training materials and instructions for use (IFUs) must be tested as part of usability validation. In decentralized trials, digital tutorials and in-app prompts often replace in-person training; assess comprehension, recall, and accessibility across user demographics [13].

  • Document everything: maintain comprehensive records of usability evaluations, linking test findings to design iterations and risk mitigations. This documentation demonstrates compliance during FDA or EMA inspections and supports future DHT modifications [6,7].

Metrics and Success Criteria

Quantitative metrics:

  • Task success rate (% of users completing critical tasks without assistance).

  • Error frequency and severity.

  • Time-on-task (efficiency).

  • Post-test satisfaction (System Usability Scale or SUS score).

Qualitative insights:

  • User perceptions of reliability and clarity.

  • Emotional and cognitive workload indicators.

  • Observations of unanticipated use patterns.

Combining both measures ensures usability evidence is comprehensive and credible [10].
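Of these metrics, the SUS score has a fixed scoring rule worth spelling out. A minimal sketch using the standard SUS arithmetic (the function and variable names are our own):

```python
def sus_score(responses):
    """System Usability Scale: ten 1-5 Likert items mapped to a 0-100 score.

    Odd-numbered items are positively worded and contribute (response - 1);
    even-numbered items are negatively worded and contribute (5 - response).
    The sum of contributions is multiplied by 2.5.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses, each from 1 to 5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5
```

A neutral response of 3 on every item yields a score of 50; scores around 68 are commonly treated as the benchmark for "average" usability, though acceptance thresholds should be predefined in the usability plan.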

Common Pitfalls in DHT Usability Testing

  1. Testing with experts instead of laypersons. Results may overestimate usability.

  2. Ignoring environmental variability. Home use introduces lighting, noise, or accessibility issues.

  3. Neglecting firmware or app updates. Even minor software changes can affect usability or introduce new risks [14].

  4. Failing to document iterative improvements. Regulators require traceable evidence of risk mitigation.

  5. Insufficient inclusivity testing. Accessibility for older adults, low-literacy users, or those with disabilities must be evaluated [15].

Case Studies

Case 1 – Remote Hypertension Monitoring:

Usability testing of home blood pressure monitors revealed high rates of incorrect cuff placement. Revised visual guides and color-coded indicators improved task success from 65% to 95% [9].

Case 2 – Wearable Patch for Cardiac Rhythm Monitoring:

Summative testing across multiple age groups identified difficulties with patch removal. A redesign of adhesive tabs reduced use-related skin irritation and improved compliance [12].

Case 3 – eCOA App for Pain Reporting:

An iterative usability process improved readability and navigation for older participants, increasing data entry completeness by 20% [15].

Integrating Usability into the DHT Development Lifecycle

Sponsors and vendors should embed usability engineering across all development stages—not as a final gate but as an ongoing process. Best practices include:

  • Establish a usability plan aligned with risk management.

  • Conduct multi-phase testing covering design, validation, and post-market surveillance.

  • Involve cross-functional teams—UX designers, clinical scientists, and regulatory experts.

  • Use simulated real-world studies to assess usability under remote conditions.

Ultimately, usability is the bridge between regulatory compliance and patient engagement.

Conclusion

Usability testing transforms DHTs from technological possibilities into practical, reliable tools for clinical research. It safeguards participants, protects data integrity, and fulfills regulatory expectations under GCP and MDR.

Sponsors that invest in robust usability engineering not only mitigate compliance risk but also enhance participant experience and data quality. In the era of digital clinical trials, usability is not just design excellence—it is scientific credibility.

Next

Device Fit and Classification: When “Commercial” Becomes “Clinical” (Part 3/8)