Jan 24, 2026
Misuse of AI Chatbots Tops ECRI’s 2026 Health Technology Hazards List
Artificial intelligence chatbots have emerged as the most significant health technology hazard for 2026, according to a new report from ECRI, an independent, nonprofit patient safety organization.
The finding leads ECRI’s annual Top 10 Health Technology Hazards report, which highlights emerging risks tied to healthcare technologies that could jeopardize patient safety if left unaddressed. The organization warns that while AI chatbots can offer value in clinical and administrative settings, their misuse poses a growing threat as adoption accelerates across healthcare.
Unregulated Tools, Real-World Risk
Chatbots powered by large language models, including platforms such as ChatGPT, Claude, Copilot, Gemini, and Grok, generate human-like responses to user prompts by predicting word patterns from vast training datasets. Although these systems can sound authoritative and confident, ECRI emphasizes that they are not regulated as medical devices and are not validated for clinical decision-making.
Despite those limitations, use is expanding rapidly among clinicians, healthcare staff, and patients. ECRI cites recent analysis indicating that more than 40 million people worldwide turn to ChatGPT daily for health information.
According to ECRI, this growing reliance increases the risk that false or misleading information could influence patient care. Unlike clinicians, AI systems do not understand clinical context or exercise judgment. They are designed to provide an answer in all cases, even when no reliable answer exists.
“Medicine is a fundamentally human endeavor,” said Marcus Schabacker, MD, PhD, president and chief executive officer of ECRI. “While chatbots are powerful tools, the algorithms cannot replace the expertise, education, and experience of medical professionals.”
Documented Errors and Patient Safety Concerns
ECRI reports that chatbots have generated incorrect diagnoses, recommended unnecessary testing, promoted substandard medical products, and produced fabricated medical information while presenting responses as authoritative.
In one test scenario, an AI chatbot incorrectly advised that it would be acceptable to place an electrosurgical return electrode over a patient’s shoulder blade. Following such guidance could expose patients to a serious risk of burns, ECRI said.
Patient safety experts note that the risks associated with chatbot misuse may intensify as access to care becomes more constrained. Rising healthcare costs and hospital or clinic closures could drive more patients to rely on AI tools as a substitute for professional medical advice.
ECRI will examine these concerns further during a live webcast on January 28 focused on the hidden dangers of AI chatbots in healthcare.
Equity and Bias Implications
Beyond clinical accuracy, ECRI warns that AI chatbots may also worsen existing health disparities. Because these systems reflect the data on which they are trained, embedded biases can influence how information is interpreted and presented.
“AI models reflect the knowledge and beliefs on which they are trained, biases and all,” Schabacker said. “If healthcare stakeholders are not careful, AI could further entrench the disparities that many have worked for decades to eliminate from health systems.”
Guidance for Safer Use
ECRI’s report emphasizes that chatbot risks can be reduced through education, governance, and oversight. Patients and clinicians are encouraged to understand the limitations of AI tools and to verify chatbot-generated information with trusted, knowledgeable sources.
For healthcare organizations, ECRI recommends establishing formal AI governance committees, providing training for clinicians and staff, and routinely auditing AI system performance to identify errors, bias, or unintended consequences.
Other Health Technology Hazards for 2026
In addition to AI chatbot misuse, ECRI identified nine other priority risks for the coming year:
- Unpreparedness for a sudden loss of access to electronic systems and patient data, often referred to as a digital darkness event
- Substandard and falsified medical products
- Failures in recall communication for home diabetes management technologies
- Misconnections of syringes or tubing to patient lines, particularly amid slow adoption of ENFit and NRFit connectors
- Underuse of medication safety technologies in perioperative settings
- Inadequate device cleaning instructions
- Cybersecurity risks associated with legacy medical devices
- Health technology implementations that lead to unsafe clinical workflows
- Poor water quality during instrument sterilization
Now in its 18th year, ECRI’s Top 10 Health Technology Hazards report draws on incident investigations, reporting databases, and independent medical device testing. Since its introduction in 2008, the report has been used by hospitals, health systems, ambulatory surgery centers, and manufacturers to identify and mitigate emerging technology-related risks.