April 5, 2024

Five Steps to Reduce Generative AI Risks in Healthcare

By: Curi Editorial Team

Artificial intelligence (AI) is machine-displayed intelligence that simulates human behavior or thinking and can be trained to solve specific problems; OpenAI's ChatGPT, Google's Gemini, and Microsoft's Copilot are examples. The American Medical Association prefers the term "augmented intelligence," which emphasizes AI's assistive role as a tool designed to enhance, rather than replace, human intelligence.

Many physicians and healthcare organizations already use AI to help with administrative tasks, provide clinical decision support for diagnosis, and reduce medication dosage errors. In its Clinician of the Future report, Elsevier Health found that 73% of surveyed physicians consider it desirable for physicians to be digital experts, and 48% consider it desirable for physicians to use AI in clinical decision-making.

Watch the recorded webinar, Is ChatGPT the New Dr. Google?, featuring Curi experts Jason Newton and Margaret Curtin, to learn more about AI's benefits, risks, and risk strategies.
Benefits and Risks of AI Use in Healthcare

The potential benefits of using AI in healthcare include:

- Improved diagnosis through AI analysis of images for disease detection
- Reduction of medication dosage errors
- Optimization of treatment regimens based on the patient's profile
- Administrative workflow efficiencies, such as scheduling and AI scribes

The risks of AI include, but are not limited to:

- Inaccurate, varying, or inconsistent output
- Lack of validation, sourcing, and credibility of outputs
- Bias in training databases
- Data and cyber breaches
- Patient concerns about transparency and privacy
- Lack of knowledge, policies, regulation, and accountability for AI-related errors
- Medical malpractice liability
- Potential malpractice allegations for using AI, or for failing to use it

Potential AI-related malpractice risks for physicians may stem either from the use of AI or from the failure to use AI that is known to improve care:

- Delayed diagnosis and/or failure to diagnose: Allegations that generative AI might have shortened the time needed to arrive at an accurate diagnosis or guided the physician toward a proper diagnosis, thus averting the adverse outcome; or that the use of AI caused an inaccurate diagnosis.
- Surgical treatment errors: Allegations that AI might have forewarned the surgeon about potential complications or identified patient-specific risks that weren't otherwise foreseen or conveyed to the patient; or that an AI-enabled surgical robot caused a patient injury during a procedure.
- Improper medical treatment or delay in treatment: Allegations that AI might have forewarned the physician about, or identified, potential complications of a proposed treatment.
- Improper patient monitoring: Allegations that AI might have recognized a worsening medical condition or trend that the physician missed.
Hospitals, health systems, and clinics may also be liable for failing to exercise due care in selecting, introducing, using, or maintaining AI technology. Healthcare organizations may additionally be held vicariously liable for errors arising from the use of AI by their employed physicians and care team members.

Five Steps to Reduce Risk

1. Engage physicians in conversations about AI; chances are they are already using it in some form.
2. Establish a multidisciplinary team to review AI software, services, or devices; conduct demos before purchase; and perform failure mode and effects analysis (FMEA) tabletop exercises before implementation.
3. Develop policies and procedures for acceptable use of AI: when and how to use it.
4. Provide awareness and skills training on AI use and on how to identify and report malfunctions or accuracy issues, such as bias.
5. Add AI documentation standards to current documentation policies, covering physician interaction with AI and, in particular, documentation of when an AI recommendation is rejected.

Curi Editorial Team