Designing Generative AI Policy and Procedures for High-Risk Applications

Healthcare

How we designed an AI ethical risk policy for 77 point-of-care sites, including primary care, behavioral health facilities, dental clinics, vision clinics, mobile medical units, mobile counseling centers, physical rehabilitation clinics, and more.

case study.designing generative ai policy and procedures

Problem

Large Language Models (LLMs) like ChatGPT have become accessible to healthcare practitioners. There are documented cases of practitioners using LLMs in high-risk contexts, e.g. performing differential diagnoses of patients and recommending courses of treatment. While LLMs may be appropriate in some healthcare contexts, they have not been sufficiently vetted for safety. Nevertheless, LLMs are readily available on everyone's personal device, and many practitioners are unaware of the risks (e.g. hallucinations, bias, privacy violations, etc.).

Solution

After performing an in-depth AI ethical risk assessment of the organization, Virtue developed an AI policy focused on the use of generative AI. The policy articulates ethical standards aligned with the World Health Organization's (WHO) six core principles for AI deployment in healthcare, and it includes authorized and unauthorized procedures and workflows consistent with those principles.

Context

The Board of Directors, aware of the ethical, reputational, and legal risks of AI, charged their Chief Information Officer with immediately designing and rolling out a generative AI policy. The CIO contacted Virtue for its expertise and efficiency, and Virtue recommended aligning with a healthcare-specific framework.