Pragmatic AI Advice from UC San Diego Health’s AI Expert

SNI’s Dr. Ashrith Amarnath, UC San Diego Health’s AI expert Dr. Karandeep Singh, and SNI’s Data Analyst Arlene Marmolejo at the AI Forum during the CAPH/SNI Annual Conference in Napa.

 

By Dr. Ashrith Amarnath, Chief Health Officer,
California Health Care Safety Net Institute

 

For public health care systems in the early phases of their AI journey, Dr. Karandeep Singh*, MMSc, Chief Health AI Officer at UC San Diego Health, shared pragmatic advice at the recent CAPH/SNI Annual Conference. Speaking to fellow AI officers, CIOs, CMIOs, and CMOs, Dr. Singh discussed how to establish the foundations for AI initiatives and how to evaluate AI models for equity; his advice is summarized below.

*Dr. Singh is also the Joan and Irwin Jacobs Chancellor’s Endowed Chair in Digital Health Innovation and Associate Professor of Medicine in Biomedical Informatics at UC San Diego School of Medicine, as well as the leader of AI initiatives at the Joan and Irwin Jacobs Center for Health Innovation.  

Four essential steps to building an AI foundation

“You don’t have to do this on your own. Partner with those who have expertise.”

– Dr. Karandeep Singh, Chief Health AI Officer, UC San Diego Health

1. Establish governance

Health systems first need governance structures to evaluate AI models and to establish AI-specific policies and processes, including how to routinely monitor models for effectiveness. Singh recommended assembling a group or committee to govern AI, one that is willing to be held accountable for the decisions made about these models.

Committee members do not need to be AI experts themselves but can partner with specialists instead. For example, external partners can help the AI governing committee read between the lines when assessing vendor offerings.

“The tools are secondary, and the problems we solve are primary.”

– Dr. Singh

2. Determine AI strategy

The cornerstone of an AI strategy is identifying and prioritizing solvable problems, rather than looking at an increasingly flooded marketplace of AI tools and unsystematically adopting what’s available. Once a health system has determined its most pressing challenges to address with AI, Singh suggested starting out by reviewing and cataloging the AI tools that are already available, such as those within the electronic health record (EHR).

“If I’ve learned one thing in this space, it’s that if a frontline clinician gives you qualitative insight – when they say something isn’t working – they’re usually right.”

– Dr. Singh

3. Prioritize clinician input and feedback

Singh recommended focusing on AI use cases that interest clinicians or help meet an operational need. There must also be a system for consistently gathering clinician input.

“Look at whether the AI model will waste people’s time or create extra busy work.”

– Dr. Singh

4. Consider the amount of work that AI creates

When evaluating AI models, assess the work they create and ask:

  • Is the extra work worth it?
  • Is there a way to decrease this new workload?

Singh said that health systems need to avoid adopting a “work creation tool” and instead ensure AI models are effective. If a model does generate additional work, that tradeoff needs to be recognized and deemed worthwhile in service of the organization’s goals.

The importance of gauging the effectiveness of AI tools cannot be overstated. Given the high cost of some of the tools, taking vendors’ effectiveness claims at face value can be a costly mistake.

“What are the potential harms of using the AI model?”

– Dr. Singh

How to evaluate AI models for equity

For public health care systems, ensuring that AI systems are equitable and free from bias is paramount.

At UC San Diego Health, Singh said, those proposing AI models complete a questionnaire that initially asked, “Is your model equitable?” Because respondents often struggled with this broad question, the AI committee at UC San Diego Health replaced it with more specific questions to better address bias and equity:

  • What are the potential harms of using the AI model?
  • What harms could occur if the model is inaccurate?
  • What harms could occur if the model is accurate?
  • What harms could occur to particular groups, keeping in mind anti-discrimination laws?
  • What is your mitigation plan for these harms?

These targeted questions can help health systems more effectively evaluate an AI model’s impact on equity.

For more information, read Dr. Singh’s presentation here.