
I, AI: Rules to Remember When Using AI in Healthcare

Prasanna Ganesan
October 9, 2024

Before diving into how to use artificial intelligence (AI) in healthcare, let’s start with a brief refresher on the basics. While AI dates back almost 75 years, the rise of cloud computing and rapid advancements in AI, such as generative AI (GenAI), have unlocked transformative opportunities for health plans to reduce operational costs through automation and process improvements.

Figure: The evolution of AI, from artificial intelligence through machine learning and deep learning to generative AI.
  1. Artificial Intelligence (AI): Computer systems that leverage rules to mimic human intelligence to recognize patterns and solve problems or perform tasks.
  2. Machine Learning (ML): A branch of AI that uses advanced algorithms to learn from data and integrated feedback to make predictions or to suggest actions.
  3. Deep Learning (DL): A subset of ML that uses multiple layers to analyze complex data to produce insights or predictions.
  4. Generative AI (GenAI): A DL model that understands data well enough to generate new text, images, or other content, with outputs capable of summarizing, responding to, or transforming data from one format or language to another.

The latest advancements in AI can help health plans radically improve their operations and ultimately provide better service and care to patients while reducing provider abrasion. However, to ensure safe, responsible investments in AI, health plans need to consider the applications of AI and how to use it safely. This is where Asimov’s laws come into play.

Asimov’s Laws

Author Isaac Asimov introduced the Three Laws of Robotics (also known as the Three Laws or Asimov’s Laws) in his 1942 short story “Runaround,” later collected in the 1950 book I, Robot. These laws govern how robots may interact with humans.

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While AI and robots aren’t synonymous, these rules serve as a framework that can be applied to how we envision AI being used in healthcare. With great power comes great responsibility, so rules need to be established—especially in healthcare.

Applying Asimov’s Laws to AI in Healthcare

Today, we have machines that mimic human intelligence, and therefore we have a regulatory and moral requirement to develop guidelines for their responsible use. We took Asimov’s laws and reframed them through the lens of AI in healthcare (and even added a fourth).

While these are just a subset of rules created with guidance from the National Institute of Standards and Technology (NIST), they establish guardrails for the safe, secure, and trustworthy use of AI in healthcare today.

  • First Law: AI shall not act autonomously in ways that could result in patient harm.
  • Second Law: AI shall not infer or interpret policies or regulations; guidelines must be explicit and unalterable to ensure adherence to the First Law.
  • Third Law: AI shall be able to explain its own actions per the Second Law.
  • Fourth Law: Any error that AI makes shall have an easy path to being corrected and, in the future, avoided.

1. AI Should Never Be the Final Arbiter

AI should never be the final decision-maker for irreversible actions that can directly impact patient health. Health plans committed to the “FAVES” principles (Fair, Appropriate, Valid, Effective, Safe) should clearly delineate which tasks can safely be delegated to AI and which can be supported by AI but must remain under human review (e.g., care denials). Quite often, processes can be reinvented to redirect expert human supervision to potentially risky decisions while reaping the benefits of automation for safe ones.
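To make that delegation boundary concrete, here is a minimal, hypothetical sketch of human-in-the-loop routing. The action names, confidence threshold, and types are illustrative assumptions, not any plan’s or vendor’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical action categories; a real system would have a far richer taxonomy.
AUTO_SAFE_ACTIONS = {"approve_clean_claim", "request_missing_attachment"}
HUMAN_REVIEW_ACTIONS = {"deny_care", "downgrade_claim", "terminate_coverage"}

@dataclass
class AIRecommendation:
    action: str          # what the model suggests
    confidence: float    # model confidence, 0.0 to 1.0
    rationale: str       # explanation tied to codified rules (see the Third Law)

def route(rec: AIRecommendation) -> str:
    """Route an AI recommendation: automate only reversible, low-risk actions;
    anything that could harm a patient goes to a human reviewer (First Law)."""
    if rec.action in HUMAN_REVIEW_ACTIONS:
        return "queue_for_human_review"
    if rec.action in AUTO_SAFE_ACTIONS and rec.confidence >= 0.95:
        return "execute_automatically"
    # Default to the safe path when the action is unrecognized or confidence is low.
    return "queue_for_human_review"
```

The design choice worth noting is the default: when an action is unknown or confidence is low, the fallback is human review, never automation.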

2. AI Should Never Be Used to Guess at Policy Rules

It’s tempting to throw AI at past decisions and ask it to infer the rules of the game from how things were done. This strategy is fraught with peril because the decisions of the past aren’t guaranteed to be correct. To support accurate, consistent, and timely decisions, health plans need to standardize and codify rules at a granular level. They should strive to create a comprehensive and trustworthy decisioning source of truth for medical policy, coding guidelines, regulations, and contracts. AI can play a major role in helping develop this decisioning source of truth, on which all future decisions can then be based with confidence.
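As a simple illustration, the sketch below codifies a policy rule as explicit, reviewable data that is evaluated deterministically, rather than asking a model to infer rules from historical outcomes. The rule ID, fields, and criteria are invented for this example.

```python
# A minimal sketch of an explicit, codified rule set. Every field is
# human-readable and auditable; nothing is inferred from past decisions.
RULES = {
    "MP-101": {
        "description": "Lumbar spine MRI requires 6 weeks of conservative therapy first",
        "procedure_code": "72148",
        "requires": ["conservative_therapy_6_weeks"],
    },
}

def evaluate(claim: dict) -> tuple[bool, list[str]]:
    """Deterministically evaluate a claim against codified rules.
    Returns (meets_policy, rule_ids_applied) so every decision can
    cite the exact rules it relied on."""
    applied = []
    for rule_id, rule in RULES.items():
        if claim.get("procedure_code") == rule["procedure_code"]:
            applied.append(rule_id)
            if not all(req in claim.get("history", []) for req in rule["requires"]):
                return False, applied
    return True, applied

# Example: evaluate({"procedure_code": "72148", "history": []})
# -> (False, ["MP-101"]): the claim fails MP-101's therapy requirement.
```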

3. AI Should Be Able to Explain Actions Per the Second Law

AI’s decisions must be tracked automatically and made available for stakeholders to review. Decisions should be transparent and auditable, so it is always possible to explain how they were made in relation to the standardized, immutable rules codified per the Second Law. This is especially important in healthcare, where decisions must account for, and be grounded in, policies and regulations.
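One common way to achieve this is an append-only log that ties each decision to the rule IDs it relied on. The sketch below is a minimal, assumed implementation; the field names and file-based storage are purely illustrative.

```python
import json
import datetime

def log_decision(decision_id: str, outcome: str, rule_ids: list[str], inputs: dict) -> str:
    """Record an audit entry linking a decision to the codified rules it was
    based on (Second Law), so it can be explained and audited later (Third Law)."""
    entry = {
        "decision_id": decision_id,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "outcome": outcome,
        "rules_applied": rule_ids,   # e.g., ["MP-101"]
        "inputs": inputs,            # the facts the decision was based on
    }
    line = json.dumps(entry)
    # Append-only: entries are added, never edited, preserving the audit trail.
    with open("decision_audit.log", "a") as f:
        f.write(line + "\n")
    return line
```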

4. Any AI Error Should Be Correctable and, in the Future, Avoided

If the wrong action was taken, it must be reversible. This is simple in principle and builds on the Third Law: with a clear record of how decisions were made and which actions resulted from them, those actions can be reversed. What’s more, AI must be able to correct errors, and to learn from errors that surface through downstream claim corrections, so that future errors are prevented. Because AI continues to learn, even errors should serve as data points that inform the system in similar future instances. Reversing an error is good; reversing and learning from an error is great.
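A hedged sketch of what that feedback loop might look like follows: the correction both supersedes the original decision and becomes a labeled example for future learning. The function and field names are hypothetical.

```python
def record_correction(decision_id: str, original_outcome: str,
                      corrected_outcome: str, reason: str,
                      training_queue: list) -> dict:
    """Reverse an erroneous decision and feed the correction back as a
    labeled example so similar future errors can be avoided (Fourth Law)."""
    correction = {
        "decision_id": decision_id,
        "original_outcome": original_outcome,
        "corrected_outcome": corrected_outcome,
        "reason": reason,  # e.g., a downstream claim correction
    }
    # Reversal: the corrected outcome supersedes the original one.
    # Learning: the same record becomes a data point for retraining/evaluation.
    training_queue.append(correction)
    return correction
```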

There’s a Time and Place for AI in Healthcare

Powering healthcare technology with AI isn’t just a recommendation; it’s a necessity. However, it’s important to remember that “AI” doesn’t equate to robots and, therefore, doesn’t mean total autonomy.

There are humans behind the AI: people with clinical knowledge who help build the algorithms and organize the data. And humans are often still needed after AI has automated certain processes, to double-check the results. By remembering the four rules of AI in healthcare, structured after Asimov’s laws for robot interactions, your health plan can approach AI in a way that’s safe, secure, and transparent.

To learn more about how Machinify can improve your health plan’s claims processes, schedule a demo today.
