Before diving into how to use artificial intelligence (AI) in healthcare, let’s start with a brief refresher on the basics of AI. While AI dates back almost 75 years, the rise of cloud computing and rapid advancements within AI, such as generative AI (GenAI), have unlocked transformative opportunities for health plans to reduce operational costs through automation and process improvements.
The latest advancements in AI can help health plans radically improve their operations and ultimately provide better service and care to patients while reducing provider abrasion. However, to invest in AI safely and responsibly, health plans need to consider where to apply it and how to govern its use. This is where Asimov’s laws come into play.
Author Isaac Asimov introduced the Three Laws of Robotics (also known as the Three Laws or Asimov’s Laws) in his 1942 short story “Runaround,” later collected in I, Robot (1950). These laws govern how robots may interact with humans.
While AI and robots aren’t synonymous, these rules serve as a framework that can be applied to how we envision AI being used in healthcare. With great power comes great responsibility, so rules need to be established—especially in healthcare.
Today, we have machines that mimic human intelligence and, therefore, we have a regulatory—and moral—requirement to develop guidelines for their responsible use. We took Asimov’s laws and reframed them through the lens of AI in healthcare (and even added a fourth).
While these are just a subset of rules created with guidance from the National Institute of Standards and Technology (NIST), they establish guardrails for the safe, secure, and trustworthy use of AI in healthcare today.
AI should never be the final decision-maker for irreversible actions that can directly impact patient health. Health plans committed to the “FAVES” principles (Fair, Appropriate, Valid, Effective, Safe) should clearly delineate which tasks can safely be delegated to AI and which AI can support but must remain under human review (e.g., care denials). Quite often, processes can be reinvented to redirect expert human supervision to potentially risky decisions while reaping the benefits of automation for safe ones.
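As a minimal, illustrative sketch of this kind of delegation (the names, threshold, and routing rules below are hypothetical, not a real system), a First Law–style gate might route AI recommendations so that denials always reach a human reviewer:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    claim_id: str
    action: str        # e.g., "approve" or "deny"
    confidence: float  # model confidence, 0.0 to 1.0

def route(rec: Recommendation) -> str:
    """AI finalizes only safe, reversible actions; risky ones go to a human."""
    if rec.action == "deny":
        return "human_review"   # care denials are never auto-finalized
    if rec.confidence < 0.95:
        return "human_review"   # low-confidence recommendations are escalated
    return "auto_approve"       # safe, high-confidence approvals are automated
```

The point of the sketch is that the safe/risky split is an explicit, reviewable policy rather than something left to the model.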
It’s tempting to throw AI at past decisions and ask it to infer the rules of the game from how things were done. This strategy is fraught with peril because the decisions of the past aren’t guaranteed to be the correct decisions. To support accurate, consistent, and timely decisions, health plans need to standardize and codify rules at a granular level. They should strive to create a comprehensive and trustworthy decisioning source of truth for medical policy, coding guidelines, regulations, and contracts. AI can play a major role in helping develop this decisioning source of truth, on which all future decisions can confidently be based.
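At its simplest, a decisioning source of truth means rules codified as explicit data, evaluated directly, rather than patterns inferred from historical decisions. The sketch below is purely illustrative; the rule IDs, fields, and limits are made up:

```python
# Hypothetical codified rules: granular, explicit, and centrally maintained.
RULES = {
    "PROC-001": {"max_units_per_day": 4, "requires_auth": False},
    "PROC-002": {"max_units_per_day": 1, "requires_auth": True},
}

def evaluate(code: str, units: int, has_auth: bool) -> tuple[bool, str]:
    """Decide against codified rules, not against inferred past behavior."""
    rule = RULES.get(code)
    if rule is None:
        return False, "no codified rule; route to human review"
    if rule["requires_auth"] and not has_auth:
        return False, "prior authorization required"
    if units > rule["max_units_per_day"]:
        return False, "units exceed the codified daily limit"
    return True, "meets policy"
```

Because every outcome traces to a named rule, the same structure also supports the transparency the next law calls for.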
AI’s decisions must be tracked automatically and made available for stakeholders to see. Decisions should be transparent and auditable, so it’s clear how each was made in relation to the standardized and immutable rules codified per the Second Law. This is especially important for healthcare decisions because there are policies and regulations that need to be taken into consideration and on which decisions should be based.
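One common pattern for this kind of auditability is an append-only, hash-chained log that ties each decision to the rule and rule version it was based on. This is a minimal sketch with a hypothetical schema, not a production audit system:

```python
import datetime
import hashlib
import json

def record_decision(claim_id, outcome, rule_id, rule_version, log):
    """Append a tamper-evident record linking a decision to the rule behind it."""
    entry = {
        "claim_id": claim_id,
        "outcome": outcome,
        "rule_id": rule_id,            # which codified rule drove the decision
        "rule_version": rule_version,  # the immutable rule version applied
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prev_hash": log[-1]["hash"] if log else None,  # chain to prior entry
    }
    # Hashing each entry (including the previous hash) makes tampering evident.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry
```

An auditor can replay the chain to verify no record was altered or removed after the fact.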
If the wrong action was taken, it must be reversible. This rule is simple and builds on the Third Law: with a clear record of how decisions were made and which actions resulted, those actions can be undone. What’s more, AI must be able to correct errors and to learn from errors surfaced by downstream claim corrections so that it can prevent similar errors in the future. AI continues to learn, and therefore even errors should serve as data points that inform it in future, similar instances. Reversing an error is good; reversing and learning from an error is great.
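In sketch form (all names here are hypothetical), a Fourth Law–style reversal does two things at once: it undoes the action, and it keeps the correction as a labeled data point for future learning:

```python
corrections = []  # downstream claim corrections, kept as future training signals

def reverse_decision(decision: dict, reason: str) -> dict:
    """Undo a decision and retain the correction as a labeled data point."""
    reversal = {
        "claim_id": decision["claim_id"],
        "original_outcome": decision["outcome"],
        "new_outcome": "reversed",
        "reason": reason,  # e.g., surfaced by a downstream claim correction
    }
    corrections.append(reversal)  # the error itself informs future decisions
    return reversal
```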
Powering healthcare technology with AI isn’t just a recommendation; it’s a necessity. However, it’s important to remember that “AI” doesn’t equate to robots and, therefore, doesn’t mean total autonomy.
There are humans behind the AI: people with clinical knowledge who help build the algorithms and organize the data. And humans are often needed after AI has automated certain processes—to double-check. By remembering the four rules of AI in healthcare, structured after Asimov’s laws for robot interactions, your health plan can approach AI in a way that’s safe, secure, and transparent.
To learn more about how Machinify can improve your health plan’s claims processes, schedule a demo today.