Clinical insights

A Closer Look at the Physicians Make Decisions Act (PMDA)

Brooke Grief
January 14, 2025

The introduction of artificial intelligence (AI) into healthcare has sparked groundbreaking innovations and debates alike. Among the latest developments, the "Physicians Make Decisions Act" (PMDA) has emerged as a legislative milestone aimed at defining the role of AI in medical decision-making. 

By emphasizing the importance of providers’ expertise, this act is reshaping conversations around liability, ethical boundaries, and the integration of AI into clinical practice. Let’s dive into what the act entails, its current adoption landscape, its potential trajectory, and its broader implications for healthcare and AI.

What is the Physicians Make Decisions Act?

The Physicians Make Decisions Act is legislation designed to clarify the boundaries between AI-generated recommendations and the expertise and authority of providers. As AI systems like diagnostic tools, predictive algorithms, and clinical decision support systems (CDSS) rapidly emerge, questions about accountability and autonomy in healthcare have intensified. 

The PMDA underscores that while AI can provide valuable insights and recommendations, ultimate decision-making responsibility should lie with licensed providers.

Key provisions of the PMDA include:

  1. Accountability for Medical Decisions: Providers retain legal and ethical accountability for medical decisions, even when informed by AI tools.
  2. Transparency Requirements: Healthcare organizations and AI developers must disclose the role of AI in the decision-making process to patients.
  3. Standardized Oversight: Establishment of review boards to assess the performance and biases of AI systems used in clinical settings.
  4. Prohibition of AI Overreach: AI tools are restricted from making final clinical decisions without provider approval, particularly in high-stakes scenarios.

The act is seen as a preemptive measure to balance the benefits of AI with the need for human oversight, ensuring that patient safety and trust remain at the core of healthcare delivery.

Which States Have Already Adopted the PMDA?

A handful of states have formally adopted the Physicians Make Decisions Act, signaling a cautious but progressive approach to AI integration in healthcare. 

  • California: A hub for tech innovation and healthcare reform, California became the first state to enact the PMDA. The state’s version of the act incorporates additional provisions for auditing AI systems for biases and disparities in healthcare outcomes.
  • Massachusetts: With its strong ties to healthcare and biotechnology, Massachusetts followed suit, emphasizing provider education and AI literacy as part of the act’s implementation.
  • Texas: Known for its proactive stance on medical regulations, Texas adopted the PMDA with a focus on rural healthcare, where AI tools are increasingly used to address physician shortages.
  • Washington: A leader in data privacy laws, Washington’s version of the PMDA includes stricter patient consent requirements when AI systems are employed.

These states have taken the lead in defining AI’s role in healthcare, setting precedents for others to follow.

States Poised to Adopt the PMDA Next

Several states are also in various stages of considering the adoption of the Physicians Make Decisions Act. 

  • New York: With its expansive healthcare network and policy influence, New York is likely to prioritize the PMDA in upcoming legislative sessions. The state’s focus will likely center on urban healthcare settings where AI adoption is rapidly growing.
  • Illinois: As a Midwest leader in healthcare policy, Illinois is exploring the PMDA to address the increasing use of AI in hospitals and clinics.
  • Florida: With a large aging population, Florida is assessing the PMDA as part of broader efforts to regulate telemedicine and AI-driven elder care.
  • Colorado: A progressive state with a tech-friendly environment, Colorado is poised to adopt the PMDA with potential adaptations for mental health AI applications.

These states’ considerations highlight the growing need for clear frameworks around AI’s role in healthcare.

Implications for AI in Healthcare

In healthcare, AI is meant to supplement clinical judgment, not replace it. AI can speed up the legwork, but it shouldn't be deemed the final arbiter of decisions. With the introduction of this legislation, there are significant implications for AI developers, healthcare providers, and patients alike. 

1. Fostering Trust in AI Systems

One of the primary challenges for AI in healthcare is building trust. Recent events have fueled the perception that health plans use AI to deny claims, making trust-building all the more crucial. By ensuring that providers remain the ultimate decision-makers, the PMDA provides reassurance that human expertise will not be overshadowed by machine intelligence. This framework encourages patients to engage with AI-assisted care while maintaining confidence in their providers.

2. Setting Standards for AI Transparency and Accountability

The act mandates transparency in how AI systems operate and are used in clinical settings. This requirement encourages developers to create systems that are not only accurate but also interpretable. Clear documentation and patient disclosures will become industry norms, reducing the risk of misuse or overreliance on opaque algorithms.

3. Driving Collaboration Between Physicians and AI

By defining boundaries and responsibilities, the PMDA sets the stage for more effective collaboration between physicians and AI systems. Physicians can confidently use AI to enhance diagnostic accuracy, streamline workflows, and improve patient outcomes without fearing loss of control or accountability.

4. Encouraging Responsible AI Innovation

The PMDA’s emphasis on oversight and ethical standards incentivizes developers to prioritize safety, fairness, and neutrality. This shift could lead to AI tools that are better calibrated for diverse patient populations, addressing concerns about biases and disparities in healthcare.

5. Influencing Federal and Global Policies

As more states adopt the PMDA, its principles could influence federal regulations and global standards for AI in healthcare. The act’s emphasis on human oversight aligns with global discussions about ethical AI deployment, positioning the U.S. as a leader in responsible AI governance.

Challenges and Criticisms of the PMDA

While the PMDA is largely seen as a positive step, it is not without challenges and criticisms.

For one, establishing oversight boards and ensuring compliance across diverse healthcare systems may require significant resources and time. Providers will also need education on how AI tools work and how they reach their recommendations. 

On the other hand, some tech companies argue that stringent regulations could slow innovation and the adoption of life-saving technologies. One of the largest benefits of AI, compared to humans, is its speed. For example, when auditing insurance claims, human auditors often need to review thousands of pages (if not more) of medical data to draw a conclusion. AI can comb through that same data in minutes. 

Lastly, differences in how states implement the PMDA could create inconsistencies in AI regulation, complicating compliance for both AI companies and providers.

The Road Ahead

As AI continues to evolve, the Physicians Make Decisions Act represents a thoughtful approach to balancing innovation with accountability. By placing providers at the helm, the act ensures that patient safety and trust remain paramount, even as healthcare becomes increasingly data-driven.

For healthcare providers and AI companies, the PMDA’s framework offers both opportunities and responsibilities. Collaboration between these stakeholders will be essential to realizing the full potential of AI while adhering to ethical and legal standards.

Ultimately, the PMDA marks a pivotal moment in the integration of AI into healthcare. It’s not suggesting an outright ban on AI in healthcare—because there is a place for it—but instead, it’s implementing a system of checks and balances. AI adoption and evolution will likely shape not only the future of healthcare but also the broader conversation around AI governance in critical sectors.

Curious how Machinify can be a partner in your payment integrity program? Schedule a demo today.

