Federal Changes to AI Guardrails: What You Need to Know

Hard to believe it’s only February: the last few weeks have been jam-packed with policy changes, including a lengthy list of newly introduced executive orders. Among them, President Trump signed an executive order revoking 78 executive actions from the previous Biden Administration, including “Executive Order 14110 of October 30, 2023 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence).” This move signals a departure from the previous administration’s cautious approach to AI governance in favor of a strategy that prioritizes rapid innovation and reduced regulatory oversight.

How might this impact payers? Let’s assess. 

Background on Biden’s AI Executive Orders

Let’s back up and cover Biden’s executive order first. Signed in October 2023, the order required developers of advanced AI systems to conduct comprehensive safety tests and share the results with the federal government prior to public deployment. 

The directive also called for the establishment of safety standards by federal agencies and emphasized the need to address potential risks associated with AI, including those related to national security, the economy, and public health. 

Additionally, it sought to protect consumers and workers by evaluating AI’s impact on the labor market and mitigating issues like AI-enabled fraud and discriminatory algorithms. The executive order was, according to AP News, “an initial step that is meant to ensure that AI is trustworthy and helpful, rather than deceptive and destructive.”

In addition to the original 2023 order, in mid-January of this year, six days before the inauguration, Biden issued an executive order aimed at “Advancing United States Leadership in Artificial Intelligence Infrastructure.” Its main goals are to advance innovation, maintain competitiveness, and improve security by building out AI data infrastructure in the U.S. Notably, this executive order was not overturned by Trump.

New Directives

On his first day in office, President Trump revoked Biden’s October 2023 executive order, asserting that the existing regulations posed unnecessary barriers to AI innovation. He emphasized the importance of fostering AI development free from ideological constraints and announced plans for a $500 billion investment in AI infrastructure, in collaboration with major tech companies. This initiative, dubbed ‘Stargate,’ aims to position the United States as a global leader in AI technology.

Pros of the New Directives

  1. Accelerated Innovation: By removing stringent regulatory requirements, AI developers may experience fewer obstacles, potentially leading to faster advancements and the swift deployment of new technologies.
  2. Economic Growth: The substantial investment in AI infrastructure is anticipated to create numerous jobs and stimulate economic activity, particularly in the tech sector.
  3. Global Competitiveness: Looking at the bigger picture, accelerated innovation and a more conducive development environment could strengthen the U.S.’s position in the global AI race.

Cons of the New Directives

  1. Safety Concerns: Eliminating mandatory safety assessments may increase the risk of deploying AI systems that are untested or inadequately vetted, potentially leading to unintended consequences.
  2. Ethical and Social Implications: Without regulatory oversight, there is a heightened risk of AI systems perpetuating biases or being utilized in ways that could harm vulnerable populations.
  3. Long-Term Risks: The absence of comprehensive safety evaluations could result in AI applications that pose threats to public health, national security, or economic stability.

Implications for Payers

The shift in AI policy is poised to have profound implications for the health insurance industry. Concerns that AI could become too heavily involved in healthcare, particularly in the claims review process, have been top-of-mind for patients, providers, and payers alike. At the same time, the health insurance industry has historically relied on laborious manual processes that AI can streamline with a higher level of accuracy.

Potential Benefits

  • Enhanced Risk Assessment: With fewer regulatory hurdles, payers may integrate advanced AI tools more rapidly, enabling more precise risk modeling. This could lead to fairer pricing models and improved profitability.
  • Improved Fraud Detection: AI systems equipped with sophisticated pattern recognition capabilities could help payers identify fraudulent claims more effectively, potentially saving billions of dollars industry-wide each year (see the sketch following this list).
  • Personalized Plans: Advanced AI could analyze vast datasets to tailor health plan products to individual needs, fostering customer satisfaction and potentially increasing policyholder retention.
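
To make the fraud-detection point concrete, here is a minimal sketch of the kind of pattern recognition involved: an unsupervised outlier detector (scikit-learn’s IsolationForest) trained on historical claim features and used to flag unusual new claims for human review. The feature names, values, and threshold are hypothetical illustrations only, not Machinify’s methodology or a production design.

```python
# Minimal, illustrative sketch of claims anomaly detection with scikit-learn's
# IsolationForest. All feature names and values below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical claim features: billed amount, number of procedure codes,
# and days between date of service and claim submission.
historical_claims = np.column_stack([
    rng.normal(500, 150, 1000),   # typical billed amounts
    rng.poisson(3, 1000),         # typical procedure-code counts
    rng.normal(14, 5, 1000),      # typical submission lag (days)
])

new_claims = np.array([
    [9500, 22, 2],                # unusually high amount, many codes, rushed
    [480, 3, 15],                 # looks ordinary
])

# Fit on historical claims, then score new ones; predict() returns -1 for outliers.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(historical_claims)
flags = model.predict(new_claims)

for claim, flag in zip(new_claims, flags):
    status = "flag for review" if flag == -1 else "pass"
    print(f"claim {claim.tolist()} -> {status}")
```

In practice, flagged claims would feed a human review queue rather than trigger automatic denials, which is one way payers can pair efficiency gains with oversight.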

Potential Drawbacks

  • Bias in Claims Review: Without adequate oversight, AI systems used in claims review and underwriting might unintentionally perpetuate biases, resulting in discriminatory practices against certain demographics (see the parity check sketched after this list).
  • Data Privacy Risks: The rapid deployment of AI in health insurance could lead to vulnerabilities in data security, potentially exposing sensitive customer information to breaches or misuse.
  • Regulatory Uncertainty: While reduced federal oversight might spur innovation, it could also create ambiguity regarding compliance standards, potentially leading to legal and reputational risks for payers.
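
One lightweight safeguard against the bias risk above is to monitor automated decisions for disparate impact. The sketch below compares approval rates across two hypothetical demographic groups and uses the familiar four-fifths rule of thumb as a tripwire; the data, group labels, and threshold are assumptions for illustration, not a regulatory standard for claims review.

```python
# Minimal, illustrative fairness check: compare automated claim-approval rates
# across demographic groups. The data and the 80% threshold (borrowed from the
# "four-fifths" rule of thumb) are assumptions for this sketch.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   1],
})

# Approval rate per group, and the ratio of the lowest to the highest rate.
rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"approval-rate ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: route these decisions for human review.")
```

Checks like this don’t replace regulatory oversight, but they give payers an internal, auditable signal that a model’s decisions warrant human review.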

Balancing Innovation and Responsibility

President Trump’s revocation of Biden’s AI safety executive order marks a pivotal change in the U.S. approach to artificial intelligence development. While the emphasis on rapid innovation and economic growth presents clear advantages, it also raises significant concerns regarding safety, ethics, and long-term societal impacts. 

The healthcare industry, as a whole, will need to tread carefully, balancing the opportunities presented by the new AI landscape with the responsibility to protect consumers and maintain ethical practices. Companies that proactively establish robust internal safeguards and prioritize transparency in AI applications may find themselves better positioned to thrive in this evolving environment. At Machinify, our core tenets are safe, transparent, and efficient AI that keeps people in control while improving the health insurance industry for all parties. 

To learn more about Machinify’s capabilities and how AI can benefit your payment integrity program, contact us today.