Priority Medical

STAT’s “Denied by AI” series a model of solid investigative journalism

  • AI-driven algorithms in healthcare are increasingly denying patients necessary treatments by overriding clinical judgment, as highlighted by the STAT News series 'Denied by AI', raising ethical and medical concerns.
  • NaviHealth, a subsidiary of UnitedHealth Group, uses an AI algorithm that fails to account for individual patient needs; driven by financial motives to cut costs and boost profits, it leads to inappropriate treatment denials.
  • Regulatory bodies are beginning to address the lack of transparency and potential discrimination caused by AI in healthcare, necessitating robust regulations to prioritize patient well-being over profit and ensure fair decision-making.


Introduction

The world of healthcare is increasingly turning to artificial intelligence (AI) to streamline decision-making processes. However, a recent investigative series by STAT News has exposed a disturbing trend. The series, titled "Denied by AI," reveals how AI-driven algorithms are being used to override clinical judgment and deny patients the care they need. This phenomenon has serious implications for patient health and raises critical questions about the role of AI in healthcare decision-making.

The Problem with AI in Healthcare

The STAT News series highlighted the dangers of using AI algorithms to assist doctors and insurance case managers in making healthcare decisions. One of the primary concerns is that these algorithms often fail to account for the individual circumstances of each patient. For instance, an algorithm designed to keep the care of Medicare Advantage (MA) patients within a projected timeframe may not consider the unique health needs of each patient. As a result, patients recovering from strokes, cancer, or other serious illnesses may be prematurely denied rehabilitation care.

The Case of NaviHealth

NaviHealth, a subsidiary of UnitedHealth Group, is at the center of this controversy. The company uses an AI-driven algorithm called nH Predict to guide case managers in their decisions. The algorithm sets targets for keeping MA patients' stays within 3% of the projected number of days. However, this approach fails to account for variation in patient health status, leading to inappropriate denials of care. Even when employees argued that patients needed more time in rehab, the company's physician medical reviewers deferred to the algorithm, causing internal dissent among staff who felt the denials were inappropriate and contradicted clear medical evidence.
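nH Predict is proprietary and its internals are not public, but the kind of target rule the reporting describes — flagging a stay once it exceeds the algorithm's projection by more than a fixed tolerance — can be sketched in a few lines. The function name, signature, and logic below are illustrative assumptions, not the actual system:

```python
# Hypothetical sketch of a "within X% of projected days" target rule,
# as described in the STAT reporting. NOT the actual nH Predict logic.

def within_target(actual_days: int, predicted_days: float,
                  tolerance: float = 0.03) -> bool:
    """Return True if the actual stay does not exceed the
    prediction by more than `tolerance` (3% by default)."""
    return actual_days <= predicted_days * (1 + tolerance)

# With a 20-day projection, the 3% tolerance allows at most 20.6 days:
print(within_target(20, 20.0))  # True: within target
print(within_target(21, 20.0))  # False: stay would be flagged
```

The point of the sketch is what such a rule cannot see: a hard numeric cutoff has no input for stroke complications, comorbidities, or a clinician's judgment that a patient needs more rehab time.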

How AI Algorithms Impact Patient Care

The use of AI algorithms in healthcare decision-making is fraught with challenges. One major issue is the opacity of these systems. Even developers of these algorithms may not fully understand why certain decisions are made, making it impossible for patients or family members to contest the denials effectively. This lack of transparency exacerbates health equity concerns, as patients from historically marginalized groups are disproportionately affected by algorithmic predictions that depart from human judgments about a patient's health needs.

The Financial Motive Behind AI-Driven Denials

The STAT News series also revealed that the use of AI algorithms in MA plans is driven by profit. Insurers already generate substantial profits, and by cutting off care prematurely they can further boost their earnings. This practice is particularly concerning for seniors, who may need to appeal the denials and then wait months for a decision that may not go in their favor. The financial motive behind these denials is starkly illustrated by the fact that insurers are using unregulated predictive algorithms to pinpoint the precise moment when they can plausibly cut off payment for an older patient's treatment.

Regulatory Concerns and Future Directions

The use of AI algorithms in healthcare is becoming increasingly regulated. States are starting to formulate policy rules for the use of AI by health insurers, and the federal government is working to prevent discrimination and ensure transparency in how these systems are created and used. Proposed state legislation aims to require health insurance companies to be more transparent about their systems, including the specific data sets fed into those systems and how the algorithms inform decision-making.

Impact on Providers and Patients

Healthcare providers are facing an uphill battle as they navigate the complex landscape of AI-driven denials. Providers must determine how to respond to the AI-powered assault on revenue while also ensuring that patients receive the care they need. The increased reliance on AI has led to a "triple-D effect" of downgrades, delays, and denials, which has further exacerbated the problem of aged accounts receivable. Hospitals and health systems are reporting significant losses due to denied claims, highlighting the need for effective guardrails on the use of AI in revenue cycle management.

Conclusion

The STAT News series "Denied by AI" has shed light on a critical issue in healthcare. The use of AI algorithms to deny patients their rightful care is not only ethically questionable but also medically dangerous. As the healthcare industry continues to integrate AI into its decision-making processes, it is crucial that policymakers and stakeholders ensure that these systems are transparent, fair, and prioritize patient well-being above profit margins. Only through robust regulation and transparency can we harness the potential of AI to improve healthcare while minimizing its risks.
