Artificial intelligence (AI) is rapidly becoming a transformative force across industries, and healthcare is a prime example. In addition to emerging use cases for the technology in the clinical context, health insurers have begun adopting AI to automate claims processing, more effectively forecast risk, and improve fraud detection – especially critical as healthcare costs continue to rise. Research shows that using AI-powered fraud detection systems can save billions of dollars in healthcare costs.
But AI isn’t perfect; in fact, at times, it can lead to more problems than it solves. Insurers must be prepared for the legal ramifications tied to this emerging technology and conduct due diligence to ensure its use is above board.
An Uptick in AI Lawsuits
As AI becomes more widespread, lawsuits involving the technology have also surged. Among these, numerous cases against health insurers allege the companies are using AI to deny certain care under Medicare Advantage.
In December, patients filed a class action lawsuit against Humana, alleging that the health insurer illegally deployed artificial intelligence in place of real doctors to wrongfully deny elderly patients the care owed to them under Medicare Advantage plans. At the heart of the lawsuit is an AI algorithm called nH Predict, designed to forecast how long patients will stay in a skilled nursing facility. The lawsuit alleges that the model has a high error rate and contradicts doctors’ recommendations for length of stay.
According to the lawsuit, Humana was aware of the high rate of wrongful denials but continued to use nH Predict to make claims decisions because it knew only a small percentage of policyholders would appeal denied claims. The suit alleges that coverage determinations were based not on patients’ individual needs but on nH Predict’s output, and that Humana limits employees’ discretion to deviate from the model.
In a separate lawsuit filed in July, Cigna was accused of using its PxDx algorithm to automatically deny payments in large batches for treatments that did not meet certain criteria. PxDx flags discrepancies between a diagnosis and what Cigna deems acceptable tests and procedures. The algorithm denies claims in an average of 1.2 seconds, which, the lawsuit argues, makes individual review of claims impossible and violates state law.
And in June, the U.S. Department of Labor sued UnitedHealth subsidiary UMR for allegedly denying emergency room and urinary drug screening claims. The suit alleges that UMR adjudicated ER claims based solely on diagnosis codes and denied the drug screening claims without reviewing them for medical necessity.
Uncovering Bias in Healthcare Algorithms
One of the major concerns with artificial intelligence and machine learning is the potential for algorithmic bias. Frequently, AI algorithms are trained on biased data or not validated with equity in mind, which can exacerbate existing disparities.
For instance, a 2019 study published in Science revealed that a widely used commercial algorithm designed to help guide health decisions required Black hospital patients to be "considerably sicker" than white patients before recommending the same level of care. The algorithm used health costs as a proxy for health needs, a seemingly reasonable assumption. But because the healthcare system tends to spend less on Black patients than on white patients with the same conditions, the algorithm mistakenly concluded that Black patients were healthier than they actually were.
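The mechanics of this proxy bias are straightforward to reproduce. The sketch below is a hypothetical simulation, not the actual study's algorithm or data: it assumes one group's healthcare spending is suppressed by 30% relative to true need, then flags the top 10% of patients by cost for extra care management, mirroring the cost-as-proxy design the Science study describes.

```python
import random

random.seed(0)

def make_patients(n, spend_factor, group):
    """Generate patients with a true 'need' score and observed spending.

    Spending is the training proxy for need; spend_factor < 1 models
    systematic under-spending on a group.
    """
    patients = []
    for _ in range(n):
        need = random.expovariate(1 / 3)       # true underlying health need
        spend = need * spend_factor            # observed cost (the proxy label)
        patients.append({"group": group, "need": need, "spend": spend})
    return patients

# Group A's costs fully reflect need; group B's are suppressed by 30%.
patients = make_patients(5000, 1.0, "A") + make_patients(5000, 0.7, "B")

# A cost-based algorithm flags the top 10% of "predicted cost" patients
# for extra care -- cost stands in for need.
patients.sort(key=lambda p: p["spend"], reverse=True)
flagged = patients[: len(patients) // 10]

# Among flagged patients, group B is both underrepresented and sicker on
# average: its members had to be in worse health to clear the same
# cost threshold.
for g in ("A", "B"):
    needs = [p["need"] for p in flagged if p["group"] == g]
    print(g, "flagged:", len(needs), "mean need:", round(sum(needs) / len(needs), 2))
```

Running the simulation shows the pattern the study found: fewer group B patients are flagged, and those who are flagged have a higher average true need than their group A counterparts, i.e. they had to be "considerably sicker" for the same recommendation.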
Such inequities in patient care are often compounded by differences in social determinants of health such as environment and access to critical resources like transportation, nutritious food, and housing.
Government Regulation of AI
Last year saw the first federal regulations on AI systems in the U.S. In October, the Biden administration issued an executive order requiring AI developers to share the results of product safety tests with the government. The order also asks AI companies to voluntarily commit to developing methods intended to preserve the safety and personal security of U.S. citizens. And earlier this month, the House Financial Services Committee announced plans to create a bipartisan AI Working Group that will examine AI’s impact in decision-making processes, product development, fraud deterrence, compliance strategies, and workforce development.
The Biden administration has also prioritized oversight around Medicare Advantage, aiming to protect patients from predatory marketing and increase access to behavioral healthcare. At the end of last year, the Department of Health and Human Services put these priorities into action by revising certification criteria for decision support interventions, patient demographics and observations, and electronic case reporting. These revisions will advance interoperability and improve algorithm transparency.
The Government Accountability Office also published an accountability framework to ensure the responsible use of AI among federal agencies and other entities. The framework cites the 2019 Science study as an example of the unintended consequences of using an AI predictive model in healthcare management.
The Way Forward: Implementing AI Governance Structures
AI has the potential to revolutionize health insurance, but getting there requires comprehensive governance structures that monitor for bias and ensure the technology is used responsibly. Done well, this enables not only cost savings and greater efficiency but also significantly better health outcomes and broader access to care for all communities. The World Health Organization has noted that a platform should be established to enable insurers and governance bodies to work collectively toward making big data analytics as transparent and accurate as possible.
Ichor Strategies can help health insurers in the early stages of integrating AI into their workflows implement governance structures that ensure responsible and equitable use. Our sophisticated research and data analytics capabilities can augment health insurers’ data strategy and model implementation, informing measurement frameworks that validate and maximize impact for patients and providers while mitigating model bias. Ichor can also co-develop thought leadership opportunities and strategies that build trust between communities and insurers, overcoming barriers to AI adoption.
AI presents significant opportunities in healthcare and for health insurance companies. Prioritizing ethical and equitable use can ensure that patients are able to access the care they need.