California Attorney General Probes Bias in Health Care Algorithms

A spurt of letters from California Attorney General Rob Bonta to leaders of hospitals and other health care facilities, sent on August 31, 2022, signaled the kickoff of a government probe into bias in health care algorithms that contribute to material health care decisions. The probe is part of an initiative by the California Office of the Attorney General (AG) to address disparities in health care access, quality, and outcomes and to ensure compliance with state non-discrimination laws. Responses are due by October 15, 2022, and must include a list of all decision-making tools in use that contribute to clinical decision support, population health management, operational optimization, or payment management; the purposes for which the tools are used; and the name and contact information of the individuals responsible for “evaluating the purpose and use of these tools and ensuring that they do not have a disparate impact based on race or other protected characteristics.”

The press release announcing the probe describes health care algorithms as a fast-growing class of tools used to perform a variety of functions across the health care industry. According to the California AG, if software is used to determine a patient’s medical needs, hospitals and health care facilities must incorporate appropriate review, training, and usage guidelines to keep the algorithms from having unintended consequences for vulnerable patient groups. One example cited in the AG’s press release is an Artificial Intelligence (AI) algorithm created to predict patient outcomes that is trained on a population that does not accurately represent the patient population to which the tool is applied. Another is an AI algorithm created to predict future health care needs from past health care costs: because Black patients often face greater barriers to accessing care, their past costs tend to be lower, which can make their health care needs appear lower than they actually are.
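To make the cost-as-proxy concern concrete, the short Python sketch below uses entirely invented numbers (the two groups, the access-barrier factor, and the 20% flagging threshold are assumptions for illustration, not details of any tool referenced by the AG) to show how ranking patients by past cost can overlook a group whose spending is suppressed by barriers to care, even when its clinical needs are identical:

import random

random.seed(0)

def make_patient(group):
    need = random.uniform(0, 10)           # true clinical need (not visible to the tool)
    access = 1.0 if group == "A" else 0.6  # group B uses ~40% less care for the same need
    past_cost = need * 1000 * access       # dollars spent on care last year
    return {"group": group, "need": need, "cost": past_cost}

patients = [make_patient("A") for _ in range(1000)] + [make_patient("B") for _ in range(1000)]

# A cost-prediction tool effectively ranks patients by expected spending;
# here the top 20% by cost are flagged for a care-management program.
threshold = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]

for g in ("A", "B"):
    flagged = sum(1 for p in patients if p["group"] == g and p["cost"] >= threshold)
    high_need = sum(1 for p in patients if p["group"] == g and p["need"] >= 8)
    print(f"Group {g}: flagged for extra care = {flagged}, truly high-need = {high_need}")

In this toy setup both groups contain roughly the same number of truly high-need patients, yet nearly all of the patients flagged for extra care come from group A, which is the dynamic the AG’s example describes.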

Not surprisingly, the announcement of the AG’s probe follows research summarized in a Pew Charitable Trusts blog post highlighting bias in AI-enabled products, as well as a series of discussions between the Food and Drug Administration (FDA) and software-as-a-medical-device stakeholders (including patients, providers, health plans, and software companies) regarding the elimination of bias in artificial intelligence and machine learning technologies. As further discussed in our series on the FDA’s Artificial Intelligence/Machine Learning Medical Device Workshop, the FDA is currently grappling with how to address data quality, bias, and health equity when it comes to the use of AI algorithms in software that it regulates.

Considered against the practical constraints hospitals and health care facilities operate under, the AG’s probe could put these entities in a difficult position. The algorithms used in commercially available software may be proprietary and, in any event, hospitals may not have the resources to independently evaluate software for bias. Further, if the FDA is still in the process of sorting out how to tackle these issues, it seems unlikely that hospitals would be in a better position to address them.

Nonetheless, the AG’s letter suggests that failure to “appropriately evaluate” the use of AI tools in hospitals and other health care settings could violate state non-discrimination laws and related federal laws, and it indicates that investigations will follow these information requests. As a result, before responding, hospitals should carefully review the AI tools currently in use, the purposes for which they are used, and the safeguards currently in place to counteract any bias an algorithm may introduce. For example:
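A minimal sketch of one such safeguard, assuming the hospital can export per-patient records from the tool; the "race" and "flagged" field names, the example data, and the 80% rule of thumb noted in the comment are illustrative assumptions, not requirements drawn from the AG’s letter:

from collections import defaultdict

def selection_rates(records, group_key="race", flag_key="flagged"):
    # Fraction of patients the tool flags, per demographic group.
    totals, flagged = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        if r[flag_key]:
            flagged[r[group_key]] += 1
    return {g: flagged[g] / totals[g] for g in totals}

def impact_ratios(rates):
    # Each group's selection rate relative to the highest-rate group (1.0 = parity).
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical per-patient records exported from a decision-support tool.
records = [
    {"race": "white", "flagged": True},  {"race": "white", "flagged": True},
    {"race": "white", "flagged": False}, {"race": "Black", "flagged": True},
    {"race": "Black", "flagged": False}, {"race": "Black", "flagged": False},
]
print(impact_ratios(selection_rates(records)))
# Ratios well below the ~0.8 rule of thumb are a common trigger for closer review.

A large gap in selection rates does not by itself establish unlawful discrimination, but it is the kind of signal a hospital would want to understand, and be able to explain, before responding to the AG.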

On the flip side, software companies whose AI tools are in use at California health care facilities should be prepared to respond to inquiries from their customers regarding their AI algorithms and how data quality and bias have been evaluated. For example:
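One illustrative way to answer that kind of inquiry, again using hypothetical field names and made-up validation data, is a subgroup error-rate summary showing how often the tool misses patients that clinicians independently judged to be high-need:

from collections import defaultdict

def false_negative_rates(records, group_key="group"):
    # Share of truly high-need patients the tool failed to flag, per group.
    missed, high_need = defaultdict(int), defaultdict(int)
    for r in records:
        if r["truly_high_need"]:
            high_need[r[group_key]] += 1
            if not r["tool_flagged"]:
                missed[r[group_key]] += 1
    return {g: missed[g] / high_need[g] for g in high_need}

# Hypothetical validation records: tool output compared against chart review.
validation = [
    {"group": "A", "truly_high_need": True, "tool_flagged": True},
    {"group": "A", "truly_high_need": True, "tool_flagged": True},
    {"group": "B", "truly_high_need": True, "tool_flagged": False},
    {"group": "B", "truly_high_need": True, "tool_flagged": True},
]
print(false_negative_rates(validation))
# A persistent gap between groups is the sort of finding customers are likely to ask about.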
