California Attorney General Investigates Bias in Health Care


A spate of letters from California Attorney General Rob Bonta to leaders of hospitals and other health care facilities, sent on August 31, 2022, signaled the kickoff of a government probe into bias in health care algorithms that contribute to material health care decisions.  The probe is part of an initiative by the California Office of the Attorney General (AG) to address disparities in health care access, quality, and outcomes and to ensure compliance with state non-discrimination laws.  Responses are due by October 15, 2022, and must include a list of all decision-making tools in use that contribute to clinical decision support, population health management, operational optimization, or payment management; the purposes for which the tools are used; and the name and contact information of the individuals responsible for “evaluating the purpose and use of these tools and ensuring that they do not have a disparate impact based on race or other protected characteristics.” 

The press release announcing the probe describes health care algorithms as a fast-growing tool used to perform various functions across the health care industry.  According to the California AG, if software is used to determine a patient’s medical needs, hospitals and health care facilities must incorporate appropriate review, training, and usage guidelines to avoid unintended consequences for vulnerable patient groups.  One example cited in the AG’s press release is that an Artificial Intelligence (AI) algorithm created to predict patient outcomes may be trained on a population that does not accurately represent the patient population to which the tool is applied.  Similarly, an AI algorithm created to predict future health care needs based on past health care costs may understate the needs of Black patients, who often face greater barriers to accessing care and therefore appear to have lower health care costs.
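To make that cost-as-proxy mechanism concrete, below is a minimal, hypothetical sketch, using entirely synthetic data and not drawn from the AG’s letters or any real system, of how a tool that ranks patients by past costs can understate risk for a group whose access barriers reduce the costs that get recorded.

```python
import random

random.seed(0)

# Hypothetical, synthetic patients: both groups have the same distribution of
# true medical need, but group B faces access barriers, so less of its need
# turns into billed care (and therefore into recorded costs).
def make_patient(group):
    need = random.uniform(0, 10)              # true underlying medical need
    access = 1.0 if group == "A" else 0.6     # assumed barrier to receiving care
    cost = need * access * 1000               # observed past health care cost
    return {"group": group, "need": need, "cost": cost}

patients = [make_patient("A") for _ in range(500)]
patients += [make_patient("B") for _ in range(500)]

# A cost-trained "algorithm": flag the top 20% of patients by past cost for
# extra care-management resources.
threshold = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]
flagged = [p for p in patients if p["cost"] >= threshold]

share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
print(f"Group B is 50% of patients but {share_b:.0%} of those flagged for help.")
# Because group B's recorded costs understate its true need, a model that treats
# past cost as a proxy for future need directs fewer resources to that group.
```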

Not surprisingly, the announcement of the AG’s probe follows research summarized in a Pew Charitable Trusts blog post highlighting bias in AI-enabled products, as well as a series of discussions between the Food and Drug Administration (FDA) and software-as-a-medical-device stakeholders (including patients, providers, health plans, and software companies) regarding the elimination of bias in artificial intelligence and machine learning technologies.  As further discussed in our series on the FDA’s Artificial Intelligence/Machine Learning Medical Device Workshop, the FDA is currently grappling with how to address data quality, bias, and health equity when it comes to the use of AI algorithms in software that it regulates. 

Taking a step back to consider the practical constraints of hospitals and health care facilities, the AG’s probe could put these entities in a difficult position.  The algorithms used in commercially available software may be proprietary and, in any event, hospitals may not have the resources to independently evaluate software for bias.  Further, if the FDA is still in the process of sorting out how to tackle these issues, it seems unlikely that hospitals would be in a better position to address them.

Nonetheless, the AG’s letter suggests that failure to “appropriately evaluate” the use of AI tools in hospitals and other health care settings could violate state non-discrimination laws and related federal laws, and it indicates that investigations will follow these information requests. As a result, before responding, hospitals should carefully review the AI tools currently in use, the purposes for which they are used, and the safeguards currently in place to counteract any bias an algorithm may introduce. For example:

When is an individual reviewing AI-generated recommendations and then making a decision based on their own judgment?

What nondiscrimination and bias-elimination training do individuals using AI tools receive each year?

What kind of review is conducted of software vendors and functionality before software is purchased?

Is any of the software in use certified or used by a government program?

What type of testing has been done by the software vendor to address data quality, bias, and health equity issues?
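One way to approach that last question, sketched under the assumption that a hospital or vendor can pull the algorithm’s recommendations together with each patient’s protected-group designation, is to compare selection rates across groups. The data, group labels, and the 0.8 screening threshold (borrowed from the employment-law “four-fifths” rule of thumb) are illustrative assumptions, not a legal standard for health care.

```python
from collections import defaultdict

# Hypothetical audit records: (protected group, whether the algorithm flagged
# the patient for a care-management benefit).
records = [
    ("A", True), ("A", True), ("A", False), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False), ("B", True),
]

totals, flagged = defaultdict(int), defaultdict(int)
for group, was_flagged in records:
    totals[group] += 1
    flagged[group] += was_flagged

rates = {group: flagged[group] / totals[group] for group in totals}
highest = max(rates.values())

for group, rate in sorted(rates.items()):
    # A ratio well below 0.8 of the highest group's rate is a common screening
    # signal to investigate further; it is not, by itself, proof of unlawful
    # disparate impact.
    print(f"group {group}: selection rate {rate:.0%}, "
          f"ratio to highest {rate / highest:.2f}")
```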

On the flip side, software companies whose AI tools are in use at California health facilities should be prepared to respond to inquiries from their customers about their AI algorithms and how data quality and bias have been evaluated. For example:

Is the technology locked or does it involve continuous learning?

How does the algorithm work and how was it trained?

What is the degree of accuracy across different patient groups, including vulnerable populations?
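A hedged sketch of how that last question might be answered in practice, assuming the vendor holds out test data with known outcomes and patient-group labels (all names and numbers here are illustrative), is to report performance metrics stratified by group rather than in the aggregate.

```python
# Hypothetical held-out test results: (patient group, true outcome, prediction).
results = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 1), ("B", 0, 0), ("B", 1, 0), ("B", 0, 1),
]

def sensitivity(rows):
    """Share of patients who truly have the condition that the model catches."""
    positives = [pred for _, truth, pred in rows if truth == 1]
    return sum(positives) / len(positives)

for group in sorted({g for g, _, _ in results}):
    rows = [r for r in results if r[0] == group]
    accuracy = sum(truth == pred for _, truth, pred in rows) / len(rows)
    print(f"group {group}: accuracy {accuracy:.0%}, "
          f"sensitivity {sensitivity(rows):.0%}")

# Reporting metrics separately for each patient group, including vulnerable
# populations, surfaces gaps that a single aggregate accuracy figure can hide.
```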

