Africa Links 24 Editorial Team, with Mark P. Sendak, Nicholson Price, Karandeep Singh, and Suresh Balu
Published on 2024-02-07 09:30:28
Leaders across federal agencies are moving quickly to develop regulations for AI in health care, and one specific proposal is gaining significant traction: AI assurance laboratories, dedicated facilities where AI model developers can build and test models against standardized criteria defined by regulators.
This concept has been embraced by many prominent names and organizations in health AI, and has garnered attention at the annual Office of the National Coordinator for Health Information Technology meeting, in a prominent JAMA Special Communication, and in a report from STAT. The proposal has been put forward by the Coalition for Health AI (CHAI) and has received strong support from key regulators including the National Coordinator for Health IT and the director of the Digital Health Center of Excellence at the Food and Drug Administration.
The proposal responds to President Biden’s recent executive order, which calls for the development of AI assurance infrastructure. It specifically requests funding for a “small number of assurance labs that experiment with these diverse approaches and gather evidence that the creation of such labs can meet the goals laid out in the Executive Order.”
The concept of AI assurance laboratories is gaining momentum because it could address specific AI challenges faced by health care providers. For instance, a network of AI assurance laboratories could differentiate products that perform well across different health care settings from those that do not. This could improve transparency and help prevent the adoption of faulty AI products in health care.
However, there are concerns that the proposal may exacerbate existing disparities in the health care system. The initial AI assurance laboratories would be located at large, well-resourced health systems and academic medical centers, such as Duke, Mayo Clinic, and Stanford. While these institutions stand to benefit from the funding, attention and resources also need to be directed toward settings that are less equipped to conduct AI assurance effectively.
In addition to investing in a small number of AI assurance laboratories, there is a need for federal regulators to invest in AI capabilities, infrastructure, and technical assistance to advance the safe, effective, and equitable use of AI in low-resource settings.
The proposal also does not fully address the differing priorities and concerns of AI product developers, regulators, and implementers. Consideration must be given to the variation in resources, populations, and operations across the thousands of hospitals and health care organizations in the United States. In practice, health care providers often evaluate AI products in ways that extend beyond the bounds of current regulations, and they face competing pressures from the public, developers, and regulators at different levels.
Furthermore, AI products cannot be meaningfully evaluated solely in controlled environments. The real-world performance of AI solutions depends on the behaviors of users and on changes to the work environment. It is therefore critical to empower health care providers to carry out AI assurance activities themselves and to provide on-the-ground technical assistance to low-resource providers.
In summary, while AI assurance laboratories have the potential to address specific challenges in health care, it is crucial that investments support the safe, effective, and equitable use of AI in varied clinical environments. This may mean combining AI assurance laboratories with large investments in technical infrastructure and regional extension centers that provide on-the-ground technical assistance to low-resource health care providers. Such an approach should prioritize the diverse needs of health care settings and support the implementation and maintenance of AI solutions.