The Legality of Medical AI with Jessica Roberts

Jessica Roberts is a professor of law specializing in artificial intelligence, machine learning, and data science at the Emory University School of Law. She is a health law scholar whose work focuses on health technology and promoting ethical practices in healthcare.
We sat down with Roberts to discuss the current state of medical AI regulation and how AI could reshape the future of healthcare.
Q: What are the potential benefits and consequences of incorporating AI into medicine, particularly on a nationwide scale?
American healthcare is notorious for inconsistent access and high prices. We have disparities in access to care across a variety of demographic characteristics. Many populations are priced out of the insurance system, so lots of people go without necessary healthcare. Anything that promises to make healthcare more equitably accessible and affordable is exciting, and certainly, technology, including AI, has that potential. AI and other technologies could improve both access and outcomes.
In certain circumstances, artificial intelligence can identify risks that humans working alone would miss, and early detection means early intervention. New technology can save doctors time, which can then free them up to offer better care to their patients. It can also help get people out of hospitals more quickly, which reduces cost and increases accessibility. There are also AI tools and technologies that patients can use themselves from the comfort of their own homes as part of virtual healthcare. Patients can access a medical chatbot around the clock, which can help eliminate transportation and other structural constraints that exacerbate disparities in access.
But, like with any new technology, there are always risks. We might be concerned about perpetuating or exacerbating some of those very same healthcare disparities I was just talking about. If we're not thinking carefully and critically, we always run the danger of perpetuating existing problems, making them worse, or even creating new ones. In the disability space, where a lot of my work happens, I've seen examples of inaccessible technology. Things that should have been an improvement for patients with disabilities were not.
Q: What are a few examples of AI gone awry, where it actively contributed to bias in medicine?
There are examples even with simpler algorithms, before we get to anything we would call AI. In 2019, there was a study in the journal Science about an algorithm that allocated additional care based on cost: patients who had incurred higher healthcare costs were flagged to receive additional care. However, this became a problem. Historically disadvantaged populations might consume less healthcare, not because they don't need it but because they can't afford it.
If cost becomes a proxy for need, it underestimates the need in those populations. Sure enough, the study found that the algorithm reduced the number of Black patients identified as requiring additional care by more than half. That design choice ended up shifting care away from populations that were already experiencing health disparities.
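To make the mechanism Roberts describes concrete, here is a minimal, hypothetical sketch, not the algorithm from the study. It assumes two groups with identical underlying need, where one group's spending is suppressed by access barriers; all numbers and names are illustrative.

```python
import random

random.seed(0)

# Hypothetical patients: two groups with identical underlying need,
# but group B's spending is held down by access barriers.
patients = []
for i in range(1000):
    group = "A" if i < 500 else "B"
    need = random.uniform(0, 1)            # true health need
    access = 1.0 if group == "A" else 0.5  # spending reflects access, not just need
    patients.append({"group": group, "need": need, "cost": need * access})

def flag_top(patients, key, k=200):
    """Flag the k patients ranked highest by the given key for extra care."""
    return sorted(patients, key=lambda p: p[key], reverse=True)[:k]

for label, flagged in [("ranked by need", flag_top(patients, "need")),
                       ("ranked by cost", flag_top(patients, "cost"))]:
    share_b = sum(p["group"] == "B" for p in flagged) / len(flagged)
    print(f"{label}: share of group B flagged = {share_b:.0%}")
```

Ranking by need flags both groups at roughly equal rates; ranking by cost flags far fewer patients from the group whose spending is constrained by access, even though their needs are identical.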
Another example has to do with booking software. Healthcare providers rely on software to manage appointment schedules, and some algorithms intentionally double-book the people they predict are the most likely to be no-shows. The goal is to improve efficiency and see as many patients as possible. So, if a certain group is more likely to miss appointments due to social and structural obstacles, such as lack of childcare, lack of transportation, or inflexible work, those people are more likely to be double-booked. The patient will likely not have a great experience at their appointment because their provider will be stretched thin by the double-booking.
There are additional impacts because people from certain racial and ethnic groups, and those from other health disparity populations, might be more likely to miss appointments because they encounter more structural barriers. Now you have them being double-booked disproportionately, which is potentially problematic. The technology had positive goals and was not intended to cause harm, but when an algorithm does not account for structural impediments in society, it can end up worsening pre-existing problems.
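As a hypothetical illustration of the scheduling example Roberts raises, and not the actual booking software, the sketch below double-books any patient whose predicted no-show probability exceeds an assumed threshold. The patients, groups, and probabilities are made up; the point is that if no-show predictions track structural barriers concentrated in one group, that group is double-booked disproportionately.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    group: str
    predicted_no_show: float  # output of some hypothetical no-show model

# Illustrative predictions: structural barriers push group B's no-show risk up.
patients = [
    Patient("p1", "A", 0.10), Patient("p2", "A", 0.15),
    Patient("p3", "B", 0.40), Patient("p4", "B", 0.55),
]

THRESHOLD = 0.35  # assumed efficiency cutoff

def schedule(patients):
    """Double-book patients whose predicted no-show risk exceeds the threshold."""
    return [(p.name, p.group,
             "double-booked" if p.predicted_no_show > THRESHOLD else "dedicated slot")
            for p in patients]

for name, group, booking in schedule(patients):
    print(f"{name} (group {group}) -> {booking}")
```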
Q: Are there governmental policies that have aimed to regulate medical AI?
Medical AI is hard to regulate because it spans everything from patient care to decision support tools. The Affordable Care Act has a provision called Section 1557, and the Department of Health and Human Services (HHS) has issued regulations under it. It imposed these regulations to manage the risk of discrimination. The department mandated that health programs and activities that receive federal funding make reasonable efforts to assess whether the tools they're using feature protected traits as inputs or factors.
Section 1557 covers a set of protected traits: race, color, national origin, sex, age, and disability. If a decision support tool is using one of those traits as an input or a factor, then the healthcare provider has to make a conscious effort to be aware of that. Then, the provider's second duty, if there is a chance the tool is discriminating against particular populations, is to make reasonable efforts to mitigate that risk.
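One way a provider might begin that "reasonable efforts" step is a simple audit of a tool's documented input variables against the Section 1557 protected traits. The sketch below is a hypothetical compliance helper, not anything prescribed by HHS, and the tool name and feature names are invented for illustration.

```python
# Protected traits covered by Section 1557.
PROTECTED_TRAITS = {"race", "color", "national_origin", "sex", "age", "disability"}

def flag_protected_inputs(tool_name, input_features):
    """Return any documented input features that directly measure a protected trait."""
    flagged = sorted(f for f in input_features if f in PROTECTED_TRAITS)
    if flagged:
        print(f"{tool_name}: uses protected traits {flagged}; assess and mitigate discrimination risk")
    else:
        print(f"{tool_name}: no protected traits found among documented inputs")
    return flagged

# Hypothetical decision support tool and its documented inputs.
flag_protected_inputs("readmission_risk_v2", ["age", "prior_admissions", "lab_results", "zip_code"])
```

A check like this only catches traits used explicitly as inputs; it says nothing about proxies such as cost or zip code, which is why the mitigation duty Roberts describes still matters.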
Q: What are the potential consequences of not regulating medical AI?
When it comes to new medical technology, it's much easier to get things right from the beginning than to fix issues after they've already emerged. My concern is that if we don't address these matters and implement regulations now, these technologies will enter the market and become integrated into healthcare systems. By the time we realize they're causing harm, it will be costly and time-consuming to fix. If a hospital system relies on an AI tool that is later found to be negatively impacting access, it will have to search for a replacement. And if bias is present during the early design stage, it could carry over into all the technologies developed after it.
Q: What efforts can the government make to strengthen these regulations to better protect patients?
The government could create a certification program where AI healthcare tools could be certified as health disparity-friendly. Moving away from traditional regulation, maybe there could be an incentive system or grant program that healthcare systems could take part in to promote more equitable technology. If you create incentives for hospitals to buy equitable technology, there will be enough demand to get developers to create it. The problem right now is that developers are not creating new AI technologies with patients in mind; rather, they are developing with hospitals and other providers in mind.