Dr. Ravi Parikh Advances Ethical AI in Healthcare


Dr. Ravi Parikh pictured above.

Dr. Ravi Parikh, a medical oncologist and associate professor of Hematology and Oncology at Emory University, leads the Human-Algorithm Collaboration Lab, an NIH-funded multidisciplinary laboratory that develops and tests algorithm-driven interventions in cancer care and serious illness. His team is dedicated to creating trustworthy AI models that enhance clinical decision-making, particularly in oncology.

Dr. Parikh shared details about his role, the ethical challenges of AI in healthcare, and his vision for its transformative potential in clinical care.

Q: What is the mission of the Human-Algorithm Collaboration Lab? 

Our goal is to integrate AI models into clinical workflows and assess their impact on decision-making and patient outcomes. We conduct prospective trials to evaluate concrete outcomes, such as the number of palliative care consultations or colon cancer screenings prompted by AI-based tools, moving beyond purely retrospective statistical analyses.

Q: What do you consider the biggest ethical concern when using AI models in clinical settings?

Several ethical issues arise with AI integration in healthcare. First, many AI models function as “black boxes,” making it difficult for clinicians to understand how they arrive at a prediction, which can erode trust. To address this, we often employ simpler, more interpretable models. Second, there is uncertainty in deploying algorithms without robust evidence of their efficacy. While some AI tools enhance efficiency, the true potential lies in aiding treatment decisions or drug discovery; however, most available AI tools lack rigorous prospective data. Our lab addresses this by conducting large-scale trials to assess their real-world impact. Finally, performance disparities in underserved populations are a concern, as these groups are often underrepresented in training datasets. We, along with colleagues at Emory, are working to mitigate this bias by ensuring diverse, representative training data and by developing methods to detect and correct such disparities.
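To make the idea of auditing for performance disparities concrete, the following is a minimal sketch, not the lab's actual pipeline: it compares a model's discrimination (AUC) across patient subgroups on held-out data. The dataset and the column names "subgroup", "label", and "score" are illustrative assumptions.

# Minimal sketch of a subgroup performance audit (illustrative only;
# not the Human-Algorithm Collaboration Lab's actual code).
# Assumes a held-out DataFrame with hypothetical columns
# "subgroup", "label", and model scores in "score".
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def audit_subgroup_auc(df, score_col="score", label_col="label",
                       group_col="subgroup"):
    """Report AUC per subgroup so performance gaps are visible."""
    rows = []
    for group, sub in df.groupby(group_col):
        # AUC is undefined when a subgroup contains only one class.
        if sub[label_col].nunique() < 2:
            rows.append({"subgroup": group, "n": len(sub), "auc": np.nan})
            continue
        rows.append({"subgroup": group, "n": len(sub),
                     "auc": roc_auc_score(sub[label_col], sub[score_col])})
    return pd.DataFrame(rows).sort_values("auc")

# Example with synthetic data:
rng = np.random.default_rng(0)
demo = pd.DataFrame({
    "subgroup": rng.choice(["A", "B", "C"], size=300),
    "label": rng.integers(0, 2, size=300),
    "score": rng.random(300),
})
print(audit_subgroup_auc(demo))

A large AUC gap between subgroups in a report like this is the kind of disparity that would prompt rebalancing the training data or recalibrating the model for the affected group.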

Q: Can you elaborate on the strategies your lab employs to maintain transparency and minimize external influences in diagnostics and treatments?

Our lab undertakes several initiatives to ensure transparency. For instance, we're conducting a trial comparing AI-assisted clinical trial identification workflows with standard methods, aiming to streamline patient enrollment. We're also setting up a study, funded by a Linked Research Project Grant (RL1), to implement a machine learning system that identifies patients at risk of early mortality, facilitating timely supportive care interventions like palliative care or advance care planning. Additionally, we're developing tools to design and run clinical trials more efficiently, targeting patients who are less responsive to traditional treatments. A recent publication in Nature Medicine highlights our efforts to identify such individuals, guiding more effective clinical trial designs.
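As a rough illustration of risk-based flagging for supportive-care outreach, here is a small sketch under stated assumptions; the features, model, and 0.6 cutoff are all hypothetical and do not describe the RL1 study's actual system.

# Illustrative sketch of flagging high-risk patients for outreach
# (hypothetical features and threshold; not the RL1 study's model).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Stand-in structured features, e.g. age, lab values, prior admissions.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An interpretable model keeps the risk score auditable by clinicians.
model = LogisticRegression().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]

# Flag the highest-risk patients for a supportive-care conversation;
# the 0.6 cutoff is arbitrary here and would be set clinically.
flagged = np.where(risk >= 0.6)[0]
print(f"{len(flagged)} of {len(risk)} test patients flagged for outreach")

In practice, a threshold like this would be chosen prospectively, with clinician input, and evaluated in a trial rather than assumed.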

Q: What do you envision for the future of AI in oncology over the next five to ten years, based on your current work?

I anticipate a growing emphasis on integrating multimodal data—combining AI tools from pathology, radiology, and genetics—to develop comprehensive biomarkers that mirror clinical reasoning. Moreover, the advent of autonomous AI agents capable of matching or surpassing human performance in prognosis or treatment decisions could revolutionize oncology, delivering better treatments to patients more swiftly. While these advancements may be a few years away, our lab is committed to rigorously testing these tools in clinical workflows through well-designed trials to ensure their efficacy and safety.