Pei-Hua Huang
03.03.2026-28.04.2026
Explainability in Real-World Medical Contexts: Towards an Ethical Use of AI in Medicine
The overall objective of this research project is to systematically examine the notion of explainability in real-world medical contexts to establish a solid foundation for further inquiries into the ethical and philosophical issues raised by machine learning-based medical AI systems. Specifically, this project aims to achieve three interrelated objectives:
- To clarify the notion of explainability and the role it plays in real-world medical contexts.
- To clarify the relationships between explainability and other epistemic virtues, such as reliability, transparency, and accuracy, and to contextualise the epistemic virtues of different diagnostic tools in clinical decision-making.
- To clarify the nature of the results generated by a machine learning-based medical AI system.
This research project is expected to deliver the following outcomes:
- To produce at least five academic publications, with one already underway (an invited contribution to BMC Medical Ethics).
- To offer practical advice regarding how machine learning-based medical AI systems ought to be regulated and deployed in the healthcare sector.
- To inform the general public about the latest research on the ethics of medical AI via public lectures and online venues such as The Conversation and The Atlantic.
The specific objectives I aim to achieve during my stay at the Brocher Foundation are as follows:
- To discuss my research and research ideas with other residents.
- To visit researchers in Switzerland who are knowledgeable about the European Health Data Space (EHDS), the EU’s AI Act, the Paediatric Personalised Research Network Switzerland (SwissPedHealth), and the WHO’s global strategy on digital health.