Dr. Mobarakol Islam is a senior research fellow at WEISS, University College London. Before that, he was a postdoctoral research associate in the Department of Computing at Imperial College London, under the supervision of Dr. Ben Glocker in the BioMedIA Lab. He holds a PhD from the Integrative Sciences and Engineering Programme (ISEP) at the National University of Singapore (NUS) and afterwards worked as a research fellow in the same lab. Prior to his PhD, he was a lead software engineer at Samsung R&D Institute. His research focuses on enhancing the robustness, fairness, and reliability of deep neural networks using uncertainty, calibration, and causality to improve image-guided disease diagnosis and intervention. Overall, his research covers medical imaging and video sources (MRI, CT, X-ray, ultrasound, endoscopy, and microscopy) as well as non-imaging data sources (DNA, genomic, radiomic, and clinical information). He is also involved in teaching undergraduate and postgraduate students and supervising PhD students on collaborative projects between UCL, ICL, and NUS. He has received several awards, including the Turing Postdoctoral Enrichment Award, the AUAPAF Conference Scholarship, the ISEP PhD Scholarship, ICRA/MICCAI travel awards, and the KUETEF Best Paper Award. He serves as an area chair for MICCAI 2023, an organizer of the MICCAI DART workshop, and a reviewer for several top conferences and journals in healthcare AI, including TPAMI, MedIA, IEEE TMI, MICCAI, ICRA, IROS, IJCARS, IEEE RA-L, and Neurocomputing.
Abstract:
Although AI has enormous potential to accelerate healthcare, very few AI-based medical systems have translated into clinical practice, owing to concerns about algorithmic trust, safety, and transparency. The key limitations of current AI-enabled systems, including recent foundation models, lie in reliability, technical robustness, fairness, and transparency. In particular, AI models are (i) poorly robust: performance drops significantly under data variation; (ii) unreliable: overconfident in their predictions and unable to signal when a prediction is wrong or ambiguous; (iii) unfair and biased: prone to underdiagnosis in certain populations; and (iv) susceptible to catastrophic forgetting: training on novel tasks disrupts previously learned ones in a constantly changing environment. In this talk, I will discuss some of my work toward safe and reliable AI in the applications of image-guided diagnosis and intervention, covering novel methods for uncertainty estimation and confidence calibration, perturbation and computational stress testing, feature-level regularization, curriculum Fourier domain adaptation, and synthetic continual learning with vision-language modeling.
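To make the overconfidence problem and the idea of confidence calibration concrete, the sketch below implements post-hoc temperature scaling (Guo et al., 2017), a standard calibration baseline and not the specific method presented in this talk. A single scalar temperature T is fitted on held-out validation logits to minimise negative log-likelihood; the model's predictions are unchanged while its confidences shrink toward better-calibrated values. All function names and the toy data are illustrative.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; T > 1 flattens (softens) the distribution."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of the true labels under T-scaled probabilities."""
    probs = softmax(logits, T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(val_logits, val_labels, grid=np.linspace(0.5, 5.0, 91)):
    """Fit the single scalar T on held-out validation data by grid search on NLL."""
    return min(grid, key=lambda T: nll(val_logits, val_labels, T))

if __name__ == "__main__":
    # Toy data: large-magnitude logits with random labels mimic an
    # overconfident, miscalibrated classifier on a 3-class problem.
    rng = np.random.default_rng(0)
    val_logits = rng.normal(scale=5.0, size=(500, 3))
    val_labels = rng.integers(0, 3, size=500)

    T = fit_temperature(val_logits, val_labels)
    before = softmax(val_logits).max(axis=1).mean()
    after = softmax(val_logits, T).max(axis=1).mean()
    print(f"fitted T = {T:.2f} (T > 1 signals overconfidence)")
    print(f"mean confidence: {before:.3f} -> {after:.3f}")
```

Because only one scalar is learned, temperature scaling cannot change the model's accuracy or overfit the validation set; richer calibration and uncertainty methods build on the same principle of aligning reported confidence with observed error rates.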