Impact of federated learning and explainable artificial intelligence for medical image diagnosis

Sivakumar Muthuramalingam, Padmapriya Thiyagarajan

Abstract


Medical image recognition has enormous potential to benefit from the recent developments in federated learning (FL) and explainable artificial intelligence (XAI). The function of FL and XAI in the diagnosis of brain cancers is discussed in this paper. XAI and FL techniques are vital for ensuring data ethics during medical image processing. This paper highlights the benefits of FL, such as cooperative model training and data privacy preservation, and the significance of XAI approaches in providing logical justifications for model predictions. A number of case studies on the segmentation of medical images employing FL were reviewed to compare and contrast various methods for assessing the efficacy of FL- and XAI-based diagnostic models for brain tumors. The relevance of FL and XAI in improving the accuracy and interpretability of medical image diagnosis has been presented. Future research directions are also described, indicating how to integrate data from various modes, create standardised evaluation processes, and manage ethical issues. This paper is intended to provide deeper insight into the relevance of FL and XAI in medical image diagnosis.
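The cooperative model training that the abstract credits to FL is typically realised with federated averaging (FedAvg): each site trains on its private data and only model weights are shared and aggregated. A minimal sketch under illustrative assumptions follows; the linear model, client sizes, and learning rate are stand-ins for the image-segmentation networks discussed in the paper, not the authors' actual setup.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=1):
    # One client's local step: gradient descent on a least-squares
    # objective (a toy stand-in for a segmentation network).
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    # FedAvg aggregation: average client models weighted by the
    # size of each client's local dataset.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
global_w = np.zeros(2)

# Three "hospitals", each holding private data that never leaves the site.
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    y = X @ true_w
    clients.append((X, y))

for _ in range(100):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```

Only the updated weights cross the network in each round, which is the privacy-preservation property the abstract refers to; raw patient images stay at their originating institutions.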

Keywords


Artificial intelligence; Data ethics; Explainable artificial intelligence; Federated learning; Machine learning; Medical image diagnostics; Medical image segmentation



DOI: http://doi.org/10.11591/ijai.v13.i4.pp3772-3785



This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

IAES International Journal of Artificial Intelligence (IJ-AI)
ISSN/e-ISSN 2089-4872/2252-8938 
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).
