Explainable deep learning for multiclass skin cancer detection
Abstract
Skin cancer mortality is rising globally, driven largely by late identification and delayed diagnosis. Shortages of specialists and of fast, reliable diagnostic approaches contribute to these delays and to the tedious procedures involved in diagnosing the disease, making accurate detection and classification essential. This study implemented an explainable, mobile-compatible model for skin cancer detection and classification by combining deep learning with visual explanation methods. Its benefits are particularly relevant to low- and middle-income countries (LMICs), especially those in Sub-Saharan Africa (SSA). By applying explainability techniques, the study mitigates the black-box problem in existing AI-based approaches. The ISIC 2019 dataset, compiled by the International Skin Imaging Collaboration (ISIC) [1], was used to train four pre-trained models and a custom CNN. MobileNetV2, NASNetMobile, DenseNet121, and EfficientNetV2S were fine-tuned with several hyperparameters and additional layers, then trained, validated, and tested; the custom CNN was likewise validated and tested to assess its competence. The pre-trained models achieved validation accuracies of 96.40%, 76.06%, 91.37%, and 94.90%, respectively, while the custom CNN achieved 81.52%. MobileNetV2 outperformed the other architectures, followed by EfficientNetV2S. Explainability techniques including Grad-CAM, LIME, and SHAP were compared to determine the most suitable for unveiling the decision-making of the best model. The trained model could enhance the existing clinical workflow by saving time, ensuring transparency through human-comprehensible visual features, and providing feasible mobile compatibility to aid domain professionals in making informed decisions.