Exploring Explainable AI Techniques for Improved Interpretability in Lung and Colon Cancer Classification (2024)

Mukaffi Bin Moin*, Fatema Tuj Johora Faria, Swarnajit Saha, Busra Kamal Rafa,
Mohammad Shafiul Alam

Ahsanullah University of Science and Technology, Dhaka, Bangladesh.

*Corresponding author(s). E-mail(s): mukaffi28@gmail.com
Contributing authors: fatema.faria142@gmail.com; swarnajitsaha68@gmail.com;
brafa263.3@gmail.com; shafiul.cse@aust.edu

Abstract

Lung and colon cancer are serious worldwide health challenges that require early and precise identification to reduce mortality risks. However, diagnosis depends heavily on histopathologists’ competence and becomes difficult and hazardous where expertise is insufficient. While diagnostic methods such as imaging and blood markers contribute to early detection, histopathology remains the gold standard, although it is time-consuming and vulnerable to inter-observer error. Limited access to high-end technology further restricts patients’ ability to receive immediate medical care and diagnosis. Recent advances in deep learning have generated interest in its application to medical imaging analysis, specifically the use of histopathological images to diagnose lung and colon cancer. The goal of this investigation is to adapt existing pre-trained CNN-based models, namely Xception, DenseNet201, ResNet101, InceptionV3, DenseNet121, DenseNet169, ResNet152, and InceptionResNetV2, to enhance classification through better augmentation strategies. The results show substantial progress, with all eight models reaching accuracies ranging from 97% to 99%. Furthermore, attention visualization techniques such as GradCAM, GradCAM++, ScoreCAM, Faster Score-CAM, and LayerCAM, as well as Vanilla Saliency and SmoothGrad, are used to provide insights into the models’ classification decisions, thereby improving interpretability and understanding of malignant and benign image classification. Our research implementation is publicly available at: https://github.com/Mukaffi28/Explainable-AI-for-Lung-and-Colon-Cancer-Classification

Keywords: Lung and Colon Cancer, Pre-trained CNN, Medical Imaging, Classification, Deep Learning, GradCAM, GradCAM++, Explainability, Histopathology

1 Introduction

Cancer refers to a group of disorders characterised by the uncontrolled growth and spread of abnormal cells within the body. These cells can infiltrate surrounding organs and tissues, leading to dangerous health consequences [1]. In 2023, approximately 10 million people worldwide died from cancer-related causes (https://www.paho.org/en/campaigns/world-cancer-day-2023-close-care-gap), highlighting the ongoing global impact of this disease on public health. The cancer death rate is projected to increase by 60% by 2035 [2]. Histopathological imaging is critical for diagnosing and treating lung and colon cancers because it provides microscopic images of tissue samples, highlights cellular architecture, and informs treatment decisions [2]. Digital pathology, powered by scanning technology, is transforming medicine by digitizing histopathological slides to improve disease diagnosis and management. AI integration increases its potential for clinical diagnosis and research [3]. This powerful combination has the potential to expand its applications in various fields, including cancer detection [4], cardiovascular diseases [5], neurological disorders [6], diabetic retinopathy [7], pulmonary diseases [8], and skin diseases [9].

In the last few years, major improvements have been achieved in the automated classification of lung and colon cancer using histopathological images. Using convolutional neural networks (CNNs), Sanidhya et al. [4] demonstrated the promise of artificial intelligence (AI) in enhancing cancer diagnoses by developing a computer-aided diagnosis system that detects lung and colon tumors from histopathological images with high accuracy (97% for lung and 96% for colon). Osamu et al. [10] used convolutional and recurrent neural networks to classify gastric and colonic epithelial tumors in histopathology images; their models were highly accurate, with AUCs of 0.96 and 0.99 for colon cancer and adenoma, and 0.97 and 0.99 for gastric cancer and adenoma, respectively. In another study, Sudhakar et al. [3] introduced an automated method using EfficientNetV2 models to detect lung and colon cancer subtypes with 99.97% accuracy, surpassing existing methods; their approach, which includes GradCAM-generated visual maps, assists pathologists in identifying critical regions for treatment planning and shows promise for clinical automation in cancer detection. Neha et al. [11] used CNNs (ResNet50, VGG19, InceptionResNetV2, DenseNet) to classify lung cancer histology, aiming to improve diagnostic accuracy, support better treatment decisions, and reduce pathologists’ workload, potentially improving patient outcomes in lung cancer care.

After thorough analysis, a noticeable gap emerges in the application of explainable AI techniques to lung and colon cancer using histopathological images. In this paper, we focus on automating the detection of lung and colon cancer from histopathological images. We undertake a thorough evaluation of eight renowned pre-trained CNN models: Xception, DenseNet121, DenseNet169, DenseNet201, InceptionV3, ResNet101, ResNet152, and InceptionResNetV2. To improve interpretability, we use explainable AI approaches such as Grad-CAM, Grad-CAM++, Score-CAM, FasterScore-CAM, and LayerCAM for class activation maps, and Vanilla Saliency and SmoothGrad for saliency maps. These techniques clarify the logic behind our models’ predictions, leading to a better understanding of their decision-making processes. Our main contributions are as follows:

  • The inspection of eight pre-trained CNN models (Xception, DenseNet121, DenseNet169, DenseNet201, InceptionV3, ResNet101, ResNet152, and InceptionResNetV2) for automated lung and colon cancer classification in histological images.

  • Explainable AI approaches including Grad-CAM, Grad-CAM++, Score-CAM, FasterScore-CAM, LayerCAM, Vanilla Saliency, and SmoothGrad have been integrated to increase interpretability.

  • Our research attempts to close the gap in explainable AI methods for colon and lung cancer classification in histopathology images.

2 Related Works

2.1 Lung and Colon Cancer Classification without using XAI Techniques

Sanidhya et al. [4] used CNNs to identify lung and colon malignancies, achieving diagnostic accuracies of more than 97% for lung cancer and 96% for colon cancer on digital pathology images from the LC25000 dataset. Highlighting the vital necessity of accurate and timely lung cancer histology detection, Neha et al. [11] build upon prior work in lung cancer diagnosis and classification; deep learning techniques, particularly CNNs such as ResNet50, VGG-19, InceptionResNetV2, and DenseNet, have shown promise in analyzing histopathological images for accurate subtype classification. Hasan et al. [2] offered a deep convolutional neural network model for precise identification of colon adenocarcinoma from digital histopathology images, achieving up to 99.80% accuracy. Notably, none of these studies explored explainable AI techniques in the context of lung and colon cancer diagnosis.

2.2 Lung and Colon Cancer Classification using XAI Techniques

Satvik and Somya [1] developed a system that automates lung and colon cancer identification using deep neural networks on histopathology images. Employing eight pre-trained CNN models, they achieved remarkable accuracies ranging from 96% to 100%, and used GradCAM and SmoothGrad for explainability. Another research effort [12] introduced a novel Bilinear-CNN-based model for automated tissue segmentation of lung cancer in whole-slide images (WSIs); the model addresses challenges posed by tumor heterogeneity and uses GradCAM for explainability. Sudhakar et al. [3] proposed an automated method utilizing EfficientNetV2 models for detecting lung and colon cancer subtypes from histopathology images; visual saliency maps were employed to aid in understanding model decisions, and the approach obtained a remarkable 99.98% maximum test accuracy on the LC25000 dataset. In contrast, Ahmed et al. [13] offered a lightweight CNN-based deep learning technique for accurate colon cancer detection; despite lacking class activation maps or saliency maps, their method achieved a high accuracy of 99.50%, outperforming existing deep learning approaches.

3 Dataset Description

We utilized the LC25000 dataset [14], which contains 25,000 color images of lung and colon tissue categorised into five classes: lung squamous cell carcinoma, lung adenocarcinoma, benign lung tissue, colon adenocarcinoma, and benign colonic tissue. Each class comprises 5,000 images, each sized 768 × 768 pixels. The collection is divided into colon and lung image sets in accordance with HIPAA compliance guidelines. It is instrumental in developing diagnostic tools for lung and colon cancers, driving progress in medical imaging research. A visual representation of the LC25000 dataset is displayed in Figure 1.

Figure 1: Visual representation of the LC25000 dataset.

4 Background Study

4.1 Convolutional Neural Networks (CNNs)

Xception [15], DenseNet201 [16], DenseNet121 [16], DenseNet169 [16], ResNet101 [17], ResNet152 [17], InceptionV3 [18], and InceptionResNetV2 [19] are all prominent Convolutional Neural Network (CNN) architectures. Xception, an “Extreme Inception,” enhances the Inception model by utilizing depthwise separable convolutions for improved computational efficiency. DenseNet models, including DenseNet201, DenseNet121, and DenseNet169, feature densely connected layers where each layer connects to every other layer, facilitating feature reuse. ResNet101 and ResNet152, part of the ResNet family, address the vanishing-gradient problem by introducing skip connections, enabling the training of extremely deep networks. InceptionV3, a member of the Inception family, employs various filter sizes to capture features at multiple scales efficiently. InceptionResNetV2 combines the strengths of both Inception and ResNet architectures, integrating residual connections and multi-scale feature extraction for enhanced performance.
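The skip connection that lets ResNets train very deep networks can be illustrated in a few lines. In this toy sketch, the linear-plus-ReLU transform is only a stand-in for the block's actual convolutional layers, not the real ResNet implementation:

```python
# Toy illustration of a residual (skip) connection: the block learns a
# residual F(x) and outputs F(x) + x, so gradients can always flow
# through the identity path even when F's gradients vanish.
import numpy as np

def residual_block(x, weight=0.5):
    fx = np.maximum(weight * x, 0.0)  # stand-in for conv + ReLU
    return fx + x                     # identity shortcut

# Even if F(x) collapses to zero (weight = 0.0), the input still
# passes through unchanged via the shortcut.
```

This identity path is what distinguishes ResNet-style architectures from plain stacked networks, where a degenerate layer would block the signal entirely.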

4.2 Explainable Artificial Intelligence (XAI) techniques

XAI techniques are essential for enhancing the transparency and interpretability of AI models. Among the notable methods are GradCAM, which identifies critical image regions by analyzing gradients of the target class score relative to convolutional feature maps, and its extension GradCAM++, which refines localization accuracy by considering both positive and negative influences. ScoreCAM assigns importance scores to spatial locations in feature maps based on class scores, facilitating precise localization, while Faster Score-CAM optimizes this computation for real-time applications. LayerCAM attributes importance scores to input pixels via relevance propagation across model layers, providing insights into decision-making processes. Vanilla Saliency computes gradients of the output class score with respect to the input image, highlighting influential regions, whereas SmoothGrad enhances interpretability by averaging multiple saliency maps to reduce noise and produce smoother visualizations.
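The SmoothGrad idea described above can be made concrete in a few lines: saliency maps are computed on several noise-perturbed copies of the input and averaged. In this sketch, `saliency_fn` is a placeholder for the model's gradient computation (e.g. |d class score / d input| from a trained network):

```python
# Illustrative SmoothGrad: average the saliency maps of several
# noise-perturbed copies of the input to suppress gradient noise.
# `saliency_fn` stands in for a real model's gradient-based saliency.
import numpy as np

def smoothgrad(image, saliency_fn, n_samples=25, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    noisy_maps = [saliency_fn(image + rng.normal(0.0, sigma, image.shape))
                  for _ in range(n_samples)]
    return np.mean(noisy_maps, axis=0)
```

The noise level `sigma` and sample count `n_samples` are the method's two knobs; the values here are illustrative defaults, not the settings used in our experiments.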

4.3 Evaluation Metrics

In the evaluation of lung and colon classification tasks, several metrics are vital for comprehensively assessing model performance. Accuracy measures the overall correctness of predictions irrespective of class distribution, while precision and recall evaluate the model’s ability to correctly identify instances of lung or colon conditions, focusing on minimizing false positives and false negatives, respectively. The F1 score strikes a balance between precision and recall, crucial for tasks with class imbalance. The Jaccard score assesses the overlap between predicted and actual classes, especially valuable in multi-class problems or imbalanced datasets. Finally, log loss quantifies how well the predicted class probabilities match the actual labels, incentivizing well-calibrated predictions in lung and colon classification models.
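These metrics can be written out directly. Below is a minimal NumPy sketch using macro averaging, one common convention for multi-class problems (the averaging convention is an assumption; the function names are illustrative):

```python
# Per-class precision/recall/F1/Jaccard, macro-averaged, plus accuracy
# and multi-class log loss, written out from their definitions.
import numpy as np

def macro_metrics(y_true, y_pred, num_classes):
    """Return accuracy and macro-averaged precision, recall, F1, Jaccard."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = np.mean(y_true == y_pred)
    prec, rec, f1, jac = [], [], [], []
    for c in range(num_classes):
        tp = np.sum((y_pred == c) & (y_true == c))   # true positives
        fp = np.sum((y_pred == c) & (y_true != c))   # false positives
        fn = np.sum((y_pred != c) & (y_true == c))   # false negatives
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        prec.append(p); rec.append(r)
        f1.append(2 * p * r / (p + r) if p + r else 0.0)
        jac.append(tp / (tp + fp + fn) if tp + fp + fn else 0.0)
    return acc, np.mean(prec), np.mean(rec), np.mean(f1), np.mean(jac)

def log_loss(y_true, probs, eps=1e-15):
    """Negative mean log-probability assigned to the true class."""
    probs = np.clip(np.asarray(probs), eps, 1 - eps)
    return -np.mean(np.log(probs[np.arange(len(y_true)), y_true]))
```

In practice scikit-learn's `precision_recall_fscore_support`, `jaccard_score`, and `log_loss` compute the same quantities; the sketch only makes the definitions explicit.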

5 Proposed Methodology

This comprehensive approach allows us to compare the performance of various models in lung and colon cancer image classification tasks efficiently. Figure 2 illustrates the methodology employed in our research.
Step 1) Input Image: To ensure uniformity and compatibility across the models and techniques, every image was resized to a standardized dimension of 299 × 299 pixels.
Step 2) Image Preprocessing: To prepare the dataset for model training, we first normalized pixel values to the range 0 to 1, fostering convergence during training and addressing disparate data distributions. We then applied random rotations to diversify the dataset and improve the models’ adaptability to varied orientations, and horizontal flipping to increase variety and reduce overfitting by simulating mirror images. Next, we selectively cropped images to eliminate extraneous background and emphasize regions of interest, facilitating feature extraction and reducing computational complexity. To address illumination discrepancies, we adjusted image brightness to ensure dataset consistency and minimize the impact of lighting variations on model performance. Finally, we applied contrast enhancement to refine image contrast, which is particularly important in medical imaging, where detailed information is critical for accurate analysis.
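A minimal NumPy sketch of some of these preprocessing steps (normalization, random horizontal flip, center crop, brightness adjustment). The crop size, brightness range, and flip probability below are illustrative values, not the settings used in our experiments, and rotation and contrast enhancement are omitted for brevity:

```python
# Sketch of a per-image preprocessing pipeline for (H, W, 3) uint8 inputs.
import numpy as np

rng = np.random.default_rng(0)

def preprocess(img, crop=256, brightness=0.1, flip_prob=0.5):
    img = img.astype(np.float32) / 255.0           # normalize to [0, 1]
    if rng.random() < flip_prob:                   # random horizontal flip
        img = img[:, ::-1, :]
    h, w = img.shape[:2]                           # center crop
    top, left = (h - crop) // 2, (w - crop) // 2
    img = img[top:top + crop, left:left + crop, :]
    delta = rng.uniform(-brightness, brightness)   # brightness shift
    return np.clip(img + delta, 0.0, 1.0)
```

In a TensorFlow pipeline the same steps would typically be expressed with `tf.image` ops inside a `tf.data` map; the NumPy version above just makes each transformation explicit.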
Step 3) Model Selection and Training: We selected eight CNN architectures renowned for their efficacy in image classification: Xception, DenseNet201, ResNet101, InceptionV3, DenseNet121, DenseNet169, ResNet152, and InceptionResNetV2. Each architecture was implemented in the TensorFlow deep learning framework and trained on the preprocessed dataset, ensuring coherence and stability throughout training. Hyperparameter tuning was also conducted, the details of which are provided in Table 1; this optimization aimed to fine-tune each model’s parameters and enhance its performance on lung and colon cancer classification.
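The training setup can be sketched in TensorFlow/Keras as follows. The classifier head (global average pooling plus a 5-way softmax) and the fine-tuning details are assumptions about the setup, while the optimizer and learning rate follow Table 1:

```python
# Sketch: attach a 5-class softmax head to an ImageNet-pretrained
# backbone from tf.keras.applications and compile it with the Adam
# settings reported in Table 1. The head design is an assumption.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_and_compile(backbone_cls, num_classes=5, lr=1e-3,
                      weights="imagenet"):
    base = backbone_cls(include_top=False, weights=weights,
                        input_shape=(299, 299, 3))
    x = layers.GlobalAveragePooling2D()(base.output)
    out = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Hypothetical usage, with epochs/batch size per Table 1:
# model = build_and_compile(tf.keras.applications.Xception)
# model.fit(train_ds, validation_data=val_ds, epochs=30, batch_size=10)
```

The same function works for all eight backbones by passing, e.g., `tf.keras.applications.DenseNet201` or `tf.keras.applications.InceptionResNetV2`.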
Step 4) Explainable AI Techniques: We applied explainable AI (XAI) techniques to the last layer of the CNN models to enhance interpretability and provide insight into model decision-making. For class activation maps, we utilized GradCAM, GradCAM++, ScoreCAM, Faster Score-CAM, and LayerCAM to generate heatmaps that highlight the discriminative regions in the images for each class, refine feature localization, produce class-specific attention maps, and visualize activations at different network layers. Additionally, we employed Vanilla Saliency and SmoothGrad to generate saliency maps identifying the high-gradient regions that influence model predictions.
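To make the heatmap computation concrete, here is a framework-agnostic sketch of the core Grad-CAM step. It takes the last convolutional layer's activations and the gradients of the class score with respect to those activations (quantities a framework such as TensorFlow's `GradientTape` would supply) and combines them into a class activation map; the shapes are illustrative:

```python
# Core Grad-CAM step: weight each feature map by the global-average-pooled
# gradient of the class score, sum the weighted maps, apply ReLU, and
# normalize to [0, 1] for overlay on the input image.
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: (H, W, C) arrays for a single image."""
    weights = gradients.mean(axis=(0, 1))             # pool grads per channel
    cam = np.tensordot(feature_maps, weights, axes=([2], [0]))
    cam = np.maximum(cam, 0.0)                        # keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1]
    return cam
```

The resulting low-resolution map is then upsampled to the input size and overlaid on the histopathological image; GradCAM++, ScoreCAM, and the other variants differ mainly in how the per-channel weights are computed.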
Step 5) Performance Evaluation: Table 2 reports the evaluation of each model’s effectiveness in lung and colon cancer classification. These metrics provide a comprehensive view of each model’s classification performance, allowing us to compare the models and select the most suitable one for the task.

Figure 2: Overview of the proposed methodology.

6 Results and Discussion

6.1 Hyperparameter Settings

Eight pre-trained CNN models were trained to classify lung and colon cancer. To ensure optimal convergence without overfitting, every model used the Adam optimizer with a batch size of 10; learning rates between 0.0001 and 0.001 and 25 to 40 training epochs were used, depending on the model. The hyperparameter settings for each pre-trained CNN model are shown in Table 1.

Table 1: Hyperparameter settings for the pre-trained CNN models.

Model               Learning Rate   Batch Size   Epochs   Optimizer
Xception            0.001           10           30       Adam
DenseNet201         0.001           10           25       Adam
ResNet101           0.001           10           35       Adam
InceptionV3         0.001           10           25       Adam
DenseNet121         0.001           10           30       Adam
DenseNet169         0.001           10           35       Adam
ResNet152           0.001           10           35       Adam
InceptionResNetV2   0.001           10           40       Adam

6.2 Experiments

Table 2 compares the performance of the eight pre-trained CNN models on lung and colon cancer classification. With an accuracy of 0.9989 and a log loss of 0.0384, Xception performs best; with the highest log loss of 0.8458 and the lowest accuracy of 0.9765, InceptionResNetV2 performs worst. The metrics include accuracy, precision, recall, F1-score, Jaccard score, and log loss; higher values signify better outcomes for all metrics except log loss, where lower is better. Figure 3 shows the confusion matrices of the pre-trained CNNs for lung and colon cancer classification, and Figure 4 visualizes the various explainable AI techniques.

Table 2: Performance of the pre-trained CNN models.

Model               Accuracy   Precision   Recall   F1-Score   Jaccard Score   Log Loss
Xception            0.9989     0.9989      0.9989   0.9989     0.9978          0.0384
DenseNet201         0.9971     0.9971      0.9971   0.9971     0.9942          0.1057
ResNet101           0.9928     0.9928      0.9928   0.9928     0.9858          0.2595
InceptionV3         0.9904     0.9907      0.9904   0.9904     0.9812          0.3460
DenseNet121         0.9896     0.9898      0.9896   0.9896     0.9795          0.3749
DenseNet169         0.9888     0.9888      0.9888   0.9888     0.9781          0.4037
ResNet152           0.9885     0.9886      0.9885   0.9885     0.9774          0.4133
InceptionResNetV2   0.9765     0.9765      0.9765   0.9765     0.9547          0.8458
Figure 3: Confusion matrices of the pre-trained CNNs for lung and colon cancer classification.

Figure 4: Visualizations of the various explainable AI techniques.

7 Limitations and Future Research Directions

While our research offers useful insights into the classification of lung and colon cancers, it has several limitations. First, dataset size, quality, and variety can all affect our models’ effectiveness; addressing these constraints would require larger and more diverse datasets, possibly combining data from multiple sources to improve model generalization. Second, our research concentrated on image-based classification, ignoring possible synergies with other modalities such as genomic or clinical data; future work could investigate multimodal techniques to increase classification accuracy and adaptability. Third, the interpretability of our models, while enhanced through XAI techniques, remains a challenge, particularly in complex medical domains; further research is needed to develop more interpretable models and refine existing XAI methods to provide deeper insights into model decision-making. Finally, our research focused on two forms of cancer, lung and colon; extending the method to additional cancer types would increase its relevance and impact in oncology, and examining the models’ transferability to different healthcare settings and patient demographics will be important for real-world deployment.

8 Conclusion

Our research demonstrates the efficacy of the proposed methodology for lung and colon cancer classification, which uses advanced deep learning and explainable AI (XAI) techniques to improve accuracy and interpretability. We obtained outstanding classification results by applying state-of-the-art CNN architectures to a standardized dataset. Notably, Xception outperformed all other models tested, with an accuracy of 0.9989 and the lowest log loss of 0.0384; in contrast, InceptionResNetV2 had the lowest accuracy (0.9765) and the highest log loss (0.8458), indicating room for improvement in its classification capability. Our incorporation of XAI techniques such as GradCAM, GradCAM++, ScoreCAM, Faster Score-CAM, LayerCAM, Vanilla Saliency, and SmoothGrad yielded useful insights into the CNN models’ decision-making processes. These techniques improved the visualization of discriminative areas in histopathological images, allowing the detection of the critical characteristics that distinguish malignant from benign tissue. Such visualizations improve interpretability and enable more informed decision-making in healthcare settings, resulting in better patient care and treatment outcomes. Overall, our study illustrates the power of sophisticated deep learning models and XAI approaches in medical image analysis, providing a solid foundation for precise cancer classification. Xception’s superior performance demonstrates its efficacy in making confident and precise predictions, while the insights from the XAI techniques improve the interpretability of the classification process, paving the way for improved diagnostic accuracy and clinical decision support in cancer treatment.

References

  • [1] Satvik Garg and Somya Garg. Prediction of lung and colon cancer through analysis of histopathological images by utilizing pre-trained CNN models with visualization of class activation and saliency maps. In 2020 3rd Artificial Intelligence and Cloud Computing Conference (AICCC 2020). ACM, December 2020.
  • [2] Md Imran Hasan, Md Shahin Ali, Md Habibur Rahman, and Md Khairul Islam. Automated detection and characterization of colon cancer with deep convolutional neural networks. Journal of Healthcare Engineering, 2022:1–12, August 2022.
  • [3] Sudhakar Tummala, Seifedine Kadry, Ahmed Nadeem, Hafiz Tayyab Rauf, and Nadia Gul. An explainable classification method based on complex scaling in histopathology images for lung and colon cancer. Diagnostics, 13(9):1594, April 2023.
  • [4] Sanidhya Mangal, Aanchal Chaurasia, and Ayush Khajanchi. Convolution neural networks for diagnosing colon and lung cancer histopathological images, 2020.
  • [5] Luísa Soares. Cardiovascular disease: A review. Biomedical Journal of Scientific & Technical Research, 51(3), July 2023.
  • [6] Ziad Rizk, Ghadha Ibrahim Fouad, and Hanan Aly. Neurological disorders: Causes and treatment strategies. International Journal of Public Mental Health and Neurosciences, April 2018.
  • [7] Manoj S H and Arya A Bosale. Detection and classification of diabetic retinopathy using deep learning algorithms for segmentation to facilitate referral recommendation for test and treatment prediction, 2024.
  • [8] Sudipto Bhattacharjee, Banani Saha, Parthasarathi Bhattacharyya, and Sudipto Saha. Classification of obstructive and non-obstructive pulmonary diseases on the basis of spirometry using machine learning techniques. Journal of Computational Science, 63:101768, September 2022.
  • [9] K. Veera Swamy and B. Divya. Skin disease classification using machine learning algorithms. In 2021 2nd International Conference on Communication, Computing and Industry 4.0 (C2I4). IEEE, December 2021.
  • [10] Osamu Iizuka, Fahdi Kanavati, Kei Kato, Michael Rambeau, Koji Arihiro, and Masayuki Tsuneki. Deep learning models for histopathological classification of gastric and colonic epithelial tumours. Scientific Reports, 10(1), January 2020.
  • [11] Neha Baranwal, Preethi Doravari, and Renu Kachhoria. Classification of histopathology images of lung cancer using convolutional neural network (CNN), 2021.
  • [12] Rui Xu, Zhizhen Wang, Zhenbing Liu, Chu Han, Lixu Yan, Huan Lin, Zeyan Xu, Zhengyun Feng, Changhong Liang, Xin Chen, Xipeng Pan, and Zaiyi Liu. Histopathological tissue segmentation of lung cancer with bilinear CNN and soft attention. BioMed Research International, 2022:1–10, July 2022.
  • [13] Ahmed S. Sakr, Naglaa F. Soliman, Mehdhar S. Al-Gaashani, Paweł Pławiak, Abdelhamied A. Ateya, and Mohamed Hammad. An efficient deep learning approach for colon cancer detection. Applied Sciences, 12(17):8450, August 2022.
  • [14] Andrew A. Borkowski, Marilyn M. Bui, L. Brannon Thomas, Catherine P. Wilson, Lauren A. DeLand, and Stephen M. Mastorides. Lung and colon cancer histopathological image dataset (LC25000), 2019.
  • [15] François Chollet. Xception: Deep learning with depthwise separable convolutions, 2017.
  • [16] Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2261–2269, 2017.
  • [17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015.
  • [18] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. CoRR, abs/1512.00567, 2015.
  • [19] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alex Alemi. Inception-v4, Inception-ResNet and the impact of residual connections on learning, 2016.