Machine Learning Techniques for Improved Disease Detection in Breast and Gastrointestinal Tissues

Date

2021-11-17

Abstract

We have developed high-performing deep learning architectures and preprocessing pipelines for identifying abnormalities, diseases, anatomical landmarks, and physiological characteristics of breast and gastrointestinal tissues. These algorithms have the potential to improve the accuracy, speed, cost, and accessibility of medical image screening for individuals around the world. For breast tissue, we developed a region-proposal network for identifying and localizing malignancies; a one-class classifier, used in an intelligent image filtering pipeline, for determining whether an image is truly a mammogram; and DualViewNet, a convolutional neural network built upon MobileNetV2, for identifying breast tissue density in mammograms and quantifying the usefulness of different mammogram views in determining breast density. The malignancy identification network achieved 0.951 AUC when tested on BI-RADS 1, 5, and 6 mammograms from the INbreast dataset, while the one-class classifier made only 2 misclassifications out of 410 mammograms and 2 misclassifications out of 1,640 non-mammograms. DualViewNet outperformed all compared architectures with a macro average AUC of 0.8970 and a macro average 95% confidence interval of 0.8239–0.9450, and demonstrated a preference for MLO over CC views in 1,187 out of 1,323 breasts.

For gastrointestinal tissue, we fine-tuned Inception-v4, Inception-ResNet-v2, and NASNet on images sent through a custom data pipeline, achieving state-of-the-art results on images taken from the Kvasir database. These models achieved accuracies of 0.9845, 0.9848, and 0.9735, respectively. In addition, Inception-v4 achieved an average of 0.938 precision, 0.939 recall, 0.991 specificity, 0.938 F1 score, and 0.929 Matthews correlation coefficient (MCC). Bootstrapping gave NASNet, the worst-performing model, a lower bound of 0.9723 accuracy on the 95% confidence interval.

In addition, we built a cloud-based deployment environment for remotely analyzing and screening mammograms from anywhere in the world, as well as a client-side annotation tool for generating new training data. These results are presented in detail in the following chapters.
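The bootstrapped confidence intervals mentioned above can be computed with a standard percentile bootstrap: resample the per-image prediction outcomes with replacement many times, compute the accuracy of each resample, and take the empirical quantiles. The sketch below is a minimal, self-contained illustration of that procedure; the sample size, accuracy, and function name are hypothetical and do not reflect the thesis's actual evaluation code.

```python
import random

random.seed(0)  # reproducible resampling for this illustration

def bootstrap_accuracy_ci(correct, n_resamples=10_000, alpha=0.05):
    """Percentile-bootstrap confidence interval for classification accuracy.

    `correct` is a list of 0/1 outcomes, one per test image
    (1 = the model's prediction was correct).
    """
    n = len(correct)
    accuracies = []
    for _ in range(n_resamples):
        # Resample the test set with replacement and score it.
        resample = [correct[random.randrange(n)] for _ in range(n)]
        accuracies.append(sum(resample) / n)
    accuracies.sort()
    lower = accuracies[int((alpha / 2) * n_resamples)]
    upper = accuracies[int((1 - alpha / 2) * n_resamples) - 1]
    return lower, upper

# Hypothetical outcomes: 975 correct predictions out of 1,000 test images.
outcomes = [1] * 975 + [0] * 25
lower, upper = bootstrap_accuracy_ci(outcomes)
```

The interval brackets the point-estimate accuracy (0.975 here), and its lower bound is the kind of conservative performance figure quoted for NASNet above.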
