Maciej Mazurowski

Positions:

Associate Professor in Radiology

Radiology
School of Medicine

Associate Professor in the Department of Electrical and Computer Engineering

Electrical and Computer Engineering
Pratt School of Engineering

Associate Professor in Biostatistics and Bioinformatics

Biostatistics & Bioinformatics
School of Medicine

Associate Professor of Computer Science

Computer Science
Trinity College of Arts & Sciences

Member of the Duke Cancer Institute

Duke Cancer Institute
School of Medicine

Education:

Ph.D. 2008

University of Louisville

Grants:

Machine learning and collaborative filtering tools for personalized education in digital breast tomosynthesis

Administered By
Radiology
Awarded By
National Institutes of Health
Role
Principal Investigator
Start Date
End Date

Improved education in digital breast tomosynthesis using machine learning and computer vision tools

Administered By
Radiology
Awarded By
Radiological Society of North America
Role
Principal Investigator
Start Date
End Date

Development of a personalized evidence-based algorithm for the management of suspicious calcifications

Administered By
Radiology, Breast Imaging
Awarded By
GE-AUR Radiology Research
Role
Mentor
Start Date
End Date

Breast Cancer Detection Consortium

Administered By
Surgery, Surgical Sciences
Awarded By
National Institutes of Health
Role
Statistician
Start Date
End Date

Publications:

Deep neural networks trained for segmentation are sensitive to brightness changes: Preliminary results

Medical images of a patient may have a significantly different appearance depending on imaging modality (e.g., MRI vs. CT), sequence type (e.g., T1-weighted MRI vs. T2-weighted MRI), and even the manufacturer/model of the equipment used for the same modality and sequence type (e.g., Siemens vs. GE). Since, in the context of deep learning, training and test data often come from different institutions, it is important to determine how well neural networks generalize when image appearance varies. There is currently no systematic answer to this question. In this study, we investigate how deep neural networks trained for segmentation generalize. Our analysis is based on synthesizing a series of datasets of images with a target object of the same shape but with varying pixel intensity of the foreground object and the background. This simulates basic effects of changing equipment models and sequence types. We also consider scenarios in which datasets with different image properties are combined, to determine whether generalizability of the network to other scenarios is improved. We found that the generalizability of segmentation networks to changing intensities is poor. We also found that generalizability is somewhat improved when different datasets are combined, but that it is typically limited to data similar to the two types of datasets included in training and does not extend to datasets with different image intensities.
Authors
MLA Citation
Zhu, Z., et al. “Deep neural networks trained for segmentation are sensitive to brightness changes: Preliminary results.” Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 11597, 2021. Scopus, doi:10.1117/12.2582190.
URI
https://scholars.duke.edu/individual/pub1478564
Source
scopus
Published In
Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume
11597
Published Date
DOI
10.1117/12.2582190
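
The synthetic-data setup described in the abstract above can be illustrated with a short sketch (an illustrative assumption, not the authors' code): images share a target object of fixed shape while foreground and background intensities vary, emulating changes in equipment model or sequence type.

    # Minimal sketch: synthetic segmentation samples whose target shape is fixed
    # but whose foreground/background intensities differ across datasets.
    import numpy as np

    def make_synthetic_sample(size=128, radius=24, fg_intensity=0.8,
                              bg_intensity=0.2, noise_std=0.05, rng=None):
        """Return (image, mask) for a circular target with the given intensities."""
        rng = rng or np.random.default_rng()
        yy, xx = np.mgrid[:size, :size]
        cy, cx = rng.integers(radius, size - radius, size=2)
        mask = ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2
        image = np.full((size, size), bg_intensity, dtype=np.float32)
        image[mask] = fg_intensity
        image += rng.normal(0.0, noise_std, image.shape).astype(np.float32)
        return image, mask.astype(np.float32)

    # Datasets that differ only in intensity, emulating scanner/sequence changes;
    # train a segmentation network on one (or a mix) and evaluate on the other.
    rng = np.random.default_rng(0)
    dataset_a = [make_synthetic_sample(fg_intensity=0.8, bg_intensity=0.2, rng=rng)
                 for _ in range(100)]
    dataset_b = [make_synthetic_sample(fg_intensity=0.4, bg_intensity=0.6, rng=rng)
                 for _ in range(100)]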

Machine-learning-based multiple abnormality prediction with large-scale chest computed tomography volumes.

Machine learning models for radiology benefit from large-scale data sets with high quality labels for abnormalities. We curated and analyzed a chest computed tomography (CT) data set of 36,316 volumes from 19,993 unique patients. This is the largest multiply-annotated volumetric medical imaging data set reported. To annotate this data set, we developed a rule-based method for automatically extracting abnormality labels from free-text radiology reports with an average F-score of 0.976 (min 0.941, max 1.0). We also developed a model for multi-organ, multi-disease classification of chest CT volumes that uses a deep convolutional neural network (CNN). This model reached a classification performance of AUROC >0.90 for 18 abnormalities, with an average AUROC of 0.773 for all 83 abnormalities, demonstrating the feasibility of learning from unfiltered whole volume CT data. We show that training on more labels improves performance significantly: for a subset of 9 labels - nodule, opacity, atelectasis, pleural effusion, consolidation, mass, pericardial effusion, cardiomegaly, and pneumothorax - the model's average AUROC increased by 10% when the number of training labels was increased from 9 to all 83. All code for volume preprocessing, automated label extraction, and the volume abnormality prediction model is publicly available. The 36,316 CT volumes and labels will also be made publicly available pending institutional approval.
Authors
Draelos, RL; Dov, D; Mazurowski, MA; Lo, JY; Henao, R; Rubin, GD; Carin, L
MLA Citation
Draelos, Rachel Lea, et al. “Machine-learning-based multiple abnormality prediction with large-scale chest computed tomography volumes.” Med Image Anal, vol. 67, Jan. 2021, p. 101857. Pubmed, doi:10.1016/j.media.2020.101857.
URI
https://scholars.duke.edu/individual/pub1433045
PMID
33129142
Source
pubmed
Published In
Med Image Anal
Volume
67
Published Date
Start Page
101857
DOI
10.1016/j.media.2020.101857
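
As a rough illustration of the rule-based label extraction described above (a minimal sketch with assumed keyword and negation lists, not the published method), report sentences can be scanned for abnormality keywords while negated sentences are skipped.

    # Minimal sketch of rule-based abnormality label extraction from report text.
    # The keyword and negation lists below are illustrative assumptions.
    import re

    ABNORMALITY_KEYWORDS = {
        "nodule": ["nodule", "nodular"],
        "pleural_effusion": ["pleural effusion"],
        "cardiomegaly": ["cardiomegaly", "enlarged heart"],
    }
    NEGATION_CUES = ["no ", "without ", "negative for ", "absence of "]

    def extract_labels(report_text):
        """Return {abnormality: 0/1}, skipping sentences with a negation cue."""
        labels = {name: 0 for name in ABNORMALITY_KEYWORDS}
        for sentence in re.split(r"[.\n]", report_text.lower()):
            if any(cue in sentence for cue in NEGATION_CUES):
                continue
            for name, keywords in ABNORMALITY_KEYWORDS.items():
                if any(kw in sentence for kw in keywords):
                    labels[name] = 1
        return labels

    print(extract_labels(
        "Stable 4 mm nodule in the right upper lobe. No pleural effusion."
    ))  # {'nodule': 1, 'pleural_effusion': 0, 'cardiomegaly': 0}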

Generative adversarial network-based image completion to identify abnormal locations in digital breast tomosynthesis images

Deep learning has achieved great success in image analysis and decision making in radiology. However, a large amount of annotated imaging data is needed to construct well-performing deep learning models. A particular challenge in the context of breast cancer is the number of available cases that contain cancer, given the very low prevalence of the disease in the screening population. The question arises whether normal cases, which in the context of breast cancer screening are available in abundance, can be used to train a deep learning model that identifies locations that are abnormal. In this study, we propose to achieve this goal through generative adversarial network (GAN)-based image completion. Our hypothesis is that if a generative network has difficulty correctly completing part of an image at a certain location, then that location is likely to represent an abnormality. We test this hypothesis using a dataset of 4348 patients with digital breast tomosynthesis (DBT) imaging from our institution. We trained our model on normal-only images so that it could fill in parts of images that had been artificially removed. Then, using an independent test set, we measured how difficult it was for the network to reconstruct an artificially removed patch at different locations in the images. The difficulty was measured by the mean squared error (MSE) between the original removed patch and the reconstructed patch. On average, the MSE was 2.11 times higher (with a standard deviation of 1.01) at locations containing expert-annotated cancerous lesions than at locations outside those abnormalities. Our generative approach demonstrates great potential to aid breast cancer detection.
Authors
Swiecicki, A; Buda, M; Saha, A; Li, N; Ghate, SV; Walsh, R; Mazurowski, MA
MLA Citation
Swiecicki, A., et al. “Generative adversarial network-based image completion to identify abnormal locations in digital breast tomosynthesis images.” Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 11314, 2020. Scopus, doi:10.1117/12.2551379.
URI
https://scholars.duke.edu/individual/pub1447015
Source
scopus
Published In
Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume
11314
Published Date
DOI
10.1117/12.2551379
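
The scoring idea in the abstract above can be outlined as follows (a hedged sketch, not the authors' implementation): given a generator trained on normal images to inpaint masked regions, each location is scored by the mean squared error between the original and the reconstructed patch, and unusually high errors flag candidate abnormalities.

    # Minimal sketch: score image locations by how poorly a trained
    # image-completion model reconstructs an artificially removed patch.
    # `generator` is assumed to inpaint the zeroed-out region of its input.
    import torch

    def completion_anomaly_map(image, generator, patch=64, stride=64):
        """image: (1, 1, H, W) tensor; returns {(y, x): reconstruction MSE}."""
        _, _, h, w = image.shape
        scores = {}
        generator.eval()
        with torch.no_grad():
            for y in range(0, h - patch + 1, stride):
                for x in range(0, w - patch + 1, stride):
                    masked = image.clone()
                    masked[:, :, y:y + patch, x:x + patch] = 0.0  # remove the patch
                    completed = generator(masked)                 # inpaint it
                    mse = torch.mean(
                        (completed[:, :, y:y + patch, x:x + patch]
                         - image[:, :, y:y + patch, x:x + patch]) ** 2)
                    scores[(y, x)] = mse.item()
        return scores  # high-MSE locations are candidate abnormalities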

A multitask deep learning method in simultaneously predicting occult invasive disease in ductal carcinoma in-situ and segmenting microcalcifications in mammography

We proposed a two-branch multitask learning convolutional neural network to solve two different but related tasks at the same time. Our main task is to predict occult invasive disease in biopsy-proven ductal carcinoma in-situ (DCIS), with an auxiliary task of segmenting microcalcifications (MCs). In this study, we collected digital mammography from 604 patients, 400 of which were DCIS. The model used 512×512 patches extracted within radiologist-masked ROIs as input, with outputs including noisy MC segmentations obtained from our previous algorithms and classification labels from the final diagnosis at the patients' definitive surgery. We built a deep multitask model that combines a U-Net segmentation network and a classification network by sharing the first several convolutional layers. The model achieved a patch-based ROC-AUC of 0.69 and a case-based ROC-AUC of 0.61. Segmentation results achieved a Dice coefficient of 0.49.
Authors
Hou, R; Grimm, LJ; Mazurowski, MA; Marks, JR; King, LM; Maley, CC; Hwang, ES; Lo, JY
MLA Citation
Hou, R., et al. “A multitask deep learning method in simultaneously predicting occult invasive disease in ductal carcinoma in-situ and segmenting microcalcifications in mammography.” Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 11314, 2020. Scopus, doi:10.1117/12.2549669.
URI
https://scholars.duke.edu/individual/pub1447092
Source
scopus
Published In
Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume
11314
Published Date
DOI
10.1117/12.2549669
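
A minimal sketch of the two-branch multitask idea described above (a simplified stand-in, not the paper's architecture): a shared encoder feeds both a segmentation head and a classification head, and the two losses are combined during training.

    # Minimal sketch: shared convolutional layers with a segmentation branch
    # (microcalcifications) and a classification branch (occult invasive disease).
    import torch
    import torch.nn as nn

    class MultitaskNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Shared encoder (the "first several convolutional layers").
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            )
            self.seg_head = nn.Conv2d(32, 1, 1)  # per-pixel MC logits
            self.cls_head = nn.Sequential(       # patch-level disease logits
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
            )

        def forward(self, x):
            features = self.encoder(x)
            return self.seg_head(features), self.cls_head(features)

    model = MultitaskNet()
    patch = torch.randn(2, 1, 512, 512)            # 512x512 ROI patches
    seg_logits, cls_logits = model(patch)
    # Training would minimize a weighted sum of a segmentation loss and a
    # classification loss computed from these two outputs.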

Weakly supervised 3D classification of chest CT using aggregated multi-resolution deep segmentation features

Weakly supervised disease classification of CT imaging suffers from poor localization owing to case-level annotations, where even a positive scan can hold hundreds to thousands of negative slices along multiple planes. Furthermore, although deep learning segmentation and classification models extract distinctly unique combinations of anatomical features from the same target class(es), they are typically treated as two independent processes in a computer-aided diagnosis (CAD) pipeline, with little to no feature reuse. In this research, we propose a medical classifier that leverages the semantic structural concepts learned via multi-resolution segmentation feature maps to guide weakly supervised 3D classification of chest CT volumes. Additionally, a comparative analysis is drawn across two different types of feature aggregation to explore the vast possibilities surrounding feature fusion. Using a dataset of 1593 scans labeled on a case-level basis via a rule-based model, we train a dual-stage convolutional neural network (CNN) to perform organ segmentation and binary classification of four representative diseases (emphysema, pneumonia/atelectasis, mass, and nodules) in the lungs. The baseline model, with separate stages for segmentation and classification, achieves an AUC of 0.791. Using identical hyperparameters, the connected architecture with static and dynamic feature aggregation improves performance to AUCs of 0.832 and 0.851, respectively. This study advances the field in two key ways. First, case-level report data are used to weakly supervise a 3D CT classifier of multiple, simultaneous diseases for an organ. Second, segmentation and classification models are connected with two different feature aggregation strategies to enhance classification performance.
Authors
Saha, A; Tushar, FI; Faryna, K; D'Anniballe, VM; Hou, R; Mazurowski, MA; Rubin, GD; Lo, JY
MLA Citation
Saha, A., et al. “Weakly supervised 3D classification of chest CT using aggregated multi-resolution deep segmentation features.” Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 11314, 2020. Scopus, doi:10.1117/12.2550857.
URI
https://scholars.duke.edu/individual/pub1447178
Source
scopus
Published In
Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume
11314
Published Date
DOI
10.1117/12.2550857
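
A minimal sketch of the feature-aggregation idea described above (illustrative assumptions, not the paper's exact model): multi-resolution feature maps from a segmentation backbone are pooled and concatenated to drive a multi-disease classifier, so the classifier reuses features learned for segmentation.

    # Minimal sketch of "static" aggregation: pooled multi-resolution
    # segmentation features feed a per-disease classification layer.
    import torch
    import torch.nn as nn

    class SegBackbone(nn.Module):
        """Tiny stand-in for a segmentation encoder exposing two resolutions."""
        def __init__(self):
            super().__init__()
            self.block1 = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU())
            self.block2 = nn.Sequential(nn.MaxPool3d(2),
                                        nn.Conv3d(8, 16, 3, padding=1), nn.ReLU())

        def forward(self, x):
            f1 = self.block1(x)
            f2 = self.block2(f1)
            return [f1, f2]  # multi-resolution feature maps

    class AggregatedClassifier(nn.Module):
        def __init__(self, backbone, num_diseases=4):
            super().__init__()
            self.backbone = backbone
            self.fc = nn.Linear(8 + 16, num_diseases)  # concatenated pooled channels

        def forward(self, volume):
            feats = self.backbone(volume)
            pooled = [f.mean(dim=(2, 3, 4)) for f in feats]  # global average pooling
            return self.fc(torch.cat(pooled, dim=1))         # per-disease logits

    model = AggregatedClassifier(SegBackbone())
    logits = model(torch.randn(1, 1, 32, 64, 64))  # shape (1, 4): one logit per disease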