Maciej Mazurowski

Positions:

Associate Professor in Radiology

Radiology
School of Medicine

Associate Professor in the Department of Electrical and Computer Engineering

Electrical and Computer Engineering
Pratt School of Engineering

Associate Professor in Biostatistics and Bioinformatics

Biostatistics & Bioinformatics
School of Medicine

Associate Professor of Computer Science

Computer Science
Trinity College of Arts & Sciences

Member of the Duke Cancer Institute

Duke Cancer Institute
School of Medicine

Education:

Ph.D. 2008

University of Louisville

Grants:

Machine learning and collaborative filtering tools for personalized education in digital breast tomosynthesis

Administered By
Radiology
Awarded By
National Institutes of Health
Role
Principal Investigator

Improved education in digital breast tomosynthesis using machine learning and computer vision tools

Administered By
Radiology
Awarded By
Radiological Society of North America
Role
Principal Investigator

Development of a personalized evidence-based algorithm for the management of suspicious calcifications

Administered By
Radiology, Breast Imaging
Awarded By
GE-AUR Radiology Research
Role
Mentor

Breast Cancer Detection Consortium

Administered By
Surgery, Surgical Sciences
Awarded By
National Institutes of Health
Role
Statistician

Publications:

Deep Learning for Breast MRI Style Transfer with Limited Training Data.

In this work we introduce StyleMapper, a novel medical image style transfer method that can transfer medical scans to an unseen style with access to only limited training data. This is made possible by training our model on an unlimited number of simulated random medical imaging styles generated from the training set, making our work more computationally efficient than other style transfer methods. Moreover, our method enables arbitrary style transfer: transferring images to styles unseen in training. This is useful for medical imaging, where images are acquired using different protocols and different scanner models, resulting in a variety of styles that data may need to be transferred between. Our model disentangles image content from style and can modify an image's style by simply replacing the style encoding with one extracted from a single image of the target style, with no additional optimization required. This also allows the model to distinguish between different styles of images, including those unseen in training. We provide a formal description of the proposed model. Experimental results on breast magnetic resonance images demonstrate the effectiveness of our method for style transfer. Our method allows medical images taken with different scanners to be aligned into a single unified-style dataset, on which downstream models for tasks such as classification and object detection can then be trained.
Authors
Cao, S; Konz, N; Duncan, J; Mazurowski, MA
MLA Citation
Cao, Shixing, et al. “Deep Learning for Breast MRI Style Transfer with Limited Training Data.” J Digit Imaging, Dec. 2022. Pubmed, doi:10.1007/s10278-022-00755-z.
URI
https://scholars.duke.edu/individual/pub1560925
PMID
36544066
Source
pubmed
Published In
J Digit Imaging
DOI
10.1007/s10278-022-00755-z
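The style-swap idea described in the abstract above, replacing an image's style encoding with one extracted from a single target-style image, with no further optimization, can be illustrated with a minimal numpy sketch. This is not the StyleMapper implementation: as a simplifying assumption, it stands in for the learned style code with global intensity statistics (an AdaIN-like normalization).

```python
import numpy as np

def style_encode(img):
    # Stand-in "style code": global intensity mean and standard deviation
    return img.mean(), img.std()

def transfer(content_img, style_img, eps=1e-8):
    # Strip the content image's own style, then re-apply the target's (AdaIN-like)
    mu_s, sd_s = style_encode(style_img)
    mu_c, sd_c = style_encode(content_img)
    return (content_img - mu_c) / (sd_c + eps) * sd_s + mu_s
```

Because the "style" here is just a mean and standard deviation, the transferred image matches the target's global intensity statistics while keeping the content image's spatial structure; the learned model plays the same role with a far richer style representation.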

The Intrinsic Manifolds of Radiological Images and Their Role in Deep Learning

The manifold hypothesis is a core mechanism behind the success of deep learning, so understanding the intrinsic manifold structure of image data is central to studying how neural networks learn from data. Intrinsic dataset manifolds and their relationship to learning difficulty have recently begun to be studied for the common domain of natural images, but little such research has been attempted for radiological images. We address this here. First, we compare the intrinsic manifold dimensionality of radiological and natural images. We also investigate the relationship between intrinsic dimensionality and generalization ability over a wide range of datasets. Our analysis shows that natural image datasets generally have a higher number of intrinsic dimensions than radiological images. However, the relationship between generalization ability and intrinsic dimensionality is much stronger for medical images, which may be explained by radiological images having intrinsic features that are more difficult to learn. These results give a more principled underpinning for the intuition that radiological images can be more challenging to apply deep learning to than the natural image datasets common in machine learning research. We believe that, rather than directly applying models developed for natural images to the radiological imaging domain, more care should be taken to develop architectures and algorithms tailored to the specific characteristics of this domain. The research shown in our paper, demonstrating these characteristics and the differences from natural images, is an important first step in this direction.
Authors
Konz, N; Gu, H; Dong, H; Mazurowski, M
MLA Citation
Konz, N., et al. “The Intrinsic Manifolds of Radiological Images and Their Role in Deep Learning.” Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 13438 LNCS, 2022, pp. 684–94. Scopus, doi:10.1007/978-3-031-16452-1_65.
URI
https://scholars.duke.edu/individual/pub1554666
Source
scopus
Published In
Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume
13438 LNCS
Start Page
684
End Page
694
DOI
10.1007/978-3-031-16452-1_65
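Intrinsic dimensionality of the kind studied above is commonly estimated from nearest-neighbor statistics. The sketch below uses the TwoNN maximum-likelihood estimator (based on ratios of second- to first-nearest-neighbor distances), which is chosen here for illustration only; the paper's own choice of estimator may differ.

```python
import numpy as np

def two_nn_id(X):
    # TwoNN MLE: intrinsic dimension from ratios of 2nd- to 1st-nearest-neighbor
    # distances; X is an (n_points, n_features) array of data points.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # exclude each point as its own neighbor
    d.sort(axis=1)
    mu = d[:, 1] / d[:, 0]        # r2 / r1 for each point
    return len(X) / np.log(mu).sum()
```

For data sampled from a 2-D manifold embedded in a higher-dimensional space, the estimate comes out near 2 regardless of the ambient dimension, which is exactly the property that lets such estimators compare natural and radiological image datasets on equal footing.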

Lightweight Transformer Backbone for Medical Object Detection

Lesion detection in digital breast tomosynthesis (DBT) is an important and challenging problem characterized by a low prevalence of images containing tumors. Due to the label scarcity problem, large deep learning models and computationally intensive algorithms are likely to fail when applied to this task. In this paper, we present a practical yet lightweight backbone to improve the accuracy of tumor detection. Specifically, we propose a novel modification of the vision transformer (ViT) on image feature patches that connects the feature patches of a tumor with the healthy backgrounds of breast images, forming a more robust backbone for tumor detection. To the best of our knowledge, our model is the first use of a Transformer backbone for object detection in medical imaging. Our experiments show that this model can considerably improve the accuracy of lesion detection while reducing the amount of labeled data required by a typical ViT. We further show that with additional augmented tumor data, our model significantly outperforms the Faster R-CNN model and the state-of-the-art Swin Transformer model.
Authors
Zhang, Y; Dong, H; Konz, N; Gu, H; Mazurowski, MA
MLA Citation
Zhang, Y., et al. “Lightweight Transformer Backbone for Medical Object Detection.” Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 13581 LNCS, 2022, pp. 47–56. Scopus, doi:10.1007/978-3-031-17979-2_5.
URI
https://scholars.duke.edu/individual/pub1555546
Source
scopus
Published In
Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume
13581 LNCS
Start Page
47
End Page
56
DOI
10.1007/978-3-031-17979-2_5
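The backbone described above builds on the standard ViT pipeline: split an image into patches, embed them as tokens, and let self-attention relate tumor patches to healthy-background patches. A minimal numpy sketch of that generic pipeline follows (single head, random projections); it illustrates the mechanism only and is not the authors' model.

```python
import numpy as np

def patchify(img, p):
    # Split an (H, W) image into non-overlapping p x p patches, flattened as tokens
    H, W = img.shape
    return (img.reshape(H // p, p, W // p, p)
               .transpose(0, 2, 1, 3)
               .reshape(-1, p * p))

def self_attention(tokens, seed=0):
    # Single-head scaled dot-product attention with random projection matrices
    rng = np.random.default_rng(seed)
    d = tokens.shape[1]
    Wq, Wk, Wv = (rng.normal(scale=d ** -0.5, size=(d, d)) for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ V  # each output token mixes information from all patches
```

The `weights @ V` step is what lets a candidate-lesion patch attend to healthy-background patches elsewhere in the image, which is the interaction the paper's modified backbone exploits.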

Perception and Training

Authors
Auffermann, WF; Mazurowski, M
MLA Citation
Auffermann, W. F., and M. Mazurowski. “Perception and Training.” The Handbook of Medical Image Perception and Techniques: Second Edition, 2018, pp. 470–82. Scopus, doi:10.1017/9781108163781.031.
URI
https://scholars.duke.edu/individual/pub1526580
Source
scopus
Start Page
470
End Page
482
DOI
10.1017/9781108163781.031

Anomaly Detection of Calcifications in Mammography Based on 11,000 Negative Cases.

In mammography, calcifications are one of the most common signs of breast cancer. Detection of such lesions is an active area of research for computer-aided diagnosis and machine learning algorithms. Due to limited numbers of positive cases, many supervised detection models suffer from overfitting and fail to generalize. We present a one-class, semi-supervised framework using a deep convolutional autoencoder trained with over 50,000 images from 11,000 negative-only cases. Since the model learned only normal breast parenchymal features, calcifications produced large signals when comparing the residuals between input and reconstructed output images. As a key advancement, a structural dissimilarity index was used to suppress non-structural noise. Our selected model achieved a pixel-based AUROC of 0.959 and an AUPRC of 0.676 during validation, where calcification masks were defined in a semi-automated process. Although not trained directly on any cancers, detection of calcification lesions on 1,883 testing images (645 malignant and 1,238 negative) achieved 75% sensitivity at 2.5 false positives per image. Performance plateaued early when the model was trained with only a fraction of the cases, and greater model complexity or a larger dataset did not improve performance. This study demonstrates the potential of this anomaly detection approach to detect mammographic calcifications in a semi-supervised manner with efficient use of a small number of labeled images, and it may facilitate new clinical applications such as computer-aided triage and quality improvement.
Authors
Hou, R; Peng, Y; Grimm, LJ; Ren, Y; Mazurowski, MA; Marks, JR; King, LM; Maley, CC; Hwang, ES; Lo, JY
MLA Citation
Hou, Rui, et al. “Anomaly Detection of Calcifications in Mammography Based on 11,000 Negative Cases.” IEEE Trans Biomed Eng, vol. 69, no. 5, May 2022, pp. 1639–50. Pubmed, doi:10.1109/TBME.2021.3126281.
URI
https://scholars.duke.edu/individual/pub1502472
PMID
34788216
Source
pubmed
Published In
IEEE Trans Biomed Eng
Volume
69
Start Page
1639
End Page
1650
DOI
10.1109/TBME.2021.3126281
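A structural dissimilarity (DSSIM) score of the kind used above to suppress non-structural noise in the reconstruction residual can be sketched as follows. This global (non-windowed) version with hypothetical stabilizing constants `c1` and `c2` is a simplification; in practice SSIM is usually computed over local sliding windows.

```python
import numpy as np

def dssim(x, y, c1=1e-4, c2=9e-4):
    # Global structural dissimilarity: 0 for identical images, approaching 1
    # as luminance, contrast, and structure diverge.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
    return (1.0 - ssim) / 2.0
```

Scoring the autoencoder input against its reconstruction with DSSIM, rather than a raw pixel-wise residual, emphasizes structured differences (such as a calcification the model failed to reconstruct) over diffuse intensity noise.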