Joseph Lo

Overview:

My research focuses on computer vision and machine learning in medical imaging, particularly mammography and CT. There are three specific projects:

First, we have a long track record of creating machine learning models to detect and diagnose breast cancer from mammograms. These algorithms are based on computer vision and deep learning, with the long-term goal of combining imaging data with proteomic/genomic markers. Specific projects include predicting which cases of ductal carcinoma in situ (DCIS) are likely to contain hidden invasive cancer, thus informing personalized treatment decisions. This work is funded by the NIH, Department of Defense, Cancer Research UK, and other agencies.
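
To make this concrete, here is a minimal, hypothetical sketch of the kind of deep learning classifier used for mammogram analysis; the layer sizes, patch dimensions, and output head are illustrative assumptions, not our actual architecture.

```python
# Hypothetical mammogram patch classifier (PyTorch); illustrative only.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Small CNN mapping a single-channel mammogram patch to a malignancy score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )

    def forward(self, x):
        # Returns a probability in (0, 1) per patch.
        return torch.sigmoid(self.head(self.features(x)))

model = PatchClassifier()
patch = torch.randn(1, 1, 256, 256)  # assumed 256x256 patch size
print(model(patch))
```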

Second, we design virtual breast models that are based on actual patient data and thus contain highly realistic breast anatomy with voxel-level ground truth. We can transform these virtual models into physical form using several 3D printing technologies. In work funded by the NIH, we are translating this work to produce a new generation of realistic phantoms for CT. Such physical phantoms can be scanned on actual imaging devices, allowing us to assess image quality in new ways that are not only quantitative but also clinically relevant.
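
As a rough illustration of the voxel-to-print step, the sketch below converts a synthetic binary voxel phantom into a printable STL surface. It assumes scikit-image and numpy-stl are installed, and the spherical "anatomy" is a stand-in, not patient-derived data.

```python
# Illustrative voxel-phantom-to-STL conversion; synthetic data only.
import numpy as np
from skimage import measure
from stl import mesh

# Hypothetical binary voxel phantom: a sphere standing in for anatomy.
grid = np.zeros((64, 64, 64), dtype=np.uint8)
z, y, x = np.ogrid[:64, :64, :64]
grid[(z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 < 24 ** 2] = 1

# Extract an isosurface at the tissue boundary and write it as STL.
verts, faces, _, _ = measure.marching_cubes(grid, level=0.5)
surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
surface.vectors[:] = verts[faces]
surface.save("phantom.stl")
```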

Third, we are developing a broad machine learning platform to segment multiple organs and classify multiple diseases in chest-abdomen-pelvis CT scans. The goal is to provide automated labeling of hospital-scale data sets (potentially hundreds of thousands of studies) to produce sufficient data for deep learning studies. This work combines natural language processing to analyze radiology reports with deep learning models for the segmentation and classification tasks.
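
For the report-analysis component, a minimal rule-based labeling sketch is shown below; the findings, keyword patterns, and negation handling are hypothetical simplifications of what a production NLP pipeline would require.

```python
# Toy rule-based radiology report labeler; patterns are hypothetical.
import re

FINDINGS = {
    "pneumothorax": re.compile(r"\bpneumothorax\b", re.I),
    "liver_lesion": re.compile(r"\b(hepatic|liver)\b.*\blesion", re.I),
}
NEGATION = re.compile(r"\b(no|without|negative for)\b", re.I)

def label_report(text):
    """Return {finding: True/False}, skipping sentences with simple negation cues."""
    labels = {name: False for name in FINDINGS}
    for sentence in re.split(r"[.;\n]", text):
        if NEGATION.search(sentence):
            continue
        for name, pattern in FINDINGS.items():
            if pattern.search(sentence):
                labels[name] = True
    return labels

print(label_report("No pneumothorax. Hypodense liver lesion noted."))
# {'pneumothorax': False, 'liver_lesion': True}
```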

Positions:

Professor of Radiology

Radiology
School of Medicine

Professor in the Department of Electrical and Computer Engineering

Electrical and Computer Engineering
Pratt School of Engineering

Member of the Duke Cancer Institute

Duke Cancer Institute
School of Medicine

Education:

B.S.E.E. 1988

Duke University

Ph.D. 1993

Duke University

Research Associate, Radiology

Duke University

Grants:

Predicting Breast Cancer With Ultrasound and Mammography

Administered By
Radiology
Awarded By
National Institutes of Health
Role
Principal Investigator
Start Date
End Date

Improved Diagnosis of Breast Microcalcification Clusters

Administered By
Radiology
Awarded By
National Institutes of Health
Role
Principal Investigator
Start Date
End Date

Accurate Models for Predicting Radiation-Induced Injury

Administered By
Radiation Oncology
Awarded By
National Institutes of Health
Role
Investigator
Start Date
End Date

Computer Aid for the Decision to Biopsy Breast Lesions

Administered By
Radiology
Role
Co Investigator
Start Date
End Date

Computer Aid for the Decision to Biopsy Breast Lesions

Administered By
Radiology
Awarded By
National Institutes of Health
Role
Investigator
Start Date
End Date

Publications:

Mask Embedding for Realistic High-Resolution Medical Image Synthesis

Generative Adversarial Networks (GANs) have found applications in natural image synthesis and are beginning to show promise in generating synthetic medical images. In many cases, the ability to perform controlled image synthesis using mask priors such as the shape and size of organs is desired. However, mask-guided image synthesis is challenging due to the pixel-level mask constraint, and the few existing mask-guided image generation approaches suffer from a lack of fine-grained texture detail. We tackle the issue of mask-guided stochastic image synthesis via mask embedding. Our novel architecture first encodes the input mask as an embedding vector and then injects this embedding into the random latent vector input. The intuition is to classify semantic masks into partitions before feature up-sampling for improved sample space mapping stability. We validate our approach on a large dataset containing 39,778 patients with 443,556 negative screening Full Field Digital Mammography (FFDM) images. Experimental results show that our approach can generate realistic high-resolution (256 × 512) images with pixel-level mask constraints and outperforms other state-of-the-art approaches.
Authors
Ren, Y; Zhu, Z; Li, Y; Kong, D; Hou, R; Grimm, LJ; Marks, JR; Lo, JY
MLA Citation
Ren, Y., et al. “Mask Embedding for Realistic High-Resolution Medical Image Synthesis.” Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 11769 LNCS, 2019, pp. 422–30. Scopus, doi:10.1007/978-3-030-32226-7_47.
URI
https://scholars.duke.edu/individual/pub1423148
Source
scopus
Published In
Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume
11769 LNCS
Published Date
Start Page
422
End Page
430
DOI
10.1007/978-3-030-32226-7_47
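
The mask-embedding idea described in the abstract above can be sketched as follows: a small network encodes the mask into a compact vector that is concatenated with the random latent vector before generation. This is a hedged illustration in PyTorch; the layer sizes and embedding dimension are assumptions, not the paper's actual architecture.

```python
# Sketch of mask embedding for conditional generation; sizes are hypothetical.
import torch
import torch.nn as nn

class MaskEmbedder(nn.Module):
    """Encodes a binary shape mask into a compact embedding vector."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 8, 4, stride=4), nn.ReLU(),
            nn.Conv2d(8, 16, 4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, embed_dim),
        )

    def forward(self, mask):
        return self.encode(mask)

z = torch.randn(2, 128)                    # random latent vectors
mask = torch.rand(2, 1, 256, 512)          # batch of 256x512 masks
embed = MaskEmbedder()(mask)               # (2, 64) mask embeddings
gen_input = torch.cat([z, embed], dim=1)   # injected latent fed to the generator
print(gen_input.shape)                     # torch.Size([2, 192])
```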

Prediction of Upstaged Ductal Carcinoma in situ Using Forced Labeling and Domain Adaptation.

OBJECTIVE: The goal of this study is to use adjunctive classes to improve a predictive model whose performance is limited by the common problems of small numbers of primary cases, high feature dimensionality, and poor class separability. Specifically, our clinical task is to use mammographic features to predict whether ductal carcinoma in situ (DCIS) identified at needle core biopsy will be later upstaged or shown to contain invasive breast cancer. METHODS: To improve the prediction of pure DCIS (negative) versus upstaged DCIS (positive) cases, this study considers the adjunctive roles of two related classes: atypical ductal hyperplasia (ADH), a non-cancerous type of breast abnormality, and invasive ductal carcinoma (IDC), with 113 computer vision-based mammographic features extracted from each case. To improve the baseline Model A classification of pure vs. upstaged DCIS, we designed three different strategies (Models B, C, D) with different ways of embedding features or inputs. RESULTS: Based on ROC analysis, the baseline Model A performed with an AUC of 0.614 (95% CI, 0.496-0.733). All three new models performed better than the baseline, with domain adaptation (Model D) performing the best with an AUC of 0.697 (95% CI, 0.595-0.797). CONCLUSION: We improved the prediction performance of DCIS upstaging by embedding two related pathology classes in different training phases. SIGNIFICANCE: The three new strategies of embedding related class data all outperformed the baseline model, thus demonstrating not only feature similarities among these different classes, but also the potential for improving classification by using other related classes.
Authors
Hou, R; Mazurowski, MA; Grimm, LJ; Marks, JR; King, LM; Maley, CC; Hwang, ES; Lo, JY
MLA Citation
Hou, Rui, et al. “Prediction of Upstaged Ductal Carcinoma in situ Using Forced Labeling and Domain Adaptation.” IEEE Trans Biomed Eng, Sept. 2019. Pubmed, doi:10.1109/TBME.2019.2940195.
URI
https://scholars.duke.edu/individual/pub1409876
PMID
31502960
Source
pubmed
Published In
IEEE Trans Biomed Eng
Published Date
DOI
10.1109/TBME.2019.2940195
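
The "forced labeling" strategy described in the abstract above can be illustrated with a toy example: adjunct ADH cases are forced to the negative class and IDC cases to the positive class during training, while evaluation is restricted to DCIS. The data below is random; only the 113-feature dimensionality follows the paper.

```python
# Hedged sketch of forced labeling with adjunctive classes; random data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_feat = 113  # features per case, as in the study
X_dcis, y_dcis = rng.normal(size=(120, n_feat)), rng.integers(0, 2, 120)
X_adh = rng.normal(size=(80, n_feat))  # forced to label 0 (benign-like)
X_idc = rng.normal(size=(80, n_feat))  # forced to label 1 (invasive-like)

# Train on DCIS plus forced-label adjunct classes; hold out DCIS for testing.
X_train = np.vstack([X_dcis[:80], X_adh, X_idc])
y_train = np.concatenate([y_dcis[:80], np.zeros(80), np.ones(80)])

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_dcis[80:])[:, 1]
print("AUC on held-out DCIS:", roc_auc_score(y_dcis[80:], scores))
```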

Controlling the position-dependent contrast of 3D printed physical phantoms with a single material

Custom 3D printed physical phantoms are desired for testing the limits of medical imaging and for providing patient-specific information. This work focuses on the development of low-cost, open-source fused filament fabrication for printing physical phantoms with the structure and contrast of human anatomy in computed tomography (CT) images. Specifically, this paper introduces the concept of using a porous 3D printed layer as a background into which additional material can be printed to control the position-dependent contrast. Using this method, eight levels of contrast were printed with a single material.
Authors
Tong, H; Pegues, H; Yang, F; Samei, E; Lo, JY; Wiley, BJ
MLA Citation
Tong, H., et al. “Controlling the position-dependent contrast of 3D printed physical phantoms with a single material.” Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 10948, 2019. Scopus, doi:10.1117/12.2513469.
URI
https://scholars.duke.edu/individual/pub1398121
Source
scopus
Published In
Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume
10948
Published Date
DOI
10.1117/12.2513469
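
The single-material contrast idea above can be sketched numerically: choose how much material to deposit into the porous background so the local fill fraction approximates a target CT number. The linear attenuation-versus-fill model and all numbers below are hypothetical, not the paper's calibration.

```python
# Toy model of position-dependent contrast via fill fraction; numbers assumed.
def fill_fraction_for_hu(target_hu, hu_air=-1000.0, hu_solid=120.0):
    """Fractional fill (0..1) whose mixture attenuation approximates target_hu,
    assuming attenuation varies linearly between air and solid filament."""
    frac = (target_hu - hu_air) / (hu_solid - hu_air)
    return min(max(frac, 0.0), 1.0)

# Eight evenly spaced contrast levels, echoing the single-material experiment.
for hu in range(-800, 120, 115):
    print(hu, round(fill_fraction_for_hu(hu), 3))
```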

Special Section Guest Editorial: 3D Printing in Medical Imaging

Authors
Samei, E; Lo, J
MLA Citation
Samei, E., and J. Lo. “Special Section Guest Editorial: Special Section on 3D Printing in Medical Imaging.” Journal of Medical Imaging, vol. 6, no. 2, Apr. 2019. Scopus, doi:10.1117/1.JMI.6.2.021601.
URI
https://scholars.duke.edu/individual/pub1402534
Source
scopus
Published In
Journal of Medical Imaging (Bellingham, Wash.)
Volume
6
Published Date
DOI
10.1117/1.JMI.6.2.021601

Multi-organ segmentation in clinical-computed tomography for patient-specific image quality and dose metrology

The purpose of this study was to develop a robust, automated multi-organ segmentation model for clinical adult and pediatric CT and implement the model as part of a patient-specific safety and quality monitoring system. 3D convolutional neural network (U-Net) models were set up to segment 30 different organs and structures at the diagnostic image resolution. For each organ, 200 manually labeled cases were used to train the network, fitting it to different clinical imaging resolutions and contrast enhancement stages. The dataset was randomly shuffled and divided with a 6/2/2 train/validation/test split. The model was deployed to automatically segment 1200 clinical CT images as a demonstration of the utility of the method. Each case was made into a patient-specific phantom based on the segmentation masks, with unsegmented organs and structures filled in by deforming a template XCAT phantom of similar anatomy. The organ doses were then estimated using a validated scanner-specific MC-GPU package using the actual scan information. The segmented organ information was likewise used to assess contrast, noise, and detectability index (d') within each organ. The neural network segmentation model showed Dice similarity coefficients (DSC) above 0.85 for the majority of organs. Notably, the lungs and liver showed a DSC of 0.95 and 0.94, respectively. The segmentation results produced patient-specific dose and quality values across the tested 1200 patients with representative histogram distributions. The measurements were compared in a global-to-organ (e.g., CTDIvol vs. liver dose) and organ-to-organ (e.g., liver dose vs. spleen dose) manner. The global-to-organ measurements (liver dose vs. CTDIvol: ρ = 0.62; liver vs. global d': ρ = 0.78; liver vs. global noise: ρ = 0.55) were less correlated than the organ-to-organ measurements (liver vs. spleen dose: ρ = 0.93; liver vs. spleen d': ρ = 0.82; liver vs. spleen noise: ρ = 0.78). This variation was more prominent for the iterative reconstruction kernel than for the filtered back projection kernel (liver vs. global noise: ρ_IR = 0.47 vs. ρ_FBP = 0.75; liver vs. global d': ρ_IR = 0.74 vs. ρ_FBP = 0.83). The results can help derive meaningful relationships between image quality, organ doses, and patient attributes.
Authors
Fu, W; Sharma, S; Smith, T; Hou, R; Abadi, E; Selvakumaran, V; Tang, R; Lo, JY; Segars, WP; Kapadia, AJ; Solomon, JB; Rubin, GD; Samei, E
MLA Citation
Fu, W., et al. “Multi-organ segmentation in clinical-computed tomography for patient-specific image quality and dose metrology.” Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 10948, 2019. Scopus, doi:10.1117/12.2512883.
URI
https://scholars.duke.edu/individual/pub1377624
Source
scopus
Published In
Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume
10948
Published Date
DOI
10.1117/12.2512883
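
The Dice similarity coefficient used above to score the segmentations is straightforward to compute; a minimal sketch with random masks (not clinical data) follows.

```python
# Minimal Dice similarity coefficient (DSC) for binary masks; random data only.
import numpy as np

def dice(pred, truth, eps=1e-8):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

rng = np.random.default_rng(0)
a = rng.random((64, 64, 64)) > 0.5
b = rng.random((64, 64, 64)) > 0.5
print(dice(a, b))  # ~0.5 for independent random 50% masks
```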

Research Areas:

Breast Neoplasms
Clinical Trials as Topic
Computer Simulation
Decision Making, Computer-Assisted
Decision Support Systems, Clinical
Decision Support Techniques
Image Processing, Computer-Assisted
Imaging, Three-Dimensional
Machine learning
Mammography
Models, Structural
Pattern Recognition, Automated
Radiographic Image Interpretation, Computer-Assisted
Radiology
Technology Assessment, Biomedical
Tomosynthesis