Joseph Lo

Overview:

My research applies computer vision and machine learning to medical imaging, focusing on breast imaging and CT. There are three specific projects:

(1) We design deep learning models to diagnose breast cancer from mammograms. We perform single-shot lesion detection, multi-task segmentation/classification, and image synthesis. Our goal is to improve radiologist diagnostic performance and empower patients to make personalized treatment decisions. This work is funded by NIH, Dept of Defense, Cancer Research UK, and other agencies.

(2) We create virtual breast models that are based on actual patient data and thus contain highly realistic anatomy. We transform these virtual models into physical form using customized 3D printing technology. With NIH funding, we are translating this work to produce a new generation of realistic phantoms for CT. Such physical phantoms can be scanned on actual imaging devices, allowing us to assess image quality in new ways that are not only quantitative but also clinically relevant.

(3) We develop computer-aided triage tools to classify multiple diseases in chest-abdomen-pelvis CT scans. We are building hospital-scale data sets with hundreds of thousands of patients. This work includes natural language processing to analyze radiology reports as well as deep learning models for organ segmentation and disease classification.
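The report-labeling step above can be illustrated with a minimal rule-based sketch. All keyword patterns, negation cues, and the `label_report` helper are hypothetical stand-ins for illustration, not the group's actual pipeline:

```python
import re

# Hypothetical keyword rules mapping abnormality labels to trigger phrases.
RULES = {
    "pleural_effusion": [r"pleural effusion"],
    "nodule": [r"\bnodule", r"\bnodular"],
    "cardiomegaly": [r"cardiomegaly", r"enlarged (cardiac silhouette|heart)"],
}

# Simple negation cues; a sentence containing one is not counted as a finding.
NEGATIONS = [r"\bno\b", r"\bwithout\b", r"negative for"]

def label_report(report: str) -> dict:
    """Assign weak binary labels to a free-text radiology report.

    A sentence votes for a label when it matches a rule pattern and
    contains no simple negation cue.
    """
    labels = {name: 0 for name in RULES}
    for sentence in re.split(r"[.\n]", report.lower()):
        if any(re.search(neg, sentence) for neg in NEGATIONS):
            continue  # skip negated sentences ("no pleural effusion")
        for name, patterns in RULES.items():
            if any(re.search(pat, sentence) for pat in patterns):
                labels[name] = 1
    return labels
```

For example, `label_report("Small left pleural effusion. No pulmonary nodule.")` flags only the effusion, since the second sentence is negated. Real systems handle uncertainty phrases and scope of negation far more carefully.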

Positions:

Professor in Radiology, Department of Radiology, School of Medicine

Professor of Biomedical Engineering, Department of Biomedical Engineering, Pratt School of Engineering

Professor in the Department of Electrical and Computer Engineering, Pratt School of Engineering

Member of the Duke Cancer Institute, School of Medicine

Education:

B.S.E.E., Duke University, 1988

Ph.D., Duke University, 1993

Research Associate, Radiology, Duke University

Grants:

Predicting Breast Cancer With Ultrasound and Mammography

Administered By
Radiology
Awarded By
National Institutes of Health
Role
Principal Investigator

Improved Diagnosis of Breast Microcalcification Clusters

Administered By
Radiology
Awarded By
National Institutes of Health
Role
Principal Investigator

Accurate Models for Predicting Radiation-Induced Injury

Administered By
Radiation Oncology
Awarded By
National Institutes of Health
Role
Investigator

Computer Aid for the Decision to Biopsy Breast Lesions

Administered By
Radiology
Awarded By
US Army Medical Research
Role
Co-Investigator

Computer Aid for the Decision to Biopsy Breast Lesions

Administered By
Radiology
Awarded By
National Institutes of Health
Role
Investigator

Publications:

<i>i</i>Phantom: A framework for automated creation of individualized computational phantoms and its application to CT organ dosimetry

Objective: This study aims to develop and validate a novel framework, <i>i</i>Phantom, for automated creation of patient-specific phantoms or “digital-twins (DT)” using patient medical images. The framework is applied to assess radiation dose to radiosensitive organs in CT imaging of individual patients. Method: Given a volume of patient CT images, <i>i</i>Phantom segments selected anchor organs and structures (e.g., liver, bones, pancreas) using a learning-based model developed for multi-organ CT segmentation. Organs that are challenging to segment (e.g., intestines) are incorporated from a matched phantom template, using a diffeomorphic registration model developed for multi-organ phantom voxels. The resulting digital-twin phantoms are used to assess organ doses during routine CT exams. Result: <i>i</i>Phantom was validated with both a set of XCAT digital phantoms (n=50) and an independent clinical dataset (n=10), with similar accuracy on each. <i>i</i>Phantom precisely predicted all organ locations, yielding Dice similarity coefficients (DSC) of 0.6-1.0 for anchor organs and 0.3-0.9 for all other organs. <i>i</i>Phantom showed <10% errors in estimated radiation dose for the majority of organs, which was notably superior to the state-of-the-art baseline method (20-35% dose errors). Conclusion: <i>i</i>Phantom enables automated and accurate creation of patient-specific phantoms and, for the first time, provides sufficient and automated patient-specific dose estimates for CT dosimetry. Significance: The new framework brings the creation and application of computational human phantoms (CHPs) to the level of the individual patient through automation, achieving wide and precise organ localization and paving the way for clinical monitoring, personalized optimization, and large-scale research.
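The two-stage anchor/non-anchor fusion described above can be sketched on toy label volumes. Flat Python lists stand in for 3D voxel arrays, and the segmentation and registration models themselves are assumed to have already run; the label values and `fuse_phantom` helper are illustrative only:

```python
# Toy sketch of the iPhantom fusion step: keep the patient's segmented
# anchor-organ labels, and fill everything else from a registered
# template phantom. A flat list of voxel labels stands in for a 3D
# label volume (0 = background).

ANCHOR_LABELS = {1, 2}  # e.g. 1 = liver, 2 = bone (anchors segmented from CT)

def fuse_phantom(patient_seg, template_seg):
    """Return a digital-twin label volume.

    Voxels carrying an anchor label in the patient segmentation are
    trusted; remaining voxels inherit the (already registered) template
    label, which supplies hard-to-segment anatomy such as intestines.
    """
    assert len(patient_seg) == len(template_seg)
    fused = []
    for p, t in zip(patient_seg, template_seg):
        if p in ANCHOR_LABELS:
            fused.append(p)   # patient-specific anchor organ
        elif t not in ANCHOR_LABELS:
            fused.append(t)   # template-supplied non-anchor organ
        else:
            fused.append(0)   # template anchor conflicts with patient: background
    return fused
```

The design point the abstract emphasizes is that patient data wins wherever it is reliable (anchors), while the template contributes only the anatomy the patient segmentation cannot provide.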
Authors
Fu, W; Sharma, S; Abadi, E; Iliopoulos, AS; Wang, Q; Sun, X; Lo, JYC; Segars, WP; Samei, E
MLA Citation
Fu, W., et al. “<i>i</i>Phantom: A framework for automated creation of individualized computational phantoms and its application to CT organ dosimetry.” IEEE Journal of Biomedical and Health Informatics, Jan. 2021. Scopus, doi:10.1109/JBHI.2021.3063080.
URI
https://scholars.duke.edu/individual/pub1475284
Source
scopus
Published In
IEEE Journal of Biomedical and Health Informatics
DOI
10.1109/JBHI.2021.3063080

IPhantom: An automated framework in generating personalized computational phantoms for organ-based radiation dosimetry

We propose an automated framework to generate detailed 3D person-specific computational phantoms directly from patient medical images. We investigate the feasibility of this framework in terms of accurately generating patient-specific phantoms and the clinical utility in estimating patient-specific organ dose for CT images. The proposed framework generates 3D volumetric phantoms with a comprehensive set of radiosensitive organs, by fusing patient image data with prior anatomical knowledge from a library of computational phantoms in a two-stage approach. In the first stage, the framework segments a selected set of organs from patient medical images as anchors. In the second stage, conditioned on the segmented organs, the framework generates unsegmented anatomies through mappings between anchor and nonanchor organs learned from libraries of phantoms with rich anatomy. We applied this framework to clinical CT images and demonstrated its utility for patient-specific organ dosimetry. Results showed that the framework generates patient-specific phantoms in ∼10 seconds and provides Monte Carlo based organ dose estimation in ∼30 seconds with organ dose errors <10% for the majority of organs. The framework shows the potential for large-scale and real-time clinical analysis, standardization, and optimization.
Authors
Fu, W; Segars, PW; Sharma, S; Lo, JY; Samei, E
MLA Citation
Fu, W., et al. “IPhantom: An automated framework in generating personalized computational phantoms for organ-based radiation dosimetry.” Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 11595, 2021. Scopus, doi:10.1117/12.2582238.
URI
https://scholars.duke.edu/individual/pub1478550
Source
scopus
Published In
Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume
11595
DOI
10.1117/12.2582238

Weakly Supervised Multi-Organ Multi-Disease Classification of Body CT Scans.

Authors
Tushar, FI; D'Anniballe, VM; Hou, R; Mazurowski, MA; Fu, W; Samei, E; Rubin, GD; Lo, JY
MLA Citation
Tushar, Fakrul Islam, et al. “Weakly Supervised Multi-Organ Multi-Disease Classification of Body CT Scans.” CoRR, vol. abs/2008.01158, 2020.
URI
https://scholars.duke.edu/individual/pub1454154
Source
dblp
Published In
CoRR
Volume
abs/2008.01158

Machine-learning-based multiple abnormality prediction with large-scale chest computed tomography volumes.

Machine learning models for radiology benefit from large-scale data sets with high quality labels for abnormalities. We curated and analyzed a chest computed tomography (CT) data set of 36,316 volumes from 19,993 unique patients. This is the largest multiply-annotated volumetric medical imaging data set reported. To annotate this data set, we developed a rule-based method for automatically extracting abnormality labels from free-text radiology reports with an average F-score of 0.976 (min 0.941, max 1.0). We also developed a model for multi-organ, multi-disease classification of chest CT volumes that uses a deep convolutional neural network (CNN). This model reached a classification performance of AUROC >0.90 for 18 abnormalities, with an average AUROC of 0.773 for all 83 abnormalities, demonstrating the feasibility of learning from unfiltered whole volume CT data. We show that training on more labels improves performance significantly: for a subset of 9 labels - nodule, opacity, atelectasis, pleural effusion, consolidation, mass, pericardial effusion, cardiomegaly, and pneumothorax - the model's average AUROC increased by 10% when the number of training labels was increased from 9 to all 83. All code for volume preprocessing, automated label extraction, and the volume abnormality prediction model is publicly available. The 36,316 CT volumes and labels will also be made publicly available pending institutional approval.
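The average-AUROC evaluation described above can be sketched in pure Python. The `auroc` and `mean_auroc` helpers are hypothetical illustrations of the metric, not the authors' released evaluation code:

```python
def auroc(scores, targets):
    """AUROC via the Mann-Whitney U statistic (ties counted as 0.5)."""
    pos = [s for s, t in zip(scores, targets) if t == 1]
    neg = [s for s, t in zip(scores, targets) if t == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative examples")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def mean_auroc(score_matrix, target_matrix):
    """Average per-label AUROC over a multi-label prediction matrix.

    score_matrix[i][j]  = predicted probability of label j for volume i
    target_matrix[i][j] = ground-truth 0/1 for label j of volume i
    """
    n_labels = len(score_matrix[0])
    per_label = [
        auroc([row[j] for row in score_matrix],
              [row[j] for row in target_matrix])
        for j in range(n_labels)
    ]
    return sum(per_label) / n_labels
```

Averaging per-label AUROCs, as here, treats each of the 83 abnormalities equally regardless of prevalence, which is why a few rare, hard labels can pull the mean well below the best per-label scores.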
Authors
Draelos, RL; Dov, D; Mazurowski, MA; Lo, JY; Henao, R; Rubin, GD; Carin, L
MLA Citation
Draelos, Rachel Lea, et al. “Machine-learning-based multiple abnormality prediction with large-scale chest computed tomography volumes.” Med Image Anal, vol. 67, Jan. 2021, p. 101857. Pubmed, doi:10.1016/j.media.2020.101857.
URI
https://scholars.duke.edu/individual/pub1433045
PMID
33129142
Source
pubmed
Published In
Med Image Anal
Volume
67
Start Page
101857
DOI
10.1016/j.media.2020.101857

Detection of masses and architectural distortions in digital breast tomosynthesis: a publicly available dataset of 5,060 patients and a deep learning model.

Authors
Buda, M; Saha, A; Walsh, R; Ghate, SV; Li, N; Swiecicki, A; Lo, JY; Mazurowski, MA
URI
https://scholars.duke.edu/individual/pub1466877
Source
dblp
Published In
CoRR
Volume
abs/2011.07995

Research Areas:

Breast Neoplasms
Clinical Trials as Topic
Computer Simulation
Decision Making, Computer-Assisted
Decision Support Systems, Clinical
Decision Support Techniques
Image Processing, Computer-Assisted
Imaging, Three-Dimensional
Machine learning
Mammography
Models, Structural
Pattern Recognition, Automated
Radiographic Image Interpretation, Computer-Assisted
Radiology
Technology Assessment, Biomedical
Tomosynthesis