Joseph Lo

Overview:

My research applies computer vision and machine learning to medical imaging, with a focus on mammography and CT. There are three specific projects:

First, we have a long track record of creating machine learning models to detect and diagnose breast cancer from mammograms. These algorithms are based on computer vision and deep learning, with the long-term goal of combining imaging data with proteomic/genomic markers. Specific projects include predicting which cases of DCIS are likely to contain hidden invasive cancer, thus enabling women to make personalized treatment decisions. This work is funded by the NIH, the Department of Defense, Cancer Research UK, and other agencies.

Second, we design virtual breast models that are based on actual patient data and thus contain highly realistic breast anatomy with voxel-level ground truth. We can transform these virtual models into physical form using several 3D printing technologies. In work funded by the NIH, we are translating this approach to produce a new generation of realistic phantoms for CT. Such physical phantoms can be scanned on actual imaging devices, allowing us to assess image quality in new ways that are not only quantitative but also clinically relevant.

Third, we are developing a broad machine learning platform to segment multiple organs and classify multiple diseases in chest-abdomen-pelvis CT scans. The goal is to provide automated labeling of hospital-scale data sets (potentially hundreds of thousands of studies) to produce sufficient data for deep learning studies. This work includes natural language processing to analyze radiology reports and deep learning models for the segmentation and classification tasks.

Positions:

Professor of Radiology

Radiology
School of Medicine

Professor in the Department of Electrical and Computer Engineering

Electrical and Computer Engineering
Pratt School of Engineering

Member of the Duke Cancer Institute

Duke Cancer Institute
School of Medicine

Education:

B.S.E.E. 1988

Duke University

Ph.D. 1993

Duke University

Research Associate, Radiology

Duke University

Grants:

Predicting Breast Cancer With Ultrasound and Mammography

Administered By
Radiology
Awarded By
National Institutes of Health
Role
Principal Investigator
Start Date
End Date

Improved Diagnosis of Breast Microcalcification Clusters

Administered By
Radiology
Awarded By
National Institutes of Health
Role
Principal Investigator
Start Date
End Date

Accurate Models for Predicting Radiation-Induced Injury

Administered By
Radiation Oncology
Awarded By
National Institutes of Health
Role
Investigator
Start Date
End Date

Computer Aid for the Decision to Biopsy Breast Lesions

Administered By
Radiology
Awarded By
US Army Medical Research
Role
Co Investigator
Start Date
End Date

Computer Aid for the Decision to Biopsy Breast Lesions

Administered By
Radiology
Awarded By
National Institutes of Health
Role
Investigator
Start Date
End Date

Publications:

Microcalcification localization and cluster detection using unsupervised convolutional autoencoders and structural similarity index

© 2020 SPIE. Detecting microcalcification clusters in mammograms is important to the diagnosis of breast diseases. Previous studies, which mainly focused on supervised methods, require abundant annotated training data, but such data are usually hard to acquire. In this work, we leverage unsupervised convolutional autoencoders and structural similarity (SSIM)-based post-processing to detect and localize microcalcification clusters in full-field digital mammograms (FFDMs). Our models were trained on patches extracted from 3,632 normal cases, totaling 16,702 mammograms. Evaluations were conducted in three aspects: patch-based anomaly detection, pixel-wise microcalcification localization, and microcalcification cluster detection. Specifically, receiver operating characteristic (ROC) analysis was used for patch-based anomaly detection. Then, a pixel-wise ROC analysis and a cluster-based free-response ROC (FROC) analysis were performed to assess our detection of individual microcalcifications and microcalcification clusters, respectively. We achieved a pixel-wise AUC of 0.97 as well as a cluster-based sensitivity of 0.62 at 1 false positive per image and 0.75 at 2.5 false positives per image. Both qualitative and quantitative results demonstrated the effectiveness of our method.
Authors
Peng, Y; Hou, R; Ren, Y; Grimm, LJ; Marks, JR; Hwang, ES; Lo, JY
MLA Citation
Peng, Y., et al. “Microcalcification localization and cluster detection using unsupervised convolutional autoencoders and structural similarity index.” Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 11314, 2020. Scopus, doi:10.1117/12.2551263.
URI
https://scholars.duke.edu/individual/pub1447091
Source
scopus
Published In
Progress in Biomedical Optics and Imaging Proceedings of Spie
Volume
11314
Published Date
DOI
10.1117/12.2551263
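The SSIM post-processing in this paper compares each patch with its autoencoder reconstruction: structures the model never saw in normal training data (such as microcalcifications) reconstruct poorly and therefore score low on SSIM. A minimal sketch of that scoring idea in plain Python, using a simplified single-window SSIM (the paper computes a sliding-window SSIM map; the constants C1 and C2 follow the standard SSIM definition for dynamic range L):

```python
def ssim(x, y, L=1.0):
    """Single-window structural similarity between two equal-size
    patches given as flat lists of pixel values in [0, L]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2  # standard stabilizers
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx * mx + my * my + C1) * (vx + vy + C2))

def anomaly_score(patch, reconstruction):
    # a poorly reconstructed patch (low SSIM) is a candidate anomaly
    return 1.0 - ssim(patch, reconstruction)
```

A patch the autoencoder reconstructs faithfully scores near 0, while a patch containing a bright focal structure missing from the reconstruction scores much higher.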

A multitask deep learning method in simultaneously predicting occult invasive disease in ductal carcinoma in-situ and segmenting microcalcifications in mammography

© 2020 SPIE. We proposed a two-branch multitask learning convolutional neural network to solve two different but related tasks at the same time. Our main task is to predict occult invasive disease in biopsy-proven ductal carcinoma in situ (DCIS), with an auxiliary task of segmenting microcalcifications (MCs). In this study, we collected digital mammography from 604 patients, 400 of whom had DCIS. The model used 512×512 patches extracted within radiologist-masked ROIs as input, with outputs including noisy MC segmentations obtained from our previous algorithms and classification labels from the final diagnosis at each patient's definitive surgery. We built a deep multitask model combining a U-Net segmentation network and a classification network, which share the first several convolutional layers. The model achieved a patch-based ROC-AUC of 0.69 and a case-based ROC-AUC of 0.61. Segmentation results achieved a Dice coefficient of 0.49.
Authors
Hou, R; Grimm, LJ; Mazurowski, MA; Marks, JR; King, LM; Maley, CC; Hwang, ES; Lo, JY
MLA Citation
Hou, R., et al. “A multitask deep learning method in simultaneously predicting occult invasive disease in ductal carcinoma in-situ and segmenting microcalcifications in mammography.” Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 11314, 2020. Scopus, doi:10.1117/12.2549669.
URI
https://scholars.duke.edu/individual/pub1447092
Source
scopus
Published In
Progress in Biomedical Optics and Imaging Proceedings of Spie
Volume
11314
Published Date
DOI
10.1117/12.2549669
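The abstract reports a Dice coefficient for the auxiliary segmentation head alongside ROC-AUC for the main classification head. Multitask models of this shape are typically trained with a weighted sum of a segmentation loss and a classification loss; a plain-Python sketch of the Dice coefficient and such a combined loss follows (the weighting `alpha` and the specific loss pairing are illustrative assumptions, not taken from the paper):

```python
import math

def dice(pred, truth, eps=1e-8):
    """Dice coefficient between two binary masks given as flat 0/1 lists."""
    inter = sum(p * t for p, t in zip(pred, truth))
    return (2.0 * inter + eps) / (sum(pred) + sum(truth) + eps)

def multitask_loss(seg_pred, seg_truth, cls_prob, cls_label, alpha=0.5):
    """Weighted sum of a Dice loss (segmentation head) and binary
    cross-entropy (classification head); alpha is an illustrative weight."""
    seg_loss = 1.0 - dice(seg_pred, seg_truth)
    p = min(max(cls_prob, 1e-7), 1 - 1e-7)  # clamp for log stability
    cls_loss = -(cls_label * math.log(p) + (1 - cls_label) * math.log(1 - p))
    return alpha * seg_loss + (1 - alpha) * cls_loss
```

Sharing the early convolutional layers lets the gradient from the noisy segmentation labels regularize the features the classifier sees, which is the point of the auxiliary task.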

Weakly supervised 3D classification of chest CT using aggregated multi-resolution deep segmentation features

© 2020 SPIE. Weakly supervised disease classification of CT imaging suffers from poor localization owing to case-level annotations, where even a positive scan can hold hundreds to thousands of negative slices along multiple planes. Furthermore, although deep learning segmentation and classification models extract distinctly unique combinations of anatomical features from the same target class(es), they are typically treated as two independent processes in a computer-aided diagnosis (CAD) pipeline, with little to no feature reuse. In this research, we propose a medical classifier that leverages the semantic structural concepts learned via multi-resolution segmentation feature maps to guide weakly supervised 3D classification of chest CT volumes. Additionally, a comparative analysis is drawn across two different types of feature aggregation to explore the vast possibilities surrounding feature fusion. Using a dataset of 1,593 scans labeled on a case-level basis via a rule-based model, we train a dual-stage convolutional neural network (CNN) to perform organ segmentation and binary classification of four representative diseases (emphysema, pneumonia/atelectasis, mass, and nodules) in the lungs. The baseline model, with separate stages for segmentation and classification, achieves an AUC of 0.791. Using identical hyperparameters, the connected architecture with static and dynamic feature aggregation improves performance to AUCs of 0.832 and 0.851, respectively. This study advances the field in two key ways. First, case-level report data are used to weakly supervise a 3D CT classifier of multiple, simultaneous diseases for an organ. Second, segmentation and classification models are connected with two different feature aggregation strategies to enhance classification performance.
Authors
Saha, A; Tushar, FI; Faryna, K; D'Anniballe, VM; Hou, R; Mazurowski, MA; Rubin, GD; Lo, JY
MLA Citation
Saha, A., et al. “Weakly supervised 3D classification of chest CT using aggregated multi-resolution deep segmentation features.” Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 11314, 2020. Scopus, doi:10.1117/12.2550857.
URI
https://scholars.duke.edu/individual/pub1447178
Source
scopus
Published In
Progress in Biomedical Optics and Imaging Proceedings of Spie
Volume
11314
Published Date
DOI
10.1117/12.2550857
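The two aggregation variants compared in this paper fuse multi-resolution segmentation features into the classifier. A toy sketch of the distinction, assuming global average pooling per feature map: static aggregation combines the pooled features with fixed structure (here, concatenation), while dynamic aggregation weights them with learned coefficients (here stood in for by softmax-normalized logits; the actual fusion layers in the paper are more elaborate):

```python
import math

def gap(feature_map):
    """Global average pooling over a 2D feature map (list of rows)."""
    vals = [v for row in feature_map for v in row]
    return sum(vals) / len(vals)

def static_aggregate(feature_maps):
    # fixed fusion: concatenate pooled features from every resolution
    return [gap(fm) for fm in feature_maps]

def dynamic_aggregate(feature_maps, logits):
    # learned fusion: softmax weights decide each resolution's share
    exps = [math.exp(l) for l in logits]
    z = sum(exps)
    weights = [e / z for e in exps]
    return sum(w * gap(fm) for w, fm in zip(weights, feature_maps))
```

In the static case the classifier must learn how to use every resolution; in the dynamic case the fusion weights themselves are trainable, which is consistent with the dynamic variant's higher AUC reported above.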

Attention-guided classification of abnormalities in semi-structured computed tomography reports

© 2020 SPIE. Lack of annotated data is a major challenge to machine learning algorithms, particularly in the field of radiology. Algorithms that can extract labels in a fast and precise manner are in high demand. Weak supervision is a compromise solution, particularly when dealing with imaging modalities like computed tomography (CT), where the number of slices can reach 1,000 per case. Radiology reports store crucial information about clinicians' findings and observations in CT slices. Automatic generation of labels from CT reports is not a trivial task due to the complexity of sentences and the diversity of expression in free-text narration. In this study, we focus on abnormality classification in the lungs, liver, and kidneys. First, a rule-based model is used to extract weak labels at the case level. Afterwards, an attention-guided recurrent neural network (RNN) is trained to perform binary classification of radiology reports in terms of whether the organ is normal or abnormal. Additionally, a multi-label RNN with an attention mechanism is trained to perform binary classification by aggregating its outputs for four representative diseases (lungs: emphysema, mass-nodule, effusion, and atelectasis-pneumonia; liver: dilatation, fatty infiltration-steatosis, calcification-stone-gallstone, and lesion-mass; kidneys: atrophy, cyst, stone-calculi, and lesion) into a single abnormal class. Performance was evaluated using the receiver operating characteristic (ROC) area under the curve (AUC) on 274, 306, and 278 reports for the lungs, liver, and kidneys, respectively, manually annotated by radiology experts. The change in performance was also evaluated for different training dataset sizes for the lungs. The AUCs of the multi-label pretrained models were: lungs - 0.929, liver - 0.840, kidneys - 0.844; multi-label models: lungs - 0.903, liver - 0.848, kidneys - 0.906; binary pretrained models: lungs - 0.922, liver - 0.826, kidneys - 0.928.
Authors
Faryna, K; Tushar, FI; D'Anniballe, VM; Hou, R; Rubin, GD; Lo, JY
MLA Citation
Faryna, K., et al. “Attention-guided classification of abnormalities in semi-structured computed tomography reports.” Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 11314, 2020. Scopus, doi:10.1117/12.2551370.
URI
https://scholars.duke.edu/individual/pub1447180
Source
scopus
Published In
Progress in Biomedical Optics and Imaging Proceedings of Spie
Volume
11314
Published Date
DOI
10.1117/12.2551370
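The rule-based model that produces the case-level weak labels can be pictured as keyword matching over report text. A minimal sketch, with the caveat that the organ lexicons below are illustrative (adapted from the disease lists in the abstract, not the study's actual rules) and that real rule-based labelers also handle negation ("no effusion"), which this sketch omits:

```python
import re

# illustrative keyword rules per organ (not the study's actual lexicon)
RULES = {
    "lungs": ["emphysema", "mass", "nodule", "effusion",
              "atelectasis", "pneumonia"],
    "liver": ["dilatation", "steatosis", "calcification", "lesion", "mass"],
    "kidneys": ["atrophy", "cyst", "stone", "calculi", "lesion"],
}

def weak_labels(report_text):
    """Return {organ: 1 if any abnormality keyword matches, else 0}."""
    text = report_text.lower()
    return {
        organ: int(any(re.search(r"\b" + kw, text) for kw in kws))
        for organ, kws in RULES.items()
    }
```

Labels produced this way are noisy, which is exactly why the paper then trains attention-guided RNNs on the reports and evaluates them against expert annotations.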

Assessment of task-based performance from five clinical DBT systems using an anthropomorphic breast phantom

© 2020 SPIE. Purpose: There are currently five FDA-approved commercial digital breast tomosynthesis (DBT) systems, all of which have varying geometry and exposure techniques. The aim of this work was to determine if an anthropomorphic breast phantom could be used to systematically compare the performance of DBT, full-field digital mammography (FFDM), and synthetic mammography (SM) across the systems. Methods: An anthropomorphic breast phantom containing printed masses was created through inkjet printing. The phantom was imaged using the automatic exposure control (AEC) settings for each system; thus, all phantom acquisition settings, and the subsequent radiation dose levels, were dictated by the manufacturer settings. A four-alternative forced choice (4AFC) reader study was conducted to assess reader performance. Results: Performance in detecting masses was higher with DBT than with FFDM or SM. The difference in proportion correct (PC) was statistically significant for most cases. Additionally, the PC of the DBT systems trended with increasing gantry span, with the lowest PC from Hologic and Fuji (both 15°), then both GE systems (25°), and the highest for Siemens (50°). Conclusions: A phantom containing masses was imaged on five commercially available DBT systems across three states. A 4AFC study was performed to assess performance with FFDM, DBT, and SM across all systems. Overall detection was highest using DBT, with improvement as the gantry span increased. This study is the first of its kind to use an inkjet-based physical anthropomorphic phantom to assess the performance of all five commercially available breast imaging systems.
Authors
Ikejimba, LC; Salad, J; Graff, CG; Goodsitt, M; Chan, HP; Zhao, W; Huang, H; Ghammraoui, B; Lo, JY; Glick, SJ
MLA Citation
Ikejimba, L. C., et al. “Assessment of task-based performance from five clinical DBT systems using an anthropomorphic breast phantom.” Proceedings of SPIE - The International Society for Optical Engineering, vol. 11513, 2020. Scopus, doi:10.1117/12.2564357.
URI
https://scholars.duke.edu/individual/pub1448495
Source
scopus
Published In
Proceedings of SPIE - The International Society for Optical Engineering
Volume
11513
Published Date
DOI
10.1117/12.2564357
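Reader performance in a 4AFC study is summarized by the proportion correct (PC), with a chance level of 1/4 since each trial offers four alternatives. A small sketch of PC with a Wilson 95% confidence interval (the choice of interval method is an assumption for illustration, not specified by the paper):

```python
import math

def proportion_correct(n_correct, n_trials, z=1.96):
    """PC for a forced-choice reader study plus a Wilson score interval.
    For a 4AFC task the chance level is 0.25."""
    pc = n_correct / n_trials
    denom = 1 + z * z / n_trials
    center = (pc + z * z / (2 * n_trials)) / denom
    half = (z / denom) * math.sqrt(pc * (1 - pc) / n_trials
                                   + z * z / (4 * n_trials * n_trials))
    return pc, (center - half, center + half)
```

Comparing systems then amounts to checking whether their PC intervals separate, which is the kind of statistical-significance comparison the Results section describes.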

Research Areas:

Breast Neoplasms
Clinical Trials as Topic
Computer Simulation
Decision Making, Computer-Assisted
Decision Support Systems, Clinical
Decision Support Techniques
Image Processing, Computer-Assisted
Imaging, Three-Dimensional
Machine learning
Mammography
Models, Structural
Pattern Recognition, Automated
Radiographic Image Interpretation, Computer-Assisted
Radiology
Technology Assessment, Biomedical
Tomosynthesis