Kyle Lafata

Overview:

Kyle Lafata is an Assistant Professor of Radiation Oncology, Radiology, Medical Physics, and Electrical & Computer Engineering at Duke University. After earning his PhD in Medical Physics in 2018, he completed postdoctoral training at the U.S. Department of Veterans Affairs in the Big Data Scientist Training Enhancement Program. Prof. Lafata has broad expertise in imaging science, digital pathology, computer vision, biophysics, and applied mathematics. His dissertation work focused on the applied analysis of stochastic differential equations and high-dimensional radiomic phenotyping, where he developed physics-based computational methods and soft-computing paradigms to interrogate images. These included stochastic modeling, self-organization, and quantum machine learning (i.e., an emerging branch of research that explores the methodological and structural similarities between quantum systems and learning systems). 

Prof. Lafata has worked in various areas of computational medicine and biology, resulting in 38 peer-reviewed journal publications, 15 invited talks, and more than 50 national conference presentations. At Duke, the Lafata Lab focuses on the theory, development, and application of multiscale computational biomarkers. Using computational and mathematical methods, they study the appearance and behavior of disease across different physical length-scales (i.e., radiomics ~10⁻³ m, pathomics ~10⁻⁶ m, and genomics ~10⁻⁹ m) and time-scales (e.g., the natural history of disease, response to treatment). The overarching goal of the lab is to develop and apply new technology that transforms imaging into basic science findings and computational biomarker discovery.

Positions:

Thaddeus V. Samulski, Assistant Professor of Radiation Oncology

Radiation Oncology
School of Medicine

Assistant Professor of Radiation Oncology

Radiation Oncology
School of Medicine

Assistant Professor in Radiology

Radiology
School of Medicine

Member of the Duke Cancer Institute

Duke Cancer Institute
School of Medicine

Education:

Ph.D. 2018

Duke University

C. 2018

Duke University

Postdoctoral Associate, Radiation Oncology/Radiation Physics Division

Duke University School of Medicine

Grants:

Targeting the B Cell Response to Treat Antibody-Mediated Rejection

Administered By
Surgery, Abdominal Transplant Surgery
Awarded By
National Institutes of Health
Role
Co Investigator
Start Date
End Date

Computational Pathology of Proteinuric Diseases

Administered By
Medicine, Nephrology
Awarded By
National Institutes of Health
Role
Co Investigator
Start Date
End Date

Computational Pathology of Proteinuric Diseases (R01)

Administered By
Medicine, Nephrology
Awarded By
National Institutes of Health
Role
Co Investigator
Start Date
End Date

Publications:

Exploratory analysis of mesenteric-portal axis CT radiomic features for survival prediction of patients with pancreatic ductal adenocarcinoma.

OBJECTIVE: To develop and evaluate task-based radiomic features extracted from the mesenteric-portal axis for prediction of survival and response to neoadjuvant therapy in patients with pancreatic ductal adenocarcinoma (PDAC). METHODS: Consecutive patients with PDAC who underwent surgery after neoadjuvant therapy from two academic hospitals between December 2012 and June 2018 were retrospectively included. Two radiologists performed a volumetric segmentation of PDAC and mesenteric-portal axis (MPA) using a segmentation software on CT scans before (CTtp0) and after (CTtp1) neoadjuvant therapy. Segmentation masks were resampled into uniform 0.625-mm voxels to develop task-based morphologic features (n = 57). These features aimed to assess MPA shape, MPA narrowing, changes in shape and diameter between CTtp0 and CTtp1, and length of MPA segment affected by the tumor. A Kaplan-Meier curve was generated to estimate the survival function. To identify reliable radiomic features associated with survival, a Cox proportional hazards model was used. Features with an ICC  ≥ 0.80 were used as candidate variables, with clinical features included a priori. RESULTS: In total, 107 patients (60 men) were included. The median survival time was 895 days (95% CI: 717, 1061). Three task-based shape radiomic features (Eccentricity mean tp0, Area minimum value tp1, and Ratio 2 minor tp1) were selected. The model showed an integrated AUC of 0.72 for prediction of survival. The hazard ratio for the Area minimum value tp1 feature was 1.78 (p = 0.02) and 0.48 for the Ratio 2 minor tp1 feature (p = 0.002). CONCLUSION: Preliminary results suggest that task-based shape radiomic features can predict survival in PDAC patients. KEY POINTS: • In a retrospective study of 107 patients who underwent neoadjuvant therapy followed by surgery for PDAC, task-based shape radiomic features were extracted and analyzed from the mesenteric-portal axis. • A Cox proportional hazards model that included three selected radiomic features plus clinical information showed an integrated AUC of 0.72 for prediction of survival, and a better fit compared to the model with only clinical information.
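As an illustration of the survival-analysis workflow outlined in this abstract, the sketch below shows how ICC-gated radiomic features could feed a Cox proportional hazards model using the lifelines library. This is not the study's code; the file name, column names, and ICC values are hypothetical placeholders.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Hypothetical inputs: a per-patient table with survival data, clinical
# covariates, and candidate shape features, plus inter-reader ICCs computed
# separately for each feature (all names and values are placeholders).
df = pd.read_csv("mpa_radiomics.csv")  # assumed columns: survival_days, event, age, features...
feature_icc = {
    "eccentricity_mean_tp0": 0.91,
    "area_min_tp1": 0.88,
    "ratio2_minor_tp1": 0.85,
    "unstable_feature": 0.42,
}

# Keep only features with ICC >= 0.80 as candidate variables; clinical
# covariates (assumed numeric) are included a priori.
candidates = [name for name, icc in feature_icc.items() if icc >= 0.80]
model_df = df[["survival_days", "event", "age"] + candidates]

# Fit the Cox proportional hazards model and inspect hazard ratios.
cph = CoxPHFitter()
cph.fit(model_df, duration_col="survival_days", event_col="event")
cph.print_summary()

# Kaplan-Meier estimate of the cohort's survival function.
kmf = KaplanMeierFitter()
kmf.fit(df["survival_days"], event_observed=df["event"])
kmf.plot_survival_function()
```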
Authors
Rigiroli, F; Hoye, J; Lerebours, R; Lyu, P; Lafata, KJ; Zhang, AR; Erkanli, A; Mettu, NB; Morgan, DE; Samei, E; Marin, D
MLA Citation
Rigiroli, Francesca, et al. “Exploratory analysis of mesenteric-portal axis CT radiomic features for survival prediction of patients with pancreatic ductal adenocarcinoma.” Eur Radiol, Mar. 2023. Pubmed, doi:10.1007/s00330-023-09532-0.
URI
https://scholars.duke.edu/individual/pub1568076
PMID
36894753
Source
pubmed
Published In
Eur Radiol
Published Date
DOI
10.1007/s00330-023-09532-0

A neural ordinary differential equation model for visualizing deep neural network behaviors in multi-parametric MRI-based glioma segmentation.

PURPOSE: To develop a neural ordinary differential equation (ODE) model for visualizing deep neural network behavior during multi-parametric MRI-based glioma segmentation as a method to enhance deep learning explainability. METHODS: By hypothesizing that deep feature extraction can be modeled as a spatiotemporally continuous process, we implemented a novel deep learning model, Neural ODE, in which deep feature extraction was governed by an ODE parameterized by a neural network. The dynamics of (1) MR images after interactions with the deep neural network and (2) segmentation formation can thus be visualized after solving the ODE. An accumulative contribution curve (ACC) was designed to quantitatively evaluate each MR image's utilization by the deep neural network toward the final segmentation results. The proposed Neural ODE model was demonstrated using 369 glioma patients with a 4-modality multi-parametric MRI protocol: T1, contrast-enhanced T1 (T1-Ce), T2, and FLAIR. Three Neural ODE models were trained to segment enhancing tumor (ET), tumor core (TC), and whole tumor (WT), respectively. The key MRI modalities with significant utilization by deep neural networks were identified based on ACC analysis. Segmentation results by deep neural networks using only the key MRI modalities were compared to those using all four MRI modalities in terms of Dice coefficient, accuracy, sensitivity, and specificity. RESULTS: All Neural ODE models successfully illustrated image dynamics as expected. ACC analysis identified T1-Ce as the only key modality in ET and TC segmentations, while both FLAIR and T2 were key modalities in WT segmentation. Compared to the U-Net results using all four MRI modalities, the Dice coefficient of ET (0.784→0.775), TC (0.760→0.758), and WT (0.841→0.837) using the key modalities only had minimal differences without significance. Accuracy, sensitivity, and specificity results demonstrated the same patterns. CONCLUSION: The Neural ODE model offers a new tool for optimizing the deep learning model inputs with enhanced explainability. The presented methodology can be generalized to other medical image-related deep-learning applications.
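The core idea, treating deep feature extraction as a continuous-time process governed by a learned ODE, can be sketched with PyTorch and the third-party torchdiffeq solver. This is a minimal illustration under assumed details (the network architecture, image size, and channel-contribution proxy are placeholders), not the published model.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # third-party ODE solver for PyTorch

class ODEFunc(nn.Module):
    """Dynamics f_theta(h, t) parameterized by a small convolutional network."""
    def __init__(self, channels: int = 4):  # 4 MRI modalities: T1, T1-Ce, T2, FLAIR
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1), nn.Tanh(),
            nn.Conv2d(16, channels, kernel_size=3, padding=1),
        )

    def forward(self, t, h):
        return self.net(h)

func = ODEFunc()
h0 = torch.randn(1, 4, 128, 128)        # stand-in multi-parametric MR slice
t = torch.linspace(0.0, 1.0, steps=11)  # intermediate "times" at which to visualize

# Solve dh/dt = f_theta(h, t); states[k] is the image state at time t[k],
# giving a continuous, inspectable trajectory from input toward deep features.
states = odeint(func, h0, t)            # shape: (11, 1, 4, 128, 128)

# Rough per-modality utilization proxy: accumulate how much each input
# channel changes along the trajectory (cf. the ACC described above).
contribution = (states[1:] - states[:-1]).abs().sum(dim=(0, 1, 3, 4))
print(contribution)
```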
Authors
Yang, Z; Hu, Z; Ji, H; Lafata, K; Vaios, E; Floyd, S; Yin, F-F; Wang, C
URI
https://scholars.duke.edu/individual/pub1526964
PMID
36840621
Source
pubmed
Published In
Med Phys
Published Date
DOI
10.1002/mp.16286

A Faster Prostate MRI: Comparing a Novel Denoised, Single-Average T2 Sequence to the Conventional Multiaverage T2 Sequence Regarding Lesion Detection and PI-RADS Score Assessment.

BACKGROUND: The T2-weighted (T2w) sequence is a standard component of a prostate MRI examination; however, it is time-consuming, requiring multiple signal averages to achieve acceptable image quality. PURPOSE/HYPOTHESIS: To determine whether a denoised, single-average T2 sequence (T2-R) is noninferior to the standard multiaverage T2 sequence (T2-S) in terms of lesion detection and PI-RADS score assessment. STUDY TYPE: Retrospective. POPULATION: A total of 45 males (age range 60-75 years) who underwent clinically indicated prostate MRI examinations, 21 of whom had pathologically proven prostate cancer. FIELD STRENGTH/SEQUENCE: 3 T; T2w FSE, DWI with ADC maps, and dynamic contrast-enhanced images with color-coded perfusion maps. T2-R images were created from the raw data utilizing a single "average" with iterative denoising. ASSESSMENT: Nine readers randomly assessed complete exams including T2-R and T2-S images in separate sessions. PI-RADS version 2.1 was used. All readers then compared the T2-R and T2-S images side by side to evaluate subjective preference. An additional detailed image quality assessment was performed by three senior-level readers. STATISTICAL TESTS: Generalized linear mixed effects models for differences in lesion detection, image quality features, and overall preference between the T2-R and T2-S sequences. Intraclass correlation coefficients (ICC) were used to assess reader agreement for all comparisons. A significance threshold of P = 0.05 was used for all statistical tests. RESULTS: There was no significant difference between sequences regarding identification of lesions with PI-RADS ≥3 (P = 0.10) or PI-RADS score (P = 0.77). Reader agreement was excellent for lesion identification (ICC = 0.84). There was no significant overall preference between the two sequences regarding image quality (P = 0.07, 95% CI: [-0.23, 0.01]). Reader agreement was good regarding sequence preference (ICC = 0.62). DATA CONCLUSION: Single-average, denoised T2-weighted images were noninferior to standard multiaverage T2-weighted images for prostate lesion detection and PI-RADS scoring. EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 3.
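A minimal sketch of the reader-agreement analysis mentioned above, computing intraclass correlation coefficients from per-lesion ratings with the pingouin library; the table contents and column names are hypothetical, not the study's data.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format table: one row per (lesion, reader) rating.
ratings = pd.DataFrame({
    "lesion": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "reader": ["R1", "R2", "R3"] * 3,
    "pirads": [4, 4, 5, 3, 3, 3, 5, 4, 5],
})

# Intraclass correlation coefficients across readers; pingouin reports
# several ICC variants (ICC1, ICC2, ICC3, and their averaged forms).
icc = pg.intraclass_corr(data=ratings, targets="lesion",
                         raters="reader", ratings="pirads")
print(icc[["Type", "ICC", "CI95%"]])
```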
Authors
Kelleher, CB; Macdonald, J; Jaffe, TA; Allen, BC; Kalisz, KR; Kauffman, TH; Smith, JD; Maurer, KR; Thomas, SP; Coleman, AD; Zaki, IH; Kannengiesser, S; Lafata, K; Gupta, RT; Bashir, MR
URI
https://scholars.duke.edu/individual/pub1562348
PMID
36607254
Source
pubmed
Published In
J Magn Reson Imaging
Published Date
DOI
10.1002/jmri.28577

Towards optimal deep fusion of imaging and clinical data via a model-based description of fusion quality.

BACKGROUND: Due to intrinsic differences in data formatting, data structure, and underlying semantic information, the integration of imaging data with clinical data can be non-trivial. Optimal integration requires robust data fusion, that is, the process of integrating multiple data sources to produce more useful information than captured by individual data sources. Here, we introduce the concept of fusion quality for deep learning problems involving imaging and clinical data. We first provide a general theoretical framework and numerical validation of our technique. To demonstrate real-world applicability, we then apply our technique to optimize the fusion of CT imaging and hepatic blood markers to estimate portal venous hypertension, which is linked to prognosis in patients with cirrhosis of the liver. PURPOSE: To develop a method for measuring optimal data fusion quality in deep learning problems utilizing both imaging data and clinical data. METHODS: Our approach is based on modeling the fully connected layer (FCL) of a convolutional neural network (CNN) as a potential function, whose distribution takes the form of the classical Gibbs measure. The features of the FCL are then modeled as random variables governed by state functions, which are interpreted as the different data sources to be fused. The probability density of each source, relative to the probability density of the FCL, represents a quantitative measure of source-bias. To minimize this source-bias and optimize CNN performance, we implement a vector-growing encoding scheme called positional encoding, where low-dimensional clinical data are transcribed into a rich feature space that complements high-dimensional imaging features. We first provide a numerical validation of our approach based on simulated Gaussian processes. We then applied our approach to patient data, where we optimized the fusion of CT images with blood markers to predict portal venous hypertension in patients with cirrhosis of the liver. This patient study was based on a modified ResNet-152 model that incorporates both images and blood markers as input. These two data sources were processed in parallel, fused into a single FCL, and optimized based on our fusion quality framework. RESULTS: Numerical validation of our approach confirmed that the probability density function of a fused feature space converges to a source-specific probability density function when source data are improperly fused. Our numerical results demonstrate that this phenomenon can be quantified as a measure of fusion quality. On patient data, the fused model consisting of both imaging data and positionally encoded blood markers at the theoretically optimal fusion quality metric achieved an AUC of 0.74 and an accuracy of 0.71. This model was statistically better than the imaging-only model (AUC = 0.60; accuracy = 0.62), the blood marker-only model (AUC = 0.58; accuracy = 0.60), and a variety of purposely sub-optimized fusion models (AUC = 0.61-0.70; accuracy = 0.58-0.69). CONCLUSIONS: We introduced the concept of data fusion quality for multi-source deep learning problems involving both imaging and clinical data. We provided a theoretical framework, numerical validation, and real-world application in abdominal radiology. Our data suggest that CT imaging and hepatic blood markers provide complementary diagnostic information when appropriately fused.
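A minimal sketch of the positional-encoding step described above: a low-dimensional clinical scalar is lifted into a richer sin/cos feature vector and concatenated with imaging features ahead of the fused fully connected layer. The encoding shown is the standard transformer-style formulation and all dimensions are assumptions; the paper's exact implementation may differ.

```python
import torch
import torch.nn as nn

def positional_encode(x: torch.Tensor, dim: int = 32) -> torch.Tensor:
    """Encode a batch of scalars (shape [B, 1]) into [B, dim] sin/cos features."""
    freqs = torch.exp(torch.arange(0, dim, 2, dtype=torch.float32)
                      * (-torch.log(torch.tensor(10000.0)) / dim))
    angles = x * freqs  # [B, dim/2]
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

class FusionHead(nn.Module):
    """Fuses CNN image features with positionally encoded blood markers in one FCL."""
    def __init__(self, img_dim: int = 2048, n_markers: int = 3, enc_dim: int = 32):
        super().__init__()
        self.fcl = nn.Linear(img_dim + n_markers * enc_dim, 2)

    def forward(self, img_feats, markers):
        # markers: [B, n_markers]; encode each marker separately, then concatenate.
        encoded = torch.cat([positional_encode(markers[:, i:i + 1])
                             for i in range(markers.shape[1])], dim=-1)
        return self.fcl(torch.cat([img_feats, encoded], dim=-1))

head = FusionHead()
logits = head(torch.randn(8, 2048), torch.randn(8, 3))  # dummy batch
print(logits.shape)                                      # torch.Size([8, 2])
```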
Authors
Wang, Y; Li, X; Konanur, M; Konkel, B; Seyferth, E; Brajer, N; Liu, J-G; Bashir, MR; Lafata, KJ
MLA Citation
Wang, Yuqi, et al. “Towards optimal deep fusion of imaging and clinical data via a model-based description of fusion quality.” Med Phys, Dec. 2022. Pubmed, doi:10.1002/mp.16181.
URI
https://scholars.duke.edu/individual/pub1560811
PMID
36548913
Source
pubmed
Published In
Med Phys
Published Date
DOI
10.1002/mp.16181

Prognostic Model for Intracranial Progression after Stereotactic Radiosurgery: A Multicenter Validation Study.

Stereotactic radiosurgery (SRS) is a standard of care for many patients with brain metastases. To optimize post-SRS surveillance, this study aimed to validate a previously published nomogram predicting post-SRS intracranial progression (IP). We identified consecutive patients completing an initial course of SRS across two institutions between July 2017 and December 2020. Patients were classified as low- or high-risk for post-SRS IP per a previously published nomogram. Overall survival (OS) and freedom from IP (FFIP) were assessed via the Kaplan-Meier method. Assessment of parameters impacting FFIP was performed with univariable and multivariable Cox proportional hazards models. Among 890 patients, median follow-up was 9.8 months (95% CI 9.1-11.2 months). In total, 47% had NSCLC primary tumors, and 47% had oligometastatic disease (defined as ≤5 metastatic foci) at the time of SRS. Per the IP nomogram, 53% of patients were deemed high-risk. For low- and high-risk patients, median FFIP was 13.9 months (95% CI 11.1-17.1 months) and 7.6 months (95% CI 6.4-9.3 months), respectively, and FFIP was superior in low-risk patients (p < 0.0001). This large multisite brain metastasis cohort supports the use of an IP nomogram as a quick and simple means of stratifying patients into low- and high-risk groups for post-SRS IP.
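The risk-group comparison described above can be sketched with Kaplan-Meier estimates and a log-rank test in lifelines; the file and column names below are hypothetical placeholders, not the study's data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort table; assumed columns: ffip_months, progressed (0/1), risk_group.
df = pd.read_csv("srs_cohort.csv")
low = df[df["risk_group"] == "low"]
high = df[df["risk_group"] == "high"]

# Kaplan-Meier freedom-from-intracranial-progression curves per risk group.
kmf = KaplanMeierFitter()
ax = kmf.fit(low["ffip_months"], low["progressed"], label="low risk").plot_survival_function()
kmf.fit(high["ffip_months"], high["progressed"], label="high risk").plot_survival_function(ax=ax)

# Log-rank test for a difference between the two FFIP curves.
result = logrank_test(low["ffip_months"], high["ffip_months"],
                      event_observed_A=low["progressed"],
                      event_observed_B=high["progressed"])
print(f"log-rank p = {result.p_value:.4f}")
```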
Authors
Carpenter, DJ; Natarajan, B; Arshad, M; Natesan, D; Schultz, O; Moravan, MJ; Read, C; Lafata, KJ; Giles, W; Fecci, P; Mullikin, TC; Reitman, ZJ; Kirkpatrick, JP; Floyd, SR; Chmura, SJ; Hong, JC; Salama, JK
MLA Citation
Carpenter, David J., et al. “Prognostic Model for Intracranial Progression after Stereotactic Radiosurgery: A Multicenter Validation Study.” Cancers (Basel), vol. 14, no. 21, Oct. 2022. Pubmed, doi:10.3390/cancers14215186.
URI
https://scholars.duke.edu/individual/pub1555466
PMID
36358606
Source
pubmed
Published In
Cancers
Volume
14
Published Date
DOI
10.3390/cancers14215186