Lei Ren

Overview:

Dr. Ren's research interests include imaging dose reduction using digital tomosynthesis (DTS), cone-beam CT (CBCT) scatter correction, novel DTS/CBCT/MRI image reconstruction methods using prior information and motion modeling, deformable image registration, image synthesis, image augmentation, 4D imaging, and the development and application of AI in image-guided radiation therapy (IGRT). His clinical expertise focuses on stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT) for brain, lung, and liver cancer patients.

Positions:

Adjunct Professor, Department of Radiation Oncology, School of Medicine

Member, Duke Cancer Institute, School of Medicine

Education:

Ph.D., Duke University, 2009

Medical Physics Faculty, Radiation Oncology, Henry Ford Health System

Board Certification: Therapeutic Medical Physics, American Board of Radiology

Grants:

A synchronized moving grid (SMOG) system to improve CBCT for IGRT and ART

Administered By
Radiation Oncology
Awarded By
Indiana University
Role
Principal Investigator

Publications:

A geometry-guided deep learning technique for CBCT reconstruction.

Purpose. Although the deep learning (DL) technique has been successfully used for computed tomography (CT) reconstruction, its implementation on cone-beam CT (CBCT) reconstruction is extremely challenging due to memory limitations. In this study, a novel DL technique is developed to resolve the memory issue, and its feasibility is demonstrated for CBCT reconstruction from sparsely sampled projection data. Methods. The novel geometry-guided deep learning (GDL) technique is composed of a GDL reconstruction module and a post-processing module. The GDL reconstruction module learns and performs the projection-to-image domain transformation by replacing the traditional single fully connected layer with an array of small fully connected layers in the network architecture based on the projection geometry. The DL post-processing module further improves image quality after reconstruction. We demonstrated the feasibility and advantage of the model by comparing ground truth CBCT with CBCT images reconstructed using (1) the GDL reconstruction module only, (2) the GDL reconstruction module with the DL post-processing module, (3) Feldkamp, Davis, and Kress (FDK) only, (4) FDK with the DL post-processing module, (5) ray-tracing only, and (6) ray-tracing with the DL post-processing module. The differences are quantified by peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root-mean-square error (RMSE). Results. CBCT images reconstructed with GDL show improvements in the quantitative scores of PSNR, SSIM, and RMSE. Reconstruction time per image is comparable across all reconstruction methods.
Compared to current DL methods using large fully connected layers, the estimated memory requirement of GDL is four orders of magnitude less, making DL CBCT reconstruction feasible. Conclusion. With a much lower memory requirement than other existing networks, the GDL technique is demonstrated to be the first DL technique that can rapidly and accurately reconstruct CBCT images from sparsely sampled data.
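The memory argument above can be illustrated with a quick parameter count. The sketch below compares a single dense projection-to-image layer against an array of tiny per-voxel layers wired by the projection geometry; all sizes (views, detector, volume) and the one-pixel-per-view connectivity are illustrative assumptions, not the paper's actual configuration.

```python
n_views = 100                 # assumed sparsely sampled projection views
det_pixels = 128 * 128        # assumed detector pixels per view
n_voxels = 128 ** 3           # assumed reconstructed CBCT volume size

# Single dense layer: every projection pixel connects to every voxel.
full_fc_weights = n_views * det_pixels * n_voxels

# Geometry-guided: each voxel connects only to the detector pixel its
# ray intersects in each view, i.e. one tiny FC layer per voxel.
pixels_per_view_per_voxel = 1
gdl_weights = n_voxels * n_views * pixels_per_view_per_voxel

ratio = full_fc_weights / gdl_weights
print(f"full FC: {full_fc_weights:.2e}, GDL: {gdl_weights:.2e}, ratio: {ratio:.0f}x")
```

Under these assumed sizes the reduction is roughly four orders of magnitude, consistent with the scale of savings the abstract reports.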
MLA Citation
Lu, Ke, et al. “A geometry-guided deep learning technique for CBCT reconstruction.” Phys Med Biol, vol. 66, no. 15, July 2021. Pubmed, doi:10.1088/1361-6560/ac145b.
URI
https://scholars.duke.edu/individual/pub1488943
PMID
34261057
Source
pubmed
Published In
Phys Med Biol
Volume
66
Published Date
July 2021
DOI
10.1088/1361-6560/ac145b

A generative adversarial network (GAN)-based technique for synthesizing realistic respiratory motion in the extended cardiac-torso (XCAT) phantoms.

Objective. To synthesize realistic and controllable respiratory motions in the extended cardiac-torso (XCAT) phantoms by developing a generative adversarial network (GAN)-based deep learning technique. Methods. A motion generation model was developed using bicycle-GAN with a novel 4D generator. Given the end-of-inhale (EOI) phase images and a Gaussian perturbation as input, the model generates inter-phase deformable vector fields (DVFs), which are composed and applied to the input to generate 4D images. The model was trained and validated using 71 4D-CT images from lung cancer patients and then applied to the XCAT EOI images to generate 4D-XCAT with realistic respiratory motions. A separate respiratory motion amplitude control model was built using decision tree regression to predict the input perturbation needed for a specific motion amplitude; this model was developed using 300 4D-XCAT generated from 6 XCAT phantom sizes with 50 different perturbations for each size. In both patient and phantom studies, Dice coefficients for the lungs and lung volume variation during respiration were compared between the simulated images and reference images. The generated DVFs were evaluated by deformation energy. DVFs and ventilation maps of the simulated 4D-CT were compared with those of the reference 4D-CTs using cross correlation and Spearman's correlation. DVFs and ventilation maps were also compared among the original 4D-XCAT, the generated 4D-XCAT, and reference patient 4D-CTs to show the improvement in motion realism achieved by the model. The amplitude control error was calculated. Results. Comparing the simulated and reference 4D-CTs, the maximum deviation of lung volume during respiration was 5.8%, and the Dice coefficient reached at least 0.95 for the lungs. The generated DVFs presented comparable deformation energy levels. The cross correlation of DVFs achieved 0.89 ± 0.10/0.86 ± 0.12/0.95 ± 0.04 along the x/y/z directions in the testing group.
The cross correlation of the derived ventilation maps achieved 0.80 ± 0.05/0.67 ± 0.09/0.68 ± 0.13, and the Spearman's correlation achieved 0.70 ± 0.05/0.60 ± 0.09/0.53 ± 0.01, respectively, in the training/validation/testing groups. The generated 4D-XCAT phantoms presented similar deformation energy as the patient data while maintaining the lung volumes of the original XCAT phantom (Dice = 0.95, maximum lung volume variation = 4%). The amplitude control model kept the motion amplitude error below 0.5 mm. Conclusions. The results demonstrated the feasibility of synthesizing realistic, controllable respiratory motion in the XCAT phantom using the proposed method. This development enhances the value of XCAT phantoms for various 4D imaging and therapy studies.
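The 4D generator above composes inter-phase DVFs and applies them to the input phase to produce the later phases. A minimal 1D sketch of displacement-field composition, assuming fields sampled on a shared grid with linear interpolation as the resampler; the sine-shaped fields are invented for illustration and stand in for learned inter-phase DVFs.

```python
import numpy as np

def compose_dvf_1d(d1, d2, x):
    """Compose two 1D displacement fields: apply d1, then d2.
    d_total(x) = d1(x) + d2(x + d1(x)); d2 is resampled at the
    displaced points x + d1(x) by linear interpolation."""
    return d1 + np.interp(x + d1, x, d2)

x = np.linspace(0.0, 10.0, 101)          # shared spatial grid
d_01 = 0.4 * np.sin(np.pi * x / 10.0)    # phase 0 -> 1 DVF (illustrative)
d_12 = 0.4 * np.sin(np.pi * x / 10.0)    # phase 1 -> 2 DVF (illustrative)
d_02 = compose_dvf_1d(d_01, d_12, x)     # composed phase 0 -> 2 DVF
```

Composing with a zero field returns the other field unchanged, which is a quick sanity check on the resampling step.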
MLA Citation
Chang, Yushi, et al. “A generative adversarial network (GAN)-based technique for synthesizing realistic respiratory motion in the extended cardiac-torso (XCAT) phantoms.” Phys Med Biol, vol. 66, no. 11, May 2021. Pubmed, doi:10.1088/1361-6560/ac01b4.
URI
https://scholars.duke.edu/individual/pub1484348
PMID
34061044
Source
pubmed
Published In
Phys Med Biol
Volume
66
Published Date
May 2021
DOI
10.1088/1361-6560/ac01b4

Liver synthetic CT generation based on dense-cyclegan for MRI-only treatment planning

The application of MRI significantly improves the accuracy and reliability of target delineation for many disease sites in radiotherapy due to its superior soft tissue contrast as compared to CT. However, MRI data do not contain the electron density information that is necessary for accurate dose calculation, and there has been limited work in abdominal synthetic CT (sCT) generation. In this work, we propose to integrate dense blocks and a novel compound loss function into a 3D cycleGAN-based framework to generate sCT from MR images. Since MRI and CT are two different image modalities, dense blocks are employed to combine low- and high-frequency information that can effectively represent different image patches. A novel compound loss function with lp-norm (p = 1.5) distance and gradient difference is used to differentiate the structure boundaries and to retain the sharpness of the sCT image. The proposed algorithm was evaluated using 21 hepatocellular cancer patients' registered MR and CT images as the training dataset, with leave-one-out cross-validation. The average mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross correlation (NCC) were 72.87±18.16 HU, 22.65±3.63 dB, and 0.92±0.04, respectively. The proposed method yields encouraging results for generating sCT for potential use in MR-only photon or proton radiotherapy treatment planning.
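The compound loss described above combines an lp-norm (p = 1.5) distance with a gradient-difference term. A numpy sketch of that idea follows; the relative weighting `lam` and the exact finite-difference form of the gradient term are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def compound_loss(pred, target, p=1.5, lam=1.0):
    """lp-norm (p = 1.5) distance plus a gradient-difference term.
    lam (assumed) balances intensity fidelity against edge sharpness."""
    lp_term = np.mean(np.abs(pred - target) ** p)
    gd_term = 0.0
    for ax in range(pred.ndim):
        dp = np.diff(pred, axis=ax)     # finite-difference gradients
        dt = np.diff(target, axis=ax)
        gd_term += np.mean((np.abs(dp) - np.abs(dt)) ** 2)
    return lp_term + lam * gd_term

rng = np.random.default_rng(0)
ct = rng.random((8, 8, 8))                # stand-in CT patch
sct = ct + 0.05 * rng.random((8, 8, 8))   # stand-in synthetic CT
loss = compound_loss(sct, ct)
```

The gradient-difference term penalizes mismatched edge magnitudes, which is what keeps structure boundaries sharp in the generated sCT.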
Authors
Liu, Y; Lei, Y; Wang, T; Zhou, J; Lin, L; Liu, T; Patel, P; Curran, WJ; Ren, L; Yang, X
MLA Citation
Liu, Y., et al. “Liver synthetic CT generation based on dense-cyclegan for MRI-only treatment planning.” Progress in Biomedical Optics and Imaging - Proceedings of SPIE, vol. 11313, 2020. Scopus, doi:10.1117/12.2549265.
URI
https://scholars.duke.edu/individual/pub1463368
Source
scopus
Published In
Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume
11313
Published Date
2020
DOI
10.1117/12.2549265

4D radiomics: impact of 4D-CBCT image quality on radiomic analysis.

PURPOSE: To investigate the impact of 4D-CBCT image quality on radiomic analysis and the efficacy of using deep learning based image enhancement to improve the accuracy of radiomic features of 4D-CBCT. MATERIAL AND METHODS: In this study, 4D-CT data from 16 lung cancer patients were obtained. Digitally reconstructed radiographs (DRRs) were simulated from the 4D-CT, and then used to reconstruct 4D-CBCT using the conventional FDK (Feldkamp et al 1984 J. Opt. Soc. Am. A 1 612-9) algorithm. Different projection numbers (i.e. 72, 120, 144, 180) and projection angle distributions (i.e. evenly distributed and unevenly distributed using angles from real 4D-CBCT scans) were simulated to generate the corresponding 4D-CBCT. A deep learning model (TecoGAN) was trained on 10 patients and validated on 3 patients to enhance the 4D-CBCT image quality to match with the corresponding ground-truth 4D-CT. The remaining 3 patients with different tumor sizes were used for testing. The radiomic features in 6 different categories, including histogram, GLCM, GLRLM, GLSZM, NGTDM, and wavelet, were extracted from the gross tumor volumes of each phase of the original 4D-CBCT, enhanced 4D-CBCT, and 4D-CT. The radiomic features in 4D-CT were used as the ground truth to evaluate the errors of the radiomic features in the original 4D-CBCT and enhanced 4D-CBCT. Errors in the original 4D-CBCT demonstrated the impact of image quality on radiomic features. Comparison between errors in the original 4D-CBCT and enhanced 4D-CBCT demonstrated the efficacy of using deep learning to improve the radiomic feature accuracy. RESULTS: 4D-CBCT image quality can substantially affect the accuracy of the radiomic features, and the degree of impact is feature-dependent. The deep learning model was able to enhance the anatomical details and edge information in the 4D-CBCT as well as remove other image artifacts. This enhancement of image quality resulted in reduced errors for most radiomic features.
The average reductions of radiomic errors for the 3 patients were 20.0%, 31.4%, 36.7%, 50.0%, 33.6%, and 11.3% for histogram, GLCM, GLRLM, GLSZM, NGTDM, and wavelet features, respectively, and the error reduction was more significant for patients with larger tumors. The findings were consistent across different respiratory phases, projection numbers, and angle distributions. CONCLUSIONS: The study demonstrated that 4D-CBCT image quality has a significant impact on radiomic analysis. The deep learning-based augmentation technique proved to be an effective approach to enhance 4D-CBCT image quality and improve the accuracy of radiomic analysis.
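The evaluation above treats 4D-CT features as ground truth and scores 4D-CBCT features by their error. A toy sketch of that comparison using a few first-order histogram features; this is a tiny illustrative subset, not a full radiomics pipeline, and the synthetic ROIs below are invented stand-ins for tumor volumes.

```python
import numpy as np

def histogram_features(roi):
    """A few first-order (histogram) radiomic features of a tumor ROI."""
    roi = roi.ravel()
    mean, std = roi.mean(), roi.std()
    skew = np.mean(((roi - mean) / std) ** 3)
    return {"mean": mean, "std": std, "skewness": skew}

def feature_errors(test_img, ref_img):
    """Percent error of each feature, treating ref_img (e.g. the
    4D-CT ROI) as ground truth, as in the comparison above."""
    f_test = histogram_features(test_img)
    f_ref = histogram_features(ref_img)
    return {k: 100.0 * abs(f_test[k] - f_ref[k]) / abs(f_ref[k])
            for k in f_ref}

rng = np.random.default_rng(1)
ct_roi = rng.normal(50.0, 10.0, size=(16, 16, 16))       # stand-in 4D-CT ROI
cbct_roi = ct_roi + rng.normal(0.0, 5.0, ct_roi.shape)   # noisier 4D-CBCT ROI
errs = feature_errors(cbct_roi, ct_roi)
```

Running the same comparison on an enhanced ROI would quantify how much a deep learning enhancement reduces each feature's error.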
Authors
Zhang, Z; Huang, M; Jiang, Z; Chang, Y; Torok, J; Yin, F-F; Ren, L
MLA Citation
Zhang, Zeyu, et al. “4D radiomics: impact of 4D-CBCT image quality on radiomic analysis.” Phys Med Biol, vol. 66, no. 4, Feb. 2021, p. 045023. Pubmed, doi:10.1088/1361-6560/abd668.
URI
https://scholars.duke.edu/individual/pub1470006
PMID
33361574
Source
pubmed
Published In
Phys Med Biol
Volume
66
Published Date
Feb. 2021
Start Page
045023
DOI
10.1088/1361-6560/abd668

Building a patient-specific model using transfer learning for four-dimensional cone beam computed tomography augmentation.

Background: We previously developed a deep learning model to augment the quality of four-dimensional (4D) cone-beam computed tomography (CBCT). However, the model was trained using group data, and thus was not optimized for individual patients. Consequently, the augmented images could not depict small anatomical structures, such as lung vessels. Methods: In the present study, transfer learning was used to further improve the performance of the deep learning model for individual patients. Specifically, a U-Net-based model was first trained to augment 4D-CBCT using group data. Next, transfer learning was used to fine-tune the model based on a specific patient's available data to improve its performance for that individual patient. Two types of transfer learning were studied: layer-freezing and whole-network fine-tuning. The performance of the transfer learning model was evaluated by comparing the augmented CBCT images with the ground truth images both qualitatively and quantitatively using the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR). The results were also compared to those obtained using only the U-Net method. Results: Qualitatively, the patient-specific model recovered more detailed information in the lung area than the group-based U-Net model. Quantitatively, the SSIM improved from 0.924 to 0.958, and the PSNR improved from 33.77 to 38.42 for the whole volumetric images for the group-based U-Net and patient-specific models, respectively. The layer-freezing method was found to be more efficient than the whole-network fine-tuning method, with a training time as short as 10 minutes. The effect of augmentation by transfer learning increased as the number of projections used for CBCT reconstruction decreased.
Conclusions: Overall, the patient-specific model optimized by transfer learning was efficient and effective at improving the image quality of augmented undersampled three-dimensional (3D)- and 4D-CBCT images, and could be extremely valuable for applications in image-guided radiation therapy.
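Layer-freezing, as described above, fine-tunes only part of a pretrained network on the new patient's data. A toy numpy sketch with a two-layer linear model standing in for the U-Net (all numbers synthetic, invented for illustration): the pretrained feature layer W1 stays frozen while gradient descent updates only the top layer W2.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 8))            # pretrained feature layer (frozen)
W2 = rng.normal(size=(1, 8)) * 0.1      # top layer to fine-tune
X = rng.normal(size=(64, 8))            # stand-in patient-specific inputs
y = X @ rng.normal(size=(8, 1))         # stand-in patient-specific targets

def loss(W2):
    return float(np.mean((X @ W1.T @ W2.T - y) ** 2))

W1_before = W1.copy()
loss_before = loss(W2)
lr = 1e-3
for _ in range(500):
    h = X @ W1.T                              # features from the frozen layer
    grad_W2 = 2.0 * (h @ W2.T - y).T @ h / len(X)
    W2 -= lr * grad_W2                        # only W2 is ever updated
loss_after = loss(W2)
```

Freezing the feature layer means far fewer parameters to update per step, which is why the abstract reports layer-freezing as the more efficient of the two transfer learning variants.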
MLA Citation
Sun, Leshan, et al. “Building a patient-specific model using transfer learning for four-dimensional cone beam computed tomography augmentation.” Quant Imaging Med Surg, vol. 11, no. 2, Feb. 2021, pp. 540–55. Pubmed, doi:10.21037/qims-20-655.
URI
https://scholars.duke.edu/individual/pub1470707
PMID
33532255
Source
pubmed
Published In
Quantitative Imaging in Medicine and Surgery
Volume
11
Published Date
Feb. 2021
Start Page
540
End Page
555
DOI
10.21037/qims-20-655

Research Areas:

4D Imaging
Artificial Intelligence
Deep Learning
Image Synthesis
Image Reconstruction
Image Registration
Image-Guided Radiation Therapy