Lei Ren

Overview:

Dr. Ren's research interests include imaging dose reduction using digital tomosynthesis (DTS), cone-beam CT (CBCT) scatter correction, novel DTS/CBCT/MRI image reconstruction methods using prior information and motion modeling, deformable image registration, image synthesis, image augmentation, 4D imaging, and the development and application of AI in image-guided radiation therapy (IGRT). His clinical expertise focuses on stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT) for brain, lung, and liver cancer patients.

Positions:

Adjunct Professor in the Department of Radiation Oncology, School of Medicine

Member of the Duke Cancer Institute, School of Medicine

Education, Training, and Certification:

Ph.D., Duke University, 2009

Medical Physics Faculty, Radiation Oncology, Henry Ford Health System

Therapeutic Medical Physics, American Board of Radiology

Grants:

A synchronized moving grid (SMOG) system to improve CBCT for IGRT and ART

Administered By: Radiation Oncology
Awarded By: Indiana University
Role: Principal Investigator

Publications:

A geometry-guided deep learning technique for CBCT reconstruction.

Purpose. Although deep learning (DL) techniques have been successfully used for computed tomography (CT) reconstruction, their implementation for cone-beam CT (CBCT) reconstruction is extremely challenging due to memory limitations. In this study, a novel DL technique is developed to resolve the memory issue, and its feasibility is demonstrated for CBCT reconstruction from sparsely sampled projection data. Methods. The novel geometry-guided deep learning (GDL) technique is composed of a GDL reconstruction module and a post-processing module. The GDL reconstruction module learns and performs the projection-to-image domain transformation by replacing the traditional single fully connected layer with an array of small fully connected layers in the network architecture, arranged according to the projection geometry. The DL post-processing module further improves image quality after reconstruction. We demonstrated the feasibility and advantage of the model by comparing ground truth CBCT with CBCT images reconstructed using (1) the GDL reconstruction module only, (2) the GDL reconstruction module with the DL post-processing module, (3) Feldkamp, Davis, and Kress (FDK) only, (4) FDK with the DL post-processing module, (5) ray-tracing only, and (6) ray-tracing with the DL post-processing module. The differences are quantified by peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root-mean-square error (RMSE). Results. CBCT images reconstructed with GDL show improvements in the quantitative scores of PSNR, SSIM, and RMSE. Reconstruction times per image are comparable across all reconstruction methods. Compared to current DL methods using large fully connected layers, the estimated memory requirement of GDL is four orders of magnitude less, making DL CBCT reconstruction feasible. Conclusion. With a much lower memory requirement than other existing networks, the GDL technique is demonstrated to be the first DL technique that can rapidly and accurately reconstruct CBCT images from sparsely sampled data.
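A minimal sketch of the geometry-guided idea described above, assuming the mapping from image patches to the detector pixels whose rays intersect them (here `patch_ray_indices`) has already been computed by a separate ray-tracing step; the names, patch layout, and layer sizes are illustrative, not the published implementation.

```python
import torch
import torch.nn as nn

class GeometryGuidedMapping(nn.Module):
    """Projection-to-image domain transformation built from many small
    fully connected layers instead of one dense layer."""

    def __init__(self, patch_ray_indices, voxels_per_patch):
        # patch_ray_indices: list of LongTensors, one per image patch, giving
        # the flattened detector-pixel indices whose rays intersect that patch.
        super().__init__()
        self.patch_ray_indices = patch_ray_indices
        self.fcs = nn.ModuleList(
            nn.Linear(len(idx), voxels_per_patch) for idx in patch_ray_indices
        )

    def forward(self, projections):
        # projections: (batch, num_projection_pixels), flattened sparse views.
        patches = [fc(projections[:, idx])
                   for fc, idx in zip(self.fcs, self.patch_ray_indices)]
        return torch.cat(patches, dim=1)  # flattened volume; reshape downstream
```

Because each small layer connects only one patch to its geometrically relevant rays, the parameter count scales with those sparse connections rather than with (all detector pixels) x (all voxels), which is consistent with the memory reduction reported in the abstract.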
MLA Citation: Lu, Ke, et al. “A geometry-guided deep learning technique for CBCT reconstruction.” Phys Med Biol, vol. 66, no. 15, July 2021. Pubmed, doi:10.1088/1361-6560/ac145b.
URI: https://scholars.duke.edu/individual/pub1488943
PMID: 34261057
Source: PubMed
Published In: Phys Med Biol
Volume: 66
DOI: 10.1088/1361-6560/ac145b

A generative adversarial network (GAN)-based technique for synthesizing realistic respiratory motion in the extended cardiac-torso (XCAT) phantoms.

Objective. Synthesize realistic and controllable respiratory motions in the extended cardiac-torso (XCAT) phantoms by developing a generative adversarial network (GAN)-based deep learning technique. Methods. A motion generation model was developed using bicycle-GAN with a novel 4D generator. Given the end-of-inhale (EOI) phase images and a Gaussian perturbation as input, the model generates inter-phase deformable vector fields (DVFs), which are composed and applied to the input to generate 4D images. The model was trained and validated using 71 4D-CT images from lung cancer patients and then applied to the XCAT EOI images to generate 4D-XCAT with realistic respiratory motions. A separate respiratory motion amplitude control model was built using decision tree regression to predict the input perturbation needed for a specific motion amplitude; this model was developed using 300 4D-XCAT generated from 6 XCAT phantom sizes with 50 different perturbations for each size. In both patient and phantom studies, Dice coefficients for the lungs and lung volume variation during respiration were compared between the simulated and reference images. The generated DVFs were evaluated by deformation energy. DVFs and ventilation maps of the simulated 4D-CT were compared with those of the reference 4D-CTs using cross correlation and Spearman's correlation. DVFs and ventilation maps were also compared among the original 4D-XCAT, the generated 4D-XCAT, and the reference patient 4D-CTs to show the improvement in motion realism achieved by the model. The amplitude control error was calculated. Results. Comparing the simulated and reference 4D-CTs, the maximum deviation of lung volume during respiration was 5.8%, and the Dice coefficient reached at least 0.95 for the lungs. The generated DVFs presented comparable deformation energy levels. The cross correlation of DVFs achieved 0.89 ± 0.10/0.86 ± 0.12/0.95 ± 0.04 along the x/y/z directions in the testing group. The cross correlation of the derived ventilation maps achieved 0.80 ± 0.05/0.67 ± 0.09/0.68 ± 0.13, and the Spearman's correlation achieved 0.70 ± 0.05/0.60 ± 0.09/0.53 ± 0.01, respectively, in the training/validation/testing groups. The generated 4D-XCAT phantoms presented deformation energy similar to the patient data while maintaining the lung volumes of the original XCAT phantom (Dice = 0.95, maximum lung volume variation = 4%). The motion amplitude control model kept the amplitude control error below 0.5 mm. Conclusions. The results demonstrated the feasibility of synthesizing realistic, controllable respiratory motion in the XCAT phantom using the proposed method. This development enhances the value of XCAT phantoms for various 4D imaging and therapy studies.
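A minimal sketch of the decision-tree amplitude-control step described above, assuming a table relating phantom size and measured motion amplitude to the injected Gaussian perturbation has already been compiled from generated 4D-XCAT runs; the numeric values, feature choices, and tree depth below are made-up illustrations, not the paper's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Illustrative training table: features are [phantom size index, measured
# motion amplitude in mm]; target is the Gaussian perturbation magnitude
# that produced that amplitude in the motion generator. Values are fictional.
X = np.array([[1, 5.2], [1, 9.8], [1, 14.1],
              [2, 6.1], [2, 11.3], [2, 16.0]], dtype=float)
y = np.array([0.3, 0.8, 1.3, 0.35, 0.9, 1.4])

# Fit the regressor, then use it at generation time: given the phantom size
# and the amplitude we want, predict the perturbation to feed the generator.
control_model = DecisionTreeRegressor(max_depth=4).fit(X, y)
perturbation = control_model.predict(np.array([[1, 8.0]]))[0]
print(f"perturbation to request ~8 mm motion: {perturbation:.2f}")
```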
MLA Citation: Chang, Yushi, et al. “A generative adversarial network (GAN)-based technique for synthesizing realistic respiratory motion in the extended cardiac-torso (XCAT) phantoms.” Phys Med Biol, vol. 66, no. 11, May 2021. Pubmed, doi:10.1088/1361-6560/ac01b4.
URI: https://scholars.duke.edu/individual/pub1484348
PMID: 34061044
Source: PubMed
Published In: Phys Med Biol
Volume: 66
DOI: 10.1088/1361-6560/ac01b4

Enhancing digital tomosynthesis (DTS) for lung radiotherapy guidance using patient-specific deep learning model.

Digital tomosynthesis (DTS) has been proposed as a fast, low-dose imaging technique for image-guided radiation therapy (IGRT). However, due to the limited scanning angle, DTS reconstructed by the conventional FDK method suffers from significant distortions and poor plane-to-plane resolution without full volumetric information, which severely limits its capability for image guidance. Although existing deep learning-based methods have shown the feasibility of restoring volumetric information in DTS, they ignored inter-patient variability by training the model on grouped patients. Consequently, the restored images still suffered from blurred and inaccurate edges. In this study, we present a DTS enhancement method based on a patient-specific deep learning model to recover the volumetric information in DTS images. The main idea is to use patient-specific prior knowledge to train the model to learn the patient-specific correlation between DTS and the ground truth volumetric images. To validate the performance of the proposed method, we used both simulated and real on-board projections from lung cancer patient data. Results demonstrated the benefits of the proposed method: (1) qualitatively, DTS enhanced by the proposed method shows CT-like, high image quality with accurate and clear edges; (2) quantitatively, the enhanced DTS has low intensity errors and high structural similarity with respect to the ground truth CT images; (3) in the tumor localization study, compared to the ground truth CT-CBCT registration, the enhanced DTS shows 3D localization errors of ≤0.7 mm and ≤1.6 mm for studies using simulated and real projections, respectively; and (4) the DTS enhancement is nearly real-time. Overall, the proposed method is effective and efficient in enhancing DTS, making it a valuable tool for IGRT applications.
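The patient-specific idea can be summarized as a short training loop run on data from a single patient. The sketch below is a placeholder, assuming DTS volumes simulated from the patient's own prior CT are available as input/target tensors; the toy 3D convolutional network, tensor names, and hyperparameters are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

# Stand-in enhancement network; the published model is more elaborate.
enhancer = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(enhancer.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def train_patient_specific(simulated_dts, prior_ct, epochs=200):
    """simulated_dts, prior_ct: (N, 1, D, H, W) tensors from this patient only,
    e.g. DTS reconstructed from projections simulated over the limited arc."""
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(enhancer(simulated_dts), prior_ct)
        loss.backward()
        optimizer.step()
    return enhancer

# At treatment time, the trained model is applied to the on-board DTS:
#     enhanced_volume = enhancer(onboard_dts)
```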
MLA Citation: Jiang, Zhuoran, et al. “Enhancing digital tomosynthesis (DTS) for lung radiotherapy guidance using patient-specific deep learning model.” Phys Med Biol, vol. 66, no. 3, Jan. 2021, p. 035009. Pubmed, doi:10.1088/1361-6560/abcde8.
URI: https://scholars.duke.edu/individual/pub1465334
PMID: 33238249
Source: PubMed
Published In: Phys Med Biol
Volume: 66
Start Page: 035009
DOI: 10.1088/1361-6560/abcde8

Intensity non-uniformity correction in MR imaging using residual cycle generative adversarial network.

Correcting or reducing the effects of voxel intensity non-uniformity (INU) within a given tissue type is a crucial issue for quantitative magnetic resonance (MR) image analysis in daily clinical practice. Although it has no severe impact on visual diagnosis, INU can highly degrade the performance of automatic quantitative analysis such as segmentation, registration, feature extraction, and radiomics. In this study, we present an advanced deep learning based INU correction algorithm called the residual cycle generative adversarial network (res-cycle GAN), which integrates the residual block concept into a cycle-consistent GAN (cycle-GAN). In the cycle-GAN, an inverse transformation was implemented between the INU-uncorrected and INU-corrected magnetic resonance imaging (MRI) images to constrain the model by forcing the calculation of both an INU-corrected MRI and a synthetic corrected MRI. A fully convolutional neural network integrating residual blocks was applied in the generator of the cycle-GAN to enhance the end-to-end transformation from raw MRI to INU-corrected MRI. A cohort of 55 abdominal patients with T1-weighted MR images and their INU corrections produced with a clinically established and commonly used method, N4ITK, was used as paired data to evaluate the proposed res-cycle GAN based INU correction algorithm. Quantitative comparisons of normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC), and spatial non-uniformity (SNU) were made between the proposed method and other approaches. Our res-cycle GAN based method achieved an NMAE of 0.011 ± 0.002, a PSNR of 28.0 ± 1.9 dB, an NCC of 0.970 ± 0.017, and an SNU of 0.298 ± 0.085. Our proposed method showed significant improvements (p < 0.05) in NMAE, PSNR, NCC, and SNU over other algorithms, including a conventional GAN and U-net. Once the model is well trained, our approach can automatically generate the corrected MR images in a few minutes, eliminating the need for manual parameter setting.
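As an illustration of the residual-block generator idea, a minimal 2D sketch is shown below; the channel counts, normalization choice, slice-wise (2D) processing, and the omission of the full cycle-consistency training loop with discriminators are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)  # identity shortcut eases the end-to-end mapping

class Generator(nn.Module):
    """Maps a raw MR slice to an INU-corrected slice."""
    def __init__(self, channels=64, n_blocks=6):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 7, padding=3), nn.ReLU(inplace=True)]
        layers += [ResidualBlock(channels) for _ in range(n_blocks)]
        layers += [nn.Conv2d(channels, 1, 7, padding=3)]
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)

# In the cycle-consistent setup, two such generators (raw -> corrected and
# corrected -> raw) would be trained jointly with their discriminators.
```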
Authors: Dai, X; Lei, Y; Liu, Y; Wang, T; Ren, L; Curran, WJ; Patel, P; Liu, T; Yang, X
MLA Citation: Dai, Xianjin, et al. “Intensity non-uniformity correction in MR imaging using residual cycle generative adversarial network.” Phys Med Biol, vol. 65, no. 21, Nov. 2020, p. 215025. Pubmed, doi:10.1088/1361-6560/abb31f.
URI: https://scholars.duke.edu/individual/pub1467178
PMID: 33245059
Source: PubMed
Published In: Phys Med Biol
Volume: 65
Start Page: 215025
DOI: 10.1088/1361-6560/abb31f

Adaptive respiratory signal prediction using dual multi-layer perceptron neural networks.

PURPOSE: To improve the prediction accuracy of respiratory signals by adapting a multi-layer perceptron neural network (MLP-NN) model to changing respiratory signals. We previously developed an MLP-NN to predict respiratory signals obtained from a real-time position management (RPM) device. Preliminary testing indicated that poor prediction accuracy may be observed after several seconds for irregular breathing patterns, as only a fixed data set was used in one-time training. To improve the prediction accuracy, we introduced a continuous learning technique using updated training data to replace one-time learning using fixed training data. We carried out this new prediction using an adaptation approach with dual MLP-NNs rather than a single MLP-NN: while one MLP-NN was predicting the respiratory signals, the other was being trained using the updated data, and vice versa. The prediction performance was evaluated by the root-mean-square error (RMSE) between the predicted and true signals from 202 patients' respiratory patterns, each with a 1 min recording length. The effects of adding an additional network, the training parameters, and respiratory signal irregularity on the performance of the new predictor were investigated based on four different network configurations: a single MLP-NN, high-computation dual MLP-NNs (U1), and two different combinations of high- and low-computation dual MLP-NNs (U2 and U3). The RMSEs using the U1 method were reduced by 34%, 19%, and 10% compared to those using the single MLP-NN, U2, and U3 methods, respectively. Continuous training of an MLP-NN based on a dual-network configuration using updated respiratory signals improved prediction accuracy compared to one-time training of an MLP-NN using fixed signals.
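A minimal sketch of the dual-network scheme described above: one MLP predicts the next sample while its twin is refit on the most recent data window, after which the roles swap. The lag length, window size, network size, and the synthetic sine trace standing in for an RPM signal are all illustrative assumptions, and the retraining runs sequentially here rather than in parallel as in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

LAG, WINDOW = 10, 200   # past samples per input / retraining history (illustrative)

def make_xy(signal):
    """Turn a 1-D respiratory trace into (lagged inputs, next-sample targets)."""
    X = np.array([signal[i:i + LAG] for i in range(len(signal) - LAG)])
    return X, signal[LAG:]

history = np.sin(np.linspace(0, 20 * np.pi, 600))  # stand-in for an RPM trace

# Warm start: one-time training of both networks on the first window.
active = MLPRegressor(hidden_layer_sizes=(30,), max_iter=500).fit(*make_xy(history[:WINDOW]))
standby = MLPRegressor(hidden_layer_sizes=(30,), max_iter=500).fit(*make_xy(history[:WINDOW]))

for t in range(WINDOW, len(history)):
    pred = active.predict(history[t - LAG:t].reshape(1, -1))[0]  # real-time prediction
    standby.fit(*make_xy(history[t - WINDOW:t]))                 # continuous retraining
    active, standby = standby, active                            # swap roles each step
```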
Authors: Sun, W; Wei, Q; Ren, L; Dang, J; Yin, F-F
MLA Citation: Sun, Wenzheng, et al. “Adaptive respiratory signal prediction using dual multi-layer perceptron neural networks.” Phys Med Biol, vol. 65, no. 18, Sept. 2020, p. 185005. Pubmed, doi:10.1088/1361-6560/abb170.
URI: https://scholars.duke.edu/individual/pub1461003
PMID: 32924976
Source: PubMed
Published In: Phys Med Biol
Volume: 65
Start Page: 185005
DOI: 10.1088/1361-6560/abb170

Research Areas:

4D Imaging
Artificial Intelligence
Deep Learning
Image Synthesis
Image Reconstruction
Image Registration
Image-Guided Radiation Therapy