Chunhao Wang

Overview:

  • Deep learning methods for image-based radiotherapy outcome prediction and assessment
  • Machine learning in outcome modelling
  • Automation in radiotherapy planning and delivery



Positions:

Assistant Professor of Radiation Oncology, School of Medicine

Member of the Duke Cancer Institute

Education:

Ph.D. 2016

Duke University

Medical Physics Resident, Radiation Oncology Physics Division

Duke University

Publications:

Rapid Auto IMRT Planning Using Cascade Dense Convolutional Neural Network (CDCNN): A Feasibility Study for Fluence Map Prediction Using Deep Learning on Prostate IMRT Patients

Authors
Wang, C; Li, X; Chang, Y; Sheng, Y; Zhang, J; Yin, FF; Wu, QJJ
MLA Citation
Wang, C., et al. “Rapid Auto IMRT Planning Using Cascade Dense Convolutional Neural Network (CDCNN): A Feasibility Study for Fluence Map Prediction Using Deep Learning on Prostate IMRT Patients.” International Journal of Radiation Oncology*Biology*Physics, vol. 105, no. 1, Elsevier BV, 2019, pp. E789–90. Crossref, doi:10.1016/j.ijrobp.2019.06.760.
URI
https://scholars.duke.edu/individual/pub1415074
Source
crossref
Published In
International Journal of Radiation Oncology, Biology, Physics
Volume
105
Published Date
2019
Start Page
E789
End Page
E790
DOI
10.1016/j.ijrobp.2019.06.760

An Interpretable Planning Bot for Pancreas Stereotactic Body Radiation Therapy.

PURPOSE: Pancreas stereotactic body radiation therapy (SBRT) treatment planning requires planners to make sequential, time-consuming interactions with the treatment planning system to reach the optimal dose distribution. We sought to develop a reinforcement learning (RL)-based planning bot to systematically address complex tradeoffs and achieve high plan quality consistently and efficiently. METHODS AND MATERIALS: The focus of pancreas SBRT planning is finding a balance between organ-at-risk sparing and planning target volume (PTV) coverage. Planners evaluate dose distributions and make planning adjustments to optimize PTV coverage while adhering to organ-at-risk dose constraints. We formulated such interactions between the planner and treatment planning system into a finite-horizon RL model. First, planning status features were evaluated based on human planners' experience and defined as planning states. Second, planning actions were defined to represent steps that planners would commonly implement to address different planning needs. Finally, we derived a reward system based on an objective function guided by physician-assigned constraints. The planning bot trained itself with 48 plans augmented from 16 previously treated patients, and generated plans for 24 cases in a separate validation set. RESULTS: All 24 bot-generated plans achieved similar PTV coverages compared with clinical plans while satisfying all clinical planning constraints. Moreover, the knowledge learned by the bot could be visualized and interpreted as consistent with human planning knowledge, and the knowledge maps learned in separate training sessions were consistent, indicating reproducibility of the learning process. CONCLUSIONS: We developed a planning bot that generates high-quality treatment plans for pancreas SBRT. We demonstrated that the training phase of the bot is tractable and reproducible, and the knowledge acquired is interpretable. As a result, the RL planning bot can potentially be incorporated into the clinical workflow and reduce planning inefficiencies.
Authors
MLA Citation
Zhang, Jiahan, et al. “An Interpretable Planning Bot for Pancreas Stereotactic Body Radiation Therapy.” Int J Radiat Oncol Biol Phys, vol. 109, no. 4, Mar. 2021, pp. 1076–85. Pubmed, doi:10.1016/j.ijrobp.2020.10.019.
URI
https://scholars.duke.edu/individual/pub1464387
PMID
33115686
Source
pubmed
Published In
Int J Radiat Oncol Biol Phys
Volume
109
Published Date
March 2021
Start Page
1076
End Page
1085
DOI
10.1016/j.ijrobp.2020.10.019
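
To make the finite-horizon RL formulation in the abstract above concrete, here is a minimal sketch: tabular Q-learning over a discretized planning state and a small planner-style action set. All names, thresholds, and reward weights are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a finite-horizon RL planning loop; every state,
# action, and constant below is a hypothetical stand-in.
import random
from collections import defaultdict

ACTIONS = ["tighten_oar_constraint", "relax_oar_constraint",
           "boost_ptv_coverage", "smooth_fluence"]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2   # learning rate, discount, exploration

def plan_state(ptv_coverage, max_oar_dose):
    # Discretize continuous planning metrics into a coarse state,
    # mimicking the features a human planner inspects between TPS runs.
    return (round(ptv_coverage, 1), round(max_oar_dose / 5) * 5)

def reward(ptv_coverage, max_oar_dose, oar_limit=30.0):
    # Objective guided by physician-assigned constraints: reward PTV
    # coverage, penalize organ-at-risk constraint violations.
    return ptv_coverage - max(0.0, max_oar_dose - oar_limit)

Q = defaultdict(float)   # Q[(state, action)] -> expected return

def choose_action(state):
    if random.random() < EPS:                      # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(s, a, r, s_next):
    # One-step Q-learning backup after observing the TPS response.
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
```

Because the Q-table keys are interpretable (planning state, planner action) pairs, the learned values can be visualized as knowledge maps, which is the interpretability property the abstract emphasizes.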

An artificial intelligence-driven agent for real-time head-and-neck IMRT plan generation using conditional generative adversarial network (cGAN).

PURPOSE: To develop an artificial intelligence (AI) agent for fully automated rapid head-and-neck intensity-modulated radiation therapy (IMRT) plan generation without time-consuming dose-volume-based inverse planning. METHODS: This AI agent was trained by implementing a conditional generative adversarial network (cGAN) architecture. The generator, PyraNet, is a novel deep learning network that implements 28 classic ResNet blocks in pyramid-like concatenations. The discriminator is a customized four-layer DenseNet. The AI agent first generates multiple customized two-dimensional projections at nine template beam angles from a patient's three-dimensional computed tomography (CT) volume and structures. These projections are then stacked as four-dimensional inputs of PyraNet, from which nine radiation fluence maps of the corresponding template beam angles are generated simultaneously. Finally, the predicted fluence maps are automatically postprocessed by Gaussian deconvolution operations and imported into a commercial treatment planning system (TPS) for plan integrity checks and visualization. The AI agent was built and tested upon 231 oropharyngeal IMRT plans from a TPS plan library. 200/16/15 plans were assigned for training/validation/testing, respectively. Only the primary plans in the sequential boost regime were studied. All plans were normalized to a 44 Gy prescription (2 Gy/fx). A customized Haar wavelet loss was adopted for fluence map comparison during the training of PyraNet. For test cases, isodose distributions in AI plans and TPS plans were qualitatively evaluated for overall dose distributions. Key dosimetric metrics were compared by Wilcoxon signed-rank tests with a significance level of 0.05. RESULTS: All 15 AI plans were successfully generated. Isodose gradients outside of the PTV in AI plans were comparable to those of the TPS plans. After PTV coverage normalization, Dmean of the left parotid (DAI = 23.1 ± 2.4 Gy; DTPS = 23.1 ± 2.0 Gy), right parotid (DAI = 23.8 ± 3.0 Gy; DTPS = 23.9 ± 2.3 Gy), and oral cavity (DAI = 24.7 ± 6.0 Gy; DTPS = 23.9 ± 4.3 Gy) in the AI plans and the TPS plans were comparable, without statistically significant differences. AI plans achieved comparable results for maximum dose at 0.01 cc of the brainstem (DAI = 15.0 ± 2.1 Gy; DTPS = 15.5 ± 2.7 Gy) and cord + 5 mm (DAI = 27.5 ± 2.3 Gy; DTPS = 25.8 ± 1.9 Gy) without clinically relevant differences, but body Dmax results (DAI = 121.1 ± 3.9 Gy; DTPS = 109.0 ± 0.9 Gy) were higher than the TPS plan results. The AI agent needed ~3 s to predict the fluence maps of an IMRT plan. CONCLUSIONS: With rapid and fully automated execution, the developed AI agent can generate complex head-and-neck IMRT plans with acceptable dosimetric quality. This approach holds great potential for clinical applications in preplanning decision-making and real-time planning.
Authors
Li, X; Wang, C; Sheng, Y; Zhang, J; Wang, W; Yin, F-F; Wu, Q; Wu, QJ; Ge, Y
URI
https://scholars.duke.edu/individual/pub1474690
PMID
33577108
Source
pubmed
Published In
Med Phys
Published Date
DOI
10.1002/mp.14770
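
As an illustration of the cGAN wiring described in the abstract above, the sketch below pairs a stand-in generator and discriminator in one PyTorch training step. The real PyraNet and four-layer DenseNet are far larger; the tiny networks, the 9-channel input/output shapes, and the L1 term (standing in for the paper's Haar wavelet loss) are simplifications so the adversarial structure is visible.

```python
# Hedged sketch of one cGAN training step for fluence-map prediction;
# both networks are placeholders, not the published architectures.
import torch
import torch.nn as nn

G = nn.Sequential(                       # stand-in for PyraNet
    nn.Conv2d(9, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 9, 3, padding=1))      # 9 projections -> 9 fluence maps
D = nn.Sequential(                       # stand-in for the 4-layer DenseNet
    nn.Conv2d(18, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(projections, true_fluence):
    # Discriminator: real (projection, fluence) pairs vs. generated pairs.
    fake = G(projections)
    d_real = D(torch.cat([projections, true_fluence], dim=1))
    d_fake = D(torch.cat([projections, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool D while matching the ground-truth fluence
    # (plain L1 here, standing in for the customized wavelet loss).
    d_fake = D(torch.cat([projections, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + \
             nn.functional.l1_loss(fake, true_fluence)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Here `train_step` expects `projections` of shape (batch, 9, H, W) and ground-truth fluence maps of matching spatial size; the conditional pairing (input stacked with output before the discriminator) is what makes the GAN conditional.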

Automatic detection of pulmonary nodules on CT images with YOLOv3: development and evaluation using simulated and patient data.

Background: To develop a high-efficiency pulmonary nodule computer-aided detection (CAD) method for localization and diameter estimation. Methods: The developed CAD method centralizes a novel convolutional neural network (CNN) algorithm, You Only Look Once (YOLO) v3, as a deep learning approach. This method features two distinct properties: (I) an automatic multi-scale feature extractor for nodule feature screening, and (II) a feature-based bounding box generator for nodule localization and diameter estimation. Two independent studies were performed to train and evaluate this CAD method. One study was a computer simulation that utilized computer-based ground truth: 300 CT scans were simulated with the extended cardiac-torso (XCAT) digital phantom, and spherical nodules of various sizes (i.e., 3-10 mm in diameter) were randomly implanted within the lung region of the simulated images. The second study utilized human-based ground truth in patients: the CAD method was developed using CT scans sourced from the LIDC-IDRI database, and CT scans with slice thickness above 2.5 mm were excluded, leaving 888 CT images for analysis. A 10-fold cross-validation procedure was implemented in both studies to evaluate network hyper-parameterization and generalization. The overall accuracy of the CAD method was evaluated by the detection sensitivities in response to average false positives (FPs) per image. In the patient study, the detection accuracy was further compared against 9 recently published CAD studies using free-response receiver operating characteristic (FROC) curve analysis. Localization and diameter estimation accuracies were quantified by the mean and standard error between the predicted value and ground truth. Results: The average results among the 10 cross-validation folds in both studies demonstrated that the CAD method achieved high detection accuracy. The sensitivity was 99.3% (FPs = 1) and improved to 100% (FPs = 4) in the simulation study. The corresponding sensitivities were 90.0% and 95.4% in the patient study, displaying superiority over several conventional and CNN-based lung nodule CAD methods in the FROC curve analysis. Nodule localization and diameter estimation errors were less than 1 mm in both studies. The developed CAD method achieved high computational efficiency: it yields nodule-specific quantitative values (i.e., number, existence confidence, central coordinates, and diameter) within 0.1 s for 2D CT slice inputs. Conclusions: The reported results suggest that the developed pulmonary nodule CAD method possesses high accuracies of nodule localization and diameter estimation. The high computational efficiency enables its potential clinical application in the future.
Authors
MLA Citation
Liu, Chenyang, et al. “Automatic detection of pulmonary nodules on CT images with YOLOv3: development and evaluation using simulated and patient data.” Quant Imaging Med Surg, vol. 10, no. 10, Oct. 2020, pp. 1917–29. Pubmed, doi:10.21037/qims-19-883.
URI
https://scholars.duke.edu/individual/pub1461029
PMID
33014725
Source
pubmed
Published In
Quantitative Imaging in Medicine and Surgery
Volume
10
Published Date
October 2020
Start Page
1917
End Page
1929
DOI
10.21037/qims-19-883
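
To make the FROC-style evaluation in the abstract above concrete, the following sketch matches detections to ground-truth nodule centers by distance and reads off sensitivity at a fixed false-positive budget per scan. The tuple layouts, the 5 mm matching radius, and the threshold sweep are assumptions for illustration, not the paper's evaluation code.

```python
# Illustrative scoring of one FROC operating point.
import math

def score_scan(detections, truths, match_radius_mm=5.0):
    """detections: list of (x, y, z, confidence); truths: list of (x, y, z)."""
    detections = sorted(detections, key=lambda d: -d[3])   # most confident first
    hit = [False] * len(truths)
    tp = fp = 0
    for x, y, z, _conf in detections:
        dists = [math.dist((x, y, z), t) for t in truths]
        j = min(range(len(dists)), key=dists.__getitem__, default=None)
        if j is not None and dists[j] <= match_radius_mm and not hit[j]:
            hit[j], tp = True, tp + 1       # first match to a truth is a TP
        else:
            fp += 1                          # unmatched or duplicate detection
    return tp, fp, len(truths)

def sensitivity_at(scans, fp_budget_per_scan):
    # Sweep confidence thresholds; report the sensitivity of the largest
    # detection set whose pooled FP rate stays within the budget.
    thresholds = sorted({d[3] for dets, _ in scans for d in dets}, reverse=True)
    best = 0.0
    for t in thresholds:
        tp = fp = gt = 0
        for dets, gts in scans:
            s_tp, s_fp, s_gt = score_scan([d for d in dets if d[3] >= t], gts)
            tp, fp, gt = tp + s_tp, fp + s_fp, gt + s_gt
        if fp / len(scans) <= fp_budget_per_scan:
            best = max(best, tp / max(gt, 1))
    return best
```

Evaluating `sensitivity_at(scans, 1)` and `sensitivity_at(scans, 4)` over all scans would yield operating points analogous to the FPs = 1 and FPs = 4 sensitivities quoted above.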

Dose-Distribution-Driven PET Image-Based Outcome Prediction (DDD-PIOP): A Deep Learning Study for Oropharyngeal Cancer IMRT Application.

Purpose: To develop a deep learning-based AI agent, DDD-PIOP (Dose-Distribution-Driven PET Image Outcome Prediction), for predicting 18FDG-PET image outcomes of oropharyngeal cancer (OPC) in response to intensity-modulated radiation therapy (IMRT). Methods: DDD-PIOP uses pre-radiotherapy 18FDG-PET/CT images and the planned spatial dose distribution as the inputs, and it predicts the 18FDG-PET image outcomes in response to the planned IMRT delivery. This AI agent centralizes a customized convolutional neural network (CNN) as a deep learning approach, and it incorporates several designs to enhance prediction accuracy. 66 OPC patients who received IMRT treatment on a sequential boost regime (2 Gy/daily fraction) were studied for DDD-PIOP development. 61 patients were used for AI agent training/validation, and the remaining five were used as independent tests. To evaluate the developed AI agent's performance, the predicted mean standardized uptake values (SUVs) of the gross tumor volume (GTV) and clinical target volume (CTV) were compared with the ground-truth values. Overall SUV distribution accuracy was evaluated by gamma test passing rates under different criteria. Results: The developed DDD-PIOP successfully generated 18FDG-PET image outcome predictions for five test patients. The predicted mean SUV values of GTV/CTV were 3.50/1.41, close to the ground-truth values of 3.57/1.51. In 2D-based gamma tests, the average passing rate was 92.1% using 5%/10 mm criteria, which improved to 95.9%/93.2% when focusing on the GTV/CTV regions. The 3D gamma test passing rate was 98.7% using 5%/10 mm criteria, and the corresponding GTV/CTV results were 99.8%/99.4%. Conclusion: The reported results suggest that the developed AI agent DDD-PIOP successfully predicted 18FDG-PET image outcomes with high quantitative accuracy. The generated voxel-based image outcome predictions could be used for treatment planning optimization prior to radiation delivery for the best individual-based outcome.
Authors
MLA Citation
Wang, Chunhao, et al. “Dose-Distribution-Driven PET Image-Based Outcome Prediction (DDD-PIOP): A Deep Learning Study for Oropharyngeal Cancer IMRT Application.” Front Oncol, vol. 10, 2020, p. 1592. Pubmed, doi:10.3389/fonc.2020.01592.
URI
https://scholars.duke.edu/individual/pub1454969
PMID
33014811
Source
pubmed
Published In
Frontiers in Oncology
Volume
10
Published Date
2020
Start Page
1592
DOI
10.3389/fonc.2020.01592
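
Since the DDD-PIOP study above (like the IMRT agent paper earlier) quantifies agreement with gamma tests, a brute-force sketch of a global 2D gamma passing rate is shown below, in the spirit of the quoted 5%/10 mm criterion. The grid layout, the low-signal cutoff, and the global normalization are assumptions; clinical gamma tools use interpolation and are far more optimized.

```python
# Worked sketch of a global 2D gamma test (difference / distance-to-agreement).
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm, dd=0.05, dta_mm=10.0, cutoff=0.1):
    """ref, ev: 2D arrays (dose or SUV) on the same grid; spacing_mm: pixel size."""
    norm = ref.max()
    ys, xs = np.indices(ref.shape)
    passed = total = 0
    for i, j in zip(*np.nonzero(ref > cutoff * norm)):   # skip low-signal pixels
        # Gamma at (i, j): minimum over all evaluated points of the combined
        # distance / difference metric; the point passes if gamma <= 1.
        dist2 = ((ys - i) * spacing_mm) ** 2 + ((xs - j) * spacing_mm) ** 2
        diff2 = ((ev - ref[i, j]) / (dd * norm)) ** 2
        gamma = np.sqrt(dist2 / dta_mm ** 2 + diff2).min()
        passed += int(gamma <= 1.0)
        total += 1
    return passed / max(total, 1)
```

With `dd=0.05` and `dta_mm=10.0` this evaluates the 5%/10 mm criterion; restricting the reference mask to GTV or CTV voxels would give the region-focused passing rates reported in the abstract.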