IEEE Trans Med Imaging. 2020 Feb 24. [Epub ahead of print]
Authors: Liang S, Thung KH, Nie D, Zhang Y, Shen D
Abstract
Accurate segmentation of organs at risk (OARs) from head and neck (H&N) CT images is crucial for effective H&N cancer radiotherapy. However, existing deep learning methods are often not trained in an end-to-end fashion, i.e., they independently predetermine the regions of target organs before organ segmentation, causing limited information sharing between related tasks and thus leading to suboptimal segmentation results. Furthermore, when a conventional segmentation network is used to segment all the OARs simultaneously, the results often favor large OARs over small ones. Thus, existing methods often train a specific model for each OAR, ignoring the correlation between different segmentation tasks. To address these issues, we propose a new multi-view spatial aggregation framework for joint localization and segmentation of multiple OARs using H&N CT images. The core of our framework is a proposed region-of-interest (ROI)-based fine-grained representation convolutional neural network (CNN), which is used to generate multi-OAR probability maps from each 2D view (i.e., axial, coronal, and sagittal) of the CT images. Specifically, our ROI-based fine-grained representation CNN (1) unifies the OAR localization and segmentation tasks and trains them in an end-to-end fashion, and (2) improves the segmentation of OARs of various sizes via a novel ROI-based fine-grained representation. Our multi-view spatial aggregation framework then spatially aggregates and assembles the generated multi-view multi-OAR probability maps to segment all the OARs simultaneously. We evaluate our framework using two sets of H&N CT images and achieve competitive and highly robust segmentation performance for OARs of various sizes.
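The abstract does not specify how the view-wise predictions are fused, so the following is only a minimal sketch of the general multi-view aggregation idea: per-view multi-OAR probability maps (assumed already resampled to the common volume grid) are averaged with equal weights and converted to voxel labels. The equal weighting, the 0.5 background threshold, and the function name are illustrative assumptions, not the paper's method.

```python
import numpy as np

def aggregate_multiview_probabilities(p_axial, p_coronal, p_sagittal):
    """Fuse per-view multi-OAR probability maps into one volumetric prediction.

    Each input has shape (num_oars, D, H, W) and is assumed to be
    resampled back into the common CT volume grid.
    NOTE: equal-weight averaging is an assumption for illustration;
    the paper's actual spatial aggregation may differ.
    """
    fused = (p_axial + p_coronal + p_sagittal) / 3.0
    # Assign each voxel to the OAR with the highest fused probability.
    labels = np.argmax(fused, axis=0)
    # Hypothetical threshold: leave low-confidence voxels as background (-1).
    labels[fused.max(axis=0) < 0.5] = -1
    return fused, labels

# Toy usage: 3 OARs in a small 8x8x8 volume.
rng = np.random.default_rng(0)
views = [rng.random((3, 8, 8, 8)) for _ in range(3)]
fused, labels = aggregate_multiview_probabilities(*views)
print(fused.shape, labels.shape)  # (3, 8, 8, 8) (8, 8, 8)
```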
PMID: 32091997 [PubMed - as supplied by publisher]