3D Image Segmentation with Sparse Annotation by Self-Training and Internal Registration

Bitarafan, A ; Sharif University of Technology | 2020

  1. Type of Document: Article
  2. DOI: 10.1109/JBHI.2020.3038847
  3. Publisher: Institute of Electrical and Electronics Engineers Inc., 2020
  4. Abstract: Anatomical image segmentation is one of the foundations for medical planning. Recently, convolutional neural networks (CNNs) have achieved much success in segmenting volumetric (3D) images when a large number of fully annotated 3D samples are available. However, volumetric medical image datasets containing a sufficient number of segmented 3D images are rarely accessible, since providing manual segmentation masks is monotonous and time-consuming. Thus, to alleviate the burden of manual annotation, we attempt to effectively train a 3D CNN using sparse annotation, where ground truth is available on just one 2D axial slice of each training 3D image. To tackle this problem, we propose a self-training framework that alternates between two steps: assigning pseudo-annotations to unlabeled voxels, and updating the 3D segmentation network using both the labeled and pseudo-labeled voxels. To produce more accurate pseudo-labels, we benefit both from the propagation of labels (or pseudo-labels) between adjacent slices and from the 3D processing of voxels. More precisely, a 2D registration-based method is proposed to gradually propagate labels between consecutive 2D slices, and a 3D U-Net is employed to exploit volumetric information. Ablation studies on benchmarks show that cooperation between the 2D registration and the 3D segmentation yields pseudo-labels accurate enough to train the segmentation network effectively, even when only one expert-segmented slice is available for each training sample. Our method is assessed on the CHAOS and Visceral datasets for segmenting abdominal organs. Results demonstrate that, despite using just one segmented slice per 3D image (weaker supervision than the compared weakly supervised methods), our approach achieves higher performance and comes closer to fully supervised results. (A minimal sketch of the slice-to-slice label propagation and the self-training alternation appears after this record.)
  5. Keywords: Deep learning ; Inter-slice registration ; Medical 3D image segmentation ; Self-training ; Sparse annotation ; Weakly supervised learning ; Backpropagation ; Convolutional neural networks ; Image annotation ; Medical image processing ; 3-d processing ; 3D image segmentation ; 3D segmentation ; Abdominal organs ; Manual annotation ; Manual segmentation ; Supervised methods ; Training sample ; Image segmentation
  6. Source: IEEE Journal of Biomedical and Health Informatics ; 2020
  7. URL: https://ieeexplore.ieee.org/document/9264631
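Illustrative sketch (not the authors' code): the abstract describes a 2D registration step that gradually propagates the single expert-annotated axial slice to its neighbours. The snippet below sketches one plausible realisation in Python, using TV-L1 optical flow from scikit-image as a stand-in for the paper's inter-slice registration; all function and variable names are assumptions made for illustration only.

import numpy as np
from scipy import ndimage
from skimage.registration import optical_flow_tvl1


def propagate_label(fixed_slice, moving_slice, moving_label):
    """Warp `moving_label` (the labels of `moving_slice`) into the frame of
    `fixed_slice` via dense 2D optical flow; nearest-neighbour sampling
    keeps the label map discrete."""
    v, u = optical_flow_tvl1(fixed_slice, moving_slice)
    rows, cols = np.meshgrid(
        np.arange(fixed_slice.shape[0]),
        np.arange(fixed_slice.shape[1]),
        indexing="ij",
    )
    coords = np.array([rows + v, cols + u])
    return ndimage.map_coordinates(moving_label, coords, order=0, mode="nearest")


def propagate_from_annotated_slice(volume, annotated_index, annotated_label):
    """Slide the annotation up and down the axial axis one slice at a time,
    producing pseudo-labels for every slice of the volume."""
    depth = volume.shape[0]
    pseudo = np.zeros(volume.shape, dtype=annotated_label.dtype)
    pseudo[annotated_index] = annotated_label

    # propagate towards the last slice
    for z in range(annotated_index + 1, depth):
        pseudo[z] = propagate_label(volume[z], volume[z - 1], pseudo[z - 1])
    # propagate towards the first slice
    for z in range(annotated_index - 1, -1, -1):
        pseudo[z] = propagate_label(volume[z], volume[z + 1], pseudo[z + 1])
    return pseudo


if __name__ == "__main__":
    # toy volume: 8 axial slices of 64x64, one annotated slice in the middle
    rng = np.random.default_rng(0)
    volume = rng.random((8, 64, 64)).astype(np.float32)
    label = np.zeros((64, 64), dtype=np.uint8)
    label[20:40, 20:40] = 1
    pseudo_labels = propagate_from_annotated_slice(volume, 4, label)
    print(pseudo_labels.shape)  # (8, 64, 64)

The propagated pseudo-labels would then feed the self-training loop the abstract describes: train the 3D network on the labeled plus pseudo-labeled voxels, refresh the pseudo-labels from the network's own predictions, and repeat. A hedged PyTorch sketch of one such alternation, with a tiny placeholder network standing in for the paper's 3D U-Net, might look as follows.

import torch
import torch.nn as nn


class TinyNet3D(nn.Module):
    """Placeholder 3D segmentation network; the paper uses a 3D U-Net."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, n_classes, kernel_size=1),
        )

    def forward(self, x):
        return self.net(x)


def self_training_round(model, optimizer, volume, pseudo_labels, trusted_mask):
    """One alternation: update the network on voxels that currently carry a
    (pseudo-)label, then refresh the pseudo-labels from the network itself."""
    model.train()
    logits = model(volume)                                    # (1, C, D, H, W)
    voxel_loss = nn.functional.cross_entropy(
        logits, pseudo_labels, reduction="none")              # (1, D, H, W)
    loss = (voxel_loss * trusted_mask).sum() / trusted_mask.sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    model.eval()
    with torch.no_grad():
        refreshed = model(volume).argmax(dim=1)               # new pseudo-labels
    return refreshed, loss.item()


# toy usage: one 3D image, pseudo-labels from a propagation step, and a mask
# marking which voxels are currently trusted (at least the expert slice)
model = TinyNet3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
vol = torch.randn(1, 1, 8, 64, 64)
pseudo = torch.zeros(1, 8, 64, 64, dtype=torch.long)
mask = torch.zeros(1, 8, 64, 64)
mask[:, 4] = 1.0                                              # expert-annotated slice
pseudo, loss_value = self_training_round(model, opt, vol, pseudo, mask)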