Search for: self-training

    Deep Semi-Supervised Text Classification

    M.Sc. Thesis, Sharif University of Technology. Karimi, Ali (Author); Semati, Hossein (Supervisor)
    Abstract
    Large data sources, labeled at cost by experts, are essential to the success of deep learning in various domains. However, when labeling is expensive and labeled data is scarce, deep learning generally does not perform well. The goal of semi-supervised learning is to leverage the abundant unlabeled data that can be collected easily. New semi-supervised algorithms based on data augmentation techniques have achieved new advances in this field. In this work, by studying different textual augmentation techniques, a new approach is proposed that can extract effective information signals from unlabeled data. The method encourages the model to generate the same representation vectors for different augmented versions...
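
    The core idea in that last sentence, rewarding the model for producing the same representation for different augmented versions of an unlabeled example, can be sketched as a consistency-regularization loss. The Python snippet below is only an illustrative sketch; encoder, classifier, augment, and the weight lam are hypothetical placeholders, not the thesis's actual code.

    import torch
    import torch.nn.functional as F

    def consistency_loss(encoder, augment, unlabeled_texts):
        # Penalize disagreement between representations of two augmented views
        # of the same unlabeled texts.
        view_a = augment(unlabeled_texts)    # e.g. synonym replacement
        view_b = augment(unlabeled_texts)    # an independent second augmentation
        z_a = encoder(view_a)                # representation vectors, shape (B, D)
        z_b = encoder(view_b)
        return F.mse_loss(z_a, z_b)

    def semi_supervised_loss(encoder, classifier, augment,
                             labeled_texts, labels, unlabeled_texts, lam=1.0):
        # Supervised cross-entropy on labeled data plus the unsupervised
        # consistency term on unlabeled data, weighted by lam.
        logits = classifier(encoder(labeled_texts))
        supervised = F.cross_entropy(logits, labels)
        unsupervised = consistency_loss(encoder, augment, unlabeled_texts)
        return supervised + lam * unsupervised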

    3D Image segmentation with sparse annotation by self-training and internal registration

    Article, IEEE Journal of Biomedical and Health Informatics, 2020. Bitarafan, A.; Nikdan, M.; Soleymanibaghshah, M.; Sharif University of Technology
    Institute of Electrical and Electronics Engineers Inc., 2020
    Abstract
    Anatomical image segmentation is one of the foundations of medical planning. Recently, convolutional neural networks (CNNs) have achieved great success in segmenting volumetric (3D) images when a large number of fully annotated 3D samples are available. However, a volumetric medical image dataset containing a sufficient number of segmented 3D images is rarely accessible, since providing manual segmentation masks is monotonous and time-consuming. Thus, to alleviate the burden of manual annotation, we attempt to effectively train a 3D CNN using sparse annotation, where ground truth is available for just one 2D slice along the axial axis of each training 3D image. To tackle this problem, we propose...
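
    The sparse-annotation setting above (ground truth on a single axial slice per volume, with the model's own confident predictions used as pseudo-labels elsewhere) can be illustrated with a rough self-training step in Python. This is only a sketch under assumed names (model, the volume dictionary fields, conf_thresh); the internal-registration component of the paper is not reproduced here.

    import torch
    import torch.nn.functional as F

    def self_training_step(model, optimizer, volume, conf_thresh=0.9):
        # One update on a single 3D volume that has one annotated axial slice.
        image = volume["image"]          # tensor of shape (1, 1, D, H, W)
        k = volume["labeled_slice"]      # index of the annotated axial slice
        mask_2d = volume["mask_2d"]      # binary float mask for slice k, shape (H, W)

        optimizer.zero_grad()
        probs = torch.sigmoid(model(image))[0, 0]       # foreground probabilities, (D, H, W)

        # Supervised term: only slice k carries ground truth.
        loss = F.binary_cross_entropy(probs[k], mask_2d)

        # Self-training term: confident predictions on the remaining slices
        # act as their own (detached) pseudo-labels.
        with torch.no_grad():
            pseudo = (probs > 0.5).float()
            confident = (probs > conf_thresh) | (probs < 1 - conf_thresh)
            confident[k] = False                        # exclude the labeled slice
        if confident.any():
            loss = loss + F.binary_cross_entropy(probs[confident], pseudo[confident])

        loss.backward()
        optimizer.step()
        return loss.item()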