Search for: interpretability

    Machine Learning Based Modeling of Cognitive Performance from Life-style Data

    M.Sc. Thesis, Sharif University of Technology. Jazayeri, Farnaz (Author); Razvan, Mohammad Reza (Supervisor); Khaligh Razavi, Mahdi (Supervisor)
    Abstract
    For neurodegenerative diseases such as Multiple Sclerosis, Alzheimer’s, or Parkinson’s disease, early detection is essential to slow progression and prevent disease onset. Identifying early signs and symptoms of the disease, as well as modifying lifestyle, can play a crucial role here. The increasing use of smart gadgets and sensors has paved the way for collecting behavioral data and for analyzing it to extract meaningful patterns. In this study, lifestyle and cognitive performance data were collected via a platform called OptiMind. Previous studies have shown that the Integrated Cognitive Assessment (ICA) can identify patients with neurodegenerative disorders (such as... 
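
    The abstract stops short of the modeling details, so as a minimal illustration of the task it describes, the sketch below fits a regressor from tabular lifestyle features to a cognitive score. The feature set, the synthetic data, and the target are hypothetical stand-ins, not the OptiMind schema or the ICA score.

        # Hypothetical sketch: predicting a cognitive score from tabular
        # lifestyle features with scikit-learn. Features and target are
        # illustrative placeholders, not the thesis's actual data.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 500
        # Stand-in lifestyle features: sleep hours, daily steps, screen time, age.
        X = np.column_stack([
            rng.normal(7, 1.5, n),        # sleep hours
            rng.normal(6000, 2000, n),    # daily steps
            rng.normal(4, 2, n),          # screen time (hours)
            rng.integers(20, 80, n),      # age
        ])
        # Synthetic cognitive score, for demonstration only.
        y = (0.5 * X[:, 0] + 0.0003 * X[:, 1] - 0.2 * X[:, 2]
             - 0.05 * X[:, 3] + rng.normal(0, 1, n))

        model = GradientBoostingRegressor(random_state=0)
        scores = cross_val_score(model, X, y, cv=5, scoring="r2")
        print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")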

    A Probabilistic Approach to Assessing and Interpreting Test Suite Effectiveness

    Ph.D. Dissertation, Sharif University of Technology. Agha Mohammadi, Alireza (Author); Mirian Hosseinabadi, Hassan (Supervisor)
    Abstract
    Test suite effectiveness concerns the ability of a test suite to reveal faults. Mutation testing is the de facto standard for assessing test suite effectiveness, but it is a time-consuming process. Over the years, researchers have proposed two kinds of approaches. The first category relies on code coverage criteria and assesses total test suite effectiveness; the second is known as Predictive Mutation Testing (PMT). The approach suggested here is probabilistic and operates at different levels of abstraction (macro and micro). First, at the macro level, there is a code coverage criterion that not only outperforms existing code coverage criteria but also does not have a statistically... 
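
    The mutation score that mutation testing produces is simple to state: the fraction of injected mutants that at least one test detects (kills). A minimal sketch, with hypothetical mutant outcomes:

        # Minimal sketch of the mutation-score computation used to quantify
        # test suite effectiveness. Mutant ids and outcomes are placeholders.
        def mutation_score(results):
            """results maps mutant id -> True if at least one test killed it."""
            killed = sum(1 for k in results.values() if k)
            return killed / len(results) if results else 0.0

        # Example: the suite detects (kills) 3 of 4 mutants.
        outcomes = {"m1": True, "m2": True, "m3": False, "m4": True}
        print(f"mutation score: {mutation_score(outcomes):.2f}")  # 0.75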

    Detecting Metastatic Lung Cancer and Its Lesions From CT-Scan Images Using Deep Interpretable Networks

    M.Sc. Thesis, Sharif University of Technology. Rasekh, Ali (Author); Rabiee, Hamid Reza (Supervisor)
    Abstract
    The use of automated assistants in medical applications has increased in recent years. Among the most popular approaches are artificial intelligence and deep learning methods, which are used extensively in medical image analysis. These methods can improve diagnostic accuracy while running faster, thereby reducing economic costs, error rates, and response times. One important challenge for deep learning methods, however, is the interpretability of neural networks. In this research we focus on introducing an interpretability method for our pixel-wise segmentation network, which is applied to a lung nodules dataset. In this research we first implemented a... 
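
    The excerpt cuts off before naming the interpretability method, so the sketch below shows one common choice for convolutional segmentation networks, a Grad-CAM-style saliency map; the tiny network and the choice of target layer are illustrative, not the thesis's architecture.

        # Hedged sketch: a Grad-CAM-style map for a pixel-wise segmentation
        # network. The toy network stands in for a real lung-nodule model.
        import torch
        import torch.nn as nn

        class TinySegNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
                )
                self.head = nn.Conv2d(8, 1, 1)   # per-pixel nodule logit

            def forward(self, x):
                self.fmap = self.features(x)     # keep activations for CAM
                self.fmap.retain_grad()          # keep their gradients too
                return self.head(self.fmap)

        net = TinySegNet().eval()
        ct_slice = torch.randn(1, 1, 64, 64)     # stand-in CT slice
        logits = net(ct_slice)

        # Back-propagate the total predicted-nodule evidence.
        logits.sum().backward()
        weights = net.fmap.grad.mean(dim=(2, 3), keepdim=True)  # channel weights
        cam = torch.relu((weights * net.fmap).sum(dim=1))       # activation map
        print(cam.shape)  # (1, 64, 64): coarse importance per pixel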

    Analyzing Dermatological Data for Disease Detection Using Interpretable Deep Learning

    M.Sc. Thesis, Sharif University of Technology. Hashemi Golpaygani, Fatemeh Sadat (Author); Rabiee, Hamid Reza (Supervisor); Sharifi Zarchi, Ali (Supervisor); Ghandi, Narges (Co-Supervisor)
    Abstract
    We present a deep neural network to classify dermatological diseases from patient images. Using a self-supervised learning method, we have exploited a large amount of unlabeled data. We pre-trained our model on 27,000 dermoscopic images gathered from Razi Hospital, the leading dermatology hospital in Iran, along with 33,000 images from the ISIC 2020 dataset. We evaluated our model's performance in semi-supervised and transfer learning settings. Our experiments show that this approach can improve model accuracy and PRC by up to 20 percent in the semi-supervised setting. The results also show that pretraining can improve classification PRC by up to 20 percent on the transfer learning task on HAM10000... 
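
    The excerpt does not say which self-supervised objective was used for pretraining; as one plausible example, here is a minimal SimCLR-style contrastive (NT-Xent) loss over two augmented views of a batch, with random embeddings standing in for an image encoder's output.

        # Hedged sketch of one self-supervised pretraining step (SimCLR-style
        # NT-Xent contrastive loss). The exact method in the thesis may differ.
        import torch
        import torch.nn.functional as F

        def nt_xent(z1, z2, temperature=0.5):
            """Contrastive loss over two augmented views z1, z2 (N x D)."""
            z = F.normalize(torch.cat([z1, z2]), dim=1)      # 2N x D
            sim = z @ z.t() / temperature                    # cosine similarities
            n = z1.size(0)
            mask = torch.eye(2 * n, dtype=torch.bool)
            sim.masked_fill_(mask, float("-inf"))            # drop self-pairs
            # The positive for view i is the other view of the same image.
            targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
            return F.cross_entropy(sim, targets)

        # Two views of a batch of 16 images, embedded to 128-d by an encoder.
        z1, z2 = torch.randn(16, 128), torch.randn(16, 128)
        print(nt_xent(z1, z2).item())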

    EEG-based Personalized Interpretable Visual Attention Prediction

    M.Sc. Thesis, Sharif University of Technology. Behnamnia, Armin (Author); Rabiee, Hamid Reza (Supervisor)
    Abstract
    Human visual attention is a mapping that determines which regions of an image a person's eyes focus on most while perceiving it. Personalized visual attention is visual attention computed for a specific individual. The importance of visual attention lies in its wide range of applications in computer vision and cognitive science, such as neural encoding, image captioning, self-driving cars, video anomaly detection, image classification, and visual design. One of the most important aspects of visual attention is personalization: the ability to assign every individual their own specialized attention map. In this project we aim to utilize EEG signals measured from people's brains to predict their... 

    Real-time Automatic Detection and Classification of Colorectal Polyps during Colonoscopy using Interpretable Artificial Intelligence

    M.Sc. Thesis, Sharif University of Technology. Pourmand, Amir (Author); Rabiee, Hamid Reza (Supervisor)
    Abstract
    Cancer is a leading cause of death worldwide, and colorectal cancer is the second leading cause of cancer death in women and the third in men. Because colon polyps can develop into colorectal cancer, early detection of polyps is of great importance. In recent years, many deep learning methods have been proposed for polyp detection with high accuracy, but most have problems with speed, accuracy, or interpretability. Speed is important because colonoscopy should be performed as quickly as possible and, in many cases, cannot be repeated. In addition, many of these methods only address polyp detection, while from a medical point... 
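
    Whether a detector counts as "real-time" reduces to its per-frame latency against the video frame rate. A minimal benchmarking sketch, with a placeholder network standing in for the actual detector:

        # Hedged sketch: measuring per-frame latency and FPS. The model is a
        # placeholder; any detector with a forward pass fits this pattern.
        import time
        import torch
        import torch.nn as nn

        detector = nn.Sequential(                 # stand-in for a real detector
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
        )
        detector.eval()

        frame = torch.randn(1, 3, 224, 224)       # one colonoscopy frame
        with torch.no_grad():
            detector(frame)                       # warm-up pass
            start = time.perf_counter()
            n_frames = 100
            for _ in range(n_frames):
                detector(frame)
            elapsed = time.perf_counter() - start

        print(f"{1000 * elapsed / n_frames:.1f} ms/frame, "
              f"{n_frames / elapsed:.0f} FPS")
        # Colonoscopy video typically runs at ~25-30 FPS, so per-frame
        # latency must stay roughly under 35 ms to keep up.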

    Interpretability of Machine Learning Algorithms through the Lens of Causal Inference

    M.Sc. Thesis, Sharif University of Technology. Fatemi, Pouria (Author); Yassaee, Mohammad Hossein (Supervisor)
    Abstract
    Machine learning is becoming increasingly popular for solving various problems and has become a big part of our daily lives. With the use of complex machine learning models, however, it is important to explain how these algorithms reach their decisions. Knowing why a model makes a certain prediction can be just as important as the accuracy of that prediction in many applications. Unfortunately, the highest accuracies on large data sets are often achieved by complex models that are difficult to interpret even for their designers. Therefore, the interpretability of machine learning algorithms has become just as important as their accuracy. Recently, different methods have been proposed to help users... 
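
    Viewing interpretability through causal inference typically means asking how a prediction changes under an intervention on a feature, rather than under mere correlation. A minimal sketch of that idea, replacing one feature with draws from its marginal and measuring the shift in predictions; this illustrates the general lens, not the thesis's specific method.

        # Minimal sketch in the spirit of intervention-based attribution:
        # "intervene" on one feature and measure the prediction shift.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 3))
        y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)  # feature 2 is irrelevant
        model = LogisticRegression().fit(X, y)

        def interventional_importance(model, X, j, rng, n_rounds=20):
            base = model.predict_proba(X)[:, 1]
            shifts = []
            for _ in range(n_rounds):
                X_do = X.copy()
                X_do[:, j] = rng.permutation(X[:, j])  # draw from the marginal
                shifts.append(np.abs(model.predict_proba(X_do)[:, 1] - base).mean())
            return float(np.mean(shifts))

        for j in range(3):
            print(f"feature {j}: {interventional_importance(model, X, j, rng):.3f}")
        # Expected: feature 0 largest, feature 2 near zero.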

    Proposal of a Numerical Metric for Comparing and Evaluating Interpreting Methods for Machine Learning Models

    M.Sc. Thesis, Sharif University of Technology. Khani, Pouya (Author); Jafari Siavoshani, Mahdi (Supervisor)
    Abstract
    The complexity and non-linearity of today's machine learning-based systems make it difficult for both end users and domain experts to understand the logic behind their decisions and outputs. Explainable AI (XAI) methods have gained significant attention in real-world problems because they enhance our understanding of these models, increasing trust and improving their efficiency. Applying different explanation methods to the same machine learning model does not necessarily produce the same output, so evaluation metrics are needed to assess and compare the quality of explanation methods against one or more definitions of what makes an explanation good. Several... 
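
    One widely used numerical criterion of the kind the abstract calls for is a deletion-style faithfulness score: remove features in the order an explanation ranks them and track how quickly the model's confidence collapses. A minimal sketch with a placeholder model, comparing a signal-aware attribution against a random one:

        # Hedged sketch of a deletion-style faithfulness score. A lower mean
        # confidence over the deletion curve suggests a more faithful
        # explanation. Model and data are placeholders.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 10))
        y = (X[:, :3].sum(axis=1) > 0).astype(int)    # only features 0-2 matter
        model = LogisticRegression().fit(X, y)

        def deletion_score(model, x, attribution):
            cls = int(model.predict([x])[0])          # explain the predicted class
            order = np.argsort(-attribution)          # most important first
            x_cur = x.copy()
            probs = [model.predict_proba([x_cur])[0, cls]]
            for j in order:
                x_cur[j] = 0.0                        # "delete" the feature
                probs.append(model.predict_proba([x_cur])[0, cls])
            return float(np.mean(probs))              # lower = more faithful

        x = X[0]
        informed = np.abs(model.coef_[0] * x)         # signal-aware attribution
        random_attr = rng.normal(size=10)             # uninformative baseline
        print("informed:", deletion_score(model, x, informed))  # typically lower
        print("random:  ", deletion_score(model, x, random_attr))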

    Prediction of DNA/RNA Sequence Binding Site to Protein with the Ability to Implement on GPU

    M.Sc. Thesis, Sharif University of Technology. Tabatabaei, Fatemeh (Author); Koohi, Sommaye (Supervisor)
    Abstract
    Given the importance of DNA/RNA-binding proteins in different cellular processes, finding their binding sites plays a crucial role in many applications, such as drug and vaccine design, protein design, and cancer control. Many studies target this issue and try to improve prediction accuracy with three strategies: complex neural-network structures, various types of inputs, and machine learning methods for extracting input features. But due to the growing volume of sequences, these methods face serious processing challenges. This work therefore presents KDeep, based on a CNN-LSTM and the primary form of DNA/RNA sequences as input. As the key feature improving prediction accuracy, we propose a new encoding... 
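
    The excerpt is truncated before the proposed encoding is described, so the sketch below pairs standard one-hot encoding (as a stand-in) with the CNN-LSTM structure the abstract names: convolutions for local motifs, an LSTM for longer-range context, and a binding/non-binding logit.

        # Hedged sketch of a CNN-LSTM binding-site classifier over raw DNA
        # sequences. One-hot encoding stands in for KDeep's proposed encoding,
        # which the excerpt does not describe.
        import torch
        import torch.nn as nn

        def one_hot(seq):
            """Encode an ACGT string as a (4, len) tensor."""
            idx = {"A": 0, "C": 1, "G": 2, "T": 3}
            out = torch.zeros(4, len(seq))
            for i, base in enumerate(seq):
                out[idx[base], i] = 1.0
            return out

        class CnnLstm(nn.Module):
            def __init__(self):
                super().__init__()
                self.conv = nn.Sequential(           # motif-like local filters
                    nn.Conv1d(4, 16, kernel_size=8), nn.ReLU(), nn.MaxPool1d(2),
                )
                self.lstm = nn.LSTM(16, 32, batch_first=True)  # long-range context
                self.fc = nn.Linear(32, 1)           # binding / non-binding logit

            def forward(self, x):                    # x: (batch, 4, length)
                h = self.conv(x).transpose(1, 2)     # (batch, steps, channels)
                _, (hn, _) = self.lstm(h)
                return self.fc(hn[-1])

        seq = "ACGTACGTACGTACGTACGTACGTACGT"
        logit = CnnLstm()(one_hot(seq).unsqueeze(0))
        print(torch.sigmoid(logit).item())           # P(binding site)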

    EEG-based Thought to Text Conversion Via Interpretable Deep Networks

    M.Sc. Thesis, Sharif University of Technology. Dastani, Saeed (Author); Rabiee, Hamid Reza (Supervisor)
    Abstract
    With the advancement of technologies related to electroencephalography (EEG) signals and brain-computer interfaces, this field has received much attention. This report deals with one of the new and important problems in the field: converting thought into text. In this research, the letters, words, and sentences that a person thinks or utters in their mind are decoded and converted into text based on EEG signals. There is still no credible evidence in neuroscience about whether the same patterns of neuronal activity occur in the brain when thinking about similar letters or words. However, the remarkable growth and development of deep neural networks has made... 

    Investigation and Development of an Interpretable Machine Learning Model in Therapeutic Applications by Providing Solutions to Change the Condition of Patients

    M.Sc. Thesis, Sharif University of Technology. Damandeh, Moloud (Author); Haji, Alireza (Supervisor)
    Abstract
    Despite the significant progress of machine learning models in the health domain, current state-of-the-art methods usually produce non-transparent, black-box models, and for this reason they are not widely used in medical decision-making. To address this lack of transparency, interpretable machine learning models have been developed. In the health domain, counterfactual scenarios can provide personalized explanations for predictions and suggest the changes needed to move a patient from an undesirable outcome class to a desirable one. The aim of this study is to present an interpretable machine learning framework in the health domain that, in addition to having... 
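
    The counterfactual idea the abstract describes can be made concrete as a small optimization: find a minimal change to a patient's features that flips the model's prediction to the desirable class. A sketch under hypothetical model weights and features:

        # Hedged sketch of counterfactual explanation as optimization: push
        # the prediction toward the desirable class while staying close to
        # the patient. Model, weights, and features are illustrative.
        import torch

        torch.manual_seed(0)
        w = torch.tensor([1.5, -2.0, 0.8])     # stand-in logistic model weights
        b = torch.tensor(-0.5)

        def predict(x):
            return torch.sigmoid(x @ w + b)    # P(desirable outcome)

        x0 = torch.tensor([0.2, 1.0, 0.1])     # patient with undesirable prediction
        x = x0.clone().requires_grad_(True)
        opt = torch.optim.Adam([x], lr=0.05)

        for _ in range(300):
            opt.zero_grad()
            # Flip the class, but penalize distance from the original patient.
            loss = (predict(x) - 1.0) ** 2 + 0.1 * torch.sum((x - x0) ** 2)
            loss.backward()
            opt.step()

        print("original prediction:", predict(x0).item())
        print("counterfactual prediction:", predict(x).detach().item())
        print("suggested changes:", (x - x0).detach())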

    Graph Neural Networks Interpretability Diagnosis using Histopathological Images

    M.Sc. Thesis, Sharif University of Technology. Abdous, Sina (Author); Rohban, Mohammad Hossein (Supervisor)
    Abstract
    Deep learning methods are rapidly gaining traction for clinical use in digital pathology. Graph neural networks are increasingly used to classify histopathology images because of their high accuracy in this field, and new interpretability methods have been introduced for these networks; nevertheless, current solutions still face two main issues. First, there is no comprehensive framework for evaluating the effectiveness of interpretability methods for graph networks, particularly on histopathology images. Second, applying conventional interpretability methods to these types of networks for pathology images, with consideration of domain-specific knowledge, has been...
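
    As a concrete example of the kind of method such an evaluation framework would need to score, here is a simple occlusion baseline for node importance on a toy one-layer GCN, written in plain PyTorch; both the model and the random "cell graph" are illustrative.

        # Hedged sketch: occlusion-based node importance for a toy graph
        # classifier, one simple baseline an evaluation framework might
        # compare against. The GCN and the graph are placeholders.
        import torch

        torch.manual_seed(0)
        n_nodes, n_feats = 6, 5
        X = torch.randn(n_nodes, n_feats)           # e.g., per-cell features
        A = (torch.rand(n_nodes, n_nodes) > 0.6).float()
        A = ((A + A.t() + torch.eye(n_nodes)) > 0).float()  # symmetric + self-loops
        W = torch.randn(n_feats, 2)                 # stand-in GCN weights

        def gcn_predict(X, A):
            deg = A.sum(dim=1, keepdim=True)
            H = (A @ X) / deg                       # mean-aggregate neighbors
            return torch.softmax(H.mean(dim=0) @ W, dim=0)  # graph-level probs

        base = gcn_predict(X, A)
        cls = int(base.argmax())

        # Occlude each node (zero its features) and measure the prediction drop.
        for v in range(n_nodes):
            X_occ = X.clone()
            X_occ[v] = 0.0
            drop = (base[cls] - gcn_predict(X_occ, A)[cls]).item()
            print(f"node {v}: importance {drop:+.3f}")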