Interpretability of Machine Learning Algorithms through the Lens of Causal Inference

Fatemi, Pouria | 2024

  1. Type of Document: M.Sc. Thesis
  2. Language: Farsi
  3. Document No: 56949 (05)
  4. University: Sharif University of Technology
  5. Department: Electrical Engineering
  6. Advisor(s): Mohammad Hossein Yassaee
  7. Abstract:
  8. Machine learning is becoming increasingly popular for solving various problems and has become a big part of our daily lives. However, with the use of complex machine learning models, it is important to explain how these algorithms work. In many applications, knowing why a model makes a certain prediction can be just as important as the accuracy of that prediction. Unfortunately, the highest accuracies on large data sets are often achieved by complex models that are difficult to interpret even for their designers. Therefore, the interpretability of machine learning algorithms has become just as important as their accuracy. Recently, different methods have been proposed to help users understand the predictions of complex models. Interpretability matters because it builds trust, provides insight into how to improve a model, and helps us understand how the model works. Example-based methods are a widely used group of techniques for explaining machine learning algorithms. These methods aim to provide a local description of the model around a specific input by giving an example. The most popular example-based method is the counterfactual explanation, which identifies another input that is similar to the original input but changes the model's output. However, the example produced by a counterfactual explanation method may not give a good understanding of the model's behavior, because it does not take into account the causal relationships among the model's inputs. On the other hand, counterfactuals are an essential topic in the field of causal inference, and many studies have been conducted on them. Consequently, in recent years, counterfactual explanations have been improved with the use of structural causal models. In this thesis, we present a new method for interpreting machine learning algorithms. Our approach employs the concept of backtracking counterfactuals, which has recently gained popularity in the field of causal inference.
Firstly, we examine the limitations of previous counterfactual explanation methods. Then, we introduce our novel approach and explore its unique features. We also investigate the relationship between our method and past techniques and demonstrate that it encompasses them in certain scenarios. Finally, we discuss generalizations of our proposed solution and simulate its application. The simulation results indicate that our new solution offers improved insight into model performance and feasibility.
  9. Keywords:
  10. Interpretability ; Explainable Artificial Intelligence ; Causal Inference ; Machine Learning ; Counterfactuals ; Structural Causal Models
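The counterfactual explanation described in the abstract, a nearby input that flips the model's prediction, can be illustrated with a minimal sketch. The code below is not the thesis's method; it is a hypothetical example for the special case of a linear classifier, where the closest (L2) counterfactual is simply the projection of the input onto the decision boundary, nudged slightly past it. The function name and margin parameter are illustrative choices.

```python
import numpy as np

def counterfactual_linear(x, w, b, margin=1e-3):
    """Minimal L2 counterfactual for a linear model f(x) = sign(w.x + b):
    project x onto the hyperplane w.x + b = 0, then step a small margin
    past it so the predicted class actually flips."""
    x, w = np.asarray(x, dtype=float), np.asarray(w, dtype=float)
    score = w @ x + b
    # Component of x along the normal direction that reaches the boundary.
    step = (score / (w @ w)) * w
    # Small extra step past the boundary, against the original sign.
    return x - step - np.sign(score) * margin * w / np.linalg.norm(w)

w, b = np.array([1.0, -2.0]), 0.5
x = np.array([2.0, 0.0])              # w.x + b = 2.5 > 0, so class +1
x_cf = counterfactual_linear(x, w, b)
print(np.sign(w @ x + b), np.sign(w @ x_cf + b))  # opposite signs
```

Note that this purely geometric search is exactly what the abstract criticizes: it moves the input features freely, ignoring any causal relationships among them, which is the gap the thesis's backtracking-counterfactual approach is meant to address.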
