Search for: inference-attacks

    Analysis and Improvement of Privacy-Preserving Federated Learning

    , M.Sc. Thesis, Sharif University of Technology ; Rahmani, Fatemeh (Author) ; Jafari Siavoshani, Mahdi (Supervisor) ; Rohban, Mohammad Hossein (Supervisor)
    Abstract
    Membership inference attacks are among the most important privacy-violating attacks in machine learning, and they also serve as a building block for more serious attacks such as data extraction. Since membership inference is used as a measure of how well a machine learning model protects privacy, various studies have investigated this attack and proposed new variants of it. However, the accuracy of these attacks has not been evaluated on models trained with recent techniques such as data augmentation and regularization. In this research, we observe that the LiRA attack, the latest membership inference attack, which is considerably more powerful than previous...
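
    To make the kind of attack discussed above concrete, below is a minimal sketch of the parametric likelihood-ratio test that LiRA-style membership inference is built on: confidences from shadow models trained with and without a candidate example are modeled as two Gaussians, and the target model's confidence is scored by their log-likelihood ratio. The function and variable names (lira_score, in_confs, out_confs) are illustrative assumptions, not taken from the thesis.

        import numpy as np

        def lira_score(target_conf, in_confs, out_confs):
            """Likelihood-ratio membership score in the spirit of LiRA (illustrative sketch).

            target_conf: the target model's (rescaled) confidence on the candidate example.
            in_confs / out_confs: confidences from shadow models trained with / without it.
            """
            # Fit a Gaussian to each shadow-model confidence population.
            mu_in, sigma_in = np.mean(in_confs), np.std(in_confs) + 1e-8
            mu_out, sigma_out = np.mean(out_confs), np.std(out_confs) + 1e-8

            def log_normal_pdf(x, mu, sigma):
                return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

            # Higher score means the example more likely belongs to the training set.
            return (log_normal_pdf(target_conf, mu_in, sigma_in)
                    - log_normal_pdf(target_conf, mu_out, sigma_out))

    Thresholding this score per example yields the membership decision whose accuracy is the evaluation measure the abstract refers to.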

    Security and searchability in secret sharing-based data outsourcing

    , Article International Journal of Information Security ; Volume 14, Issue 6 , November , 2015 , Pages 513-529 ; 16155262 (ISSN) Hadavi, M. A ; Jalili, R ; Damiani, E ; Cimato, S ; Sharif University of Technology
    Springer Verlag  2015
    Abstract
    A major challenge organizations face when hosting or moving their data to the Cloud is how to support complex queries over outsourced data while preserving their confidentiality. In principle, encryption-based systems can support querying encrypted data, but their high complexity has severely limited their practical use. In this paper, we propose an efficient yet secure secret sharing-based approach for outsourcing relational data to honest-but-curious data servers. The problem with using secret sharing in a data outsourcing scenario is how to efficiently search within randomly generated shares. We present multiple partitioning methods that enable clients to efficiently search among shared... 
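
    As background for the building block this approach relies on, the sketch below implements plain Shamir (k, n) secret sharing over a prime field; it does not include the partitioning methods for searching over shares that the paper contributes. The field size and the function names make_shares and reconstruct are illustrative assumptions.

        import random

        PRIME = 2**61 - 1  # prime modulus for share arithmetic (illustrative choice)

        def make_shares(secret, k, n):
            """Split `secret` into n shares; any k of them reconstruct it."""
            coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
            return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
                    for x in range(1, n + 1)]

        def reconstruct(shares):
            """Lagrange interpolation at x = 0 over the prime field."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num, den = 1, 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = (num * -xj) % PRIME
                        den = (den * (xi - xj)) % PRIME
                secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
            return secret

    For example, make_shares(42, 3, 5) distributes 42 across five servers, and reconstruct applied to any three of the returned shares recovers 42; the searchability problem the paper addresses arises because the individual shares look random.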

    Privacy Against Brute-Force Inference Attacks

    , Article 2019 IEEE International Symposium on Information Theory, ISIT 2019, 7 July 2019 through 12 July 2019 ; Volume 2019-July , 2019 , Pages 637-641 ; 21578095 (ISSN) ; 9781538692912 (ISBN) Osia, S. A ; Rassouli, B ; Haddadi, H ; Rabiee, H. R ; Gunduz, D ; The Institute of Electrical and Electronics Engineers, Information Theory Society ; Sharif University of Technology
    Institute of Electrical and Electronics Engineers Inc  2019
    Abstract
    Privacy-preserving data release is about disclosing information about useful data while retaining the privacy of sensitive data. Assuming that the sensitive data is threatened by a brute-force adversary, we define Guessing Leakage as a measure of privacy, based on the concept of guessing. After investigating the properties of this measure, we derive the optimal utility-privacy trade-off via a linear program with any f-information adopted as the utility measure, and show that the optimal utility is a concave and piecewise-linear function of the privacy-leakage budget.
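
    To illustrate the guessing-based privacy notion, the small numerical example below computes a brute-force adversary's one-shot probability of guessing the sensitive variable S correctly, before and after observing the released variable Y, from a hypothetical joint distribution; the gap between the two is the adversary's guessing advantage. This is only the intuition behind such a measure: the paper's exact Guessing Leakage definition and its linear-programming formulation of the utility-privacy trade-off are not reproduced here.

        import numpy as np

        # Hypothetical joint distribution P(S, Y): rows index the sensitive S, columns the released Y.
        P_sy = np.array([[0.30, 0.10],
                         [0.05, 0.55]])

        p_s = P_sy.sum(axis=1)                      # marginal P(S)

        p_guess_prior = p_s.max()                   # best blind guess: max_s P(s)
        p_guess_posterior = P_sy.max(axis=0).sum()  # best guess given Y: sum_y max_s P(s, y)

        advantage = p_guess_posterior - p_guess_prior
        print(f"prior: {p_guess_prior:.2f}, posterior: {p_guess_posterior:.2f}, gain: {advantage:.2f}")
        # prints: prior: 0.60, posterior: 0.85, gain: 0.25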