Privacy-Preserving Fine-tuning of Parameter-Efficient Language Models
Amir Hossein Hadian | 2024
- Type of Document: M.Sc. Thesis
- Language: Farsi
- Document No: 57553 (19)
- University: Sharif University of Technology
- Department: Computer Engineering
- Advisor(s): Rabiee, Hamid Reza; Hosseini, Abbas
- Abstract:
- Advances in GPUs and the availability of massive datasets, the main drivers behind the emergence of large language models, have profoundly affected human life. Owing to their many applications and rapid development, large language models now influence a growing range of domains and are applied to an ever-wider variety of problems. Solving problems with these models, however, requires suitable training data and substantial computational resources. Because deep models can be attacked in ways that extract their training data, preserving the privacy of training data during the fine-tuning of language models has become a genuine concern. In addition, given limited computational resources, parameter-efficient fine-tuning (PEFT) methods have been proposed for adapting large language models. This research presents a two-step training method, built on parameter-efficient fine-tuning approaches, for the privacy-preserving fine-tuning of language models, enabling a range of problems to be solved with limited resources while protecting the privacy of the training data. In this method, a small number of parameters are first added to the pre-trained language model and trained; the bias parameters of the language model are then trained. The method has been evaluated on two downstream tasks, classification and text generation, using two parameter-efficient language models. A one-to-three-percent improvement across several metrics on the text generation task demonstrates the effectiveness of the approach.
- Keywords:
- Privacy ; Differential Privacy ; Fine Tuning ; Large Language Model ; Parameter-Efficient Fine-Tuning (PEFT) ; Graphic Processing
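The two-step method described in the abstract (train a small set of added parameters, then train only the bias parameters, with differentially private updates) can be illustrated with a minimal sketch. This is not the thesis's implementation: the toy linear model, the adapter vector `a`, and the constants `CLIP`, `SIGMA`, and `LR` are all illustrative assumptions; the DP step follows the standard DP-SGD recipe of per-example gradient clipping plus Gaussian noise.

```python
import random
import math

# Illustrative sketch (NOT the thesis's code): two-stage parameter-efficient
# fine-tuning with DP-SGD-style updates on a toy model y = (w + a) . x + b.
# CLIP, SIGMA, and LR are assumed hyperparameters chosen for this example.
random.seed(0)
DIM, CLIP, SIGMA, LR = 4, 1.0, 0.1, 0.05

w = [0.5] * DIM   # "pre-trained" weights: frozen throughout both stages
a = [0.0] * DIM   # small set of added parameters, trained in stage 1
b = 0.0           # bias parameter, trained in stage 2

# Synthetic "private" training data for a regression task.
true_w, true_b = [1.0] * DIM, 0.3
data = []
for _ in range(200):
    x = [random.gauss(0, 1) for _ in range(DIM)]
    y = sum(ti * xi for ti, xi in zip(true_w, x)) + true_b
    data.append((x, y))

def predict(x):
    return sum((wi + ai) * xi for wi, ai, xi in zip(w, a, x)) + b

def loss():
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

def dp_step(grads):
    """Clip each per-example gradient to norm CLIP, average, add Gaussian noise."""
    avg = [0.0] * len(grads[0])
    for g in grads:
        norm = math.sqrt(sum(gi * gi for gi in g))
        scale = min(1.0, CLIP / (norm + 1e-12))
        for i, gi in enumerate(g):
            avg[i] += gi * scale
    n = len(grads)
    return [gi / n + random.gauss(0, SIGMA * CLIP / n) for gi in avg]

loss_start = loss()

# Stage 1: update only the added parameters `a`; w and b stay fixed.
for _ in range(100):
    per_example = [[2 * (predict(x) - y) * xi for xi in x] for x, y in data]
    g = dp_step(per_example)
    for i in range(DIM):
        a[i] -= LR * g[i]

# Stage 2: update only the bias `b`; w and a stay fixed.
for _ in range(100):
    per_example = [[2 * (predict(x) - y)] for x, y in data]
    g = dp_step(per_example)
    b -= LR * g[0]

loss_end = loss()
print(loss_start > loss_end)  # training reduced the loss
print(w == [0.5] * DIM)       # pre-trained weights were never updated
```

Keeping the pre-trained weights frozen in both stages is what makes the procedure parameter-efficient: only the small adapter vector and the bias receive (noised) gradient updates, so the privacy cost is paid on a tiny fraction of the model's parameters.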