- Type of Document: M.Sc. Thesis
- Language: Farsi
- Document No: 51723 (31)
- University: Sharif University of Technology
- Department: Languages and Linguistics Center
- Advisor(s): Khosravizadeh, Parvaneh
- Abstract:
- One approach to semantic modelling of natural languages is to use distributed representations of words. Earlier methods for producing such representations, such as LSA, LDA, and other techniques borrowed from information retrieval, rest on the idea that a whole document is organized around one central topic that is hidden by the words composing it. These models assign a single general meaning to the entire document and do not treat words as semantically independent; however, each word in a document carries specific senses that these models cannot reflect. On the other hand, work in language modelling has given rise to neural network language models, which produce word representations with a notable ability to capture word senses. Results from applying these representations to tasks such as computing word association have been promising. In this research we evaluate the performance of these word representations on the problem of extracting synonyms and antonyms in the Persian language, and we present new ideas for increasing the effectiveness of these representations as far as possible.
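As a minimal sketch of the general approach described in the abstract (not the thesis's own method), the snippet below ranks candidate words for a Persian query word by cosine similarity between distributed word vectors. The three-dimensional vectors and the word list are hypothetical placeholders; in practice the vectors would come from a neural network language model trained on a Persian corpus.

```python
import numpy as np

# Hypothetical toy embeddings for a few Persian words (illustration only);
# real vectors would be learned by a neural network language model.
embeddings = {
    "خوب":  np.array([0.9, 0.1, 0.2]),   # "good"
    "عالی": np.array([0.8, 0.2, 0.3]),   # "excellent"
    "بد":   np.array([-0.7, 0.1, 0.2]),  # "bad"
    "میز":  np.array([0.1, 0.9, -0.5]),  # "table"
}

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_candidates(query, vocab):
    """Rank all other words by similarity to the query word."""
    q = vocab[query]
    scores = {w: cosine_similarity(q, v) for w, v in vocab.items() if w != query}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for word, score in rank_candidates("خوب", embeddings):
        print(f"{word}\t{score:.3f}")
```

Note that plain cosine similarity tends to rank both synonyms and antonyms highly, since they occur in similar contexts; distinguishing the two relations is part of what extraction methods in this area must address.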
- Keywords:
- Semantic Relation Extraction ; Synonym Extraction ; Antonym Extraction ; Semantic Modeling ; Semantic Relations ; Natural Language