Sakonii committed
Commit 9cdc238 · verified · 1 Parent(s): 78526be

Update README.md

Files changed (1)
  1. README.md +23 -1
README.md CHANGED
@@ -113,7 +113,29 @@ Being extracted and scraped from variety of internet sources, Personal and sensi
 
 ### Citation Information
 
- [More Information Needed]
+ If you use this dataset in your research, please cite:
+
+ ```bibtex
+ @inproceedings{maskey-etal-2022-nepali,
+     title = "{N}epali Encoder Transformers: An Analysis of Auto Encoding Transformer Language Models for {N}epali Text Classification",
+     author = "Maskey, Utsav and
+       Bhatta, Manish and
+       Bhatt, Shiva and
+       Dhungel, Sanket and
+       Bal, Bal Krishna",
+     editor = "Melero, Maite and
+       Sakti, Sakriani and
+       Soria, Claudia",
+     booktitle = "Proceedings of the 1st Annual Meeting of the ELRA/ISCA Special Interest Group on Under-Resourced Languages",
+     month = jun,
+     year = "2022",
+     address = "Marseille, France",
+     publisher = "European Language Resources Association",
+     url = "https://aclanthology.org/2022.sigul-1.14/",
+     pages = "106--111",
+     abstract = "Language model pre-training has significantly impacted NLP and resulted in performance gains on many NLP-related tasks, but comparative study of different approaches on many low-resource languages seems to be missing. This paper attempts to investigate appropriate methods for pretraining a Transformer-based model for the Nepali language. We focus on the language-specific aspects that need to be considered for modeling. Although some language models have been trained for Nepali, the study is far from sufficient. We train three distinct Transformer-based masked language models for Nepali text sequences: distilbert-base (Sanh et al., 2019) for its efficiency and minuteness, deberta-base (P. He et al., 2020) for its capability of modeling the dependency of nearby token pairs and XLM-ROBERTa (Conneau et al., 2020) for its capabilities to handle multilingual downstream tasks. We evaluate and compare these models with other Transformer-based models on a downstream classification task with an aim to suggest an effective strategy for training low-resource language models and their fine-tuning."
+ }
+ ```
 
 ### Contributions
 