Henniina committed on
Commit fbb6347 · verified · 1 Parent(s): cd1e0f0

Update README.md

Files changed (1)
  1. README.md +29 -27
README.md CHANGED
@@ -41,14 +41,15 @@ model-index:
   name: Metric
  ---
 
- # SetFit with TurkuNLP/bert-base-finnish-cased-v1
 
- This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [TurkuNLP/bert-base-finnish-cased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-cased-v1) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
 
  The model has been trained using an efficient few-shot learning technique that involves:
 
- 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
- 2. Training a classification head with features from the fine-tuned Sentence Transformer.
 
  ## Model Details
 
@@ -64,10 +65,9 @@ The model has been trained using an efficient few-shot learning technique that i
 
  ### Model Sources
 
- - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
-
  ### Model Labels
  | Label | Examples |
  |:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
@@ -79,7 +79,7 @@ The model has been trained using an efficient few-shot learning technique that i
  ### Metrics
  | Label | Metric |
  |:--------|:-------|
- | **all** | 0.8267 |
 
  ## Uses
 
@@ -102,23 +102,20 @@ model = SetFitModel.from_pretrained("Finnish-actions/SetFit-FinBERT1-A2-accusati
  preds = model("Etunimi Sukunimi 🙋‍♀️")
  ```
 
- <!--
  ### Downstream Use
 
- *List how someone could finetune this model on their own dataset.*
- -->
 
- <!--
  ### Out-of-Scope Use
 
- *List how the model may foreseeably be misused and address what users ought not to do with the model.*
- -->
 
- <!--
  ## Bias, Risks and Limitations
 
- *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
- -->
 
  <!--
  ### Recommendations
@@ -158,6 +155,7 @@ preds = model("Etunimi Sukunimi 🙋‍♀️")
  - eval_max_steps: -1
  - load_best_model_at_end: False
 
  ### Training Results
  | Epoch | Step | Training Loss | Validation Loss |
  |:------:|:----:|:-------------:|:---------------:|
@@ -223,6 +221,7 @@ preds = model("Etunimi Sukunimi 🙋‍♀️")
  | 3.8728 | 2800 | 0.0 | - |
  | 3.9419 | 2850 | 0.0 | - |
  | 4.0 | 2892 | - | 0.3083 |
 
  ### Framework Versions
  - Python: 3.11.9
@@ -235,18 +234,21 @@ preds = model("Etunimi Sukunimi 🙋‍♀️")
 
  ## Citation
 
  ### BibTeX
  ```bibtex
- @article{https://doi.org/10.48550/arxiv.2209.11055,
- doi = {10.48550/ARXIV.2209.11055},
- url = {https://arxiv.org/abs/2209.11055},
- author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
- keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
- title = {Efficient Few-Shot Learning Without Prompts},
- publisher = {arXiv},
- year = {2022},
- copyright = {Creative Commons Attribution 4.0 International}
  }
  ```
 
  <!--
 
   name: Metric
  ---
 
+ # Detect Actions in Asynchronous Conversation Comments
 
+ ## SetFit with TurkuNLP/bert-base-finnish-cased-v1
 
+ This is a SetFit model for Text Classification of actions in asynchronous conversation. This particular model detects whether a comment includes an accusation or not. This model configuration is based on a single annotator's annotations (annotator A2). Metric evaluations are based on the conservative ground truth (see paper). This SetFit model uses TurkuNLP/bert-base-finnish-cased-v1 as the Sentence Transformer embedding model (using word embeddings). A LogisticRegression instance is used for classification.
  The model has been trained using an efficient few-shot learning technique that involves:
 
+ 1. Fine-tuning a Sentence Transformer with contrastive learning.
+ 2. Training a classification head with features from the fine-tuned Sentence Transformer.
 
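Step 1 can be illustrated by how contrastive training pairs are formed from few-shot labels: same-label examples become positive pairs, cross-label examples become negative pairs. A simplified pure-Python sketch (the `setfit` library does this pair sampling internally; the example texts and labels here are invented):

```python
from itertools import combinations

def contrastive_pairs(texts, labels):
    """Pair up labeled examples for contrastive fine-tuning:
    same-label pairs get similarity 1.0 (positive),
    different-label pairs get 0.0 (negative)."""
    pairs = []
    for (text_a, label_a), (text_b, label_b) in combinations(zip(texts, labels), 2):
        pairs.append((text_a, text_b, 1.0 if label_a == label_b else 0.0))
    return pairs

# Toy few-shot set: 1 = accusation, 0 = not an accusation (invented examples)
texts = ["Tämä on sinun syytäsi!", "Sinä valehtelet.", "Kiitos avusta."]
labels = [1, 1, 0]
pairs = contrastive_pairs(texts, labels)
# 3 examples yield 3 pairs: 1 positive (the two accusations) and 2 negatives
```

Step 2 then fits the LogisticRegression head on embeddings produced by the fine-tuned encoder.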
  ## Model Details
 
 
  ### Model Sources
 
+ - **Repository:** [GitHub](https://github.com/henniina/Detecting-paired-actions)
+ - **Paper:** Paakki, H., Toivanen, P. and Kajava, K. (2025). Implicit and Indirect: Detecting Face-threatening and Paired Actions in Asynchronous Online Conversations. Northern European Journal of Language Technology (NEJLT), 11(1), pp. 58-83.
 
  ### Model Labels
  | Label | Examples |
  |:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 
  ### Metrics
  | Label | Metric |
  |:--------|:-------|
+ | **all** | 0.72 |
 
  ## Uses
 
 
  preds = model("Etunimi Sukunimi 🙋‍♀️")
  ```
 
+
  ### Downstream Use
 
+ NB: This model has been trained on data from Finnish-language asynchronous conversations under crisis-related news on Facebook. This specific model has been trained to detect whether a comment includes an accusation or not. It reflects only one of our annotators' label interpretations, so the best use of our models (see our paper) is to combine the set of models we provide on our Hugging Face page (Finnish-actions) and use a model ensemble to produce label predictions. Note also that the model may not be applicable outside of its empirical context, so in downstream applications one should always evaluate model applicability on manually annotated data from that specific context (see our paper for annotation instructions).
 
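The recommended ensemble use could be sketched as majority voting over the per-annotator models. The `models` entries below are stand-in functions rather than real SetFit checkpoints, and the tie-break toward the negative class is our own conservative choice, not part of the original setup:

```python
from collections import Counter

def ensemble_predict(comment, models):
    """Majority vote over several per-annotator classifiers.
    Each entry in `models` maps a comment string to a 0/1 label."""
    votes = [predict(comment) for predict in models]
    label, count = Counter(votes).most_common(1)[0]
    # Require a strict majority; on a tie, fall back to 0 ("no action")
    return label if count > len(votes) - count else 0

# Stand-ins for SetFit models trained on annotators A1/A2/A3 (illustrative only)
models = [
    lambda c: 1 if "syy" in c.lower() else 0,
    lambda c: 1 if c.endswith("!") else 0,
    lambda c: 0,
]
label = ensemble_predict("Tämä on sinun syytäsi!", models)  # two of three vote 1
```

With the real checkpoints, `SetFitModel` instances are callable on a string, so they could be dropped into the `models` list directly.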
 
 
  ### Out-of-Scope Use
 
+ Please use this model only for action detection and analysis. Use of this model and the involved data for generative purposes (e.g. NLG) is prohibited.
 
  ## Bias, Risks and Limitations
 
+ Note that the model may produce errors. Due to the size of the training dataset, the model may not generalize well even to other novel topics within the same context. Model predictions should not be regarded as final judgments, e.g. for online moderation purposes; each case should be considered individually when model predictions are used to support moderation. The annotations also reflect only three (though experienced) annotators' interpretations, so there may be perspectives on data interpretation that have not been taken into account here.
+ If the model is used to support moderation on social media, we recommend that final judgments always be left to human moderators.
+
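In line with the limitations above, a quick spot-check of the positive class against a small manually annotated sample might look like the following sketch (the predictions and gold labels are toy values, and the helper name is hypothetical):

```python
def prf1(preds, gold):
    """Precision, recall and F1 for the positive ("action present") class,
    computed against manual annotations."""
    tp = sum(p == 1 and g == 1 for p, g in zip(preds, gold))
    fp = sum(p == 1 and g == 0 for p, g in zip(preds, gold))
    fn = sum(p == 0 and g == 1 for p, g in zip(preds, gold))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

preds = [1, 0, 1, 1, 0]   # model outputs on a manually annotated sample (toy)
gold  = [1, 0, 0, 1, 1]   # the manual annotations (toy)
precision, recall, f1 = prf1(preds, gold)
```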
 
  <!--
  ### Recommendations
 
  - eval_max_steps: -1
  - load_best_model_at_end: False
 
+ <!--
  ### Training Results
  | Epoch | Step | Training Loss | Validation Loss |
  |:------:|:----:|:-------------:|:---------------:|
 
  | 3.8728 | 2800 | 0.0 | - |
  | 3.9419 | 2850 | 0.0 | - |
  | 4.0 | 2892 | - | 0.3083 |
+ -->
 
  ### Framework Versions
  - Python: 3.11.9
 
 
  ## Citation
 
+ If you use this model, please cite the following work:
+
  ### BibTeX
  ```bibtex
+ @article{paakki-implicit-indirect,
+ doi = {10.3384/nejlt.2000-1533.2025.5980},
+ url = {https://nejlt.ep.liu.se/article/view/5980},
+ author = {Paakki, Henna and Toivanen, Pihla and Kajava, Kaisla},
+ title = {Implicit and Indirect: Detecting Face-threatening and Paired Actions in Asynchronous Online Conversations},
+ journal = {Northern European Journal of Language Technology (NEJLT)},
+ volume = {11},
+ number = {1},
+ pages = {58--83},
+ year = {2025}
  }
+
  ```
 
  <!--