monsoon-nlp committed
Commit 9611604 · Parent(s): e1ad462

new decoder path
README.md ADDED
@@ -0,0 +1,76 @@
+ ---
+ language: es
+ ---
+
+ # es-seq2seq-gender (decoder)
+
+ This is a seq2seq model (decoder half) to "flip" gender in Spanish sentences.
+ The model can augment your existing Spanish data, or generate counterfactuals
+ to test a model's decisions (would changing the gender of the subject or speaker change its output?).
+ A counterfactual-testing sketch follows the examples below.
+
+ Intended examples:
+
+ - el profesor viejo => la profesora vieja (article, noun, adjective all flip)
+ - una actriz => un actor (irregular noun)
+ - el lingüista => la lingüista (irregular noun)
+ - la biblioteca => la biblioteca (no person, no flip)
+
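+ For counterfactual testing, you run a downstream model on both the original and the
+ flipped sentence and compare its decisions. A minimal sketch, assuming a hypothetical
+ Spanish sentiment classifier (the model name below is a placeholder, not part of this repo):
+
+ ```python
+ from transformers import pipeline
+
+ # placeholder model name: substitute any Spanish sentiment classifier
+ classifier = pipeline("sentiment-analysis", model="your-spanish-sentiment-model")
+
+ original = "el profesor viejo es brillante"
+ flipped = "la profesora vieja es brillante"  # seq2seq gender flip of the original
+
+ # a label or score shift suggests gender influenced the model's decision
+ print(classifier(original))
+ print(classifier(flipped))
+ ```
+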
+ People's names are unchanged in this version, but you can use packages
+ such as https://pypi.org/project/gender-guesser/ to detect names that
+ may also need to be swapped; a sketch follows.
+
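+ A minimal sketch with gender-guesser (assuming `pip install gender-guesser`; the names
+ below are arbitrary examples):
+
+ ```python
+ import gender_guesser.detector as gender
+
+ # Detector().get_gender() returns 'male', 'female', 'mostly_male',
+ # 'mostly_female', 'andy' (androgynous), or 'unknown'
+ detector = gender.Detector()
+
+ for name in ["Marta", "Carlos", "Alex"]:
+     print(name, "=>", detector.get_gender(name))
+ ```
+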
+ ## Sample code
+
+ https://colab.research.google.com/drive/1Ta_YkXx93FyxqEu_zJ-W23PjPumMNHe5
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, EncoderDecoderModel
+
+ # load the trained encoder and decoder halves as one seq2seq model
+ model = EncoderDecoderModel.from_encoder_decoder_pretrained("monsoon-nlp/es-seq2seq-gender-encoder", "monsoon-nlp/es-seq2seq-gender-decoder")
+ tokenizer = AutoTokenizer.from_pretrained('monsoon-nlp/es-seq2seq-gender-decoder') # all are same as BETO uncased original
+
+ input_ids = torch.tensor(tokenizer.encode("la profesora vieja")).unsqueeze(0)
+ generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id)
+ print(tokenizer.decode(generated.tolist()[0]))
+ # > '[PAD] el profesor viejo profesor viejo profesor...'
+ ```
+
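+ The generated text repeats after the flipped phrase. One way to trim it (an assumption
+ for illustration, not code from this repo) is to keep roughly one output token per
+ input token:
+
+ ```python
+ # drop the leading start token, then trim to the input length (minus [CLS]/[SEP])
+ output_ids = generated.tolist()[0][1:input_ids.shape[1] - 1]
+ print(tokenizer.decode(output_ids))
+ # expected: 'el profesor viejo'
+ ```
+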
+ ## Training
+
+ I originally developed
+ <a href="https://github.com/MonsoonNLP/el-la">a gender flip Python script</a>
+ with
+ <a href="https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased">BETO</a>,
+ the Spanish-language BERT from Universidad de Chile,
+ and spaCy to parse dependencies in sentences.
+
+ More about this project: https://medium.com/ai-in-plain-english/gender-bias-in-spanish-bert-1f4d76780617
+
+ The seq2seq model is trained on gender-flipped text from that script, run on the
+ <a href="https://huggingface.co/datasets/muchocine">muchocine dataset</a>
+ and the first 6,853 lines of the
+ <a href="https://oscar-corpus.com/">OSCAR corpus</a>
+ (Spanish deduplicated).
+
+ The encoder and decoder started with weights and vocabulary from BETO (uncased).
+
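+ A rough sketch of that setup (the exact training code is not in this repo, so the
+ example pair and training details below are assumptions based on the description):
+
+ ```python
+ from transformers import AutoTokenizer, EncoderDecoderModel
+
+ # both halves start from BETO (uncased), as described above
+ beto = "dccuchile/bert-base-spanish-wwm-uncased"
+ model = EncoderDecoderModel.from_encoder_decoder_pretrained(beto, beto)
+ tokenizer = AutoTokenizer.from_pretrained(beto)
+
+ # each training pair maps an original sentence to its gender-flipped version
+ source = tokenizer("el profesor viejo", return_tensors="pt")
+ target = tokenizer("la profesora vieja", return_tensors="pt")
+
+ loss = model(input_ids=source.input_ids,
+              attention_mask=source.attention_mask,
+              decoder_input_ids=target.input_ids,
+              labels=target.input_ids).loss
+ loss.backward()  # then step an optimizer over the full flipped corpus
+ ```
+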
+ ## Non-binary gender
+
+ This model is useful to generate male and female text samples, but falls
+ short of capturing gender diversity in the world and in the Spanish
+ language. Some communities prefer the plural -@s to represent
+ -os and -as, or -e and -es for gender-neutral or mixed-gender plurals,
+ or use less-gendered professional nouns (la juez rather than la jueza). These forms
+ are not yet embraced by the Royal Spanish Academy
+ and are not represented in the corpora and tokenizers used to build this project.
+
+ This seq2seq project and script could, in the future, help generate more text samples
+ and prepare NLP models to understand us all better.
+
+ #### Sources
+
+ - https://www.nytimes.com/2020/04/15/world/americas/argentina-gender-language.html
+ - https://www.washingtonpost.com/dc-md-va/2019/12/05/teens-argentina-are-leading-charge-gender-neutral-language/?arc404=true
+ - https://www.theguardian.com/world/2020/jan/19/gender-neutral-language-battle-spain
+ - https://es.wikipedia.org/wiki/Lenguaje_no_sexista
+ - https://remezcla.com/culture/argentine-company-re-imagines-little-prince-gender-neutral-language/
config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "add_cross_attention": true,
+   "architectures": [
+     "BertLMHeadModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "is_decoder": true,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "output_past": true,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "type_vocab_size": 2,
+   "vocab_size": 31002
+ }
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a7cb79f2638210339579e4c8ddc4bcbd74e850d86e495405f44c3eefaf7d1205
+ size 553140698
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "special_tokens_map_file": "/root/.cache/huggingface/transformers/78141ed1e8dcc5ff370950397ca0d1c5c9da478f54ec14544187d8a93eff1a26.dd8bd9bfd3664b530ea4e645105f557769387b3da9f79bdb55ed556bdd80611d", "name_or_path": "dccuchile/bert-base-spanish-wwm-uncased", "do_basic_tokenize": true, "never_split": null}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff