---
license: cc
multilinguality: multilingual
task_categories:
  - multiple-choice
pretty_name: Tokenization Robustness
tags:
  - multilingual
  - tokenization
dataset_info:
  - config_name: tokenizer_robustness_completion_chinese_canonical
    features:
      - name: question
        dtype: string
      - name: choices
        list: string
      - name: answer
        dtype: int64
      - name: answer_label
        dtype: string
      - name: split
        dtype: string
      - name: subcategories
        dtype: string
      - name: category
        dtype: string
      - name: lang
        dtype: string
      - name: second_lang
        dtype: string
      - name: notes
        dtype: string
      - name: id
        dtype: string
      - name: set_id
        dtype: string
      - name: variation_id
        dtype: string
    splits:
      - name: test
        num_bytes: 8225
        num_examples: 40
    download_size: 9396
    dataset_size: 8225
  - config_name: tokenizer_robustness_completion_chinese_code_language_script_switching
    features:
      - name: question
        dtype: string
      - name: choices
        list: string
      - name: answer
        dtype: int64
      - name: answer_label
        dtype: string
      - name: split
        dtype: string
      - name: subcategories
        dtype: string
      - name: category
        dtype: string
      - name: lang
        dtype: string
      - name: second_lang
        dtype: string
      - name: notes
        dtype: string
      - name: id
        dtype: string
      - name: set_id
        dtype: string
      - name: variation_id
        dtype: string
    splits:
      - name: test
        num_bytes: 8136
        num_examples: 40
    download_size: 8261
    dataset_size: 8136
  - config_name: tokenizer_robustness_completion_chinese_colloquial
    features:
      - name: question
        dtype: string
      - name: choices
        list: string
      - name: answer
        dtype: int64
      - name: answer_label
        dtype: string
      - name: split
        dtype: string
      - name: subcategories
        dtype: string
      - name: category
        dtype: string
      - name: lang
        dtype: string
      - name: second_lang
        dtype: string
      - name: notes
        dtype: string
      - name: id
        dtype: string
      - name: set_id
        dtype: string
      - name: variation_id
        dtype: string
    splits:
      - name: test
        num_bytes: 7442
        num_examples: 39
    download_size: 8111
    dataset_size: 7442
  - config_name: tokenizer_robustness_completion_chinese_equivalent_expressions
    features:
      - name: question
        dtype: string
      - name: choices
        list: string
      - name: answer
        dtype: int64
      - name: answer_label
        dtype: string
      - name: split
        dtype: string
      - name: subcategories
        dtype: string
      - name: category
        dtype: string
      - name: lang
        dtype: string
      - name: second_lang
        dtype: string
      - name: notes
        dtype: string
      - name: id
        dtype: string
      - name: set_id
        dtype: string
      - name: variation_id
        dtype: string
    splits:
      - name: test
        num_bytes: 7907
        num_examples: 40
    download_size: 8383
    dataset_size: 7907
  - config_name: tokenizer_robustness_completion_chinese_keyboard_proximity_errors
    features:
      - name: question
        dtype: string
      - name: choices
        list: string
      - name: answer
        dtype: int64
      - name: answer_label
        dtype: string
      - name: split
        dtype: string
      - name: subcategories
        dtype: string
      - name: category
        dtype: string
      - name: lang
        dtype: string
      - name: second_lang
        dtype: string
      - name: notes
        dtype: string
      - name: id
        dtype: string
      - name: set_id
        dtype: string
      - name: variation_id
        dtype: string
    splits:
      - name: test
        num_bytes: 7340
        num_examples: 40
    download_size: 8251
    dataset_size: 7340
  - config_name: tokenizer_robustness_completion_chinese_ocr_errors
    features:
      - name: question
        dtype: string
      - name: choices
        list: string
      - name: answer
        dtype: int64
      - name: answer_label
        dtype: string
      - name: split
        dtype: string
      - name: subcategories
        dtype: string
      - name: category
        dtype: string
      - name: lang
        dtype: string
      - name: second_lang
        dtype: string
      - name: notes
        dtype: string
      - name: id
        dtype: string
      - name: set_id
        dtype: string
      - name: variation_id
        dtype: string
    splits:
      - name: test
        num_bytes: 8441
        num_examples: 40
    download_size: 8307
    dataset_size: 8441
  - config_name: tokenizer_robustness_completion_chinese_optional_diacritics
    features:
      - name: question
        dtype: string
      - name: choices
        list: string
      - name: answer
        dtype: int64
      - name: answer_label
        dtype: string
      - name: split
        dtype: string
      - name: subcategories
        dtype: string
      - name: category
        dtype: string
      - name: lang
        dtype: string
      - name: second_lang
        dtype: string
      - name: notes
        dtype: string
      - name: id
        dtype: string
      - name: set_id
        dtype: string
      - name: variation_id
        dtype: string
    splits:
      - name: test
        num_bytes: 10200
        num_examples: 40
    download_size: 8835
    dataset_size: 10200
  - config_name: tokenizer_robustness_completion_chinese_partially_romanized
    features:
      - name: question
        dtype: string
      - name: choices
        list: string
      - name: answer
        dtype: int64
      - name: answer_label
        dtype: string
      - name: split
        dtype: string
      - name: subcategories
        dtype: string
      - name: category
        dtype: string
      - name: lang
        dtype: string
      - name: second_lang
        dtype: string
      - name: notes
        dtype: string
      - name: id
        dtype: string
      - name: set_id
        dtype: string
      - name: variation_id
        dtype: string
    splits:
      - name: test
        num_bytes: 7680
        num_examples: 40
    download_size: 8217
    dataset_size: 7680
  - config_name: tokenizer_robustness_completion_chinese_romanization
    features:
      - name: question
        dtype: string
      - name: choices
        list: string
      - name: answer
        dtype: int64
      - name: answer_label
        dtype: string
      - name: split
        dtype: string
      - name: subcategories
        dtype: string
      - name: category
        dtype: string
      - name: lang
        dtype: string
      - name: second_lang
        dtype: string
      - name: notes
        dtype: string
      - name: id
        dtype: string
      - name: set_id
        dtype: string
      - name: variation_id
        dtype: string
    splits:
      - name: test
        num_bytes: 7859
        num_examples: 40
    download_size: 8285
    dataset_size: 7859
configs:
  - config_name: tokenizer_robustness_completion_chinese_canonical
    data_files:
      - split: test
        path: tokenizer_robustness_completion_chinese_canonical/test-*
  - config_name: tokenizer_robustness_completion_chinese_code_language_script_switching
    data_files:
      - split: test
        path: >-
          tokenizer_robustness_completion_chinese_code_language_script_switching/test-*
  - config_name: tokenizer_robustness_completion_chinese_colloquial
    data_files:
      - split: test
        path: tokenizer_robustness_completion_chinese_colloquial/test-*
  - config_name: tokenizer_robustness_completion_chinese_equivalent_expressions
    data_files:
      - split: test
        path: tokenizer_robustness_completion_chinese_equivalent_expressions/test-*
  - config_name: tokenizer_robustness_completion_chinese_keyboard_proximity_errors
    data_files:
      - split: test
        path: >-
          tokenizer_robustness_completion_chinese_keyboard_proximity_errors/test-*
  - config_name: tokenizer_robustness_completion_chinese_ocr_errors
    data_files:
      - split: test
        path: tokenizer_robustness_completion_chinese_ocr_errors/test-*
  - config_name: tokenizer_robustness_completion_chinese_optional_diacritics
    data_files:
      - split: test
        path: tokenizer_robustness_completion_chinese_optional_diacritics/test-*
  - config_name: tokenizer_robustness_completion_chinese_partially_romanized
    data_files:
      - split: test
        path: tokenizer_robustness_completion_chinese_partially_romanized/test-*
  - config_name: tokenizer_robustness_completion_chinese_romanization
    data_files:
      - split: test
        path: tokenizer_robustness_completion_chinese_romanization/test-*

# Dataset Card for Tokenization Robustness

A comprehensive evaluation dataset for testing the robustness of language models to different tokenization strategies.

## Dataset Details

### Dataset Description

This dataset evaluates how robust language models are to different tokenization strategies and edge cases. It includes text-completion questions with multiple-choice answers designed to test various aspects of tokenization handling.

- **Curated by:** R3
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** Chinese
- **License:** cc

### Dataset Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Dataset Structure

The dataset contains multiple-choice completion questions with metadata describing the tokenization variation applied. It is organized into nine configurations, each targeting a different way of writing Chinese text: canonical, code/language script switching, colloquial, equivalent expressions, keyboard-proximity errors, OCR errors, optional diacritics, partially romanized, and romanization. Each configuration has a single `test` split of 39–40 examples.
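Given the features declared in the YAML header above, each example can be pictured as a plain record like the sketch below. The field values here are invented for illustration; only the schema (field names and types) comes from the dataset card. The `answer` field is an integer index into `choices`:

```python
# Illustrative record mirroring the declared features. Field values are
# hypothetical; only the field names and types follow the dataset card.
example = {
    "question": "请补全下面的句子：今天天气很……",
    "choices": ["好", "书", "跑"],
    "answer": 0,                # int64 index into `choices`
    "answer_label": "A",
    "split": "test",
    "subcategories": "weather",
    "category": "completion",
    "lang": "zh",
    "second_lang": "",
    "notes": "",
    "id": "example-0",
    "set_id": "set-0",
    "variation_id": "canonical",
}

def correct_choice(record: dict) -> str:
    """Resolve the gold answer string from the integer `answer` index."""
    return record["choices"][record["answer"]]

print(correct_choice(example))  # prints the choice at index `answer`: 好
```

A scorer would compare a model's selected choice index against `answer` (or its letter against `answer_label`) per example.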

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Data Collection and Processing

[More Information Needed]

#### Who are the source data producers?

[More Information Needed]

### Annotations [optional]

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Bias, Risks, and Limitations

The dataset focuses on Chinese text, including romanized and mixed-script variants, and may not generalize to other languages or to tokenization schemes not covered in the evaluation.

### Recommendations

Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]