---
dataset_info:
- config_name: asian_to_black
  features:
  - name: text
    dtype: string
  - name: swapped
    dtype: string
  - name: masked
    dtype: string
  - name: source
    dtype: string
  - name: target
    dtype: string
  - name: source_id
    dtype: string
  - name: target_id
    dtype: string
  splits:
  - name: train
    num_bytes: 39396761
    num_examples: 27016
  - name: val
    num_bytes: 39396761
    num_examples: 27016
  - name: test
    num_bytes: 39396761
    num_examples: 27016
  download_size: 74877080
  dataset_size: 118190283
- config_name: asian_to_white
  features:
  - name: text
    dtype: string
  - name: swapped
    dtype: string
  - name: masked
    dtype: string
  - name: source
    dtype: string
  - name: target
    dtype: string
  - name: source_id
    dtype: string
  - name: target_id
    dtype: string
  splits:
  - name: train
    num_bytes: 41288195
    num_examples: 27016
  - name: val
    num_bytes: 41288195
    num_examples: 27016
  - name: test
    num_bytes: 41288195
    num_examples: 27016
  download_size: 78565107
  dataset_size: 123864585
- config_name: black_to_white
  features:
  - name: text
    dtype: string
  - name: swapped
    dtype: string
  - name: masked
    dtype: string
  - name: source
    dtype: string
  - name: target
    dtype: string
  - name: source_id
    dtype: string
  - name: target_id
    dtype: string
  splits:
  - name: train
    num_bytes: 59726209
    num_examples: 36694
  - name: val
    num_bytes: 59726209
    num_examples: 36694
  - name: test
    num_bytes: 59726209
    num_examples: 36694
  download_size: 114734341
  dataset_size: 179178627
configs:
- config_name: asian_to_black
  data_files:
  - split: train
    path: asian_to_black/train-*
  - split: val
    path: asian_to_black/val-*
  - split: test
    path: asian_to_black/test-*
- config_name: asian_to_white
  data_files:
  - split: train
    path: asian_to_white/train-*
  - split: val
    path: asian_to_white/val-*
  - split: test
    path: asian_to_white/test-*
- config_name: black_to_white
  data_files:
  - split: train
    path: black_to_white/train-*
  - split: val
    path: black_to_white/val-*
  - split: test
    path: black_to_white/test-*
license: cc-by-sa-4.0
language:
- en
---

# GRADIEND Race Data

This dataset consists of templated sentences in which the masked word is race-sensitive (e.g., *African*).

See [GENTER](https://huggingface.co/datasets/aieng-lab/genter) and [GRADIEND Religion Data](https://huggingface.co/datasets/aieng-lab/gradiend_religion_data) for similar datasets.

## Usage

```python
from datasets import load_dataset

dataset = load_dataset('aieng-lab/gradiend_race_data', pair_of_races, trust_remote_code=True, split=split)
```

`split` can be `train`, `val`, `test`, or `all`.

`pair_of_races` can be `asian_to_black`, `asian_to_white`, or `black_to_white`.
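
For example, the following loads the test split of the `black_to_white` configuration and prints the fields of one record (the field names are documented under "Dataset Structure" below; the printed values depend on the data):

```python
from datasets import load_dataset

# Load the test split of the black_to_white configuration.
data = load_dataset('aieng-lab/gradiend_race_data', 'black_to_white',
                    trust_remote_code=True, split='test')

# Print the fields of the first record.
example = data[0]
for key in ['text', 'masked', 'swapped', 'source', 'target', 'source_id', 'target_id']:
    print(f'{key}: {example[key]}')
```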

## Dataset Details

### Dataset Description

This dataset is a filtered version of [Wikipedia-10](https://drive.google.com/file/d/1boQTn44RnHdxWeUKQAlRgQ7xrlQ_Glwo/view?usp=sharing), containing only sentences that include a race-bias-sensitive word of the `source_id` race. We use the same bias-sensitive words as defined by [Meade et al. (2021)](https://arxiv.org/abs/2110.08527) ([bias attribute words](https://github.com/McGill-NLP/bias-bench/blob/main/data/bias_attribute_words.json)).

Based on the masked term (`source`), an associated `target` is derived from the corresponding bias attribute pair, matching the casing of `source` (e.g., `White` is mapped to `Black`, not `black`); a minimal sketch of this derivation follows.
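
The sketch below illustrates the idea of a casing-matched swap; it is not the repository's actual implementation, and `PAIRS` is an illustrative stand-in for the full bias attribute word list linked above:

```python
# Illustrative attribute pairs; stand-ins for the bias attribute words
# linked above, not the exact list used to build the dataset.
PAIRS = [('black', 'white'), ('african', 'european')]
SWAP = {a: b for a, b in PAIRS}
SWAP.update({b: a for a, b in PAIRS})

def match_casing(template: str, word: str) -> str:
    """Return `word` cast to the casing pattern of `template`."""
    if template.isupper():
        return word.upper()
    if template[:1].isupper():
        return word.capitalize()
    return word.lower()

def derive_target(source: str) -> str:
    """Map `source` to its paired target word, preserving its casing."""
    return match_casing(source, SWAP[source.lower()])

print(derive_target('White'))    # -> 'Black' (not 'black')
print(derive_target('african'))  # -> 'european'
```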

### Dataset Sources

- **Repository:** [github.com/aieng-lab/gradiend](https://github.com/aieng-lab/gradiend)
- **Paper:** [GRADIEND: Feature Learning within Neural Networks Exemplified through Biases](https://arxiv.org/abs/2502.01406)
- **Original Data:** [Wikipedia-10](https://drive.google.com/file/d/1boQTn44RnHdxWeUKQAlRgQ7xrlQ_Glwo) (a subset of [English Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia))

## Dataset Structure

- `text`: the original entry of Wikipedia-10
- `masked`: the masked version of `text`, i.e., with the race-sensitive word replaced by the `[MASK]` template token
- `swapped`: like `masked`, but with `target` inserted for `[MASK]`
- `source`: the word at the position of `[MASK]` in `masked` (e.g., `African`)
- `source_id`: a normalized identifier for `source` (e.g., `black`). All entries of the same `pair_of_races` have the same `source_id`.
- `target`: the word inserted for `[MASK]` in `swapped`
- `target_id`: a normalized identifier for `target`. All entries of the same `pair_of_races` have the same `target_id`.

These field relationships can be verified as sketched below.

## Dataset Creation

### Curation Rationale

Training race-debiasing [GRADIEND models](https://github.com/aieng-lab/gradiend) requires a diverse dataset in order to assess model gradients relevant to bias-sensitive information.

### Source Data

The dataset is derived from [Wikipedia-10](https://drive.google.com/file/d/1boQTn44RnHdxWeUKQAlRgQ7xrlQ_Glwo) by filtering it and extracting the template structure; a sketch of the filtering idea is shown below. The Wikipedia-10 dump itself is derived from [English Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) by [Meade et al. (2021)](https://arxiv.org/pdf/2110.08527).
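
A minimal sketch of such a filtering and templating step, assuming a small illustrative word list (`SOURCE_WORDS` is a stand-in for the full bias attribute words; the actual pipeline lives in the GRADIEND repository):

```python
import re

# Illustrative stand-ins for the bias attribute words of one source race.
SOURCE_WORDS = {'black', 'african'}

def extract_template(sentence: str) -> dict | None:
    """Keep a sentence only if it contains a source-race word; mask it."""
    for word in SOURCE_WORDS:
        match = re.search(rf'\b{word}\b', sentence, flags=re.IGNORECASE)
        if match:
            source = match.group(0)  # preserves the original casing
            masked = sentence[:match.start()] + '[MASK]' + sentence[match.end():]
            return {'text': sentence, 'masked': masked, 'source': source}
    return None  # sentence contains no source-race word; discard it

print(extract_template('The African continent has 54 countries.'))
```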

### Limitations

Note that the splitting is performed entirely at random. Thus, the same masked text might occur in several splits (in combination with other target words). The same limitation holds across different `pair_of_races` configurations. A quick way to quantify this overlap is sketched below.
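
If this leakage matters for your use case, a simple check of masked-text overlap between splits (here for the `asian_to_black` configuration, chosen for illustration) could look like this:

```python
from datasets import load_dataset

name = 'asian_to_black'
splits = {s: load_dataset('aieng-lab/gradiend_race_data', name,
                          trust_remote_code=True, split=s)
          for s in ['train', 'test']}

# Masked templates that occur in both the train and test splits.
overlap = set(splits['train']['masked']) & set(splits['test']['masked'])
print(f'{len(overlap)} masked texts shared between train and test')
```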

## Citation

**BibTeX:**

```bibtex
@misc{drechsel2025gradiendfeaturelearning,
  title={{GRADIEND}: Feature Learning within Neural Networks Exemplified through Biases},
  author={Jonathan Drechsel and Steffen Herbold},
  year={2025},
  eprint={2502.01406},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2502.01406},
}
```

## Dataset Card Authors

[jdrechsel](https://huggingface.co/jdrechsel)