---
license: cc-by-nc-4.0
language:
  - zh
tags:
  - Audio
  - Corpus
  - mlcroissant
task_categories:
  - text-to-speech
size_categories:
  - 1K<n<10K
configs:
  - config_name: default
    data_files:
      - split: tts_corpus
        path: ATT_Corpus.jsonl
      - split: trap_corpus
        path: Trap_Corpus.jsonl
---

📚 Audio Turing Test Corpus

A high‑quality, multidimensional Chinese transcript corpus designed to evaluate whether a machine‑generated speech sample can fool human listeners—the “Audio Turing Test.”

About Audio Turing Test (ATT)

ATT is an evaluation framework with a standardized human evaluation protocol and an accompanying dataset. It addresses the lack of unified protocols in TTS evaluation and the difficulty of comparing multiple TTS systems. To further support the training and iteration of TTS systems, we used additional private evaluation data to train the Auto-ATT model, based on Qwen2-Audio-7B, enabling a model-as-a-judge approach for rapid evaluation of TTS systems on the ATT dataset. The datasets and the Auto-ATT model can be found in the ATT Collection.

Dataset Description

This dataset provides 500 textual transcripts from the Audio Turing Test (ATT) corpus, corresponding to the "transcripts known" (white-box) setting. These samples are part of the full 1,000-sample benchmark described in our paper, with the remaining 500 black-box entries hosted privately.

To prevent data contamination, we only release the white-box subset publicly. The black-box subset, while evaluated under identical protocols, is hosted privately on AGI-Eval to safeguard the integrity and future utility of the benchmark.

This separation between public and private subsets is a core part of the ATT design, ensuring the benchmark remains a trustworthy and unbiased tool for evaluating TTS systems.

The corpus spans five key linguistic and stylistic dimensions relevant to Chinese TTS evaluation:

  • Chinese-English Code-switching
  • Paralinguistic Features and Emotions
  • Special Characters and Numerals
  • Polyphonic Characters
  • Classical Chinese Poetry/Prose

For each dimension, this open-source subset includes 100 manually reviewed transcripts.

Additionally, the dataset includes 104 "trap" transcripts for attentiveness checks during human evaluation:

  • 35 flawed synthetic transcripts: intentionally flawed scripts designed to produce clearly synthetic and unnatural speech.
  • 69 authentic human transcripts: scripts corresponding to genuine human recordings, used to verify that evaluators can reliably distinguish human from synthetic speech.

How to Use This Dataset

  1. Generate Speech: Use these transcripts to generate audio with your TTS model. Note that the corpus contains phone numbers, email addresses, and websites. Due to potential sensitivity risks, we have masked these as placeholders: [PHONE_MASK], [EMAIL_MASK], and [WEB_MASK]. To properly test your TTS system's handling of such content, replace these placeholders with actual values before use.
  2. Evaluate: Use our Auto-ATT evaluation model to score your generated audio.
  3. Benchmark: Compare your model’s scores against scores from other evaluated TTS models listed in our research paper and the "trap" audio clips in Audio Turing Test Audio.
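As a minimal sketch of the preprocessing in step 1, the snippet below reads a JSONL corpus file and swaps the three documented placeholders for concrete values before synthesis. The sample phone number, email, and website are hypothetical stand-ins, not part of the dataset; substitute content appropriate for your own evaluation.

```python
import json

# Hypothetical sample values -- replace with content suited to your own tests.
# The actual placeholders in the corpus are [PHONE_MASK], [EMAIL_MASK], [WEB_MASK].
SAMPLE_VALUES = {
    "[PHONE_MASK]": "13800138000",
    "[EMAIL_MASK]": "zhangsan@example.com",
    "[WEB_MASK]": "www.example.com",
}

def load_transcripts(path):
    """Read a JSONL corpus file (one JSON object per line)."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def fill_placeholders(text):
    """Replace the masked spans with concrete content before TTS synthesis."""
    for mask, value in SAMPLE_VALUES.items():
        text = text.replace(mask, value)
    return text
```

Texts without placeholders pass through unchanged, so the function can be applied uniformly to every record's `Text` field.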

Data Format

Normal Transcripts

{
  "ID": "poem-100",
  "Text": "姚鼐在《登泰山记》中详细记录了登山的经过:“余始循以入,道少半,越中岭,复循西谷,遂至其巅。”这番描述让我仿佛身临其境地感受到了登山的艰辛与乐趣。当我亲自攀登泰山时,也经历了类似的艰辛与挑战。虽然路途遥远且充满艰辛,但当我站在山顶俯瞰群山时,那份成就感与自豪感让我倍感满足与幸福。",
  "Dimension": "poem",
  "Split": "white Box"
}

Trap Transcripts

{
  "ID": "human_00001",
  "Text": "然后当是去年,也是有一个契机,我就,呃,报了一个就是小凯书法家的这位老师的班。",
  "Ground Truth": 1
}
  • ID: Unique identifier for the transcript.
  • Text: The text intended for speech synthesis.
  • Dimension: Linguistic/stylistic category (only for normal transcripts).
  • Split: Indicates the "white Box" scenario (only for normal transcripts).
  • Ground Truth: Indicates if the transcript corresponds to human speech (1) or flawed synthetic speech (0) (only for trap transcripts).
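As a sketch of how the trap subset can support attentiveness checks, the helpers below partition trap records by their `Ground Truth` label and score an evaluator's human(1)/synthetic(0) verdicts against it. The field names follow the records above; the `verdicts` mapping and its keying by `ID` are assumptions about how you might collect evaluator responses.

```python
def split_trap_transcripts(records):
    """Partition trap transcripts by Ground Truth:
    1 = genuine human recording, 0 = intentionally flawed synthetic script."""
    human = [r for r in records if r["Ground Truth"] == 1]
    flawed = [r for r in records if r["Ground Truth"] == 0]
    return human, flawed

def trap_accuracy(records, verdicts):
    """Fraction of trap items where an evaluator's verdict (1 = judged human,
    0 = judged synthetic, keyed by transcript ID) matches Ground Truth."""
    correct = sum(1 for r in records if verdicts.get(r["ID"]) == r["Ground Truth"])
    return correct / len(records)
```

A low `trap_accuracy` for an evaluator suggests inattentive ratings, so their scores on the main corpus may warrant exclusion.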

Citation

This dataset is openly accessible for research purposes. If you use this dataset in your research, please cite our paper:

@software{Audio-Turing-Test-Transcripts,
  author = {Wang, Xihuai and Zhao, Ziyi and Ren, Siyu and Zhang, Shao and Li, Song and Li, Xiaoyu and Wang, Ziwen and Qiu, Lin and Wan, Guanglu and Cao, Xuezhi and Cai, Xunliang and Zhang, Weinan},
  title = {Audio Turing Test: Benchmarking the Human-likeness and Naturalness of Large Language Model-based Text-to-Speech Systems in Chinese},
  year = {2025},
  url = {https://huggingface.co/Meituan/Audio-Turing-Test-Corpus},
  publisher = {huggingface},
}