GovTube

GovTube is a dataset of over 1 million transcripts and audio tracks of videos uploaded to over 1,000 U.S. Federal Government YouTube channels. For each video, the dataset contains the audio track, machine-generated transcripts, and detailed video metadata. For each channel, the dataset contains channel-level metadata.

Background

The U.S. Federal Government maintained an official registry of its social media accounts called the US Digital Registry, which was discontinued in September 2024. The final export of the registry was preserved by the End of Term Archive 2024.

To build this dataset, the registry was filtered for YouTube channels. All channel links were tested automatically, typos and data-entry errors were corrected manually, and account URLs were normalized to YouTube channel IDs. This produced a seed list of 1,355 unique channel IDs.
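
The URL-normalization step can be sketched as a small helper. This is a hypothetical illustration, not the actual cleaning code; the real process also involved manual corrections, and handle- or /user/-style URLs require a separate lookup:

```python
import re
from typing import Optional

def normalize_channel_url(url: str) -> Optional[str]:
    """Extract a canonical YouTube channel ID from a /channel/ URL.

    Channel IDs are 24 characters starting with "UC". Returns None for
    URLs that need a separate lookup (e.g. /@handle or /user/ forms).
    """
    m = re.search(r"youtube\.com/channel/(UC[0-9A-Za-z_-]{22})", url)
    return m.group(1) if m else None
```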

yt-dlp was then used to gather metadata for each channel and to enumerate all videos by scanning each channel's "Uploads" playlist. YouTube imposes a limit of 20,000 videos per playlist, so only the 20,000 most recent uploads per channel could be discovered. This yielded a total of 1,189,692 videos across 1,352 channels. All metadata and audio tracks were downloaded in the first half of January 2025.
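
The enumeration can be reproduced with yt-dlp's Python API. One detail worth noting: a channel's "Uploads" playlist ID is the channel ID with its leading "UC" swapped for "UU" (a stable YouTube convention). A sketch, assuming yt-dlp is installed and a network connection is available:

```python
def uploads_playlist_url(channel_id: str) -> str:
    """Build the "Uploads" playlist URL for a channel: YouTube derives
    the playlist ID by replacing the channel ID's "UC" prefix with "UU"."""
    if not channel_id.startswith("UC"):
        raise ValueError(f"not a channel ID: {channel_id}")
    return f"https://www.youtube.com/playlist?list=UU{channel_id[2:]}"

# Flat extraction lists video IDs without downloading anything:
# import yt_dlp
# opts = {"extract_flat": True, "quiet": True}
# with yt_dlp.YoutubeDL(opts) as ydl:
#     info = ydl.extract_info(uploads_playlist_url(channel_id), download=False)
#     video_ids = [entry["id"] for entry in info["entries"]]
```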

For each video, yt-dlp downloaded the audio track and produced a .info.json metadata file. A small number of videos also had manually uploaded subtitles, which were downloaded as well.
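
A plausible yt-dlp option set for this step looks like the following. This is an assumed sketch, not the exact configuration used; the output template here mirrors the per-channel, per-video bucket layout described later in this card:

```python
# Assumed yt-dlp options (not the project's exact configuration):
# audio only, the .info.json sidecar, and manually uploaded subtitles
# where they exist (auto-generated subtitles are skipped).
YDL_OPTS = {
    "format": "bestaudio[ext=m4a]/bestaudio",
    "writeinfojson": True,       # {video_id}.info.json
    "writesubtitles": True,      # manual subtitles only
    "subtitlesformat": "srt",
    "outtmpl": "downloads/%(channel_id)s/%(id)s/%(id)s.%(ext)s",
}

# import yt_dlp
# with yt_dlp.YoutubeDL(YDL_OPTS) as ydl:
#     ydl.download([f"https://www.youtube.com/watch?v={video_id}"])
```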

Transcripts

Transcripts were generated using speaches, an OpenAI API-compatible self-hosted transcription server powered by faster-whisper. The model used was Systran/faster-whisper-medium, chosen for its balance between speed and accuracy. Transcription was parallelized across 250 GPU Docker containers orchestrated with Kubernetes. The setup is documented in this Mozilla AI Blueprint.

For each video, three transcript formats were produced: JSON (word-level timestamps), SRT (subtitles), and plain text.
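
Requesting these formats from a speaches server goes through the standard OpenAI transcription endpoint; the SRT format in particular is built from Whisper's timestamps. A sketch (the server URL is an assumption; the model name is the one stated above):

```python
def srt_timestamp(seconds: float) -> str:
    """Format a timestamp the way SRT subtitles expect: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    hours, ms = divmod(ms, 3_600_000)
    minutes, ms = divmod(ms, 60_000)
    secs, ms = divmod(ms, 1_000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

# Requesting an SRT transcript from a speaches server through its
# OpenAI-compatible API (base_url is an assumption):
# from openai import OpenAI
# client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
# with open("video.m4a", "rb") as audio:
#     srt = client.audio.transcriptions.create(
#         model="Systran/faster-whisper-medium",
#         file=audio,
#         response_format="srt",
#     )
```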

Where to Find the Data

Metadata (Hugging Face)

This repository contains the .info.json metadata files produced by yt-dlp for each video, concatenated and published in two formats:

JSONL (data/jsonl/):

  • govtube_metadata_channels.jsonl.gz (19 MB): Channel-level metadata for 1,352 channels, compressed with gzip.
  • govtube_metadata_videos_by_channel_jsonl_gz.tar (16 GB): Video-level metadata segmented by channel. Each channel's videos are stored as an individual gzipped JSONL file, packaged together into a single uncompressed tar archive.
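
The tar-of-gzipped-JSONL layout can be streamed with the Python standard library alone. A minimal sketch; the record fields follow yt-dlp's .info.json schema:

```python
import gzip
import json
import tarfile

def iter_channel_videos(tar_path: str):
    """Yield video metadata dicts from the per-channel archive
    (govtube_metadata_videos_by_channel_jsonl_gz.tar). Each member of
    the uncompressed tar is one channel's gzipped JSONL file."""
    with tarfile.open(tar_path) as tar:
        for member in tar:
            if not member.isfile():
                continue
            with gzip.open(tar.extractfile(member)) as jsonl:
                for line in jsonl:
                    yield json.loads(line)
```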

Parquet (data/parquet/):

  • All video-level metadata in columnar Parquet format, split into 20 chunks of less than 2 GB each (25.1 GB total). Unlike the JSONL files, the Parquet data is not segmented by channel.

Querying the Metadata with DuckDB

DuckDB (v0.10.3+) can query the Parquet files directly from Hugging Face without downloading them first. Since Parquet is columnar, only the columns used in your query are transferred.

List all columns available in the dataset:

DESCRIBE SELECT *
FROM 'hf://datasets/storytracer/govtube_metadata/data/parquet/*.parquet'
LIMIT 0;

Count videos by channel:

SELECT channel_id, count(*) AS video_count
FROM 'hf://datasets/storytracer/govtube_metadata/data/parquet/*.parquet'
GROUP BY channel_id
ORDER BY video_count DESC
LIMIT 20;

Find the most viewed videos:

SELECT id, title, channel, view_count, upload_date
FROM 'hf://datasets/storytracer/govtube_metadata/data/parquet/*.parquet'
ORDER BY view_count DESC
LIMIT 20;

For more details on DuckDB's Hugging Face integration, see the DuckDB documentation.

Full Dataset

The full dataset, including audio tracks, transcripts, and the raw per-video metadata files, is stored in a private S3-compatible bucket. The bucket is not publicly accessible and is available only to collaborators.

The bucket contains two directories:

downloads/
└── {channel_id}/{video_id}/
    ├── {video_id}.m4a          # Audio track
    ├── {video_id}.info.json    # yt-dlp video metadata (source for the HF repo above)
    └── {video_id}.srt          # Manual subtitles (rare)

transcripts/
└── {channel_id}/{video_id}/
    ├── {video_id}.json         # Transcript as Whisper JSON
    ├── {video_id}.srt          # Transcript as subtitles
    └── {video_id}.txt          # Transcript as plain text

The downloads/ and transcripts/ directories are organized identically: first-level subdirectories are named by YouTube channel ID, and second-level subdirectories are named by YouTube video ID.

Tools

  • yt-dlp: Audio downloading and metadata extraction
  • speaches: OpenAI API-compatible transcription server
  • faster-whisper: Whisper inference engine (medium model)