# Model Card for esp-aves2-eat-all

## Model Details

### Model Description
esp-aves2-eat-all is a self-supervised audio representation learning model (bioacoustic encoder) based on the EAT (Efficient Audio Transformer) architecture, trained with self-supervised learning on the All mix (Bioacoustics + AudioSet), as described in What Matters for Bioacoustic Encoding.
- Developed by: Marius Miron, David Robinson, Milad Alizadeh, Ellen Gilsenan-McMahon, Gagan Narula, Emmanuel Chemla, Maddie Cusimano, Felix Effenberger, Masato Hagiwara, Benjamin Hoffman, Sara Keen, Diane Kim, Jane K. Lawton, Jen-Yu Liu, Aza Raskin, Olivier Pietquin, Matthieu Geist
- Funded by: More info at https://www.earthspecies.org/about-us#support
- Shared by: Earth Species Project
- Model type: Transformer; EAT backbone (self-supervised)
- License: CC-BY-NC-SA
- Finetuned from model: N/A (self-supervised pretraining checkpoint)
### Model Sources

- Repository: https://github.com/earthspecies/avex
- Paper: What Matters for Bioacoustic Encoding
- Hugging Face Model: ESP-AVES2 Collection
- Configuration: train_config.yaml
### Parent Models

- EAT (Efficient Audio Transformer)
  - Source: http://github.com/cwx-worst-one/EAT
  - Description: Open-source EAT implementation used as the reference architecture/training codebase.
  - License: See upstream repository
## Uses

### Direct Use
esp-aves2-eat-all can be used as an embedding model for downstream tasks such as species classification/detection (via probes or finetuning), retrieval and clustering, and as a general audio encoder baseline.
### Downstream Use
Use frozen embeddings with linear probes, or fine-tune on domain-specific bioacoustic datasets.
### Out-of-Scope Use
Not a generative model; does not output text.
## Bias, Risks, and Limitations
- Bias: Training data mixes citizen-science bioacoustics with AudioSet; both can introduce geographic/taxa and recording-condition bias. Transfer to under-represented taxa/habitats may be limited.
- Risks: Potential misuse for sensitive wildlife monitoring without safeguards.
- Limitations: The paper standardizes evaluations at 16 kHz; higher-frequency information may be important for some taxa.
## How to Get Started with the Model

Loading this model requires the AVEX (Animal Vocalization Encoder) library, `avex`, to be installed.
### Installation

```bash
pip install avex
```

Or with uv:

```bash
uv add avex
```
For more details, see https://github.com/earthspecies/avex.
### Loading the Model

```python
from avex import load_model

model = load_model("esp_aves2_eat_all", device="cuda")
```
### Embedding Extraction

```python
import torch

from avex import load_model

model = load_model("esp_aves2_eat_all", device="cuda")

with torch.no_grad():
    embeddings = model(audio_tensor)
    # Shape: (batch, time_steps, 768) for EAT

    # Pool to get a fixed-size embedding
    embedding = embeddings.mean(dim=1)  # Shape: (batch, 768)
```
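The snippet above assumes `audio_tensor` is already a batched waveform on the right device. A minimal sketch of preparing such a tensor is shown below; the use of torchaudio, the file path, and the 16 kHz mono target are assumptions based on the paper's 16 kHz evaluation setup, not AVEX-documented requirements.

```python
# Hypothetical input preparation; file path and preprocessing are illustrative.
import torch
import torchaudio

waveform, sr = torchaudio.load("recording.wav")   # (channels, num_samples)
waveform = waveform.mean(dim=0, keepdim=True)     # mix down to mono
if sr != 16000:
    waveform = torchaudio.functional.resample(waveform, sr, 16000)

audio_tensor = waveform.to("cuda")                # (1, num_samples): batch of one
```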
### Transfer Learning with Probes

```python
from avex import load_model
from avex.configs import ProbeConfig
from avex.models.probes import build_probe_from_config

# Load the backbone for feature extraction
base = load_model("esp_aves2_eat_all", device="cuda")

# Define a probe head for your task
probe_config = ProbeConfig(
    probe_type="linear",
    target_layers=["last_layer"],
    aggregation="mean",
    freeze_backbone=True,
    online_training=True,
)

probe = build_probe_from_config(
    probe_config=probe_config,
    base_model=base,
    num_classes=10,  # Your number of classes
    device="cuda",
)
```
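The returned probe can then be trained like any PyTorch module. The loop below is a generic sketch, assuming the probe maps audio batches to logits and that `train_loader` is a hypothetical DataLoader yielding `(audio, label)` pairs; the optimizer and loss choices are illustrative, not the AVEX training recipe.

```python
# Generic PyTorch training sketch; `train_loader` is a hypothetical DataLoader.
import torch

optimizer = torch.optim.AdamW(
    (p for p in probe.parameters() if p.requires_grad), lr=1e-3
)
criterion = torch.nn.CrossEntropyLoss()

probe.train()
for audio, labels in train_loader:
    audio, labels = audio.to("cuda"), labels.to("cuda")
    logits = probe(audio)            # assumed to return (batch, num_classes) logits
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```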
## Training Details

### Training Data

Self-supervised pretraining on the All mix (Bioacoustics + AudioSet). Labels are ignored during SSL.

#### Training Data Sources
| Dataset | Description | Source | License | Size |
|---|---|---|---|---|
| AudioSet | general audio | Link | See dataset terms | 5700 hours |
| Xeno-canto | birds | Link | CC (varies) | 10416 hours |
| iNaturalist | diverse taxa | Link | CC (varies) | 1539 hours |
| Watkins | marine mammals | Link | licensing agreement (paper) | 27 hours |
| Animal Sound Archive | diverse taxa | Link | See archive terms | 78 hours |
### Training Procedure
As described in the paper, EAT uses a self-supervised objective combining teacher distillation with reconstruction of masked spectrogram patches.
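As an illustration only, the sketch below shows the general shape of such an objective on toy tensors: an EMA teacher, prediction of teacher representations at masked spectrogram-patch positions, and a reconstruction term. It is not the EAT or AVEX training code; all module names, dimensions, the mask ratio, and the momentum value are hypothetical.

```python
# Toy sketch of a distillation + masked-reconstruction SSL objective (illustrative only).
import copy
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.net = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):
        return self.net(x)

dim, n_patches = 64, 50
student = TinyEncoder(dim)
teacher = copy.deepcopy(student)           # EMA teacher, not updated by gradients
for p in teacher.parameters():
    p.requires_grad_(False)
decoder = nn.Linear(dim, dim)              # lightweight patch-reconstruction head

patches = torch.randn(8, n_patches, dim)   # toy spectrogram patch embeddings
mask = torch.rand(8, n_patches) < 0.75     # high mask ratio, as in masked modeling

masked = patches.clone()
masked[mask] = 0.0                         # student only sees unmasked content

student_out = student(masked)
with torch.no_grad():
    teacher_out = teacher(patches)         # teacher sees the full input

# Distillation: match teacher representations at masked positions
distill_loss = nn.functional.mse_loss(student_out[mask], teacher_out[mask])
# Reconstruction: recover the original masked patches
recon_loss = nn.functional.mse_loss(decoder(student_out)[mask], patches[mask])
loss = distill_loss + recon_loss
loss.backward()

# EMA update of the teacher after each step (momentum value is illustrative)
with torch.no_grad():
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(0.999).add_(ps, alpha=0.001)
```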
#### Training Hyperparameters
Training hyperparameters are specified in train_config.yaml.
## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The paper evaluates on:
- BEANS (classification and detection): https://github.com/earthspecies/beans
- BirdSet (detection): https://huggingface.co/datasets/DBD-research-group/BirdSet
- Individual ID: Pipit, Chiffchaff, Little Owl, Macaques
- Vocal Repertoire: Zebra Finch, Giant Otters, Bengalese Finch, Killer Whale
#### Metrics
- Linear probing: accuracy / mAP
- Retrieval: ROC AUC
- Clustering: NMI
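For reference, these metric names map onto standard implementations. The snippet below is a minimal sketch on toy arrays using scikit-learn; it only illustrates the metric types, not the paper's evaluation pipeline (e.g., its retrieval or clustering setup), and all values are made up.

```python
# Toy illustration of the reported metric types with scikit-learn.
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    average_precision_score,   # per-class component of mAP
    normalized_mutual_info_score,
    roc_auc_score,
)

y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])
scores = np.array([0.2, 0.9, 0.4, 0.1, 0.8])   # e.g. probe or retrieval scores
clusters = np.array([1, 0, 0, 1, 0])           # e.g. cluster assignments

print("accuracy:", accuracy_score(y_true, y_pred))
print("AP (mAP component):", average_precision_score(y_true, scores))
print("ROC AUC:", roc_auc_score(y_true, scores))
print("NMI:", normalized_mutual_info_score(y_true, clusters))
```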
### Results

Aggregate results for linear probing (frozen base model) with esp-aves2-eat-all, corresponding to the EAT "all" SSL checkpoint in the paper's results table:
| Benchmark | Task | Metric | Score |
|---|---|---|---|
| BEANS Classification | Probe | Accuracy | 0.709 |
| BEANS Classification | Retrieval | ROC AUC | 0.704 |
| BEANS Classification | Clustering | NMI | 0.448 |
| BEANS Detection | Probe | mAP | 0.315 |
| BEANS Detection | Retrieval | ROC AUC | 0.694 |
| BirdSet | Probe | mAP | 0.166 |
| BirdSet | Retrieval | ROC AUC | 0.677 |
| Individual ID | Probe | Accuracy | 0.348 |
| Individual ID | Retrieval | ROC AUC | 0.611 |
| Vocal Repertoire | Retrieval | ROC AUC | 0.788 |
| Vocal Repertoire | Clustering | NMI | 0.512 |
## Citation

BibTeX:
```bibtex
@inproceedings{miron2025matters,
  title={What Matters for Bioacoustic Encoding},
  author={Miron, Marius and Robinson, David and Alizadeh, Milad and Gilsenan-McMahon, Ellen and Narula, Gagan and Chemla, Emmanuel and Cusimano, Maddie and Effenberger, Felix and Hagiwara, Masato and Hoffman, Benjamin and Keen, Sara and Kim, Diane and Lawton, Jane K. and Liu, Jen-Yu and Raskin, Aza and Pietquin, Olivier and Geist, Matthieu},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026}
}
```
## Model Card Contact
Contact: marius@earthspecies.org, david@earthspecies.org, milad@earthspecies.org, gagan@earthspecies.org