---
dataset_info:
  features:
  - name: vclip_id
    dtype: string
  - name: question_id
    dtype: int32
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: frame_indexes
    sequence: int32
  - name: choices
    struct:
    - name: A
      dtype: string
    - name: B
      dtype: string
    - name: C
      dtype: string
    - name: D
      dtype: string
    - name: E
      dtype: string
  - name: video_metadata
    struct:
    - name: CLIP-reference-interval
      sequence: float64
    - name: bitrate
      dtype: int64
    - name: codec
      dtype: string
    - name: frame_dimensions
      sequence: int64
    - name: frame_dimensions_resized
      sequence: int64
    - name: frame_rate
      dtype: float64
    - name: resolution
      dtype: string
    - name: resolution_resized
      dtype: string
    - name: vclip_duration
      dtype: float64
    - name: vclip_frame_count
      dtype: int64
    - name: video_duration
      dtype: float64
    - name: video_frame_count
      dtype: int64
    - name: video_id
      dtype: string
  splits:
  - name: train
    num_bytes: 4782472
    num_examples: 11218
  - name: test
    num_bytes: 1776278
    num_examples: 3874
  download_size: 1999818
  dataset_size: 6558750
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
<h1 align='center' style="text-align:center; font-weight:bold; font-size:2.0em;letter-spacing:2.0px;">
LV-Haystack: Temporal Search for Long-Form Video Understanding</h1>
<p align='center' style="text-align:center;font-size:1.25em;">
<a href="https://jhuiye.com/" target="_blank">Jinhui Ye<sup>1</sup></a>,&nbsp;
<a href="https://zihanwang314.github.io/" target="_blank">Zihan Wang<sup>2</sup></a>,&nbsp;
<a href="https://haosensun.github.io/" target="_blank">Haosen Sun<sup>2</sup></a>,&nbsp;
<a href="https://keshik6.github.io/" target="_blank">Keshigeyan Chandrasegaran<sup>1</sup></a>,&nbsp;
<a href="https://zanedurante.github.io/" target="_blank">Zane Durante<sup>1</sup></a>,&nbsp;<br/>
<a href="https://ceyzaguirre4.github.io/" target="_blank">Cristobal Eyzaguirre<sup>1</sup></a>,&nbsp;
<a href="https://talkingtorobots.com/yonatanbisk.html" target="_blank">Yonatan Bisk<sup>3</sup></a>,&nbsp;
<a href="https://www.niebles.net/" target="_blank">Juan Carlos Niebles<sup>1</sup></a>,&nbsp;
<a href="https://profiles.stanford.edu/ehsan-adeli" target="_blank">Ehsan Adeli<sup>1</sup></a>,&nbsp;
<a href="https://profiles.stanford.edu/fei-fei-li/" target="_blank">Li Fei-Fei<sup>1</sup></a>,&nbsp;
<a href="https://jiajunwu.com/" target="_blank">Jiajun Wu<sup>1</sup></a>,&nbsp;
<a href="https://limanling.github.io/" target="_blank">Manling Li<sup>2</sup></a><br/>
&nbsp;Stanford University<sup>1</sup>, Northwestern University<sup>2</sup>, Carnegie Mellon University<sup>3</sup><br/>
<!-- <em>Conference on AI Research, 2025</em> -->
<br/>
<a href="https://examplewebsite.com" title="Website" target="_blank" rel="nofollow" style="text-decoration: none;">🌎Website</a> |
<a href="https://examplecode.com" title="Dataset" target="_blank" rel="nofollow" style="text-decoration: none;">🧑‍💻Code</a> |
<a href="https://arxiv.org/examplepaper" title="aXiv" target="_blank" rel="nofollow" style="text-decoration: none;">📄arXiv</a> |
<a href="https://exampleleaderboard.com" title="Leaderboard" target="_blank" rel="nofollow" style="text-decoration: none;">🏆 Leaderboard (Coming Soon)</a><br>
</p>
<img src="assets/img/logo.png" alt="Logo" width="400" height="auto" style="display:block; margin:auto;" />
<p align='center' style="text-align:center;font-size:1.25em;color:gray">
This dataset is part of the <a href="">T* project</a>.
</p>
#### Dataset Sample
```python
{
    'vclip_id': '6338b73e-393f-4d37-b278-68703b45908c',
    'question_id': 10,
    'question': 'What nail did I pull out?',
    'answer': 'E',
    'frame_indexes': [5036, 5232],  # indexes of the annotated keyframes
    'choices': {
        'A': 'The nail from the front wheel fender',
        'B': 'The nail from the motorcycle battery compartment',
        'C': 'The nail from the left side of the motorcycle seat',
        'D': 'The nail from the rearview mirror mount',
        'E': 'The nail on the right side of the motorcycle exhaust pipe'
    },
    'video_metadata': {
        'CLIP-reference-interval': [180.0, 240.0],  # time interval (in seconds) of the source video that the clip covers; carried over from Ego4D and used by annotators to quickly locate the clip in the video
        'video_frame_count': 14155,  # total number of frames in the source video
        'frame_rate': 30.0,  # frame rate of the video
        'video_duration': 471.8333435058594,  # duration of the source video in seconds
        'resolution': '454x256',  # original resolution of the video
        'frame_dimensions': None,  # original frame dimensions (if available)
        'codec': 'N/A',  # video codec (if available)
        'bitrate': 0,  # bitrate of the video (if available)
        'frame_dimensions_resized': [340, 256],  # frame dimensions after resizing
        'resolution_resized': '340x256',  # resolution after resizing
        'video_id': 'b6ae365a-dd70-42c4-90d6-e0351778d991'  # unique identifier of the source video
    }
}
```
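The `frame_indexes` field stores keyframes as frame numbers in the source video. A minimal sketch of our own (not part of the released tooling) for converting them to timestamps via the `frame_rate` metadata field:

```python
def keyframe_timestamps(example):
    """Convert annotated keyframe indexes to timestamps in seconds."""
    fps = example['video_metadata']['frame_rate']
    return [idx / fps for idx in example['frame_indexes']]

# For the sample above: [5036 / 30.0, 5232 / 30.0] -> [167.87, 174.4] seconds.
```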
#### Dataset Exploration
A link to an interactive demo for exploring the dataset will be added here.
#### Dataset Usage
```python
from datasets import load_dataset
dataset = load_dataset("LVHaystack/LongVideoHaystack")
print(dataset)
```
```
DatasetDict({
    train: Dataset({
        features: ['vclip_id', 'question_id', 'question', 'answer', 'frame_indexes', 'choices', 'video_metadata'],
        num_rows: 11218
    })
    test: Dataset({
        features: ['vclip_id', 'question_id', 'question', 'answer', 'frame_indexes', 'choices', 'video_metadata'],
        num_rows: 3874
    })
})
```
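As a quick (hypothetical) usage sketch, each split behaves like an ordinary `datasets.Dataset`, so individual QA pairs and per-clip groupings are easy to pull out:

```python
from collections import defaultdict

sample = dataset['test'][0]
print(sample['question'])                   # the question text
print(sample['choices'][sample['answer']])  # full text of the correct choice

# Group question ids by video clip.
qa_by_clip = defaultdict(list)
for example in dataset['test']:
    qa_by_clip[example['vclip_id']].append(example['question_id'])
```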
#### Dataset Statistics Summary
| **Metric** | **Total** | **Train** | **Test** |
|--------------------------------|--------------|-------------|-------------|
| **Video Statistics** | | | |
| Total Videos | **988** | **744** | **244** |
| Total Video Duration (hr) | 423.3 | 322.2 | 101.0 |
| Avg. Video Duration (min) | 25.7 | 26.0 | 24.8 |
| **Clip Statistics** | | | |
| Total Video Clips | **1,324** | **996** | **328** |
| Total Video Clip Duration (hr) | 180.4 | 135.3 | 45.0 |
| Avg. Video Clip Duration (min) | 8.2          | 8.2         | 8.2         |
| **Frame Statistics** | | | |
| Total Frames (k) | **45,700** | **34,800** | **10,900** |
| Avg. Frames per Video (k) | 46.3 | 46.8 | 44.7 |
| Ratio of Keyframe / Frame (‰) | 0.62 | 0.59 | 0.71 |
| **QA Statistics** | | | |
| Total QA Pairs | **15,092** | **11,218** | **3,874** |
| Avg. QA Pairs per Video        | 15.3         | 15.1        | 15.9        |
| Avg. QA Pairs per Clip         | 11.4         | 11.3        | 11.8        |
| Avg. Keyframes per Question | 1.88 | 1.84 | 2.01 |
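Several of these numbers can be recomputed directly from the loaded splits. A small sketch of our own, assuming the field names from the schema above:

```python
train, test = dataset['train'], dataset['test']

total_qa = len(train) + len(test)  # 11,218 + 3,874 = 15,092 QA pairs
avg_keyframes_test = sum(len(f) for f in test['frame_indexes']) / len(test)  # ~2.01
unique_test_videos = {m['video_id'] for m in test['video_metadata']}  # 244 videos

print(total_qa, round(avg_keyframes_test, 2), len(unique_test_videos))
```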
#### Download Videos
Source videos are from the Ego4D dataset. The snippet below assumes they have been downloaded to `./videos/`.
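The annotated keyframes can then be read directly from those files. A hedged sketch using OpenCV; the `{video_id}.mp4` filename layout is our assumption, so adjust the path construction to however the videos are stored:

```python
import cv2  # pip install opencv-python

def extract_keyframes(example, video_dir='./videos'):
    """Read the annotated keyframes of one QA pair from a local video file."""
    # Assumed layout: {video_dir}/{video_id}.mp4 (hypothetical, adjust as needed).
    path = f"{video_dir}/{example['video_metadata']['video_id']}.mp4"
    cap = cv2.VideoCapture(path)
    frames = []
    for idx in example['frame_indexes']:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # seek to the annotated frame
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```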
#### Evaluation Scripts
Please refer to [`eval.py`](./eval.py).
#### Contact
- Jinhui Ye: [email protected]
- Zihan Wang: [email protected]
- Haosen Sun: [email protected]
- Keshigeyan Chandrasegaran: [email protected]
- Manling Li: [email protected]
#### Citation
```bibtex
@misc{tstar,
title={Re-thinking Temporal Search for Long-Form Video Understanding},
author={Jinhui Ye and Zihan Wang and Haosen Sun and Keshigeyan Chandrasegaran and Zane Durante and Cristobal Eyzaguirre and Yonatan Bisk and Juan Carlos Niebles and Ehsan Adeli and Li Fei-Fei and Jiajun Wu and Manling Li},
year={2025},
eprint={2501.TODO},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
Website template borrowed from [HourVideo](https://huggingface.co/datasets/HourVideo/HourVideo).