Improve model card: add pipeline tag, project page link, paper info, and citation
This PR improves the model card by:
* Adding the `pipeline_tag` to enable filtering on the Hugging Face Hub.
* Adding a link to the project page.
* Adding paper information.
* Adding citation details.
README.md CHANGED
@@ -1,18 +1,20 @@
 ---
-license: cc-by-4.0
-datasets:
-- andaba/TEMPURA-VER
 base_model:
 - Qwen/Qwen2.5-VL-3B-Instruct
 library_name: transformers
 tags:
 - text-generation-inference
 ---
 # Model Card for Model ID
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
 
 ## Model Details
@@ -20,23 +22,21 @@ This modelcard aims to be a base template for new models. It has been generated
 
 <!-- Provide a longer summary of what this model is. -->
 
-- **Developed by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
-
-### Model Sources [optional]
 
 <!-- Provide the basic links for the model. -->
 
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]
 
 ## Uses
@@ -46,36 +46,34 @@ This modelcard aims to be a base template for new models. It has been generated
 
 <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
 
-[More Information Needed]
 
 ### Downstream Use [optional]
 
 <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
 
-[More Information Needed]
 
 ### Out-of-Scope Use
 
 <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
 
-[More Information Needed]
 
 ## Bias, Risks, and Limitations
 
 <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
-[More Information Needed]
 
 ### Recommendations
 
 <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
 
-Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
 
 ## How to Get Started with the Model
 
-Use the code below to get started with the model.
-
 [More Information Needed]
 
 ## Training Details
@@ -84,22 +82,19 @@ Use the code below to get started with the model.
 
 <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
-[More Information Needed]
 
 ### Training Procedure
 
 <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
 
-#### Preprocessing [optional]
-
-[More Information Needed]
-
 
 #### Training Hyperparameters
 
-- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
 
-#### Speeds, Sizes, Times [optional]
 
 <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
@@ -135,8 +130,6 @@ Use the code below to get started with the model.
 
 #### Summary
 
-
-
 ## Model Examination [optional]
 
 <!-- Relevant interpretability work for the model goes here -->
@@ -149,11 +142,11 @@ Use the code below to get started with the model.
 
 Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
 
-- **Hardware Type:** [More Information Needed]
-- **Hours used:** [More Information Needed]
-- **Cloud Provider:** [More Information Needed]
-- **Compute Region:** [More Information Needed]
-- **Carbon Emitted:** [More Information Needed]
 
 ## Technical Specifications [optional]
@@ -173,17 +166,26 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
 
 [More Information Needed]
 
-## Citation [optional]
 
 <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
 
 **BibTeX:**
 
-[More Information Needed]
 
 **APA:**
 
-[More Information Needed]
 
 ## Glossary [optional]
---
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
datasets:
- andaba/TEMPURA-VER
library_name: transformers
license: cc-by-4.0
tags:
- text-generation-inference
pipeline_tag: video-text-to-text
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This model card describes TEMPURA, a model for Temporal Event Masked Prediction and Understanding for Reasoning in Action.

## Model Details

<!-- Provide a longer summary of what this model is. -->

TEMPURA enhances video temporal understanding by integrating causal reasoning with fine-grained temporal segmentation. More details can be found on the [project page](https://andy-cheng.github.io/TEMPURA/).

- **Developed by:** Jen-Hao Cheng, Vivian Wang, Huayu Wang, Huapeng Zhou, Yi-Hao Peng, Hou-I Liu, Hsiang-Wei Huang, Kuang-Ming Chen, Cheng-Yen Yang, Wenhao Chai, Yi-Ling Chen, Vibhav Vineet, Qin Cai, Jenq-Neng Hwang
- **Model type:** Video-Language Model
- **Language(s) (NLP):** English
- **License:** cc-by-4.0
- **Finetuned from model:** Qwen/Qwen2.5-VL-3B-Instruct

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [https://github.com/andy-cheng/TEMPURA](https://github.com/andy-cheng/TEMPURA)
- **Paper:** [TEMPURA: Temporal Event Masked Prediction and Understanding for Reasoning in Action](https://huggingface.co/papers/2505.01583)
- **Project Page:** [https://andy-cheng.github.io/TEMPURA/](https://andy-cheng.github.io/TEMPURA/)

## Uses

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

The model can be used directly for temporal grounding and highlight detection in videos.

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

The model can be fine-tuned for various applications requiring temporal video understanding, such as video summarization, event extraction, and question answering.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

The model may not perform well on videos whose visual style or language differs significantly from the training data.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

The model's performance is influenced by biases present in the VER dataset. Further analysis is needed to fully characterize these biases.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be aware of potential biases in the model's outputs.

## How to Get Started with the Model

[More Information Needed]

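Until the authors publish an official quick-start snippet, a minimal inference sketch along standard Qwen2.5-VL lines may help. Note the assumptions: the checkpoint id below is a placeholder (this card does not state the final repository name), and the snippet assumes a transformers release with Qwen2.5-VL support plus the `qwen-vl-utils` helper package.

```python
def build_messages(video_path, question):
    """Build a Qwen2.5-VL-style chat turn pairing one video with one question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "video", "video": video_path},
                {"type": "text", "text": question},
            ],
        }
    ]


def run_tempura(video_path, question, model_id="path/to/TEMPURA-checkpoint"):
    """Load the model and generate an answer for one video (GPU recommended).

    model_id is a placeholder, not a confirmed Hub repository name.
    """
    import torch
    from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
    from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

    model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(model_id)

    messages = build_messages(video_path, question)
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    # process_vision_info returns (image_inputs, video_inputs); only videos here.
    _, video_inputs = process_vision_info(messages)
    inputs = processor(text=[text], videos=video_inputs, return_tensors="pt").to(
        model.device
    )
    out = model.generate(**inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, not the prompt.
    return processor.batch_decode(
        out[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
    )[0]
```

Swap in the released checkpoint id once it is known; the message-building step is independent of the checkpoint and matches the usual Qwen2.5-VL chat format.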
## Training Details

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The model was trained on the VER dataset ([https://huggingface.co/datasets/andaba/TEMPURA-VER](https://huggingface.co/datasets/andaba/TEMPURA-VER)).

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

The training procedure involves masked event prediction and video event segmentation with temporal dense captioning. See the training scripts in the repository for details.

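The masked event prediction objective described above can be illustrated schematically. This is a conceptual sketch only: the timestamp layout, mask token, and field structure below are invented for illustration and do not come from the TEMPURA training scripts.

```python
def mask_event(events, mask_idx, mask_token="<MASKED_EVENT>"):
    """Given timestamped event captions, hide one caption and return the
    masked sequence plus the prediction target for that slot.

    events: list of (start_sec, end_sec, caption) tuples.
    """
    masked_lines = []
    target = None
    for i, (start, end, caption) in enumerate(events):
        if i == mask_idx:
            # Replace the caption at the masked slot; the model must infer it
            # from the surrounding events (the causal-reasoning signal).
            masked_lines.append(f"[{start:.1f}s - {end:.1f}s] {mask_token}")
            target = caption
        else:
            masked_lines.append(f"[{start:.1f}s - {end:.1f}s] {caption}")
    return "\n".join(masked_lines), target


# Toy example: three consecutive events, the middle one masked.
events = [
    (0.0, 4.2, "A person opens the refrigerator."),
    (4.2, 9.8, "They take out a carton of eggs."),
    (9.8, 15.0, "They crack the eggs into a bowl."),
]
masked_input, target = mask_event(events, mask_idx=1)
```

Here `masked_input` would serve as (part of) the model input and `target` as the caption to reconstruct; the dense-captioning stage uses the unmasked sequence as supervision.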
#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

#### Summary
## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

[More Information Needed]

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```tex
@article{tempura,
  title={TEMPURA: Temporal Event Masked Prediction and Understanding for Reasoning in Action},
  author={Jen-Hao Cheng and Vivian Wang and Huayu Wang and Huapeng Zhou and Yi-Hao Peng and Hou-I Liu and Hsiang-Wei Huang and Kuang-Ming Chen and Cheng-Yen Yang and Wenhao Chai and Yi-Ling Chen and Vibhav Vineet and Qin Cai and Jenq-Neng Hwang},
  journal={arXiv preprint arXiv:2505.01583},
  year={2025}
}
```

**APA:**

Cheng, J.-H., Wang, V., Wang, H., Zhou, H., Peng, Y.-H., Liu, H.-I., Huang, H.-W., Chen, K.-M., Yang, C.-Y., Chai, W., Chen, Y.-L., Vineet, V., Cai, Q., & Hwang, J.-N. (2025). *TEMPURA: Temporal Event Masked Prediction and Understanding for Reasoning in Action*. arXiv preprint arXiv:2505.01583.

## Glossary [optional]