Upload folder using huggingface_hub
README.md
- nvidia
- unsloth
- cosmos
pipeline_tag: image-text-to-text
---
<div>
<p style="margin-top: 0;margin-bottom: 0;">
  <a href="https://discord.gg/unsloth">
    <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
  </a>
  <a href="https://docs.unsloth.ai/">
    <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
  </a>
</div>
</div>

# **Cosmos-Reason1: Physical AI Common Sense and Embodied Reasoning Models**

[**Cosmos**](https://huggingface.co/collections/nvidia/cosmos-reason1-67c9e926206426008f1da1b7) | [**Code**](https://github.com/nvidia-cosmos/cosmos-reason1) | [**Paper**](https://arxiv.org/abs/2503.15558) | [**Paper Website**](https://research.nvidia.com/labs/dir/cosmos-reason1)
## Description:

NVIDIA Cosmos Reason, an open, customizable, 7B-parameter reasoning vision language model (VLM) for physical AI and robotics, enables robots and vision AI agents to reason like humans, using prior knowledge, physics understanding, and common sense to understand and act in the real world. The model understands space, time, and fundamental physics, and can serve as a planning model to reason about what steps an embodied agent might take next.

Cosmos Reason excels at navigating the long tail of diverse physical-world scenarios with spatial-temporal understanding. It is post-trained on physical common sense and embodied reasoning data using supervised fine-tuning and reinforcement learning, and it uses chain-of-thought reasoning to understand world dynamics without human annotations.

Given a video/image and a text prompt, the model first converts the video/image into tokens using a vision encoder and a special translator called a projector. These video tokens are combined with the text prompt and fed into the core model, which uses a mix of LLM modules and techniques. This enables the model to think step by step and provide detailed, logical responses.
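This pipeline maps onto the standard Qwen2.5-VL tooling that the model shares. The snippet below is a minimal sketch of a single image-plus-prompt query through Hugging Face Transformers; it is not the card's official example, and the image path and prompt are placeholders:

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

MODEL_ID = "nvidia/Cosmos-Reason1-7B"

# Load the checkpoint with the stock Qwen2.5-VL classes it is based on.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(MODEL_ID)

messages = [
    {"role": "user", "content": [
        {"type": "image", "image": "file:///path/to/frame.jpg"},  # placeholder image
        {"type": "text", "text": "What should the robot do next to pick up the cup?"},
    ]},
]

# The processor tokenizes the prompt and turns the image into vision tokens
# that the projector maps into the LLM's embedding space.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

generated = model.generate(**inputs, max_new_tokens=4096)
answer = processor.batch_decode(
    generated[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```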
Cosmos Reason can be used for robotics and physical AI applications including:

- **Data curation and annotation**: Enable developers to automate high-quality curation and annotation of massive, diverse training datasets.
- **Robot planning and reasoning**: Act as the brain for deliberate, methodical decision-making in a robot vision language action (VLA) model. Robots such as humanoids and autonomous vehicles can interpret their environments and, given complex commands, break them down into tasks and execute them using common sense, even in unfamiliar environments.
- **Video analytics AI agents**: Extract valuable insights and perform root-cause analysis on massive volumes of video data. These agents can be used to analyze and understand recorded or live video streams across city and industrial operations.

The model is ready for commercial use.
**Model Developer**: NVIDIA

### License:

This model is released under the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license). Additional Information: [Apache License 2.0](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md).

For a custom license, please contact [[email protected]](mailto:[email protected]).

Under the NVIDIA Open Model License, NVIDIA confirms:
### Release Date:

* Github: [05/17/2025](https://github.com/nvidia-cosmos/cosmos-reason1)
* Huggingface:
  * [08/01/2025](https://huggingface.co/nvidia/Cosmos-Reason1-7B/commit/0caf724f837efea5e25bf6d5818dcdeec0a36604): shipped improvements including captions with temporal timestamps and Set-of-Mark prompting.
  * [06/10/2025](https://huggingface.co/nvidia/Cosmos-Reason1-7B/commit/2464fff43c5c0bfb1916ac8c009feda4aed81be9): enhanced critic capability for physical plausibility.
  * [05/17/2025](https://huggingface.co/nvidia/Cosmos-Reason1-7B/commit/098a5bb62a1f4fc05e5c4ac89aae8005e301aa18): initial release.
## Model Architecture:

Cosmos-Reason-7B is post-trained based on [Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) and follows the same model architecture.

**Number of model parameters:**

Cosmos-Reason1-7B:
* Vision Transformer (ViT): 675.76M (675,759,104)
* Language Model (LLM): 7.07B (7,070,619,136)
* Other components (output projection layer): 545.00M (544,997,376)
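As a rough check, these per-component totals can be reproduced by grouping the checkpoint's parameters by top-level module name. This is a sketch under the assumption that the model loads with the stock Transformers Qwen2.5-VL class:

```python
from collections import defaultdict

from transformers import Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "nvidia/Cosmos-Reason1-7B", torch_dtype="auto"
)

# Sum parameter counts per top-level submodule (e.g. vision tower vs. language model).
counts = defaultdict(int)
for name, param in model.named_parameters():
    counts[name.split(".")[0]] += param.numel()

for module, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{module}: {n / 1e6:.2f}M ({n:,})")
print(f"total: {sum(counts.values()):,}")
```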
## Computational Load:

* Cumulative Compute: 3.2603016e+21 FLOPs
* Estimated Energy and Emissions for Model Training:
  * Total kWh = 16,658,432
  * Total Emissions (tCO2e) = 5,380.674
## Input

**Input Type(s)**: Text+Video/Image
## Output

**Output Type(s)**: Text

**Output Format**: String

**Other Properties Related to Output**:
* We recommend using 4096 or more max output tokens to avoid truncation of long chain-of-thought responses.
* Our AI model recognizes timestamps added at the bottom of each frame for accurate temporal localization (one way to overlay such timestamps is sketched after this list).
* Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
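Since the model looks for a timestamp rendered at the bottom of each frame, one way to prepare a clip is to burn the elapsed time into the video before sending it in. The snippet below is an illustrative OpenCV sketch, not part of the official pipeline; the file paths and timestamp format are assumptions:

```python
import cv2

SRC, DST = "input.mp4", "input_timestamped.mp4"  # placeholder paths

cap = cv2.VideoCapture(SRC)
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter(DST, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    seconds = frame_idx / fps
    label = f"{int(seconds // 60):02d}:{seconds % 60:05.2f}"
    # Draw the elapsed time at the bottom-left of the frame.
    cv2.putText(frame, label, (10, height - 15),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2, cv2.LINE_AA)
    out.write(frame)
    frame_idx += 1

cap.release()
out.release()
```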
See [Cosmos-Reason1](https://github.com/nvidia-cosmos/cosmos-reason1) for details.
* Post Training: [Cosmos-Reason1](https://github.com/nvidia-cosmos/cosmos-reason1) provides examples of supervised fine-tuning and reinforcement learning on embodied reasoning datasets.

## Training and Evaluation:

### 05/17/2025

Please see our [technical paper](https://arxiv.org/pdf/2503.15558) for detailed evaluations on physical common sense and embodied reasoning. A portion of the evaluation datasets is released under [Cosmos-Reason1-Benchmark](https://huggingface.co/datasets/nvidia/Cosmos-Reason1-Benchmark). The embodied reasoning datasets and benchmarks focus on the following areas: robotics (RoboVQA, BridgeDataV2, AgiBot, RoboFail), ego-centric human demonstration (HoloAssist), and Autonomous Vehicle (AV) driving video data. The AV dataset is collected and annotated by NVIDIA.

All datasets go through the data annotation process described in the technical paper to prepare training and evaluation data and annotations.

### 08/01/2025

We enhanced the model's capabilities with augmented training data. PLM-Video-Human and Nexar are used to enable dense temporal captioning, and Describe Anything is added to strengthen Set-of-Mark (SoM) prompting. We also enriched data for intelligent transportation systems (ITS) and warehouse applications. Lastly, the Visual Critics dataset contains a collection of AI-generated videos from Cosmos-Predict2 and Wan2.1 with human annotations describing their physical correctness.

## Training Datasets:
**Data Collection Method**:
* RoboVQA: Hybrid: Automatic/Sensors
* BridgeDataV2: Automatic/Sensors
* AgiBot: Automatic/Sensors
* RoboFail: Automatic/Sensors
* HoloAssist: Human
* AV: Automatic/Sensors
* PLM-Video-Human: Human
* Nexar: Automatic/Sensors
* Describe Anything: Human
* ITS / Warehouse: Human, Automatic
* Visual Critics: Automatic

**Labeling Method**:
* RoboVQA: Hybrid: Human, Automated
* BridgeDataV2: Hybrid: Human, Automated
* AgiBot: Hybrid: Human, Automated
* RoboFail: Hybrid: Human, Automated
* HoloAssist: Hybrid: Human, Automated
* AV: Hybrid: Human, Automated
* PLM-Video-Human: Human, Automated
* Nexar: Human
* Describe Anything: Human, Automated
* ITS / Warehouse: Human, Automated
* Visual Critics: Human, Automated

## Evaluation Datasets:

**Data Collection Method**:
* RoboVQA: Hybrid: Automatic/Sensors
* BridgeDataV2: Automatic/Sensors
* AgiBot: Automatic/Sensors
* RoboFail: Automatic/Sensors
* HoloAssist: Human
* AV: Automatic/Sensors

**Labeling Method**:
* RoboVQA: Hybrid: Human, Automated
* BridgeDataV2: Hybrid: Human, Automated
* AgiBot: Hybrid: Human, Automated
* RoboFail: Hybrid: Human, Automated
* HoloAssist: Hybrid: Human, Automated
* AV: Hybrid: Human, Automated
**Metrics**:
We report the model accuracy on the embodied reasoning benchmark introduced in [Cosmos-Reason1](https://arxiv.org/abs/2503.15558). The results differ from those presented in Table 9 due to additional training aimed at supporting a broader range of Physical AI tasks beyond the benchmark.

Modality: Video (mp4) and Text

## Dataset Quantification

### 05/17/2025

We release the embodied reasoning data and benchmarks. Each data sample is a pair of video and text. The text annotations include understanding and reasoning annotations described in the Cosmos-Reason1 paper. Each video may have multiple text annotations. The quantity of the video and text pairs is described in the table below.

**The AV data is currently unavailable and will be uploaded soon!**
| **RL Data** | 252 | 200 | 240 | 200 | 200 | N/A | **2.6GB** |
| **Benchmark Data** | 110 | 100 | 100 | 100 | 100 | 100 | **1.5GB** |

We release text annotations for all embodied reasoning datasets and videos for the RoboVQA and AV datasets. For other datasets, users may download the source videos from the original data source and find corresponding video sources via the video names. The held-out RoboFail benchmark is released for measuring the generalization capability.

### 08/01/2025

| | [PLM-Video-Human](https://huggingface.co/datasets/facebook/PLM-Video-Human) | Nexar | [Describe Anything](https://huggingface.co/datasets/nvidia/describe-anything-dataset) | ITS / Warehouse | Visual Critics | Total Storage Size |
|---|---|---|---|---|---|---|
| **SFT Data** | 39k | 240k | 178k | 24k | 24k | **2.6TB** |
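To pull the released annotations and benchmark data locally, one option is to snapshot the dataset repository from the Hub. This is an illustrative sketch rather than an official instruction from the card:

```python
from huggingface_hub import snapshot_download

# Download the released Cosmos-Reason1 benchmark and annotation files.
local_dir = snapshot_download(
    repo_id="nvidia/Cosmos-Reason1-Benchmark",
    repo_type="dataset",
)
print("Downloaded to:", local_dir)
```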
## Inference:

**Test Hardware:** H100, A100, GB200 <br>

> [!NOTE]
> We suggest using `fps=4` for the input video and `max_tokens=4096` to avoid truncated responses.

```python
from transformers import AutoProcessor
from vllm import LLM, SamplingParams
from qwen_vl_utils import process_vision_info

# You can also replace MODEL_PATH with a local safetensors folder path
MODEL_PATH = "nvidia/Cosmos-Reason1-7B"

llm = LLM(
    model=MODEL_PATH,
    limit_mm_per_prompt={"image": 10, "video": 10},
)

sampling_params = SamplingParams(
    temperature=0.6,
    top_p=0.95,
    repetition_penalty=1.05,
    max_tokens=4096,
)

video_messages = [
    {"role": "system", "content": "You are a helpful assistant. Answer the question in the following format: <think>\nyour reasoning\n</think>\n\n<answer>\nyour answer\n</answer>."},
    {"role": "user", "content": [
            {"type": "text", "text": "Is it safe to turn right?"},
            {
                "type": "video",
                "video": "file:///path/to/your/video.mp4",
                "fps": 4,
            },
        ]
    },
]

# Here we use video messages as a demonstration
messages = video_messages

processor = AutoProcessor.from_pretrained(MODEL_PATH)
prompt = processor.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
image_inputs, video_inputs, video_kwargs = process_vision_info(messages, return_video_kwargs=True)

mm_data = {}
if image_inputs is not None:
    mm_data["image"] = image_inputs
if video_inputs is not None:
    mm_data["video"] = video_inputs

llm_inputs = {
    "prompt": prompt,
    "multi_modal_data": mm_data,
    # FPS will be returned in video_kwargs
    "mm_processor_kwargs": video_kwargs,
}

outputs = llm.generate([llm_inputs], sampling_params=sampling_params)
generated_text = outputs[0].outputs[0].text

print(generated_text)
```

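The system prompt above asks the model to wrap its reasoning in `<think>` tags and its final answer in `<answer>` tags, so downstream code typically splits the two. A small helper along these lines (not part of the official example; it reuses the `generated_text` variable from the block above) might look like:

```python
import re


def split_reasoning(text: str) -> tuple[str, str]:
    """Split a Cosmos-Reason1 style response into (reasoning, answer).

    Falls back to returning the whole text as the answer if the tags are missing.
    """
    think = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
    reasoning = think.group(1).strip() if think else ""
    final = answer.group(1).strip() if answer else text.strip()
    return reasoning, final


reasoning, answer = split_reasoning(generated_text)
print("ANSWER:", answer)
```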
## Ethical Considerations

| Model Type: | Transformer |
| Intended Users: | Physical AI developers |
| Output: | Text |
| Describe how the model works: | Given a video/image and a text prompt, the model first converts the video/image into tokens using a vision encoder and a special translator called a projector. These video tokens are combined with the text prompt and fed into the core model, which uses a mix of LLM modules and techniques. This enables the model to think step by step and provide detailed, logical responses. |
| Technical Limitations: | The model may not follow the video or text input accurately in challenging cases, where the input video shows complex scene composition and temporal dynamics. Examples of challenging scenes include: fast camera movements, overlapping human-object interactions, low lighting with high motion blur, and multiple people performing different actions simultaneously. |
| Verified to have met prescribed NVIDIA quality standards: | Yes |
| Performance Metrics: | Quantitative and Qualitative Evaluation. Cosmos-Reason1 proposes the embodied reasoning benchmark and physical common sense benchmark to evaluate accuracy with visual question answering. |
| Potential Known Risks: | The model can generate all forms of text, including content that may be considered toxic, offensive, or indecent. |
| Licensing: | [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license). Additional Information: [Apache License 2.0](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md). |

### Privacy

| :---- | :---- |
| Model Application(s): | Physical AI common sense understanding and embodied reasoning |
| Describe the life critical impact (if present). | None Known |
| Use Case Restrictions: | [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license). Additional Information: [Apache License 2.0](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md). |
| Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Restrictions enforce dataset access during training, and dataset license constraints are adhered to. Model checkpoints are made available on Hugging Face, and may become available on cloud providers' model catalogs. |