---
language:
- en
task_categories:
- image-text-to-text
tags:
- robotics
- spatial-reasoning
- warehouse
- depth-perception
- multimodal
- vision-language-model
dataset_info:
  features:
  - name: id_db
    dtype: string
  - name: id
    dtype: string
  - name: rgb_image
    dtype: string
  - name: depth_image
    dtype: string
  - name: rle
    list:
    - name: size
      list: int32
    - name: counts
      dtype: string
  - name: texts
    struct:
    - name: user
      dtype: string
    - name: assistant
      dtype: string
  - name: category
    dtype: string
  - name: normalized_answer
    dtype: string
  - name: dataset_name
    dtype: string
  splits:
  - name: train
    num_bytes: 2647453621
    num_examples: 499083
  - name: validation
    num_bytes: 13372284
    num_examples: 1942
  download_size: 1428260287
  dataset_size: 2660825905
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---

# SmolRGPT Dataset: Efficient Spatial Reasoning for Warehouse Environments

This repository hosts the **Spatial Warehouse Dataset**, a key component of the research presented in:

* **Paper:** [SmolRGPT: Efficient Spatial Reasoning for Warehouse Environments with 600M Parameters](https://huggingface.co/papers/2509.15490)
* **Code:** [https://github.com/abtraore/SmolRGPT](https://github.com/abtraore/SmolRGPT)

## Abstract

Recent advances in vision-language models (VLMs) have enabled powerful multimodal reasoning, but state-of-the-art approaches typically rely on extremely large models with prohibitive computational and memory requirements. This makes their deployment challenging in resource-constrained environments such as warehouses, robotics, and industrial applications, where both efficiency and robust spatial understanding are critical. In this work, we present SmolRGPT, a compact vision-language architecture that explicitly incorporates region-level spatial reasoning by integrating both RGB and depth cues. SmolRGPT employs a three-stage curriculum that progressively aligns visual and language features, enables spatial relationship understanding, and adapts to task-specific datasets. We demonstrate that with only 600M parameters, SmolRGPT achieves competitive results on challenging warehouse spatial reasoning benchmarks, matching or exceeding the performance of much larger alternatives. These findings highlight the potential for efficient, deployable multimodal intelligence in real-world settings without sacrificing core spatial reasoning capabilities. The experiment code is available at https://github.com/abtraore/SmolRGPT.

## Dataset Overview

The Spatial Warehouse Dataset is a benchmark for challenging warehouse spatial reasoning tasks, integrating both RGB and depth cues for explicit region-level spatial understanding. It is designed to support the training and evaluation of models like SmolRGPT, enabling efficient multimodal intelligence in resource-constrained environments such as warehouses and robotics.
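
For a quick look at the annotations, the splits can be loaded with the `datasets` library. The sketch below assumes the schema listed in the card header; the `rgb_image` and `depth_image` columns are plain strings, presumably file references that are resolved against the image archives prepared in the next section.

```python
from datasets import load_dataset

# Load the annotation splits (the RGB/depth images themselves are
# downloaded separately, as described in the next section).
ds = load_dataset("Abdrah/warehouse-rgbd-smolRGPT")
print(ds)  # DatasetDict with 'train' and 'validation' splits

sample = ds["train"][0]
print(sample["rgb_image"], sample["depth_image"])       # image references (strings)
print(sample["texts"]["user"])                          # question about the scene
print(sample["texts"]["assistant"])                     # spatial-reasoning answer
print(sample["category"], sample["normalized_answer"])
```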

## Data Download and Preparation

To use this dataset, you first need to clone it from Hugging Face using Git LFS, and then download and untar the RGB and depth images from the original repository, as instructed in the SmolRGPT GitHub repository.

1. **Clone the dataset:**

   ```bash
   git lfs install  # Make sure git-lfs is installed (https://git-lfs.com)
   git clone https://huggingface.co/datasets/Abdrah/warehouse-rgbd-smolRGPT
   ```

2. **Download and untar RGB and Depth images:**

   ```bash
   # You can also use `huggingface-cli download`
   git clone https://huggingface.co/datasets/nvidia/PhysicalAI-Spatial-Intelligence-Warehouse
   cd PhysicalAI-Spatial-Intelligence-Warehouse

   # Untar images for train/test subsets
   for dir in train test; do
     for subdir in images depths; do
       if [ -d "$dir/$subdir" ]; then
         echo "Processing $dir/$subdir"
         cd "$dir/$subdir"
         # Extract each chunk separately; a single `tar -xzf chunk_*.tar.gz`
         # would treat the extra glob matches as archive member names.
         for chunk in chunk_*.tar.gz; do
           tar -xzf "$chunk"
         done
         # rm chunk_*.tar.gz
         cd ../..
       fi
     done
   done
   ```
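
Each sample also carries region masks in the `rle` column, a list of `{size, counts}` records. Assuming these follow the COCO run-length-encoding convention (an assumption, not something stated on this card), they can be decoded into binary masks with `pycocotools`, for example:

```python
import numpy as np
from datasets import load_dataset
from pycocotools import mask as mask_utils

ds = load_dataset("Abdrah/warehouse-rgbd-smolRGPT", split="validation")
sample = ds[0]

rles = sample["rle"]
# Depending on the `datasets` version, this is either a list of dicts or a
# dict of lists; normalize to a list of {size, counts} dicts.
if isinstance(rles, dict):
    rles = [dict(zip(rles.keys(), vals)) for vals in zip(*rles.values())]

# Decode each region mask (assuming COCO-style RLE, where `size` is
# [height, width] and `counts` is the compressed count string).
masks = [
    mask_utils.decode({"size": r["size"], "counts": r["counts"].encode("utf-8")})
    for r in rles
]
for i, m in enumerate(masks):
    print(f"region {i}: shape={m.shape}, area={int(np.sum(m))}")
```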

## Citation

```bibtex
@misc{traore2025smolrgptefficientspatialreasoning,
  title={SmolRGPT: Efficient Spatial Reasoning for Warehouse Environments with 600M Parameters},
  author={Abdarahmane Traore and Éric Hervet and Andy Couturier},
  year={2025},
  eprint={2509.15490},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2509.15490},
}
```