Enhance dataset card: Add task category, links, detailed info, and usage

#1
by nielsr (HF Staff), opened
Files changed (1)
  1. README.md +95 -5
README.md CHANGED
@@ -1,13 +1,103 @@
  ---
  language:
  - en
  ---
- # 3D-MOOD

- <!-- Provide a quick summary of the dataset. -->

- This dataset is for [3D-MOOD](https://arxiv.org/abs/2507.23567).

- It contains the selected images and annotations from [Argoverse 2](https://www.argoverse.org/av2.html) and [ScanNetV2](http://www.scan-net.org/), and also the depth GT for [Omni3D](https://github.com/facebookresearch/omni3d/blob/main/DATA.md) data.

- We provide the HDF5 data and annotation in JSON format.
  ---
  language:
  - en
+ task_categories:
+ - image-to-3d
+ tags:
+ - 3d-object-detection
+ - monocular
+ - open-set
  ---

+ # 3D-MOOD Dataset

+ <div align="center">
+ <img src="https://github.com/cvg/3D-MOOD/blob/main/assets/overview.png" width="100%" alt="3D-MOOD Overview" align="center">
+ </div>

+ This dataset is for [3D-MOOD: Lifting 2D to 3D for Monocular Open-Set Object Detection](https://arxiv.org/abs/2507.23567). It contains selected images and annotations from [Argoverse 2](https://www.argoverse.org/av2.html) and [ScanNetV2](http://www.scan-net.org/), together with the depth ground truth (GT) for the [Omni3D](https://github.com/facebookresearch/omni3d/blob/main/DATA.md) data. Images are provided as HDF5 files, and annotations are provided in JSON format.
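Since the annotations ship as JSON, they can be inspected with the standard library alone. A minimal sketch (the example path and any keys you see are illustrative; inspect your own files to learn the actual schema):

```python
import json

def load_annotations(path):
    """Load a JSON annotation file and return the parsed object."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

# Hypothetical example -- adjust the path to your local copy:
# anns = load_annotations("data/scannet/annotations/val.json")
# print(list(anns)[:5] if isinstance(anns, dict) else len(anns))
```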

+ * **Paper:** [3D-MOOD: Lifting 2D to 3D for Monocular Open-Set Object Detection](https://arxiv.org/abs/2507.23567)
+ * **Project Page:** [https://royyang0714.github.io/3D-MOOD](https://royyang0714.github.io/3D-MOOD)
+ * **Code:** [https://github.com/cvg/3D-MOOD](https://github.com/cvg/3D-MOOD)
+
+ ## Introduction
+
+ Monocular 3D object detection is valuable for applications such as robotics and AR/VR. This dataset is associated with 3D-MOOD, the first end-to-end monocular open-set 3D object detector. The approach lifts open-set 2D detections into 3D space, enabling joint end-to-end training of the 2D and 3D tasks for better overall performance.
+
+ ## Data Preparation
+
+ We provide the HDF5 files and annotations for ScanNet v2 and Argoverse 2, as well as the depth GT for the Omni3D datasets. For training and testing with Omni3D, please refer to the [DATA guide](https://github.com/cvg/3D-MOOD/blob/main/docs/DATA.md) in the GitHub repository to set up the Omni3D data.
+
+ The final data folder structure should look like this:
+
+ ```
+ REPO_ROOT
+ ├── data
+ │   ├── omni3d
+ │   │   └── annotations
+ ├── KITTI_object
+ ├── KITTI_object_depth
+ ├── nuscenes
+ ├── nuscenes_depth
+ ├── objectron
+ ├── objectron_depth
+ ├── SUNRGBD
+ ├── ARKitScenes
+ ├── ARKitScenes_depth
+ ├── hypersim
+ ├── hypersim_depth
+ ├── argoverse2
+ │   ├── annotations
+ │   └── val.hdf5
+ └── scannet
+     ├── annotations
+     └── val.hdf5
+ ```
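After downloading, a quick sanity check against the tree above can save debugging time later. A minimal sketch (the entry list is copied from the tree; `repo_root` is whatever directory you use as REPO_ROOT):

```python
from pathlib import Path

# Expected entries relative to REPO_ROOT, copied from the tree above.
EXPECTED = [
    "data/omni3d/annotations",
    "KITTI_object", "KITTI_object_depth",
    "nuscenes", "nuscenes_depth",
    "objectron", "objectron_depth",
    "SUNRGBD",
    "ARKitScenes", "ARKitScenes_depth",
    "hypersim", "hypersim_depth",
    "argoverse2/annotations", "argoverse2/val.hdf5",
    "scannet/annotations", "scannet/val.hdf5",
]

def missing_entries(repo_root):
    """Return the expected paths that do not exist under repo_root."""
    root = Path(repo_root)
    return [p for p in EXPECTED if not (root / p).exists()]

# Example -- an empty list means the layout is complete:
# print(missing_entries("."))
```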
+
+ By default, our provided configs use `HDF5` as the data backend. You can convert each data folder with this [script](https://github.com/SysCV/vis4d/blob/main/vis4d/data/io/to_hdf5.py) to generate the HDF5 files, or simply change `data_backend` in the configs to `FileBackend`.
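If you convert folders yourself and want to confirm a resulting file is readable, a minimal sketch using `h5py` (a third-party package, not part of the steps above; the example path is illustrative, and the internal key layout depends on the converter you used):

```python
import h5py

def list_hdf5_datasets(path, limit=10):
    """Return up to `limit` dataset keys from an HDF5 file."""
    keys = []

    def visit(name, obj):
        # visititems walks groups and datasets; keep dataset names only.
        if isinstance(obj, h5py.Dataset) and len(keys) < limit:
            keys.append(name)

    with h5py.File(path, "r") as f:
        f.visititems(visit)
    return keys

# Example -- adjust the path to a converted file:
# print(list_hdf5_datasets("scannet/val.hdf5"))
```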
+
+ ## Sample Usage
+
+ We provide [`demo.py`](https://github.com/cvg/3D-MOOD/blob/main/scripts/demo.py) so you can verify that the installation works.
+
+ First, install the necessary packages (for full installation instructions, refer to the [GitHub repository](https://github.com/cvg/3D-MOOD#installation)):
+
+ ```bash
+ conda create -n opendet3d python=3.11 -y
+
+ conda activate opendet3d
+
+ # Install Vis4D.
+ # This should also install PyTorch with CUDA support, but please verify.
+ pip install vis4d==1.0.0
+
+ # Install the CUDA ops.
+ pip install git+https://github.com/SysCV/vis4d_cuda_ops.git --no-build-isolation --no-cache-dir
+
+ # Install 3D-MOOD.
+ pip install -v -e .
+ ```
+
+ Then, run the demo script:
+
+ ```bash
+ python scripts/demo.py
+ ```
+
+ It will save the prediction visualization to `assets/demo/output.png`.
+
+ You can also try the live demo on [Hugging Face Spaces](https://huggingface.co/spaces/RoyYang0714/3D-MOOD)!
+
+ ## Citation
+
+ If you find our work useful in your research, please consider citing our publication:
+
+ ```bibtex
+ @article{yang20253d,
+   title={3D-MOOD: Lifting 2D to 3D for Monocular Open-Set Object Detection},
+   author={Yang, Yung-Hsu and Piccinelli, Luigi and Segu, Mattia and Li, Siyuan and Huang, Rui and Fu, Yuqian and Pollefeys, Marc and Blum, Hermann and Bauer, Zuria},
+   journal={arXiv preprint arXiv:2507.23567},
+   year={2025}
+ }
+ ```