---
license: cc-by-4.0
task_categories:
- object-detection
language:
- en
tags:
- computer-vision
- object-detection
- yolo
- virtual-reality
- vr
- accessibility
- social-vr
pretty_name: DISCOVR - Virtual Reality UI Object Detection Dataset
size_categories:
- 10K<n<100K
dataset_info:
  features:
  - name: image
    dtype: image
  - name: objects
    struct:
    - name: class_id
      list: int64
    - name: center_x
      list: float32
    - name: center_y
      list: float32
    - name: width
      list: float32
    - name: height
      list: float32
  splits:
  - name: train
    num_bytes: 536592914
    num_examples: 15207
  - name: test
    num_bytes: 29938152
    num_examples: 839
  - name: validation
    num_bytes: 59182849
    num_examples: 1645
  download_size: 613839753
  dataset_size: 625713915
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: validation
    path: data/validation-*
---
# DIgital Social Context Objects in VR (DISCOVR): A Social Virtual Reality Object Detection Dataset
|
## Dataset Description
|
DISCOVR is an object detection dataset for identifying user interface elements and interactive objects in virtual reality (VR) and social VR environments. The dataset contains **17,691 annotated images** across **30 object classes** commonly found in 17 popular social VR applications and VR demos.
|
This dataset is designed to support research in VR accessibility, automatic UI analysis, and assistive technologies for virtual environments.
|
### The entire dataset is available as a single download at https://huggingface.co/datasets/UWMadAbility/DISCOVR/blob/main/dataset.zip
|
### Pretrained YOLOv8 weights are available at https://huggingface.co/UWMadAbility/VRSight
|
### If you use DISCOVR in your work, please cite VRSight, the work for which it was developed:
|
```bibtex
@inproceedings{killough2025vrsight,
  title={VRSight: An AI-Driven Scene Description System to Improve Virtual Reality Accessibility for Blind People},
  author={Killough, Daniel and Feng, Justin and Ching, Zheng Xue and Wang, Daniel and Dyava, Rithvik and Tian, Yapeng and Zhao, Yuhang},
  booktitle={Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology},
  pages={1--17},
  year={2025}
}
```

---
|
### Dataset Summary
|
- **Total Images:** 17,691
  - Training: 15,207 images
  - Validation: 1,645 images
  - Test: 839 images
- **Classes:** 30 object categories
- **Format:** YOLOv8
- **License:** CC BY 4.0
|
## Object Classes

The dataset includes 30 classes of VR UI elements and interactive objects:
|
| ID | Class Name | Description |
|----|------------|-------------|
| 0 | avatar | User representations (human avatars) |
| 1 | avatar-nonhuman | Non-human avatar representations |
| 2 | button | Interactive buttons |
| 3 | campfire | Campfire objects (social gathering points) |
| 4 | chat box | Text chat interface elements |
| 5 | chat bubble | Speech/thought bubbles |
| 6 | controller | VR controller representations |
| 7 | dashboard | VR OS dashboard |
| 8 | guardian | Boundary/guardian system indicators (blue grid/plus signs) |
| 9 | hand | Hand representations |
| 10 | hud | Heads-up display elements |
| 11 | indicator-mute | Mute status indicators |
| 12 | interactable | Generic interactable objects |
| 13 | locomotion-target | Movement/teleportation targets |
| 14 | menu | Menu interfaces |
| 15 | out of bounds | Out-of-bounds warnings (red circle) |
| 16 | portal | Portal/doorway objects |
| 17 | progress bar | Progress indicators |
| 18 | seat-multiple | Multi-person seating |
| 19 | seat-single | Single-person seating |
| 20 | sign-graphic | Graphical signs |
| 21 | sign-text | Text-based signs |
| 22 | spawner | Object spawning points |
| 23 | table | Tables and surfaces |
| 24 | target | Target/aim points |
| 25 | ui-graphic | Graphical UI elements |
| 26 | ui-text | Text UI elements |
| 27 | watch | Watch/time displays |
| 28 | writing surface | Whiteboards/drawable surfaces |
| 29 | writing utensil | Drawing/writing tools |
|
## Dataset Structure
|
```
DISCOVR/
├── train/
│   ├── images/      # 15,207 training images (.jpg)
│   └── labels/      # YOLO format annotations (.txt)
├── validation/
│   ├── images/      # 1,645 validation images
│   └── labels/      # YOLO format annotations
├── test/
│   ├── images/      # 839 test images
│   └── labels/      # YOLO format annotations
└── data.yaml        # Dataset configuration file
```
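After unzipping, the layout above can be sanity-checked by counting image/label files per split. The `split_counts` helper below is an illustrative sketch, not part of the dataset; it assumes the tree shown above:

```python
from pathlib import Path

def split_counts(root):
    """Count images and YOLO label files in each split of an unzipped DISCOVR tree."""
    counts = {}
    for split in ('train', 'validation', 'test'):
        images = list((Path(root) / split / 'images').glob('*.jpg'))
        labels = list((Path(root) / split / 'labels').glob('*.txt'))
        counts[split] = (len(images), len(labels))
    return counts

# Expected for a complete download:
# train (15207, 15207), validation (1645, 1645), test (839, 839)
```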
|
### Annotation Format

Annotations are in YOLO format with normalized coordinates:

```
<class_id> <center_x> <center_y> <width> <height>
```

All coordinates are normalized to the [0, 1] range relative to the image dimensions.
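A label line can be decoded back to pixel coordinates once the image size is known. The `yolo_to_pixels` helper below is a minimal sketch (the name is illustrative, not part of the dataset):

```python
def yolo_to_pixels(line, img_width, img_height):
    """Parse one YOLO label line; return (class_id, (x_min, y_min, w, h)) in pixels."""
    class_id, cx, cy, w, h = line.split()
    cx, w = float(cx) * img_width, float(w) * img_width
    cy, h = float(cy) * img_height, float(h) * img_height
    # The stored coordinates are the box center; shift to the top-left corner
    return int(class_id), (cx - w / 2, cy - h / 2, w, h)

# e.g. a centered "button" (class 2) spanning a quarter of the width, half the height
yolo_to_pixels("2 0.5 0.5 0.25 0.5", 640, 480)  # → (2, (240.0, 120.0, 160.0, 240.0))
```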
|
---
|
## Usage

### With Hugging Face Datasets
|
```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("UWMadAbility/DISCOVR")

# Access individual splits
train_data = dataset['train']
val_data = dataset['validation']
test_data = dataset['test']

# Example: get the first training image and its annotations
sample = train_data[0]
image = sample['image']
objects = sample['objects']

print(f"Number of objects: {len(objects['class_id'])}")
print(f"Class IDs: {objects['class_id']}")
print(f"Bounding boxes: {list(zip(objects['center_x'], objects['center_y'], objects['width'], objects['height']))}")
```
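To inspect a sample visually, the per-field lists can be zipped into boxes and drawn with Pillow. The `draw_boxes` helper below is illustrative, not part of the dataset API:

```python
from PIL import ImageDraw

def draw_boxes(image, objects):
    """Draw the dataset's normalized YOLO boxes onto a PIL image, in place."""
    draw = ImageDraw.Draw(image)
    w, h = image.size
    boxes = zip(objects['class_id'], objects['center_x'],
                objects['center_y'], objects['width'], objects['height'])
    for class_id, cx, cy, bw, bh in boxes:
        # Convert normalized center/size to pixel corner coordinates
        x0 = (cx - bw / 2) * w
        y0 = (cy - bh / 2) * h
        x1 = (cx + bw / 2) * w
        y1 = (cy + bh / 2) * h
        draw.rectangle([x0, y0, x1, y1], outline='red', width=2)
    return image
```

For example, `draw_boxes(sample['image'], sample['objects'])` returns the first training image with its annotations outlined.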
|
### With YOLOv8/Ultralytics

First, download the dataset and create a `data.yaml` file:
|
```yaml
path: ./DISCOVR
train: train/images
val: validation/images
test: test/images

nc: 30
names:
  0: avatar
  1: avatar-nonhuman
  2: button
  3: campfire
  4: chat box
  5: chat bubble
  6: controller
  7: dashboard
  8: guardian
  9: hand
  10: hud
  11: indicator-mute
  12: interactable
  13: locomotion-target
  14: menu
  15: out of bounds
  16: portal
  17: progress bar
  18: seat-multiple
  19: seat-single
  20: sign-graphic
  21: sign-text
  22: spawner
  23: table
  24: target
  25: ui-graphic
  26: ui-text
  27: watch
  28: writing surface
  29: writing utensil
```
|
Then train a model:
|
```python
from ultralytics import YOLO

# Load a pretrained model
model = YOLO('yolov8n.pt')

# Train the model
results = model.train(
    data='data.yaml',
    epochs=100,
    imgsz=640,
    batch=16
)

# Validate the model
metrics = model.val()

# Make predictions
results = model.predict('path/to/vr_image.jpg')
```
|
### With Transformers (DETR, etc.)
|
```python
from datasets import load_dataset
from transformers import AutoImageProcessor, AutoModelForObjectDetection

# Load dataset
dataset = load_dataset("UWMadAbility/DISCOVR")

# Load model and processor
processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50")

# Process image
sample = dataset['train'][0]
inputs = processor(images=sample['image'], return_tensors="pt")

# Note: you'll need to convert YOLO format to COCO format for DETR
# YOLO: (center_x, center_y, width, height), normalized
# COCO: (x_min, y_min, width, height), in pixels
```
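The conversion noted in the comments above can be sketched as a small helper (the name `yolo_to_coco` is illustrative):

```python
def yolo_to_coco(box, img_width, img_height):
    """Convert a normalized YOLO box (cx, cy, w, h) to a COCO pixel box (x_min, y_min, w, h)."""
    cx, cy, w, h = box
    return [
        (cx - w / 2) * img_width,   # x_min: left edge in pixels
        (cy - h / 2) * img_height,  # y_min: top edge in pixels
        w * img_width,              # width in pixels
        h * img_height,             # height in pixels
    ]

# A centered box a quarter of the width and half the height of a 640x480 image
yolo_to_coco((0.5, 0.5, 0.25, 0.5), 640, 480)  # → [240.0, 120.0, 160.0, 240.0]
```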
|
---
|
## Applications

This dataset can be used for:
|
- **VR Accessibility Research**: Automatically detecting and describing UI elements for users with disabilities
- **UI/UX Analysis**: Analyzing VR interface design patterns
- **Assistive Technologies**: Building screen readers and navigation aids for VR
- **Automatic Testing**: Testing VR applications for UI consistency
- **Content Moderation**: Detecting inappropriate content in social VR spaces
- **User Behavior Research**: Understanding how users interact with VR interfaces
|
## Citation

If you use this dataset in your research, please cite VRSight, the publication for which DISCOVR was developed:
|
```bibtex
@inproceedings{killough2025vrsight,
  title={VRSight: An AI-Driven Scene Description System to Improve Virtual Reality Accessibility for Blind People},
  author={Killough, Daniel and Feng, Justin and Ching, Zheng Xue and Wang, Daniel and Dyava, Rithvik and Tian, Yapeng and Zhao, Yuhang},
  booktitle={Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology},
  pages={1--17},
  year={2025}
}
```
|
## License

This dataset is released under the **Creative Commons Attribution 4.0 International (CC BY 4.0)** license.
|
You are free to:
- Share — copy and redistribute the material
- Adapt — remix, transform, and build upon the material

Under the following terms:
- Attribution — You must give appropriate credit
|
## Contact

For questions, issues, or collaborations:
- Main Codebase: https://github.com/MadisonAbilityLab/VRSight
- This Repository: [UWMadAbility/DISCOVR](https://huggingface.co/datasets/UWMadAbility/DISCOVR)
- Organization: UW-Madison Ability Lab
|
## Acknowledgments

This dataset was created by Daniel K., Justin, Daniel W., ZX, Ricky, Abhinav, and the MadAbility Lab at the University of Wisconsin-Madison to support research in VR accessibility and assistive technologies.
|