sanskxr02 committed
Commit 9e24abc · verified · 1 Parent(s): 551b9bb

Update README.md

Files changed (1)
  1. README.md +49 -35
README.md CHANGED
@@ -6,11 +6,10 @@ pretty_name: >-
  size_categories:
  - 100K<n<1M
  task_categories:
- - reinforcement-learning
- - robotics
- - world models
- - representation-learning
- - video-understanding
+ - reinforcement-learning
+ - representation-learning
+ - video-understanding
+
  tags:
  - egocentric
  - robotics
@@ -24,57 +23,72 @@ tags:
  ---

- # Fidelity Dynamics – Egocentric State–Action Transitions (v0)
+ # Fidelity Data Factory – Egocentric State–Action Transitions (v0)
+
- This repository contains an initial release of egocentric state–action–state′ transitions extracted from real-world worker footage by buildai.
+ This repository contains an initial release of structured state–action–state′
+ transitions extracted from real-world egocentric video.

+ The goal of this dataset is to provide early infrastructure for learning
+ dynamics and representations from large-scale human activity data.

  ## Overview
- The dataset is constructed by enriching monocular egocentric videos into structured transitions of the form:
+
+ Each data point is a short temporal transition of the form:

  (s_t, a_t, s_{t+1})

- Each transition represents a short temporal step derived from consecutive video frames.
+ Transitions are derived from monocular egocentric footage recorded in real
+ factory environments.

- This release is intended as early infrastructure for researchers exploring:
- - World models
- - Dynamics learning
- - Vision–language–action systems
- - Representation learning from human activity
+ This release does not include robot-specific signals such as torques or joint
+ states, and is intended for research and exploration rather than deployment.

  ## Data Contents
- - ~250,000 transitions
- - Real factory environments
- - Egocentric viewpoint (head- or chest-mounted cameras)

- Each transition is stored as a JSONL record.
+ - ~200k+ transitions
+ - Egocentric (head / chest-mounted) viewpoint
+ - Real industrial environments
+
+ Transitions are stored in JSONL format.

  ## Schema (Simplified)

- - `s` (state):
-   - `ego.pose`: estimated egomotion
-   - `ego.vel`: egocentric velocity
-   - `hand`: hand presence and image-space location
-   - `entities`: detected objects with bounding boxes and centers
-   - `meta`: video identifier
+ Each record contains:
+
+ - `s`:
+   - `ego_pose`
+   - `ego_velocity`
+   - `hand_state`
+   - `entities` (objects with image-space location)
+   - `meta` (video id, timestamp)

- - `a` (action):
+ - `a`:
    - `ego_delta`
    - `hand_delta`
-   - `grasp_delta`
+   - `interaction_delta`

  - `s_prime`:
    - Same structure as `s`, representing the next timestep

- All values are derived from monocular video without force, torque, or privileged robot sensors.
+ See `schema.json` for full details.

- ## Notes & Limitations
- - Monocular only
- - No force / torque / joint states
- - No task labels
- - Noise and estimation error are expected
+ ## Intended Use
+
+ This dataset may be useful for:
+ - World model research
+ - Offline RL
+ - Vision–language–action pretraining
+ - Learning dynamics from human activity
+ - Representation learning from egocentric video

- This is an early dataset released to support exploration and feedback.
+ ## Limitations
+
+ - Monocular video only
+ - No force / torque signals
+ - No task labels
+ - Contains estimation noise

  ## Credits
+
- Original video data provided by BuildAI.
- Enrichment and processing by Fidelity Dynamics.
+ Original video data provided by BuildAI.
+ Enrichment and processing by Fidelity Dynamics.
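
Since the updated card describes each JSONL record as a `(s, a, s_prime)` transition, a minimal sketch of streaming and inspecting these records is shown below. The file name `transitions.jsonl` is illustrative, and the exact nesting of sub-fields is assumed from the schema summary above; `schema.json` in the repository is the authoritative reference.

```python
import json


def iter_transitions(path):
    """Yield (s, a, s_prime) dicts from a JSONL file, one record per line."""
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            record = json.loads(line)
            # Top-level keys follow the README schema: s, a, s_prime.
            yield record["s"], record["a"], record["s_prime"]


if __name__ == "__main__":
    # "transitions.jsonl" is a hypothetical local shard name.
    for s, a, s_prime in iter_transitions("transitions.jsonl"):
        print("ego_delta:", a.get("ego_delta"))
        print("entities in s:", len(s.get("entities", [])))
        break  # inspect only the first transition
```

Reading line by line keeps memory usage flat even across the full ~200k-record release, which is convenient when filtering transitions before training.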