hgupt3 committed
Commit 142775f · verified · 1 Parent(s): c4160bf

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -19,11 +19,11 @@ The SITR dataset consists of three main parts:
  1. **Simulated Tactile Dataset**
  A large-scale synthetic dataset generated using physics-based rendering (PBR) in Blender. This dataset spans 100 unique simulated sensor configurations with tactile signals, calibration images, and corresponding surface normal maps. It includes 10K unique contact configurations generated using 50 high-resolution 3D meshes of common household objects, resulting in a pre-training dataset of 1M samples.

- 2. **Real-World Tactile Dataset – Classification**
+ 2. **Classification Tactile Dataset**
  Data collected from 7 real sensors (including variations of GelSight Mini, GelSight Hex, GelSight Wedge, and DIGIT). For the classification task, 20 objects are pressed against each sensor at various poses and depths, yielding 1K tactile images per object (140K images in total, 20K per sensor). Only 16 of the objects are used in our classification experiments; the remaining items were deemed unsuitable (a decision made before experimentation). The dataset is provided as separate train (80%) and test (20%) sets.

- 3. **Real-World Tactile Dataset – Pose Estimation**
- For pose estimation, tactile signals are recorded using a modified Ender-3 Pro 3D printer equipped with 3D-printed indenters. This setup provides accurate ground truth (x, y, z coordinates) for contact points. Data were collected for 6 indenters across 4 sensors, resulting in 1K samples per indenter (24K images in total, 6K per sensor). This dataset is also organized into train and test sets.
+ 3. **Pose Estimation Tactile Dataset**
+ For pose estimation, tactile signals are recorded using a modified Ender-3 Pro 3D printer equipped with 3D-printed indenters. This setup provides accurate ground truth (x, y, z coordinates) for contact points. Data were collected for 6 indenters across 4 sensors, resulting in 1K samples per indenter (24K images in total, 6K per sensor). This dataset is also organized into train (80%) and test (20%) sets.

  ---

@@ -93,7 +93,7 @@ data_root/
  └── ...
  ```

- ### 2. Real-World Classification Dataset
+ ### 2. Classification Dataset

  Each of the `train_set/` and `test_set/` directories follows this structure:

@@ -113,7 +113,7 @@ train_set/ (or test_set/)
  └── ...
  ```

- ### 3. Real-World Pose Estimation Dataset
+ ### 3. Pose Estimation Dataset

  Similarly, each of the `train_set/` and `test_set/` directories is structured as follows:

 
 
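As a quick sanity check of the 80/20 train/test split described in the hunks above, here is a minimal Python sketch that counts images under the `train_set/` and `test_set/` directories. It is not part of the dataset's tooling: the root path and image extensions are assumptions, and nothing about the folder layout beyond the existence of the two split directories is presumed. The same check applies to the pose estimation dataset by pointing the root at its directory.

```python
import os

# Hypothetical local path to one of the real-world datasets (classification or
# pose estimation); adjust to wherever the data was downloaded.
DATA_ROOT = "path/to/sitr_classification"

# Image extensions are an assumption; the README does not state the file format.
IMAGE_EXTS = (".png", ".jpg", ".jpeg")

def count_images(split_dir: str) -> int:
    """Recursively count image files under a split directory."""
    total = 0
    for _dirpath, _dirnames, filenames in os.walk(split_dir):
        total += sum(name.lower().endswith(IMAGE_EXTS) for name in filenames)
    return total

if __name__ == "__main__":
    train = count_images(os.path.join(DATA_ROOT, "train_set"))
    test = count_images(os.path.join(DATA_ROOT, "test_set"))
    frac = test / max(train + test, 1)
    # With the split described above, the test fraction should be roughly 0.20.
    print(f"train: {train}  test: {test}  test fraction: {frac:.2f}")
```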