---
license: bsd-3-clause
dataset_info:
  features:
  - name: cond_exp_y
    dtype: float64
  - name: m1
    dtype: float64
  - name: g1
    dtype: float64
  - name: l1
    dtype: float64
  - name: Y
    dtype: float64
  - name: D_1
    dtype: float64
  - name: carat
    dtype: float64
  - name: depth
    dtype: float64
  - name: table
    dtype: float64
  - name: price
    dtype: float64
  - name: x
    dtype: float64
  - name: y
    dtype: float64
  - name: z
    dtype: float64
  - name: review
    dtype: string
  - name: sentiment
    dtype: string
  - name: label
    dtype: int64
  - name: cut_Good
    dtype: bool
  - name: cut_Ideal
    dtype: bool
  - name: cut_Premium
    dtype: bool
  - name: cut_Very Good
    dtype: bool
  - name: color_E
    dtype: bool
  - name: color_F
    dtype: bool
  - name: color_G
    dtype: bool
  - name: color_H
    dtype: bool
  - name: color_I
    dtype: bool
  - name: color_J
    dtype: bool
  - name: clarity_IF
    dtype: bool
  - name: clarity_SI1
    dtype: bool
  - name: clarity_SI2
    dtype: bool
  - name: clarity_VS1
    dtype: bool
  - name: clarity_VS2
    dtype: bool
  - name: clarity_VVS1
    dtype: bool
  - name: clarity_VVS2
    dtype: bool
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 185209908.0
    num_examples: 50000
  download_size: 174280492
  dataset_size: 185209908.0
tags:
- Causal Inference
size_categories:
- 10K<n<100K
---
# Dataset Card
Semi-synthetic dataset with multimodal confounding.
The dataset is generated according to the description in [DoubleMLDeep: Estimation of Causal Effects with Multimodal Data](https://arxiv.org/abs/2402.01785).
## Dataset Details
### Dataset Description & Usage
The dataset is a semi-synthetic benchmark for treatment effect estimation with multimodal confounding. The outcome
variable `Y` is generated according to the partially linear model
$$
Y = \theta_0 D_1 + g_1(X) + \varepsilon
$$
with a constant treatment effect of
$$\theta_0=0.5.$$
The target variables `sentiment`, `label` and `price` are used to generate credible confounding by affecting both `Y` and `D_1`.
The confounding is constructed to be negative, so estimates of the treatment effect that do not fully adjust for it will generally fall below `0.5`.
For a more detailed description of the data generating process, see [DoubleMLDeep: Estimation of Causal Effects with Multimodal Data](https://arxiv.org/abs/2402.01785).
The dataset includes the corresponding target variables `sentiment`, `label` and `price` as well as oracle values such as `cond_exp_y`, `l1`, `m1` and `g1`.
These values are included for convenience, e.g. for benchmarking against optimal estimates, but should not be used as model inputs.
Further, several tabular features are highly correlated, so it may be helpful to drop features such as `x`, `y` and `z`.
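As a minimal loading sketch (assuming the `datasets` library; the repository id is a placeholder to be replaced with this dataset's hub id), the oracle and target columns can be separated from the model inputs:
```
from datasets import load_dataset

# Placeholder repository id; replace with this dataset's hub id.
dataset = load_dataset("<this-repo-id>", split="train")

# Targets and oracle values: useful for benchmarking, not as model inputs.
ORACLE_COLUMNS = ["cond_exp_y", "l1", "m1", "g1", "sentiment", "label", "price"]
# Highly correlated with `carat`; optionally dropped.
REDUNDANT_COLUMNS = ["x", "y", "z"]

oracle = dataset.select_columns(ORACLE_COLUMNS)
model_inputs = dataset.remove_columns(ORACLE_COLUMNS + REDUNDANT_COLUMNS)
print(model_inputs.column_names)
```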
An example looks as follows:
```
{'cond_exp_y': 2.367230022801451,
'm1': -2.7978920933712907,
'g1': 4.015536418538365,
'l1': 2.61659037185272,
'Y': 3.091541535115522,
'D_1': -3.2966127914738275,
'carat': 0.5247285289349821,
'depth': 58.7,
'table': 59.0,
'price': 9.7161333532141,
'x': 7.87,
'y': 7.78,
'z': 4.59,
'review': "I really liked this Summerslam due to the look of the arena, the curtains and just the look overall was interesting to me for some reason. Anyways, this could have been one of the best Summerslam's ever if the WWF didn't have Lex Luger in the main event against Yokozuna, now for it's time it was ok to have a huge fat man vs a strong man but I'm glad times have changed. It was a terrible main event just like every match Luger is in is terrible. Other matches on the card were Razor Ramon vs Ted Dibiase, Steiner Brothers vs Heavenly Bodies, Shawn Michaels vs Curt Hening, this was the event where Shawn named his big monster of a body guard Diesel, IRS vs 1-2-3 Kid, Bret Hart first takes on Doink then takes on Jerry Lawler and stuff with the Harts and Lawler was always very interesting, then Ludvig Borga destroyed Marty Jannetty, Undertaker took on Giant Gonzalez in another terrible match, The Smoking Gunns and Tatanka took on Bam Bam Bigelow and the Headshrinkers, and Yokozuna defended the world title against Lex Luger this match was boring and it has a terrible ending. However it deserves 8/10",
'sentiment': 'positive',
'label': 6,
'cut_Good': False,
'cut_Ideal': False,
'cut_Premium': True,
'cut_Very Good': False,
'color_E': False,
'color_F': True,
'color_G': False,
'color_H': False,
'color_I': False,
'color_J': False,
'clarity_IF': False,
'clarity_SI1': False,
'clarity_SI2': False,
'clarity_VS1': False,
'clarity_VS2': True,
'clarity_VVS1': False,
'clarity_VVS2': False,
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32>}
```
### Dataset Sources
The dataset is based on three commonly used datasets:
- [Diamonds dataset](https://www.kaggle.com/datasets/shivam2503/diamonds)
- [IMDB dataset](https://huggingface.co/datasets/imdb)
- [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html)
The versions used to create this dataset can be found on Kaggle:
- [Diamonds dataset (Kaggle)](https://www.kaggle.com/datasets/shivam2503/diamonds)
- [IMDB dataset (Kaggle)](https://www.kaggle.com/datasets/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews?select=IMDB+Dataset.csv)
- [CIFAR-10 dataset (Kaggle)](https://www.kaggle.com/datasets/swaroopkml/cifar10-pngs-in-folders)
The original citations can be found below.
### Dataset Preprocessing
All datasets are subsampled to equal size (`50,000`). The CIFAR-10 data is based on the training split, whereas the IMDB data combines the train and test splits
to obtain `50,000` observations. The labels of the CIFAR-10 data are set to integer values `0` to `9`.
The Diamonds dataset is cleaned (rows with `x`, `y` or `z` equal to `0` are removed) and outliers are dropped (such that `45<depth<75`, `40<table<80`, `x<30`, `y<30` and `2<z<30`).
The remaining `53,907` observations are downsampled to `50,000` observations. Further, `price` and `carat` are transformed with the natural logarithm, and `cut`,
`color` and `clarity` are dummy coded (with baselines `Fair`, `D` and `I1`).
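The following sketch reproduces the described Diamonds preprocessing with `pandas` (column names follow the original Kaggle CSV; the sampling seed is an assumption, not the original one):
```
import numpy as np
import pandas as pd

df = pd.read_csv("diamonds.csv")  # original Kaggle file

# Remove degenerate rows and outliers as described above.
df = df[(df["x"] > 0) & (df["y"] > 0) & (df["z"] > 0)]
df = df[
    df["depth"].between(45, 75, inclusive="neither")
    & df["table"].between(40, 80, inclusive="neither")
    & (df["x"] < 30)
    & (df["y"] < 30)
    & df["z"].between(2, 30, inclusive="neither")
]

# Natural-log transform of price and carat.
df["price"] = np.log(df["price"])
df["carat"] = np.log(df["carat"])

# Dummy coding of cut, color and clarity; drop the baselines Fair, D, I1.
df = pd.get_dummies(df, columns=["cut", "color", "clarity"])
df = df.drop(columns=["cut_Fair", "color_D", "clarity_I1"])

# Downsample to 50,000 observations (seed is illustrative).
df = df.sample(n=50_000, random_state=0)
```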
## Uses
The dataset should serve as a benchmark for comparing causal inference methods for observational data under multimodal confounding.
## Dataset Structure
### Data Instances
An example instance is shown under Dataset Description & Usage above.
### Data Fields
The data fields can be divided into several categories:
- **Outcome and Treatments**
- `Y` (`float64`): Outcome of interest
- `D_1` (`float64`): Treatment value
- **Text Features**
- `review` (`string`): IMDB review text
- `sentiment` (`string`): Corresponding sentiment, either `positive` or `negative`
- **Image Features**
- `image` (`image`): CIFAR-10 image (`32x32` RGB)
- `label` (`int64`): Corresponding label from `0` to `9`
- **Tabular Features**
- `price` (`float64`): Logarithm of the price in US dollars
- `carat` (`float64`): Logarithm of the weight of the diamond
- `x` (`float64`): Length in mm
- `y` (`float64`): Width in mm
- `z` (`float64`): Depth in mm
- `depth` (`float64`): Total depth percentage
- `table` (`float64`): Width of top of diamond relative to widest point
- **Cut**: Quality of the cut (`Fair`, `Good`, `Very Good`, `Premium`, `Ideal`) (dummy coded with `Fair` as baseline)
- `cut_Good` (`bool`)
- `cut_Very Good` (`bool`)
- `cut_Premium` (`bool`)
- `cut_Ideal` (`bool`)
- **Color**: Diamond color, from `J` (worst) to `D` (best) (dummy coded with `D` as baseline)
- `color_E` (`bool`)
- `color_F` (`bool`)
- `color_G` (`bool`)
- `color_H` (`bool`)
- `color_I` (`bool`)
- `color_J` (`bool`)
- **Clarity**: Measurement of diamond clarity (`I1` (worst), `SI2`, `SI1`, `VS2`, `VS1`, `VVS2`, `VVS1`, `IF` (best)) (dummy coded with `I1` as baseline)
- `clarity_SI2` (`bool`)
- `clarity_SI1` (`bool`)
- `clarity_VS2` (`bool`)
- `clarity_VS1` (`bool`)
- `clarity_VVS2` (`bool`)
- `clarity_VVS1` (`bool`)
- `clarity_IF` (`bool`)
- **Oracle Features**
- `cond_exp_y` (`float64`): Expected value of `Y` conditional on `D_1`, `sentiment`, `label` and `price`
- `l1` (`float64`): Expected value of `Y` conditional on `sentiment`, `label` and `price`
- `m1` (`float64`): Expected value of `D_1` conditional on `sentiment`, `label` and `price`
- `g1` (`float64`): Additive component of `Y` based on `sentiment`, `label` and `price` (see Dataset Description)
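Because `l1` and `m1` are the oracle conditional expectations of `Y` and `D_1`, an infeasible partialling-out estimate of the treatment effect can be computed directly from the residuals; this follows from the partially linear model above and provides a benchmark for learned nuisance models (a sketch, not the paper's evaluation code):
```
import numpy as np

def oracle_theta(y, d, l1, m1):
    # Partialling-out with oracle nuisance values: under the partially
    # linear model, Y - l1 = theta_0 * (D_1 - m1) + noise, so a simple
    # residual-on-residual regression recovers theta_0.
    y_res = np.asarray(y) - np.asarray(l1)
    d_res = np.asarray(d) - np.asarray(m1)
    return float(np.sum(d_res * y_res) / np.sum(d_res ** 2))

# theta_hat = oracle_theta(dataset["Y"], dataset["D_1"], dataset["l1"], dataset["m1"])
# Expected to be close to the true effect of 0.5.
```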
## Limitations
Because the confounding is generated from the original labels (`sentiment`, `label`, `price`), it may not be possible to fully adjust for it using only the raw text, image and tabular features.
## Citation Information
### Dataset Citation
If you use the dataset, please cite this article:
```
@article{klaassen2024doublemldeep,
title={DoubleMLDeep: Estimation of Causal Effects with Multimodal Data},
author={Klaassen, Sven and Teichert-Kluge, Jan and Bach, Philipp and Chernozhukov, Victor and Spindler, Martin and Vijaykumar, Suhas},
journal={arXiv preprint arXiv:2402.01785},
year={2024}
}
```
### Source Dataset Citations
The three original datasets can be cited as follows.
Diamonds dataset:
```
@Book{ggplot2_book,
author = {Hadley Wickham},
title = {ggplot2: Elegant Graphics for Data Analysis},
publisher = {Springer-Verlag New York},
year = {2016},
isbn = {978-3-319-24277-4},
url = {https://ggplot2.tidyverse.org},
}
```
IMDB dataset:
```
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
title = {Learning Word Vectors for Sentiment Analysis},
booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
month = {June},
year = {2011},
address = {Portland, Oregon, USA},
publisher = {Association for Computational Linguistics},
pages = {142--150},
url = {http://www.aclweb.org/anthology/P11-1015}
}
```
CIFAR-10 dataset:
```
@TECHREPORT{Krizhevsky09learningmultiple,
author = {Alex Krizhevsky},
title = {Learning multiple layers of features from tiny images},
institution = {University of Toronto},
year = {2009}
}
```
## Dataset Card Authors
Sven Klaassen |