---
license: bsd-3-clause
dataset_info:
  features:
  - name: cond_exp_y
    dtype: float64
  - name: m1
    dtype: float64
  - name: g1
    dtype: float64
  - name: l1
    dtype: float64
  - name: 'Y'
    dtype: float64
  - name: D_1
    dtype: float64
  - name: carat
    dtype: float64
  - name: depth
    dtype: float64
  - name: table
    dtype: float64
  - name: price
    dtype: float64
  - name: x
    dtype: float64
  - name: 'y'
    dtype: float64
  - name: z
    dtype: float64
  - name: review
    dtype: string
  - name: sentiment
    dtype: string
  - name: label
    dtype: int64
  - name: cut_Good
    dtype: bool
  - name: cut_Ideal
    dtype: bool
  - name: cut_Premium
    dtype: bool
  - name: cut_Very Good
    dtype: bool
  - name: color_E
    dtype: bool
  - name: color_F
    dtype: bool
  - name: color_G
    dtype: bool
  - name: color_H
    dtype: bool
  - name: color_I
    dtype: bool
  - name: color_J
    dtype: bool
  - name: clarity_IF
    dtype: bool
  - name: clarity_SI1
    dtype: bool
  - name: clarity_SI2
    dtype: bool
  - name: clarity_VS1
    dtype: bool
  - name: clarity_VS2
    dtype: bool
  - name: clarity_VVS1
    dtype: bool
  - name: clarity_VVS2
    dtype: bool
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 185209908
    num_examples: 50000
  download_size: 174280492
  dataset_size: 185209908
tags:
- Causal Inference
size_categories:
- 10K<n<100K
---
# Dataset Card
Semi-synthetic dataset with multimodal confounding. The dataset is generated according to the description in [DoubleMLDeep: Estimation of Causal Effects with Multimodal Data](https://arxiv.org/abs/2402.01785).
## Dataset Details
### Dataset Description & Usage
The dataset is a semi-synthetic benchmark for treatment effect estimation under multimodal confounding. The outcome variable `Y` is generated according to a partially linear model

$$Y = \theta_0 D_1 + g_1 + \varepsilon$$

with a constant treatment effect of $\theta_0 = 0.5$.

The target variables `sentiment`, `label` and `price` are used to generate credible confounding by affecting both `Y` and `D_1`. This confounding is generated to be negative, such that estimates of the treatment effect should generally be smaller than $0.5$.
For a more detailed description of the data generating process, see [DoubleMLDeep: Estimation of Causal Effects with Multimodal Data](https://arxiv.org/abs/2402.01785).

The dataset includes the corresponding target variables `sentiment`, `label`, `price` as well as oracle values such as `cond_exp_y`, `l1`, `m1` and `g1`. These values are included for convenience, e.g. for benchmarking against optimal estimates, but should not be used as model inputs.
Further, several tabular features are highly correlated, so it may be helpful to drop features such as `x`, `y` and `z`.
An example looks as follows:
```python
{'cond_exp_y': 2.367230022801451,
'm1': -2.7978920933712907,
'g1': 4.015536418538365,
'l1': 2.61659037185272,
'Y': 3.091541535115522,
'D_1': -3.2966127914738275,
'carat': 0.5247285289349821,
'depth': 58.7,
'table': 59.0,
'price': 9.7161333532141,
'x': 7.87,
'y': 7.78,
'z': 4.59,
'review': "I really liked this Summerslam due to the look of the arena, the curtains and just the look overall was interesting to me for some reason. Anyways, this could have been one of the best Summerslam's ever if the WWF didn't have Lex Luger in the main event against Yokozuna, now for it's time it was ok to have a huge fat man vs a strong man but I'm glad times have changed. It was a terrible main event just like every match Luger is in is terrible. Other matches on the card were Razor Ramon vs Ted Dibiase, Steiner Brothers vs Heavenly Bodies, Shawn Michaels vs Curt Hening, this was the event where Shawn named his big monster of a body guard Diesel, IRS vs 1-2-3 Kid, Bret Hart first takes on Doink then takes on Jerry Lawler and stuff with the Harts and Lawler was always very interesting, then Ludvig Borga destroyed Marty Jannetty, Undertaker took on Giant Gonzalez in another terrible match, The Smoking Gunns and Tatanka took on Bam Bam Bigelow and the Headshrinkers, and Yokozuna defended the world title against Lex Luger this match was boring and it has a terrible ending. However it deserves 8/10",
'sentiment': 'positive',
'label': 6,
'cut_Good': False,
'cut_Ideal': False,
'cut_Premium': True,
'cut_Very Good': False,
'color_E': False,
'color_F': True,
'color_G': False,
'color_H': False,
'color_I': False,
'color_J': False,
'clarity_IF': False,
'clarity_SI1': False,
'clarity_SI2': False,
'clarity_VS1': False,
'clarity_VS2': True,
'clarity_VVS1': False,
'clarity_VVS2': False,
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=32x32>}
```
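A minimal loading sketch with the `datasets` library is shown below; the Hub identifier is a placeholder, and dropping the oracle columns (and optionally the correlated `x`, `y`, `z`) follows the usage notes above.

```python
# Minimal loading sketch; "<hub-user>/<dataset-name>" is a placeholder for
# the actual Hub identifier of this dataset.
from datasets import load_dataset

ds = load_dataset("<hub-user>/<dataset-name>", split="train")

# Oracle columns are for benchmarking only and should not be fed to the model.
model_ds = ds.remove_columns(["cond_exp_y", "l1", "m1", "g1"])

# Optionally drop the highly correlated tabular features x, y and z.
model_ds = model_ds.remove_columns(["x", "y", "z"])
```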
### Dataset Sources
The dataset is based on three commonly used datasets: the Diamonds dataset, the IMDB movie review dataset and the CIFAR-10 image dataset. The versions used to create this dataset can be found on Kaggle. The original citations can be found below.
### Dataset Preprocessing
All datasets are subsampled to be of equal size (50,000 observations). The CIFAR-10 data is based on the training dataset, whereas the IMDB data combines the train and test data to obtain 50,000 observations. The labels of the CIFAR-10 data are set to integer values 0 to 9.

The Diamonds dataset is cleaned (rows with `x`, `y` or `z` equal to 0 are removed) and outliers are dropped (such that 45 < `depth` < 75, 40 < `table` < 80, `x` < 30, `y` < 30 and 2 < `z` < 30). The remaining 53,907 observations are downsampled to the same size of 50,000 observations. Further, `price` and `carat` are transformed with the natural logarithm, and `cut`, `color` and `clarity` are dummy coded (with baselines `Fair`, `D` and `I1`).
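As a rough illustration, the described Diamonds preprocessing could look as follows with `pandas`; the file name, random seed and exact filtering code are assumptions, only the stated rules are taken from this card.

```python
# Sketch of the described Diamonds preprocessing (assumes a raw Kaggle-style
# diamonds.csv with columns carat, cut, color, clarity, depth, table, price,
# x, y, z; file name and seed are illustrative).
import numpy as np
import pandas as pd

df = pd.read_csv("diamonds.csv")

# Remove rows with x, y or z equal to 0.
df = df[(df[["x", "y", "z"]] != 0).all(axis=1)]

# Drop outliers: 45 < depth < 75, 40 < table < 80, x < 30, y < 30, 2 < z < 30.
df = df[(df["depth"] > 45) & (df["depth"] < 75)]
df = df[(df["table"] > 40) & (df["table"] < 80)]
df = df[(df["x"] < 30) & (df["y"] < 30) & (df["z"] > 2) & (df["z"] < 30)]

# Log-transform price and carat.
df["price"] = np.log(df["price"])
df["carat"] = np.log(df["carat"])

# Dummy code cut, color and clarity; dropping the alphabetically first level
# yields the baselines Fair, D and I1.
df = pd.get_dummies(df, columns=["cut", "color", "clarity"], drop_first=True)

# Downsample to 50,000 observations.
df = df.sample(n=50_000, random_state=0).reset_index(drop=True)
```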
## Uses
The dataset should serve as a benchmark to compare different causal inference methods for observational data under multimodal confounding.
## Dataset Structure
### Data Instances
An example instance is shown in the Dataset Description above.
### Data Fields
The data fields can be divided into several categories:
#### Outcome and Treatments
- `Y` (`float64`): Outcome of interest
- `D_1` (`float64`): Treatment value
#### Text Features
- `review` (`string`): IMDB review text
- `sentiment` (`string`): Corresponding sentiment, either `positive` or `negative`
#### Image Features
- `image` (`image`): Image
- `label` (`int64`): Corresponding label from `0` to `9`
#### Tabular Features
- `price` (`float64`): Logarithm of the price in US dollars
- `carat` (`float64`): Logarithm of the weight of the diamond
- `x` (`float64`): Length in mm
- `y` (`float64`): Width in mm
- `z` (`float64`): Depth in mm
- `depth` (`float64`): Total depth percentage
- `table` (`float64`): Width of top of diamond relative to widest point
- Cut: Quality of the cut (`Fair`, `Good`, `Very Good`, `Premium`, `Ideal`), dummy coded with `Fair` as baseline
  - `cut_Good` (`bool`)
  - `cut_Very Good` (`bool`)
  - `cut_Premium` (`bool`)
  - `cut_Ideal` (`bool`)
- Color: Diamond color, from `J` (worst) to `D` (best), dummy coded with `D` as baseline
  - `color_E` (`bool`)
  - `color_F` (`bool`)
  - `color_G` (`bool`)
  - `color_H` (`bool`)
  - `color_I` (`bool`)
  - `color_J` (`bool`)
- Clarity: Measurement of diamond clarity (`I1` (worst), `SI2`, `SI1`, `VS2`, `VS1`, `VVS2`, `VVS1`, `IF` (best)), dummy coded with `I1` as baseline
  - `clarity_SI2` (`bool`)
  - `clarity_SI1` (`bool`)
  - `clarity_VS2` (`bool`)
  - `clarity_VS1` (`bool`)
  - `clarity_VVS2` (`bool`)
  - `clarity_VVS1` (`bool`)
  - `clarity_IF` (`bool`)
#### Oracle Features
- `cond_exp_y` (`float64`): Expected value of `Y` conditional on `D_1`, `sentiment`, `label` and `price`
- `l1` (`float64`): Expected value of `Y` conditional on `sentiment`, `label` and `price`
- `m1` (`float64`): Expected value of `D_1` conditional on `sentiment`, `label` and `price`
- `g1` (`float64`): Additive component of `Y` based on `sentiment`, `label` and `price` (see Dataset Description)
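Since `l1` and `m1` are the oracle nuisance values of the partially linear model, a simple benchmark is the residual-on-residual (partialling-out) estimate computed from them; the sketch below assumes the dataset has been converted to a `pandas` DataFrame `df`.

```python
# Oracle benchmark: partialling-out estimate of the treatment effect using
# the oracle nuisances l1 = E[Y|X] and m1 = E[D_1|X]. Assumes `df` is a
# pandas DataFrame with the columns described above.
import numpy as np

def oracle_plr_estimate(df) -> float:
    y_res = df["Y"].to_numpy() - df["l1"].to_numpy()    # residual of Y
    d_res = df["D_1"].to_numpy() - df["m1"].to_numpy()  # residual of D_1
    return float(np.sum(d_res * y_res) / np.sum(d_res ** 2))

# theta_oracle = oracle_plr_estimate(df)  # should be close to the true 0.5
```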
## Limitations
As the confounding is generated via the original labels, completely removing the confounding might not be possible.
## Citation Information
### Dataset Citation
If you use the dataset, please cite this article:
```bibtex
@article{klaassen2024doublemldeep,
  title={DoubleMLDeep: Estimation of Causal Effects with Multimodal Data},
  author={Klaassen, Sven and Teichert-Kluge, Jan and Bach, Philipp and Chernozhukov, Victor and Spindler, Martin and Vijaykumar, Suhas},
  journal={arXiv preprint arXiv:2402.01785},
  year={2024}
}
```
### Dataset Sources
The three original datasets can be cited via:
Diamonds dataset:
```bibtex
@Book{ggplot2_book,
  author = {Hadley Wickham},
  title = {ggplot2: Elegant Graphics for Data Analysis},
  publisher = {Springer-Verlag New York},
  year = {2016},
  isbn = {978-3-319-24277-4},
  url = {https://ggplot2.tidyverse.org},
}
```
IMDB dataset:
```bibtex
@InProceedings{maas-EtAl:2011:ACL-HLT2011,
  author = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
  title = {Learning Word Vectors for Sentiment Analysis},
  booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
  month = {June},
  year = {2011},
  address = {Portland, Oregon, USA},
  publisher = {Association for Computational Linguistics},
  pages = {142--150},
  url = {http://www.aclweb.org/anthology/P11-1015}
}
```
CIFAR-10 dataset:
```bibtex
@TECHREPORT{Krizhevsky09learningmultiple,
  author = {Alex Krizhevsky},
  title = {Learning multiple layers of features from tiny images},
  institution = {},
  year = {2009}
}
```
## Dataset Card Authors
Sven Klaassen