{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:51:39.780684Z"
},
"title": "MiniVQA -A resource to build your tailored VQA competition",
"authors": [
{
"first": "Jean-Benoit",
"middle": [],
"last": "Delbrouck",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Stanford University",
"location": {}
},
"email": "jeanbenoit.delbrouck@stanford.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "MiniVQA 1 is a Jupyter notebook to build a tailored VQA competition for your students. The resource creates all the needed resources to create a classroom competition that engages and inspires your students on the free, self-service Kaggle platform. \"InClass competitions make machine learning fun!\" 2 .",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "MiniVQA 1 is a Jupyter notebook to build a tailored VQA competition for your students. The resource creates all the needed resources to create a classroom competition that engages and inspires your students on the free, self-service Kaggle platform. \"InClass competitions make machine learning fun!\" 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Beyond simply recognizing what objects are present, vision-and-language tasks, such as captioning (Chen et al., 2015) or visual question answering (Antol et al., 2015) , challenge systems to understand a wide range of detailed semantics of an image, including objects, attributes, spatial relationships, actions and intentions, and how all these concepts are referred to and grounded in natural language. VQA is therefore viewed as a suitable way to evaluate a system reading comprehension.",
"cite_spans": [
{
"start": 98,
"end": 117,
"text": "(Chen et al., 2015)",
"ref_id": "BIBREF2"
},
{
"start": 147,
"end": 167,
"text": "(Antol et al., 2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task overview",
"sec_num": "1"
},
{
"text": "\"Since questions can be devised to query any aspect of text comprehension, the ability to answer questions is the strongest possible demonstration of understanding\" (Lehnert, 1977)",
"cite_spans": [
{
"start": 165,
"end": 180,
"text": "(Lehnert, 1977)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task overview",
"sec_num": "1"
},
{
"text": "A trained model cannot perform satisfactorily on the Visual Question Answering task without language understanding abilities. A visual-only system (i.e., processing only the image) will not have the visual reasoning skills required to provide correct answers, as it does not process the questions as input. Two NLP challenges arise when building a multimodal model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "To teach NLP",
"sec_num": "2"
},
{
"text": "\u2022 The questions must be processed by the system as input. This is the opportunity to teach different features extractions techniques for sentences. The techniques can range from unsupervised methods (bag of words, tf-idf or Word2Vec/Doc2Vec (Mikolov et al., 2013) models) to supervised methods (Recurrent Neural Networks (Hochreiter and Schmidhuber, 1997) or Transformers (Vaswani et al., 2017) neural networks).",
"cite_spans": [
{
"start": 241,
"end": 263,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF7"
},
{
"start": 321,
"end": 355,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF5"
},
{
"start": 372,
"end": 394,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task overview",
"sec_num": "1"
},
{
"text": "\u2022 The extracted linguistic features must be incorporated into the visual processing pipeline. This challenge lies at the intersection of vision and language research. Different approaches, such as early and late fusion techniques, can be introduced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task overview",
"sec_num": "1"
},
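{
"text": "Neither the feature extraction method nor the fusion scheme is prescribed by MiniVQA; the following is only an illustrative sketch of the two challenges above, in which the tf-idf features, the random stand-in image features and all variable names are our own choices rather than part of the resource: \nimport numpy as np\nfrom sklearn.feature_extraction.text import TfidfVectorizer\n\n# Toy questions and stand-in image features (e.g., from a CNN).\nquestions = ['what color is the cat?', 'how many dogs are there?']\nimage_feats = np.random.rand(len(questions), 512)\n\n# Challenge 1: turn questions into feature vectors (here, tf-idf).\nvectorizer = TfidfVectorizer(max_features=1024)\nquestion_feats = vectorizer.fit_transform(questions).toarray()\n\n# Challenge 2: fuse the two modalities (here, early fusion by concatenation).\nfused = np.concatenate([image_feats, question_feats], axis=1)\nprint(fused.shape)  # (2, 512 + vocabulary size)\n\nA late-fusion variant would instead train one predictor per modality and combine their output scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "To teach NLP",
"sec_num": "2"
},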
{
"text": "All implemented techniques can be evaluated on the difficult task that is VQA. Because answering questions about a visual context is a hard cognitive task, using different NLP approaches can lead to significant performance differences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task overview",
"sec_num": "1"
},
{
"text": "The source code is split into several sections, each section containing tunable parameters to modulate the dataset to match your needs. For example, you can choose the number of possible answers, the sample size per answer or the balance of labels between splits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resource overview",
"sec_num": "3"
},
{
"text": "MiniVQA built upon two datasets, the VQA V2 dataset (Goyal et al., 2017) and the VQA-Med dataset (Ben Abacha et al., 2020) . You can choose to create a competition based on natural images or medical images. MiniVQA proposes 443k questions on 82.7k images and 4547 unique answers for the former, and 7.4k unique image-question pairs and 332 answers for the latter 3 .",
"cite_spans": [
{
"start": 52,
"end": 72,
"text": "(Goyal et al., 2017)",
"ref_id": "BIBREF3"
},
{
"start": 97,
"end": 122,
"text": "(Ben Abacha et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Resource overview",
"sec_num": "3"
},
{
"text": "Finally, MiniVQA provides a second Jupyter notebook that trains and evaluates a baseline VQA model on any dataset you create. You are free to share it or not amongst your class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resource overview",
"sec_num": "3"
},
{
"text": "The following sections present the features of MiniVQA. We illustrate these features using the VQA v2.0 dataset as a matter of example. The same can be applied to the VQA-Med dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resource presentation",
"sec_num": "4"
},
{
"text": "The resource automatically downloads annotations (questions, image_ids and answer)s from the official datasets websites. Any pre-processing that must be done is carried out. The number of possible questions and unique answers are printed to the user, along with random examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic download",
"sec_num": "4.1"
},
{
"text": "Using MiniVQA, you can choose the size of your dataset according to several settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decide the volume of your dataset",
"sec_num": "4.2"
},
{
"text": "num_answers the number of possible different answers (i.e., how many classes).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decide the volume of your dataset",
"sec_num": "4.2"
},
{
"text": "sampling_type how to select samples, choose between \"top\" or \"random\". Value \"top\" gets the 'num_answers' most common answers in the dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decide the volume of your dataset",
"sec_num": "4.2"
},
{
"text": "sampling_exclude_top and sam-pling_exclude_bottom you can choose to exclude the n most popular or least popular answers (the most popular answers is \"no\" and contains 80.000 examples).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decide the volume of your dataset",
"sec_num": "4.2"
},
{
"text": "if sam-pling_type is random, you can choose a minimum and maximum number of samples with min_samples and max_samples. This section outputs a bar graph containing the label distributions and some additional information. Figure 1 shows two examples. ",
"cite_spans": [],
"ref_spans": [
{
"start": 219,
"end": 227,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "min_samples and max_samples",
"sec_num": null
},
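{
"text": "As a purely illustrative example (the values below are our own, not recommendations), a configuration cell using the settings above might look like this: \nnum_answers = 50             # number of classes to keep\nsampling_type = 'top'        # 'top' or 'random'\nsampling_exclude_top = 1     # e.g., drop the dominant 'no' answer\nsampling_exclude_bottom = 0  # keep the rarest answers\nmin_samples = 100            # only used when sampling_type is 'random'\nmax_samples = 2000           # only used when sampling_type is 'random'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decide the volume of your dataset",
"sec_num": "4.2"
},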
{
"text": "This section creates the chosen samples a json format. Two more parameters are available. sample_clipping set to select n maximum samples per answer. This setting is particularly handy if you chose the \"top\" sampling_type in the previous section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Create dataset files",
"sec_num": "4.3"
},
{
"text": "im_download you can choose to download the images of the selected samples directly through http requests. Though rather slow, this allows the user not to download the images of the full dataset. resize if im_download is set to True, images are squared-resized to n pixels. For your mini-VQA project, you might want to use lower resolution for your images (faster training). n = 128 is a good choice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Create dataset files",
"sec_num": "4.3"
},
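{
"text": "For illustration only, the download-and-resize step can be approximated with the sketch below; the helper name and the placeholder URL are hypothetical, not MiniVQA's actual code: \nimport io\nimport requests\nfrom PIL import Image\n\ndef download_and_resize(url, size=128):\n    # Fetch one image over HTTP and square-resize it to size x size pixels.\n    response = requests.get(url, timeout=30)\n    response.raise_for_status()\n    image = Image.open(io.BytesIO(response.content)).convert('RGB')\n    return image.resize((size, size))\n\n# Hypothetical placeholder URL, not a real endpoint.\nimage = download_and_resize('https://example.com/COCO_train2014_000000000009.jpg')\nimage.save('000000000009_128.jpg')",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Create dataset files",
"sec_num": "4.3"
},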
{
"text": "As in any competition, participants are provided a train and validation set with ground-truth answers, and a test-set without these answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Create splits",
"sec_num": "4.4"
},
{
"text": "train_size and valid_size fraction of the total examples selected to populate the splits. 0.8 and 0.1 are usually good values. The rest (0.1) goes in the test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Create splits",
"sec_num": "4.4"
},
{
"text": "balanced whether or not labels are homogeneously distributed across splits. Figure 2 shows an example of the balanced setting effect: Regardless of the balanced value, at least one sample of each label is put in each split.",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 84,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Create splits",
"sec_num": "4.4"
},
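{
"text": "As a rough stand-in for what this step does (MiniVQA implements its own logic; scikit-learn's stratified split is used here only for illustration): \nfrom sklearn.model_selection import train_test_split\n\ndef make_splits(samples, labels, train_size=0.8, valid_size=0.1):\n    # 80/10/10 split; stratify keeps labels balanced across splits,\n    # roughly mimicking the balanced=True behaviour.\n    x_train, x_rest, y_train, y_rest = train_test_split(\n        samples, labels, train_size=train_size, stratify=labels, random_state=0)\n    x_valid, x_test, y_valid, y_test = train_test_split(\n        x_rest, y_rest, train_size=valid_size / (1.0 - train_size),\n        stratify=y_rest, random_state=0)\n    return (x_train, y_train), (x_valid, y_valid), (x_test, y_test)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Create splits",
"sec_num": "4.4"
},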
{
"text": "It is possible to compute questions embedding using pre-trained transformer models. Each representation is then reduced into a two-dimensional point using t-SNE algorithm (van der Maaten and Hinton, 2008) . These embeddings can also be used for section 5. MiniVQA plots two projections, one for the 5 most popular question types and one with randomly chosen question types. ",
"cite_spans": [
{
"start": 171,
"end": 204,
"text": "(van der Maaten and Hinton, 2008)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explore the question embedding space",
"sec_num": "4.5"
},
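{
"text": "A minimal version of this visualization, assuming the sentence-transformers and scikit-learn packages and using the same bert-base-nli-mean-tokens model as in the figure, could look like: \nimport matplotlib.pyplot as plt\nfrom sentence_transformers import SentenceTransformer\nfrom sklearn.manifold import TSNE\n\nquestions = ['what color is the cat?', 'how many people are there?', 'is it raining?']\n\n# Embed each question with a pretrained transformer (768-d vectors),\n# then project the embeddings to 2D with t-SNE and plot them.\nmodel = SentenceTransformer('bert-base-nli-mean-tokens')\nembeddings = model.encode(questions)\npoints = TSNE(n_components=2, perplexity=2).fit_transform(embeddings)\nplt.scatter(points[:, 0], points[:, 1])\nplt.show()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explore the question embedding space",
"sec_num": "4.5"
},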
{
"text": "Finally, you can download the dataset file in json, the splits, and optionally, the images. {train, val, sample_submission}.csv csv files containing question_id, label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Download files",
"sec_num": "4.6"
},
{
"text": "test.csv must be given to students. They must fill it with their systems predictions formatted like sample_submission.csv which contains random predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Download files",
"sec_num": "4.6"
},
{
"text": "answer_key.csv is the ground-truth file that has to be stored on Kaggle (see Appendix A).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Download files",
"sec_num": "4.6"
},
{
"text": "answer_list.csv maps the label to the answer in natural language (i.e., label 0 is answer at line 1 in answer_list, etc.).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Download files",
"sec_num": "4.6"
},
{
"text": "image_question.json maps an image_id to a list of questions (that concerns image_id). Each question is a tuple (question_id, question).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Download files",
"sec_num": "4.6"
},
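{
"text": "For example, a valid submission mirrors sample_submission.csv; a minimal sketch, assuming pandas and that test.csv exposes a question_id column, is: \nimport pandas as pd\n\ntest = pd.read_csv('test.csv')\n\n# Dummy predictions: always predict label 0; students replace this with\n# their model's outputs, then upload submission.csv to Kaggle.\nsubmission = pd.DataFrame({'question_id': test['question_id'], 'label': 0})\nsubmission.to_csv('submission.csv', index=False)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Download files",
"sec_num": "4.6"
},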
{
"text": "The provided baseline consists of a dataloader that opens images and resize them to 112 \u00d7 112 pixels and that embeds questions to a feature vector of size 768 from a pre-trained DistilBERT (Sanh et al.) .",
"cite_spans": [
{
"start": 189,
"end": 202,
"text": "(Sanh et al.)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Model",
"sec_num": "5"
},
{
"text": "The network consists a modified Resnet (He et al., 2016) that takes as input an RBG image of size 112 \u00d7 112 and outputs a feature map of size 512 \u00d7 4 \u00d7 4 that is flattened and then concatenated with the question representation of size 768. Finally, a classification layer projects this representation to probabilities over answers. The network is trained until no improvements is recorded on the validation set (early-stopping).",
"cite_spans": [
{
"start": 39,
"end": 56,
"text": "(He et al., 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Model",
"sec_num": "5"
},
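{
"text": "A condensed paraphrase of this baseline in PyTorch (our own sketch, not the notebook's exact code; a resnet18 trunk with its pooling and classification head removed stands in for the modified ResNet): \nimport torch\nimport torch.nn as nn\nimport torchvision\n\nclass BaselineVQA(nn.Module):\n    def __init__(self, num_answers):\n        super().__init__()\n        resnet = torchvision.models.resnet18(weights=None)\n        # Keep the convolutional trunk only: 112 x 112 input -> 512 x 4 x 4 map.\n        self.cnn = nn.Sequential(*list(resnet.children())[:-2])\n        self.classifier = nn.Linear(512 * 4 * 4 + 768, num_answers)\n\n    def forward(self, images, question_embeddings):\n        # images: (B, 3, 112, 112); question_embeddings: (B, 768) from DistilBERT.\n        feats = self.cnn(images).flatten(start_dim=1)\n        fused = torch.cat([feats, question_embeddings], dim=1)\n        return self.classifier(fused)  # logits over the candidate answers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Model",
"sec_num": "5"
},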
{
"text": "As of 2021",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Navigate to http://www.kaggle.com/ inclass. Follow the instructions to setup an InClass competition. Upload files train.csv, val.csv, test.csv, answer_key.csv, and sam-ple_submission.csv when prompted. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Create a competition on Kaggle",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "VQA: Visual Question Answering",
"authors": [
{
"first": "Stanislaw",
"middle": [],
"last": "Antol",
"suffix": ""
},
{
"first": "Aishwarya",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question Answer- ing. In International Conference on Computer Vision (ICCV).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Overview of the vqa-med task at imageclef 2020: Visual question answering and generation in the medical domain",
"authors": [
{
"first": "Asma",
"middle": [],
"last": "Ben Abacha",
"suffix": ""
},
{
"first": "Vivek",
"middle": [
"V"
],
"last": "Datla",
"suffix": ""
},
{
"first": "Sadid",
"middle": [
"A"
],
"last": "Hasan",
"suffix": ""
},
{
"first": "Dina",
"middle": [],
"last": "Demner-Fushman",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2020,
"venue": "CLEF 2020 Working Notes, CEUR Workshop Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asma Ben Abacha, Vivek V. Datla, Sadid A. Hasan, Dina Demner-Fushman, and Henning M\u00fcller. 2020. Overview of the vqa-med task at imageclef 2020: Vi- sual question answering and generation in the medi- cal domain. In CLEF 2020 Working Notes, CEUR Workshop Proceedings, Thessaloniki, Greece. CEUR- WS.org <http://ceur-ws.org>.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Microsoft coco captions: Data collection and evaluation server",
"authors": [
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Ramakrishna",
"middle": [],
"last": "Vedantam",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1504.00325"
]
},
"num": null,
"urls": [],
"raw_text": "Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakr- ishna Vedantam, Saurabh Gupta, Piotr Doll\u00e1r, and C Lawrence Zitnick. 2015. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering",
"authors": [
{
"first": "Yash",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Tejas",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Summers-Stay",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2017,
"venue": "Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understand- ing in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recogni- tion. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735- 1780.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Human and computational question answering",
"authors": [
{
"first": "Wendy",
"middle": [],
"last": "Lehnert",
"suffix": ""
}
],
"year": 1977,
"venue": "Cognitive Science",
"volume": "1",
"issue": "1",
"pages": "47--73",
"other_ids": {
"DOI": [
"10.1207/s15516709cog0101_3"
]
},
"num": null,
"urls": [],
"raw_text": "Wendy Lehnert. 1977. Human and computational ques- tion answering. Cognitive Science, 1(1):47-73.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "26",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Ad- vances in Neural Information Processing Systems, vol- ume 26. Curran Associates, Inc.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Sanh, Lysandre DEBUT, Julien CHAUMOND, Thomas WOLF, and Hugging Face. Distilbert, a dis- tilled version of bert: smaller, faster, cheaper and lighter.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Visualizing data using t-SNE",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "2579--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "sampling_type random (left) and sam-pling_type top (right).",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "balance set to True (left) and to False (right).",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "Questions embedding using a pretrained bertbase-nli-mean-tokens model.",
"type_str": "figure",
"num": null
}
}
}
}