{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:15:41.329346Z" }, "title": "Building a Video-and-Language Dataset with Human Actions for Multimodal Logical Inference", "authors": [ { "first": "Riko", "middle": [], "last": "Suzuki", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ochanomizu University", "location": { "settlement": "Tokyo", "country": "Japan" } }, "email": "suzuki.riko@is.ocha.ac.jp" }, { "first": "Hitomi", "middle": [], "last": "Yanaka", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Tokyo", "location": { "settlement": "Tokyo", "country": "Japan" } }, "email": "hyanaka@is.s.u-tokyo.ac.jp" }, { "first": "Koji", "middle": [], "last": "Mineshima", "suffix": "", "affiliation": { "laboratory": "", "institution": "Keio University", "location": { "settlement": "Tokyo", "country": "Japan" } }, "email": "minesima@abelard.flet.keio.ac.jp" }, { "first": "Daisuke", "middle": [], "last": "Bekki", "suffix": "", "affiliation": {}, "email": "bekki@is.ocha.ac.jp" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper introduces a new video-andlanguage dataset with human actions for multimodal logical inference, which focuses on intentional and aspectual expressions that describe dynamic human actions. The dataset consists of 200 videos, 5,554 action labels, and 1,942 action triplets of the form \u27e8subject, predicate, object\u27e9 that can be translated into logical semantic representations. The dataset is expected to be useful for evaluating multimodal inference systems between videos and semantically complicated sentences including negation and quantification.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This paper introduces a new video-andlanguage dataset with human actions for multimodal logical inference, which focuses on intentional and aspectual expressions that describe dynamic human actions. The dataset consists of 200 videos, 5,554 action labels, and 1,942 action triplets of the form \u27e8subject, predicate, object\u27e9 that can be translated into logical semantic representations. The dataset is expected to be useful for evaluating multimodal inference systems between videos and semantically complicated sentences including negation and quantification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Multimodal understanding tasks Suhr et al., 2017 Suhr et al., , 2019 have attracted rapidly growing attention from both computer vision and natural language processing communities, and various multimodal tasks combining visual and linguistic reasoning, such as visual question answering (Antol et al., 2015; Acharya et al., 2019) and image caption generation (Vinyals et al., 2015) , have been introduced. 
With the development of the multimodal structured datasets such as Visual Genome (Krishna et al., 2017) , recent studies have been tackling a complex multimodal inference task such as Visual Reasoning (Suhr et al., 2019) and Visual-Textual Entailment (VTE) (Suzuki et al., 2019; Do et al., 2020) , a task to judge if a sentence is true or false under the situation described in an image.", "cite_spans": [ { "start": 31, "end": 48, "text": "Suhr et al., 2017", "ref_id": "BIBREF12" }, { "start": 49, "end": 68, "text": "Suhr et al., , 2019", "ref_id": "BIBREF13" }, { "start": 287, "end": 307, "text": "(Antol et al., 2015;", "ref_id": "BIBREF1" }, { "start": 308, "end": 329, "text": "Acharya et al., 2019)", "ref_id": "BIBREF0" }, { "start": 359, "end": 381, "text": "(Vinyals et al., 2015)", "ref_id": "BIBREF16" }, { "start": 487, "end": 509, "text": "(Krishna et al., 2017)", "ref_id": "BIBREF9" }, { "start": 607, "end": 626, "text": "(Suhr et al., 2019)", "ref_id": "BIBREF13" }, { "start": 663, "end": 684, "text": "(Suzuki et al., 2019;", "ref_id": "BIBREF14" }, { "start": 685, "end": 701, "text": "Do et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The recently proposed multimodal logical inference system (Suzuki et al., 2019) uses first-order logic (FOL) formulas as unified semantic representations for text and image information. The FOL formulas are structured representations that capture not only objects and their semantic relationships in images but also those complex expressions including negation, quantification, and nu- merals. When we consider extending the logical inference system between texts and images to that between texts and videos, it is necessary to handle the property of video information: there are dynamic expressions to capture human actions and movements of things in videos more than in images.", "cite_spans": [ { "start": 58, "end": 79, "text": "(Suzuki et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As an example, consider a video-and-language inference example in Figure 1 . This video consists of SCENE1, where the sentence The woman puts on her outerwear is true, and SCENE2, where the sentence The woman takes off her outerwear is true. Note that the entire video represents richer information as expressed by the sentence the woman tries to put on her outerwear. To judge whether this sentence is true, it is not enough to simply combine two actions, putting on outerwear and taking off outerwear. To capture this dynamic aspect of human action, it is necessary to take into account the information expressed by intentional phrases such as trying to put on outerwear.", "cite_spans": [], "ref_spans": [ { "start": 66, "end": 74, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Towards such a complex multimodal inference between video and text, we build a new Japanese video-and-language dataset with human actions. We annotate videos with action labels written in triplets of the form \u27e8subject, predicate, object\u27e9, where object can be empty (indicated by \u03d5). Action labels contain not only basic expressions such as \u27e8person, run, \u03d5\u27e9 and \u27e8person, hold, cup\u27e9, but also expressions including intentional phrases such as \u27e8person, try to eat, food\u27e9. 
An advantage of using triplets \u27e8subject, predicate, object\u27e9 is that a triplet itself can serve as the semantic representation of a video and can be translated into logical formulas (see Section 3). This paper introduces a method to create a video-and-language dataset involving aspectual and intentional phrases. We collect a preliminary dataset labeled in Japanese for human actions. We also analyze to what extent our dataset contains various aspectual and intentional phrases. Our dataset will be publicly available at https://github.com/rikos3/HumanActions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There have been several efforts to create human action video datasets in the field of computer vision. Charades (Sigurdsson et al., 2016) contains 9,848 videos of daily activities annotated with free-text descriptions and action labels in English. Charades STA (Gao et al., 2017 ) is a dataset built by adding sentence descriptions with start and end times to the Charades dataset. For Japanese video datasets, STAIR Actions (Yoshikawa et al., 2018 ) is a dataset that consists of 63,000 videos with action labels. Each video is about 5 seconds and has a single action label from 100 action categories. Action Genome (Ji et al., 2020 ) is a large-scale video dataset built upon the Charades dataset, which provides action labels and spatiotemporal scene graphs.", "cite_spans": [ { "start": 261, "end": 278, "text": "(Gao et al., 2017", "ref_id": "BIBREF5" }, { "start": 425, "end": 448, "text": "(Yoshikawa et al., 2018", "ref_id": "BIBREF17" }, { "start": 617, "end": 633, "text": "(Ji et al., 2020", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "VIOLIN (Liu et al., 2020) introduces a multimodal inference task between text and videos: given a video with aligned subtitles as a premise, paired with a natural language hypothesis based on the video content, a model needs to judge whether or not the hypothesis is entailed by the given video. The VIOLIN dataset mainly focuses on conversation reasoning and commonsense reasoning, and the dataset contains videos collected from movies or TV shows.", "cite_spans": [ { "start": 7, "end": 25, "text": "(Liu et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Compared to the existing datasets, our dataset is distinctive in that action labels are written in structured representations \u27e8subject, predicate, object\u27e9 and contain various expressions such as continue to eat and try to close that support complex inference between videos and texts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "3 Semantic Representations of Videos Suzuki et al. (2019) proposed FOL formulas as semantic representations of text and images. They use the formulas translated from FOL structures for images to solve a complex VTE task. We extend this idea to semantic representations of videos.", "cite_spans": [ { "start": 37, "end": 57, "text": "Suzuki et al. (2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "FOL structures (also called first-order models) are used to represent semantic information in images (H\u00fcrlimann and Bos, 2016 ). 
An FOL structure for an image is a pair (D, I) where D is a domain consisting of all the entities occurring in the image, and I is an interpretation function that describes the attributes and relations holding of the entities in the image (Suzuki et al., 2019) .", "cite_spans": [ { "start": 101, "end": 125, "text": "(H\u00fcrlimann and Bos, 2016", "ref_id": "BIBREF6" }, { "start": 368, "end": 389, "text": "(Suzuki et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To extend FOL structures for images to those for videos, we add to FOL structures a set of scenes S = {s 1 , s 2 , . . . , s n } that makes up a video, ordered by the temporal precedence relation. This structure may be considered as a possible world model for standard temporal logic (Venema, 2017; Blackburn et al., 2002) . Thus, a video is represented by (S, D, I), where S is a set of scenes linearly ordered by the temporal precedence relation, D is a domain of the entities, which is constant in all scenes, and I is an interpretation function that assigns attributes and relations to the entities in each scene. We assign personal", "cite_spans": [ { "start": 284, "end": 298, "text": "(Venema, 2017;", "ref_id": "BIBREF15" }, { "start": 299, "end": 322, "text": "Blackburn et al., 2002)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "IDs (d 1 , d 2 , . . . , d n )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "to people appearing in each scene. Since the purpose of our dataset is to label human actions, we assign IDs to people, but not to non-human objects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To facilitate the annotation of the attributes and relations holding of the entities in each scene, we use triplets of the form \u27e8subject, predicate, object\u27e9 given to each scene s i as action labels, where object may be empty. This form itself can be seen as a semantic representation of videos. Furthermore, it can also be translated into an FOL formula, in a similar way to the standard translation of modal logic to FOL (Blackburn et al., 2002) . The following examples show a translation from triplets in scenes into FOL formulas.", "cite_spans": [ { "start": 422, "end": 446, "text": "(Blackburn et al., 2002)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "(", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "1) s 1 :\u27e8d 1 , run, \u03d5\u27e9 \u21d2 run(s 1 , d 1 ) (2) s 2 : \u27e8d 1 , hold, pillow\u27e9 \u21d2 \u2203x(pillow(s 2 , x) \u2227 hold(s 2 , d 1 , x))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Here each predicate has an additional argument for a scene variable. (1) means that the entity d 1 runs in scene s 1 ; (2) means that the entity d 1 holds a pillow in scene s 2 . Each triplet can be translated into an FOL formula by using this method and thus serve as a se- mantic representation of a video usable in the semantic parser and inference system for the VTE task presented in Suzuki et al. (2019) . 
Though it is left for future work, the dataset in which each scene of a video is annotated with triplets will be useful to evaluate the VTE system for videos.", "cite_spans": [ { "start": 389, "end": 409, "text": "Suzuki et al. (2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We selected videos from the test set of the Charades dataset (Sigurdsson et al., 2016) . The Charades dataset contains videos drawing daily activities in a room such as drinking from a cup, putting on shoes, and watching a laptop or something on a laptop. Each video is collected via crowdsourcing: workers are asked to generate the script that describes daily activities and then to record a video of that script being acted out.", "cite_spans": [ { "start": 61, "end": 86, "text": "(Sigurdsson et al., 2016)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Video Selection", "sec_num": "4.1" }, { "text": "We select videos where multiple persons appear from the Charades test set to cover various actions within human interaction such as touching someone's shoulder or handing something. These actions are expected to be described in expressions involving various linguistic phenomena. To collect videos where multiple persons appear, we selected 200 videos whose descriptions include phrases another person, another people, and they. Figure 2 shows a video example involving human interaction.", "cite_spans": [], "ref_spans": [ { "start": 429, "end": 437, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Video Selection", "sec_num": "4.1" }, { "text": "We annotate each video with \u27e8subject, predicate, object\u27e9 triplet format as action labels that represent human-object activities. We also annotate each action label with a start and end time to locate the activity accurately. We ask two workers to freely write predicates and object names that describe human activities to collect various expressions. Using this format the workers can freely decide the span of each scene and thus annotate a video with action labels more easily and flexibly. In Section 4.5 below, we will explain how to convert the triplet action format with start and end times to FOL structures extended with scenes as presented in Section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "4.2" }, { "text": "Subject We assign personal IDs (d 1 , d 2 , d 3 , . . .) to people in order of appearance in the video. If multiple persons appear for the first time in the same scene, we assign personal IDs to people appearing in order from left to right.", "cite_spans": [], "ref_spans": [ { "start": 31, "end": 51, "text": "(d 1 , d 2 , d 3 , .", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Annotation", "sec_num": "4.2" }, { "text": "Predicate In a triplet, predicate contains various expressions such as aspectual and intentional phrases for describing dynamic human actions in videos, those phrases that do not usually appear in captions for static images. The following examples show characteristic predicates of videos.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "4.2" }, { "text": "\u2022 predicates for utterance and communication (e.g. speak, talk, tell, ask, listen) \u2022 predicates for intention and attitude (e.g. try to eat, try to close). \u2022 aspectual predicates (e.g. 
start talking, continue to eat)", "cite_spans": [ { "start": 45, "end": 82, "text": "(e.g. speak, talk, tell, ask, listen)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "4.2" }, { "text": "We allow workers to use not only a transitive or intransitive verb but also verb phrases for predicates such as try to V and continue to V to collect diverse aspectual and intentional phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotation", "sec_num": "4.2" }, { "text": "Object The object in a triplet contains an object name or personal ID. If the item in predicate is an intransitive verb, object is empty. For instance, in Figure 3 , the object for the predicate hold is pillow and the object is empty for the predicate run. ", "cite_spans": [], "ref_spans": [ { "start": 155, "end": 163, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Annotation", "sec_num": "4.2" }, { "text": "In this work, we ask three workers to either annotate or merge action labels. All of the workers are native speakers of Japanese. We merge and confirm action labels in the following steps: (1) merge the action labels made by the two workers and arrange them in ascending order of start time, (2) have the three workers watch the videos to check whether each action label is correct, and (3) if action labels are duplicated, select one of them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validation", "sec_num": "4.3" }, { "text": "Regarding duplicated action labels, the labels and their start and end times are determined according to the agreement of the three workers. Consider the following duplicate case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Validation", "sec_num": "4.3" }, { "text": "[Table 1: comparison of Charades (Sigurdsson et al., 2016) , Action Genome (Ji et al., 2020) , STAIR Actions (Yoshikawa et al., 2018) , and our dataset by number of videos, average length (sec), average number of action labels, number of action categories, and label language (English or Japanese). Table 2 fragment, intentional predicates with frequencies: try to drink (6), try to hold (3), try to put (3), try to cut (2), try to move (2), pretend to eat (2), try to remove (2), try to put on.] (\u03c3 1 ) 0:10-0:13 \u27e8d 1 , hold, clothes\u27e9 (\u03c3 2 ) 0:11-0:14 \u27e8d 1 , hold, clothes\u27e9 (\u03c3 3 ) 0:11-0:15 \u27e8d 1 , hold, outerwear\u27e9", "cite_spans": [ { "start": 95, "end": 120, "text": "(Sigurdsson et al., 2016)", "ref_id": "BIBREF11" }, { "start": 152, "end": 169, "text": "(Ji et al., 2020)", "ref_id": "BIBREF7" }, { "start": 202, "end": 226, "text": "(Yoshikawa et al., 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "In this case, (\u03c3 1 ) and (\u03c3 2 ) are duplicates in that their subject, predicate, and object are the same while their start and end times are different. 
If the third worker judges that (\u03c3 2 ) is more adequate than (\u03c3 1 ), we merge (\u03c3 1 ) and (\u03c3 2 ) and obtain the action labels below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "(\u03c3 1 \u2032 ) 0:10-0:14 \u27e8d 1 , hold, clothes\u27e9 (\u03c3 2 \u2032 ) 0:11-0:15 \u27e8d 1 , hold, outerwear\u27e9 Table 1 shows that despite its size, our dataset contains more action categories than other previous datasets. About 65% of total action labels are action labels that appear only once. This indicates that there are a wide variety of expressions. The dataset contains characteristic expressions of videos such as walk, talk, and stop walking. Table 2 shows the frequency and examples of three types of predicates, i.e., utterance, intentional, and aspectual predicates. The distribution of characteristic predicates of videos in our dataset was: 2.49% predicates for utterance, 0.98% predicates for intention and attitude, and 0.15% aspectual predicates. One possible reason for the low frequency of aspectual predicates is that Charades contains 30-second videos, which might be too short to describe multiple actions involving aspectual phrases. It would be expected to increase the number of aspectual predicates if we annotate longer videos such as the VIOLIN dataset (Liu et al., 2020) , which is left for future work. The number of overlaps of action categories between ours and STAIR Actions (Yoshikawa et al., 2018) is 28. These results indicate that our dataset contains more diverse action categories compared to other datasets. Table 3 shows frequent action labels in our dataset. Our dataset contains not only predicates for utterance, intention, and aspect, but also punctual verbs (e.g. stop walking and turn on) and durative verbs (e.g. sit and wait).", "cite_spans": [ { "start": 1055, "end": 1073, "text": "(Liu et al., 2020)", "ref_id": "BIBREF10" }, { "start": 1182, "end": 1206, "text": "(Yoshikawa et al., 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 84, "end": 91, "text": "Table 1", "ref_id": null }, { "start": 426, "end": 433, "text": "Table 2", "ref_id": "TABREF2" }, { "start": 1322, "end": 1329, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "The triplet action forms with start and end points used in the annotation can be converted to FOL structures extended with scenes presented in Section 3. In the extended FOL structures, each scene is linearly ordered by the temporal precedence relation and is uniquely characterized by the set of all the attributes and relations holding in it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conversion to FOL structures", "sec_num": "4.5" }, { "text": "As an illustration, consider the example in Figure 4 . In this case, we can separate the entire video into 11 scenes as shown in Figure 4 . Accordingly, in the extended FOL structure, we have S = {s 1 , . . . , s 11 }. Here the first scene, s 1 , consists of the following: the predicate run holds of the entity d 1 , the predicate sit holds of the pair (d 2 , x 1 ) where x 1 is an entity which is a table. In terms of the interpretation function I relativized to a scene, we have I s 1 (run) = {d 1 } , I s 1 (sit) = {(d 2 , x 1 )} and I s 1 (table) = {x 1 }. 
Similarly, we can extend the interpretation function I to the other scenes.", "cite_spans": [], "ref_spans": [ { "start": 44, "end": 52, "text": "Figure 4", "ref_id": "FIGREF3" }, { "start": 129, "end": 137, "text": "Figure 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Conversion to FOL structures", "sec_num": "4.5" }, { "text": "While the triplet format is suitable for the annotation of various action labels, the semantic representation in the form of FOL structures with scenes can be directly used in model checking and theorem proving for the VTE system developed in Suzuki et al. (2019) . Our annotation format is flexible enough to be adapted in such applications.", "cite_spans": [ { "start": 243, "end": 263, "text": "Suzuki et al. (2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Conversion to FOL structures", "sec_num": "4.5" }, { "text": "We introduce a video-and-language dataset with human actions for multimodal inference. We annotate human actions in videos in the free format and collect 1,942 action categories for 200 videos. Our dataset contains various action labels for videos, including those predicates characteristic of videos such as predicates for utterance, predicates for intention and attitude, and aspectual predicates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In future work, we analyze recent ac-tion recognition models using Action Genome (Ji et al., 2020) with our dataset. We will also work on building a multimodal logical inference system between texts and videos.", "cite_spans": [ { "start": 81, "end": 98, "text": "(Ji et al., 2020)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "This work was partially supported by JST CREST Grant Number JPMJCR20D2, Japan. Thanks to the anonymous reviewers for helpful comments. We would also like to thank Mai Yokozeki and Natsuki Murakami for their contributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "TallyQA: Answering complex counting questions", "authors": [ { "first": "Manoj", "middle": [], "last": "Acharya", "suffix": "" }, { "first": "Kushal", "middle": [], "last": "Kafle", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Kanan", "suffix": "" } ], "year": 2019, "venue": "The Association for the Advancement of Artificial Intelligence (AAAI2019)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manoj Acharya, Kushal Kafle, and Christopher Kanan. 2019. TallyQA: Answering complex counting ques- tions. 
In The Association for the Advancement of Artificial Intelligence (AAAI2019).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "VQA: Visual Question Answering", "authors": [ { "first": "Stanislaw", "middle": [], "last": "Antol", "suffix": "" }, { "first": "Aishwarya", "middle": [], "last": "Agrawal", "suffix": "" }, { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "C", "middle": [ "Lawrence" ], "last": "Zitnick", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2015, "venue": "International Conference on Computer Vision", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question An- swering. In International Conference on Computer Vision.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Modal Logic", "authors": [ { "first": "Patrick", "middle": [], "last": "Blackburn", "suffix": "" }, { "first": "Yde", "middle": [], "last": "Maarten De Rijke", "suffix": "" }, { "first": "", "middle": [], "last": "Venema", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrick Blackburn, Maarten de Rijke, and Yde Venema. 2002. Modal Logic. Cambridge University Press.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Zeynep Akata, and Thomas Lukasiewicz. 2020. e-SNLI-VE", "authors": [ { "first": "Virginie", "middle": [], "last": "Do", "suffix": "" }, { "first": "Oana-Maria", "middle": [], "last": "Camburu", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Virginie Do, Oana-Maria Camburu, Zeynep Akata, and Thomas Lukasiewicz. 2020. e-SNLI-VE-", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "0: Corrected visual-textual entailment with natural language explanations", "authors": [], "year": null, "venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "0: Corrected visual-textual entailment with nat- ural language explanations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pat- tern Recognition (CVPR) Workshops.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "TALL: temporal activity localization via language query", "authors": [ { "first": "Jiyang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Zhenheng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ram", "middle": [], "last": "Nevatia", "suffix": "" } ], "year": 2017, "venue": "IEEE International Conference on Computer Vision", "volume": "", "issue": "", "pages": "5277--5285", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. 2017. TALL: temporal activity localization via language query. In IEEE International Confer- ence on Computer Vision, pages 5277-5285, Venice, Italy. 
IEEE Computer Society.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Combining Lexical and Spatial Knowledge to Predict Spatial Relations between Objects in Images", "authors": [ { "first": "Manuela", "middle": [], "last": "H\u00fcrlimann", "suffix": "" }, { "first": "Johan", "middle": [], "last": "Bos", "suffix": "" } ], "year": 2016, "venue": "Proc. of the Workshop on Vision and Language", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manuela H\u00fcrlimann and Johan Bos. 2016. Combin- ing Lexical and Spatial Knowledge to Predict Spa- tial Relations between Objects in Images. In Proc. of the Workshop on Vision and Language.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Action genome: Actions as compositions of spatio-temporal scene graphs", "authors": [ { "first": "Jingwei", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Ranjay", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" }, { "first": "Juan", "middle": [ "Carlos" ], "last": "Niebles", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingwei Ji, Ranjay Krishna, Li Fei-Fei, and Juan Car- los Niebles. 2020. Action genome: Actions as com- positions of spatio-temporal scene graphs. In Pro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning", "authors": [ { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Bharath", "middle": [], "last": "Hariharan", "suffix": "" }, { "first": "Laurens", "middle": [], "last": "Van Der Maaten", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" }, { "first": "C", "middle": [ "Lawrence" ], "last": "Zitnick", "suffix": "" }, { "first": "Ross", "middle": [ "B" ], "last": "Girshick", "suffix": "" } ], "year": 2017, "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "1988--1997", "other_ids": {}, "num": null, "urls": [], "raw_text": "Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross B. Girshick. 2017. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. 
In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1988-1997.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations", "authors": [ { "first": "Ranjay", "middle": [], "last": "Krishna", "suffix": "" }, { "first": "Yuke", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Groth", "suffix": "" }, { "first": "Justin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Kenji", "middle": [], "last": "Hata", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Kravitz", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yannis", "middle": [], "last": "Kalantidis", "suffix": "" }, { "first": "Li-Jia", "middle": [], "last": "Li", "suffix": "" }, { "first": "David", "middle": [ "A" ], "last": "Shamma", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Bernstein", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" } ], "year": 2017, "venue": "International Journal of Computer Vision", "volume": "123", "issue": "1", "pages": "32--73", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin John- son, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. Interna- tional Journal of Computer Vision, 123(1):32-73.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Violin: A large-scale dataset for video-and-language inference", "authors": [ { "first": "Jingzhou", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Wenhu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Licheng", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "10900--10910", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingzhou Liu, Wenhu Chen, Yu Cheng, Zhe Gan, Licheng Yu, Yiming Yang, and Jingjing Liu. 2020. Violin: A large-scale dataset for video-and-language inference. In Proceedings of the IEEE/CVF Confer- ence on Computer Vision and Pattern Recognition (CVPR), pages 10900-10910.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Hollywood in homes: Crowdsourcing data collection for activity understanding", "authors": [ { "first": "A", "middle": [], "last": "Gunnar", "suffix": "" }, { "first": "G\u00fcl", "middle": [], "last": "Sigurdsson", "suffix": "" }, { "first": "Xiaolong", "middle": [], "last": "Varol", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Laptev", "suffix": "" }, { "first": "", "middle": [], "last": "Gupta", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the European Conference on Computer Vision (ECCV)", "volume": "", "issue": "", "pages": "510--526", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gunnar A. 
Sigurdsson, G\u00fcl Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. 2016. Hollywood in homes: Crowdsourcing data collec- tion for activity understanding. In Proceedings of the European Conference on Computer Vision (ECCV), pages 510-526, Amsterdam, Netherlands. Springer.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A corpus of natural language for visual reasoning", "authors": [ { "first": "Alane", "middle": [], "last": "Suhr", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "James", "middle": [], "last": "Yeh", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "217--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. 2017. A corpus of natural language for visual rea- soning. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguis- tics (Volume 2: Short Papers), pages 217-223, Van- couver, Canada. Association for Computational Lin- guistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A corpus for reasoning about natural language grounded in photographs", "authors": [ { "first": "Alane", "middle": [], "last": "Suhr", "suffix": "" }, { "first": "Stephanie", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Ally", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Iris", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Huajun", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6418--6428", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in pho- tographs. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 6418-6428, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Multimodal logical inference system for visual-textual entailment", "authors": [ { "first": "Riko", "middle": [], "last": "Suzuki", "suffix": "" }, { "first": "Hitomi", "middle": [], "last": "Yanaka", "suffix": "" }, { "first": "Masashi", "middle": [], "last": "Yoshikawa", "suffix": "" }, { "first": "Koji", "middle": [], "last": "Mineshima", "suffix": "" }, { "first": "Daisuke", "middle": [], "last": "Bekki", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop", "volume": "", "issue": "", "pages": "386--392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Riko Suzuki, Hitomi Yanaka, Masashi Yoshikawa, Koji Mineshima, and Daisuke Bekki. 2019. Multimodal logical inference system for visual-textual entail- ment. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Stu- dent Research Workshop, pages 386-392, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Temporal Logic, chapter 10", "authors": [ { "first": "Yde", "middle": [], "last": "Venema", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yde Venema. 2017. Temporal Logic, chapter 10. John Wiley and Sons, Ltd.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Show and Tell: A neural image caption generator", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Toshev", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Dumitru", "middle": [], "last": "Erhan", "suffix": "" } ], "year": 2015, "venue": "2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "3156--3164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and Tell: A neural im- age caption generator. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3156-3164.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "STAIR actions: A video dataset of everyday home actions", "authors": [ { "first": "Yuya", "middle": [], "last": "Yoshikawa", "suffix": "" }, { "first": "Jiaqing", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Akikazu", "middle": [], "last": "Takeuchi", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuya Yoshikawa, Jiaqing Lin, and Akikazu Takeuchi. 2018. STAIR actions: A video dataset of everyday home actions. CoRR, abs/1804.04326.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "Inference example between a video and sentences. The description of this video is: The woman tried to put on her outerwear though she could not, because its zipper was not open completely.", "num": null }, "FIGREF1": { "uris": null, "type_str": "figure", "text": "Example video for the action of touching someone's shoulder from the Charades dataset.", "num": null }, "FIGREF2": { "uris": null, "type_str": "figure", "text": "A man is running while holding a pillow. Action labels are \u27e8d 1 , hold, pillow\u27e9 and \u27e8d 1 , run, \u03d5\u27e9", "num": null }, "FIGREF3": { "uris": null, "type_str": "figure", "text": "Annotation example of a video labeled with various types of predicates. Here s 1 , . . . , s 11 are scenes linearly ordered by the temporal precedence relation.", "num": null }, "TABREF2": { "content": "