{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:40:28.874376Z" }, "title": "A Visuospatial Dataset for Naturalistic Verb Learning", "authors": [ { "first": "Dylan", "middle": [], "last": "Ebert", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brown University", "location": {} }, "email": "ebert@brown.edu" }, { "first": "Ellie", "middle": [], "last": "Pavlick", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brown University", "location": {} }, "email": "pavlick@brown.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We introduce a new dataset for training and evaluating grounded language models. Our data is collected within a virtual reality environment and is designed to emulate the quality of language data to which a pre-verbal child is likely to have access: That is, naturalistic, spontaneous speech paired with richly grounded visuospatial context. We use the collected data to compare several distributional semantics models for verb learning. We evaluate neural models based on 2D (pixel) features as well as feature-engineered models based on 3D (symbolic, spatial) features, and show that neither modeling approach achieves satisfactory performance. Our results are consistent with evidence from child language acquisition that emphasizes the difficulty of learning verbs from naive distributional data. We discuss avenues for future work on cognitively-inspired grounded language learning, and release our corpus with the intent of facilitating research on the topic.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We introduce a new dataset for training and evaluating grounded language models. Our data is collected within a virtual reality environment and is designed to emulate the quality of language data to which a pre-verbal child is likely to have access: That is, naturalistic, spontaneous speech paired with richly grounded visuospatial context. We use the collected data to compare several distributional semantics models for verb learning. We evaluate neural models based on 2D (pixel) features as well as feature-engineered models based on 3D (symbolic, spatial) features, and show that neither modeling approach achieves satisfactory performance. Our results are consistent with evidence from child language acquisition that emphasizes the difficulty of learning verbs from naive distributional data. We discuss avenues for future work on cognitively-inspired grounded language learning, and release our corpus with the intent of facilitating research on the topic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "While distributional models of semantics have seen incredible success in recent years (Devlin et al., 2018) , most current models lack \"grounding\", or a connection between words and their referents in the non-linguistic world. Grounding is an important aspect to representations of meaning and arguably lies at the core of language \"understanding\" (Bender and Koller, 2020) . Work on grounded language learning has tended to make opportunistic use of large available corpora, e.g. 
by learning from web-scale corpora of image (Bruni et al., 2012) or video captions (Sun et al., 2019) , or has been driven by particular downstream applications such as robot navigation (Anderson et al., 2018) .", "cite_spans": [ { "start": 86, "end": 107, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF10" }, { "start": 348, "end": 373, "text": "(Bender and Koller, 2020)", "ref_id": "BIBREF3" }, { "start": 525, "end": 545, "text": "(Bruni et al., 2012)", "ref_id": "BIBREF8" }, { "start": 564, "end": 582, "text": "(Sun et al., 2019)", "ref_id": "BIBREF31" }, { "start": 667, "end": 690, "text": "(Anderson et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, we take an aspirational look at grounded distributional semantics models, based on the type of situated contexts and weak supervision from which children are able to learn much of their early vocabulary. Our approach is motivated by the assumption that building computational models which emulate human language processing is in itself a worthwhile endeavor, which can yield both scientific (Potts, 2019) and engineering (Linzen, 2020) advances in NLP. Thus, we aim to develop a dataset that better reflects both the advantages and the challenges of humans' naturalistic learning environments. For example, unlike most vision-and-language models, children likely have the advantage of access to symbolic representations of objects and their physics prior to beginning word learning (Spelke and Kinzler, 2007) . However, also unlike NLP models, which are typically trained on image or video captions with strong signal, children's language input is highly unstructured and the content is often hard to predict given only the grounded context (Gillette et al., 1999) .", "cite_spans": [ { "start": 405, "end": 418, "text": "(Potts, 2019)", "ref_id": "BIBREF27" }, { "start": 796, "end": 822, "text": "(Spelke and Kinzler, 2007)", "ref_id": null }, { "start": 1055, "end": 1078, "text": "(Gillette et al., 1999)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We make two main contributions. First ( \u00a72), using a virtual reality kitchen environment, we collect and release 1 the New Brown Corpus 2 : A dataset containing 18K words of spontaneous speech alongside rich visual and spatial information about the context in which the language occurs. Our protocol is designed to solicit naturalistic speech and to have good coverage of vocabulary items with low average ages of acquisition according to data on child language development (Frank et al., 2017) . Second ( \u00a73), we use our corpus to compare several distributional semantics models, specifically comparing mod-els which represent the environment in terms of objects and their physics to models which represent the environment in terms of pixels. We focus on verbs, which have received considerably less attention in work on grounded language learning than have nouns and adjectives (Forbes et al., 2019) . More so than nouns, verb learning is believed to rely on subtle combinations of both syntactic and grounded contextual signals (Piccin and Waxman, 2007) and thus progress on verb learning is likely to require new approaches to modeling and supervision. In our experiments, we find that strong baseline models, both featureengineered and neural network models, perform only marginally above chance. 
However, comparing models reveals intuitive differences in error patterns, and points to directions for future research.", "cite_spans": [ { "start": 474, "end": 494, "text": "(Frank et al., 2017)", "ref_id": "BIBREF14" }, { "start": 880, "end": 901, "text": "(Forbes et al., 2019)", "ref_id": "BIBREF12" }, { "start": 1031, "end": 1056, "text": "(Piccin and Waxman, 2007)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The goal of our data collection is to enable research on grounded distributional semantics models using data that better resembles the type of input young children receive on a regular basis during language development. Doing this fully is ambitious if not impossible. Thus, we focus on a few aspects of children's language learning environment that are lacking from typical grounded language datasets and that can be emulated well given current technology: 1) spontaneous speech (i.e. as opposed to contrived image or video captions) and 2) rich information about the 3D world (i.e. physical models of the environment as opposed to flat pixels).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "We develop a virtual reality (VR) environment within which we collect this data in a controlled way. Our environment data is described in Section 2.1 and our language data is described in Section 2.2. Our collection process results in a corpus of 152 minutes of concurrent video, audio, and ground-truth environment information, totaling 18K words across 18 unique speakers performing six distinct tasks each. The current data is available for download in json format at https:// github.com/dylanebert/nbc. The code needed to implement the described environment and data recording is available at https://github.com/ dylanebert/nbc_unity_scripts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "2" }, { "text": "Our environment is a simple kitchen environment, implemented in Unity with SteamVR and our experiments are conducted using an HTC Vive headset. We choose to use VR as opposed to alternative interfaces for simulated interactions (e.g. keyboard or mouse control) since VR enables participants to use their usual hand and arm motions and to narrate in real time, leading to more natural speech and more faithful simulations of the actions they are asked to perform.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Environment Construction", "sec_num": "2.1.1" }, { "text": "We design six different kitchen environments, using two different visual aesthetics ( Fig. 1 ) with three floorplans each. This variation is so that we can test, for example, that learned representations are not overfit to specific pixel configurations or to exact hand positions that are dependent on the training environment(s) (e.g. \"being in the northwest corner of the kitchen\" as opposed to \"being near the sink\"). Each kitchen contains at least 20 common objects (not every kitchen contains every object). These objects were selected because they represent words with low average ages of acquisition (described in detail in \u00a72.2) and were available in different Unity packages and thus could be included in the environment with different appearances. Across all kitchens, the movable objects used are: Apple, Ball, Banana, Book, Bowl, Cup, Fork, Knife, Lamp, Plant, Spoon, Toy1:Bear|Bunny, Toy2:Doll|Dinosaur, Toy3:Truck|Plane. 
The participant's hands and head are also included as movable objects. We also include the following immovable objects: Cabinets, Ceiling, Chair, Clock, Counter, Dishwasher, Door, Floor, Fridge, Microwave, Oven, Pillar, Rug, Sink, Stove, Table, Trash Bin, Wall, Window. Our environments are constructed using a combination of Unity Asset Store assets and custom models. All paid assets (most objects we used) come from two packs: 3DEverything Kitchen Collection 2 and Synty Studios Simple House Interiors, from the Unity asset store 3 . These packs account for the two visual styles. VR interaction is enabled using the SteamVR Unity plugin, available for free on the Unity asset store.", "cite_spans": [], "ref_spans": [ { "start": 86, "end": 92, "text": "Fig. 1", "ref_id": "FIGREF0" }, { "start": 1173, "end": 1179, "text": "Table,", "ref_id": null } ], "eq_spans": [], "section": "Environment Construction", "sec_num": "2.1.1" }, { "text": "During data collection, we record the physical state of each object in the environment, according to the ground-truth in-game data, at a rate of 90fps (frames per second). The Vive provides accurate motion capture, allowing us to record the physical state of the user's head and hands (Borges et al., 2018) as well. For each object, we record the physical features described in Table 1 . Audio data is also collected in parallel to spatial data, using the built-in microphone. We later transcribe the audio using the Google Cloud Speech-to-Text API 4 . Word-level timestamps from the API allow us to match words to visuospatial frames. While spatial and audio data are recorded in real-time, video recording is not, since this would introduce high computational overhead and drop frames. Instead, we iterate back over the spatial data, and reconstruct/rerender the playback frame-by-frame. This approach makes it possible to render from any perspective if needed, though our provided image data is only from the original first-person perspective.", "cite_spans": [ { "start": 285, "end": 306, "text": "(Borges et al., 2018)", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 378, "end": 385, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Data Recording", "sec_num": "2.1.2" }, { "text": "We design our protocol so as to solicit the use of vocabulary items that are known to be common among children's early-acquired words. To do this, we first select 20 nouns, 20 verbs, and 20 prepositions/adjectives which have low average ages of acquisition according to Frank et al. (2017) and which can be easily operationalized within our VR environment (e.g. \"apple\", \"put (down)\", \"red\", see Appendix A for full word list). We then choose six basic tasks which the participants will be instructed to carry out within the environment. These tasks are: set the table, eat lunch, wash dishes, play with toys, describe a given object, and clean up toys. The tasks are intended to solicit use of many of the target vocabulary items without explicitly instructing participants to use specific words, since we want to avoid coached or stilted speech as much as possible. One exception is the \"describe a given object\" task, in which we ask participants to describe specific objects as though a child has just asked what the object is, e.g. \"What's a spoon?\". We use this task to ensure uniform coverage of vocabulary items across environments, so that we can construct good train/test splits across differently appearing environments. 
See Appendix B for details on how vocabulary items were distributed.", "cite_spans": [ { "start": 270, "end": 289, "text": "Frank et al. (2017)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Language Data Collection", "sec_num": "2.2" }, { "text": "We recruited 18 participants for our data collection. Participants were students and faculty members from multiple departments involved with language research. We asked each participant to perform each of our tasks, one by one, and to narrate their actions as they went, as though they were a parent or babysitter speaking to a young child. The exact instructions given to participants before each task are shown in Appendix C. An illustrative example of the language in our corpus is the following: \"okay let's pick up the ball and play with that will it bounce let's see if we can bounce it exactly let's let it drop off the edge yes it bounced the ball bounced pick it up again...\". The full data can be browsed at https://github.com/dylanebert/nbc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Data Collection", "sec_num": "2.2" }, { "text": "Our study design was determined not to be human subjects research by the university IRB. All participants were informed of the purpose of the study and provided signatures consenting to the recording and release of their anonymized data for research purposes (consent form in Appendix D).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Data Collection", "sec_num": "2.2" }, { "text": "Since our stated goal was to collect data that better mirrors the distribution of language input a young child is likely to receive, we run several corpus analyses to assess whether this goal was met.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to Child Directed Speech", "sec_num": "2.3" }, { "text": "First, we compare the distribution of vocabulary in our collected data to that observed in the Brent-Siskind Corpus (Brent and Siskind, 2001), a corpus of child-directed speech consisting of 16 English-speaking mothers speaking to their preverbal children. For reference, we also compare with the vocabulary distributions of three existing corpora which could be used for training distributional semantics models: 1) MSR-VTT (Xu et al., 2016), a large dataset of YouTube videos labeled with captions, 2) Room2Room (R2R) (Anderson et al., 2018), a dataset for instruction following within a 3D virtual world, and 3) a random sample of sentences drawn from Wikipedia. Since our primary focus is on grounded language, MSR and R2R offer the more relevant points of comparison, since each contains language aligned with some kind of grounded semantic information (raw RGB video feed for MSR and video+structured navigation map for R2R). We include Wikipedia to exemplify the type of web corpora that are ubiquitous in work on representation learning for NLP. Figure 2 shows, for each of the five corpora, the token- and type-level frequency distributions over major word categories 5 and over individual lexical items. In terms of word categories, we see that our data most closely mirrors the distribution of child-directed speech: both our corpus and the Brent corpus contain primarily verbs (\u223c23% when computed at the token level), followed by pronouns (\u223c19%), followed by nouns at around 17%. In contrast, the MSR video caption corpus and Wikipedia both contain predominantly nouns (\u223c40%), and the R2R instruction dataset contains nouns and verbs in equal proportions (\u223c33% each). None of the baseline corpora contain significant counts of pronouns. Additionally, in terms of specific vocabulary items, our corpus contains decent coverage of many of the most frequent verbs observed in CDS, while the baseline corpora are dominated by a single verb each (\"go\" for R2R and \"be\" for MSR and Wikipedia). For nouns and adjectives, we also see better coverage of top-CDS words in our data compared to the other corpora analyzed, though we note that the difference is less obvious and that the lexical items in these categories are much more topically determined.", "cite_spans": [ { "start": 116, "end": 140, "text": "(Brent and Siskind, 2001)", "ref_id": "BIBREF5" }, { "start": 426, "end": 443, "text": "(Xu et al., 2016)", "ref_id": null }, { "start": 520, "end": 543, "text": "(Anderson et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 1053, "end": 1061, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Vocabulary Distribution", "sec_num": "2.3.1" }, { "text": "Table 1: Features recorded for every object, given as Name (Type): Description. pos (xyz): Absolute position of object center, computed using the transform.position property; equivalent to position relative to an arbitrary world origin, approximately in the center of the floor. rot (xyzw): Absolute rotation of object, computed using the transform.rotation property. vel (xyz): Absolute velocity of object center, computed using the VelocityEstimator class included with SteamVR. relPos (xyz): Position of object's center relative to the person's head, computed using Unity's built-in head.transform.TransformPoint(objectPosition). relRot (xyzw): Rotation of object relative to the person's head, computed by applying the inverse of the head rotation to the object rotation. relVel (xyz): Velocity of the object's center, from the frame of reference of the person's head. bound (xyz): Distance from the object's center to the edge of its bounding box. inView (bool): Whether or not the object was in the person's field of view, computed using Unity's GeometryUtility to check whether the object is in the Camera renderer bounds. This is based on the default camera's 60 degree FOV, not the wide headset FOV. The head and hands are always considered inView. img url (img): Snapshot of the person's entire field of view as a 2D image. We compute this once per frame (as opposed to the above features, which are computed once per object per frame).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Recording", "sec_num": "2.1.2" }, { "text": "We next look at how well the language corresponds to the salient objects and events in the context of its use. This property is important as it relates to how strong the \"training signal\" would be for a model that is attempting to learn linguistic meaning from distributional signal. It is hard to directly estimate the quality of the \"training signal\" available to children. However, experiments in psychology using the Human Simulation Paradigm (HSP) (Gillette et al., 1999; Piccin and Waxman, 2007) come close. 
In the HSP design, experimenters collect audio and video recordings of a child's normal activities (i.e. via head-mounted cameras). Given this data, adults are asked to view segments of videos and predict which words are said at given points in time. This technique is used to estimate how \"predictable\" language is given only the grounded (non-linguistic) input to which a child has access. Using this technique, Gillette et al. (1999) found that adults could identify the nouns used in child-directed speech from the grounded context far more reliably than the verbs. While not directly comparable to our setting, this provides us with an approximate point of comparison against which to benchmark the word-to-context alignment of our collected data. Rather than try to guess the word given a video clip, we instead view a short (5-second) video clip alongside an uttered word and make a binary judgement for whether or not the clip depicts an instance of the word: e.g., yes or no, does the clip depict an instance of \"pick up\"? We chose this design over the HSP design since it provides a more interpretable measure of the quality of the training signal from the perspective of NLP and ML researchers using the data. We expect this variant of the task to yield higher numbers than the HSP design, since it does not require guessing from the entire vocabulary. We take a sample of (up to) five instances for each of our target nouns and verbs (fewer if the word occurs less often in our data) and label them in this way. We find inter-annotator agreement on this task to be very high (91% when computed between two researchers on the project) and thus have a single annotator label all instances. Table 2 shows the results of this analysis. We see the expected trend, in which grounded context is a considerably better signal of noun use than verb use. We also note there is substantial variation in training signal across verbs. For example, while some verbs (e.g. \"pick\", \"take\", \"hold\") have strong signal, other verbs (e.g. \"eat\") tend to be used in contexts largely detached from the activities themselves. The noisiness of this signal is one of the biggest challenges of learning from such naturalistic data, as we will discuss further in \u00a73.4.", "cite_spans": [ { "start": 457, "end": 480, "text": "(Gillette et al., 1999;", "ref_id": "BIBREF17" }, { "start": 481, "end": 505, "text": "Piccin and Waxman, 2007)", "ref_id": "BIBREF26" }, { "start": 932, "end": 954, "text": "Gillette et al. (1999)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 2084, "end": 2091, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Word-Context Alignment", "sec_num": "2.3.2" }, { "text": "Using the above data, we now compare several grounded distributional semantics models (DSMs) in terms of how well they encode verb meanings, focusing in particular on differences in how the environment is represented when put into the DSM. 
Our hypothesis is that models will perform better if they represent the environment in terms of 3D objects and their physics rather than pixels, since work in psychology has shown that children learn to parse the physical world into objects and agents very early in life (Spelke and Kinzler, 2007), long before they show evidence of language understanding. We also explore how models vary when they have access to linguistic supervision early in the pipeline, during environment encoding, in addition to later, during language learning. We note that the models explored are intended as simple instantiations to test the parameters of interest given our (small) dataset. Future work on more advanced models should no doubt yield improvements.", "cite_spans": [ { "start": 511, "end": 537, "text": "(Spelke and Kinzler, 2007)", "ref_id": null } ], "ref_spans": [ { "start": 832, "end": 839, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Table 2: Estimates of training signal quality for nouns and verbs. N is the number of times the word occurs in the training data. P is the precision: given a 5-second clip in which the word is used, how often does the clip depict an instance of the word? Note that the verb \"go\" is an outlier, since it appears most often as \"going to\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3" }, { "text": "Our raw data consists of continuous video and game-engine recordings of the environment, and parallel transcriptions of the natural language narration. To convert this into a format usable by our DSM, we perform the following preprocessing steps. This preprocessing phase is common to all the models evaluated. First, we segment the environment data into \"clips\". Each clip is five seconds long 6 and thus consists of 450 frames (since the VR environment recording is at 90fps), which we subsample to 50 frames (10fps). Since our grounded DSMs require associating a word w with its grounded context c, we consider the clip immediately following the utterance of w to be the context c. See the earlier discussion (\u00a72.3.2) for estimates of the signal-to-noise ratio produced by this labeling method. Training clips that are not the context of any word are discarded. We hold out two subjects' sessions (one from each visual aesthetic) for test, and use the remaining 16 subjects' sessions for training. Finally, since this verb-learning problem proves quite challenging, we scope down our analysis to the following 14 verbs, which come from the 20 verbs specified in our initial target vocabulary (\u00a72.2), less the six which did not ultimately occur in our data: \"walk\", \"throw\", \"put (down)\", \"get\", \"go\", \"give\", \"wash\", \"open\", \"hold\", \"eat\", \"play\", \"take\", \"drop\", \"pick (up)\". Again, these words all have low average ages of acquisition (19 to 28 months) and thus should represent reasonable targets for evaluation. Nonetheless, we will see in \u00a73.3 that models struggle to perform well on this task; we elaborate on this discussion in \u00a74.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "3.1" }, { "text": "We train and evaluate four different DSMs, each of which represents a word w in terms of its grounded context c. The parameters we vary are 1) the feature representation of c (\u00a73.2.1) and 2) the type of supervision provided to the DSM (\u00a73.2.2). All models share the same simple pipeline. 
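As a concrete reference for the preprocessing just described, the following minimal Python sketch segments a recording into clips and pairs each target word with the clip that follows it; the function names, the transcript format (token, start time in seconds) and the per-frame data layout are our own assumptions for illustration, not the released nbc_starsem code.

import numpy as np

FPS = 90            # rate of the environment recording
CLIP_SECONDS = 5    # clip length used in the paper
FRAMES_KEPT = 50    # frames retained per clip (10 fps after subsampling)

def clip_following(frames, start_time):
    # frames: time-ordered list of per-frame records; start_time: the word's
    # start time in seconds, taken from the word-level transcript timestamps.
    start = int(start_time * FPS)
    clip = frames[start:start + CLIP_SECONDS * FPS]                # 450 frames
    keep = np.linspace(0, len(clip) - 1, FRAMES_KEPT).astype(int)  # subsample to 10 fps
    return [clip[i] for i in keep]

def build_instances(frames, transcript, target_verbs):
    # Pair every occurrence of a target verb with the clip that follows it;
    # clips that are not the context of any word are simply never built.
    return [(word, clip_following(frames, t))
            for word, t in transcript if word in target_verbs]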
First, we build a word-context matrix M which maps each token-level instance of w to a featurized representation of c. We then run dimensionality reduction on M. Finally, we take the type-level representation of w to be the average row vector of M, across all instances of w. All of our model code is available at http://github.com/dylanebert/nbc_starsem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3.2" }, { "text": "Object-Based. In our Object-Based encoder, we take a feature-engineered approach intended to provide the model with knowledge of the basic object physics likely to be relevant to the semantics of the verbs we target. Specifically, we represent each clip using four feature templates (trajectory, vel, dist to head, relPos), defined as follows. First, we find the \"most moving object\", i.e., the object with the highest average velocity over the clip. We then compute our four sets of features for this most moving object.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Encoders", "sec_num": "3.2.1" }, { "text": "Our velocity and relPos features are simply the mean, min, max, start, end, and variance of the object's velocity and relative position, respectively, over the clip. For our dist to head feature, for each position dimension (xyz), we compute the following values of the distance from the object's center to the participant's head: start, end, mean, var, min, max, min idx, max idx, where min/max index is the point at which the min/max value was reached (recorded as a % of the way through the clip). Finally, our trajectory features are intended to capture the shape of the object's trajectory over the clip. To compute this, for each position dimension (xyz), we compute four points during the clip: start, peak (max), trough (min), end. Then, if the max happens before the min, we consider the max to be \"key point 1\" (kp1) and the min to be \"key point 2\" (kp2), and vice-versa if the min happens before the max. We then compute the following features: kp1-start, kp2-kp1, end-kp2, end-start.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Encoders", "sec_num": "3.2.1" }, { "text": "Pretrained CNN. To contrast with the above feature-engineered approach, we also implement an encoder based on the features extracted by a pretrained CNN. Our CNN encoder has an advantage over the Object-Based encoder in that it has been trained on far more image data, but has a disadvantage in that it lacks domain-specific feature engineering. We use pretrained VGG16 (Simonyan and Zisserman, 2014), which is a 16-layer CNN trained on ImageNet that produces a 4096-dimensional vector for each image. We compute this vector for each frame in the clip, and then compute the following features along each dimension in order to get a vector representation of the full clip: start value, end value, min, max, mean.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context Encoders", "sec_num": "3.2.1" }, { "text": "Given a matrix M that maps each word instance to a feature vector using one of the encoders above, we run dimensionality reduction to get a 10d vector 7 for each word instance. We consider two settings. In the unsupervised setting, we run vanilla SVD. 
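For the unsupervised setting just described, a minimal sketch of the shared reduce-then-average step, written with scikit-learn for illustration (the released code may differ), is:

import numpy as np
from sklearn.decomposition import TruncatedSVD

def type_vectors(instance_words, instance_features, n_dims=10):
    # Rows of M are featurized clips (one row per word instance), built with
    # either the Object-Based or the CNN encoder described above.
    M = np.asarray(instance_features, dtype=float)
    reduced = TruncatedSVD(n_components=n_dims).fit_transform(M)
    # Type-level representation: average the rows belonging to each word type.
    words = np.asarray(instance_words)
    return {w: reduced[words == w].mean(axis=0) for w in set(instance_words)}

At test time (Section 3.3), an unseen clip is encoded the same way and assigned the verb whose type-level vector is most cosine-similar to it.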
In the supervised setting, we run supervised LDA in which the \"labels\" are the words uttered at the start of the clip, as described in \u00a73.1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dimensionality Reduction", "sec_num": "3.2.2" }, { "text": "We evaluate our models in terms of their precision when assigning verbs to unseen clips. Specifically, for our two held-out subjects, we partition each full session into consecutive 5-second clips, resulting in 189 clips total. For testing, unlike in training, we include all clips, even those in which the subject is not speaking. Then, for each model, we encode each clip using the model's encoder and then find the verb with the highest cosine similarity to the encoded clip. The authors then view each clip alongside the predicted verb and make a binary judgement for whether or not the verb accurately depicts the action in the clip, e.g. yes or no, does the clip depict an instance of \"pick up\"? To avoid annotation bias, all four models plus a random baseline are shuffled and evaluated together, and annotators do not know which prediction comes from which model. Annotator agreement was high (91%). Table 3 reports our main results for each model. We compute both \"strict\" precision, in which a prediction is only considered correct if both annotators deemed it correct, as well as \"soft\" precision, in which a prediction is correct as long as one annotator deemed it correct. As the results show, no model performs especially well. Random guessing achieves 32% (soft) precision on average. The supervised Object-Based model and the unsupervised CNN model both perform a bit better (40% on average), but we note that the samples are small and we cannot call these differences significant (see the 95% bootstrapped confidence intervals given in Table 3). Only the unsupervised Object-Based model stands out, in that it performs significantly worse than all other models (20% soft precision). For the CNN models, we do not see a significant difference when using supervised dimensionality reduction. Figure 3 shows example clips for each encoder. Table 4 shows a breakdown of model performance by verb. We see a few intuitive differences between the CNN-based model and the Object-Based model, discussed below. We note these observations are based on a small number of predictions, and thus should be taken only as suggestive.", "cite_spans": [], "ref_spans": [ { "start": 906, "end": 913, "text": "Table 3", "ref_id": null }, { "start": 1547, "end": 1554, "text": "Table 3", "ref_id": null }, { "start": 1799, "end": 1807, "text": "Figure 3", "ref_id": "FIGREF3" }, { "start": 1846, "end": 1853, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "Table 3: Precision of each method with 95% bootstrapped CI. \"Soft\" means a prediction is correct as long as one annotator considers it to be so; \"strict\" means a prediction is only considered correct if both annotators agree that it is correct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "3.4" }, { "text": "Low-level actions. The Object-Based models achieve higher precision on low-level verbs like \"pick\", \"take\", and \"hold\". This makes intuitive sense, since the 3D spatial features are designed to capture these types of mechanical actions, independent of the objects with which they co-occur. 
The 2D visual data, on the other hand, may struggle to ground a visually diverse set of objects-in-motion to these low-level mechanical actions.", "cite_spans": [], "ref_spans": [ { "start": 141, "end": 148, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results and Analysis", "sec_num": "3.4" }, { "text": "Visual cues. Some actions are strongly predicted by specific objects, which are well captured by visual cues. This is most obvious in the case of \"wash\", on which the CNN achieves higher precision than the Object-Based models. This is again intuitive, as \"wash\" tends to co-occur with a clear view of the sink, which is a large, visually-distinct part of the field of view.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "3.4" }, { "text": "Vague actions. Actions like \"go\", \"walk\", and \"hold\" occur frequently, even when the language signal does not reflect it. That is, in any given clip, there is a high chance that the participant walks, goes somewhere, or holds something. Thus, models which happen to predict these verbs frequently may have artificially high accuracy. For example, the unsupervised Object-Based model predicts \"go\" only once and \"hold\" only 5 times, which may contribute to it performing significantly worse than random, despite seeming to capture low-level actions well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "3.4" }, { "text": "Table 4: For each verb, N is the number of times the model predicts that verb. Precision is the proportion of the time that prediction was correct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "3.4" }, { "text": "Special cases. We note that some verbs are very difficult or impossible to detect given limitations of our data. In particular, \"give\", \"eat\", and \"open\" have a precision of 0 across all models, as well as in the training signal (\u00a72.3.2). For example, \"give\" only occurs twice in our data (\"fluffy teddy bear going to give it a little hug\" and \"turn on the water give it a little sore[sic] and we can let it dry there\"), but cannot occur in its prototypical sense since there is no clear second agent to be a recipient. During instances of \"eat\" and \"open\", participants tended to mime the actions, but the in-game physics data does not faithfully capture the semantics of these verbs (e.g., containers do not actually open). These words highlight limitations of the environment which may be addressed in future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Analysis", "sec_num": "3.4" }, { "text": "We compare two types of models for grounded verb learning, one based on 2D visual features and one based on 3D symbolic and spatial features. Our analysis suggests that these approaches favor different aspects of verb semantics. One open question is how to combine these differing signals, and how to design training objectives that encourage models to choose the right sensory inputs and time scale to which to ground each verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "We evaluated on a small set of verbs that are acquired comparably early by children. Nonetheless, our models perform only marginally better than random. 
This disconnect highlights an important challenge to be addressed by work on computational models of grounded language learning: Can statistical associations between words and contexts result in more than simple noun-centric image or video captioning, eventually forming general-purpose language models? While that question is still wide open, research from psychology could better inform work on grounded NLP. For example, Piccin and Waxman (2007) argues that verb learning in particular is not learned from purely grounded signal, but rather is \"scaffolded\" by earlier-acquired knowledge of nouns and of syntax. From this perspective, the models we explored here, which are similar to what is used for noun-learning, are far too simplistic for verb learning. More research is needed on ways to combine linguistic and grounded signal in order to learn more abstract semantic concepts.", "cite_spans": [ { "start": 577, "end": 601, "text": "Piccin and Waxman (2007)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "4" }, { "text": "We contribute to a large body of research on learning grounded representations of language. Grounded representations have been shown to improve performance on intrinsic semantic similary metrics Vuli\u0107 et al., 2017) as well as to be better predictors of human brain activity (Anderson et al., 2015; Bulat et al., 2017) . Much prior work has explored the augmentation of standard language modeling objectives with 2D image (Bruni et al., 2011; Lazaridou et al., 2015; Silberer and Lapata, 2012; Divvala et al., 2014) and video (Sun et al., 2019) data. Recent work on detecting fine-grained events in videos is particularly relevant (Hendricks et al., 2018; Zhukov et al., 2019; Fried et al., 2020, among others) . Especially relevant is the data collected by Gaspers et al. (2014) , in which human subjects were asked to play simple games with a physical robot and narrate while doing so. Our data and work differs primarily in that we focus on the ability to ground to symbolic objects and physics rather than only to pixel data. Past work on \"situated language learning\", inspired by emergence theories of language acquisition (MacWhinney, 2013), has trained AI agents to learn language from scratch by interacting with humans and/or each other in simulated environments or games (Wang et al., 2016; Mirowski et al., 2016; Urbanek et al., 2019; Beattie et al., 2016; Hill et al., 2018; Mirowski et al., 2016) ,", "cite_spans": [ { "start": 195, "end": 214, "text": "Vuli\u0107 et al., 2017)", "ref_id": "BIBREF33" }, { "start": 274, "end": 297, "text": "(Anderson et al., 2015;", "ref_id": "BIBREF0" }, { "start": 298, "end": 317, "text": "Bulat et al., 2017)", "ref_id": "BIBREF9" }, { "start": 421, "end": 441, "text": "(Bruni et al., 2011;", "ref_id": "BIBREF7" }, { "start": 442, "end": 465, "text": "Lazaridou et al., 2015;", "ref_id": "BIBREF22" }, { "start": 466, "end": 492, "text": "Silberer and Lapata, 2012;", "ref_id": "BIBREF28" }, { "start": 493, "end": 514, "text": "Divvala et al., 2014)", "ref_id": "BIBREF11" }, { "start": 525, "end": 543, "text": "(Sun et al., 2019)", "ref_id": "BIBREF31" }, { "start": 630, "end": 654, "text": "(Hendricks et al., 2018;", "ref_id": "BIBREF18" }, { "start": 655, "end": 675, "text": "Zhukov et al., 2019;", "ref_id": "BIBREF36" }, { "start": 676, "end": 709, "text": "Fried et al., 2020, among others)", "ref_id": null }, { "start": 757, "end": 778, "text": "Gaspers et al. 
(2014)", "ref_id": "BIBREF16" }, { "start": 1280, "end": 1299, "text": "(Wang et al., 2016;", "ref_id": "BIBREF34" }, { "start": 1300, "end": 1322, "text": "Mirowski et al., 2016;", "ref_id": "BIBREF25" }, { "start": 1323, "end": 1344, "text": "Urbanek et al., 2019;", "ref_id": "BIBREF32" }, { "start": 1345, "end": 1366, "text": "Beattie et al., 2016;", "ref_id": null }, { "start": 1367, "end": 1385, "text": "Hill et al., 2018;", "ref_id": "BIBREF20" }, { "start": 1386, "end": 1408, "text": "Mirowski et al., 2016)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "5" }, { "text": "We introduce the New Brown Corpus, a dataset of spontaneous speech aligned with rich environment data, collected in a VR kitchen environment. We show that, compared to existing corpora, the distribution of vocabulary collected is more comparable to that found in child-directed speech. We analyze several baseline distributional models for verb learning. Our results highlight the challenges of learning from naturalistic data, and outlines directions for future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "https://assetstore.unity.com/ 4 https://cloud.google.com/ text-to-speech/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We preprocess all corpora using the SpaCy 2.3.2 preprocessing pipeline with the en core web lg model. For our data and Brent, we process the entire corpus. Since MSR, R2R, and Wikipedia are much larger, we process a random sample of 5K sentences from each.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The length of 5 seconds was chosen heuristically prior to model development.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "10d is chosen since we are only attempting to differentiate between 14 words, and thus our supervised LDA cannot use more than 13d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by DARPA under award number HR00111990064. Thanks to George Konidaris, Roman Feiman, Mike Hughes, members of the Language Understanding and Representation (LUNAR) Lab at Brown, and the reviewers for their help and feedback on this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "7" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text", "authors": [ { "first": "Andrew", "middle": [], "last": "James Anderson", "suffix": "" }, { "first": "Elia", "middle": [], "last": "Bruni", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Lopopolo", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2015, "venue": "NeuroImage", "volume": "120", "issue": "", "pages": "309--322", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andrew James Anderson, Elia Bruni, Alessandro Lopopolo, Massimo Poesio, and Marco Baroni. 2015. Reading visually embodied meaning from the brain: Visually grounded computational models de- code visual-object mental imagery induced by writ- ten text. 
NeuroImage, 120:309-322.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Visionand-Language Navigation: Interpreting visuallygrounded navigation instructions in real environments", "authors": [ { "first": "Peter", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Damien", "middle": [], "last": "Teney", "suffix": "" }, { "first": "Jake", "middle": [], "last": "Bruce", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Niko", "middle": [], "last": "S\u00fcnderhauf", "suffix": "" }, { "first": "Ian", "middle": [], "last": "Reid", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Gould", "suffix": "" }, { "first": "Anton", "middle": [], "last": "Van Den", "suffix": "" }, { "first": "", "middle": [], "last": "Hengel", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko S\u00fcnderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. 2018. Vision- and-Language Navigation: Interpreting visually- grounded navigation instructions in real environ- ments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Climbing towards NLU: On meaning, form, and understanding in the age of data", "authors": [ { "first": "Emily", "middle": [ "M" ], "last": "Bender", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Koller", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5185--5198", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily M. Bender and Alexander Koller. 2020. Climb- ing towards NLU: On meaning, form, and under- standing in the age of data. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 5185-5198, Online. As- sociation for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Htc vive: Analysis and accuracy improvement", "authors": [ { "first": "Miguel", "middle": [], "last": "Borges", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Symington", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Coltin", "suffix": "" }, { "first": "Trey", "middle": [], "last": "Smith", "suffix": "" }, { "first": "Rodrigo", "middle": [], "last": "Ventura", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)", "volume": "", "issue": "", "pages": "2610--2615", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miguel Borges, Andrew Symington, Brian Coltin, Trey Smith, and Rodrigo Ventura. 2018. Htc vive: Anal- ysis and accuracy improvement. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2610-2615. 
IEEE.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The role of exposure to isolated words in early vocabulary development", "authors": [ { "first": "R", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Jeffrey", "middle": [ "Mark" ], "last": "Brent", "suffix": "" }, { "first": "", "middle": [], "last": "Siskind", "suffix": "" } ], "year": 2001, "venue": "Cognition", "volume": "81", "issue": "2", "pages": "33--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael R Brent and Jeffrey Mark Siskind. 2001. The role of exposure to isolated words in early vocabu- lary development. Cognition, 81(2):B33-B44.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A first language: The early stages", "authors": [ { "first": "Roger", "middle": [], "last": "Brown", "suffix": "" } ], "year": 1973, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roger Brown. 1973. A first language: The early stages. Harvard U. Press.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Distributional semantics from text and images", "authors": [ { "first": "Elia", "middle": [], "last": "Bruni", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Giang Binh Tran", "suffix": "" }, { "first": "", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics", "volume": "", "issue": "", "pages": "22--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elia Bruni, Giang Binh Tran, and Marco Baroni. 2011. Distributional semantics from text and images. In Proceedings of the GEMS 2011 Workshop on GE- ometrical Models of Natural Language Semantics, pages 22-32, Edinburgh, UK. Association for Com- putational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Distributional semantics with eyes: Using image analysis to improve computational representations of word meaning", "authors": [ { "first": "Elia", "middle": [], "last": "Bruni", "suffix": "" }, { "first": "Jasper", "middle": [], "last": "Uijlings", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "Nicu", "middle": [], "last": "Sebe", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 20th ACM international conference on Multimedia", "volume": "", "issue": "", "pages": "1219--1228", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elia Bruni, Jasper Uijlings, Marco Baroni, and Nicu Sebe. 2012. Distributional semantics with eyes: Us- ing image analysis to improve computational repre- sentations of word meaning. In Proceedings of the 20th ACM international conference on Multimedia, pages 1219-1228. ACM.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Speaking, seeing, understanding: Correlating semantic models with conceptual representation in the brain", "authors": [ { "first": "Luana", "middle": [], "last": "Bulat", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Ekaterina", "middle": [], "last": "Shutova", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1081--1091", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luana Bulat, Stephen Clark, and Ekaterina Shutova. 2017. Speaking, seeing, understanding: Correlating semantic models with conceptual representation in the brain. 
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 1081-1091, Copenhagen, Denmark. As- sociation for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning everything about anything: Weblysupervised visual concept learning", "authors": [ { "first": "K", "middle": [], "last": "Santosh", "suffix": "" }, { "first": "Ali", "middle": [], "last": "Divvala", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Farhadi", "suffix": "" }, { "first": "", "middle": [], "last": "Guestrin", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "3270--3277", "other_ids": {}, "num": null, "urls": [], "raw_text": "Santosh K Divvala, Ali Farhadi, and Carlos Guestrin. 2014. Learning everything about anything: Webly- supervised visual concept learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3270-3277.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Neural naturalist: Generating fine-grained image comparisons", "authors": [ { "first": "Maxwell", "middle": [], "last": "Forbes", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Kaeser-Chen", "suffix": "" }, { "first": "Piyush", "middle": [], "last": "Sharma", "suffix": "" }, { "first": "Serge", "middle": [], "last": "Belongie", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "708--717", "other_ids": { "DOI": [ "10.18653/v1/D19-1065" ] }, "num": null, "urls": [], "raw_text": "Maxwell Forbes, Christine Kaeser-Chen, Piyush Sharma, and Serge Belongie. 2019. Neural natural- ist: Generating fine-grained image comparisons. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 708- 717, Hong Kong, China. Association for Computa- tional Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Brown corpus manual", "authors": [ { "first": "Nelson", "middle": [], "last": "Francis", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Kucera", "suffix": "" } ], "year": 1979, "venue": "Letters to the Editor", "volume": "5", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "W Nelson Francis and Henry Kucera. 1979. Brown corpus manual. 
Letters to the Editor, 5(2):7.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Wordbank: An open repository for developmental vocabulary data", "authors": [ { "first": "C", "middle": [], "last": "Michael", "suffix": "" }, { "first": "Mika", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Braginsky", "suffix": "" }, { "first": "Virginia", "middle": [ "A" ], "last": "Yurovsky", "suffix": "" }, { "first": "", "middle": [], "last": "Marchman", "suffix": "" } ], "year": 2017, "venue": "Journal of child language", "volume": "44", "issue": "3", "pages": "677--694", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael C Frank, Mika Braginsky, Daniel Yurovsky, and Virginia A Marchman. 2017. Wordbank: An open repository for developmental vocabulary data. Journal of child language, 44(3):677-694.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Learning to segment actions from observation and narration", "authors": [ { "first": "Daniel", "middle": [], "last": "Fried", "suffix": "" }, { "first": "Jean-Baptiste", "middle": [], "last": "Alayrac", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Aida", "middle": [], "last": "Nematzadeh", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2569--2588", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Fried, Jean-Baptiste Alayrac, Phil Blunsom, Chris Dyer, Stephen Clark, and Aida Nematzadeh. 2020. Learning to segment actions from observa- tion and narration. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 2569-2588, Online. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A multimodal corpus for the evaluation of computational models for (grounded) language acquisition", "authors": [ { "first": "Judith", "middle": [], "last": "Gaspers", "suffix": "" }, { "first": "Maximilian", "middle": [], "last": "Panzner", "suffix": "" }, { "first": "Andre", "middle": [], "last": "Lemme", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Cimiano", "suffix": "" }, { "first": "Katharina", "middle": [ "J" ], "last": "Rohlfing", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Wrede", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 5th Workshop on Cognitive Aspects of Computational Language Learning (CogACLL)", "volume": "", "issue": "", "pages": "30--37", "other_ids": { "DOI": [ "10.3115/v1/W14-0507" ] }, "num": null, "urls": [], "raw_text": "Judith Gaspers, Maximilian Panzner, Andre Lemme, Philipp Cimiano, Katharina J. Rohlfing, and Sebas- tian Wrede. 2014. A multimodal corpus for the eval- uation of computational models for (grounded) lan- guage acquisition. In Proceedings of the 5th Work- shop on Cognitive Aspects of Computational Lan- guage Learning (CogACLL), pages 30-37, Gothen- burg, Sweden. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Human simulations of vocabulary learning", "authors": [ { "first": "Jane", "middle": [], "last": "Gillette", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Gleitman", "suffix": "" }, { "first": "Lila", "middle": [], "last": "Gleitman", "suffix": "" }, { "first": "Anne", "middle": [], "last": "Lederer", "suffix": "" } ], "year": 1999, "venue": "Cognition", "volume": "73", "issue": "2", "pages": "135--176", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jane Gillette, Henry Gleitman, Lila Gleitman, and Anne Lederer. 1999. Human simulations of vocabu- lary learning. Cognition, 73(2):135-176.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Localizing moments in video with temporal language", "authors": [ { "first": "Lisa", "middle": [ "Anne" ], "last": "Hendricks", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Eli", "middle": [], "last": "Shechtman", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Sivic", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Darrell", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Russell", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1380--1390", "other_ids": { "DOI": [ "10.18653/v1/D18-1168" ] }, "num": null, "urls": [], "raw_text": "Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 2018. Localizing moments in video with temporal lan- guage. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 1380-1390, Brussels, Belgium. Associ- ation for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Understanding grounded language learning agents", "authors": [ { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1710.09867" ] }, "num": null, "urls": [], "raw_text": "Felix Hill, Karl Moritz Hermann, Phil Blun- som, and Stephen Clark. 2017. Understanding grounded language learning agents. arXiv preprint arXiv:1710.09867.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Understanding grounded language learning agents", "authors": [ { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Karl", "middle": [ "Moritz" ], "last": "Hermann", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Clark", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Hill, Karl Moritz Hermann, Phil Blunsom, and Stephen Clark. 2018. 
Understanding grounded lan- guage learning agents.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Learning visually grounded sentence representations", "authors": [ { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Allan", "middle": [], "last": "Jabri", "suffix": "" }, { "first": "Maximilian", "middle": [], "last": "Nickel", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1707.06320" ] }, "num": null, "urls": [], "raw_text": "Douwe Kiela, Alexis Conneau, Allan Jabri, and Maximilian Nickel. 2017. Learning visually grounded sentence representations. arXiv preprint arXiv:1707.06320.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Combining language and vision with a multimodal skip-gram model", "authors": [ { "first": "Angeliki", "middle": [], "last": "Lazaridou", "suffix": "" }, { "first": "", "middle": [], "last": "Nghia The", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Pham", "suffix": "" }, { "first": "", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "153--163", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angeliki Lazaridou, Nghia The Pham, and Marco Ba- roni. 2015. Combining language and vision with a multimodal skip-gram model. pages 153-163.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "How can we accelerate progress towards human-like linguistic generalization?", "authors": [ { "first": "", "middle": [], "last": "Tal Linzen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5210--5217", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tal Linzen. 2020. How can we accelerate progress to- wards human-like linguistic generalization? In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5210- 5217, Online. Association for Computational Lin- guistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The emergence of language from embodiment", "authors": [ { "first": "Brian", "middle": [], "last": "Macwhinney", "suffix": "" } ], "year": 2013, "venue": "The emergence of language", "volume": "", "issue": "", "pages": "231--274", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian MacWhinney. 2013. The emergence of language from embodiment. In The emergence of language, pages 231-274. 
Psychology Press.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Learning to navigate in complex environments", "authors": [ { "first": "Piotr", "middle": [], "last": "Mirowski", "suffix": "" }, { "first": "Razvan", "middle": [], "last": "Pascanu", "suffix": "" }, { "first": "Fabio", "middle": [], "last": "Viola", "suffix": "" }, { "first": "Hubert", "middle": [], "last": "Soyer", "suffix": "" }, { "first": "J", "middle": [], "last": "Andrew", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Ballard", "suffix": "" }, { "first": "Misha", "middle": [], "last": "Banino", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Denil", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Goroshin", "suffix": "" }, { "first": "Koray", "middle": [], "last": "Sifre", "suffix": "" }, { "first": "", "middle": [], "last": "Kavukcuoglu", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.03673" ] }, "num": null, "urls": [], "raw_text": "Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hu- bert Soyer, Andrew J Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Ko- ray Kavukcuoglu, et al. 2016. Learning to nav- igate in complex environments. arXiv preprint arXiv:1611.03673.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Why nouns trump verbs in word learning: New evidence from children and adults in the human simulation paradigm", "authors": [ { "first": "B", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "", "middle": [], "last": "Piccin", "suffix": "" }, { "first": "", "middle": [], "last": "Sandra R Waxman", "suffix": "" } ], "year": 2007, "venue": "Language Learning and Development", "volume": "3", "issue": "4", "pages": "295--323", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas B Piccin and Sandra R Waxman. 2007. Why nouns trump verbs in word learning: New evidence from children and adults in the human simulation paradigm. Language Learning and Development, 3(4):295-323.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A case for deep learning in semantics: Response to pater. Language", "authors": [ { "first": "Christopher", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2019, "venue": "", "volume": "95", "issue": "", "pages": "115--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christopher Potts. 2019. A case for deep learning in se- mantics: Response to pater. Language, 95(1):e115- e124.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Grounded models of semantic representation", "authors": [ { "first": "Carina", "middle": [], "last": "Silberer", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", "volume": "", "issue": "", "pages": "1423--1433", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carina Silberer and Mirella Lapata. 2012. Grounded models of semantic representation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1423-1433, Jeju Island, Korea. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Very deep convolutional networks for large-scale image recognition", "authors": [ { "first": "Karen", "middle": [], "last": "Simonyan", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Zisserman", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1409.1556" ] }, "num": null, "urls": [], "raw_text": "Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Videobert: A joint model for video and language representation learning", "authors": [ { "first": "Chen", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Austin", "middle": [], "last": "Myers", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Vondrick", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Murphy", "suffix": "" }, { "first": "Cordelia", "middle": [], "last": "Schmid", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1904.01766" ] }, "num": null, "urls": [], "raw_text": "Chen Sun, Austin Myers, Carl Vondrick, Kevin Mur- phy, and Cordelia Schmid. 2019. Videobert: A joint model for video and language representation learn- ing. arXiv preprint arXiv:1904.01766.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Learning to speak and act in a fantasy text adventure game", "authors": [ { "first": "Jack", "middle": [], "last": "Urbanek", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Siddharth", "middle": [], "last": "Karamcheti", "suffix": "" }, { "first": "Saachi", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Humeau", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rockt\u00e4schel", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Arthur", "middle": [], "last": "Szlam", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "673--683", "other_ids": { "DOI": [ "10.18653/v1/D19-1062" ] }, "num": null, "urls": [], "raw_text": "Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rockt\u00e4schel, Douwe Kiela, Arthur Szlam, and Ja- son Weston. 2019. Learning to speak and act in a fantasy text adventure game. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 673-683, Hong Kong, China. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Hyperlex: A large-scale evaluation of graded lexical entailment", "authors": [ { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" }, { "first": "Daniela", "middle": [], "last": "Gerz", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Hill", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Korhonen", "suffix": "" } ], "year": 2017, "venue": "Computational Linguistics", "volume": "43", "issue": "4", "pages": "781--835", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivan Vuli\u0107, Daniela Gerz, Douwe Kiela, Felix Hill, and Anna Korhonen. 2017. Hyperlex: A large-scale evaluation of graded lexical entailment. Computa- tional Linguistics, 43(4):781-835.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Learning language games through interaction", "authors": [ { "first": "I", "middle": [], "last": "Sida", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Liang", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2368--2378", "other_ids": { "DOI": [ "10.18653/v1/P16-1224" ] }, "num": null, "urls": [], "raw_text": "Sida I. Wang, Percy Liang, and Christopher D. Man- ning. 2016. Learning language games through in- teraction. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2368-2378, Berlin, Germany. Association for Computational Linguis- tics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Msrvtt: A large video description dataset for bridging video and language", "authors": [ { "first": "Jun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Tao", "middle": [], "last": "Mei", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Yao", "suffix": "" }, { "first": "Yong", "middle": [], "last": "Rui", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016. Msr- vtt: A large video description dataset for bridging video and language. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition (CVPR).", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Cross-task weakly supervised learning from instructional videos", "authors": [ { "first": "Dimitri", "middle": [], "last": "Zhukov", "suffix": "" }, { "first": "Jean-Baptiste", "middle": [], "last": "Alayrac", "suffix": "" }, { "first": "Ramazan", "middle": [ "Gokberk" ], "last": "Cinbis", "suffix": "" }, { "first": "David", "middle": [], "last": "Fouhey", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Laptev", "suffix": "" }, { "first": "Josef", "middle": [], "last": "Sivic", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "3537--3545", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dimitri Zhukov, Jean-Baptiste Alayrac, Ramazan Gok- berk Cinbis, David Fouhey, Ivan Laptev, and Josef Sivic. 2019. 
Cross-task weakly supervised learn- ing from instructional videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3537-3545.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Screenshots of a person picking up a banana in each of our two kitchen aesthetics.", "uris": null }, "FIGREF2": { "num": null, "type_str": "figure", "text": "Comparison of word category and lexical distributions. Lexical item frequency labels are \u00d71000. Distributions are over the most frequent categories/words according to the Brent-Siskind corpus of child-directed speech.", "uris": null }, "FIGREF3": { "num": null, "type_str": "figure", "text": "Example clips, subsampled to 6 frames. (b) is (a)'s nearest-neighbor using the Object-Based model. In each of these clips, the participant picks up an object with their right hand. (d) is (c)'s nearest-neighbor using the CNN. In each, the participant is washing dishes in a similar-looking sink.
          Soft              Strict
Random    0.32 (0.25-0.39)  0.23 (0.17-0.29)
Obj.      0.20 (0.14-0.25)  0.13 (0.08-0.19)
CNN       0.40 (0.33-0.47)  0.29 (0.22-0.36)
Obj+Sup.  0.40 (0.33-0.47)  0.28 (0.22-0.34)
CNN+Sup.  0.35 (0.28-0.42)  0.25 (0.19-0.31)", "uris": null }, "TABREF0": { "html": null, "type_str": "table", "text": "Object features recorded during data collection. Object appearance does not vary across frames; img url does not vary across objects. All other features vary across object and frame.", "num": null, "content": "" }, "TABREF1": { "html": null, "type_str": "table", "text": "estimates that nouns can be predicted at 45% accuracy and verbs at 15% accuracy.", "num": null, "content": "
(a) Token-Level Frequency of Word Categories
Data            INTJ  ADV   AUX   DET   NOUN  PRON  VERB
CDS (Brent)     0.08  0.09  0.11  0.14  0.17  0.19  0.23
Ours            0.01  0.12  0.10  0.18  0.21  0.16  0.22
Captions (MSR)  0.00  0.02  0.08  0.25  0.44  0.02  0.19
Instr. (R2R)    0.00  0.09  0.00  0.26  0.33  0.00  0.32
Web (Wiki)      0.00  0.06  0.08  0.23  0.41  0.04  0.19

(b) Type-Level Frequency of Word Categories
Data            NUM   INTJ  ADV   ADJ   PROPN VERB  NOUN
CDS (Brent)     0.01  0.01  0.04  0.11  0.17  0.24  0.42
Ours            0.01  0.02  0.08  0.13  0.07  0.30  0.40
Captions (MSR)  0.01  0.00  0.03  0.11  0.12  0.24  0.50
Instr. (R2R)    0.03  0.00  0.08  0.15  0.03  0.15  0.56
Web (Wiki)      0.05  0.00  0.02  0.10  0.12  0.40  0.30

(c) Token Frequency of Individual Verbs (frequencies \u00d71000)
Data            want  let   put   look  get   say   see   can   come  go
CDS (Brent)     2.6   3.1   3.1   3.2   3.5   3.6   4.0   4.4   4.5   10.4
Ours            3.3   4.7   2.8   2.7   0.6   2.9   11.5  13.1  0.8   16.7
Captions (MSR)  0.1   0.1   1.1   1.6   1.1   0.3   0.5   0.3   0.4   1.0
Instr. (R2R)    0.0   0.0   0.0   0.0   0.5   0.0   0.1   0.0   0.0   10.9
Web (Wiki)      0.2   0.1   0.2   0.1   0.2   0.6   0.5   0.9   0.6   0.6

(d) Token Frequency of Individual Nouns (frequencies \u00d71000)
Data            water kitty hand  girl  foot  ball  one   book  boy   baby
CDS (Brent)     0.7   0.7   0.7   0.8   0.9   1.1   1.1   1.1   2.1   1.2
(row truncated) 0.0   1.5   0.1   0.1   0.5   24.7  5.2
" }, "TABREF5": { "html": null, "type_str": "table", "text": "Analysis of model precision broken down by verb. Top-level columns are the unsupervised CNN, unsupervised obj model, and supervised obj model. 8", "num": null, "content": "" } } } }