diff --git "a/-NE2T4oBgHgl3EQfQQbP/content/tmp_files/load_file.txt" "b/-NE2T4oBgHgl3EQfQQbP/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/-NE2T4oBgHgl3EQfQQbP/content/tmp_files/load_file.txt" @@ -0,0 +1,663 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf,len=662 +page_content='Learning from What is Already Out There: Few-shot Sign Language Recognition with Online Dictionaries Maty´aˇs Boh´aˇcek1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='2 and Marek Hr´uz1 1 Department of Cybernetics and New Technologies for the Information Society,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' University of West Bohemia,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' Pilsen,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' Czech Republic 2 Gymnasium of Johannes Kepler,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' Prague,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' Czech Republic Abstract— Today’s sign language recognition models require large training corpora of laboratory-like videos,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' whose collec- tion involves an extensive workforce and financial resources.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' As a result, only a handful of such systems are publicly available, not to mention their limited localization capabili- ties for less-populated sign languages.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' Utilizing online text-to- video dictionaries, which inherently hold annotated data of various attributes and sign languages, and training models in a few-shot fashion hence poses a promising path for the democratization of this technology.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' In this work, we collect and open-source the UWB-SL-Wild few-shot dataset, the first of its kind training resource consisting of dictionary-scraped videos.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' This dataset represents the actual distribution and characteristics of available online sign language data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' We select glosses that directly overlap with the already existing datasets WLASL100 and ASLLVD and share their class mappings to allow for transfer learning experiments.' 
Apart from providing baseline results on a pose-based architecture, we introduce a novel approach to training sign language recognition models in a few-shot scenario, resulting in state-of-the-art results on the ASLLVD-Skeleton and ASLLVD-Skeleton-20 datasets with top-1 accuracy of 30.97% and 95.45%, respectively.

I. INTRODUCTION

Sign languages (SLs) are natural language systems based on manual articulations and non-manual components, serving as the primary means of communication among d/Deaf communities. While they allow one to convey identical semantics as the written and spoken language, they operate in a distinctively more variable gestural-visual modality. There are currently over 70 million people worldwide whose native language is one of the approximately 300 SLs that exist [1]. Nevertheless, no publicly available SL translation system has been introduced so far. This hinders d/Deaf people's ability to use their natural form of communication when working with technology or interacting with people that do not sign. Although the problem of automatic SL Recognition (SLR) has been addressed for many years, it is far from being solved. Modern solutions utilizing deep learning show promise, and neural networks might help tear these barriers down.

There are two prevalent topics related to SLs pursued in the literature: SL Synthesis and SLR. The first one's objective is to translate written language into SL, typically by animating avatars. The second is intended to translate videos of performed signs into the written form of a language.
It can be further divided into isolated SLR, which recognizes single sign lemmas out of a known set of glosses, and continuous SLR, translating unconstrained signing utterances. In this paper, we attend to the task of few-shot isolated SLR.

The current methods can be generally divided into two main approaches differing in the means of input representations: the appearance-based and the pose-based. The first prevalent stream of works uses a sequence of RGB images, optionally complemented with the depth channel. These methods reach state-of-the-art results but are more computationally demanding. The second approach performs an intermediate step of first estimating a body pose sequence, which is then fed into an ensuing recognition model. These systems tend to be more lightweight and would thus be more suitable for applications on conventional consumer technology, e.g., laptops or mobile phones.

Multiple model training and evaluation datasets have been published over recent years. Generally large-scale in size of glosses and instances, they vary primarily in the originating SL and the manners of data collection. It is essential to consider that, unlike with many tasks in the Natural Language Processing (NLP) domain, no organic sources of potential SL training data (such as the internet and printed media in the case of NLP) yield vast amounts of training instances daily.
It hence takes a dedicated, tailored effort to record an SLR dataset. Such an operation is costly and requires specialists from multiple fields at once, making it strenuous and risky to begin with. Accordingly, languages with a smaller user base receive less attention.

Some of the few resources that contain SL data with built-in annotations are online text-to-video dictionaries. We believe they will be crucial in minimizing barriers in constructing future SLR systems, especially for niche regional contexts. We thus focus on training models using data scraped from such websites. As these services usually contain only a few repetitions per sign lemma, such a configuration comprises a few-shot training paradigm. To account for the lack of a diverse, high-repetition dataset, we utilize SPOTER [7], a pose-based Transformer [34] architecture for SLR. We hypothesize that it will learn faster since it considers only the pre-selected information necessary for such a classification, which is much smaller in dimension than raw RGB video. Appearance-based methods, in contrast, are burdened by the large volume of additional sensory information and need more data to generalize robustly, as observed by Boháček et al. [7]. We further investigate the ability of models to learn across different datasets and introduce boosting training mechanisms.
The main contributions of this work include:
- Introducing and open-sourcing UWB-SL-Wild: a new dataset for few-shot SLR obtained from public SL dictionary data, provided with class mappings to already existing SLR datasets;
- Proposing the Validation Score-Conscious Training procedure, which adaptively augments and re-trains for classes that are identified as under-performing during training;
- Establishing state-of-the-art results on the ASLLVD-Skeleton and ASLLVD-Skeleton-20 datasets.

II. RELATED WORK

This section reviews the existing datasets and methods for isolated SLR. As low-instance training has not yet been explored to a greater extent for this task, we consider the overlaps to few-shot or zero-shot gesture and action recognition.

A. Datasets

Multiple datasets of isolated signs have been published and studied in the literature. We summarize the prominent ones in Table I. The Purdue RVL-SLLL ASL Database [23], containing 1,834 videos across 104 classes within the American Sign Language (ASL), was one of the first to encompass a larger vocabulary. LSA64 [28] for the Argentinian Sign Language is similar in size, as it contains 3,200 instances from 64 classes. Later on, substantially larger corpora started to emerge. DEVISIGN [12], for instance, provides 24,000 recordings spanning 2,000 glosses from the Chinese Sign Language. Its videos were captured in a laboratory-like environment and were, to the best of our knowledge, the first to provide depth information along RGB for this task.
MS-ASL [17] brings a similar scale for the ASL, as it contains 25,000 RGB videos from 1,000 classes. Lastly, the AUTSL [31] dataset pushed the size and per-class instance ratio even further. It holds 38,336 RGB-D recordings spanning 226 classes from the Turkish SL. While the available datasets span different geographical contexts, most research has centered around ASL. We left the recent datasets which we consider to capture the most significant traction within the community out of this introductory survey and provide their detailed descriptions below. We later utilize these for experiments and for constructing our new dataset.

1) WLASL: The Word-level American Sign Language dataset [21] is a large-scale database of lemmas from the ASL collected from multiple online sources and organizations. The dataset's gloss vocabulary totals 2,000 terms with their translations to English. The authors provide training, validation, and test splits. There is an average of over 10 repetitions in the training set for each class. There are three primary splits of the dataset depending on the number of classes they cover: WLASL100, WLASL300, and WLASL2000. In our experiments, we use the WLASL100 split only.

TABLE I
SURVEY OF PROMINENT SLR DATASETS

Dataset          SL   Gloss   Instances   Format
DEVISIGN [12]    CN   2,000   24,000      RGB-D
LSA64 [28]       AR   64      3,200       RGB
AUTSL [31]       TR   226     38,336      RGB-D
RVL-SLLL [23]    US   104     1,834       RGB
ASLLVD [24]      US   2,745   9,763       RGB/Skelet.
MS-ASL [17]      US   1,000   25,000      RGB
WLASL [21]       US   2,000   21,083      RGB
2) ASLLVD: The American Sign Language Lexicon Video Dataset [24] holds 2,745 classes of unique terms in the ASL. The authors recorded the data in a consistent lab-like environment with a handful of protagonists. The authors have not defined training and testing splits, resulting in an average of nearly 4 repetitions per gloss in the whole set.

3) ASLLVD-Skeleton: Amorim et al. later created a derivative of the ASLLVD dataset focused on evaluating pose-based methods. They open-sourced pose estimations of all the included videos from OpenPose [10] and proposed fixed training and test splits. The authors also introduced ASLLVD-Skeleton-20, a smaller subset with only 20 classes, enabling computationally lighter and more distinctive ablation studies.

B. Sign language recognition

The primal works in SLR leveraged shallow statistical modeling such as Hidden Markov Models [32], [33], which achieved reasonable performance on very small datasets. A big leap has been observed with the advent of deep learning. Convolutional Neural Networks (CNNs) were amidst the first deep architectures employed for this problem [9], [20], [26], [29]. These were used to construct unitary representations of the input frames that could thereafter be used for recognition. Later, various sequential encoders were utilized for input encoding as well, namely Recurrent Neural Networks such as Long Short-Term Memory networks (LSTMs) [13], [19], or Transformers [8], [29]. The usage of different 3D CNNs has also been studied extensively (e.g., with I3D [11], [17], [21]).
With the advances in pose estimation, another stream of approaches has emerged, making use of signer pose representations at the input. Unlike the previous methods, these models do not process raw RGB/RGB-D data, but rather pose representations of the estimated body, hand, and face landmarks. Vázquez-Enríquez et al. [35] were the first to use a Graph Convolutional Network (GCN) on top of pose sequences, following Yan et al. [38], who earlier proposed using GCNs for action recognition. Transformers have recently been employed in this regard as well, as Boháček et al. [7] introduced the Pose-based Transformer for SLR (SPOTER). While the architecture does not surpass the existing appearance-based approaches in general benchmarks, the authors have shown that when trained only on small splits of a training set, SPOTER outperforms even the appearance-based approaches significantly. Lastly, multiple ensemble models combining the raw visual data with the pose estimates [16] have also transpired.

Fig. 1. Illustrative examples of videos from the used datasets: ASLLVD, WLASL, and our new UWB-SL-Wild. ASLLVD contains videos from a homogeneous lab environment with few repetitions for each class. WLASL consists of videos captured in multiple settings with a larger instance repetition. UWB-SL-Wild, on the other hand, contains videos from an online dictionary with only a handful of examples for each class and both inconsistent signers and recording settings.
C. Few-shot gesture and action recognition

Both few-shot gesture and action recognition have not gained extensive traction in the literature and are hence not greatly investigated. Most methods have employed metric learning, where the similarity between input videos is learned to classify unfamiliar classes at inference using nearest neighbors. Bishay et al. [6] proposed the TARN architecture, being the first to incorporate an attention mechanism for this task. More recently, Generative Adversarial Networks (GANs) have also been studied in this regard [15].

D. Few- and Zero-shot SLR

Zero-shot SLR has been studied by Bilge et al. [4], [5]. In both works, the authors propose a pipeline consisting of multiple RNNs and CNNs exploiting the BERT [14] representations of given SL lemmas' textual translations in the corresponding primary written language. This has enabled zero-shot SLR to a limited, yet promising extent, supposing the BERT embeddings are available. To the best of our knowledge, the only work addressing few-shot SLR specifically is introduced in [36]. Therein, Wang et al. leverage a Siamese Network [18] for feature extraction followed by K-means and a custom matching algorithm.
III. UWB-SL-WILD

Online SL dictionaries and learning resources are an excellent fit for in-the-wild training data, as they inherently come with gloss annotations. However, since the primary intention of such platforms is not the training of neural networks, only a limited number of repetitions can be found for each gloss (often 2-3). To the best of our knowledge, no available benchmark in the literature can simulate such a training paradigm, and we thus decided to create one. We collected a custom dataset called UWB-SL-Wild and are introducing it in this paper.

There are numerous text-to-video dictionaries available on the internet; as examples, let us mention Spread the Sign (www.spreadthesign.com), Signing Savvy (www.signingsavvy.com), Handspeak (www.handspeak.com), and Sign ASL (www.signasl.org). We decided to use the Sign ASL dictionary, as it introduces the largest variability of signer identities and video settings due to gathering videos from multiple providers.

Fig. 2. Distribution of video repetitions per class in the UWB-SL-Wild dataset.
The first three websites either contain laboratory-like videos with a single signer (similar to the already existing datasets), have a limited vocabulary, or hold other unsuitable video properties (such as only possessing black-and-white footage). To allow for transfer learning experiments with the already-existing datasets, we decided that our dataset's vocabulary would be equivalent to that of WLASL100. We then scraped the dataset structure from the Sign ASL portal. This yielded 307 videos from 100 classes (corresponding to lemmas in ASL), leaving us with a mean of under 3 repetitions per class. The total distribution of repetitions per class is depicted in Figure 2. There are 25 unique signers in the set. Each goes hand-in-hand with a different setting: video quality, distance and angle from the camera, and background. While some stand in front of a wall, many sit casually on a sofa or at a table. We manually annotated the signer identity in each video and are providing this information along with the dataset.

The 100 classes in UWB-SL-Wild represent lemmas of frequent terms in ASL, including ordinary objects (e.g., book, candy, and hat), verbs (e.g., play, enjoy, go), and other adjectives or particles (e.g., thin, who, blue).
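As a side note on how the statistics above can be reproduced, the short sketch below derives the per-class repetition histogram plotted in Figure 2 from a flat annotation list. The CSV file name and column names are hypothetical placeholders, not the released format.

```python
# Minimal sketch (assumed annotation layout, not the released tooling):
# counts how many classes have 1, 2, 3, ... scraped videos, i.e. the
# histogram shown in Figure 2.
import csv
from collections import Counter

def repetition_histogram(annotation_csv: str) -> Counter:
    per_class = Counter()
    with open(annotation_csv, newline="") as f:
        for row in csv.DictReader(f):  # expects a "gloss" column (assumption)
            per_class[row["gloss"]] += 1
    # map "instances per class" -> "number of classes with that many instances"
    return Counter(per_class.values())

if __name__ == "__main__":
    histogram = repetition_histogram("uwb_sl_wild_annotations.csv")  # hypothetical path
    for repetitions in sorted(histogram):
        print(f"{histogram[repetitions]:3d} classes have {repetitions} video(s)")
```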
Given that certain signs in ASL exist in different variations, it may almost seem as if the signs gathered under a single class were sometimes completely different. Despite being distinctly unalike in appearance, they still convey identical or highly similar meanings. This further enlarges the difficulty of learning on this dataset, since some glosses' sign variations eventually ended up with only a single instance in the entire set (supposing each of the 2-3 videos in a given class depicts a different variation). While this is not the case for most classes, which hold just a single variant, a considerable part of the dataset's glosses hold at least two versions. We thus provide manual annotations identifying the different variations in each class.

We created a mapping schema of classes between the UWB-SL-Wild, WLASL100, and ASLLVD datasets. (There were no related videos for 3 classes of the WLASL100 split on SignASL.org; we thus took the following 3 classes from the full WLASL to compensate for this.) This enables future researchers to train on and evaluate using these three datasets. Examples of videos from all three sources can be seen in Figure 1. We are open-sourcing the UWB-SL-Wild dataset, including the cross-dataset mappings and pose estimates of signers in all videos, at https://github.com/matyasbohacek/uwb-sl-wild.
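To illustrate how such a cross-dataset mapping can be consumed for transfer learning experiments, the sketch below relabels samples of a source dataset into the shared WLASL100 vocabulary. The JSON layout, file name, and sample format are assumptions for illustration; consult the repository above for the released format.

```python
# Illustrative only: remapping another dataset's glosses onto the shared
# vocabulary using a gloss-to-gloss mapping file (hypothetical layout).
import json

def remap_to_shared_vocabulary(samples, mapping_path):
    """samples: iterable of (video_path, source_gloss) pairs, e.g. from ASLLVD."""
    with open(mapping_path) as f:
        gloss_map = json.load(f)  # e.g. {"BOOK": "book", ...} -- assumed layout
    remapped = []
    for video_path, gloss in samples:
        if gloss in gloss_map:    # keep only glosses covered by the mapping
            remapped.append((video_path, gloss_map[gloss]))
    return remapped
```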
IV. METHODS

This section presents a method that can learn in a few-shot scenario. We build upon SPOTER [7], as it has shown substantial promise for training on smaller sets of data, fitting our few-shot use case. According to the authors, it should require lower amounts of training data because it is a pose-based method. We review the pipeline's key elements and the changes we have made below. Any unmentioned attributes or configurations were kept identical. We hence refer the reader to the original publication for details.

Preprocessing: We first estimate the signer's pose in all input video frames. 2-D coordinates of key landmarks are obtained for the upper body (9), the hands (2 × 21), and the face (70).

Augmentations and normalization: We follow the augmentation and normalization procedures from [7] to the full extent.

A. Architecture

SPOTER is a moderate abbreviation of the Transformer architecture [34]. The input to the network is a sequence of normalized and flattened skeletal representations with a dimension of 242. A learnable positional encoding is added to the sequence before it is processed further by the standard Encoder module. The input to the Decoder module is a single classification query. It is decoded into corresponding class probabilities by a multi-layer perceptron on top of the Decoder.
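For orientation, the following is a condensed PyTorch sketch of such a pose-based encoder-decoder classifier. It mirrors the description above (242-dimensional flattened landmark frames, a learnable positional encoding, a single classification query decoded into class scores), but it is our illustrative reconstruction rather than the released SPOTER code; the layer count, head count, maximum sequence length, and the simple linear head are assumptions.

```python
# A SPOTER-style pose Transformer classifier (sketch, not the authors' code).
import torch
import torch.nn as nn

FRAME_DIM = 2 * (9 + 2 * 21 + 70)  # 242 = (x, y) for body, hand, and face landmarks

class PoseTransformerClassifier(nn.Module):
    def __init__(self, num_classes, d_model=FRAME_DIM, nhead=11,
                 num_layers=6, max_frames=300):
        super().__init__()
        # Learnable positional encoding added to the frame sequence (max_frames is assumed).
        self.pos_embedding = nn.Parameter(torch.zeros(max_frames, d_model))
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead, num_encoder_layers=num_layers,
            num_decoder_layers=num_layers, batch_first=True)
        # A single learnable classification query fed to the decoder.
        self.class_query = nn.Parameter(torch.rand(1, 1, d_model))
        self.head = nn.Linear(d_model, num_classes)  # stands in for the MLP head

    def forward(self, poses):
        # poses: (batch, frames, 242) normalized, flattened landmark coordinates
        batch, frames, _ = poses.shape
        src = poses + self.pos_embedding[:frames]
        tgt = self.class_query.expand(batch, -1, -1)
        decoded = self.transformer(src, tgt)       # (batch, 1, d_model)
        return self.head(decoded.squeeze(1))       # (batch, num_classes) logits
```

A forward pass takes a batch of pose sequences of shape (batch, frames, 242) and returns one logit vector per video.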
TABLE II
PERFORMANCE COMPARISON ON ASLLVD-SKELETON DATASET

                    ASLLVD-S           ASLLVD-S-20
Model               top-1    top-5     top-1    top-5
HOF [22]            –        –         70.0     –
BHOF [22]           –        –         85.0     –
ST-GCN [2]          16.48    37.15     61.04    86.36
SPOTER [7]          30.77    52.05     93.18    97.72
SPOTER + VSCT       30.97    52.87     95.45    100.00

B. Validation score-conscious training

In an attempt to adapt the SLR pipeline for the few-shot training environment, we propose the Validation Score-Conscious Training (VSCT). It aims to minimize the classification error on the fly by identifying the bottleneck classes, i.e., the classes that get misclassified the most. VSCT adds the following steps at the end of each epoch of batch gradient descent (a code sketch of this procedure is given below the list):
1) Validation accuracy is calculated for every class within the set. If a validation split is unavailable, the accuracy is computed on the training split.
2) The classes are sorted by their performance. A set of classes W_vsct is found as the proportion of γ_vsct × c worst-performing ones, where c is the total number of classes.
3) Next, a mini-batch is constructed as a random τ_vsct share of the training set samples belonging to classes from W_vsct.
4) Backpropagation is performed yet again on the above-described mini-batch. However, the parameters of the augmentations are drawn from a different distribution. This allows us to target the problematic classes with better-suited representations.

γ_vsct, τ_vsct, and all VSCT-specific augmentation parameters are constant hyperparameters of a training run.
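The procedure can be paraphrased in code as follows; `augment_strong` stands in for the separately parameterized augmentations of step 4, `eval_samples` is the validation (or training) split of step 1, and the per-sample loop is kept for readability. This is a sketch of the idea, not the exact released implementation.

```python
# Validation Score-Conscious Training step, appended after each training epoch (sketch).
import random
from collections import defaultdict
import torch

def vsct_step(model, optimizer, criterion, train_samples, eval_samples,
              gamma_vsct, tau_vsct, augment_strong):
    # 1) Per-class accuracy on the validation split (or training split if unavailable).
    correct, total = defaultdict(int), defaultdict(int)
    model.eval()
    with torch.no_grad():
        for poses, label in eval_samples:
            prediction = model(poses.unsqueeze(0)).argmax(dim=1).item()
            total[label] += 1
            correct[label] += int(prediction == label)
    accuracy = {c: correct[c] / total[c] for c in total}

    # 2) W_vsct: the gamma_vsct * c worst-performing classes.
    num_worst = max(1, int(gamma_vsct * len(accuracy)))
    worst_classes = set(sorted(accuracy, key=accuracy.get)[:num_worst])

    # 3) A random tau_vsct share of the training samples from those classes.
    pool = [(p, l) for p, l in train_samples if l in worst_classes]
    if not pool:
        return
    mini_batch = random.sample(pool, max(1, int(tau_vsct * len(pool))))

    # 4) One extra backpropagation pass with differently drawn augmentations.
    model.train()
    optimizer.zero_grad()
    loss = sum(criterion(model(augment_strong(poses).unsqueeze(0)), torch.tensor([label]))
               for poses, label in mini_batch) / len(mini_batch)
    loss.backward()
    optimizer.step()
```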
V. EXPERIMENTS

In this section, we report our results compared to the already existing methods. We also evaluate our approach on a newly proposed benchmark leveraging the class mappings from the UWB-SL-Wild and ASLLVD datasets to WLASL100.

A. Implementation details

The SPOTER architecture with VSCT has been implemented in PyTorch [25]. The model's weights were initialized from a uniform distribution within [0, 1). We trained it for 130 epochs with an SGD optimizer. The learning rate was set to 0.001 with no scheduler, and both momentum and weight decay were set to 0, following the original implementation. The VSCT hyperparameters differ based on the examined dataset. For body pose estimation, we used HRNet-w48 [37] complemented by a Faster R-CNN [27] for person detection within the MMPose library [30]. We also leveraged the Sweep functionality (hyperparameter search) within the Weights and Biases library [3] to find the augmentation and VSCT hyperparameters. We namely employed the Bayesian hyperparameter search method (for details on this search method, we refer the reader to the official Weights and Biases documentation available at https://docs.wandb.ai/guides/sweeps/) and conducted this procedure for each dataset individually.
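Written out, the training configuration above amounts to the following setup; the re-initialization helper and the model class refer back to the sketch in Section IV and are illustrative rather than the released code.

```python
# Training setup as described above: uniform [0, 1) initialization, SGD with
# lr 0.001, zero momentum and weight decay, no scheduler, 130 epochs (sketch).
import torch
import torch.nn as nn

def init_uniform_(model: nn.Module):
    for parameter in model.parameters():
        nn.init.uniform_(parameter, a=0.0, b=1.0)

model = PoseTransformerClassifier(num_classes=100)  # sketch class from Section IV
init_uniform_(model)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.0, weight_decay=0.0)
NUM_EPOCHS = 130  # no learning-rate scheduler is used
```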
B. Quantitative results

The results on the ASLLVD-Skeleton dataset, along with a comparison to the already available methods, are shown in Table II. We establish an overall state of the art on this benchmark by achieving 30.97% top-1 and 52.87% top-5 accuracy on the primary dataset. Our method surpasses the pose-based ST-GCN by a significant margin, almost doubling the top-1 performance. When evaluated on the much smaller 20-class subsplit, SPOTER+VSCT achieves 95.45% top-1 and 100.0% top-5 accuracy, which exceeds the so far best BHOF by more than 10% absolute. Note that the models listed in the first two rows of Table II (HOF and BHOF) use appearance-based representations; BHOF, for instance, builds upon a block-based histogram of the incoming videos' optical flow.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='0% top-5 accuracy, which exceeds the so far best BHOF by more than absolute 10%.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' Note that all the models listed in rows 1-5 of Table II use appearance-based representations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' BHOF, for instance, builds upon a block- based histogram of the incoming videos’ optical flow.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' The latter of our evaluation settings makes use of the class mappings introduced in Section III.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' We trained SPOTER+VSCT on ASLLVD or UWB-SL-Wild dataset but calculated the accuracy on the WLASL100 testing set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' We made the WLASL100 validation split available to the training procedure for the purposes of per-class statistics computa- tion within VSCT.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' The results are presented in Table III.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' SPOTER+VSCT achieves a top-1 accuracy of 22.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='96% when trained on ASLLVD and 18.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='68% when trained using UWB- SL-Wild.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' TABLE IV ABLATION STUDY ON ASLLVD-SKELETON DATASET ASLLVD-S Norm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' Aug.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' Bal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' sample VSCT Full 20 cls.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' \x17 \x17 \x17 \x17 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='13 47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='73 \x13 \x17 \x17 \x17 29.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='18 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='36 \x13 \x13 \x17 \x17 30.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='77 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='64 \x13 \x13 \x13 \x17 30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='77 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='91 \x13 \x13 \x17 \x13 30.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='97 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='45 To provide context to these values, let us consider the results of Boh´aˇcek et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' [7] who trained and evaluated SPOTER (without VSCT) on WLASL100.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' They achieved 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='18%, roughly three times greater accuracy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' Their training set averaged 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='5 repetitions per class, whereas ASLLVD and UWB-SL-Wild have a mean of 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='6 and 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content='9 per-class instances, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' Moreover, UWB-SL-Wild is signifi- cantly more variable as opposed to the other two datasets in both unique protagonists and camera settings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' While these cross-dataset results are not nearly comparable to the standard methods applied for WLASL100 benchmarking, we believe they attest to the pose-based methods’ ability to generalize on characteristically distinct few-shot data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' Ablation study We have conducted an ablation study of the individual con- tributions of normalization, augmentations, and the VSCT to the above-presented results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NE2T4oBgHgl3EQfQQbP/content/2301.03769v1.pdf'} +page_content=' We also compare VSCT to the balanced sampling of classes, which counterbalances the disproportion of per-class samples in the training set.' 
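To make the distinction concrete, the following PyTorch sketch contrasts the two strategies: a WeightedRandomSampler that equalizes class frequencies, and a VSCT-like routine that measures per-class validation accuracy and builds extra mini-batches from the currently worst-performing classes. The helper names and the way the extra batches would be interleaved with regular training are illustrative assumptions and do not reproduce our exact implementation.

```python
import torch
from collections import defaultdict
from torch.utils.data import DataLoader, Subset, WeightedRandomSampler

# Balanced sampling: draw every class with roughly equal probability by weighting
# each training sample inversely to its class frequency.
def balanced_loader(train_set, labels, batch_size):
    labels_t = torch.tensor(labels)
    class_counts = torch.bincount(labels_t)
    sample_weights = 1.0 / class_counts[labels_t].float()
    sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
    return DataLoader(train_set, batch_size=batch_size, sampler=sampler)

# VSCT-like selection (illustrative): compute per-class accuracy on the validation split
# and return the k classes the model currently handles worst.
@torch.no_grad()
def worst_classes(model, val_loader, k, device="cpu"):
    correct, total = defaultdict(int), defaultdict(int)
    model.eval()
    for inputs, targets in val_loader:
        preds = model(inputs.to(device)).argmax(dim=-1).cpu()
        for p, t in zip(preds.tolist(), targets.tolist()):
            total[t] += 1
            correct[t] += int(p == t)
    acc = {c: correct[c] / total[c] for c in total}
    return sorted(acc, key=acc.get)[:k]

# Build additional mini-batches containing only samples of the worst-performing classes;
# these would be interleaved with the regular training batches.
def vsct_extra_loader(train_set, labels, bad_classes, batch_size):
    bad = set(bad_classes)
    indices = [i for i, y in enumerate(labels) if y in bad]
    return DataLoader(Subset(train_set, indices), batch_size=batch_size, shuffle=True)
```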
We summarize the ablations on the ASLLVD-Skeleton dataset and its 20-class subset in Table IV. Norm., Aug., and Bal. sample refer to using normalization, augmentations, and balanced sampling, respectively, in the given model variant. The baseline models achieved an accuracy of 5.13% and 46.73%, respectively. We can observe that normalization itself provides the most significant improvement, to 29.18% and 86.36%, while augmentations provide a slight boost on top of that, resulting in an accuracy of 30.77% and 88.64%. With all the previous modules fixed, we test the advantages of using either balanced sampling or VSCT. On the complete dataset, balanced sampling does not provide any performance benefit, whereas VSCT brings a slight improvement, resulting in 30.97% testing accuracy. When examined on the smaller subset, balanced sampling improves the result by a relative 2.6% to 90.91%.
VSCT nevertheless still outperforms it, enhancing the result by a relative 7.7% to the final 95.45% testing accuracy. The outcome of the ablations on the cross-dataset training experiments is shown in Table III. For both ASLLVD and UWB-SL-Wild, we conduct the same ablations. The results mimic the tendencies commented on in the previous experiment. This study suggests that VSCT provides merit when training on such low-shot data, proving more beneficial than standard balanced sampling of classes.

VI. CONCLUSION

We collected and open-sourced a new dataset for SLR with footage from online text-to-video dictionaries. We constructed it with the already-available datasets in mind and created class mappings to WLASL100 and ASLLVD. To reflect the few-shot setting of the problem at hand, we proposed a novel procedure for training a neural pose-based SLR system called Validation Score-Conscious Training. This procedure analyzes intermediate training results on a validation split and adaptively selects samples from the worst-performing classes to create additional mini-batches for training. We demonstrated VSCT's merits in several few-shot learning experiments utilizing the SPOTER model, resulting in a state-of-the-art result on the ASLLVD-Skeleton dataset.

ACKNOWLEDGEMENT

This work was supported by the Ministry of Education, Youth and Sports of the Czech Republic, Project No. LM2018101 LINDAT/CLARIAH-CZ.
Computational resources were supplied by the project "e-Infrastruktura CZ" (e-INFRA CZ LM2018140).

REFERENCES

[1] World Federation of the Deaf. https://wfdeaf.org, 2016. Accessed: 2022-08-30.
[2] C. C. d. Amorim, D. Macêdo, and C. Zanchettin. Spatial-temporal graph convolutional networks for sign language recognition. In International Conference on Artificial Neural Networks, pages 646-657. Springer, 2019.
[3] L. Biewald. Experiment tracking with Weights and Biases, 2020. Software available from wandb.com.
[4] Y. C. Bilge, R. G. Cinbis, and N. Ikizler-Cinbis. Towards zero-shot sign language recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, PP, 2022.
[5] Y. C. Bilge, N. Ikizler-Cinbis, and R. G. Cinbis. Zero-shot sign language recognition: Can textual data uncover sign languages? In The British Machine Vision Conference (BMVC), 2019.
[6] M. Bishay, G. Zoumpourlis, and I. Patras. TARN: Temporal attentive relation network for few-shot and zero-shot action recognition. In The British Machine Vision Conference (BMVC), 2019.
[7] M. Boháček and M. Hrúz. Sign pose-based transformer for word-level sign language recognition. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops, pages 182-191, January 2022.
[8] N. C. Camgoz, O. Koller, S. Hadfield, and R. Bowden. Multi-channel transformers for multi-articulatory sign language translation. In European Conference on Computer Vision, pages 301-319. Springer, 2020.
[9] N. C. Camgoz, O. Koller, S. Hadfield, and R. Bowden. Sign language transformers: Joint end-to-end sign language recognition and translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10023-10033, 2020.
[10] Z. Cao, G. Hidalgo, T. Simon, S.-E. Wei, and Y. Sheikh. OpenPose: Realtime multi-person 2D pose estimation using part affinity fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43:172-186, 2021.
[11] J. Carreira and A. Zisserman. Quo vadis, action recognition? A new model and the Kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6299-6308, 2017.
[12] X. Chai, H. Wang, and X. Chen. The DEVISIGN large vocabulary of Chinese sign language database and baseline evaluations. In Technical report VIPL-TR-14-SLR-001. Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, 2014.
[13] R. Cui, H. Liu, and C. Zhang. Recurrent convolutional neural networks for continuous sign language recognition by staged optimization. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1610-1618, 2017.
[14] J. Devlin, M. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In J. Burstein, C. Doran, and T. Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics, 2019.
[15] S. K. Dwivedi, V. Gupta, R. Mitra, S. Ahmed, and A. Jain. ProtoGAN: Towards few shot learning for action recognition. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pages 1308-1316, 2019.
[16] S. Jiang, B. Sun, L. Wang, Y. Bai, K. Li, and Y. Fu. Skeleton aware multi-modal sign language recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3413-3423, 2021.
[17] H. R. V. Joze and O. Koller. MS-ASL: A large-scale data set and benchmark for understanding American sign language. In Proceedings of the British Machine Vision Conference 2019. University of Surrey, 2019.
[18] G. Koch. Siamese Neural Networks for One-Shot Image Recognition. PhD thesis, University of Toronto, 2015.
[19] O. Koller, N. C. Camgoz, H. Ney, and R. Bowden. Weakly supervised learning with multi-stream CNN-LSTM-HMMs to discover sequential parallelism in sign language videos. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42:2306-2320, 2020.
[20] O. Koller, O. Zargaran, H. Ney, and R. Bowden. Deep Sign: Hybrid CNN-HMM for continuous sign language recognition. In Proceedings of the British Machine Vision Conference 2016. University of Surrey, 2016.
[21] D. Li, C. Rodriguez, X. Yu, and H. Li. Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1459-1469, 2020.
[22] K. M. Lim, A. W. Tan, and S. C. Tan. Block-based histogram of optical flow for isolated sign language recognition. Volume 40, pages 538-545, 2016.
[23] A. Martinez, R. Wilbur, R. Shay, and A. Kak. Purdue RVL-SLLL ASL database for automatic recognition of American sign language. In Proceedings. Fourth IEEE International Conference on Multimodal Interfaces, pages 167-172, 2002.
[24] C. Neidle, A. Thangali, and S. Sclaroff. Challenges in development of the American Sign Language Lexicon Video Dataset (ASLLVD) corpus. In Proceedings of the 5th Workshop on the Representation and Processing of Sign Languages: Interactions between Corpus and Lexicon, Language Resources and Evaluation Conference (LREC) 2012, 2012.
[25] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc., 2019.
[26] G. A. Rao, K. Syamala, P. Kishore, and A. Sastry. Deep convolutional neural networks for sign language recognition. In 2018 Conference on Signal Processing And Communication Engineering Systems (SPACES), pages 194-197. IEEE, 2018.
[27] S. Ren, K. He, R. B. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39:1137-1149, 2015.
[28] F. Ronchetti, F. Quiroga, C. A. Estrebou, L. C. Lanzarini, and A. Rosete. LSA64: An Argentinian sign language dataset. In XXII Congreso Argentino de Ciencias de la Computación (CACIC 2016), 2016.
[29] B. Saunders, N. C. Camgoz, and R. Bowden. Continuous 3D multi-channel sign language production via progressive transformers and mixture density networks. International Journal of Computer Vision, 129:2113-2135, 2021.
[30] A. Sengupta, F. Jin, R. Zhang, and S. Cao. mm-Pose: Real-time human skeletal posture estimation using mmWave radars and CNNs. IEEE Sensors Journal, 20:10032-10044, 2020.
[31] O. M. Sincan and H. Y. Keles. AUTSL: A large scale multi-modal Turkish sign language dataset and baseline methods. IEEE Access, 8:181340-181355, 2020.
[32] T. Starner and A. Pentland. Real-time American sign language recognition from video using hidden Markov models. In Proceedings of International Symposium on Computer Vision (ISCV), pages 265-270, 1995.
[33] T. Starner, J. Weaver, and A. Pentland. Real-time American sign language recognition using desk and wearable computer based video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20:1371-1375, 1998.
[34] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[35] M. Vázquez-Enríquez, J. L. Alba-Castro, L. Docío-Fernández, and E. Rodríguez-Banga. Isolated sign language recognition with multi-scale spatial-temporal graph convolutional networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3462-3471, 2021.
[36] F. Wang, C. Li, Z. Zeng, K. Xu, S. Cheng, Y. Liu, and S. Sun. Cornerstone network with feature extractor: A metric-based few-shot model for Chinese natural sign language. Applied Intelligence, 51:7139-7150, 2021.
[37] J. Wang, K. Sun, T. Cheng, B. Jiang, C. Deng, Y. Zhao, D. Liu, Y. Mu, M. Tan, X. Wang, W. Liu, and B. Xiao. Deep high-resolution representation learning for visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43:3349-3364, 2021.
[38] S. Yan, Y. Xiong, and D. Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.