{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T15:22:26.723757Z" }, "title": "PE2LGP Animator: A Tool to Animate a Portuguese Sign Language Avatar", "authors": [ { "first": "Pedro", "middle": [ "Bertrand" ], "last": "Cabral", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universidade de Lisboa/INESC-ID", "location": {} }, "email": "pedro.b.cabral@tecnico.ulisboa.pt" }, { "first": "Matilde", "middle": [], "last": "Gon\u00e7alves", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universidade de Lisboa/INESC-ID", "location": {} }, "email": "" }, { "first": "Ruben Dos", "middle": [], "last": "Santos", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universidade de Lisboa/INESC-ID", "location": {} }, "email": "" }, { "first": "Hugo", "middle": [], "last": "Nicolau", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universidade de Lisboa/INESC-ID", "location": {} }, "email": "hugo.nicolau@tecnico.ulisboa.pt" }, { "first": "Luisa", "middle": [], "last": "Coheur", "suffix": "", "affiliation": { "laboratory": "", "institution": "Universidade de Lisboa/INESC-ID", "location": {} }, "email": "luisa.coheur@tecnico.ulisboa.pt" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Software for the production of sign languages is much less common than for spoken languages. Such software usually relies on 3D humanoid avatars to produce signs which, inevitably, necessitates the use of animation. One barrier to the use of popular animation tools is their complexity and steep learning curve, which can be hard to master for inexperienced users. Here, we present PE2LGP, an authoring system that features a 3D avatar that signs Portuguese Sign Language. Our Animator is designed specifically to craft sign language animations using a key frame method, and is meant to be easy to use and learn to users without animation skills. We conducted a preliminary evaluation of the Animator, where we animated seven Portuguese Sign Language sentences and asked four sign language users to evaluate their quality. This evaluation revealed that the system, in spite of its simplicity, is indeed capable of producing comprehensible messages.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Software for the production of sign languages is much less common than for spoken languages. Such software usually relies on 3D humanoid avatars to produce signs which, inevitably, necessitates the use of animation. One barrier to the use of popular animation tools is their complexity and steep learning curve, which can be hard to master for inexperienced users. Here, we present PE2LGP, an authoring system that features a 3D avatar that signs Portuguese Sign Language. Our Animator is designed specifically to craft sign language animations using a key frame method, and is meant to be easy to use and learn to users without animation skills. We conducted a preliminary evaluation of the Animator, where we animated seven Portuguese Sign Language sentences and asked four sign language users to evaluate their quality. 
This evaluation revealed that the system, in spite of its simplicity, is indeed capable of producing comprehensible messages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "PE2LGP is a project to digitalise Portuguese Sign Language (shortened to LGP -L\u00edngua Gestual Portuguesa), the primary language of the Deaf community in Portugal, through a 3D avatar capable of communicating in it. Though a living language used by thousands of people,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "LGP is still largely understudied, with both an absence of linguistic research on it (compared to widely used spoken languages) and a lack of tools and resources for its computational processing. We aim to provide one such resource through this project, in this case through the Animator, a tool that allows users without technical knowledge or animation expertise to create animations of LGP signs for our avatar to use through simple frame-by-frame posing. As part of the larger effort to improve digital support for LGP, the Animator could be used for chatbots, virtual assistants, dictionaries, or, as we have done in PE2LGP, automatic translators. The motivation for this study was to not only expose this project to the community, but to gain greater intuition of the tool's current performance. We thus present in this paper a description of our Animator and an overview of the role it plays in the greater scope of LGP and sign language research, along with a preliminary study of its capabilities, where, using our tool, we animated seven LGP sentences and asked four users of the language to interpret them and give an appreciation of their quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Sign languages are visuospatial languages, i.e., the communication is performed using signs produced at determined locations in the three-dimensional space or on the body. Because of the deafs' linguistic isolation, they must sometimes resort to human interpreters, who are not always available, so alternative translation systems are useful. Sign languages have no widely-used written forms (Kaur and Kumar, 2014) , so such systems require the representation of a human body to produce messages, such as videos of human signers or 3D avatars.", "cite_spans": [ { "start": 392, "end": 414, "text": "(Kaur and Kumar, 2014)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "State of the Art", "sec_num": "2." }, { "text": "However, videos of human signers have serious limitations when stringing together signs to compose sentences (Huenerfauth and Hanson, 2009) . First, because each sign would have to be recorded in its own video, it would not be possible to have smooth movement from one sign to the next: the signer would have to return to a neutral position and there would likely always be seams. Furthermore, the videos would have to be recorded with the same signer under similar conditions, to avoid abrupt visual changes. Finally, this approach would not allow different signs to be combined: in LGP, for example, a facial expression may be combined with other signs to mark a sentence as interrogative. An avatar-based solution, on the other hand, is more flexible and sidesteps all the problems pointed out in the previous approach. 
The main concern with an avatar is the work required to create animations that can be combined to produce clear, natural-looking signing. Several methods exist to virtually recreate sign language gestures with 3D avatars. They can be clustered into three types: hand-crafted animation, motion capture, and synthesis from a sign notation system (Gerlach et al., 2016) . Motion capture is done by recording signs made by a human using video cameras or other types of sensors and later mapping the human actor's motions onto an avatar. The more detailed the desired result, the costlier and more complex the technology and expertise required. Motion capture frequently requires calibration and must usually be used alongside hand-crafted animation, because the resulting performance often has to be fine-tuned, especially when using cheaper solutions, such as Kinect and Leap Motion (Gerlach et al., 2016) . Guardino and Chuan attained better results using Leap Motion than with the Kinect and CyberGlove, used in other studies for recognising sign language (Guardino et al., 2014) . The second type of animation consists of creating a system capable of automatically interpreting a phonetic sign language writing system, such as HamNoSys (Hamburg Sign Language Notation System), to animate a signing avatar (Zwitserlood et al., ; Elliott et al., 2004) . This writing system gives us detailed information about the elements of the hands and the other human movements that compose a sign (Hanke, 2004) , but not secondary movement (unlike motion capture). Lastly, hand-crafted animation is the oldest of these techniques, widely used, and known to give good results, but it also requires intensive work, as someone must manually pose the avatar and adjust the animation until the result is satisfactory. The more realistic and detailed the animations, the more time, effort, expertise and technological sophistication are necessary. Blender 1 and Unity 2 are widely used general-purpose 3D computer graphics tools capable of animating avatars. Blender in particular has a vast feature set, but also a steep learning curve (Cano, 2011) . In contrast, our Animator is designed specifically for animating humanoid characters, which allows us to constrain its complexity.", "cite_spans": [ { "start": 109, "end": 139, "text": "(Huenerfauth and Hanson, 2009)", "ref_id": "BIBREF6" }, { "start": 1159, "end": 1181, "text": "(Gerlach et al., 2016)", "ref_id": "BIBREF3" }, { "start": 1695, "end": 1717, "text": "(Gerlach et al., 2016)", "ref_id": "BIBREF3" }, { "start": 1881, "end": 1904, "text": "(Guardino et al., 2014)", "ref_id": "BIBREF4" }, { "start": 2127, "end": 2149, "text": "(Zwitserlood et al., ;", "ref_id": "BIBREF9" }, { "start": 2150, "end": 2171, "text": "Elliott et al., 2004)", "ref_id": "BIBREF2" }, { "start": 2295, "end": 2308, "text": "(Hanke, 2004)", "ref_id": "BIBREF5" }, { "start": 2929, "end": 2941, "text": "(Cano, 2011)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "State of the Art", "sec_num": "2." }, { "text": "PE2LGP was originally created by In\u00eas Almeida (2014) and further built upon by Ruben Santos (2016) in their master's theses. The project is part of Corpus & Avatar da L\u00edngua Gestual Portuguesa, a joint effort of researchers at Instituto Superior T\u00e9cnico and Universidade Cat\u00f3lica Portuguesa to create not only an avatar capable of signing LGP, but also the first LGP corpus, complete with video, translation, gloss and syntactic annotation. 
This interdisciplinary approach of linguistics and computer science allows greater cooperation between two otherwise separate projects, with the corpus being used for applications such as machine translation and animation synthesis through HamNoSys. PE2LGP currently has 5 components, all of which feature our 3D avatar, Catarina, as a centrepiece. These are the Translator, the Animator, the Hand Pose Editor, the Kinect Recorder and the Animation Viewer. The Translator component receives a sentence in Portuguese as input, which will then be automatically translated to LGP and signed by Catarina. The Animator, being the focus of this paper, is described in detail in Section 3.2., and it allows the user to create new animations using forward kinematics (manipulating each of the avatar's joints individually). The Hand Pose Editor (created in the time between the execution of this study and its final revision) allows users to create and modify hand poses, which are then used in the Animator. The Kinect Recorder's purpose is to create new animations through rudimentary motion capture using a Kinect device. The Animation Viewer is a simple menu which allows the user to view and delete existing animations. The Animator plays a crucial role in the project as the principal tool for creating signs for our avatar, with the ultimate goal of creating an animation database to be used by the Translator component or any other future component that requires it (such as dictionaries, messaging systems, chat bots or games).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview", "sec_num": "3.1." }, { "text": "Our Animator component makes use of key frames, premade hand poses, and forward kinematics. You can see a screenshot of the animator's interface in Figure 2 . Key frames are the principal poses the avatar will assume throughout the time span of an animation (at a pace of one key frame per second). The user must define a key pose for each key frame one by one and, when the animation is played back, the avatar will not only assume the correct key pose for each key frame, but also automatically interpolate between key poses to generate all the in-between frames (Figure 1 ). This key frame method allows the user to focus on the essential moments of the sign and only pay attention to the intermediate moments when the situation requires it. Forward kinematics is used by the system to allow the user to manipulate the avatar into the desired key poses by rotating the avatar's joints so, for example, rotating an elbow will move the forearm but leave the upper arm in place. The joints available for the user to manipulate are the neck, waist, shoulders, elbows, and wrists, which can all be rotated in 3 axes. By design, the editor does not permit manipulating the joints on the avatar's hands and fingers directly (only its wrists). Instead, the user must choose from a selection of pre-made hand poses, which may be changed each frame and chosen independently for the right and left hand (Figure 3) . When this experiment was conducted, these hand poses were limited to a selection of older animations created at an earlier stage of the project but, at the time of writing, a new component called the Hand Pose Editor has been fully implemented, enabling users to create and modify hand poses at will. 
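To make the key frame and forward kinematics mechanism described above more concrete, here is a minimal, illustrative sketch. It is not the Animator's actual Unity implementation; the joint names, the Euler-angle representation and the 60 fps playback rate are assumptions made purely for illustration.

```python
# Minimal sketch (not the Animator's actual code): key poses stored as
# joint -> Euler angles, linearly interpolated to fill the in-between frames,
# plus a crude forward-kinematics pass in which each joint inherits its
# parent's rotation (rotating an elbow moves the forearm, not the upper arm).
from typing import Dict, List, Optional

FPS = 60  # assumed playback rate; key frames are authored one per second

# Parent of each manipulable joint (simplified chain: torso plus one arm).
PARENT: Dict[str, Optional[str]] = {
    'waist': None, 'neck': 'waist',
    'shoulder_r': 'waist', 'elbow_r': 'shoulder_r', 'wrist_r': 'elbow_r',
}

Pose = Dict[str, List[float]]  # joint name -> [x, y, z] rotation in degrees

def lerp_pose(a: Pose, b: Pose, t: float) -> Pose:
    # Blend two key poses; a production system would slerp quaternions instead.
    return {j: [(1 - t) * x + t * y for x, y in zip(a[j], b[j])] for j in a}

def inbetweens(key_a: Pose, key_b: Pose, fps: int = FPS) -> List[Pose]:
    # All frames from key_a (inclusive) up to key_b (exclusive), one second apart.
    return [lerp_pose(key_a, key_b, i / fps) for i in range(fps)]

def world_rotation(joint: str, pose: Pose) -> List[float]:
    # Very rough forward kinematics: accumulate rotations down the joint chain.
    parent = PARENT[joint]
    if parent is None:
        return list(pose[joint])
    return [p + q for p, q in zip(world_rotation(parent, pose), pose[joint])]

# Example: raise the right arm between two consecutive key frames.
rest = {j: [0.0, 0.0, 0.0] for j in PARENT}
raised = dict(rest, shoulder_r=[0.0, 0.0, 80.0], elbow_r=[0.0, 0.0, 30.0])
frames = inbetweens(rest, raised)
print(world_rotation('wrist_r', frames[30]))  # halfway: [0.0, 0.0, 55.0]
```

A production implementation would store rotations as quaternions and blend them with spherical interpolation, but the overall structure (key poses once per second, automatically generated in-betweens, and child joints inheriting their parents' rotations) is the idea the Animator exposes to the user.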
The project does not yet support facial expressions, though this feature is a current priority.", "cite_spans": [], "ref_spans": [ { "start": 148, "end": 156, "text": "Figure 2", "ref_id": null }, { "start": 565, "end": 574, "text": "(Figure 1", "ref_id": null }, { "start": 1395, "end": 1405, "text": "(Figure 3)", "ref_id": null } ], "eq_spans": [], "section": "Animator", "sec_num": "3.2." }, { "text": "Seven simple LGP sentences (Table 1) were created and animated by two engineers with basic LGP training working on the project, using the corpus (Section 3.1.) and two online dictionaries as reference 3, 4 . Most signs for these sentences were newly created using the Animator, but some were already present in the platform's database, such as the letter signs used for finger-spelling 5 . Each new sign took between 10 and 60 minutes to animate, depending on its complexity and the desired quality and detail. To string together signs to form sentences, Unity's native animation features were used, specifically its animation controller system, which employs state machines to allow different animations to be played back and mixed. Without this mechanism, the transitions between signs would have been less smooth, because the avatar would have been forced to return to a neutral position (standing upright, with arms at its sides) between every sign. Using this system, however, is not effortless, as it requires calibration that depends on which two animations the transition involves. Note the distinction between interpolation between key frames (which occurs within an animation) and transitions between two animations. Figure 1: In the Animator, the user defines only key frames A and B, and the system interpolates between those two key frames to create the in-between frames, as the image illustrates. Note that this is a simplified view: in reality, there would normally be 58 in-between frames, not 4. Figure 2: Screenshot of the Animator. Notice the red hoop around the avatar's waist, indicating the selected joint and how it will rotate. On the right, you can see the X, Y, and Z buttons, which control which axis is to be rotated. On the left, there are various controls, such as for creating new frames, switching hand poses, and previewing the animation.", "cite_spans": [ { "start": 201, "end": 203, "text": "3,", "ref_id": null }, { "start": 204, "end": 205, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 27, "end": 36, "text": "(Table 1)", "ref_id": null }, { "start": 1150, "end": 1158, "text": "Figure 1", "ref_id": null }, { "start": 1438, "end": 1446, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Methodology", "sec_num": "4." }, { "text": "The former occurs in the Animator, when the sign is created, while the latter is done in Unity's animation controller. The grammatical correctness of the 7 sentences was then verified by a hearing linguist with intermediate mastery of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Methodology", "sec_num": "4." }, { "text": "LGP and a trained LGP interpreter (both native Portuguese speakers). This validation yielded several recommendations, which were then implemented within the boundaries of the Animator before being committed to the next phase. 
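As a rough illustration of the transition mechanism discussed earlier in this section, the sketch below cross-fades the tail of one sign animation into the head of the next. It is emphatically not Unity's animation controller API; the per-pair blend table is hypothetical and merely stands in for the manual calibration each transition required, and the glosses are taken from the example sentence ELE DAN\u00c7AR BEM.

```python
# Illustrative sketch only (not Unity's animation controller): chain sign
# animations by cross-fading the last frames of one into the first frames of
# the next, so the avatar never drops back to a neutral pose between signs.
from typing import Dict, List, Tuple

Pose = Dict[str, List[float]]  # joint name -> [x, y, z] rotation in degrees

def lerp_pose(a: Pose, b: Pose, t: float) -> Pose:
    return {j: [(1 - t) * x + t * y for x, y in zip(a[j], b[j])] for j in a}

def crossfade(sign_a: List[Pose], sign_b: List[Pose], blend: int) -> List[Pose]:
    # Blend the final `blend` frames of sign_a into the opening frames of sign_b.
    mixed = [lerp_pose(sign_a[len(sign_a) - blend + i], sign_b[i], (i + 1) / blend)
             for i in range(blend)]
    return sign_a[:-blend] + mixed + sign_b[blend:]

# Hypothetical per-pair calibration table: how many frames each transition
# blends over (glosses from the sentence ELE DANCAR BEM, cedilla dropped).
BLEND: Dict[Tuple[str, str], int] = {('ELE', 'DANCAR'): 12, ('DANCAR', 'BEM'): 8}

def play_sentence(signs: Dict[str, List[Pose]], glosses: List[str]) -> List[Pose]:
    # Chain the signs of a sentence into one continuous pose sequence.
    out = list(signs[glosses[0]])
    for prev, cur in zip(glosses, glosses[1:]):
        out = crossfade(out, signs[cur], BLEND.get((prev, cur), 10))
    return out
```

In PE2LGP itself this chaining is handled by Unity's animation controller state machines, as described above; the sketch only mirrors the behaviour of never returning to a neutral pose between signs.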
Finally, videos (which are available at this footnote URL 6 ) of the avatar signing the sentences were sent to 4 evaluators, who answered an online form about the avatar's quality without being told the sentences' meaning or which signs were being produced. The evaluators described their proficiency as \"high\", \"moderately proficient\", \"medium\" and \"professional\". The form consisted of a set of questions which were repeated for each sentence: the evaluator would view the video of the avatar signing and try to determine the meaning of the sentence. (Footnote 6: tinyurl.com/PlaylistAvatarLGP) Figure 3: Examples of pre-made hand poses, which can be selected in the editor. Whether or not the evaluators were able to discern a sentence's meaning was the main factor in measuring how intelligible it was. Next, the evaluator would answer how many times they had to watch the video and rate several aspects of the sentence's quality: speed, overall quality, intelligibility, naturalness, grammar, hand configuration, hand orientation, hand location, and hand movement. These aspects were rated on a scale of 1 to 5, with 1 being \"Bad\" and 5 being \"Good\" (speed was the exception, where 1 meant \"Too slow\" and 5 meant \"Too fast\").", "cite_spans": [], "ref_spans": [ { "start": 660, "end": 668, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Evaluation Methodology", "sec_num": "4." }, { "text": "The full quantitative responses to our survey are available in Table 3 and Table 2.", "cite_spans": [], "ref_spans": [ { "start": 63, "end": 82, "text": "Table 3 and Table 2", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5." }, { "text": "Naturalness was the lowest-rated aspect, with an average of 1.7. This was to be expected, as the avatar's movements are too machine-like to appear human. The main requirements to improve naturalness would be facial expressions (both grammatical and non-grammatical), configurable key frame interpolation, automatic secondary movement, and lower-body movement. The highest-rated aspect was Speed, with 18 out of 28 perfect classifications. Part of the reason for this may be that speed problems are localised to particular signs or transitions. Another factor may be the initial waiting time within the videos, before the avatar begins signing. This waiting time was unintentional and not consistent across the videos (ranging from 4 seconds to less than a second), and we suspect it may create the illusion of a slower animation. We consider our most successful sentence to be ELE DAN\u00c7AR BEM, which was almost perfectly understood by all evaluators with a single viewing of the video and consistently outperformed other sentences across all categories. This sentence is interesting, as it is the only one in this experiment to include non-manual movement (waist and neck motion), and we believe that is why it scored higher in Naturalness than any other sentence. We consider the least successful sentence to be the second, MULHER REI IMPORTANTE MUITO CHEFE, which the evaluators viewed more times than usual and had trouble understanding (one could not name any signs correctly). One suspected cause for this difficulty is the absence of context. In the categories of hand configuration, hand orientation, hand location and hand movement, the responses often corresponded to our expectations, where sentences with higher quality signs received better scores. 
In a few cases, the responses (both quantitative and open-ended) led us to discover more subtle improvements that could be made to the signs (such as bringing the hand closer to the chin in the sign BOM). In some sentences, we detected that unruly transitions between animations made signs less clear, or at least less natural, by making the avatar's motion too quick or anatomically impossible. A number of problems with individual signs were also detected, most often imprecise hand poses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation", "sec_num": null }, { "text": "In this paper, we presented our Animator tool which, as a part the PE2LGP project, aims to be an accessible means to animate signs to be performed by an avatar. We also performed a preliminary evaluation of this system using Portuguese Sign Language sentences which, although too small to yield statistically significant results, provides valuable insight into the current capabilities, limitations and future potential, while demonstrating that the platform is indeed capable of producing comprehensible LGP, which is a positive result, given its simplicity and the complexity of synthesising natural languages. In the future, we would like to improve the Animator with both quality-of-life features, to make editing animations more comfortable and efficient, and, more importantly, improvements to enhance the tool's capacity, such as custom hand posing, facial expressions, and control over interpolation. For further research, it would be interesting to formally study the Animator's ease of use through user tests, as accessibility is one of its main design goals. Ultimately, this usability should enable the development of a thorough database of animations by using the Animator and its companion components to swiftly bring the first-hand knowledge of LGP users into the platform, to be used as a resource in this and other Portuguese Sign Language projects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6." }, { "text": "We owe our thanks to our four evaluators, who took the time to help in this study, and to Mara Moita and Neide Gon\u00e7alves, who helped ensure our LGP sentences were correct. This work was supported by national funds through FCT, Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia, under project UIDB/50021/2020 and PTDC/LLT-LIN/29887/2017.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "7." }, { "text": "Almeida, I. R. (2014). Exploring challenges in avatarbased translation from european portuguese to portuguese sign language. Master's thesis, Instituto Superior T\u00e9cnico, October.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bibliographical References", "sec_num": "8." }, { "text": "www.blender.org 2 unity.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "www.spreadthesign.com/pt.pt 4 www.infopedia.pt/dicionarios/ lingua-gestual 5 Finger-spelling consists of signing individual letters of the alphabet to spell out words, usually proper names. 
In Table 1 you can see that the proper name \"J\u00falio\" was finger-spelled as J-U-L-I-O.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Category | Evaluator | S1 | S2 | S3 | S4 | S5 | S6 | S7
Number of Views | A | 2 | >1 | 3-4 | 3-4 | 4 | 4-5 | 1
Number of Views | B | 1 | 7 | 2 | 3 | 2 | 1 | 1
Number of Views | C | 1 | >1 | >1 | >1 | >1 | >1 | 1
Number of Views | D | 2 | 5-6 | 5 | 1 | 3 | 1 | 1
Speed | A | 2 | 1 | 1 | 2 | 1 | 2 | 3
Speed | B | 3 | 1 | 3 | 2 | 2 | 3 | 4
Speed | C | 4 | NA | 2 | 3 | 2 | 3 | 4
Speed | D | 3 | NA | 1 | 2 | 2 | 3 | 5
Table 2: These were the responses of each evaluator to the questions \"How many times did you need to watch the video?\" and \"How adequate was the sentence's speed?\". Unlike the results presented in Table 3, the first question was open-ended, while the second was on a scale of 1 to 5, with 1 labeled \"Too slow\" and 5 labeled \"Too fast\". ", "cite_spans": [], "ref_spans": [ { "start": 1, "end": 277, "text": "Category Evaluator S1 S2 S3 S4 S5 S6 S7 A 2 >1 3-4 3-4 4 4-5 1 B 1 7 2 3 2 1 1 C 1 >1 >1 >1 >1 >1 1 Number of Views D 2 5-6 5 1 3 1 1 A 2 1 1 2 1 2 3 B 3 1 3 2 2 3 4 C 4 NA 2 3 2 3 4 Speed D 3 NA 1 2 2 3 5 Table 2", "ref_id": null }, { "start": 469, "end": 476, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "These were the responses of each evaluator for each category and sentence (S1, S2, etc), on a scale of 1 to 5. In the survey, the response NA was written \"I don't know", "authors": [], "year": null, "venue": "", "volume": "3", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "3: These were the responses of each evaluator for each category and sentence (S1, S2, etc), on a scale of 1 to 5. In the survey, the response NA was written \"I don't know\", while 1 was labeled \"Bad\" and 5 was labeled \"Good\".", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The cambrian explosion of popular 3d printing", "authors": [ { "first": "J", "middle": [ "L C" ], "last": "Cano", "suffix": "" } ], "year": 2011, "venue": "", "volume": "IJIMAI", "issue": "", "pages": "30--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cano, J. L. C. (2011). The cambrian explosion of popular 3d printing. IJIMAI, 1(4):30-32.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The development of language processing support for the ViSiCAST project", "authors": [ { "first": "R", "middle": [], "last": "Elliott", "suffix": "" }, { "first": "J", "middle": [ "R W" ], "last": "Glauert", "suffix": "" }, { "first": "J", "middle": [ "R" ], "last": "Kennaway", "suffix": "" }, { "first": "I", "middle": [], "last": "Marshall", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "101--108", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elliott, R., Glauert, J. R. W., Kennaway, J. R., and Marshall, I. (2004). The development of language processing support for the ViSiCAST project. 
pages 101-108.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "An Open Web Platform for Rule-Based Speech-to-Sign Translation", "authors": [ { "first": "J", "middle": [], "last": "Gerlach", "suffix": "" }, { "first": "I", "middle": [], "last": "Strasly", "suffix": "" }, { "first": "S", "middle": [], "last": "Ebling", "suffix": "" }, { "first": "M", "middle": [], "last": "Rayner", "suffix": "" }, { "first": "P", "middle": [], "last": "Bouillon", "suffix": "" }, { "first": "N", "middle": [], "last": "Tsourakis", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "162--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerlach, J., Strasly, I., Ebling, S., Rayner, M., Bouillon, P., and Tsourakis, N. (2016). An Open Web Platform for Rule-Based Speech-to-Sign Translation. (August):162- 168.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "American sign language recognition using leap motion sensor", "authors": [ { "first": "C", "middle": [], "last": "Guardino", "suffix": "" }, { "first": "C.-H", "middle": [], "last": "Chuan", "suffix": "" }, { "first": "Regina", "middle": [], "last": "", "suffix": "" }, { "first": "E", "middle": [], "last": "", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guardino, C., Chuan, C.-H., and Regina, E. (2014). Amer- ican sign language recognition using leap motion sensor.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "HamNoSys-Representing sign language data in language resources and language processing contexts", "authors": [ { "first": "T", "middle": [], "last": "Hanke", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Workshop on Representation and Processing of Sign Language, Workshop to the forth International Conference on Language Resources and Evaluation (LREC'04)", "volume": "", "issue": "", "pages": "1--6", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hanke, T. (2004). HamNoSys-Representing sign language data in language resources and language processing con- texts. Proceedings of the Workshop on Representation and Processing of Sign Language, Workshop to the forth International Conference on Language Resources and Evaluation (LREC'04), pages 1-6.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Sign language in the interface: Access for deaf signers. The Universal Access Handbook", "authors": [ { "first": "M", "middle": [], "last": "Huenerfauth", "suffix": "" }, { "first": "V", "middle": [ "L" ], "last": "Hanson", "suffix": "" } ], "year": 2009, "venue": "", "volume": "", "issue": "", "pages": "38--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huenerfauth, M. and Hanson, V. L. (2009). Sign language in the interface: Access for deaf signers. The Universal Access Handbook, pages 38-1-38-18.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Hamnosys generation system for sign language", "authors": [ { "first": "R", "middle": [], "last": "Kaur", "suffix": "" }, { "first": "P", "middle": [], "last": "Kumar", "suffix": "" } ], "year": 2014, "venue": "2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI)", "volume": "", "issue": "", "pages": "2727--2734", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaur, R. and Kumar, P. (2014). Hamnosys generation sys- tem for sign language. 
In 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pages 2727-2734, Sep.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Pe2lgp: Do texto \u00e0 l\u00edngua gestual (e vice-versa)", "authors": [ { "first": "R", "middle": [ "E R" ], "last": "Santos", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Santos, R. E. R. (2016). Pe2lgp: Do texto \u00e0 l\u00edngua gestual (e vice-versa). Master's thesis, Instituto Superior T\u00e9cnico, October.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "SYNTHETIC SIGNING FOR THE DEAF: eSIGN", "authors": [ { "first": "I", "middle": [], "last": "Zwitserlood", "suffix": "" }, { "first": "M", "middle": [], "last": "Verlinden", "suffix": "" }, { "first": "J", "middle": [], "last": "Ros", "suffix": "" }, { "first": "S", "middle": [ "V D" ], "last": "Schoot", "suffix": "" }, { "first": "T", "middle": [], "last": "Netherlands", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zwitserlood, I., Verlinden, M., Ros, J., Schoot, S. V. D., and Netherlands, T. ). SYNTHETIC SIGNING FOR THE DEAF: eSIGN.", "links": null } }, "ref_entries": {} } }