{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:07:00.142335Z" }, "title": "\"You move THIS!\": Annotation of Pointing Gestures on Tabletop Interfaces in Low Awareness Situations", "authors": [ { "first": "Dimitra", "middle": [], "last": "Anastasiou", "suffix": "", "affiliation": {}, "email": "dimitra.anastasiou@list.lu" }, { "first": "Hoorieh", "middle": [], "last": "Afkari", "suffix": "", "affiliation": {}, "email": "hoorieh.afkari@list.lu" }, { "first": "Val\u00e9rie", "middle": [], "last": "Maquil", "suffix": "", "affiliation": {}, "email": "valerie.maquil@list.lu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper analyses pointing gestures during low awareness situations occurring in a collaborative problem-solving activity implemented on an interactive tabletop interface. Awareness is considered as crucial requirement to support fluid and natural collaboration. We focus on pointing gestures as strategy to maintain awareness. We describe the results from a user study with five groups, each group consisting of three participants, who were asked to solve a task collaboratively on a tabletop interface. The ideal problem-solving solution would have been, if the three participants had been fully aware of what their personal area is depicting and had communicated this properly to the peers. However, often some participants are hesitant due to lack of awareness, some other want to take the lead work or expedite the process, and therefore pointing gestures to others' personal areas arise. Our results from analyzing a multimodal corpus of 168.68 minutes showed that in 95% of the cases, one user pointed to the personal area of the other, while in a few cases (3%) a user not only pointed, but also performed a touch gesture on the personal area of another user. 
In our study, the mean number of such pointing gestures per minute in low awareness situations, across all groups, was M=1.96, SD=0.58.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "This paper analyses pointing gestures during low awareness situations occurring in a collaborative problem-solving activity implemented on an interactive tabletop interface. Awareness is considered a crucial requirement for supporting fluid and natural collaboration. We focus on pointing gestures as a strategy to maintain awareness. We describe the results from a user study with five groups, each consisting of three participants, who were asked to solve a task collaboratively on a tabletop interface. The ideal problem-solving solution would have been for the three participants to be fully aware of what their personal areas depict and to communicate this properly to their peers. However, some participants are often hesitant due to lack of awareness, while others want to take the lead or expedite the process, and therefore pointing gestures to others' personal areas arise. Our results from analyzing a multimodal corpus of 168.68 minutes showed that in 95% of the cases, one user pointed to the personal area of another, while in a few cases (3%) a user not only pointed, but also performed a touch gesture on the personal area of another user. In our study, the mean number of such pointing gestures per minute in low awareness situations, across all groups, was M=1.96, SD=0.58.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Collaborative problem-solving (ColPS) is included in the Learning and Innovation skills of the 21st Century.
It is defined as \"the abilities to recognize the points of view of other persons in a group; contribute knowledge, experience, and expertise in a constructive way; identify the need for contributions and how to manage them; recognize structure and procedure involved in resolving a problem; and as a member of the group, build and develop group knowledge and understanding\" (Griffin et al., 2012) . ColPS represents the interaction of two distinct, though tightly connected dimensions of skills: i) complex problemsolving as the cognitive dimension and ii) collaboration as the interpersonal dimension (OECD, 2017) . During collaborative activities, awareness is considered as crucial. It can reduce effort, increase efficiency, and reduce errors (Gutwin & Greenberg, 2002) . In this paper, we focus on pointing gestures that are performed to reestablish awareness during collaborative problem-solving using a tangible tabletop interface. Our research question is whether and how are pointing gestures related to \"low awareness\" situations. We describe a between-group user study with five groups of three participants each, who were asked to solve a problem collaboratively. This collaborative problem is a computersimulated scenario about an imaginary planet; the participants need to act as space mining crew in order to mine valuable minerals and ship them to earth. The main task of the participants is to collaboratively locate and mine the requested minerals meanwhile avoiding the threats of the environment in the shared activity area. Information and controls were split in three personal areas, each of them dedicated to one participant with the aim to give different and complementary responsibilities to each of the participants. The ideal problem-solving solution would be that each user first fully understands the information and features of their own personal area, then reflects this understanding when communicating to the peers and last, takes action (i.e. 
manipulating the buttons) after having agreed to the suggestions of their peers. However, we noticed that users often instructed each other about which buttons to press, making use of co-speech communicative gestures. In this paper, we focus on the pointing gestures used in these situations. More precisely, we are interested in the use of pointing gestures towards other users' personal areas with the intention of obtaining and maintaining awareness in collaborative problem-solving situations. Therefore, the goal of this paper is the gesture data analysis of a multimodal corpus resulting from a study on collaborative problem-solving using a tabletop. This paper is laid out as follows: in Section 2 we present related work with regard to awareness, interference, and collaboration on tabletop interfaces. In Section 3 we present our research goal along with a few examples of low awareness situations that we observed in our user study. Our study design is presented in Section 4 together with the computer-simulated problem. In Section 5 we present the main contribution of this paper, our multimodal corpus and its data analysis. We close this paper with a discussion and future work in Section 6.", "cite_spans": [ { "start": 484, "end": 506, "text": "(Griffin et al., 2012)", "ref_id": "BIBREF6" }, { "start": 712, "end": 724, "text": "(OECD, 2017)", "ref_id": null }, { "start": 857, "end": 883, "text": "(Gutwin & Greenberg, 2002)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Our research work is in the domain of collaborative problem-solving on interactive tabletop interfaces. The main characteristic of an interactive tabletop is a large horizontal screen which is used as a display and an interactive surface at the same time (Bellucci et al., 2014) . It has been widely reported in prior research that interactive tabletops have a positive impact on collaboration (e.g.
Scott et al., 2003) and collaborative learning (Rick et al., 2011) . Hornecker et al. (2008) explored awareness in co-located settings through negative and positive awareness indicators. Negative awareness indicators are i) interference (e.g., reaching for the same object) and ii) verbal monitoring (\"what did you do there?\"), while positive awareness indicators are i) reaction without explicit request and ii) parallel work on the same activity without verbal coordination, among others. In this paper, we will explore \"pointing gestures towards other users' personal areas\" as an additional awareness mechanism. Falc\u00e3o & Price (2009) ran a user study that explored collaborative activity on a tangible tabletop to support co-located learning about the physics of light. They found that the 'interference' activity happened both accidentally and intentionally, when children purposely changed arrangements to give demonstrations or helped each other out by giving instructions, both physically and verbally. This led the group of children through a productive process of collective exploration and knowledge construction. Our research is also related to information visualisation, shared control, territoriality, and multi-view tabletops. Stewart et al. (1999) have shown that shared control resulted in less collaboration due to parallel working without having to share the input device. Lissermann et al. (2014) introduced Permulin, a system for mixed-focus collaboration on multi-view tabletops, which provides distinct private views or a group view that is overlaid with private contents, thus allowing easy and seamless transitions along the entire spectrum between tightly and loosely coupled collaboration. Most recently, Woodward et al.
(2018) adapted the social regulation and group processes of Rogat & Linnenbrink-Garcia (2001) and broke down the social interactions into four main themes: Social Regulation, Positive Socioemotional Interactions (encouraging participation), Negative Socioemotional Interactions (discouraging participation), and Interactions. Under Interactions, they included Roles, which is about \"respecting or not respecting assigned role, enforcing roles, pointing to other area\". This paper builds upon this kind of interaction and roles. Since we are exploring pointing gestures in multi-user collaborative environments, cooperative gestures, as described in Morris et al. (2006) , are of interest in our research. They introduced the so-called symmetry axis, referring to whether participants perform identical or distinct actions, and parallelism as the relative timing of each contributor's gesture. An additive gesture is one which is meaningful when performed by a single user, but whose meaning is amplified when simultaneously performed by all members of the group.", "cite_spans": [ { "start": 250, "end": 273, "text": "(Bellucci et al., 2014)", "ref_id": "BIBREF2" }, { "start": 397, "end": 415, "text": "Scott et al., 2003)", "ref_id": "BIBREF19" }, { "start": 443, "end": 462, "text": "(Rick et al., 2011)", "ref_id": "BIBREF17" }, { "start": 465, "end": 488, "text": "Hornecker et al. (2008)", "ref_id": "BIBREF8" }, { "start": 1001, "end": 1022, "text": "Falc\u00e3o & Price (2009)", "ref_id": "BIBREF5" }, { "start": 1620, "end": 1641, "text": "Stewart et al. (1999)", "ref_id": "BIBREF20" }, { "start": 1769, "end": 1793, "text": "Lissermann et al. (2014)", "ref_id": "BIBREF10" }, { "start": 2098, "end": 2120, "text": "Woodward et al. (2018)", "ref_id": "BIBREF23" }, { "start": 2174, "end": 2207, "text": "Rogat & Linnenbrink-Garcia (2001)", "ref_id": null }, { "start": 2758, "end": 2778, "text": "Morris et al.
(2006)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "At Luxembourg Institute of Science and Technology, there have been several user studies on tabletop interfaces conducted (e.g., Ras et al., 2013; Lahure et al., 2018; Anastasiou et al., 2018) , mostly within the context of collaborative problem-solving. Within the past project GETUI 1 , Anastasiou et al. (2018) examined the relevance of gestures in the assessment of group collaborative skills. The current project ORBIT 2 has the goal of enhancing users' awareness of their collaboration strategies by providing them with tasks and tools that induce their collaboration and create overall a positive user experience. To do so, a problem-solving activity is designed and implemented through an iterative design process, in which tasks and features are designed that repeatedly put users in a situation to collaborate (see Sunnen et al., 2019) . ORBIT 1 https://www.list.lu/en/research/project/getui/, 17.02.2020 benefits from the potentials of both tangible and multitouch interaction in terms of promoting collaboration. As far as awareness is concerned, according to Endsley (1995) , situation awareness refers to \"knowing what is going on\" and involves states of knowledge as well as dynamic processes of perception and action. In this paper, we explore the situations of low awareness and define them as \"situations where explicit awareness work occurs\", according to Hornecker et al. (2008) . Table 1 lists a few of such low awareness situations that happened in our user study. 
As a reaction to these situations, and in order to obtain and maintain awareness, a person might employ exaggerated manual actions to draw attention (Hornecker et al., 2008) .", "cite_spans": [ { "start": 128, "end": 145, "text": "Ras et al., 2013;", "ref_id": "BIBREF16" }, { "start": 146, "end": 166, "text": "Lahure et al., 2018;", "ref_id": "BIBREF9" }, { "start": 167, "end": 191, "text": "Anastasiou et al., 2018)", "ref_id": "BIBREF1" }, { "start": 288, "end": 312, "text": "Anastasiou et al. (2018)", "ref_id": "BIBREF1" }, { "start": 824, "end": 844, "text": "Sunnen et al., 2019)", "ref_id": "BIBREF21" }, { "start": 1071, "end": 1085, "text": "Endsley (1995)", "ref_id": "BIBREF4" }, { "start": 1374, "end": 1397, "text": "Hornecker et al. (2008)", "ref_id": "BIBREF8" }, { "start": 1622, "end": 1646, "text": "(Hornecker et al., 2008)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 1400, "end": 1408, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Research goal", "sec_num": "3." }, { "text": "i) New information is revealed (e.g. new features or hidden items) and users are not yet familiar with it. ii) A suggestion for a route is made, but one or more users are hesitant and thus inactive (not speaking & not pressing any buttons). iii) One or more users take a bad decision by moving the rover towards an unfavorable cell. iv) Two or more users disagree verbally. It is worth mentioning that this list is non-exhaustive and these situations are mostly context-dependent. In this paper, we will focus only on pointing gestures as a reaction to low awareness situations, and by this we mean pointing gestures addressed to the area of the tabletop that another participant is responsible for. Table 2 presents such cases along with some relevant figures underneath ( Fig. 1, 2 , 3). After we describe our user study within the ORBIT project (Section 4), we count those pointing gesture occurrences in our data analysis (Section 5).
", "cite_spans": [], "ref_spans": [ { "start": 688, "end": 695, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 762, "end": 771, "text": "Fig. 1, 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Research goal", "sec_num": "3." }, { "text": "In this Section we describe our user study design (4.1), as well as the task of the participants, i.e. the computersimulated problem (4.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "User Study", "sec_num": "4." }, { "text": "The user study was an experimental between-subjects design with 5 groups consisting of 3 participants each. Depending on the analysis objective, the analysis unit might be the group (n=5) or the individual (n=15). The participants were not informed by any means about the task that they had to solve which is in line with the concept of a microworld. Microworlds are defined by Edwards (1991) as the instantiation of an artificial environment that behaves according to a custom set of mathematical rules or scientific subdomains. Moreover, the participants did not know each other, as this familiarity would have biased the interference. The occupational background of the participants is heterogeneous: 6 were employees of municipal departments, 6 elementary school teachers, 2 computer science researchers and 1 civil engineering researcher. They have never used a tangible tabletop before. The groups were gender and age-mixed: 10 male and 5 female; 5 were aged between 25-34, 5 between 35-44, and 5 between 45-54. Groups spoke in different languages; 3 groups spoke in Luxembourgish, 1 in French, and 1 in English. The potential differences in gesture performance due to the language spoken is out of the scope of this paper. As far as the technological setup is concerned, there was the multitouch table Multitaction, which that recognizes fingertips, fingers, hands and objects simultaneously (see Fig.1-3 ). 
There were four fixed cameras placed at the top, front, left and right angles.", "cite_spans": [ { "start": 378, "end": 392, "text": "Edwards (1991)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 1404, "end": 1411, "text": "Fig.1-3", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Study design", "sec_num": "4.1" }, { "text": "The computer-simulated problem in this user study, visualised on the tabletop, is a joint problem-solving activity developed in the context of the ORBIT project and is called Orbitia. Orbitia aims to support participants in developing their collaboration methods. In the activity narrative, provided as a textual instruction on the tabletop before the commencement of the experiment 3 , participants are located on Orbitia, an imaginary planet where they need to act as a space mining crew in order to mine valuable minerals and ship them to Earth. The main task of participants is to steer a rover and operate a radar drone on the planet surface to find and collect the required minerals. In parallel, participants need to deal with limitations of the environment, such as obstacles, energy and movement constraints. The activity has three missions and takes place within a 9 \u00d7 11 grid presented at the centre of the tabletop screen.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Computer-simulated problem", "sec_num": "4.2" }, { "text": "In addition to the rover, there are other icons: 1) Minerals: the main collectable items; participants are informed about the number of required minerals at the beginning of each mission as part of the task description. 2) Sharp rocks: steering the rover to cells containing sharp rocks causes damage to the rover and makes it unable to move, unless a repair is done by the participants. Damaging the rover more than three times causes the mission to fail. 3) Batteries: each movement of the rover costs one unit of energy and participants need to recharge the rover when needed by steering it onto a cell containing a battery.
4) Canyons are cells marked darker than normal grid cells; leading the rover into a canyon results in destroying the rover and failing the mission. 5) Dust storm area: a part of the grid is marked as a cloud-like area. According to the activity scenario, this area is affected by a dust storm and, therefore, the items located in any of those cells are hidden. Participants need to use the radar drone in order to find and reveal the hidden items. It is important to note that there were three personal areas, known as control panels, on three sides of the screen (see Fig. 4 ). The idea is to give each user a specific personal area in front of his/her position, providing them with the opportunity of individual control over certain aspects of the activity: mining, energy and damage. No information was given to the users prior to the study regarding the control panels and the users' specific responsibilities. Nevertheless, the distributed location and design of the control panels led the users to place themselves in front of each panel and find out about their own specific responsibility.", "cite_spans": [], "ref_spans": [ { "start": 517, "end": 523, "text": "Fig. 4", "ref_id": null } ], "eq_spans": [], "section": "Computer-simulated problem", "sec_num": "4.2" }, { "text": "As a result of our observational user study, we collected a total of 168.68 minutes of audiovisual material. This audiovisual corpus can be used for many purposes, such as conversational analysis, gesture analysis, complex problem-solving assessment, and many others. In the next section, we present the results of the complex problem-solving assessment and the pointing gesture occurrences in low awareness situations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multimodal corpus", "sec_num": "5."
}, { "text": "Here we present the quantitative data analysis results of the complex problem-solving assessment (5.1.1), which is categorized into two measurements: i) response time and ii) errors. Moreover, we measured the pointing gesture occurrences towards other users' personal areas (5.1.2), as presented in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 299, "end": 306, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data analysis", "sec_num": "5.1" }, { "text": "We looked at the total response time of each group, i.e. the time each group needed to solve the collaborative problem in total (see Table 3 ) as well as the errors the groups made in total. In Orbitia, we have defined an error as destroying the rover, which could have happened if the users had run three times over a cell containing sharp rocks or led the rover in a canyon cell, or run out of energy. Table 3 : Groups' response times and error rates Group 4 was the fastest group with 23:02 min, while the slowest group was Group 1 with 49:25 min. The slowest group spent a lot of time analysing and discussing before they manipulate the tangible objects and items of the activity. This had as an impact on the complete lack of errors (n=0). Interesting is, though, that while Group 2 and Group 4 solved the problem almost at the same time with a slight difference of 1:30 min, Group 2 made 8 errors, while Group 4 made 0 errors. This shows that making errors results in more trials, but does not necessarily decelerate the process of collaborative problem-solving.", "cite_spans": [], "ref_spans": [ { "start": 133, "end": 140, "text": "Table 3", "ref_id": null }, { "start": 404, "end": 411, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Response time & errors in problem-solving", "sec_num": "5.1.1" }, { "text": "Gesture occurrences After annotating the videos with ELAN (Wittenburg et al. 
2006) , we found in total 341 such pointing gestures directed at the personal areas of the peers. Table 4 depicts the gesture occurrences performed by each participant and in each group. Based on the relative gesture numbers (gestures per second), Group 4 performed the most gestures. Thus, we deduce that the more frequent the pointing gestures produced by a group, the fewer errors it made. It should be noted that there are some extreme cases, such as user A in Group 3, who performed many more gestures than all other users. In this case, we are dealing with a person who wants to take the lead in the problem-solving activity. Table 5 presents descriptive statistics about the kind of gesture occurrences during low awareness situations.", "cite_spans": [ { "start": 58, "end": 82, "text": "(Wittenburg et al. 2006)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 189, "end": 196, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 722, "end": 729, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "5.1.2", "sec_num": null }, { "text": "One user pointing to another participant's area (Fig. 1) 312", "cite_spans": [], "ref_spans": [ { "start": 48, "end": 56, "text": "(Fig. 1)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "#gestures", "sec_num": null }, { "text": "Two users pointing to another (same) user's area (Fig. 2) 4", "cite_spans": [], "ref_spans": [ { "start": 49, "end": 57, "text": "(Fig. 2)", "ref_id": null } ], "eq_spans": [], "section": "#gestures", "sec_num": null }, { "text": "User A pointing to user B's area and user C pointing to user A's area (Fig. 3) 2", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 78, "text": "(Fig.
3)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "#gestures", "sec_num": null }, { "text": "One user pointing to and touching at another user's area 11 Table 5 : Gesture occurrences towards other users' personal areas in our scenario (Orbitia)", "cite_spans": [], "ref_spans": [ { "start": 60, "end": 67, "text": "Table 5", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "#gestures", "sec_num": null }, { "text": "The results show that the biggest amount of gestures are when one user points to another user's area (95%). That two users point simultaneously or consecutively to another user's area is quite uncommon, since users retracted their gestures when they saw that their peer is going to perform the same gesture as they planned, so they considered it as a non-additive gesture (according to Morris et al., 2006) . The most seldom cases were the ones that two users pointed at different personal areas. There were also a few cases, where one user not only pointed to the other user's area, but also touched it. These situations are indeed rare, however, the user who manipulates someone else's area, is considering him/herself as a lead person, while in the other cases (pointing only, without touching), it is clear that the users are trying to help and not taking the lead action.", "cite_spans": [ { "start": 386, "end": 406, "text": "Morris et al., 2006)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "#gestures", "sec_num": null }, { "text": "A gesture taxonomy A taxonomy of gestures being performed on tangible tabletops, taking into account both the 2D and 3D space was developed earlier (Anastasiou & Bergmann, 2016; Anastasiou et al., 2018) . We followed the taxonomy of McNeill (1992) , and focused particularly on gesticulation (further classified into iconic, metaphoric, rhythmic, cohesive, and deictic gestures), but also emblems and adaptors. 
As for gesture taxonomy from an HCI perspective, we followed Quek (1995) , who classified meaningful gestures into communicative and manipulative gestures. Manipulative gestures can occur either on the desktop in a 2-D interaction using a direct manipulation device, in a 3-D interaction involving empty-handed movements to mimic manipulations of physical objects, or by manipulating actual physical objects that map onto a virtual object in TUIs. We focus particularly on the first and third categorizations of manipulative gestures. Therefore, in our taxonomy we have manipulative gestures, which are restricted to screen-based activity (Table 5) , and communicative co-speech gestures, which happen in the 3D space, such as pointing and iconic gestures, but also affect displays, adaptors and emblems. In our setting, many pointing gestures were beats (McNeill, 1992) or batonic gestures, which are simple, brief, repetitive, and coordinated with speech prosody, used either to emphasize information on the other users' personal area or to gain the interlocutor's overall attention. Van den Hoven & Mazalek (2010) defined tangible gesture interaction as the use of physical devices for facilitating, supporting, enhancing, or tracking gestures people make for digital interaction purposes. As in the case of Price et al. (2010) , in our study we also had a mixture of manipulative and communicative gestural interaction.", "cite_spans": [ { "start": 148, "end": 177, "text": "(Anastasiou & Bergmann, 2016;", "ref_id": "BIBREF0" }, { "start": 178, "end": 202, "text": "Anastasiou et al., 2018)", "ref_id": "BIBREF1" }, { "start": 233, "end": 247, "text": "McNeill (1992)", "ref_id": "BIBREF12" }, { "start": 472, "end": 483, "text": "Quek (1995)", "ref_id": "BIBREF15" }, { "start": 1250, "end": 1264, "text": "(McNeill, 1992)", "ref_id": null }, { "start": 1489, "end": 1511, "text": "Hoven & Mazalek (2010)", "ref_id": null }, { "start": 1706, "end": 1725, "text": "Price et al.
(2010)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 1047, "end": 1056, "text": "(Table 5)", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "5.1.2.1", "sec_num": null }, { "text": "placing removing tracing rotating resizing tapping sweeping flicking holding ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manipulative", "sec_num": null }, { "text": "In this paper, we described a user study on collaborative problem-solving using an interactive tabletop. We examined only the explicit awareness work in the form of pointing to the other participant's personal area. The average number of such pointing gestures per minute in total was 1.96. From the annotations, we can deduce that these gestures mostly happen in the familiarization phase, i.e. the first minutes of the experiment, where the participants familiarize themselves with the features and information of the problem-solving scenario. Certainly, the way the problem-solving scenario is designed is responsible for the frequency of such gesture occurrences. The technological setup, the task of the participants, the territoriality as well as the shape/size of tangibles have a great influence on the resulting interaction patterns. It is common fact in the literature that gestures aid both communicators and recipients in problem-solving (Lozano & Tversky, 2006) and facilitate thinking and speaking. Real decision-making and problem-solving can become highly complex and require the expertise of a heterogeneous group of communicators. In these situations, it is essential that users quickly obtain and maintain awareness of the situation and others. Therefore, it is important to know how to evaluate and assess such pointing gestures as reaction to low awareness. Indeed, it is difficult to observe \"pure\" low awareness situations and thus isolate corresponding gestures. 
In our microworld scenario, we defined personal areas/control stations for each participant, so when a pointing gesture was addressed to the personal area of another user, it was counted as a gesture occurrence during a low awareness situation. From our gesture analysis, we can deduce that those gestures happen when one user is not reacting fast enough, is performing adaptors (head or mouth scratching), or is taking a bad decision by moving the rover to an unfavorable cell. In parallel, the speech is often loud and the utterances are personally targeted. As far as future work is concerned, we plan to run more user studies with Orbitia with more groups speaking the same mother tongue. With regards to the annotations, it is important to annotate how the person who was pointed at reacted: verbally, physically, or not at all. If verbally, what (s)he said (conversational analysis), and if physically, which kind of gestures (s)he performed. Some of the arguments were at the negotiation phase (\"We do not need to hurry, it is the number of moves\"), whereas some others were targeted personally at the other participants: \"You have not used this wisely\", \"You have to think before we move\". We also plan to annotate the utterances according to the social regulation patterns of Woodward et al. (2018) . The analysis of awareness work mechanisms will be enhanced by annotating changes of body position as well as facial expressions and eye gaze. We will also look at using automated systems for gesture annotation to speed up the time-consuming task of annotation. In this case, the automatically recognized manipulative gestures can also be automatically annotated in the system.", "cite_spans": [ { "start": 950, "end": 974, "text": "(Lozano & Tversky, 2006)", "ref_id": "BIBREF11" }, { "start": 2780, "end": 2802, "text": "Woodward et al. (2018)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6."
}, { "text": "Not rare is the case that the instructions of other participants is not semantically correct. This means, that the people that interfere believe that momentarily they give the correct instruction, but often, they self-reflect again (often during their instruction) and correct themselves either verbally (through repair) or physically (retracting gestures) or both. Therefore, the annotation should also include the semantic connotation of the interference: right/wrong. The same holds for the reaction of the pointed person, as it is often the case that (s)he just listens to and obeys the instructions of the peers without self-reflecting if these are right or wrong. Last but not least, in this paper, we have presented only descriptive statistics; after collecting more data, we will run inferential statistics to confirm the statistical significance between gesture occurrences, error rates, and response times.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Future Work", "sec_num": "6." }, { "text": "We would like to thank the Luxembourg National Research Fund (FNR) for funding this research under the CORE scheme (Ref. 11632733) .", "cite_spans": [ { "start": 115, "end": 130, "text": "(Ref. 11632733)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "7." }, { "text": "This narrative was the only instruction given to the participants. front, left and right angle. 
For our gesture analysis & annotation, we used the front camera view.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A Gesture-Speech Corpus on a Tangible Interface", "authors": [ { "first": "D", "middle": [], "last": "Anastasiou", "suffix": "" }, { "first": "K", "middle": [], "last": "Bergmann", "suffix": "" } ], "year": 2016, "venue": "Proceedings of Multimodal Corpora Workshop, LREC Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anastasiou, D. & Bergmann, K. (2016). A Gesture-Speech Corpus on a Tangible Interface. Proceedings of Multimodal Corpora Workshop, LREC Conference.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Assessment of Collaboration and Feedback on Gesture Performance", "authors": [ { "first": "D", "middle": [], "last": "Anastasiou", "suffix": "" }, { "first": "E", "middle": [], "last": "Ras", "suffix": "" }, { "first": "M", "middle": [], "last": "Fal", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Technology-enhanced Assessment", "volume": "", "issue": "", "pages": "219--232", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anastasiou, D., Ras, E., Fal., M., (2018), Assessment of Collaboration and Feedback on Gesture Performance, in: Proceedings of the Technology-enhanced Assessment (TEA) conference 2018, Springer, 219-232.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Light on horizontal interactive surfaces: Input space for tabletop computing", "authors": [ { "first": "A", "middle": [], "last": "Bellucci", "suffix": "" }, { "first": "A", "middle": [], "last": "Malizia", "suffix": "" }, { "first": "I", "middle": [], "last": "Aedo", "suffix": "" } ], "year": 2014, "venue": "ACM Computing Surveys (CSUR)", "volume": "46", "issue": "3", "pages": "1--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bellucci, A., Malizia, 
A., & Aedo, I. (2014). Light on horizontal interactive surfaces: Input space for tabletop computing. ACM Computing Surveys (CSUR), 46(3), 1- 42.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The design and analysis of a mathematical microworld", "authors": [ { "first": "L", "middle": [ "D" ], "last": "Edwards", "suffix": "" } ], "year": 1991, "venue": "Journal of Educational Computing Research", "volume": "12", "issue": "1", "pages": "77--94", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edwards, L.D. (1991). The design and analysis of a mathematical microworld. Journal of Educational Computing Research, 12(1), 77-94.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Toward a Theory of Situation Awareness in Dynamic Systems", "authors": [ { "first": "M", "middle": [ "R" ], "last": "Endsley", "suffix": "" } ], "year": 1995, "venue": "Human Factors", "volume": "37", "issue": "1", "pages": "32--64", "other_ids": {}, "num": null, "urls": [], "raw_text": "Endsley, M. R. (1995). Toward a Theory of Situation Awareness in Dynamic Systems. Human Factors, 37(1), 32-64.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "What have you done! the role of 'interference' in tangible environments for supporting collaborative learning", "authors": [ { "first": "T", "middle": [ "P" ], "last": "Falc\u00e3o", "suffix": "" }, { "first": "S", "middle": [], "last": "Price", "suffix": "" } ], "year": 2009, "venue": "CSCL", "volume": "", "issue": "", "pages": "325--334", "other_ids": {}, "num": null, "urls": [], "raw_text": "Falc\u00e3o, T. P., & Price, S. (2009). What have you done! the role of 'interference' in tangible environments for supporting collaborative learning. 
In CSCL (1), 325-334.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The changing role of education and schools", "authors": [ { "first": "P", "middle": [], "last": "Griffin", "suffix": "" }, { "first": "E", "middle": [], "last": "Care", "suffix": "" }, { "first": "B", "middle": [], "last": "Mcgaw", "suffix": "" } ], "year": 2012, "venue": "Assessment and Teaching 21st Century Skills", "volume": "", "issue": "", "pages": "1--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Griffin, P., Care, E., & McGaw, B. (2012). The changing role of education and schools. In P. Griffin, B. McGaw, & E. Care (Eds.), Assessment and Teaching 21st Century Skills, Heidelberg: Springer, 1-15.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A descriptive framework of workspace awareness for real-time groupware", "authors": [ { "first": "C", "middle": [], "last": "Gutwin", "suffix": "" }, { "first": "S", "middle": [], "last": "Greenberg", "suffix": "" } ], "year": 2002, "venue": "Computer Supported Cooperative Work (CSCW)", "volume": "11", "issue": "3-4", "pages": "411--446", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gutwin, C., & Greenberg, S. (2002). A descriptive framework of workspace awareness for real-time groupware. Computer Supported Cooperative Work (CSCW), 11(3-4), 411-446.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Collaboration and Interference: Awareness with Mice or Touch Input", "authors": [ { "first": "E", "middle": [], "last": "Hornecker", "suffix": "" }, { "first": "P", "middle": [], "last": "Marshall", "suffix": "" }, { "first": "S", "middle": [], "last": "Dalton", "suffix": "" }, { "first": "Y", "middle": [], "last": "Rogers", "suffix": "" } ], "year": 2008, "venue": "Proceedings of Computer Supported Cooperative Work (CSCW'08)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hornecker, E., Marshall, P., Dalton, S., & Rogers, Y. (2008). 
Collaboration and Interference: Awareness with Mice or Touch Input. Proceedings of Computer Supported Cooperative Work (CSCW'08), San Diego, USA. ACM Press.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Slowing Down Interactions on", "authors": [ { "first": "C", "middle": [], "last": "Lahure", "suffix": "" }, { "first": "V", "middle": [], "last": "Maquil", "suffix": "" } ], "year": 2018, "venue": "Tangible Tabletop Interfaces. i-com", "volume": "17", "issue": "3", "pages": "189--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lahure, C., & Maquil, V. (2018). Slowing Down Interactions on Tangible Tabletop Interfaces. i-com, 17(3), 189-199.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Permulin: mixed-focus collaboration on multi-view tabletops", "authors": [ { "first": "R", "middle": [], "last": "Lissermann", "suffix": "" }, { "first": "J", "middle": [], "last": "Huber", "suffix": "" }, { "first": "M", "middle": [], "last": "Schmitz", "suffix": "" }, { "first": "J", "middle": [], "last": "Steimle", "suffix": "" }, { "first": "M", "middle": [], "last": "M\u00fchlh\u00e4user", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems", "volume": "", "issue": "", "pages": "3191--3200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lissermann, R., Huber, J., Schmitz, M., Steimle, J., & M\u00fchlh\u00e4user, M. (2014). Permulin: mixed-focus collaboration on multi-view tabletops. 
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3191-3200.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "RETRACTED: Communicative gestures facilitate problem solving for both communicators and recipients", "authors": [ { "first": "S", "middle": [ "C" ], "last": "Lozano", "suffix": "" }, { "first": "B", "middle": [], "last": "Tversky", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lozano, S. C., & Tversky, B. (2006). RETRACTED: Communicative gestures facilitate problem solving for both communicators and recipients.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Hand and mind: What gestures reveal about thought", "authors": [ { "first": "D", "middle": [], "last": "Mcneill", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "McNeill, D. (1992). Hand and mind: What gestures reveal about thought, Chicago: University of Chicago Press.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Cooperative gestures: multiuser gestural interactions for co-located groupware", "authors": [ { "first": "M", "middle": [ "R" ], "last": "Morris", "suffix": "" } ], "year": 2006, "venue": "PISA 2015 draft collaborative problem solving framework", "volume": "", "issue": "", "pages": "1201--1210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morris, M.R., et al. (2006). Cooperative gestures: multi-user gestural interactions for co-located groupware. Proceedings of CHI 2006, 1201-1210. Organisation for Economic Co-operation and Development (OECD). 2013.
PISA 2015 draft collaborative problem solving framework, https://www.oecd.org/pisa/pisaproducts/Draft%20PISA%202015%20Collaborative%20Problem%20Solving%20Framework%20.pdf.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Action and representation in tangible systems: implications for design of learning interactions", "authors": [ { "first": "S", "middle": [], "last": "Price", "suffix": "" }, { "first": "J", "middle": [ "G" ], "last": "Sheridan", "suffix": "" }, { "first": "T", "middle": [], "last": "Falcao", "suffix": "" } ], "year": 2010, "venue": "Proc. 4th Int. Conf. Tangible, Embedded, and Embodied Interaction (TEI '10", "volume": "", "issue": "", "pages": "145--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Price, S., Sheridan, J.G., & Pontual Falcao, T. (2010). Action and representation in tangible systems: implications for design of learning interactions. Proc. 4th Int. Conf. Tangible, Embedded, and Embodied Interaction (TEI '10), 145-152.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Eyes in the interface", "authors": [ { "first": "F", "middle": [ "K H" ], "last": "Quek", "suffix": "" } ], "year": 1995, "venue": "Image and Vision Computing", "volume": "13", "issue": "6", "pages": "511--525", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quek, F.K.H. (1995). Eyes in the interface. Image and Vision Computing, 13(6), 511-525.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Empirical studies on a tangible user interface for technology-based assessment: Insights and emerging challenges", "authors": [ { "first": "E", "middle": [], "last": "Ras", "suffix": "" } ], "year": 2013, "venue": "International Journal of e-Assessment", "volume": "3", "issue": "1", "pages": "201--241", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ras, E. et al. (2013). Empirical studies on a tangible user interface for technology-based assessment: Insights and emerging challenges.
International Journal of e-Assessment, 3(1), 201-241.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Beyond one-size-fits-all: How interactive tabletops support collaborative learning", "authors": [ { "first": "J", "middle": [], "last": "Rick", "suffix": "" }, { "first": "P", "middle": [], "last": "Marshall", "suffix": "" }, { "first": "N", "middle": [], "last": "Yuill", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 10th International Conference on Interaction Design and Children", "volume": "", "issue": "", "pages": "109--117", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rick, J., Marshall, P., & Yuill, N. (2011). Beyond one-size-fits-all: How interactive tabletops support collaborative learning. Proceedings of the 10th International Conference on Interaction Design and Children, 109-117.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Socially Shared Regulation in Collaborative Groups: An Analysis of the Interplay Between Quality of Social Regulation and Group Processes", "authors": [ { "first": "T", "middle": [ "K" ], "last": "Rogat", "suffix": "" }, { "first": "L", "middle": [], "last": "Linnenbrink-Garcia", "suffix": "" } ], "year": 2011, "venue": "Cognition and Instruction", "volume": "29", "issue": "", "pages": "375--415", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rogat, T.K., & Linnenbrink-Garcia, L. (2011). Socially Shared Regulation in Collaborative Groups: An Analysis of the Interplay Between Quality of Social Regulation and Group Processes.
Cognition and Instruction 29, 4 (2011), 375-415.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "System guidelines for co-located, collaborative work on a tabletop display", "authors": [ { "first": "S", "middle": [ "D" ], "last": "Scott", "suffix": "" }, { "first": "K", "middle": [ "D" ], "last": "Grant", "suffix": "" }, { "first": "R", "middle": [ "L" ], "last": "Mandryk", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "159--178", "other_ids": {}, "num": null, "urls": [], "raw_text": "Scott, S. D., Grant, K. D., & Mandryk, R. L. (2003). System guidelines for co-located, collaborative work on a tabletop display. In ECSCW 2003, Springer, Dordrecht, 159-178.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Single Display Groupware: A Model for Co-present Collaboration", "authors": [ { "first": "J", "middle": [], "last": "Stewart", "suffix": "" }, { "first": "B", "middle": [ "B" ], "last": "Bederson", "suffix": "" }, { "first": "A", "middle": [], "last": "Druin", "suffix": "" } ], "year": 1999, "venue": "Proceedings of SIGCHI Conference of Human Factors in Computing Systems (CHI 99", "volume": "", "issue": "", "pages": "286--293", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stewart, J., Bederson, B. B., & Druin, A. (1999). Single Display Groupware: A Model for Co-present Collaboration. Proceedings of SIGCHI Conference of Human Factors in Computing Systems (CHI 99), 286 - 293, Pittsburgh, USA. 
ACM Press.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Designing collaborative scenarios on tangible tabletop interfaces-insights from the implementation of paper prototypes in the context of a multidisciplinary design workshop", "authors": [ { "first": "P", "middle": [], "last": "Sunnen", "suffix": "" }, { "first": "B", "middle": [], "last": "Arend", "suffix": "" }, { "first": "S", "middle": [], "last": "Heuser", "suffix": "" }, { "first": "H", "middle": [], "last": "Afkari", "suffix": "" }, { "first": "V", "middle": [], "last": "Maquil", "suffix": "" } ], "year": 2019, "venue": "Proceedings of 17th European Conference on Computer-Supported Cooperative Work", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sunnen, P., Arend, B., Heuser, S., Afkari, H., & Maquil, V. (2019). Designing collaborative scenarios on tangible tabletop interfaces-insights from the implementation of paper prototypes in the context of a multidisciplinary design workshop. Proceedings of 17th European Conference on Computer-Supported Cooperative Work. European Society for Socially Embedded Technologies.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "ELAN: a professional framework for multimodality research", "authors": [ { "first": "P", "middle": [], "last": "Wittenburg", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the 5th Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "1556--1559", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wittenburg P et al. (2006). ELAN: a professional framework for multimodality research. 
Proceedings of the 5th Conference on Language Resources and Evaluation, 1556-1559.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Investigating Separation of Territories and Activity Roles in Children's Collaboration around Tabletops", "authors": [ { "first": "J", "middle": [], "last": "Woodward", "suffix": "" }, { "first": "S", "middle": [], "last": "Esmaeili", "suffix": "" }, { "first": "A", "middle": [], "last": "Jain", "suffix": "" }, { "first": "J", "middle": [], "last": "Bell", "suffix": "" }, { "first": "J", "middle": [], "last": "Ruiz", "suffix": "" }, { "first": "L", "middle": [], "last": "Anthony", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the ACM on Human-Computer Interaction", "volume": "2", "issue": "", "pages": "1--21", "other_ids": {}, "num": null, "urls": [], "raw_text": "Woodward, J., Esmaeili, S., Jain, A., Bell, J., Ruiz, J., & Anthony, L. (2018). Investigating Separation of Territories and Activity Roles in Children's Collaboration around Tabletops. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1-21.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "One user pointing to another user's personal area. Figure 2: Two users pointing to another (same) user's area" }, "FIGREF1": { "type_str": "figure", "uris": null, "num": null, "text": "Users pointing at different directions. Figure 4: Personal areas/control panels" }, "TABREF0": { "num": null, "content": "", "type_str": "table", "text": "Examples of low awareness situations", "html": null }, "TABREF1": { "num": null, "content": "
One user pointing to another user's area (Fig. 1)
Two users pointing to another (same) user's area (Fig. 2)
User A points to user B's area and user C points to user A's area (Fig. 3)
One user pointing to and touching another user's area
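The pointing configurations listed above can be operationalised as a small classifier over annotated events. The event format below (pointer, owner of the targeted area, touched flag) is our own illustrative structure, not the annotation schema used in the study:

```python
from collections import Counter

def classify(event):
    """Map an annotated pointing event onto the configurations above.

    `event` is a hypothetical record: (pointer, owner of the
    targeted personal area, whether the area was also touched).
    """
    pointer, owner, touched = event
    if pointer == owner:
        return "own area"
    return ("pointing and touching other's area" if touched
            else "pointing to other's area")

# Illustrative events for one session: (user, area owner, touched?).
events = [("A", "B", False), ("C", "A", False),
          ("B", "B", False), ("A", "C", True)]
print(Counter(classify(e) for e in events))
```

Tallying classified events per session is one way to obtain the per-category frequencies (e.g. pointing vs. pointing-and-touching) reported in the abstract.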
", "type_str": "table", "text": "Pointing gestures as awareness work", "html": null }, "TABREF4": { "num": null, "content": "", "type_str": "table", "text": "Pointing gestures towards other users' personal areas performed by each user in each group", "html": null }, "TABREF5": { "num": null, "content": "
: Touch-based or manipulative gestures
The taxonomy of pointing gestures has been extended after our user study: the categories pointing to the personal area of another participant and pointing and touching the personal area of another participant have been added.
Pointing    object(s)
            tabletop (shared space)
            personal area of other participant
            other participant(s)
            self-pointing
            pointing and touching personal area of other participant
Iconic      encircling with whole hand
            encircling with index finger
            moving an open hand forward/backward
            moving an open hand downwards vertically
Adaptors    head scratching
            mouth scratching
            nail biting
            hair twirling
Emblems     thumbs up
            victory sign
            fist(s) pump
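For annotation tooling, the extended taxonomy can be kept as a controlled vocabulary and used to validate tier values, for instance when post-processing exports from an annotation tool such as ELAN. The dictionary below merely transcribes the table; the validation helper is a hypothetical convenience, not part of any tool's API:

```python
# The extended mid-air gesture taxonomy as a controlled vocabulary.
MID_AIR_GESTURES = {
    "pointing": [
        "object(s)", "tabletop (shared space)",
        "personal area of other participant", "other participant(s)",
        "self-pointing",
        "pointing and touching personal area of other participant",
    ],
    "iconic": [
        "encircling with whole hand", "encircling with index finger",
        "moving an open hand forward/backward",
        "moving an open hand downwards vertically",
    ],
    "adaptors": ["head scratching", "mouth scratching",
                 "nail biting", "hair twirling"],
    "emblems": ["thumbs up", "victory sign", "fist(s) pump"],
}

def is_valid(category: str, label: str) -> bool:
    """Check that an annotated label belongs to its gesture category."""
    return label in MID_AIR_GESTURES.get(category, [])
```

Validating tier values against such a vocabulary catches typos and out-of-category labels before statistics are computed over the corpus.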
", "type_str": "table", "text": "", "html": null }, "TABREF6": { "num": null, "content": "
: Mid-air gestures with new annotation categories under pointing (in italics)
", "type_str": "table", "text": "", "html": null } } } }