{ "paper_id": "Y08-1032", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:38:09.376596Z" }, "title": "Using a Word-Space Model to Determine the Relevance of Messages in Anchored Asynchronous Online Discussions *", "authors": [ { "first": "Rodolfo", "middle": [], "last": "Raga", "suffix": "", "affiliation": { "laboratory": "", "institution": "Jose Rizal University", "location": { "addrLine": "80 Shaw Boulevard", "settlement": "Mandaluyong City", "country": "Philippines" } }, "email": "" }, { "first": "Jennifer", "middle": [], "last": "Raga", "suffix": "", "affiliation": {}, "email": "jennie.raga@gmail.com" }, { "first": "Erick", "middle": [], "last": "Bonus", "suffix": "", "affiliation": { "laboratory": "", "institution": "Jose Rizal University", "location": { "addrLine": "80 Shaw Boulevard", "settlement": "Mandaluyong City", "country": "Philippines" } }, "email": "erick.bonus@gmail.com" }, { "first": "Raymund", "middle": [], "last": "Sison", "suffix": "", "affiliation": { "laboratory": "", "institution": "Jose Rizal University", "location": { "addrLine": "80 Shaw Boulevard", "settlement": "Mandaluyong City", "country": "Philippines" } }, "email": "sisonr@dlsu.edu.ph" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents results of the first phase of our study aimed at investigating the applicability of word-space models, particularly those generated using Random Indexing (RI) technique, for the task of determining the relevance of messages posted in anchored asynchronous online discussion forums. In this phase, we addressed several questions intended to establish baseline figures: How efficient will the word-space model perform in this task? How much of its classification decisions align with the decisions of the human annotators? More importantly, does the paradigmatic and syntagmatic contexts of words have a direct effect on its output and which produces the best results? Using Cohen's Kappa and Holsti's Coefficient Reliability measure, our experiments generated an initial reliability performance (K=0.41, CR=0.72). It further indicated that the syntagmatic context is more applicable for this task. We concluded with a discussion of the weaknesses identified and possible means of improving the level of performance.", "pdf_parse": { "paper_id": "Y08-1032", "_pdf_hash": "", "abstract": [ { "text": "This paper presents results of the first phase of our study aimed at investigating the applicability of word-space models, particularly those generated using Random Indexing (RI) technique, for the task of determining the relevance of messages posted in anchored asynchronous online discussion forums. In this phase, we addressed several questions intended to establish baseline figures: How efficient will the word-space model perform in this task? How much of its classification decisions align with the decisions of the human annotators? More importantly, does the paradigmatic and syntagmatic contexts of words have a direct effect on its output and which produces the best results? Using Cohen's Kappa and Holsti's Coefficient Reliability measure, our experiments generated an initial reliability performance (K=0.41, CR=0.72). It further indicated that the syntagmatic context is more applicable for this task. 
We concluded with a discussion of the weaknesses identified and possible means of improving the level of performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Asynchronous Online Discussions (AOD), also known as forums or threaded discussion boards, are a popular form of web-based computer-mediated communication (CMC) . More recently, this form of communication has come into focus in education as an alternative form of assessment that can be used to supplement traditional classroom discussions. The advantage of this communication medium lies in its anytime and anywhere accessibility which provides logistic flexibility for students to communicate with each other and for teachers to assess their interaction. A disadvantage of this medium however is that the focus of discussion is prone to topic drifting; this drifting is primarily due to the time-lags inherent in the asynchronous mode of discussion. Another problem is that the manual monitoring and assessment of message contributions are time-consuming and often tedious, especially for teachers. For these reasons, automated analysis for discussion understanding and better methods of topic-focus monitoring to enable more efficient information assessment are much sought for (Ravi & Kim, 2007) .", "cite_spans": [ { "start": 155, "end": 160, "text": "(CMC)", "ref_id": null }, { "start": 1081, "end": 1099, "text": "(Ravi & Kim, 2007)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In our study, we aim to investigate the feasibility of utilizing the functionality of word-space models for the task of analyzing online discussion transcripts. More particularly, we want to determine how word-space models can be used to detect the relevance of individual message posts Copyright 2008 by Rodolfo Raga, Jennifer Raga, Eric Bonus, and Raymund Sison relative to the overall topic of discussion. Several methods for generating word-space models are available and have been successfully tested on detecting similarity between terms in groups of texts, but to our knowledge very few have been applied and tested on analyzing online discussion transcripts, a rare example is that of (McArthur and Bruza, 2003) which applied Latent Semantic Analysis on a small dataset of email correspondence to detect conversational implicatures. In our experiments, we chose to use the Random Indexing (RI) technique. RI is a word-space modeling technique primarily designed to measure similarity between terms and not between documents. In our work, we extended the capability of RI to work both at the document and message size levels.", "cite_spans": [ { "start": 287, "end": 363, "text": "Copyright 2008 by Rodolfo Raga, Jennifer Raga, Eric Bonus, and Raymund Sison", "ref_id": null }, { "start": 693, "end": 719, "text": "(McArthur and Bruza, 2003)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The rest of the paper has the following organization: The second section endeavored to put our study into context by providing a brief background on the problem of topic drifting in online discussions and cites a recent approach used to address it. The third section discussed the concept of word-space models and provides a brief introduction on how it can be implemented using RI. The fourth section presented a description of the experiments we conducted. 
In the fifth section we presented the results of the experiments and provided some discussion. We ended the paper with our conclusion and expected future directions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The topic focus of online discussions constantly changes from one topic to another (Beaudin, 1999) . This phenomenon is what (Potter, 2007) referred to as topic drifts or the tendency of online discussions to stray from their announced topic.", "cite_spans": [ { "start": 83, "end": 98, "text": "(Beaudin, 1999)", "ref_id": "BIBREF0" }, { "start": 125, "end": 139, "text": "(Potter, 2007)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Review of Literature", "sec_num": "2." }, { "text": "In academic settings, researchers try to limit the occurrence of this problem by implementing anchored discussions (Guzdial and Turns, 2000) . These educational online discussions are structured to align the contributions of participants to a single topic represented by the contents of a reference document called the anchor document. Literature shows that this method yields a more coherent discussion than traditional forums by promoting conscious topic monitoring (van der Pol et al, 2006) . However, participants are unconsciously still prone to introduce irrelevant posts which often induce off-topic discussion. As such, tools that can help mediators automatically detect, at the earliest, such irrelevant contributions are still needed.", "cite_spans": [ { "start": 115, "end": 140, "text": "(Guzdial and Turns, 2000)", "ref_id": "BIBREF1" }, { "start": 468, "end": 493, "text": "(van der Pol et al, 2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Review of Literature", "sec_num": "2." }, { "text": "An initial requirement for the development of such tools is a method for determining the alignment of each message to the overall topic focus of the discussion. This necessitates a content analysis approach similar to the strategy proposed by (Teufel and Moens, 1998) whereby the alignment of one text with another was measured by a mechanism that analyzes several characteristics of the aligned sentences such as the presence of particular phrases and occurrence of thematic words and proper names. For our purpose, we are looking towards utilizing the functionalities of word-space models for the same task. The context of the work we're pursuing is the anchored discussion mentioned above. As such, the basis of the relevance value we are trying to measure is the closeness of the semantic information between the contents of each message to the contents of an anchor document.", "cite_spans": [ { "start": 243, "end": 267, "text": "(Teufel and Moens, 1998)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Review of Literature", "sec_num": "2." }, { "text": "Word-space models have previously been observed to provide two modes of semantic similarity between terms. Sahlgren (2006) noted that these modes depend on whether the paradigmatic or syntagmatic relationship of the terms within a context are captured. Syntagmatic context refers to the linear relationship of words and applies to linguistic entities that occur in sequential combinations while paradigmatic context refers to the substitutional relationship of words and applies to words that can be used in the same context but not at the same time. 
We aim to initially investigate, among other things, whether or not these two modes will provide different levels of performance for this task, and if so, which provides the best result.", "cite_spans": [ { "start": 107, "end": 122, "text": "Sahlgren (2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Review of Literature", "sec_num": "2." }, { "text": "A word-space model is a spatial representation that derives the meaning of words by plotting these words in an n-dimensional geometric space (Sahlgren, 2005) . This process is similar to the way points are plotted in a two dimensional graphing paper. The main difference is that, in the case of a word-space, the dimension n can be arbitrarily large. The size of this dimension is determined by the number of unique word type in the set of words to be plotted.", "cite_spans": [ { "start": 141, "end": 157, "text": "(Sahlgren, 2005)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Word Space Model", "sec_num": "3.1." }, { "text": "Usually, the coordinates used to plot each word depends upon the frequency of the contextual feature that each word co-occur with within a text. For example, words that do not co-occur with the word to be plotted within a given context are assigned a coordinate value of zero. The set of zero and non-zero values corresponding to the coordinates of a word in a word-space are recorded in a context vector. Because most of the words in any text dataset will never co-occur with a particular word, the context vectors are often sparse or full of zero values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Space Model", "sec_num": "3.1." }, { "text": "By itself, the position of a word in a word space does not indicate anything about its meaning. To deduce a certain level of meaning, this position needs to be measured relative to the position of other words. In this sense, a linguistic concept known as the distributional hypothesis which states that \"words that occur in the same contexts tend to have similar meanings\" is applied. Having similar contexts means that words are surrounded or that they co-occur with same set of words. Thus, if we plot these words in a word-space they would be positioned close to each other. The level of closeness of words in the word-space is often referred to as the spatial proximity of words. This spatial proximity is what is used to represent the semantic similarity of words. A common approach used to determine spatial proximity is to measure the cosine of the angle generated between the plotted context vectors. The formula for computing cosine is as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Space Model", "sec_num": "3.1." }, { "text": "(1) Where: Q is a vector representing one term or document, D is a vector representing another term or document related to Q, and |Q| and |D| are the magnitudes of Q and D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Word Space Model", "sec_num": "3.1." }, { "text": "Currently, there are three major approaches to implement a word space. These include: Latent Semantic Analysis (LSA), Hyperspace Analogue to Language (HAL), and Random Indexing (Sahlgren, 2005) . For our purpose, we opted to use the Random Indexing approach.", "cite_spans": [ { "start": 177, "end": 193, "text": "(Sahlgren, 2005)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Word Space Model", "sec_num": "3.1." 
}, { "text": "Random Indexing (RI) is a word space modeling technique that can be used with any type of linguistic context. It is inherently incremental and incorporates a built-in dimension reduction phase (Sahlgren, 2005) .", "cite_spans": [ { "start": 193, "end": 209, "text": "(Sahlgren, 2005)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Random Indexing", "sec_num": "3.2." }, { "text": "There are two basic steps involved in using Random Indexing to implement a word-space: 1. The first step involves the assignment of a unique and randomly generated label called an index vector to each word in the data. These vectors are sparse, high-dimensional, and ternary. High-dimensional means that it uses a large number of dimensions while ternary means that the label consists of a small number of randomly distributed +1s and -1s, with the rest of the elements of the vectors set to 0. 2. Then, context vectors with the same number of dimensionality as the index vectors are automatically produced by scanning through the text, and each time a word occurs in a context (e.g. in a document, or within a sliding context window), that word's index vector is added to the context vector for the word in question. 1 RI has attracted much attention with successful applications in term similarity measurement. For our purpose, we have selected RI because of its incremental approach; this means that the 1 Please refer to (Sahlgren,2005) ", "cite_spans": [ { "start": 1025, "end": 1040, "text": "(Sahlgren,2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Random Indexing", "sec_num": "3.2." }, { "text": "for more details Q * D CoSim(Q,D) = |Q| * |D|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Random Indexing", "sec_num": "3.2." }, { "text": "context vectors can be used for similarity computations even after just a few examples have been encountered. This characteristic is useful in experiments and applications such as ours where the size of the dataset is small. Secondly, as already mentioned, Random Indexing can be used with any type of context. This characteristic is useful in our experiment because we want to separately capture the paradigmatic and the syntagmatic contexts of words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Random Indexing", "sec_num": "3.2." }, { "text": "The experiments described below are meant to address the following questions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Research Questions", "sec_num": "4.1." }, { "text": "1. Using only the contents of an external anchor document as basis, how efficient is the word-space model's baseline performance in identifying relevant and irrelevant messages in an online discussion transcript? 2. To what extent will the classification decisions generated by the word-space models for this task align with the classification decisions made by the human annotators? 3. Does the paradigmatic and syntagmatic relationship of words have a direct effect on the performance of the word space model for this task? If so, which of the two provides the best performance?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Research Questions", "sec_num": "4.1." }, { "text": "To provide an answer to the above enumerated questions, we need to separately capture the syntagmatic and paradigmatic relationships of words and use it to build a word-space. 
This entailed implementing two variants of context windows that dissected the sentences in each document into its syntagmatic and paradigmatic context as described below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model Implementations", "sec_num": "4.2." }, { "text": "Following the definition specified in (Sahlgren, 2006) , we defined the syntagmatic context vector v as constituting of n context regions c as follows:", "cite_spans": [ { "start": 38, "end": 54, "text": "(Sahlgren, 2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Syntagmatic Context Window", "sec_num": "4.2.1." }, { "text": "We then defined each context region c as a window that we deem would capture the syntagmatic context of words in a sentence as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntagmatic Context Window", "sec_num": "4.2.1." }, { "text": "c = (t 1 , t 2 ,\u2026, t m ) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntagmatic Context Window", "sec_num": "4.2.1." }, { "text": "where c is a single context, t 1 ..t m are m terms occurring in sequential order within the text being modelled and m is a number defining the size of the window. This window slides through each sentence in the text, one word at a time. As such, some windows will necessarily have null values in them (those that are near the end of the sentence).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntagmatic Context Window", "sec_num": "4.2.1." }, { "text": "In RI terminology, the value we assign to each context c is the sum of the index vector of m terms occurring sequentially in it. For weighing function, we multiplied the index vector of each term to its position index within the context (i.e. the 1st term by 1, the 2nd by 2, and so on).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Syntagmatic Context Window", "sec_num": "4.2.1." }, { "text": "Similarly, following the definition specified in (Sahlgren, 2006) , we defined the paradigmatic context vector v as constituting of n word types fw as follows:", "cite_spans": [ { "start": 49, "end": 65, "text": "(Sahlgren, 2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "4.2.2.Paradigmatic Context Window", "sec_num": null }, { "text": "v = (c 1 ,c 2 ,\u2026,c n ) (2) \u2192 v = (fw 1 ,fw 2 ,\u2026,fw n ) (4) \u2192", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.2.2.Paradigmatic Context Window", "sec_num": null }, { "text": "The value we assigned to each word fw is determined by its surrounding words as contained in a window that we deem would capture the paradigmatic context of the word as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.2.2.Paradigmatic Context Window", "sec_num": null }, { "text": "w = (wp m ,,wp m-1 ,\u2026wp 1 , fw ,ws 1 ,ws 2 ,\u2026,ws m ) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.2.2.Paradigmatic Context Window", "sec_num": null }, { "text": "here, fw is the focus word and wp m ,,wp m-1 ,\u2026wp 1 and ws 1 ,ws 2 ,\u2026,ws m are groups of m sequential words that precede and succeed the focus word respectively within the text to be modeled. 
This window is also a sliding window and rolls through each sentence in the text, one word at a time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.2.2.Paradigmatic Context Window", "sec_num": null }, { "text": "In RI terminology, the value we assign to each focus word fw is the sum of the index vectors of m sequential terms surrounding fw in some m + m context window. For weighing function, we multiplied the value of each surrounding word to their distance from the focus word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.2.2.Paradigmatic Context Window", "sec_num": null }, { "text": "The discussion transcript we used as dataset in our experiments was culled from a public forum. We deemed this transcript as suitable for our purpose since the discussion that generated it revolved around an online article which was cited in the first message. The online article consists of 34 sentences with a total of 602 words and served as the anchor document. Table 4 provides some statistical details on the discussion transcript. ", "cite_spans": [], "ref_spans": [ { "start": 366, "end": 373, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "The Discussion Transcript", "sec_num": "4.3." }, { "text": "The methods we used in this study involved four steps requiring as input a text-based transcript of an online discussion and an external anchor document. The output included a relevance measure for each message in the transcript as well as calculations of reliability statistics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4.4." }, { "text": "First, the text of the selected discussion transcript was downloaded along with the online article that served as the anchor document. These were then encoded into separate tables in a database. This task was accomplished using the Microsoft Access software.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initial Setup", "sec_num": "4.4.1." }, { "text": "We then asked three people to annotate the messages in the discussion transcript. The annotators were instructed to read first the anchor document, after which they were asked to give a binary judgment as to whether they think each message is relevant or irrelevant to the topic expressed in the document {1 = relevant, 0 = irrelevant}. The annotators performed this task independently, without convening with each other. The technical background of the annotators also vary; one is a computer programmer that had extensive hands-on experience with the software tool discussed in the anchor document, one is a computer science instructor having theoretical understanding but little industry-based experience, and the last is a call-center agent who has no background knowledge nor experience in programming.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Initial Setup", "sec_num": "4.4.1." }, { "text": "Second, a tool was constructed to browse through each message body in the database as well as through the anchor document to perform dataset pre-processing. Pre-processing included replacing special symbols and numeric characters with null values, rolling upper-case alphabetic characters into lower case, and segmenting each text into its sentential units. 
The delimiters used to identify sentential units include the symbols [., ?, !, :] .", "cite_spans": [ { "start": 427, "end": 439, "text": "[., ?, !, :]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dataset Preprocessing", "sec_num": "4.4.2." }, { "text": "No stopping or stemming of words was applied to the text. However, within the RI computations, the random index of stopwords was set to zero. This has the same effect as removing those words from the text. The stoplist we used was culled from (Sanderson, n.d.) .", "cite_spans": [ { "start": 243, "end": 260, "text": "(Sanderson, n.d.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Dataset Preprocessing", "sec_num": "4.4.2." }, { "text": "In our experiments we first divided the discussion transcript into two distinct dataset. The first dataset included messages that were unanimously classified by the annotators as being either relevant or irrelevant; this dataset served as our Gold Standard and we refer to it as the unanimous dataset (n=66). The annotators unanimously coded 49 messages as relevant and 17 messages as irrelevant. The other dataset included all the messages; we refer to this dataset as the standard dataset (n=87).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.4.3." }, { "text": "We then conducted two sets of experiments: one using the paradigmatic context and the other the syntagmatic context. In both cases, the RI system was first trained on the anchor document before being applied to the datasets. Also, the generated word space was first tested on the unanimous dataset before applying it to the standard dataset. The purpose of the first test is to establish a baseline on the performance of the word space model in identifying the exact category of a message. The second test was meant to probe the reliability of the decisions of the model as compared to humans with all the noise present. We also experimented on various window sizes. For the syntagmatic context the window sizes we used are w = {2, 4, 6, 8} while the paradigmatic counterpart are w = {1, 2, 3, 4}. The Random Indexing system we used in our experiments was constructed using the JavaSDM package (Hassel, 2004) and the parameters we applied are shown in Table 2 : ", "cite_spans": [ { "start": 894, "end": 908, "text": "(Hassel, 2004)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 952, "end": 959, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4.4.3." }, { "text": "In this final step, we first measured the precision and recall of the classification decisions of the model in classifying messages in the unanimous dataset. Then, reliability measures were taken comparing the classification decisions of the human annotators with those produced by the word space model in the standard dataset. Two reliability measures were employed for this purpose: Holsti's coefficient of reliability (CR) which measures the agreement between two annotators divided by the total number of messages analyzed and Cohen's kappa (K) which computes the proportion of agreement actually observed between annotators after adjusting for the proportion of agreement expected by chance 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability Measures", "sec_num": "4.4.4." 
}, { "text": "To compare the RI system's classification decisions with those of our human annotators, we treated any message with a computed cosine between 0.1 and 1.0 to be relevant (category = 1). Otherwise, it was treated as not-relevant (category = 0). Processing and computations in this final stage was done using the Microsoft Excel software.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability Measures", "sec_num": "4.4.4." }, { "text": "Tables 3 and 4 below shows the precision and recall gathered using the unanimous dataset. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification Precision and Recall", "sec_num": "5.1." }, { "text": "Tables 5 and 6 below present the reliability measures gathered using the standard dataset. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reliability", "sec_num": "5.2." }, { "text": "Initial interpretations we derived from our experiments are three-fold, relative to the three questions presented in section 4.1. In our analysis, we divided the messages into four groups: (1) those that were correctly classified as relevant (true positive), (2) those that were correctly classified as irrelevant (true negative), (3) those that were incorrectly classified as relevant (false positive), and (4) those that were incorrectly classified as irrelevant (false negative).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion", "sec_num": "5.3." }, { "text": "As shown in tables 3 and 4, the highest precision-recall tandem for classifying relevant messages (P=0.57, R=0.71) as well as classifying irrelevant messages (P=0.89, R=0.82) from the unanimous dataset was returned by the RI system implementing the syntagmatic context and using a window size of 2 and 4. The paradigmatic counterpart of these settings returned lower values both for relevant (P=0.31, R=0.47) and irrelevant messages (P=0.78, R=0.63).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question #1: Using only the contents of an external anchor document as basis, how efficient is the word-space model's baseline performance in identifying relevant and irrelevant messages in an online discussion transcript?", "sec_num": null }, { "text": "These results seem to indicate that the word-space model is more efficient at identifying irrelevant messages for this situation. To establish some basis for this performance, we manually analyzed the results on the syntagmatic context. We found that messages that were categorically assigned to the true positive group all contain several technical keywords that also occurred in the anchor document; we initially assumed that the presence of these keywords is the primary reason why these messages were classified as relevant. However, we also found that the true negative group is not exclusive to the messages that contained no shared technical keywords; some messages in this group also contained technical keywords that occurred in the anchor document, one of them even contained as much as three keywords. 
This finding seemed counter-intuitive with the first assumption we made; it implies that while the model relies on the presence of technical keywords to identify relevant messages, it doesn't necessarily rely on the absence of the same to identify irrelevant messages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question #1: Using only the contents of an external anchor document as basis, how efficient is the word-space model's baseline performance in identifying relevant and irrelevant messages in an online discussion transcript?", "sec_num": null }, { "text": "Further analysis led us to consider the way technical keywords are used in each message as a probable cause of this phenomenon. We noticed that the anchor document, being a technical document utilized and prescribed technical keywords in proximity with each other. This characteristic can also be observed in most messages that contained technical discussions. As such, when compared with the anchor document, technical messages got a higher chance of being classified as relevant. All messages assigned to the true positive group are of this type.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question #1: Using only the contents of an external anchor document as basis, how efficient is the word-space model's baseline performance in identifying relevant and irrelevant messages in an online discussion transcript?", "sec_num": null }, { "text": "On the other hand, most non-technical messages are loosely constructed. Some of these messages also cited technical keywords but often presented them either in wide intervals or in isolation. Thus, when compared with the compact structure of the anchor document, the system has a higher tendency to tag them as irrelevant even though they contained technical keywords.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question #1: Using only the contents of an external anchor document as basis, how efficient is the word-space model's baseline performance in identifying relevant and irrelevant messages in an online discussion transcript?", "sec_num": null }, { "text": "What we deduced from this observation is that the word-space model may be more efficient at identifying irrelevant messages in this situation because it can utilize two sources of information: the first is the absence of technical keywords, and, if technical keywords are presented, it can analyze the proximity of these keywords to determine whether they project the same topic as those of the anchor document. The analysis on relevant messages, on the other hand, solely relied on the presence of technical keywords. It has no alternative for recognizing the relevance of messages that have no keywords present. This problem is further compounded by the presence of dominant words (i.e., in our dataset, the word \"sqldatasource\") in the messages. 
Messages assigned to the false positive group seemed to indicate that the model have a tendency to classify as relevant those messages that contained the most frequently occurring word in the anchor document even if this word occurred in isolation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question #1: Using only the contents of an external anchor document as basis, how efficient is the word-space model's baseline performance in identifying relevant and irrelevant messages in an online discussion transcript?", "sec_num": null }, { "text": "As indicated in the values presented in Tables 5 and 6, both the paradigmatic and syntagmatic implementations of RI produced unacceptable values of reliability at this phase. With the RI system implementing the syntagmatic context and using a window size of (w=2) producing the higher reliability score (K=0.41, CR=0.72) with the human annotators. Manual inspection of the messages assigned to the false positive and false negative groups revealed that most of the discrepancies incurred on these tests were made on messages that either expounded on a particular concept or led the discussion to an off-topic direction. In the case of the latter, participants introduced other technical keywords in their messages that are related to but are not found in the anchor document. We assumed that this caused discrepancy because the RI system has no way of recognizing these keywords, especially if they are embedded within analogies. In the former, messages contained questions or brief comments that presented isolated technical keywords. If the technical keyword used is dominant (i.e., the highest occurring), the RI system classified the message as relevant otherwise it is classified as irrelevant. Human annotators, however, are still able to methodically discriminate the true value of these messages. We surmised that unconsciously they do this by utilizing the coherence or semantic relatedness of successive messages within the transcript.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question #2: To what extent will the classification decisions generated by the word-space models for this task align with the classification decisions made by the human annotators?", "sec_num": null }, { "text": "At any rate, we found that the best performance of the model aligned more closely to the decisions made by the computer science instructor-annotator. Table 7 provides the details of this observation. Although this finding is still open to interpretations, we deem that this gives a good background and motivation for the application that we are envisioning. Clearly, the results presented in Tables 3, 4 , 5, and 6 indicated that context has a direct influence on the performance of the model. Another interesting observation that can be deduced from these results is that the RI implementing the syntagmatic context outperformed the RI implementing the paradigmatic context. Two related and supporting hypotheses may be derived from this discrepancy. One, is that the syntagmatic context, for this task, are better geared to model the human annotators' classification decisions than the paradigmatic context, the other is that human annotators may be relying more on the syntagmatic context of words in determining the relevance of messages under the circumstances given. 
Further studies are needed to prove these hypotheses.", "cite_spans": [], "ref_spans": [ { "start": 150, "end": 157, "text": "Table 7", "ref_id": "TABREF6" }, { "start": 392, "end": 403, "text": "Tables 3, 4", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Question #2: To what extent will the classification decisions generated by the word-space models for this task align with the classification decisions made by the human annotators?", "sec_num": null }, { "text": "By intuition, we believe that since information coming from both forms of contexts is available to them, the human annotators are utilizing both of these in finalizing their classification decisions (i.e., the arrangement of the words + the acquired meaning of each word). We argue that this, to a greater extent, caused the misalignment between the decisions exhibited by the word-space model and those made by the human annotators in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Question #2: To what extent will the classification decisions generated by the word-space models for this task align with the classification decisions made by the human annotators?", "sec_num": null }, { "text": "Analyzing the applicability of the word-space model for the task of determining the relevance of forum messages, undoubtedly, requires a multi-phase process. In this paper, we have started exploring the capabilities of the model and gathered figures that will serve as our baseline data. Initially, we have shown the insufficiency of the word-space model's functionality to produce reliable results given the conditions specified. In succeeding phases of our study, we aim to explore other means that may improve the performance level of the word-space model. Some of our prospects include the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "6." }, { "text": "1. Other datasets. It would be interesting to experiment with other datasets, preferably, on larger discussion transcripts but of similar anchor document sizes. This will enable us to determine whether the observations cited in answering question #1 represented recurring patterns or merely an artifact of the dataset we used. 2. Word sense information. In the current model, we tested the performance of the syntagmatic and paradigmatic contexts separately. In future experiments we aim to find ways of combining the functionalities of the two contexts. For example, the paradigmatic context seems more applicable to determining word similarity or antonymity. It would be interesting to determine whether this context can serve as an effective source of word-sense information for the syntagmatic context. This may help improve the precision and recall of the word-space model's performance. 3. Message hierarchy information. In the current model, we treated each message as an independent document. However, in reality, messages in a discussion transcript are interrelated, with previous messages possibly providing a context to succeeding messages. Humans implicitly recognize this relationship but our RI system cannot. We aim to also take this into consideration in future experiments and to enable the system to adjust the relevance scores based on the contextual relationship of each message to its preceding messages. This may help improve the reliability measure. 
The rising popularity of Asynchronous Online Discussion as a tool for supporting student learning necessitates the development of tools that can be used to efficiently monitor and assess online discussions. Word-space models provide an alternative approach for developing these tools. Current applications of word-space models focus on extracting meaning using large-scale corpora, more so, traditional tests of how well each particular model represents meaning largely revolves around word level comparison. The contribution of this paper is that it provides an initial bench-mark on the performance of this approach on small-sized datasets but on larger linguistic structures. We believe that such benchmarks are vital for fine-tuning word-space models for this particular task and application.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and Future Directions", "sec_num": "6." }, { "text": "22nd Pacific Asia Conference on Language, Information and Computation, pages 321-330", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": ". Word-Space Models and Random Indexing (RI)In this section, we present a simple introduction to the concept of word-space modeling, how it is used to assign meaning to a term, and how it can be implemented using Random Indexing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Acceptable values for (K) are between 6.0 -7.0 while (CR) values can range from 0.00 (indicating no agreement) to 1.00 (indicating complete agreement).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Keeping Online Asynchronous Discussions on Topic", "authors": [ { "first": "N", "middle": [ "B" ], "last": "Beaudin", "suffix": "" } ], "year": 1999, "venue": "Journal of Asynchronous Learning Networks", "volume": "3", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Beaudin, N.B. 1999. Keeping Online Asynchronous Discussions on Topic. Journal of Asynchronous Learning Networks, Volume 3, Issue 2 -November 1999", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Effective discussion through a computer-mediated anchored forum", "authors": [ { "first": "M", "middle": [], "last": "Guzdial", "suffix": "" }, { "first": "J", "middle": [], "last": "Turns", "suffix": "" } ], "year": 2000, "venue": "Journal of the Learning Sciences", "volume": "9", "issue": "", "pages": "437--470", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guzdial, M., and J. Turns. 2000. Effective discussion through a computer-mediated anchored forum. Journal of the Learning Sciences, 9, 4, 437-470.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "JavaSDM -A Java package for working with Random Indexing and Granska", "authors": [ { "first": "M", "middle": [], "last": "Hassel", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hassel, M. 2004. JavaSDM -A Java package for working with Random Indexing and Granska. 
http://www.nada.kth.se/~xmartin/java/JavaSDM/", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Discovery of tacit knowledge and topical ebbs and flows within the utterances of online community", "authors": [ { "first": "R", "middle": [], "last": "Mcarthur", "suffix": "" }, { "first": "P", "middle": [ "D" ], "last": "Bruza", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "115--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "McArthur, R., and P.D. Bruza. 2003. Discovery of tacit knowledge and topical ebbs and flows within the utterances of online community, in: Chance Discovery, Y. Ohsawa and P. McBurney eds., Springer Verlag, pp. 115-132.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "An Investigation of Interactional Coherence in Asynchronous Learning Environments", "authors": [ { "first": "A", "middle": [], "last": "Potter", "suffix": "" } ], "year": 2007, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Potter, A. 2007. An Investigation of Interactional Coherence in Asynchronous Learning Environments. Ph.D. thesis, Nova Southeastern University", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Using distributional analysis to represent syntagmatic and paradigmatic relations between words in high dimensional vector spaces", "authors": [ { "first": "S", "middle": [], "last": "Ravi", "suffix": "" }, { "first": "J", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2007, "venue": "Proceedings of Artificial Intelligence in Education Conference, 2007. Sahlgren, M. 2006. The Word-Space Model", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ravi, S. and J. Kim. 2007. Profiling Student Interactions in Threaded Discussions with Speech Act Classifiers. In Proceedings of Artificial Intelligence in Education Conference, 2007. Sahlgren, M. 2006. The Word-Space Model. Using distributional analysis to represent syntagmatic and paradigmatic relations between words in high dimensional vector spaces. Ph.D. thesis, Stockholm University.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "An Introduction to Random Indexing", "authors": [ { "first": "M", "middle": [], "last": "Sahlgren", "suffix": "" } ], "year": 2005, "venue": "Methods and Applications of Semantic Indexing Workshop at the 7th International Conference on Terminology and Knowledge Engineering", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sahlgren, M. 2005. An Introduction to Random Indexing. In Methods and Applications of Semantic Indexing Workshop at the 7th International Conference on Terminology and Knowledge Engineering, Copenhagen, Denmark, 2005.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "IR linguistic utilities -Stop word list", "authors": [ { "first": "M", "middle": [], "last": "Sanderson", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sanderson, M. (n.d.) IR linguistic utilities -Stop word list. Retrieved on June 1, 2008 from http://www.dcs.gla.ac.uk/idom/ir_resources/linguistic_utils/stop_words", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The affordance of anchored discussion for the collaborative processing of academic texts. 
Computer-Supported Collaborative Learning", "authors": [ { "first": "S", "middle": [], "last": "Teufel", "suffix": "" }, { "first": "M", "middle": [], "last": "Moens", "suffix": "" }, { "first": "", "middle": [], "last": "Stanford", "suffix": "" }, { "first": "J", "middle": [], "last": "Van Der Pol", "suffix": "" }, { "first": "W", "middle": [], "last": "", "suffix": "" }, { "first": "P", "middle": [ "R J" ], "last": "Simons", "suffix": "" } ], "year": 1988, "venue": "AAAI Spring Symposium on Intelligent Text Summarization", "volume": "1", "issue": "", "pages": "339--357", "other_ids": {}, "num": null, "urls": [], "raw_text": "Teufel, S., and M. Moens. 1988. Sentence extraction and rhetorical classification for flexible abstracts. In AAAI Spring Symposium on Intelligent Text Summarization, Stanford. van der Pol, J., W. Admiraal and P.R.J. Simons. 2006. The affordance of anchored discussion for the collaborative processing of academic texts. Computer-Supported Collaborative Learning, 2006. 1: 339-357", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "text": "= window size, P-R = precision on relevant messages, R-R = recall on relevant messages, P-IR = precision on irrelevant messages, R-IR = recall on irrelevant messages.", "num": null }, "TABREF0": { "type_str": "table", "num": null, "html": null, "text": "Some statistical details of the discussion transcript.", "content": "
Total number of messages posted    87
Total number of participants       16
Discussion duration                5 days
Average message length             86.3 words
" }, "TABREF1": { "type_str": "table", "num": null, "html": null, "text": "Parameters used in the RI implementation", "content": "
RI Parameter                                  Value Used
Number of dimensions used                     900
Degree of randomness used                     8
Seed value used to generate random numbers    123
" }, "TABREF2": { "type_str": "table", "num": null, "html": null, "text": "Precision and Recall of the classification results using the syntagmatic context", "content": "
        w = 2   w = 4   w = 6   w = 8
P-R     0.57    0.57    0.55    0.52
R-R     0.71    0.71    0.71    0.65
P-IR    0.89    0.89    0.89    0.87
R-IR    0.82    0.82    0.80    0.80
w = window size, P-R = precision on relevant messages, R-R = recall on relevant messages,
P-IR = precision on irrelevant messages, R-IR = recall on irrelevant messages.
" }, "TABREF3": { "type_str": "table", "num": null, "html": null, "text": "Precision and Recall of the classification results using the paradigmatic context", "content": "
        w = 2   w = 4   w = 6   w = 8
P-R     0.30    0.31    0.31    0.31
R-R     0.47    0.47    0.47    0.47
P-IR    0.77    0.78    0.78    0.78
R-IR    0.61    0.63    0.63    0.63
" }, "TABREF4": { "type_str": "table", "num": null, "html": null, "text": "Average Kappa and Coefficient Reliability measures with the human annotators using the syntagmatic context.", "content": "
            w=2     w=4     w=6     w=8
Ave. Kappa  0.41    0.37    0.33    0.30
Ave. CR     0.72    0.71    0.69    0.69
" }, "TABREF5": { "type_str": "table", "num": null, "html": null, "text": "Average Kappa and Coefficient Reliability measures with the human annotators using the paradigmatic context.", "content": "
            w=2     w=4     w=6     w=8
Ave. Kappa  0.12    0.14    0.14    0.14
Ave. CR     0.57    0.58    0.58    0.58
" }, "TABREF6": { "type_str": "table", "num": null, "html": null, "text": "Best performance of the word-space model using syntagmatic context (w=2)", "content": "
        Programmer   Instructor   Call Center Agent
Kappa   0.41         0.44         0.37
CR      0.72         0.74         0.70
Question #3: Does the paradigmatic and syntagmatic relationship of words have a direct effect on the performance of the word space model for this task? If so, which of the two provides the best performance?
" } } } }