{ "paper_id": "Y09-1017", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:43:23.422021Z" }, "title": "Shallow Semantic Parsing of Persian Sentences *", "authors": [ { "first": "Azadeh", "middle": [], "last": "Kamel", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ostad Yousefi -Ghasem Abad -Mashhad -Iran", "location": { "postCode": "0098511-6627512" } }, "email": "azadeh_kamel@hotmail.com" }, { "first": "Saeed", "middle": [], "last": "Rahati", "suffix": "", "affiliation": { "laboratory": "", "institution": "Ostad Yousefi -Ghasem Abad -Mashhad -Iran", "location": { "postCode": "0098511-6627512" } }, "email": "" }, { "first": "Azam", "middle": [], "last": "Estaji", "suffix": "", "affiliation": {}, "email": "estaji@um.ac.ir" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Extracting semantic roles is one of the major steps in representing text meaning. It refers to finding the semantic relations between a predicate and syntactic constituents in a sentence. In this paper we present a semantic role labeling system for Persian, using memory-based learning model and standard features. We show that good semantic parsing results can be achieved with a small 1300-sentence training set. In order to extract features, we developed a shallow syntactic parser which divides the sentence into segments with certain syntactic units. The input data for both systems is drawn from Hamshahri corpus which is hand-labeled with required syntactic and semantic information. The results show an F-score of 90.3% on argument boundary detection task and an F-score of 87.4% on semantic role labeling task using Gold-standard parses. An overall system performance shows an F-score of 83.8% on complete semantic role labeling system i.e. boundary plus classification.", "pdf_parse": { "paper_id": "Y09-1017", "_pdf_hash": "", "abstract": [ { "text": "Extracting semantic roles is one of the major steps in representing text meaning. It refers to finding the semantic relations between a predicate and syntactic constituents in a sentence. In this paper we present a semantic role labeling system for Persian, using memory-based learning model and standard features. We show that good semantic parsing results can be achieved with a small 1300-sentence training set. In order to extract features, we developed a shallow syntactic parser which divides the sentence into segments with certain syntactic units. The input data for both systems is drawn from Hamshahri corpus which is hand-labeled with required syntactic and semantic information. The results show an F-score of 90.3% on argument boundary detection task and an F-score of 87.4% on semantic role labeling task using Gold-standard parses. An overall system performance shows an F-score of 83.8% on complete semantic role labeling system i.e. boundary plus classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Semantic role labeling (SRL), also called shallow semantic parsing, involves identifying which groups of words (phrases) act as the arguments to a given predicate. 
These arguments must be labeled with the role they play in relation to the predicate (verb), indicating how the proposition should be semantically interpreted (Hacioglu, 2004) .", "cite_spans": [ { "start": 323, "end": 339, "text": "(Hacioglu, 2004)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A number of algorithms have been proposed for automatically assigning such shallow semantic structure to English sentences. But little is understood about how these algorithms may perform in other languages, and in general the role of language-specific idiosyncrasies in the extraction of semantic content, and how to train these algorithms when large hand-labeled training sets are not available (Sun and Jurafsky, 2004) .", "cite_spans": [ { "start": 397, "end": 421, "text": "(Sun and Jurafsky, 2004)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "So, to design an optimal model for a Persian SRL system we should take into account specific linguistic aspects of the language. Regarding the remarkable amount of research that has already been done in English, we can capitalize from it to design a basic and effective SRL system. The idea is to use the technology for English and verify if it is suitable for Persian. Our proposed SRL system implements a two-phase architecture to first identify the arguments by a shallow syntactic parser or chunker, and then to label them with appropriate semantic role, with respect to the predicate of the sentence. We treat both phases as a multiclass classification problem, where the classifier is trained in a supervised manner, from human-annotated data, using memory-based learning. To our knowledge it is the first corpus based SRL system for Persian.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Memory-based language processing is based on the idea that NLP problems can be solved by storing solved examples of the problem in their literal form in memory, and applying similarity-based reasoning on these examples in order to solve new ones. Keeping literal forms in memory has been argued to provide a key advantage over abstracting methods in NLP that ignore exceptions and subregularities (Morante and Busser, 2007) .", "cite_spans": [ { "start": 397, "end": 423, "text": "(Morante and Busser, 2007)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "MBL works best when the features have been carefully selected and weighted (Hammerton et al., 2002) . We have used some syntactic properties of arguments for the feature set. But since no automatic parser exists to syntactically parse Persian sentences, we decided to develop a system for shallow parsing of Persian sentences in the first phase of the system. Shallow parsing (also called partial parsing) most often refers to the task of chunking and has become an interesting alternative to full parsing. It is a natural language processing technique that attempts to determine the constituents' boundaries in the sentence, but without parsing it fully into a parsed tree form (Marquez et al., 2008) . Shallow parsing is easily trainable, fast, robust and much less ambiguous. 
Such properties make it a good choice over full parsing (Hammerton et al., 2002) .", "cite_spans": [ { "start": 75, "end": 99, "text": "(Hammerton et al., 2002)", "ref_id": "BIBREF4" }, { "start": 679, "end": 701, "text": "(Marquez et al., 2008)", "ref_id": "BIBREF6" }, { "start": 835, "end": 859, "text": "(Hammerton et al., 2002)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The rest of this paper is organized as follow: We first describe our creation of a small 2000sentence Persian corpus labeled with 12 selected thematic roles in section 2. Section 3 introduces the general architecture of our model and describes its components in details. The experimental results are shown in section 4. Finally, conclusion of this study is presented in section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In all examples throughout this paper, we will show Persian sentences by their transliteration in italic between quotes followed by their translation to English between parentheses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The creation of semantically annotated corpora for Persian has lagged behind. Here we select some parts of the 2.5M word Hamshahri corpus (Oroumchian, 2006 ) (which has been previously assigned POS tags) and manually label it with the syntactic and semantic information needed for the system. The small created corpus contains sentences with varied structures and domains such as politic, social, science, sport, history. In this section, we first describe the semantic roles we used in the annotation and then introduce the data for our experiments.", "cite_spans": [ { "start": 138, "end": 155, "text": "(Oroumchian, 2006", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Semantic Annotation and the Corpus", "sec_num": "2" }, { "text": "Semantic roles, also called thematic roles or \u03b8-roles, are characterizations of certain semantic relationships which hold between a verb and its complements (and adjuncts). For example in the following sentence: 'pedare Ali in khane ra az tajeri kharid.' (Ali's father bought the house from a businessman.) 'pedare Ali' (Ali's father) is the Agent, 'in khane ra' (the house) is the Patient, 'az tajer' (from a businessman) is the Source of the buying event denoted by the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic roles", "sec_num": "2.1" }, { "text": "Semantic roles are one of the oldest issues in linguistic theory that were first mentioned by Jeffrey Gruber (Wagner, 2004) . There is no standard set of semantic roles, nor about their nature or their status in linguistic theory. The set of roles proposed by linguists range from very specific to very general (Lim et al., 2004) . At the specific end of this spectrum are domainspecific roles applied in some information extraction systems such as the From-City, To-City, or Receive-Time roles, which can be applied in reservation systems, or verb-specific roles such as Buyer, Goods and Seller for the verb buy. The other end of the spectrum consists of theories with only two \"proto-roles\": Proto-Agent and Proto-Patient (Dowty, 1991) . 
In between there are many theories which propose the limited number of roles (approximately ten roles), such as Fillmore (1971) 's list of nine: Agent, Experiencer, Instrument, Obejct, Source, Goal, Location, Time and Path.", "cite_spans": [ { "start": 109, "end": 123, "text": "(Wagner, 2004)", "ref_id": "BIBREF14" }, { "start": 311, "end": 329, "text": "(Lim et al., 2004)", "ref_id": "BIBREF5" }, { "start": 724, "end": 737, "text": "(Dowty, 1991)", "ref_id": "BIBREF1" }, { "start": 852, "end": 867, "text": "Fillmore (1971)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Semantic roles", "sec_num": "2.1" }, { "text": "For the task of this paper, we initially employed the role set proposed by Fillmore and then a number of roles are added to provide more abstract semantic characterization. Our proposed role set consists of 12 roles (Table 1) which is divided into two classes: primary and general roles. (1) The primary roles are the roles which are predicate-specific such as Agent, Patient, Source, Goal, Topic, Percept, Instrument and Beneficiary. For different predicates some subset of these roles may be available.", "cite_spans": [], "ref_spans": [ { "start": 216, "end": 225, "text": "(Table 1)", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Semantic roles", "sec_num": "2.1" }, { "text": "(2) The general roles are those which are assumed to apply across all verbs, they are optional for an event but supply more information about an event including Location, Time, Manner and Reason. For example in the sentence: 'Ali enshaayash ra ba sedaye boland dar kelas khand.' (Ali read his composition loudly in the class.) we have the following primary and general roles:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic roles", "sec_num": "2.1" }, { "text": "Phrase Role Role-Class 'Ali' (Ali) Agent Primary 'enshayash ra' (his composition) Patient Primary 'ba sedaye boland' (loudly) Manner General 'dar kelass' (in the class) Location General 'khand' (read) Predicate", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Semantic roles", "sec_num": "2.1" }, { "text": "We created our training and test corpora by choosing 50 simple verbs 1 , and then selecting all sentences containing these 50 verbs from the 2.5M-word Hamshahri corpus. We chose the 50 verbs by considering frequency, syntactic diversity, and word sense. We chose verbs that were frequent enough to provide sufficient training data. The frequencies of the 50 verbs range from 10 to 60, with an average of 35.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The training and test sets", "sec_num": "2.2" }, { "text": "We chose verbs that were representative of the variety of verbal syntactic behavior in Persian, including verbs with one, two, and three arguments, and verbs with various patterns of argument linking. Finally, we chose verbs that varied in their number of word senses. In total, we selected 2000 sentences. The third author then labeled each verbal argument/adjunct in each sentence with a role label. We created our training and test sets by splitting the data for each verb into two parts: 70% for training and 30% for test. Thus there are 1300 sentences in the training set and 700 sentences in the test set, and each test set verb has been seen in the training set. 
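A minimal sketch of this per-verb 70/30 split, written in Python for illustration only (the corpus is assumed here to be a list of (verb, sentence) pairs, which is a simplification of the actual annotation format):

import random
from collections import defaultdict

def split_per_verb(labeled_sentences, train_ratio=0.7, seed=0):
    # Split the annotated sentences 70/30 separately for each verb, so that
    # every verb occurring in the test set has also been seen in training.
    # `labeled_sentences` is assumed to be a list of (verb, sentence) pairs.
    by_verb = defaultdict(list)
    for verb, sentence in labeled_sentences:
        by_verb[verb].append(sentence)
    rng = random.Random(seed)
    train, test = [], []
    for verb, sentences in by_verb.items():
        rng.shuffle(sentences)
        cut = int(len(sentences) * train_ratio)
        train.extend((verb, s) for s in sentences[:cut])
        test.extend((verb, s) for s in sentences[cut:])
    return train, test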
The list of verbs chosen along with their semantic class will be discussed in section 3.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The training and test sets", "sec_num": "2.2" }, { "text": "It is worth pointing out that the system can be generalized to perform on all verbs of the language by annotating a larger corpus with semantic information. In the next section we will describe our proposed SRL approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The training and test sets", "sec_num": "2.2" }, { "text": "3 System description Figure 1 shows the overall architecture of our model. As it can be seen from the figure, the task of automatic semantic role assignment is divided into two main subtasks: (1) Identification of the target argument boundaries and (2) labeling the arguments with appropriate semantic roles.", "cite_spans": [], "ref_spans": [ { "start": 21, "end": 29, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The training and test sets", "sec_num": "2.2" }, { "text": "The first part (subtask) can be accomplished by developing a shallow syntactic parser. As Persian is almost a free word order language and this property results in high structural ambiguity, applying a shallow parsing method can make significant improvements in argument identification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The training and test sets", "sec_num": "2.2" }, { "text": "The second part (subtask) uses a machine learning method to distinguish different roles such as Agent, Goal, etc and also a repository of various Persian verbs and their features. This part faces a complicated problem since the number of arguments and their positions vary depending on a verb's voice (active/passive) and sense, along with many other factors. Regarding the classifier we have used Memory-Based Learning (MBL) for both systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The training and test sets", "sec_num": "2.2" }, { "text": "In the rest of this section we first provide more technical details of Memory-based learning and then describe the implementation of both systems in more detailed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The training and test sets", "sec_num": "2.2" }, { "text": "The basic idea behind memory-based learning is that concepts can be classified by their similarity with previously seen concepts (Stevens, 2006) . For the task of this paper we have used TiMBL (Tilburg Memory-Based Learner), a software tool which contains several algorithms with different parameters. We describe these algorithms briefly in continue.", "cite_spans": [ { "start": 129, "end": 144, "text": "(Stevens, 2006)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Memory-Based Learning", "sec_num": "3.1" }, { "text": "An MBL system, contains two components: a learning component which is memory-based (from which MBL borrows its name), and a performance component which is similarity-based.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory-Based Learning", "sec_num": "3.1" }, { "text": "The learning component of MBL is memory-based as it involves adding training instances to memory. 
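As a toy illustration of these two components, the following Python sketch stores instances verbatim and classifies new ones by a k-nearest-neighbour vote under the simple overlap distance discussed below; it illustrates the idea only and is not the TiMBL implementation used in this work:

def overlap_distance(x, y):
    # Overlap metric for symbolic features: count the mismatching positions.
    return sum(1 for xi, yi in zip(x, y) if xi != yi)

class ToyMemoryBasedLearner:
    def __init__(self, k=1):
        self.k = k
        self.memory = []  # learning component: stored (feature_vector, label) pairs

    def learn(self, feature_vector, label):
        self.memory.append((tuple(feature_vector), label))

    def classify(self, feature_vector):
        # Performance component: majority vote among the k most similar stored
        # instances (assumes at least one instance has been stored; ties are
        # broken arbitrarily).
        neighbours = sorted(self.memory,
                            key=lambda inst: overlap_distance(inst[0], feature_vector))
        top_k = [label for _, label in neighbours[:self.k]]
        return max(set(top_k), key=top_k.count)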
An instance consists of a fixed-length vector of n feature-value pairs, and an information field containing the classification of that particular feature-value vector (Daelemans et al., 2006) .", "cite_spans": [ { "start": 265, "end": 289, "text": "(Daelemans et al., 2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Memory-Based Learning", "sec_num": "3.1" }, { "text": "IB1 is a k-nearest neighbor algorithm, which is the default learning method in TiMBL. The second algorithm, IGTREE, stores examples in a tree which is pruned according to the weightings. This makes it much faster and of comparable accuracy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory-Based Learning", "sec_num": "3.1" }, { "text": "In the performance component of an MBL system, the product of the learning component is used as a basis for mapping input to output; this usually takes the form of performing classification (Daelemans et al., 2006) .", "cite_spans": [ { "start": 190, "end": 214, "text": "(Daelemans et al., 2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Memory-Based Learning", "sec_num": "3.1" }, { "text": "During classification, a previously unseen test example is presented to the system. The similarity between the new instance X and all examples Y in memory is computed using some", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory-Based Learning", "sec_num": "3.1" }, { "text": "distance metric ( ) Y X , \u2206", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory-Based Learning", "sec_num": "3.1" }, { "text": ". The extrapolation is done by assigning the most frequent category within the found set of most similar example(s) (the k-nearest neighbors) as the category of the new test example. In case of a tie among categories, a tie breaking resolution method is used (Daelemans et al., 2006) .", "cite_spans": [ { "start": 259, "end": 283, "text": "(Daelemans et al., 2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Memory-Based Learning", "sec_num": "3.1" }, { "text": "The most basic metric that works for patterns with symbolic features is the Overlap metric given in (1) and (2); where ( )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory-Based Learning", "sec_num": "3.1" }, { "text": "Y X , \u2206", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory-Based Learning", "sec_num": "3.1" }, { "text": "is the distance between instances X and Y, represented by n features, and \u03b4 is the distance per feature. The distance between two patterns is simply the sum of the differences between the features. The k-NN algorithm with this metric is called IB1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory-Based Learning", "sec_num": "3.1" }, { "text": "( ) ( ) \u2211 = = n 1 i i y , i x Y , X \u03b4 \u2206 (1) Where: ( ) \uf8f4 \uf8f4 \uf8f4 \uf8f3 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f1 \u2260 = \uf8f7 \uf8f7 \uf8f8 \uf8f6 \uf8ec \uf8ec \uf8ed \uf8eb \u2212 \u2212 = i y i x if 1 i y i x if 0 numeric if i min i max y x abs i y , i x i i \u03b4 (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Memory-Based Learning", "sec_num": "3.1" }, { "text": "TiMBL, also, automatically learns weights for the features, using one of five different weighting methods: no weighting, gain ratio, information gain, chi-squared and shared variance. Daelemans et al. 
(1999) have shown that for typical natural language tasks, this approach has the advantage that it also extrapolates from exceptional and low-frequency instances.", "cite_spans": [ { "start": 184, "end": 207, "text": "Daelemans et al. (1999)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Memory-Based Learning", "sec_num": "3.1" }, { "text": "The main goal of a shallow parser is to divide a sentence into segments which correspond to certain syntactic units (mostly either noun, verb, or preposition phrase). These segments represent semantic arguments of a given predicate (often shown by the verb). There are different tagging methods for determining constituents' boundaries in the sentence. The bracket style and IOB tag set are the two common tagging styles. In this paper the alternative style for representing chunks is IOB form to determine the begining and continuation of chunks in a sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 1: Shallow Syntactic Parsing", "sec_num": "3.2" }, { "text": "IOB was first used by Ratnaparkhi (1997) . In this approach, each word is associated with one of three tags: I (for a word inside a chunk), O (for outside of a chunk), and B (for between the end of one and the start of a chunk). The B and I tags are suffixed with the chunk type. For instance, if we try to chunk a sentence into NP, VP, and PP chunks, we might have the following tags:", "cite_spans": [ { "start": 22, "end": 40, "text": "Ratnaparkhi (1997)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Phase 1: Shallow Syntactic Parsing", "sec_num": "3.2" }, { "text": "\u2022 B-X: the word begins a chunk of type X (NP, VP, PP, and so forth)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 1: Shallow Syntactic Parsing", "sec_num": "3.2" }, { "text": "\u2022 I-X: the word belong to a chunk of type X but does not begin it \u2022 O: the word does not belong any chunk There are different chunk representation, on the basis of IOB, from which we can mention IOB1, IOB2, IOE1, IOE2, where 'E' shows the last word in phrase. The example below illustrates three different chunk types (NP, VP and PP) for the sentence 'Ali ketab ra beh doostash daad.' (Ali gave the book to his friend.) shown in IOB structure: We have manually tagged 1500 sentences of Hamshahri corpus with IOB tag set to serve as training data and benchmark corpus for the experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 1: Shallow Syntactic Parsing", "sec_num": "3.2" }, { "text": "The following features (Table 2) , used for shallow parsing, are selected according to the empirical observation and some semantic meanings. Post -1 word POS tag 5", "cite_spans": [], "ref_spans": [ { "start": 23, "end": 32, "text": "(Table 2)", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Phase 1: Shallow Syntactic Parsing", "sec_num": "3.2" }, { "text": "Post -2 word POS tag Current POS tag is the part of speech tag for the current word. Pre-1 is the POS tag of the first word before the labeled word in the sentence. If the Pre-1 word does not exit null tag will be assigned. Pre-2 is the POS tag of the second word before the labeled word in the sentence. Post -1 is the POS tag of the first word after the labeling word in the sentence. 
And post -2 is the POS tag of the second word after the labeling word in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 1: Shallow Syntactic Parsing", "sec_num": "3.2" }, { "text": "We have developed a program in VB.NET to extract these features automatically and feed to MBL classifier.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 1: Shallow Syntactic Parsing", "sec_num": "3.2" }, { "text": "After identifying the arguments, it is time to tag them with semantic roles. The SRL system makes use of the information provided by syntactic parser. In this way we replaced features derived from the hierarchical structure with ones derived from a flat chunked representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 2: Semantic Role Labeling", "sec_num": "3.3" }, { "text": "The feature set plays an important role in MBL performance, and choosing features is certainly not a trivial task. Features were mainly selected from the review of previous literature. We investigated each of these features in Persian, some acted quite similarly to English, while others showed interesting differences. Six features showed interesting patterns that are discussed below:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 2: Semantic Role Labeling", "sec_num": "3.3" }, { "text": "\u2022 Current argument phrase type: the syntactic type of constituents (NP,PP,VP,ADV,SP).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 2: Semantic Role Labeling", "sec_num": "3.3" }, { "text": "\u2022 Previous argument phrase type: Since we do not have any hierarchical syntactic parser for Persian sentences, we tried to exploit the syntactic structure of the sentence by moving a sliding window of size three, over the sentence's constituents and make use of the collocation pattern of phrase type in the sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 2: Semantic Role Labeling", "sec_num": "3.3" }, { "text": "\u2022 Next argument phrase type \u2022 Position: The position feature indicates that a constituent is before or after the target verb. Normal sentences in Persian are structured SOV, subject-preposition-object-verb. However, Persian can have relatively free word order, often called scrambling. This is because the parts of speech are generally unambiguous, and prepositions and the accusative marker help disambiguate the case of a given noun phrase. In our corpus, 75% of the roles are before the verb while 25% are after the verb. As in English the position is a useful cue for role identity. For example all the agents are before the verb and 30% of patients are after the verb, mostly appearing in the form of complement clause. \u2022 Voice: The active and passive verb forms in Persian share the same predicate argument structure; but the grammatical functions may be mapped to different sets of semantic roles. Our entire 1324-sentence corpus consists of 1152 (87%) active sentences and 172 (13%) passive. \u2022 Verb Class: These classes are based on the semantic roles each verb can take. More detailed descriptions are given below. Although several classifications are now available for English verbs, there is no such classification for Persian verbs. In this work, we provided a classification for Persian verbs consist of 18 classes which groups on the basis of both syntactic and semantic alternations. 
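As an illustration of how these six features could be assembled into one classification instance per argument, the Python sketch below encodes an argument chunk together with its role label; the field names, the NULL padding value and the chunk representation are assumptions made for the example, not the exact input format fed to TiMBL in this work:

from collections import namedtuple

SRLInstance = namedtuple(
    "SRLInstance",
    ["phrase_type", "prev_phrase_type", "next_phrase_type",
     "position", "voice", "verb_class", "role"])

def encode_argument(chunks, index, voice, verb_class, role):
    # `chunks` is assumed to be the shallow-parser output for one sentence:
    # a list of (phrase_type, is_before_verb) pairs, one per argument chunk.
    phrase_type, before_verb = chunks[index]
    prev_type = chunks[index - 1][0] if index > 0 else "NULL"
    next_type = chunks[index + 1][0] if index + 1 < len(chunks) else "NULL"
    position = "before" if before_verb else "after"
    return SRLInstance(phrase_type, prev_type, next_type,
                       position, voice, verb_class, role)

# e.g. an NP argument preceding the verb, for a class-9 verb in an active sentence:
example = encode_argument([("NP", True), ("PP", True), ("PP", True)],
                          0, "active", 9, "Agent")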
For this purpose, we first grouped a number of Persian verbs (50 verbs at the first stage) according to the number of syntactic arguments and then classified them into smaller groups which have similar set of semantic roles. Having a membership in a particular class says something about the predicate-argument structure of a verb and when a verb is absent in the training data, the class information may tell the system how to label the semantic roles of the verbs belonging to a particular class.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 2: Semantic Role Labeling", "sec_num": "3.3" }, { "text": "For example verb 'Gorikhtan' (Escape) belongs to verb class 9 which is described as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 2: Semantic Role Labeling", "sec_num": "3.3" }, { "text": "Verb class 9: [+Agent, source,goal, inst] Semantic roles for the sentence 'se zendani ba helicopter rooze shanbe az zendan gorikhtand' (three prisoners have escaped from prison with a helicopter on Saturday) are as follows: Agent Se zendani (three prisoners) Instrument Ba helicopter (with helicopter) Time Rooze shanbe (on saturday) source Az zendan (from prison) Predicate Gorikhtand (escape)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phase 2: Semantic Role Labeling", "sec_num": "3.3" }, { "text": "The example demonstrates the fact that arguments don't necessarily appear in the order that they are written in the role set. The complete list of verbs and their semantic classes are given in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 193, "end": 200, "text": "Table 3", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Phase 2: Semantic Role Labeling", "sec_num": "3.3" }, { "text": "The experiments are carried out with the TiMBL software available at http://ilk.uvt.nl/. Regarding the learning algorithm, we use the IB1 classifier, parameterized by using overlap as the similarity metric, information gain for feature weighting, using 1 k-nearest neighbors, and weighting the class vote of neighbors as a function of their inverse linear distance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "4" }, { "text": "We used three measures for the evaluation of our system: precision, recall and a combined measure: F-Score. Precision is defined as the proportion of predicted arguments that is predicted correctly, recall as the proportion of correctly predicted arguments. The F-Score is the harmonic mean of precision and recall. To measure the performance of the automatic systems, the automatically assigned labels were compared to the labels assigned by a human annotator. The results obtained from the shallow syntactic parser with 1000 sentences training data and 500 sentences for test are shown in Table 4 : Table 5 shows the performance of semantic role labeling, trained on 1300 sentences and tested with 700 sentences, for each semantic role. We have assumed that the system input is correct (ignoring the errors caused by the syntactic parser). Low scores are generally related to low frequency of the SR in the training corpus, and high scores are related to high frequency or to overt marking of the SR. Table 6 shows the overall results for the SRL phase, with Gold Standard (hand-corrected) input, which is the average of values from Table 5 . Common methods for averaging F-scores are micro-averaging and macro-averaging. 
In micro-averaging, each class' F-score is weighted proportionally to the frequency of the class in the test set. A macro-average adds all the Fscores and divides this sum by the number of classes in the training set Table 6 : Results for Semantic role set using hand-corrected input.", "cite_spans": [], "ref_spans": [ { "start": 591, "end": 598, "text": "Table 4", "ref_id": "TABREF4" }, { "start": 601, "end": 608, "text": "Table 5", "ref_id": "TABREF5" }, { "start": 1003, "end": 1010, "text": "Table 6", "ref_id": null }, { "start": 1135, "end": 1142, "text": "Table 5", "ref_id": "TABREF5" }, { "start": 1441, "end": 1448, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Experimental Results", "sec_num": "4" }, { "text": "87.4 F_Score (Micro-avg) 70.6 F_Score (Macro-avg) Table 7 presents the overall SRL system. In practical use, automatic parses will not be as accurate. The final system performance will depend on the results obtained from both phases and is obviously less than what was reported on Table 6 . It is difficult to compare our system with existing systems, since our system is the first one to be applied to Persian texts. Moreover, our data format and data size are different from earlier research. However, to put our results somewhat in perspective, we looked at the performance of state-of-the-art SRL systems for English.", "cite_spans": [], "ref_spans": [ { "start": 50, "end": 57, "text": "Table 7", "ref_id": "TABREF6" }, { "start": 281, "end": 288, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Total Accuracy", "sec_num": "87.4" }, { "text": "The CoNLL shared tasks provide an excellent source of information on English PropBank SRL systems that use features extracted from binary phrase structure trees. The best performing system that participated in CoNLL 2005 (Pradhan et al., 2005) reached an F-score of around 93%. By considering the significant difference between our corpus size and the one they used, along with the syntactic parsers exists for English, this difference in performance can be explained.", "cite_spans": [ { "start": 221, "end": 243, "text": "(Pradhan et al., 2005)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Total Accuracy", "sec_num": "87.4" }, { "text": "A system that did not participate in the CoNLL task, but still provides interesting material for comparison since it is the only SRL system developed for Persian, is Mousavi and Shamsfard's (2007) system. They used a rule-based approach to semantically label Persian sentences and achieved 76.8% precision and 75.1% recall. Since Persian is a free word order language it's not practical to extract a limited list of rules which covers all different sentence structures plus exceptions and irregularities. But our system learns from inside the text, and so deals better with unseen data, and as the data size increases the system can identify more cases with higher accuracy.", "cite_spans": [ { "start": 166, "end": 196, "text": "Mousavi and Shamsfard's (2007)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Total Accuracy", "sec_num": "87.4" }, { "text": "In this paper we addressed the question of assigning semantic roles to sentences in Persian (Farsi). We have developed a two phase semantic role labeling system for Persian using memory-based learning model. Since no semantic annotated corpus is available for Persian we created a small 2000 sentence corpus and hand-labeled it for semantic roles. 
The system yields results that are very promising, 90.3% for chunking phase and 83.8% for the overall SRL system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "We can draw a number of conclusions from our investigation of semantic parsing in Persian. First, reasonably good performance can be achieved with a very small (1300 sentences) training set. Second, the features that we extracted for English semantic parsing worked well when applied to Persian. And that shallow parsing can be a good replacement for full parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "We also need to conduct more experiment with the features to figure out which features are most useful for Persian. It would also be interesting to see how the classifier would perform on larger collections and new genres of data. The follow-up of the Persian SRL project will provide new semantically annotated data to facilitate research in this area, and also improve Persian parsing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "23rd Pacific Asia Conference on Language, Information and Computation, pages 150-159", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "A simple verb is one whose infinitive consist of one word", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "TiMBL: Tilburg Memory-Based Learner", "authors": [ { "first": "W", "middle": [], "last": "Daelemans", "suffix": "" }, { "first": "J", "middle": [], "last": "Zavrel", "suffix": "" }, { "first": "K", "middle": [], "last": "Sloot", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daelemans, W., J. Zavrel, and K. Sloot. 2006. TiMBL: Tilburg Memory-Based Learner. Tilburg University and CNTS Research Group, University of Antwerp.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Thematic Proto-roles and Argument Selection. Language, 67", "authors": [ { "first": "D", "middle": [], "last": "Dowty", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "547--619", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dowty, D. 1991. Thematic Proto-roles and Argument Selection. Language, 67, pp.547-619.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The Case for Case", "authors": [ { "first": "C", "middle": [], "last": "Fillmore", "suffix": "" } ], "year": 1997, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fillmore, C. 1997. The Case for Case. Academic Press, New York.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Semantic role labeling using dependency trees", "authors": [ { "first": "K", "middle": [], "last": "Hacioglu", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 20th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hacioglu, K. 2004. Semantic role labeling using dependency trees. 
Proceedings of the 20th International Conference on Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Introduction to Special Issue on Machine Learning Approaches to Shallow Parsing", "authors": [ { "first": "J", "middle": [], "last": "Hammerton", "suffix": "" }, { "first": "M", "middle": [], "last": "Osborne", "suffix": "" }, { "first": "S", "middle": [], "last": "Armstrong", "suffix": "" }, { "first": "W", "middle": [], "last": "Daelemans", "suffix": "" } ], "year": 2002, "venue": "Journal of Machine Learning Research", "volume": "", "issue": "", "pages": "551--558", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hammerton, J., M. Osborne, S. Armstrong, and W. Daelemans. 2002. Introduction to Special Issue on Machine Learning Approaches to Shallow Parsing. Journal of Machine Learning Research, 551-558.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Semantic role labeling using maximum entropy model", "authors": [ { "first": "J", "middle": [], "last": "Lim", "suffix": "" }, { "first": "Y", "middle": [], "last": "Hwang", "suffix": "" }, { "first": "S", "middle": [], "last": "Park", "suffix": "" }, { "first": "H", "middle": [], "last": "Rim", "suffix": "" } ], "year": 2004, "venue": "Proceedings of CoNLL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lim, J., Y. Hwang, S. Park, and H. Rim. 2004. Semantic role labeling using maximum entropy model. In Proceedings of CoNLL-2004.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Semantic Role Labeling: An Introduction to the Special Issue", "authors": [ { "first": "L", "middle": [], "last": "Marquez", "suffix": "" }, { "first": "X", "middle": [], "last": "Carreras", "suffix": "" }, { "first": "K", "middle": [ "C" ], "last": "Litkowski", "suffix": "" }, { "first": "S", "middle": [], "last": "Stevenson", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "2", "pages": "145--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marquez, L., X. Carreras, K.C. Litkowski, and S. Stevenson. 2008. Semantic Role Labeling: An Introduction to the Special Issue. Computational Linguistics, 34(2), 145-159.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Semantic Role Labelling for Catalan and Spanish using TiMBL", "authors": [ { "first": "R", "middle": [], "last": "Morante", "suffix": "" }, { "first": "B", "middle": [], "last": "Busser", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007)", "volume": "", "issue": "", "pages": "183--186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Morante, R. and B. Busser. 2007. Semantic Role Labelling for Catalan and Spanish using TiMBL. Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval- 2007), pages 183-186.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Thematic Role Extraction Using Shallow Parsing", "authors": [ { "first": "M", "middle": [ "S" ], "last": "Mousavi", "suffix": "" }, { "first": "M", "middle": [], "last": "Shamsfard", "suffix": "" } ], "year": 2007, "venue": "International Journal of Computational Intelligence", "volume": "4", "issue": "", "pages": "126--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mousavi, M.S. and M. Shamsfard. 2007. Thematic Role Extraction Using Shallow Parsing. 
International Journal of Computational Intelligence, Volume 4, 126-132.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Creating a Feasible Corpus for Persian POS Tagging", "authors": [ { "first": "F", "middle": [], "last": "Oroumchian", "suffix": "" } ], "year": 2006, "venue": "UOWD Technical Reports Series", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oroumchian, F. 2006. Creating a Feasible Corpus for Persian POS Tagging. UOWD Technical Reports Series, University of Tehran.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Support Vector Learning for Semantic Argument Classification", "authors": [ { "first": "S", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "K", "middle": [], "last": "Hacioglu", "suffix": "" }, { "first": "V", "middle": [], "last": "Karugler", "suffix": "" }, { "first": "W", "middle": [], "last": "Ward", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Martin", "suffix": "" }, { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "11--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pradhan, S., K. Hacioglu, V. Karugler, W. Ward, J.H. Martin, and D. Jurafsky. 2005. Support Vector Learning for Semantic Argument Classification. Springer Science, 11-39.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A linear observed time statistical parser based on maximum entropy models", "authors": [ { "first": "", "middle": [], "last": "Ratnaparkhi", "suffix": "" } ], "year": 1997, "venue": "EMNLP-97, The Second Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ratnaparkhi. 1997. A linear observed time statistical parser based on maximum entropy models. In EMNLP-97, The Second Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Automatic semantic role labeling in a Dutch corpus", "authors": [ { "first": "G", "middle": [], "last": "Stevens", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stevens, G. 2006. Automatic semantic role labeling in a Dutch corpus. Master thesis, University of Utrecht, Faculty of arts.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Shallow Semantic Parsing of Chinese", "authors": [ { "first": "H", "middle": [], "last": "Sun", "suffix": "" }, { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2004, "venue": "Proceedings of NAACL 2004", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sun, H. and D. Jurafsky. 2004. Shallow Semantic Parsing of Chinese. In Proceedings of NAACL 2004, Boston, USA.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Learning thematic role relations for lexical semantic nets", "authors": [ { "first": "A", "middle": [], "last": "Wagner", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wagner, A. 2004. Learning thematic role relations for lexical semantic nets. PHD thesis, Tubingen University.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Overall architecture." 
}, "TABREF0": { "num": null, "type_str": "table", "html": null, "text": "Semantic role set.", "content": "
Semantic Role	Role-Class
Agent, Patient, Source, Goal, Topic, Percept, Instrument, Beneficiary	Primary
Location, Time, Manner, Reason	General
" }, "TABREF2": { "num": null, "type_str": "table", "html": null, "text": "Shallow Parser Feature Set.", "content": "
Feature type index	Feature type name
1	Pre-1 word POS tag
2	Pre-2 word POS tag
3	Current POS tag
4	Post-1 word POS tag
5	Post-2 word POS tag
" }, "TABREF3": { "num": null, "type_str": "table", "html": null, "text": "Verb classes with their semantic roles.", "content": "
Verbs	Semantic Role Properties	Class
To think, to teach, to write	[+Agent, +(topic or patient), +goal]	1
To stand, to sleep, to sit	[+Agent, location]	2
To kiss, to choose, to kill, to test	[+Agent, +patient]	3
To wear, to build, to break, to cut	[+Agent, +patient, inst]	4
To weave, to send	[+Agent, +patient, goal, benf]	5
To buy, to steal, to snatch	[+Agent, +patient, source]	6
To sell, to lose, to include, to throw, to press	[+Agent, +patient, goal]	7
To splash, to pour, to pay	[+Agent, +patient, goal, source]	8
To fly, to escape	[+Agent, source, goal, inst]	9
To run, to laugh, to look, to fight	[+Agent, goal, inst]	10
To accept, to see, to understand	[+Agent, +(patient or topic)]	11
To hear, to read, to ask	[+Agent, +(patient or topic), source, benf]	12
To frighten	[+Agent, +(topic or source)]	13
To command, to try, to say	[+Agent, +topic, goal]	14
To stick	[+(Agent or patient), +goal, inst]	15
To burn	[+Patient, inst]	16
To recognize	[+Agent, +patient, +percept]	17
To know	[+Agent, +(patient or topic), +percept]	18
" }, "TABREF4": { "num": null, "type_str": "table", "html": null, "text": "Results for Syntactic Parser Subtask", "content": "
Total Accuracy	90.3
F_Score (Micro-avg)	90.3
F_Score (Macro-avg)	88.5
" }, "TABREF5": { "num": null, "type_str": "table", "html": null, "text": "Per-class performance of the SRL", "content": "
Semantic Role	Precision	Recall
Agent	89.6	95.3
Predicate	1	1
Topic	98.1	99.5
#	1	1
Goal	62.8	81.7
Manner	38.9	37.4
Time	50.7	20.3
Reason	58.3	63.8
Location	56.2	52.6
Patient	87.4	87.7
Instrument	70.3	94.3
Source	61.6	72.8
Beneficiary	56.7	52.3
Percept	75.2	82.4
" }, "TABREF6": { "num": null, "type_str": "table", "html": null, "text": "Final Results for SRL System.", "content": "
Total Accuracy	83.8
F_Score (Micro-avg)	83.8
F_Score (Macro-avg)	60.9
" } } } }