{ "paper_id": "Y12-1035", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T13:45:35.041783Z" }, "title": "A Model of Vietnamese Person Named Entity Question Answering System", "authors": [ { "first": "Mai-Vu", "middle": [], "last": "Tran", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Engineering and Technology -Vietnam National University", "location": { "settlement": "Hanoi" } }, "email": "" }, { "first": "Duc-Trong", "middle": [], "last": "Le", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Engineering and Technology -Vietnam National University", "location": { "settlement": "Hanoi" } }, "email": "" }, { "first": "Xuan-Tu", "middle": [], "last": "Tran", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University", "location": { "settlement": "Hanoi", "country": "Vietnam" } }, "email": "" }, { "first": "Tien-Tung", "middle": [], "last": "Nguyen", "suffix": "", "affiliation": { "laboratory": "", "institution": "National University", "location": { "settlement": "Hanoi", "country": "Vietnam" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we proposed a Vietnamese named entity question answering (QA) model. This model applies an analytical question method using CRF machine learning algorithm combined with two automatic answering strategies: indexed sentences database-based and Google search engine-based. We gathered a Vietnamese question dataset containing about 2000 popular \"Who, Whom, Whose\" questions to evaluate our question chunking method and QA model. According to experiments, question chunking phase acquired the average F1 score of 92.99%. Equally significant, in our QA evaluation, experimental results illustrated that our approaches were completely reasonable and realistic with 74.63% precision and 87.9% ability to give the answers.", "pdf_parse": { "paper_id": "Y12-1035", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we proposed a Vietnamese named entity question answering (QA) model. This model applies an analytical question method using CRF machine learning algorithm combined with two automatic answering strategies: indexed sentences database-based and Google search engine-based. We gathered a Vietnamese question dataset containing about 2000 popular \"Who, Whom, Whose\" questions to evaluate our question chunking method and QA model. According to experiments, question chunking phase acquired the average F1 score of 92.99%. Equally significant, in our QA evaluation, experimental results illustrated that our approaches were completely reasonable and realistic with 74.63% precision and 87.9% ability to give the answers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Numerous researches about Question Answering (QA) systems have been discussed in recent years. Initially, they only answered simple questions; however, currently researches have been focused on methods for more complex questions. Those methods analyze and parse complex questions to various simple questions before using existed techniques to respond. [1] Automatic question answeringthe ability of computers to answer simple or complex questions, posed in ordinary human languageis the most exciting. Building the question answering system is a difficult issue in terms of natural language processing tasks. 
Presently, automatic question answering systems are revolutionizing the processing of textual information. By coordinating complex natural language processing techniques, sophisticated linguistic representations, and advanced machine learning methods, they can detect exact responses to a wide variety of natural language questions in unstructured texts.", "cite_spans": [ { "start": 352, "end": 355, "text": "[1]", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent studies have demonstrated that improvements in system performance depend on the number of probable answers present in the documents, and detecting the exact answer is one of the most significant problems in QA systems. For this purpose, our model uses the CRF [5] machine learning algorithm to parse natural language questions, together with several IR strategies to extract answers. The model works on a closed domain, extracting person names from a knowledge warehouse and from search engines. If no answer is found in the database, the question is pushed to the Google search engine. The QA system supports only factoid, single-sentence questions (such as \"Who?\", \"Whom?\", \"Whose?\").", "cite_spans": [ { "start": 261, "end": 264, "text": "[5]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The aim of this paper is to design and implement a new classification model, question reformulation, and answer validation in a QA system. Our methodology is to discover the correct answer in the person domain using NLP techniques, a CRF model to parse the question, and several answer extraction strategies: knowledge-based, search-engine-based, and a hybrid method. The primary motivation for an answer validation component is the difficulty of picking the \"exact answer\" out of a document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our approach relies on a statistical machine learning method to parse natural language questions and to extract answer candidates by mining documents or a domain text corpus for their co-occurrence tendency [2] . In the initial phase, questions are parsed with the CRF model. Subsequently, query patterns are built according to the question type, after which the search engine retrieves candidate answer documents and sends them to the answer processing module to extract correct answers. The system filters the candidate answer collection by its similarity to the question and assigns a priority number to each candidate. Finally, the system ranks the answers and sends them to the user for final validation to obtain the exact answer. Our system is modeled on the person domain, but it could be extended to open-domain QA.", "cite_spans": [ { "start": 211, "end": 214, "text": "[2]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Question answering research has been benchmarked through a series of competitive evaluations conducted by the question answering track of the Text Retrieval Conference 1 , an annual event sponsored by the U.S. National Institute of Standards and Technology (NIST). Starting in 1999, the TREC question answering evaluation initially focused on factoid (or fact-recall) questions, which could be answered by extracting phrase-length passages. 
Some of the TREC systems achieved remarkable accuracy: the best factoid QA systems can now answer over 70% of arbitrary, open-domain factoid questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In Webclopedia [6] , the system provides a set of question and answer patterns for each question type. It determines the type of an input question from its similarity to each of the question patterns; the corresponding pattern is then used to find passages containing the answer. Finally, the answer is extracted from the retrieved passages.", "cite_spans": [ { "start": 15, "end": 18, "text": "[6]", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "The True Knowledge Answer Engine 2 attempts to comprehend a given question by disambiguating among all possible meanings of the words in the question to find the most likely one. It then searches its database of discrete facts. As these facts are stored in a form that a computer can understand, the answering engine attempts to produce an answer according to the comprehended meaning of the input question [8] .", "cite_spans": [ { "start": 416, "end": 419, "text": "[8]", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Wolfram Alpha 3 is an answering engine developed by Wolfram Research. It is an online service that answers factual questions directly by computing the answer from a structured knowledge base, rather than providing, as a search engine does, a list of documents or web pages that might contain the answer [9] .", "cite_spans": [ { "start": 14, "end": 15, "text": "3", "ref_id": "BIBREF2" }, { "start": 305, "end": 308, "text": "[9]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "For Vietnamese text, Vu M.T. et al. [7] proposed a question answering model based on semantic relation extraction. It combines two methods, the Snowball approach of Agichtein and Gravano and the search-engine approach of Ravichandran and Hovy, to extract semantic relation patterns from Vietnamese texts. The experimental system achieves positive results on the tourism domain and demonstrates the soundness of the model. However, the statistical relations affect the system's precision, and its execution time depends on network speed.", "cite_spans": [ { "start": 46, "end": 49, "text": "[7]", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Nguyen Q.D. et al. proposed an ontology-based Vietnamese question answering system that allows users to express their questions in natural language [4] . It includes two components: a natural language question analysis engine and an answer retrieval module. They built a set of relations in the ontology, which includes only two person relations. 
Although the system's experimental results are relatively high, the cost of building the database is high, and the extracted relations sometimes cannot cover the data domain.", "cite_spans": [ { "start": 147, "end": 150, "text": "[4]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Building on these systems, this paper introduces a model of a person named entity question answering system for the Vietnamese domain, with a CRF-based machine learning method in the question analysis phase, and sentence-collection-based and search-engine-based strategies in the answer extraction phase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "The VPQA model consists of three fundamental modules. The first module (1) focuses on Vietnamese natural language question analysis with CRF. The set of tagged components produced in its third step is used by the recommendation sub-module (2), which offers the user answers and question patterns through Lucene searches over the QA Log database. It is also used by the question expansion step, which expands queries that form the input of the next module.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System architecture", "sec_num": "3" }, { "text": "Feature selection is the most important step in the CRF method; it strongly affects the quality of NER and chunking systems. The more careful the selection, the more accurate the system. Each position i of an observed data sequence comprises two parts: the data features and the corresponding label. The data features help determine the label at the observed position, which means that labels can be predicted automatically by the model once the data features are available. From this point of view, the features used in our system are shown in Table 2 . Using these features, the CRF method was trained on about 2,000 tagged questions (the training dataset).", "cite_spans": [], "ref_spans": [ { "start": 599, "end": 606, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Module processing", "sec_num": "3.1.3" }, { "text": "As a result, a model is built that later serves as the basis for analyzing the components of user questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Module processing", "sec_num": "3.1.3" }, { "text": "The answer extraction module offers two primary answering strategies: sentence-collection-based and search-engine-based. We address each strategy in greater detail in the following sections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Answer processing module", "sec_num": "3.2" }, { "text": "First, documents are retrieved and extracted from the freely available Wikipedia dump 1 of the Vietnamese edition, in XML format; each document contains the fields title, URL, and the content of the Wikipedia article. 
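As an illustration of this extraction step, the following minimal Java sketch streams over the dump; it assumes a simplified layout in which each article carries title, url, and text elements (the element names are placeholders, since the real MediaWiki dump schema differs):

import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class DumpReader {
    public static void main(String[] args) throws Exception {
        // Stream over the XML dump instead of loading it into memory.
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new FileInputStream(args[0]));
        String title = null, url = null;
        while (r.hasNext()) {
            if (r.next() != XMLStreamConstants.START_ELEMENT) continue;
            String name = r.getLocalName();
            if (name.equals(\"title\")) title = r.getElementText();
            else if (name.equals(\"url\")) url = r.getElementText();
            else if (name.equals(\"text\")) {
                String content = r.getElementText();
                // Hand one (title, url, content) record to the indexer.
                System.out.println(title + \" | \" + url + \" | \" + content.length());
            }
        }
        r.close();
    }
}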
Finally, question answering is conducted in three steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentences data collection-based strategy", "sec_num": "3.2.1" }, { "text": "Step 1: Building the data collection", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentences data collection-based strategy", "sec_num": "3.2.1" }, { "text": "The obtained documents undergo noise reduction and sentence tokenization with the JVnTextPro 2 toolkit. We then index the new data with Lucene, using specific fields such as title, URL, and the sentences of each document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentences data collection-based strategy", "sec_num": "3.2.1" }, { "text": "Step 2: Candidate answer extraction. Underlying each component of our question answering system is keyword-based document retrieval using Lucene. The system explores two variants of answer extraction: a baseline method (Baseline) that uses only word tokenization, and the CRF method from the question analysis phase (KLB). These strategies are described in greater detail below and summarized in Table 4 .
- Baseline: this is a basic approach used for comparison with our proposed method; it only takes keywords from the question to build the Lucene query. To illustrate, consider the example used throughout this paper:
  - Question: \"Ai l\u00e0 ng\u01b0\u1eddi t\u00ecm ra Ch\u00e2u M\u1ef9?\" (\"Who discovered America?\")
  - Keywords: \"t\u00ecm ra\", \"Ch\u00e2u M\u1ef9\" (\"discovered\", \"America\")
  - Lucene query: +\"t\u00ecm ra\" +\"Ch\u00e2u M\u1ef9\" (+\"discovered\" +\"America\")
- KLB: here the system applies an algorithm to extract answers. First, the components of the question are delivered by the question processing phase as tagged parts, for instance: \"Ai l\u00e0 -WH\", \"ng\u01b0\u1eddi -O\", \"t\u00ecm ra -V_W\", \"Ch\u00e2u M\u1ef9 -Obj\" (\"Who -WH\", \"discovered -V_W\", \"America -Obj\"). Subsequently, the system selects the words carrying the labels \"V_W\", \"A_W\", \"N_W\", and \"Obj\" to build the Lucene query, and uses other words such as \"D_Time\", \"D_Loc\", and \"D_Attr\" to filter the retrieved results for the exact answer. Finally, to obtain a more exact answer, the system adds a query expansion procedure that uses a Vietnamese synonym dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentences data collection-based strategy", "sec_num": "3.2.1" }, { "text": "The candidate answer collection produced by answer extraction is fed into a filtering component. The candidates are ranked using the Lucene scoring formula (1) , score(q,d) = coord(q,d) \u00d7 queryNorm(q) \u00d7 \u03a3_{t in q} ( tf(t,d) \u00d7 idf(t)^2 \u00d7 boost(t) \u00d7 norm(t,d) ). Sentence ranking is based on precision- and recall-like measures. Each question term is assigned a weight based on its idf. Words that are synonymous according to our lexicons are pooled and their weights summed. The weights of words in the final sentence, and of some other useful terms, are boosted. Synonymous terms from the question are included in the Lucene query as well, each with the pooled weight. We record each document's Lucene DocScore. 
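To make these steps concrete, the sketch below shows how such a query could be assembled with the classic (3.x-era) Lucene API; the field name \"sentence\" and the helper names are illustrative assumptions, not the exact ones used in our implementation:

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.TermQuery;

public class KlbQueryBuilder {
    // Require every labelled keyword phrase, e.g. +\"tim ra\" +\"Chau My\".
    public static BooleanQuery build(String[][] phrases) {
        BooleanQuery query = new BooleanQuery();
        for (String[] phrase : phrases) {
            PhraseQuery pq = new PhraseQuery();
            for (String word : phrase) {
                pq.add(new Term(\"sentence\", word)); // assumed index field
            }
            query.add(pq, BooleanClause.Occur.MUST);
        }
        return query;
    }

    // Synonyms from the dictionary join as optional clauses whose boost
    // carries the pooled weight of the question term they expand.
    public static void addSynonym(BooleanQuery query, String syn, float weight) {
        TermQuery tq = new TermQuery(new Term(\"sentence\", syn));
        tq.setBoost(weight);
        query.add(tq, BooleanClause.Occur.SHOULD);
    }
}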
Finally, person entity answers are recognized in the candidate sentences using the open-source Java library VSW 3 and ranked by formula (2) below.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 3: Answer selection", "sec_num": null }, { "text": "Here rank_{entity/d} is the rank of the answer entity, score_d is the Lucene score of the sentence candidate containing the entity, freq_entity is the frequency of the entity over the N candidates, N is the number of sentence candidates, and \u03b4 is an interpolation weight.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Step 3: Answer selection", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "rank_{entity/d} = \u03b4 \u00d7 score_d + (1 \u2212 \u03b4) \u00d7 freq_entity / N", "eq_num": "(2)" } ], "section": "Step 3: Answer selection", "sec_num": null }, { "text": "In the previous section, we presented a strategy based on a collected data warehouse. The answering capability of that strategy depends on the size of the warehouse. Therefore, to improve coverage as well as answer accuracy, we examined another method, based on the results returned by a search engine (SEB). This strategy proceeds in two steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search engine-based", "sec_num": "3.2.2" }, { "text": "Step 1: Snippet retrieval. As in the previous strategy, after the keywords are obtained from the question processing phase, they are turned into Google queries by inserting the wildcard \"*\" or \"**\" between keywords. In this way the system obtains Google queries of the forms \"k1 k2\u2026\", \"k1 * k2\u2026\", \"k1 ** k2\u2026\" (ki is the i-th keyword). Example: \"t\u00ecm ra * Ch\u00e2u M\u1ef9\" (\"discovered * America\"); \"t\u00ecm ra\" \"Ch\u00e2u M\u1ef9\" (\"discovered\" \"America\"). Next, the queries are submitted to the Google search engine, and candidate snippets are retrieved through the JSOAP API.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search engine-based", "sec_num": "3.2.2" }, { "text": "Step 2: Answer extraction. Person entity answers are recognized in the candidate snippets collected in Step 1 using the open-source Java library VSW and are ranked by the frequency of each entity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Search engine-based", "sec_num": "3.2.2" }, { "text": "In this section, we present experimental results showing that the proposed model and our approach are reasonable and highly applicable. We conducted two main experiments: one to evaluate the question analysis phase and one to evaluate the entire system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment and Discussion", "sec_num": "4" }, { "text": "For the question analysis phase, we first built a question dataset containing about 2,000 popular \"Who, Whom, Whose\" questions. The dataset was drawn mainly from Yahoo! Answers and several Vietnamese e-newspaper websites, under the requirement that each question be unambiguous and meaningful in natural language. We then standardized the questions into suitable syntax for the Vietnamese context and labeled them to obtain a standard training dataset. 
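Before turning to the evaluation itself, we give a brief implementation note for the answer selection step of Section 3.2.1. The sketch below realizes formula (2) as a linear interpolation of the Lucene sentence score and the normalized entity frequency; the class name and the sample weight \u03b4 = 0.7 are illustrative assumptions:

// Hedged sketch of formula (2): rank = delta * score_d + (1 - delta) * freq / N.
public class EntityRanker {
    private final double delta; // interpolation weight, e.g. 0.7 (assumed value)

    public EntityRanker(double delta) { this.delta = delta; }

    // score: Lucene DocScore of the candidate sentence containing the entity
    // freq:  number of candidate sentences mentioning the entity
    // n:     total number of candidate sentences
    public double rank(double score, int freq, int n) {
        return delta * score + (1.0 - delta) * ((double) freq / n);
    }
}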
Next, we used 10-fold cross validation, dividing the training data randomly in a 9:1 ratio. We then carried out the tests and computed the validation measures precision, recall, and F1, as shown in Table 3 , which compares the measures across the 10 folds. The results show that the precision of CRF-based question analysis is quite high, with an F1 measure of approximately 93%, illustrating that our approach is reasonable. However, there were some unexpected results in several test samples; these can be remedied by adding specific dictionaries and by further strengthening the training data. In the next phase, we evaluated the precision and response time of the entire system, using a basic question analysis method as a baseline system for comparison. Here we used 1,000 questions taken from the training data and compared the results obtained by three answering strategies: knowledge-based (KLB), search-engine-based (SEB), and a hybrid of the two (KLB+SEB). For the knowledge-based strategy we carried out one additional experiment, named Baseline, in which questions are analyzed only at the morphological layer instead of with CRF, to illustrate the effectiveness of CRF. The results are reported at three levels per question: top one, top three, and top five. They are presented in Table 4 .", "cite_spans": [], "ref_spans": [ { "start": 697, "end": 704, "text": "Table 3", "ref_id": "TABREF4" }, { "start": 1876, "end": 1883, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experiment and Discussion", "sec_num": "4" }, { "text": "In this experiment, we used three main measures. The first is the capability of answering, defined as C = q / Q, where q is the number of questions for which the system produces an answer and Q is the number of tested questions. The second is the precision of answers, defined as \u03c1 = q' / q, where q' is the number of questions for which the system produces the exact answer. The last is system performance, i.e. the time the system needs to obtain an answer to a question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment and Discussion", "sec_num": "4" }, { "text": "To evaluate this last measure, we ran the system over 1,000 iterations of answering one question, computed the total running time, and divided it by the number of iterations; that is, the measure is t / 1000, where t is the total running time of the 1,000 runs. Table 4 compares the results obtained per strategy. The comparison shows that answer accuracy and system performance are satisfactory. The top-three level yields the best results; however, the capability of answering is not yet good, because it depends on the coverage of the knowledge warehouse, and the ranking algorithm for returned answers did not achieve highly effective results. The search-engine strategy has acceptable answering capability and answer accuracy, but its running time is too slow for a real system; we therefore proposed a two-layer system combining both strategies, and the results show that this hybrid system is entirely reasonable. Additionally, we compared the Baseline method with the CRF-based method: using CRF produces results much higher than the baseline. 
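The three measures above can be computed straightforwardly; a minimal helper sketch (the class and method names are ours, not taken from the system's code):

public class EvalMeasures {
    // C = q / Q: share of questions for which any answer is returned.
    public static double capability(int answered, int total) {
        return (double) answered / total;
    }

    // rho = q' / q: share of answered questions answered exactly.
    public static double precision(int exact, int answered) {
        return (double) exact / answered;
    }

    // Average response time: total time of the runs divided by their number.
    public static double avgResponseMillis(long totalMillis, int runs) {
        return (double) totalMillis / runs;
    }
}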
These results show that the machine-learning-based approach performs well and that our proposed system is reasonable and realistic.", "cite_spans": [], "ref_spans": [ { "start": 221, "end": 228, "text": "Table 4", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Experiment and Discussion", "sec_num": "4" }, { "text": "In this paper, we proposed and built a model of an automatic system that answers questions about person names in the Vietnamese data domain. The achieved results illustrate that our approach is reasonable and realistic. Furthermore, we built an open framework for constructing automatic question answering systems. However, the system still has some limitations, owing to the limited size of the training question dataset and to the weak ranking algorithm for returned answers. We recommend the knowledge-based method for the most remarkable performance and F1 score. Future work will focus on building a much larger training question dataset, developing a more optimal ranking algorithm, and improving system performance so as to deploy a real application. Additionally, we will extend the knowledge warehouse and the question domain to build an automatic open-domain question answering system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future works", "sec_num": "5" }, { "text": "http://www.trec.nist.gov 2 http://www.trueknowledge.com 3 http://www.wolframalpha.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://dumps.wikimedia.org/viwiki/20101031/ 2 http://jvntextpro.sourceforge.net/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://code.google.com/p/vsw/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Complex Question Answering Based on Semantic Domain Model of Clinical Medicine", "authors": [ { "first": "Dina", "middle": [], "last": "Demner-Fushman", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Demner-Fushman, Dina, \"Complex Question Answering Based on Semantic Domain Model of Clinical Medicine\", OCLC's Experimental Thesis Catalog, College Park, Md.: University of Maryland (United States), 2006.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Comparing Statistical and Content-Based Techniques for Answer Validation on the Web", "authors": [ { "first": "B", "middle": [], "last": "Magnini", "suffix": "" }, { "first": "M", "middle": [], "last": "Negri", "suffix": "" }, { "first": "R", "middle": [], "last": "Prevete", "suffix": "" }, { "first": "H", "middle": [], "last": "Tanev", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the VIII Convegno AI*IA", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Magnini, B., Negri, M., Prevete, R., Tanev, H.: \"Comparing Statistical and Content-Based Techniques for Answer Validation on the Web\", Proceedings of the VIII Convegno AI*IA, Siena, Italy, 2002.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Annotating the World Wide Web using Natural Language", "authors": [ { "first": "Boris", "middle": [], "last": "Katz", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the 5th RIAO Conference on Computer Assisted Information Searching on the Internet (RIAO'97)", "volume":
"", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boris Katz,Annotating the World Wide Web using Natural Language, In Proceedings of the 5th RAIO conference on Computer Assisted information searching on the internet (RIAO'97) 1997.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Dat Quoc Nguyen, Son Bao Pham, A Vietnamese Question Answering System, KSE", "authors": [ { "first": "", "middle": [], "last": "Dai Quoc Nguyen", "suffix": "" } ], "year": 2009, "venue": "International Conference on Knowledge and Systems Engineering", "volume": "", "issue": "", "pages": "26--32", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dai Quoc Nguyen, Dat Quoc Nguyen, Son Bao Pham, A Vietnamese Question Answering System, KSE, pp.26-32, 2009 International Conference on Knowledge and Systems Engineering, 2009", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. ICML", "authors": [ { "first": "John", "middle": [ "D" ], "last": "Lafferty", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "C", "middle": [ "N" ], "last": "Fernando", "suffix": "" }, { "first": "", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "John D. Lafferty, Andrew McCallum, Fernando C. N. Pereira: Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. ICML 2001: 282-289", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Question Semantic Analysis in Vietnamese QA System. The Advances in Intelligent Information and Database Systems book, Serie of Studies in Computational Intelligence", "authors": [ { "first": "", "middle": [], "last": "Tuoit", "suffix": "" }, { "first": "Thanhc", "middle": [], "last": "Phan", "suffix": "" }, { "first": "", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": ".", "middle": [ "T" ], "last": "Thuyn", "suffix": "" }, { "first": "", "middle": [], "last": "Huynh", "suffix": "" } ], "year": 2010, "venue": "", "volume": "283", "issue": "", "pages": "29--40", "other_ids": {}, "num": null, "urls": [], "raw_text": "TuoiT.Phan, ThanhC.Nguyen, ThuyN.T.Huynh. Question Semantic Analysis in Vietnamese QA System. The Advances in Intelligent Information and Database Systems book, Serie of Studies in Computational Intelligence, Volume 283, pp.29- 40, (2010)", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "An Experimental Study of Vietnamese Question Answering System", "authors": [], "year": null, "venue": "Proceedings of IALP'2009", "volume": "", "issue": "", "pages": "152--155", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vu Mai Tran, Vinh Duc Nguyen, Oanh Thi Tran, Uyen Thu Thi Pham, Thuy Quang Ha. An Experimental Study of Vietnamese Question Answering System. In Proceedings of IALP'2009. 
pp. 152-155.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Interactive Documents as Interfaces to Computer Algebra Systems: JOBAD and Wolfram|Alpha", "authors": [ { "first": "Catalin", "middle": [], "last": "David", "suffix": "" }, { "first": "Christoph", "middle": [], "last": "Lange", "suffix": "" }, { "first": "Florian", "middle": [], "last": "Rabe", "suffix": "" } ], "year": 2010, "venue": "Centre d'\u00c9tude et de Recherche en Informatique du CNAM (C\u00e9dric)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Catalin David, Christoph Lange, Florian Rabe: Interactive Documents as Interfaces to Computer Algebra Systems: JOBAD and Wolfram|Alpha; Centre d'\u00c9tude et de Recherche en Informatique du CNAM (C\u00e9dric) 2010", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "num": null, "text": "Features used for CRF: part of speech (pos:N, pos:V, pos:adj, etc.); name, location, organization, and job dictionary features (per:job, org:i, etc.; e.g. org:0:FPT, per:job:-2)" }, "TABREF0": { "num": null, "content": "
Label | Meaning | Type of component
WH | Question type |
D_Attr | Feature of job, position |
D_Time | Feature of time |
D_Loc | Feature of location |
A_W | Adjective phrase |
V_W | Verb phrase | Verb/Action
N_W | Noun phrase |
Obj | Object | Object/Theme
O | Others |

[Figure 1: VPQA system architecture. Module (1): pre-processing and CRF question analyser over the natural language question, with a recommendation sub-module (Lucene searching over the QA Log database) and question expansion producing expanded queries. Module (2): Lucene indexing and searching over a sentences DB built from raw data (ViWiki, Vietgle, ...), followed by word segmentation, NER, ranking, and answer extraction over sentence candidates. Module (3): the same pipeline over snippets and URLs returned by a search engine, ending in the answer.]

3.1 Question analysis module
3.1.1 \"Who, Whom, Whose\" questions in Vietnamese
Vietnamese linguists have classified Vietnamese sentences by various criteria, among them syntax structure. Based on the properties and meaning of Vietnamese \"Who, Whom, Whose\" questions, these are classified into several forms built from four types of component: Subject/Agent, Verb/Action, Object/Theme, and Indirect_Object/Co_theme [6]. Commonly, a simple question takes one of two forms, relating either two or three classes of objects. Examples:
- Relating two classes of objects:
  - Subject/Agent + Verb/Action + Object/Theme
  - Object/Theme + Subject/Agent + Verb/Action
  - Object/Theme + Verb/Action + Subject/Agent
  Example 1: The question \"Who was the Harry Potter book written by?\" corresponds to the Vietnamese question \"Cu\u1ed1n s\u00e1ch Harry Potter \u0111\u01b0\u1ee3c vi\u1ebft b\u1edfi ai?\". This example involves two classes: T\u00e1c gi\u1ea3/Author and S\u00e1ch/Book.
- Relating three classes of objects:
  - Object/Theme + Indirect_Object/Co_theme + Verb/Action + Subject/Agent
  Example 2: The Vietnamese question \"Ai l\u00e0 t\u00e1c gi\u1ea3 c\u1ee7a cu\u1ed1n Harry Potter xu\u1ea5t b\u1ea3n n\u0103m 2004?\" has the same meaning as \"Who is the author of the Harry Potter book published in 2004?\" and involves three classes: T\u00e1c gi\u1ea3/Author, S\u00e1ch/Book, N\u0103m/Year.
1 http://lucene.apache.org
", "type_str": "table", "html": null, "text": "Instead of looking in the Lucene database, the last module extracts the set of candidates from snippets returned by Google. The next steps are similar to those of the 2nd module." }, "TABREF1": { "num": null, "content": "", "type_str": "table", "html": null, "text": "Proposed features and labels" }, "TABREF2": { "num": null, "content": "
", "type_str": "table", "html": null, "text": "" }, "TABREF4": { "num": null, "content": "
", "type_str": "table", "html": null, "text": "Table 3: Comparison of experiment results: 10-fold cross-validation" }, "TABREF5": { "num": null, "content": "
", "type_str": "table", "html": null, "text": "The comparisons of KLB, SEB, KLB+SEB, and Baseline with three measures: precision (\u03c1), capability of answering (C), and response time (T)" } } } }