{ "paper_id": "P93-1003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:52:22.398369Z" }, "title": "FOR FINDING NOUN PHRASE CORRESPONDENCES IN BILINGUAL CORPORA", "authors": [ { "first": "Julian", "middle": [], "last": "Kupiec", "suffix": "", "affiliation": { "laboratory": "", "institution": "Xerox Palo Alto Research Center", "location": { "addrLine": "3333 Coyote Hill Road", "settlement": "Palo Alto", "region": "CA" } }, "email": "kupiec@parc.xerox.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The paper describes an algorithm that employs English and French text taggers to associate noun phrases in an aligned bilingual corpus. The taggets provide part-of-speech categories which are used by finite-state recognizers to extract simple noun phrases for both languages. Noun phrases are then mapped to each other using an iterative re-estimation algorithm that bears similarities to the Baum-Welch algorithm which is used for training the taggers. The algorithm provides an alternative to other approaches for finding word correspondences, with the advantage that linguistic structure is incorporated. Improvements to the basic algorithm are described, which enable context to be accounted for when constructing the noun phrase mappings.", "pdf_parse": { "paper_id": "P93-1003", "_pdf_hash": "", "abstract": [ { "text": "The paper describes an algorithm that employs English and French text taggers to associate noun phrases in an aligned bilingual corpus. The taggets provide part-of-speech categories which are used by finite-state recognizers to extract simple noun phrases for both languages. Noun phrases are then mapped to each other using an iterative re-estimation algorithm that bears similarities to the Baum-Welch algorithm which is used for training the taggers. The algorithm provides an alternative to other approaches for finding word correspondences, with the advantage that linguistic structure is incorporated. 
Improvements to the basic algorithm are described, which enable context to be accounted for when constructing the noun phrase mappings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Areas of investigation using bilingual corpora have included the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "\u2022 Automatic sentence alignment [Kay and R\u00f6scheisen, 1988, Brown et al., 1991a, Gale and Church, 1991b].", "cite_spans": [ { "start": 31, "end": 56, "text": "[Kay and R\u00f6scheisen, 1988", "ref_id": "BIBREF4" }, { "start": 57, "end": 78, "text": ", Brown et al., 1991a", "ref_id": null }, { "start": 79, "end": 103, "text": ", Gale and Church, 1991b", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "\u2022 Word-sense disambiguation [Dagan et al., 1991, Brown et al., 1991b].", "cite_spans": [ { "start": 28, "end": 47, "text": "[Dagan et al., 1991", "ref_id": null }, { "start": 48, "end": 69, "text": ", Brown et al., 1991b", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "\u2022 Extracting word correspondences [Gale and Church, 1991a].", "cite_spans": [ { "start": 34, "end": 58, "text": "[Gale and Church, 1991a]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "\u2022 Finding bilingual collocations [Smadja, 1992].", "cite_spans": [ { "start": 33, "end": 47, "text": "[Smadja, 1992]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "\u2022 Estimating parameters for statistically-based machine translation [Brown et al., 1992].", "cite_spans": [ { "start": 68, "end": 88, "text": "[Brown et al., 1992]", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "The work described here makes use of the aligned Canadian Hansards [Gale and Church, 1991b] to obtain noun phrase correspondences between the English and French text.", "cite_spans": [ { "start": 67, "end": 91, "text": "[Gale and Church, 1991b]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "The term \"correspondence\" is used here to signify a mapping between words in two aligned sentences. Consider an English sentence Ei and a French sentence Fi which are assumed to be approximate translations of each other. The subscript i denotes the i'th alignment of sentences in both languages. A word sequence in Ei is defined here as the correspondence of another sequence in Fi if the words of one sequence are considered to represent the words in the other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "Single word correspondences have been investigated [Gale and Church, 1991a] using a statistic operating on contingency tables. An algorithm for producing collocational correspondences has also been described [Smadja, 1992]. The algorithm involves several steps. English collocations are first extracted from the English side of the corpus. Instances of the English collocation are found and the mutual information is calculated between the instances and various single word candidates in aligned French sentences.
The highest ranking candidates are then extended by another word and the procedure is repeated until a corresponding French collocation having the highest mutual information is found. An alternative approach is described here, which employs simple iterative re-estimation. It is used to make correspondences between simple noun phrases that have been isolated in corresponding sentences of each language using finite-state recognizers. The algorithm is applicable for finding single or multiple word correspondences and can accommodate additional kinds of phrases.", "cite_spans": [ { "start": 51, "end": 75, "text": "[Gale and Church, 1991a]", "ref_id": "BIBREF3" }, { "start": 209, "end": 223, "text": "[Smadja, 1992]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "In contrast to the other methods that have been mentioned, the algorithm can be extended in a straightforward way to enable correct correspondences to be made in circumstances where numerous low frequency phrases are involved. This is an important consideration because in large text corpora roughly a third of the word types only occur once.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "Several applications for bilingual correspondence information have been suggested. It can be used in bilingual concordances, for automatically constructing bilingual lexicons, and probabilistically quantified correspondences may be useful for statistical translation methods. Figure 1 illustrates how the corpus is analyzed. The words in sentences are first tagged with their corresponding part-of-speech categories. Each tagger contains a hidden Markov model (HMM), which is trained using samples of raw text from the Hansards for each language. The taggers are robust and operate with a low error rate [Kupiec, 1992]. Simple noun phrases (excluding pronouns and digits) are then extracted from the sentences by finite-state recognizers that are specified by regular expressions defined in terms of part-of-speech categories. Simple noun phrases are identified because they are the most reliably recognized; it is also assumed that they can be identified unambiguously. The only embedding that is allowed is by prepositional phrases involving \"of\" in English and \"de\" in French, as noun phrases involving them can be identified with relatively low error (revisions to this restriction are considered later). Noun phrases are placed in an index to associate a unique identifier with each one.", "cite_spans": [ { "start": 605, "end": 619, "text": "[Kupiec, 1992]", "ref_id": "BIBREF4" } ], "ref_spans": [ { "start": 276, "end": 284, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "INTRODUCTION", "sec_num": null }, { "text": "A noun phrase is defined by its word sequence, excluding any leading determiners. Singular and plural forms of common nouns are thus distinct and assigned different positions in the index. For each sentence corresponding to an alignment, the index positions of all noun phrases in the sentence are recorded in a separate data structure, providing a compact representation of the corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "COMPONENTS", "sec_num": null }, { "text": "So far it has been assumed (for the sake of simplicity) that there is always a one-to-one mapping between English and French sentences. In practice, if an alignment program produces blocks of several sentences in one or both languages, this can be accommodated by treating the block instead as a single larger \"compound sentence\" in which noun phrases have a greater number of possible correspondences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "COMPONENTS", "sec_num": null },
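{ "text": "To make the preceding components concrete, the sketch below shows one way the extraction and indexing stage could be implemented. The tag names, the regular expression, and the function names are all illustrative assumptions; the paper specifies only that simple noun phrases are extracted by finite-state recognizers defined over part-of-speech categories, that leading determiners are excluded, and that each distinct phrase receives a unique index position.

```python
import re

# Illustrative coarse tag set: DET, ADJ, NOUN, and PREPOF for the sole
# permitted embedding ('of' in English, 'de' in French). The paper does
# not publish its tag set or patterns in this form.
NP_PATTERN = re.compile(r'(DET )?(ADJ )*(NOUN )+(PREPOF (DET )?(ADJ )*(NOUN )+)*')

# tagged: list of (word, tag) pairs for one sentence.
def extract_noun_phrases(tagged):
    tags = ''.join(tag + ' ' for _, tag in tagged)
    phrases = []
    for m in NP_PATTERN.finditer(tags):
        start = tags[:m.start()].count(' ')  # token offset of match start
        end = tags[:m.end()].count(' ')      # token offset past match end
        words = [w for w, _ in tagged[start:end]]
        if tagged[start][1] == 'DET':        # leading determiners are excluded
            words = words[1:]
        phrases.append(tuple(words))
    return phrases

# Build the noun phrase index and the per-sentence lists of index
# positions that serve as the compact representation of the corpus.
def index_corpus(tagged_sentences):
    index, sentence_positions = {}, []
    for sent in tagged_sentences:
        ids = [index.setdefault(np, len(index))
               for np in extract_noun_phrases(sent)]
        sentence_positions.append(ids)
    return index, sentence_positions
```

Because phrases are keyed by their full word sequences, singular and plural forms automatically occupy different index positions, as required.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "COMPONENTS", "sec_num": null },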
{ "text": "Some terminology is necessary to describe the algorithm concisely. Let there be L total alignments in the corpus; then Ei is the English sentence for alignment i. Let the function \u03c6(Ei) be the number of noun phrases identified in the sentence. If there are k of them, k = \u03c6(Ei), and they can be referenced by j = 1...k. Considering the j'th noun phrase in sentence Ei, the function \u03bc(Ei, j) produces an identifier for the phrase, which is the position of the phrase in the English index. If this phrase is at position s, then \u03bc(Ei, j) = s. In turn, the French sentence Fi will contain \u03c6(Fi) noun phrases and, given the p'th one, its position in the French index will be given by \u03bc(Fi, p). It will also be assumed that there are a total of VE and VF phrases in the English and French indexes respectively. Finally, the indicator function I() has the value unity if its argument is true, and zero otherwise. Given these definitions, the algorithm is shown in Figure 2. The equations assume a directionality: finding French \"target\" correspondences for English \"source\" phrases. The algorithm is reversible, by swapping E with F.", "cite_spans": [], "ref_spans": [ { "start": 952, "end": 960, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "THE MAPPING ALGORITHM", "sec_num": null }, { "text": "The model for correspondence is that a source noun phrase in Ei is responsible for producing the various different target noun phrases in Fi with correspondingly different probabilities. Two quantities are calculated: Cr(s, t) and Pr(s, t). Equation (1) assumes that each English noun phrase in Ei is initially equally likely to correspond to each French noun phrase in Fi. All correspondences are thus equally weighted, reflecting a state of ignorance. Weights are summed over the corpus, so noun phrases that co-occur in several sentences will have larger sums. The weights C0(s, t) can be interpreted as the mean number of times that npF(t) corresponds to npE(s) given the corpus and the initial assumption of equiprobable correspondences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE MAPPING ALGORITHM", "sec_num": null }, { "text": "These weights can be used to form a new estimate of the probability that npF(t) corresponds to npE(s), by considering the mean number of times npF(t) corresponds to npE(s) as a fraction of the total mean number of correspondences for npE(s), as in Equation (2). The procedure is then iterated using Equations (3) and (2) to obtain successively refined, convergent estimates of the probability that npF(t) corresponds to npE(s).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE MAPPING ALGORITHM", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C_0(s,t) = \\sum_{i=1}^{L} \\frac{1}{\\phi(F_i)} \\sum_{j=1}^{\\phi(E_i)} \\sum_{k=1}^{\\phi(F_i)} I(\\mu(E_i,j)=s)\\, I(\\mu(F_i,k)=t) \\quad (1) \\qquad P_r(s,t) = \\frac{C_{r-1}(s,t)}{\\sum_{q=1}^{V_F} C_{r-1}(s,q)}, \\quad r > 0,\\; V_E \\ge s \\ge 1,\\; V_F \\ge t \\ge 1 \\quad (2) \\qquad C_r(s,t) = \\sum_{i=1}^{L} \\sum_{j=1}^{\\phi(E_i)} \\sum_{k=1}^{\\phi(F_i)} I(\\mu(E_i,j)=s)\\, I(\\mu(F_i,k)=t)\\, P_r(s,t) \\quad (3)", "eq_num": "(1)-(3)" } ], "section": "THE MAPPING ALGORITHM", "sec_num": null },
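{ "text": "The re-estimation loop of Figure 2 is compact enough to state directly in code. The sketch below is a straightforward rendering of Equations (1)-(3) using plain dictionaries; it illustrates the mathematics only, since the paper notes that the implementation instead uses a compact representation of the correspondences. The function name and data layout are assumptions.

```python
from collections import defaultdict

# alignments: list of (e_ids, f_ids) pairs giving the index positions of
# the noun phrases found in each pair of aligned sentences.
def em_correspondences(alignments, iterations=4):
    # Equation (1): initial weights, assuming each English noun phrase is
    # equally likely to correspond to each French noun phrase in Fi.
    C = defaultdict(float)
    for e_ids, f_ids in alignments:
        if e_ids and f_ids:
            w = 1.0 / len(f_ids)
            for s in e_ids:
                for t in f_ids:
                    C[(s, t)] += w
    P = {}
    for _ in range(iterations):
        # Equation (2): P_r(s,t) = C_{r-1}(s,t) / sum over q of C_{r-1}(s,q).
        totals = defaultdict(float)
        for (s, t), c in C.items():
            totals[s] += c
        P = {(s, t): c / totals[s] for (s, t), c in C.items()}
        # Equation (3): re-accumulate the weights using the new estimates.
        C = defaultdict(float)
        for e_ids, f_ids in alignments:
            for s in e_ids:
                for t in f_ids:
                    C[(s, t)] += P.get((s, t), 0.0)
    return P
```

The small default iteration count reflects the observation reported below that the mappings stabilize within a few (2-4) iterations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "THE MAPPING ALGORITHM", "sec_num": null },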
{ "text": "The probability of correspondences can be used as a method of ranking them (occurrence counts can be taken into account as an indication of the reliability of a correspondence). Although Figure 2 defines the coefficients simply, the algorithm is not implemented literally from it. The algorithm employs a compact representation of the correspondences for efficient operation. An arbitrarily large corpus can be accommodated by segmenting it appropriately.", "cite_spans": [], "ref_spans": [ { "start": 187, "end": 195, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "THE MAPPING ALGORITHM", "sec_num": null }, { "text": "The algorithm described here is an instance of a general approach to statistical estimation, represented by the EM algorithm [Dempster et al., 1977]. In contrast to reservations that have been expressed [Gale and Church, 1991a] about using the EM algorithm to provide word correspondences, there have been no indications that prohibitive amounts of memory might be required, or that the approach lacks robustness. Unlike the other methods that have been mentioned, the approach has the capability to accommodate more context to improve performance.", "cite_spans": [ { "start": 125, "end": 148, "text": "[Dempster et al., 1977]", "ref_id": "BIBREF3" }, { "start": 203, "end": 227, "text": "[Gale and Church, 1991a]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "THE MAPPING ALGORITHM", "sec_num": null }, { "text": "A sample of the aligned corpus comprising 2,600 alignments was used for testing the algorithm (not all of the alignments contained sentences). 4,900 distinct English noun phrases and 5,100 distinct French noun phrases were extracted from the sample.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RESULTS", "sec_num": null }, { "text": "When forming correspondences involving long sentences with many clauses, it was observed that the position at which a noun phrase occurred in Ei was very roughly proportional to that of the corresponding noun phrase in Fi. In such cases it was not necessary to form correspondences with all noun phrases in Fi for each noun phrase in Ei. Instead, the location of a phrase in Ei was mapped linearly to a position in Fi and correspondences were formed for noun phrases occurring in a window around that position. This resulted in a total of 34,000 correspondences. The mappings are stable within a few (2-4) iterations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RESULTS", "sec_num": null }, { "text": "In discussing results, a selection of examples will be presented that demonstrate the strengths and weaknesses of the algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RESULTS", "sec_num": null },
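{ "text": "A small helper suffices for the windowed restriction just described. The linear mapping and the window width used here are assumptions for illustration; the paper states only that locations were mapped linearly and that a window around the mapped position was used.

```python
# j: position (0-based) of a noun phrase among the n_e English noun
# phrases of an alignment; n_f: number of French noun phrases.
# Returns the French noun phrase positions considered as candidates.
def candidate_targets(j, n_e, n_f, window=3):
    if n_e == 0 or n_f == 0:
        return []
    center = round(j * n_f / n_e)  # assumed form of the linear mapping
    return [k for k in range(n_f) if abs(k - center) <= window]
```

Restricting the candidate pairs in this way is what reduced the sample to the 34,000 correspondences mentioned above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RESULTS", "sec_num": null },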
{ "text": "To give an indication of noun phrase frequency counts in the sample, the highest ranking correspondences are shown in Table 1. The figures in columns (1) and (3) are the occurrence counts of the English and French noun phrases respectively. To give an informal impression of overall performance, the hundred highest ranking correspondences were inspected and of these, ninety were completely correct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RESULTS", "sec_num": null }, { "text": "Less frequently occurring noun phrases are also of interest for purposes of evaluation; some of these are shown in Table 2. The table also illustrates an unembedded English noun phrase having multiple prepositional phrases in its French correspondent. Organizational acronyms (which may not be available in general-purpose dictionaries) are also extracted, as the taggers are robust. Even when a noun phrase only occurs once, a correct correspondence can be found if there are only single noun phrases in each sentence of the alignment. This is demonstrated in the last row of Table 2, which is the result of the following alignment:", "cite_spans": [], "ref_spans": [ { "start": 115, "end": 122, "text": "Table 2", "ref_id": "TABREF3" }, { "start": 576, "end": 583, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "RESULTS", "sec_num": null }, { "text": "Ei: \"The whole issue of free trade has been mentioned.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RESULTS", "sec_num": null }, { "text": "Fi: \"On a mentionn\u00e9 la question du libre-\u00e9change.\" Table 3 shows some incorrect correspondences produced by the algorithm (in the table, \"usine\" means \"factory\").", "cite_spans": [], "ref_spans": [ { "start": 51, "end": 58, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "RESULTS", "sec_num": null }, { "text": "1 mix of on-the-job 6 usine (Table 3: Incorrect correspondences) The sentences that are responsible for this correspondence illustrate some of the problems associated with the correspondence model: Ei: \"They use what is known as the dual system in which there is a mix of on-the-job and off-the-job training.\"", "cite_spans": [], "ref_spans": [ { "start": 29, "end": 36, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "RESULTS", "sec_num": null }, { "text": "Fi: \"Ils ont recours \u00e0 une formation mixte, partie en usine et partie hors usine.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RESULTS", "sec_num": null }, { "text": "The first problem is that the conjunctive modifiers in the English sentence cannot be accommodated by the noun phrase recognizer. The tagger also assigned \"on-the-job\" as a noun when adjectival use would be preferred. If verb correspondences were included, there would be a mismatch between the three that exist in the English sentence and the single one in the French. If the English were to reflect the French for the correspondence model to be appropriate, the noun phrases would perhaps be \"part in the factory\" and \"part out of the factory\". Considered as a translation, this is lame. The majority of errors that occur are not the result of incorrect tagging or noun phrase recognition, but are the result of the approximate nature of the correspondence model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RESULTS", "sec_num": null },
{ "text": "The correspondences in Table 4 are likewise flawed (in the table, \"souris\" means \"mouse\" and \"tigre de papier\" means \"paper tiger\"):", "cite_spans": [], "ref_spans": [ { "start": 24, "end": 31, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "RESULTS", "sec_num": null }, { "text": "1 toothless tiger 1 souris; 1 toothless tiger 1 tigre de papier; 1 roaring rabbit 1 souris; 1 roaring rabbit 1 tigre de papier (Table 4) These correspondences are the result of the following sentences:", "cite_spans": [], "ref_spans": [ { "start": 128, "end": 135, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "RESULTS", "sec_num": null }, { "text": "Ei: \"It is a roaring rabbit, a toothless tiger.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RESULTS", "sec_num": null }, { "text": "Fi: \"C'est un tigre de papier, une souris qui rugit.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RESULTS", "sec_num": null }, { "text": "In the case of the alliterative English phrase \"roaring rabbit\", the (presumably) rhetorical aspect is preserved as a rhyme in \"souris qui rugit\"; the result being that \"rabbit\" corresponds to \"souris\" (mouse). Here again, even if the best correspondence were made, the result would be wrong because of the relatively sophisticated considerations involved in the translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "RESULTS", "sec_num": null }, { "text": "As regards future possibilities, the algorithm lends itself to a range of improvements and applications, which are outlined next.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXTENSIONS", "sec_num": null }, { "text": "Finding Word Correspondences: The algorithm finds corresponding noun phrases but provides no information about word-level correspondences within them. One possibility is simply to eliminate the tagger and noun phrase recognizer, treating all words as individual phrases of length unity and accepting a larger number of correspondences. Alternatively, the following strategy can be adopted, which involves fewer total correspondences. First, the algorithm is used to build noun phrase correspondences; the phrase pairs that are produced are then themselves treated as a bilingual noun phrase corpus. The algorithm is employed again on this corpus, treating all words as individual phrases. This results in a set of single word correspondences for the internal words in noun phrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXTENSIONS", "sec_num": null },
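{ "text": "Concretely, the second pass can reuse the same estimator on a derived corpus of phrase pairs. The sketch below assumes the illustrative em_correspondences function given earlier; the conversion of words to index positions mirrors the noun phrase indexing step.

```python
# phrase_pairs: list of (english_words, french_words) tuples taken from
# the noun phrase correspondences produced by the first pass.
def word_correspondences(phrase_pairs, iterations=4):
    e_index, f_index, alignments = {}, {}, []
    for e_words, f_words in phrase_pairs:
        e_ids = [e_index.setdefault(w, len(e_index)) for w in e_words]
        f_ids = [f_index.setdefault(w, len(f_index)) for w in f_words]
        alignments.append((e_ids, f_ids))
    # Each phrase pair acts as a miniature aligned sentence pair whose
    # 'phrases' are single words; the estimator itself is unchanged.
    return em_correspondences(alignments, iterations), e_index, f_index
```

The resulting probabilities rank the internal word pairs in the same way as the phrase-level correspondences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXTENSIONS", "sec_num": null },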
{ "text": "Reducing Ambiguity: The basic algorithm assumes that noun phrases can be uniquely identified in both languages, which is only true for simple noun phrases. The problem of prepositional phrase attachment is exemplified by the correspondences in Table 5. The correct English and French noun phrases are \"Secretary of State for External Affairs\" and \"secr\u00e9taire d'\u00c9tat aux Affaires ext\u00e9rieures\". If prepositional phrases involving \"for\" and \"\u00e0\" were also permitted, these phrases would be correctly identified; however, many other adverbial prepositional phrases would then be incorrectly attached to noun phrases.", "cite_spans": [], "ref_spans": [ { "start": 244, "end": 251, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "EXTENSIONS", "sec_num": null }, { "text": "If all embedded prepositional phrases were permitted by the noun phrase recognizer, the algorithm could be used to reduce the degree of ambiguity between alternatives. Consider a sequence npe ppe consisting of an unembedded English noun phrase npe followed by a prepositional phrase ppe, and likewise a corresponding French sequence npf ppf. Possible interpretations of this are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXTENSIONS", "sec_num": null }, { "text": "1. The prepositional phrase attaches to the noun phrase in both languages. 2. The prepositional phrase attaches to the noun phrase in one language and does not in the other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXTENSIONS", "sec_num": null }, { "text": "3. The prepositional phrase does not attach to the noun phrase in either language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXTENSIONS", "sec_num": null }, { "text": "If the prepositional phrases attach to the noun phrases in both languages, they are likely to be repeated in most instances of the noun phrase; it is less likely that the same prepositional phrase will be used adverbially with each instance of the noun phrase. This provides a heuristic method for reducing ambiguity in noun phrases that occur several times. The only modifications required to the algorithm are that the additional possible noun phrases and correspondences between them must be included. Given thresholds on the number of occurrences and on the probability of the correspondence, the most likely correspondence can be predicted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXTENSIONS", "sec_num": null },
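{ "text": "One possible shape for the resulting decision rule is sketched below. The threshold values, the helper names, and the exact comparison are assumptions; the paper says only that both analyses are included as candidate noun phrases and that thresholds on occurrence counts and correspondence probabilities select the most likely alternative.

```python
# long_np / short_np: index positions of the analyses with and without
# the embedded prepositional phrase. counts: occurrence count per English
# phrase. P: correspondence probabilities from the estimator sketched
# earlier. best_target: most probable French phrase per English phrase.
def prefer_attachment(long_np, short_np, counts, P, best_target,
                      min_count=3, min_prob=0.5):
    if counts.get(long_np, 0) < min_count:
        return short_np            # too rare for the heuristic to apply
    p_long = P.get((long_np, best_target.get(long_np)), 0.0)
    p_short = P.get((short_np, best_target.get(short_np)), 0.0)
    if p_long >= min_prob and p_long >= p_short:
        return long_np             # the PP attaches in both languages
    return short_np
```

", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXTENSIONS", "sec_num": null },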
{ "text": "Including Context: In the algorithm, correspondences between source and target noun phrases are considered irrespective of other correspondences in an alignment. This does not make the best use of the information available, and can be improved upon. For example, consider the following alignment:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXTENSIONS", "sec_num": null }, { "text": "Ei: \"The Bill was introduced just before Christmas.\" Fi: \"Le projet de loi a \u00e9t\u00e9 pr\u00e9sent\u00e9 juste avant le cong\u00e9 des F\u00eates.\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXTENSIONS", "sec_num": null }, { "text": "Here it is assumed that there are many instances of the correspondence between \"Bill\" and \"projet de loi\", but only one instance of \"Christmas\" and \"cong\u00e9 des F\u00eates\". This suggests that \"Bill\" corresponds to \"projet de loi\" with a high probability and that \"Christmas\" likewise corresponds strongly to \"cong\u00e9 des F\u00eates\". However, the model will assert that \"Christmas\" corresponds to \"projet de loi\" and to \"cong\u00e9 des F\u00eates\" with equal probability, no matter how likely the correspondence between \"Bill\" and \"projet de loi\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXTENSIONS", "sec_num": null }, { "text": "The model can be refined to reflect this situation by considering the joint probability that a target npF(t) corresponds to a source npE(s) and that all the other possible correspondences in the alignment are also produced. This situation is very similar to that involved in training HMM text taggers, where joint probabilities are computed that a particular word corresponds to a particular part-of-speech and the rest of the words in the sentence are also generated (e.g. [Cutting et al., 1992]).", "cite_spans": [ { "start": 475, "end": 497, "text": "[Cutting et al., 1992]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "EXTENSIONS", "sec_num": null },
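{ "text": "One way to formalize this refinement (an assumption here; the paper does not give an equation for it) is to weight each candidate pairing by the probability of completing the rest of the alignment, summing over the one-to-one assignments A that pair the remaining English phrases of Ei with the remaining French phrases of Fi:

P'_r(s,t) \\propto P_r(s,t) \\sum_{A} \\prod_{(s',t') \\in A} P_r(s',t')

Under this joint view, any assignment that pairs \"Christmas\" with \"cong\u00e9 des F\u00eates\" also contains the strong factor for \"Bill\" and \"projet de loi\" and therefore dominates the sum, giving the desired behaviour.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EXTENSIONS", "sec_num": null },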
{ "text": "The algorithm described in this paper provides a practical means for obtaining correspondences between noun phrases in a bilingual corpus. Linguistic structure is used in the form of noun phrase recognizers to select phrases for a stochastic model which serves as a means of minimizing errors due to the approximations inherent in the correspondence model. The algorithm is robust, and extensible in several ways.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "CONCLUSION", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Word sense disambiguation using statistical methods", "authors": [ { "first": "P. F.", "middle": [], "last": "Brown", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "264--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Brown et al., 1991a] P. F. Brown, J. C. Lai, and R. L. Mercer. Aligning sentences in parallel corpora. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pages 169-176, Berkeley, CA, June 1991. [Brown et al., 1991b] P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. Word sense disambiguation using statistical methods. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pages 264-270, Berkeley, CA, June 1991.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Analysis, statistical transfer, and synthesis in machine translation", "authors": [ { "first": "P. F.", "middle": [], "last": "Brown", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation", "volume": "", "issue": "", "pages": "83--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Brown et al., 1992] P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, J. D. Lafferty, and R. L. Mercer. Analysis, statistical transfer, and synthesis in machine translation. In Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation, pages 83-100, Montreal, Canada, June 1992.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Concordances for parallel text", "authors": [ { "first": "K", "middle": [ "W" ], "last": "Church", "suffix": "" }, { "first": "W", "middle": [ "A" ], "last": "Gale", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the Seventh Annual Conference of the UW Center for the New OED and Text Research", "volume": "", "issue": "", "pages": "40--62", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Church and Gale, 1991] K. W. Church and W. A. Gale. Concordances for parallel text. In Proceedings of the Seventh Annual Conference of the UW Center for the New OED and Text Research, pages 40-62, September 1991.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Maximum likelihood from incomplete data via the EM algorithm", "authors": [ { "first": "D", "middle": [], "last": "Cutting", "suffix": "" }, { "first": "J", "middle": [], "last": "Kupiec", "suffix": "" }, { "first": "J", "middle": [], "last": "Pedersen", "suffix": "" }, { "first": "P", "middle": [], "last": "Sibun", "suffix": "" } ], "year": 1977, "venue": "Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics", "volume": "39", "issue": "", "pages": "177--184", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Cutting et al., 1992] D. Cutting, J. Kupiec, J. Pedersen, and P. Sibun. A practical part-of-speech tagger. In Proceedings of the Third Conference on Applied Natural Language Processing, Trento, Italy, April 1992. ACL. [Dagan et al., 1991] I. Dagan, A. Itai, and U. Schwall. Two languages are more informative than one. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pages 130-137, Berkeley, CA, June 1991. [Dempster et al., 1977] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B39:1-38, 1977. [Gale and Church, 1991a] W. A. Gale and K. W. Church. Identifying word correspondences in parallel texts. In Proceedings of the Fourth DARPA Speech and Natural Language Workshop, pages 152-157, Pacific Grove, CA, February 1991. Morgan Kaufmann. [Gale and Church, 1991b] W. A. Gale and K. W. Church. A program for aligning sentences in bilingual corpora. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pages 177-184, Berkeley, CA, June 1991.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "How to compile a bilingual collocational lexicon automatically", "authors": [ { "first": "M", "middle": [], "last": "Kay", "suffix": "" }, { "first": "M", "middle": [], "last": "R\u00f6scheisen", "suffix": "" }, { "first": "J", "middle": [ "M" ], "last": "Kupiec", "suffix": "" }, { "first": "F", "middle": [], "last": "Smadja", "suffix": "" } ], "year": 1988, "venue": "Proceedings of the AAAI-92 Workshop on Statistically-Based NLP Techniques", "volume": "", "issue": "", "pages": "225--242", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Kay and R\u00f6scheisen, 1988] M. Kay and M. R\u00f6scheisen. Text-translation alignment. Technical Report P90-00143, Xerox Palo Alto Research Center, 3333 Coyote Hill Rd., Palo Alto, CA 94304, June 1988. [Kupiec, 1992] J. M. Kupiec. Robust part-of-speech tagging using a hidden Markov model. Computer Speech and Language, 6:225-242, 1992.
[Smadja, 1992] F. Smadja. How to compile a bilingual collocational lexicon automatically. In C. Weir, editor, Proceedings of the AAAI-92 Workshop on Statistically-Based NLP Techniques, San Jose, CA, July 1992.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Figure 1: Component Layout", "type_str": "figure", "uris": null }, "FIGREF1": { "num": null, "text": "Figure 2: The Algorithm. Computation proceeds by evaluating Equation (1), then Equation (2), and then iteratively applying Equations (3) and (2), with r increasing at each successive iteration. The argument s refers to the English noun phrase npE(s) having position s in the English index, and the argument t refers to the French noun phrase npF(t) at position t in the French index.", "type_str": "figure", "uris": null }, "TABREF3": { "type_str": "table", "html": null, "text": "Other correspondences", "content": "" } } } }