{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:48:10.933799Z" }, "title": "Transformer-GCRF: Recovering Chinese Dropped Pronouns with General Conditional Random Fields", "authors": [ { "first": "Jingxuan", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing University of Posts and Telecommunications", "location": {} }, "email": "" }, { "first": "Kerui", "middle": [], "last": "Xu", "suffix": "", "affiliation": {}, "email": "xukerui@bupt.edu.cn" }, { "first": "Jun", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing University of Posts and Telecommunications", "location": {} }, "email": "junxu@ruc.edu.cn" }, { "first": "Si", "middle": [], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing University of Posts and Telecommunications", "location": {} }, "email": "lisi@bupt.edu.cn" }, { "first": "Sheng", "middle": [], "last": "Gao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing University of Posts and Telecommunications", "location": {} }, "email": "gaosheng@bupt.edu.cn" }, { "first": "Jun", "middle": [], "last": "Guo", "suffix": "", "affiliation": { "laboratory": "", "institution": "Beijing University of Posts and Telecommunications", "location": {} }, "email": "guojun@bupt.edu.cn" }, { "first": "Ji-Rong", "middle": [], "last": "Wen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Renmin University of China", "location": {} }, "email": "jirong.wen@gmail.com" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brandeis University", "location": {} }, "email": "xuen@brandeis.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Pronouns are often dropped in Chinese conversations and recovering the dropped pronouns is important for NLP applications such as Machine Translation. Existing approaches usually formulate this as a sequence labeling task of predicting whether there is a dropped pronoun before each token and its type. Each utterance is considered to be a sequence and labeled independently. Although these approaches have shown promise, labeling each utterance independently ignores the dependencies between pronouns in neighboring utterances. Modeling these dependencies is critical to improving the performance of dropped pronoun recovery. In this paper, we present a novel framework that combines the strength of Transformer network with General Conditional Random Fields (GCRF) to model the dependencies between pronouns in neighboring utterances. Results on three Chinese conversation datasets show that the Transformer-GCRF model outperforms the state-of-the-art dropped pronoun recovery models. Exploratory analysis also demonstrates that the GCRF did help to capture the dependencies between pronouns in neighboring utterances, thus contributes to the performance improvements.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Pronouns are often dropped in Chinese conversations and recovering the dropped pronouns is important for NLP applications such as Machine Translation. Existing approaches usually formulate this as a sequence labeling task of predicting whether there is a dropped pronoun before each token and its type. Each utterance is considered to be a sequence and labeled independently. 
Although these approaches have shown promise, labeling each utterance independently ignores the dependencies between pronouns in neighboring utterances. Modeling these dependencies is critical to improving the performance of dropped pronoun recovery. In this paper, we present a novel framework that combines the strength of Transformer network with General Conditional Random Fields (GCRF) to model the dependencies between pronouns in neighboring utterances. Results on three Chinese conversation datasets show that the Transformer-GCRF model outperforms the state-of-the-art dropped pronoun recovery models. Exploratory analysis also demonstrates that the GCRF did help to capture the dependencies between pronouns in neighboring utterances, thus contributes to the performance improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In pro-drop languages such as Chinese, pronouns can be dropped as the identity of the pronoun can be inferred from the context, and this happens more frequently in conversations (Yang et al., 2015) . Recovering dropped pronouns (DPs) is a critical task for many NLP applications such as Machine Translation where the dropped pronouns need to be translated explicitly in the target language (Wang et al., 2016a (Wang et al., ,b, 2018 . Recovering dropped pronoun is different from traditional pronoun resolution tasks (Zhao and Ng, 2007; A 4 : (\u6211) \u6253\u5370 \u4e24 \u4efd \u5427 \uff0c \u5f20\u5e06 \u662f\u4e0d\u662f \u4e5f \u9700\u8981 \uff1f (I) will print two copies. Does Fan Zhang also need it?", "cite_spans": [ { "start": 178, "end": 197, "text": "(Yang et al., 2015)", "ref_id": "BIBREF23" }, { "start": 390, "end": 409, "text": "(Wang et al., 2016a", "ref_id": "BIBREF18" }, { "start": 410, "end": 432, "text": "(Wang et al., ,b, 2018", "ref_id": null }, { "start": 517, "end": 536, "text": "(Zhao and Ng, 2007;", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "B 4 : \u6211 \u6253\u7535\u8bdd \u95ee\u95ee (\u4ed6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "I will ask (him) about it by phone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(I) need to print out the travel itinerary and invitation letter and bring them with me.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: A conversation snippet between participant A and B. The dropped pronouns are shown in the brackets, and the dialogue patterns are marked with blue arrows.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reply Expansion Acknowledge", "sec_num": null }, { "text": "2018), which aim to resolve the anaphoric pronouns to their antecedents. In dropped pronoun recovery, we consider both anaphoric and nonanaphoric pronouns, and we do not directly resolve the dropped pronoun to its antecedent, which is infeasible for non-anaphoric pronouns. We recover the dropped pronoun as one of 17 types pronouns pre-defined in (Yang et al., 2015) , which include five types of abstract pronouns corresponding to the non-anaphoric pronouns. Thus traditional rulebased pronoun resolution methods are not suitable for recovering dropped pronouns. 
Existing approaches formulate dropped pronoun recovery as a sequence labeling task of predicting whether a pronoun has been dropped before each token and the type of the dropped pronoun. For example, Yang et al. (2015) first studied this problem in SMS data and utilized a Maximum Entropy classifier to recover dropped pronouns. Deep neural networks such as Multi-Layer Perceptrons (MLPs) and structured attention networks have also been used to tackle this problem (Zhang et al., 2016; . Giannella et al. (2017) used a linear-chain CRF to model the dependency be-tween the sequence of predictions in a utterance.", "cite_spans": [ { "start": 348, "end": 367, "text": "(Yang et al., 2015)", "ref_id": "BIBREF23" }, { "start": 765, "end": 783, "text": "Yang et al. (2015)", "ref_id": "BIBREF23" }, { "start": 1031, "end": 1051, "text": "(Zhang et al., 2016;", "ref_id": "BIBREF26" }, { "start": 1054, "end": 1077, "text": "Giannella et al. (2017)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Reply Expansion Acknowledge", "sec_num": null }, { "text": "Although these models have achieved various degrees of success, they all assume that each utterance in a conversation should be labeled independently. This practice overlooks the dependencies between dropped pronouns in neighboring utterances, and results in sequences of predicted dropped pronouns are incompatible with one another. We illustrate this problem through an example in Figure 1 , in which the dropped pronouns are shown in brackets. The pronoun can be dropped as a subject at the beginning of a utterance, or as an object in the middle of a utterance. Pronouns dropped at the beginning of consecutive utterances usually have strong dependencies that pattern with three types of dialogue transitions (i.e., Reply, Expansion and Acknowledgment) presented in (Xue et al., 2016) . For example, in Figure 1 , the pronoun in the second utterance B 1 is \"\u6211 (I)\", the dropped pronoun in the third utterance B 2 should also be \"\u6211 (I)\" since B 2 is an expansion of B 1 by the same speaker. Thus modeling the dependency between pronouns in adjacent sentences is helpful to recover pronoun dropped at utterance-initial positions. In contrast, the pronoun \"\u4ed6 (him)\" dropped as an object in utterance B 4 should be recovered by capturing referent semantics from the context and modeling token dependencies in the same utterance.", "cite_spans": [ { "start": 770, "end": 788, "text": "(Xue et al., 2016)", "ref_id": "BIBREF21" } ], "ref_spans": [ { "start": 383, "end": 391, "text": "Figure 1", "ref_id": null }, { "start": 807, "end": 815, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Reply Expansion Acknowledge", "sec_num": null }, { "text": "To model the dependencies between predictions in the conversation snippet, we propose a novel framework called Transformer-GCRF that combines the strength of the Transformer model (Vaswani et al., 2017) in representation learning and the capacity of general Conditional Random Fields (GCRF) to model the dependencies between predictions. In the GCRF, a vertical chain is designed to capture the pronoun dependencies between the neighboring utterances, and horizontal chains are used for modeling the prediction dependencies inside each utterance. In this way, Transformer-GCRF successfully models the cross-utterance pronoun dependencies as well as the intra-utterance prediction dependencies simultaneously. 
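To make the formulation concrete, the following toy sketch (constructed purely for illustration; it is not an excerpt from the annotated corpora) pairs a two-utterance snippet with per-token labels, where 'None' marks positions with no dropped pronoun:

# Toy example: two consecutive utterances by the same speaker, tokenized,
# with one label per token. "None" means no pronoun is dropped before the
# token; any other label is the type of the recovered pronoun.
snippet = [
    ["需要", "打印", "行程单"],       # "(I) need to print the itinerary"
    ["顺便", "带", "上", "邀请函"],   # "(I) will also bring the invitation letter"
]
labels = [
    ["我", "None", "None"],
    ["我", "None", "None", "None"],
]
# The vertical chain of the GCRF links the two utterance-initial labels
# ("我" -> "我", an expansion by the same speaker), while horizontal chains
# link the labels within each utterance.
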
Experimental results on three conversation datasets show that Transformer-GCRF significantly outperforms the state-of-the-art recovery models. We also conduct ablative experiments that demonstrate the improvement in performance of our Transformer-GCRF model derives both from the Transformer encoder and the ability of GCRF layer to model the dependencies between dropped pronouns in neighboring utterances. All code is available at https://github. com/ningningyang/Transformer-GCRF.", "cite_spans": [ { "start": 180, "end": 202, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Reply Expansion Acknowledge", "sec_num": null }, { "text": "The major contributions of the paper are summarized as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reply Expansion Acknowledge", "sec_num": null }, { "text": "\u2022 We conduct statistical study on pronouns dropped at the beginning of consecutive utterances in conversational corpus, and observe that modeling the dependencies between pronouns in neighboring utterances is important to improve the performance of dropped pronoun recovery. \u2022 We propose a novel Transformer-GCRF approach to model both intra-utterance dependencies between predictions in a utterance and cross-utterance dependencies between dropped pronouns in neighboring utterance. The model jointly predicts all dropped pronouns in an entire conversation snippet. the dependencies between pronouns in neighboring utterances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reply Expansion Acknowledge", "sec_num": null }, { "text": "Context \"& Context $& Context \"' Sentence \"# Sentence \"& Sentence $& Sentence \"' Sentence \"& Context \"& \" # $ # $ & \" ' \" & (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reply Expansion Acknowledge", "sec_num": null }, { "text": "Zero pronoun resolution (Zhao and Ng, 2007; Kong and Zhou, 2010; Chen and Ng, 2016; Yin et al., 2017 Yin et al., , 2018 ) is a line of research closely related to dropped pronoun recovery. The difference between these two tasks is that zero pronoun resolution focuses on resolving anaphoric pronouns to their antecedents assuming the position of the dropped pronoun is already known. However, in dropped pronoun recovery, we consider both anaphoric and non-anaphoric pronouns, and attempt to recover the type of dropped pronoun but not its referent. Su et al. (2019) also presented a new utterance rewriting task which improves the multi-turn dialogue modeling through recovering missing information with coreference.", "cite_spans": [ { "start": 24, "end": 43, "text": "(Zhao and Ng, 2007;", "ref_id": "BIBREF27" }, { "start": 44, "end": 64, "text": "Kong and Zhou, 2010;", "ref_id": "BIBREF5" }, { "start": 65, "end": 83, "text": "Chen and Ng, 2016;", "ref_id": "BIBREF0" }, { "start": 84, "end": 100, "text": "Yin et al., 2017", "ref_id": "BIBREF24" }, { "start": 101, "end": 119, "text": "Yin et al., , 2018", "ref_id": "BIBREF25" }, { "start": 550, "end": 566, "text": "Su et al. (2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Zero pronoun resolution", "sec_num": "2.2" }, { "text": "Conditional Random Fields (CRFs) are commonly used in sequence labeling. It models the conditional probability of a label sequence given a corresponding sequence of observations. Lafferty et al. 
(2001) made a first-order Markov assumption among labels and proposed a linear-chain structure that can be decoded efficiently with the Viterbi algorithm. Sutton et al. (2004) introduced dynamic CRFs to model the interactions between two tasks and jointly solve the two tasks when they are conditioned on the same observation. Zhu et al. (2005) introduced two-dimensional CRFs to model the dependency between neighborhoods on a 2D grid to extract object information from the web. Sut-ton et al. 2012also explored how to generalize linear-chain CRFs to general graphs. CRFs have also been combined with powerful neural networks to tackle sequence labeling problems in NLP tasks such as POS tagging and Named Entity Recognition (NER) (Lample et al., 2016; Ma and Hovy, 2016; , but existing research has not explored how to combine deep neural networks with general CRFs.", "cite_spans": [ { "start": 350, "end": 370, "text": "Sutton et al. (2004)", "ref_id": "BIBREF13" }, { "start": 522, "end": 539, "text": "Zhu et al. (2005)", "ref_id": "BIBREF28" }, { "start": 927, "end": 948, "text": "(Lample et al., 2016;", "ref_id": "BIBREF7" }, { "start": 949, "end": 967, "text": "Ma and Hovy, 2016;", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Conditional random fields", "sec_num": "2.3" }, { "text": "We start by formalizing the dropped pronoun recovery task as follows. Given a Chinese conversation snippet", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach: Transformer-GCRF", "sec_num": "3" }, { "text": "X = (x 1 , \u2022 \u2022 \u2022 , x n )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach: Transformer-GCRF", "sec_num": "3" }, { "text": "which consists of n pro-drop utterances, where the i-th utterance", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach: Transformer-GCRF", "sec_num": "3" }, { "text": "x i = (x i1 , \u2022 \u2022 \u2022 , x im i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach: Transformer-GCRF", "sec_num": "3" }, { "text": "is a sequence of m i tokens, and additionally given a set of k possible", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach: Transformer-GCRF", "sec_num": "3" }, { "text": "labels Y = {y 1 , \u2022 \u2022 \u2022 , y k\u22121 } \u222a {None}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach: Transformer-GCRF", "sec_num": "3" }, { "text": "where each y j corresponds to a pre-defined pronoun (Yang et al., 2015) or 'None', which means no pronoun is dropped, the goal of our task is to assign a label y \u2208 Y to each token in X to indicate whether a pronoun is dropped before this token and the type of pronoun. We model this task as the problem of maximizing the conditional probability p(Y|X), where Y is the label sequence assigned to the tokens in X. The conditional probability of a label assignment Y given the whole conversation snippet X can be written as:", "cite_spans": [ { "start": 52, "end": 71, "text": "(Yang et al., 2015)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Our Approach: Transformer-GCRF", "sec_num": "3" }, { "text": "p(Y|X) = e s(X,Y) Y\u2208Y X e s(X, Y) ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach: Transformer-GCRF", "sec_num": "3" }, { "text": "where s(X, Y) denotes score of the sequences of predictions in the conversation snippet. 
The denominator is known as partition function, and Y X contains all possible tag sequences for the conversation snippet X.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach: Transformer-GCRF", "sec_num": "3" }, { "text": "We score each pair of (X, Y) with our proposed Transformer-GCRF, as shown in Figure 2 . When pre-processing the inputs, we attach a context to each pro-drop utterance x n in the snippet X. The context C n = {x n\u22125 , ...x n\u22121 , x n+1 , x n+2 } consists of the previous five utterances as well as the next two utterances following the practices in , and provides referent related contextual information to help recover the dropped pronouns. The representation layer uses the Transformer structure to encode the context C n and generates representations for tokens in utterance x n from the decoder. The prediction layer then utilizes a generalized CRF to model the cross-utterance and inter-utterance dependencies between the predictions in the conversation snippet, and outputs the predicted sequence for tokens in the snippet.", "cite_spans": [], "ref_spans": [ { "start": 77, "end": 85, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Overview of Transformer-GCRF", "sec_num": "3.1" }, { "text": "We employ the encoder-decoder structure of Transformer (Vaswani et al., 2017) to generate the representations for the tokens in pro-drop utterance x i and context C i separately.", "cite_spans": [ { "start": 55, "end": 77, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Representation layer", "sec_num": "3.2" }, { "text": "The context encoder first unfolds all tokens in the context C i into a linear sequence as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context encoder", "sec_num": "3.2.1" }, { "text": "(x i\u22125,1 , x i\u22125,2 , ..., x i+2,m i+2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context encoder", "sec_num": "3.2.1" }, { "text": ", and then inserts the delimiter '[SEP]' between each pair of utterances. Following the Transformer model (Vaswani et al., 2017) , the input embedding of each token x k,l is the sum of its word embedding WE(x k,l ), position embedding POE(x k,l ), and speaker embedding PAE(x k,l ) as:", "cite_spans": [ { "start": 106, "end": 128, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Context encoder", "sec_num": "3.2.1" }, { "text": "E(x k,l ) = WE(x k,l ) + POE(x k,l ) + PAE(x k,l ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context encoder", "sec_num": "3.2.1" }, { "text": "The token embeddings E(x k,l ) are then fed into the encoder, which is a stack of L encoding blocks. 
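As a concrete illustration, the input preparation just described can be sketched in PyTorch as follows; the class and argument names are our own assumptions rather than the released implementation, and the 512-dimensional setting simply follows the configuration reported in the experiments:

import torch
import torch.nn as nn

class ContextInputEmbedding(nn.Module):
    # Sketch of E(x_{k,l}) = WE(x_{k,l}) + POE(x_{k,l}) + PAE(x_{k,l}).
    def __init__(self, vocab_size, max_len, num_speakers, d_model=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)       # WE
        self.pos_emb = nn.Embedding(max_len, d_model)            # POE
        self.speaker_emb = nn.Embedding(num_speakers, d_model)   # PAE (speaker)

    def forward(self, token_ids, speaker_ids):
        # token_ids and speaker_ids have shape (batch, seq_len); the context
        # utterances are assumed to be already unfolded into one sequence
        # with '[SEP]' tokens inserted between utterances.
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        return (self.word_emb(token_ids)
                + self.pos_emb(positions).unsqueeze(0)
                + self.speaker_emb(speaker_ids))

The embeddings produced this way play the role of H^(0) below.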
Each block contains two sub-layers (i.e., a selfattention layer and a feed-forward layer) as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context encoder", "sec_num": "3.2.1" }, { "text": "H (l) = FNN(SelfATT(H (l\u22121) Q , H (l\u22121) K , H (l\u22121) V )), (1) for l = 1, \u2022 \u2022 \u2022 , L,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context encoder", "sec_num": "3.2.1" }, { "text": "where 'FNN' and 'SelfATT' denotes the feed-forward and self-attention networks respectively, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context encoder", "sec_num": "3.2.1" }, { "text": "H (0) = [E(x i\u22122,1 ), E(x i\u22122,2 ), \u2022 \u2022 \u2022 , E(x i+1,m i+1 )].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context encoder", "sec_num": "3.2.1" }, { "text": "In Equation 1, the self-attention layer first projects the input as a query matrix (H", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context encoder", "sec_num": "3.2.1" }, { "text": "(l\u22121) Q ), a key matrix (H (l\u22121) K )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context encoder", "sec_num": "3.2.1" }, { "text": ", and a value matrix (H", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context encoder", "sec_num": "3.2.1" }, { "text": "(l\u22121) V", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context encoder", "sec_num": "3.2.1" }, { "text": "). A multi-head attention mechanism is then applied to these three matrices to encode the input tokens in the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context encoder", "sec_num": "3.2.1" }, { "text": "To generate the representations for tokens in the pro-drop utterance x i and exploit referent information from its context C i , we utilize the decoder component of the Transformer to represent x i . Similar to the context encoder, the inputs to the utterance decoder are the embeddings of the tokens. Each embedding E(x i,j ) is also a sum of its word embedding, position embedding, and speaker embedding. Then, the input to the decoder, denoted as S (0) , is a concatenation of all the token embeddings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance decoder", "sec_num": "3.2.2" }, { "text": "S (0) i = [E(x i,1 ), E(x i,2 ), \u2022 \u2022 \u2022 , E(x i,m i )].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance decoder", "sec_num": "3.2.2" }, { "text": "The decoder is still a stack of L decoding blocks. 
Each decoding block Dec(\u2022) contains three sublayers (i.e., a self-attention layer, an interaction attention layer, and a feed-forward layer) as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance decoder", "sec_num": "3.2.2" }, { "text": "S (l) i =Dec(S (l\u22121) i , H (L) i ) =FFN(InterATT(SelfATT(S (l\u22121) i ), H (L) i )), for l = 1, \u2022 \u2022 \u2022 , L, where FFN is a feed-forward network, SelfATT is a self-attention network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance decoder", "sec_num": "3.2.2" }, { "text": "Finally, the output states of the decoder S (L) are transformed into logits through a two-layer MLP network as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance decoder", "sec_num": "3.2.2" }, { "text": "P = W 1 \u2022 tanh(W 2 \u2022 S (L) + b 2 ) + b 1 , (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance decoder", "sec_num": "3.2.2" }, { "text": "where the logits matrix P of size n \u00d7 m \u00d7 k will be fed into a subsequent prediction layer. k is the number of distinct tags, and each element P i,j,l refers to the emission score of the l-th tag of the j-th word in the i-th utterance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utterance decoder", "sec_num": "3.2.2" }, { "text": "We utilize an elaborately designed general conditional random fields (GCRF) layer to recover dropped pronouns by modeling cross-utterance and intra-utterance dependencies between dropped pronouns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GCRF layer", "sec_num": "3.3" }, { "text": "\u80fd \u542c\u89c1 \u5417 ? \u6211 \u542c \u4e0d \u89c1 \u3002 (I) (hear) (not) (Can) (hear) (Aha)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GCRF layer", "sec_num": "3.3" }, { "text": "\u63d2\u9519 \u8033\u673a \u4e86 \u5427 ?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GCRF layer", "sec_num": "3.3" }, { "text": "(plug wrong) (earphone)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GCRF layer", "sec_num": "3.3" }, { "text": "Step 1: initial graph", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u54c8\u54c8", "sec_num": null }, { "text": "Step 2-1 : after processing an OVP", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u54c8\u54c8", "sec_num": null }, { "text": "Step 2-2 : after processing an interjection \uff0c Figure 3 : The GCRF graph construction.", "cite_spans": [], "ref_spans": [ { "start": 46, "end": 54, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "\u54c8\u54c8", "sec_num": null }, { "text": "Step 1 constructs a initial graph. The tokens in each utterance are shown and the nodes corresponding to the first token in each utterance are highlighted in red; step 2-1 processes an OVP (in the second utterance) and adds an observed (shaded) node for token \"\u6211/(I)\"; step 2-2 processes an interjection (in the third utterance) and skips the node corresponding to the token \"\u54c8\u54c8/(Aha)\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u54c8\u54c8", "sec_num": null }, { "text": "Given a conversation snippet, a graph is constructed where each node, corresponding to a token, is a random variable y that represents the type of the pronoun defined in Y. 
The edges in the graph are defined by the following two steps:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph construction in GCRF", "sec_num": "3.3.1" }, { "text": "Step 1: Initial graph construction: We first split each compound utterance into several simple utterances by punctuation, and connect the nodes corresponding to the tokens in the same simple utterance with horizontal edges to model intra-utterance dependencies. Then we link the first tokens in consecutive utterances with a vertical chain to model the cross-utterance dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph construction in GCRF", "sec_num": "3.3.1" }, { "text": "Step 1 in Figure 3 shows an initial graph for a conversation snippet.", "cite_spans": [], "ref_spans": [ { "start": 10, "end": 18, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Graph construction in GCRF", "sec_num": "3.3.1" }, { "text": "Step 2: Vertical edge refinement: Though the vertical chain constructed in Step 1 can capture most of the cross-utterance dependencies, they can be further refined considering the following two general cases in conversation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph construction in GCRF", "sec_num": "3.3.1" }, { "text": "\u2022 Overt pronouns (OVP): If an OVP appears as the first token in a utterance, it is clear that there is a dependency between the OVP and the dropped pronoun in neighboring utterances. To model this phenomenon, an observed node (with the value of its pronoun type) is inserted in the graph, and the vertical chain linked to the original node is moved to this new node.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Graph construction in GCRF", "sec_num": "3.3.1" }, { "text": "Step 2-1 in Figure 3 shows the refined graph after OVPs are processed. \u2022 Interjections: If the first token in an utterance is an interjection (e.g., \"\u55ef/ Well\", \"\u54c8 \u54c8/ Aha\" etc.), it is better to skip the utterance in the vertical chain because the short utterance consisting of only interjections and punctuation does not provide useful information about the dependencies between pronouns.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Graph construction in GCRF", "sec_num": "3.3.1" }, { "text": "Step 2-2 in Figure 3 shows the refined graph after interjections are processed.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Graph construction in GCRF", "sec_num": "3.3.1" }, { "text": "It is obvious that the GCRF is a special case of the 2D CRFs. To predict the labels of the nodes following the practices in (Zhu et al., 2005) , we employ a modified Viterbi algorithm in which the nodes in the vertical chain are decoded first. Specifically, the constructed graph consists of two types of cliques: one from the horizontal chains and the other from the vertical chain. 
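To summarize the construction procedurally, the following sketch (a schematic under assumed data structures, not the released code) builds the horizontal edges and the refined vertical chain; here utterances are token lists already split at punctuation, ovp_labels maps an overt pronoun token to its label in Y, and interjections is a set of tokens such as "嗯" and "哈哈":

def build_gcrf_graph(utterances, ovp_labels, interjections):
    # Step 1: horizontal edges between adjacent tokens of each (simple) utterance.
    horizontal_edges = []
    for i, tokens in enumerate(utterances):
        horizontal_edges += [((i, j), (i, j + 1)) for j in range(len(tokens) - 1)]

    # Step 1 + Step 2: vertical chain over utterance-initial positions, with an
    # observed node for an overt pronoun (Step 2-1) and skipping utterances that
    # start with an interjection (Step 2-2).
    vertical_nodes = []
    for i, tokens in enumerate(utterances):
        first = tokens[0]
        if first in interjections:
            continue
        if first in ovp_labels:
            vertical_nodes.append(("observed", ovp_labels[first]))
        else:
            vertical_nodes.append((i, 0))
    vertical_edges = list(zip(vertical_nodes, vertical_nodes[1:]))
    return horizontal_edges, vertical_edges

The horizontal edge lists and the vertical chain produced here correspond to the two types of cliques whose scores are summed next.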
Given the emission score matrix P outputted from the decoder layer (see Section 3.2.1), the joint score s(X, Y) of the predictions can be computed by first computing the sum of horizontal chains and then summing up scores of the transitions in the vertical chain as:", "cite_spans": [ { "start": 124, "end": 142, "text": "(Zhu et al., 2005)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Pronoun prediction", "sec_num": "3.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "s(X, Y) = n i=1 s hi + n\u22121 i=1 A (2) T i ,T i+1 ,", "eq_num": "(3)" } ], "section": "Pronoun prediction", "sec_num": "3.3.2" }, { "text": "s hi = m\u22121 j=1 A (1) y i,j ,y i,j+1 + m j=1 P i,j,y i,j ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pronoun prediction", "sec_num": "3.3.2" }, { "text": "where A (1) and A (2) are the transition matrices of the horizontal chains and the vertical chain, respectively; A i,j indicates the transition score from tag i to tag j; and the node T i is defined as,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pronoun prediction", "sec_num": "3.3.2" }, { "text": "T i = y OVP if the node is an observed OVP y i,1 otherwise where y OVP \u2208 Y is the observed label corresponds to the specific OVP. The first term in Eq. (3) is the score corresponding to the horizontal chain cliques, and the second term corresponds to the vertical chain clique.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pronoun prediction", "sec_num": "3.3.2" }, { "text": "Algorithm 1 Transformer-GCRF Decoding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pronoun prediction", "sec_num": "3.3.2" }, { "text": "Input: The emission score matrix P; Transition matrices A (1) and A (2) . Output: The best path Y * 1: for i = 1, . . . , n do 2: ", "cite_spans": [ { "start": 68, "end": 71, "text": "(2)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Pronoun prediction", "sec_num": "3.3.2" }, { "text": "s hi , bp i \u2190 ForwardScore(P i , A (1) ) 3: end for 4: P h = [s h1 , s h2 , \u2022 \u2022 \u2022 , s hn ] 5: s(X, Y), bp v \u2190 ForwardScore(P h , A (2) ) 6: Y * n,1 \u2190 arg max (s(X, Y)) 7: {Y * 1,1 , \u2022 \u2022 \u2022 , Y * n\u22121,1 }\u2190TraceBack (Y * n,1 , bp v ) 8: for i = 1, ..., n do 9: {Y * i,2 , \u2022 \u2022 \u2022 , Y * i,m i }\u2190TraceBack(Y * i,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pronoun prediction", "sec_num": "3.3.2" }, { "text": "for j = 2, \u2022 \u2022 \u2022 , t do 15: z j \u2190 bp j,z j\u22121 16: end for 17: return {z 2 , \u2022 \u2022 \u2022 , z t } 18: end function 19: return Y *", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Pronoun prediction", "sec_num": "3.3.2" }, { "text": "The sequence that maximizes the conditional probability p(Y|X) is outputted as the prediction:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding the GCRF and Model training", "sec_num": "3.4" }, { "text": "Y * = arg max Y\u2208Y X p(Y|X).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding the GCRF and Model training", "sec_num": "3.4" }, { "text": "A modified Viterbi algorithm is used to find the best labeling sequence. Specifically, we first applies the Viterbi algorithm to decode the vertical chain. 
Then, the vertical chain decoding results are used as the observed nodes in the graph, and the standard Viterbi algorithm is applied to each horizontal chain in parallel. Algorithm 1 shows the Transformer-GCRF decoding process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding the GCRF and Model training", "sec_num": "3.4" }, { "text": "Given a set of labeled conversation snippets D, the model parameters are learned by jointly maximizing the overall log-probabilities of the groundtruth label sequences:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding the GCRF and Model training", "sec_num": "3.4" }, { "text": "max (X,Y)\u2208D log(p(Y|X)).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoding the GCRF and Model training", "sec_num": "3.4" }, { "text": "We evaluate the performance of Transformer-GCRF on three conversation benchmarks: Chinese text message dataset (SMS), OntoNotes Release 5.0, and BaiduZhidao. The Training Test #Sentences #DPs #Sentences #DPs SMS 35,933 28,052 4,346 3,539 TC 6,734 5,090 1,122 774 Zhidao 7,970 5,097 1,406 786 SMS dataset is described in (Yang et al., 2015) and contains 684 text message documents generated by users via SMS or Chat. Following (Yang et al., 2015 , we reserved 16.7% of the training set as the development set, and a separate test set was used to evaluate the models. The OntoNotes Release 5.0 was released in the CoNLL 2012 Shared Task. We used the TC section which consists of transcripts of Chinese telephone conversation speech. The BaiduZhidao dataset is a question answering dialogue corpus collected by (Zhang et al., 2016) . Ten types of dropped pronouns are annotated according to the pronoun annotation guidelines. The statistics of these three benchmarks are reported in Table 1 . 
Baselines: State-of-the-art dropped pronoun recovery models are used as baselines: (1) MEPR (Yang et al., 2015) which leverages a set of elaborately designed features and trains a Maximum Entropy classifier to predict the type of dropped pronoun before each token; (2) NRM (Zhang et al., 2016) which employs two separate MLPs to predict the position and type of a dropped pronoun utilizing representation of words in a fixed-length window; (3) BiGRU which utilizes a bidirectional RNN to encode each token in a pro-drop sentence and makes prediction based on the encoded states; (4) NDPR which models dropped pronoun referents by attending to the context and independently predicts the presence and type of DP for each token.", "cite_spans": [ { "start": 331, "end": 350, "text": "(Yang et al., 2015)", "ref_id": "BIBREF23" }, { "start": 437, "end": 455, "text": "(Yang et al., 2015", "ref_id": "BIBREF23" }, { "start": 819, "end": 839, "text": "(Zhang et al., 2016)", "ref_id": "BIBREF26" }, { "start": 1093, "end": 1112, "text": "(Yang et al., 2015)", "ref_id": "BIBREF23" }, { "start": 1274, "end": 1294, "text": "(Zhang et al., 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 203, "end": 286, "text": "#DPs SMS 35,933 28,052 4,346 3,539 TC 6,734 5,090 1,122 774 Zhidao 7,970", "ref_id": "TABREF4" }, { "start": 991, "end": 998, "text": "Table 1", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Datasets and Experimental Setup Datasets:", "sec_num": "4" }, { "text": "We also compare three variants of Transformer-GCRF as: (1) Transformer-GCRF(w/o refine) which removes Step 2 in Section 3.3.1 during the graph construction process, for exploring the effectiveness of processing OVP and interjections; (2) Transformer which removes the whole GCRF layer that globally optimizes the prediction sequences, and directly adds a MLP layer on the top of Transformer encoder to predict the dropped pronouns. It aims to explore the contribution of Transformer encoder among the total effectiveness of Transformer-", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Datasets and Experimental Setup Datasets:", "sec_num": "4" }, { "text": "Chinese SMS TC of OntoNotes BaiduZhidao P(%) R(%) F P(%) R(%) F P(%) R(%) F MEPR (Yang et al., 2015) 37.27 45.57 38.76 ------NRM (Zhang et al., 2016) 37 Table 2 : Results in terms of precision, recall and F-score produced by the baseline systems and variants of our proposed Transformer-GCRF framework. ' * ' indicates the improvement over the best baseline NDPR is significant (t-tests and p-value \u2264 0.05).", "cite_spans": [ { "start": 81, "end": 100, "text": "(Yang et al., 2015)", "ref_id": "BIBREF23" }, { "start": 129, "end": 149, "text": "(Zhang et al., 2016)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 153, "end": 160, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "(3) NDPR-GCRF which replaces the Transformer structure in the presentation layer with the NDPR model .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GCRF;", "sec_num": null }, { "text": "Training details: In all of the experiments, a vocabulary was first generated based on the entire dataset, and the out-of-vocabulary words are represented as \"UNK\". The length of utterances in a conversation snippet is set as 8 in our work. In Transformer-GCRF, both the encoder and decoder in the Transformer have 512 units in each hidden layer. 
We augment each utterance with a context consisting of seven neighboring utterances according to the practice in . In each experiment, we trained the model for 30 epochs on one GPU, which took more than five hours, and the model with the highest F-score on the development set was selected for testing. Following (Glorot and Bengio, 2010) , in all of the experiments the weight matrices were initialized with uniform sam-", "cite_spans": [ { "start": 660, "end": 685, "text": "(Glorot and Bengio, 2010)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "GCRF;", "sec_num": null }, { "text": "ples from [\u2212 6 r+c , + 6 r+c ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GCRF;", "sec_num": null }, { "text": ", where r and c are the number of rows and columns in the corresponding matrix. Adam optimizer (Kingma and Ba, 2015) is utilized to conduct the optimization.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "GCRF;", "sec_num": null }, { "text": "We apply our Transformer-GCRF model to all three conversation datasets to demonstrate the effectiveness of the model. Table 2 reports the results of our Transformer-GCRF model as well as the baseline models in terms of precision (P), recall (R), and F-score (F).", "cite_spans": [], "ref_spans": [ { "start": 118, "end": 125, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "5.1" }, { "text": "From the results, we can see that our proposed model and its variants outperformed the baselines on all datasets. The best model Transformer-GCRF achieves a gain of 2.58% average absolute improvement across all three datasets in terms of F-score. We also conducted significance tests on all three datasets in terms of F-score. The results show that our method significantly outperforms the best baseline NDPR (p < 0.05). The proposed Transformer-GCRF suffers from performance degradation when", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "5.1" }, { "text": "Step 2 is removed from the graph construction process (i.e., referring to the results of Transformer-GCRF(w/o refine) in Table 2 ), which demonstrates the important role of OVPs in modeling dependencies between different utterances, and the contribution of noise reduction resulting from skipping short utterances starting with interjections. Both our proposed Transformer-GCRF model and the variant Transformer-GCRF(w/o refine) model outperform the variant Transformer, which demonstrates that the effectiveness comes from not only the powerful Transformer encoder, but also the elaborately designed GCRF layer. Moreover, the variant NDPR-GCRF, which encodes the pro-drop utterances with BiGRU as NDPR , still outperforms the original NDPR. This shows that the proposed GCRF is effective in modeling cross-utterance dependencies regardless of the underlying representation.", "cite_spans": [], "ref_spans": [ { "start": 121, "end": 128, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Performance Evaluation", "sec_num": "5.1" }, { "text": "The GCRF model is motivated with a quantitative analysis of our data, which shows that 79.6% of the dropped pronouns serve as the subject of a sentence, and occur at utterance-initial positions. 
The pronouns dropped at the beginning of consecutive utterances are strongly correlated with dialogue patterns and thus modeling conversational structures Figure 4 : Visualization of the transition weight between each pair of pronouns among 16 types of predefined pronouns (i.e., except the category 'None'), obtained from the vertical chain transition matrix A (2) . Darker color indicates higher transition weight between these two types of pronouns.", "cite_spans": [ { "start": 557, "end": 560, "text": "(2)", "ref_id": null } ], "ref_spans": [ { "start": 350, "end": 358, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Motivation by statistical results", "sec_num": "5.2.1" }, { "text": "helps improve recover dropped pronouns. Other pronouns dropped as objects in the middle of a utterance should be recovered by modeling intrautterance dependencies.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation by statistical results", "sec_num": "5.2.1" }, { "text": "To further explore the cross-utterance pronoun dependencies, we collected all pronoun pairs occurring at the beginning of consecutive utterances and classified the dependencies into one of the three dialogue transitions defined in (Xue et al., 2016) . We found that 27.33% of the pairs correspond to reply transition, where the second utterance is a response to the first utterance, and 18.60% of pairs correspond to the acknowledgment transition, where the second utterance is an acknowledgment of the first utterance. In both cases, the utterances involve a shift of speaker, which is accompanied by a shift in the use of personal pronouns. Another 47.79% of the pairs correspond to the expansion transition, where the second utterance is an elaboration of the first utterance and the same pronoun is used.", "cite_spans": [ { "start": 231, "end": 249, "text": "(Xue et al., 2016)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation by statistical results", "sec_num": "5.2.1" }, { "text": "To investigate whether our GCRF model actually learned the dependencies revealed by the quantitative analysis of our corpus, we visualize the transition matrix A (2) of the vertical chain in Figure 4 . We can see that the learned transition matrix matches well with the distribution of dialogue patterns. The matrix shows that the higher transition weights on diagonal correspond to the strong expansion transition in which the same pronoun is used in consecutive utterances and the transition weights between \"\u6211(I)\" and \"\u4f60(you)\" (top-left corner) are high as well, indicating the strong reply transition. Moreover, the acknowledgement transition usually exists from the pronoun \"previous utterance\" to \"\u6211(I)\" or \"\u4f60(you)\".", "cite_spans": [], "ref_spans": [ { "start": 191, "end": 199, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Visualizing transition matrix of GCRF", "sec_num": "5.2.2" }, { "text": "We demonstrate the effectiveness of GCRF by comparing the outputs of NDPR and NDPR-GCRF on the entire test set, and present some concrete cases in Figure 5 . The examples show that the horizontal chains in GCRF contributes by preventing redundant predictions in the same utterance. For example, in the first case, the second pronoun \"\u4f60(you)\" is repeatedly recovered by NDPR since the dependency between the predictions of the first two tokens is ignored. 
The vertical chain contributes by predicting coherent dropped pronouns at the beginning of the utterances. For example, in the second case, the second utterance is a reply of the first one, and NDPR-GCRF recovers these two pronouns correctly by considering their dependency.", "cite_spans": [], "ref_spans": [ { "start": 147, "end": 155, "text": "Figure 5", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Case studies", "sec_num": "5.2.3" }, { "text": "We further study the effectiveness of multi-head attention in Transformer structure. Figure 6 shows an example conversation snippet with three utterances and the pronoun \"\u5b83(it)\" in the last utterance is dropped. The Transformer's attention weights corresponding to three heads which are shown in blue, and the NDPR's attention weights are shown in brown. From the results, we can see that \"head 1\" is responsible for associating \"\u80a1\u7968(stock)\" with \"\u5b83(it)\" (in utterance A 1 ), \"head 2\" is responsible for associating \"\u5b83(it)\" with \"\u5b83(it)\", and \"head 3\" is responsible for collecting noisy information, which is helpful for the training process (Michel et al., 2019; Correia et al., 2019) . This is consistent with the observation in (Vig, 2019) that multi-head attention is powerful because it uses different heads to capture different relations. NDPR, on the other hand, captures all these the relations with a single attention structure. The results explain why Transformer is suitable for dropped pronoun recovery.", "cite_spans": [ { "start": 641, "end": 662, "text": "(Michel et al., 2019;", "ref_id": "BIBREF10" }, { "start": 663, "end": 684, "text": "Correia et al., 2019)", "ref_id": "BIBREF1" }, { "start": 730, "end": 741, "text": "(Vig, 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 85, "end": 93, "text": "Figure 6", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Effects of the Transformer architecture", "sec_num": "5.3" }, { "text": "Besides conducting the performance evaluation and analyzing the effects of different components, we also investigate some typical mistakes made by our Transformer-GCRF model. The task of recovering dropped pronouns consists of first identifying the referent of each dropped pronoun from the context and then recovering the referent as a concrete Chinese pronoun based on the referent semantics. Existing work has focused on modeling referent semantics of the dropped pronoun from context, and globally optimizing the prediction sequences by exploring label dependencies. However, there is also something need to do about how to recover the referent as a proper pronoun based on the referent semantics. For example, in two cases of Figure 7 , the referents of the dropped pronouns are correctly identified, while the final pronoun was recovered as \"(\u4ed6\u4eec/they)\" and \"(\u5b83/it)\" by mistake. We attribute this to that the model needs to be augmented with some common knowledge about how to recover a referent to the proper Chinese pronoun.", "cite_spans": [], "ref_spans": [ { "start": 731, "end": 739, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.4" }, { "text": "In this paper, we presented a novel model for recovering the dropped pronouns in Chinese conversations. The model, referred to as Transformer-GCRF, formulates dropped pronoun recovery as A 1 : \u6211 \u7ed9 \u7237\u7237 \u4e70 \u7684 \u836f \u4ed6 \u5403 \u4e86 \u5417 \uff1f Did my grandfather take the medicine I bought for him? 
Context B 1 : (\u4ed6) \u5403 \u4e86 (He) had taken the medicine. B 1 : (\u4ed6\u4eec) \u5403 \u4e86 (They) had taken the medicine.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "Transformer-GCRF A 1 : \u590d\u5408\u5f0f \u542c\u5199 \u662f \u600e\u4e48 \u505a \uff1f How should I finish compound dictation questions?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gold", "sec_num": null }, { "text": "A 2 : (\u5b83\u4eec) \u5c31 \u662f \u542c\u5199 \u53e5\u2f26 \u5417\uff1f Do (they) require to write down the sentence you hear?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gold", "sec_num": null }, { "text": "A 2 : (\u5b83) \u5c31 \u662f \u542c\u5199 \u53e5\u2f26 \u5417\uff1f Do (it) require to write down the sentence you hear?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Gold", "sec_num": null }, { "text": "Transformer-GCRF Figure 7 : Example errors made by Transformer-GCRF. a sequence labeling problem. Transformer is employed to represent the utterances and GCRF is used to make the final predictions, through capturing both cross-utterance and intra-utterance dependencies between pronouns. Experimental results on three Chinese conversational datasets show that Transformer-GCRF consistently outperforms stateof-the-art baselines.", "cite_spans": [], "ref_spans": [ { "start": 17, "end": 25, "text": "Figure 7", "ref_id": null } ], "eq_spans": [], "section": "Context", "sec_num": null }, { "text": "In the future, we will do some extrinsic evaluation by applying our proposed model in some downstream applications like pronoun resolution, to further explore the effectiveness of modeling cross-utterance dependencies in practical applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Chinese zero pronoun resolution with deep neural networks", "authors": [ { "first": "Chen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2016, "venue": "Meeting of the Association for Computational Linguistic", "volume": "", "issue": "", "pages": "778--788", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Chen and Vincent Ng. 2016. Chinese zero pro- noun resolution with deep neural networks. In Meet- ing of the Association for Computational Linguistic., pages 778-788.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Adaptively sparse transformers. arXiv: Computation and Language", "authors": [ { "first": "M", "middle": [], "last": "Goncalo", "suffix": "" }, { "first": "Vlad", "middle": [], "last": "Correia", "suffix": "" }, { "first": "Andre F T", "middle": [], "last": "Niculae", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goncalo M Correia, Vlad Niculae, and Andre F T Mar- tins. 2019. Adaptively sparse transformers. 
arXiv: Computation and Language.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Dropped personal pronoun recovery in chinese sms", "authors": [ { "first": "Chris", "middle": [], "last": "Giannella", "suffix": "" }, { "first": "K", "middle": [], "last": "Ransom", "suffix": "" }, { "first": "Stacy", "middle": [], "last": "Winder", "suffix": "" }, { "first": "", "middle": [], "last": "Petersen", "suffix": "" } ], "year": 2017, "venue": "Natural Language Engineering", "volume": "", "issue": "", "pages": "905--927", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Giannella, Ransom K Winder, and Stacy Pe- tersen. 2017. Dropped personal pronoun recovery in chinese sms. Natural Language Engineering, pages 905-927.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Understanding the difficulty of training deep feedforward neural networks", "authors": [ { "first": "Xavier", "middle": [], "last": "Glorot", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2010, "venue": "International Conference on Artificial Intelligence and Statistics", "volume": "", "issue": "", "pages": "249--256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xavier Glorot and Yoshua Bengio. 2010. Understand- ing the difficulty of training deep feedforward neural networks. International Conference on Artificial In- telligence and Statistics., pages 249-256.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Adam: A method for stochastic optimization. International Conference on Learning Representations", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. International Conference on Learning Representations.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A tree kernelbased unified framework for chinese zero anaphora resolution", "authors": [ { "first": "Fang", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Guodong", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2010, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "882--891", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fang Kong and Guodong Zhou. 2010. A tree kernel- based unified framework for chinese zero anaphora resolution. In Conference on Empirical Methods in Natural Language Processing., pages 882-891.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "authors": [ { "first": "D", "middle": [], "last": "John", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Lafferty", "suffix": "" }, { "first": "Fernando", "middle": [], "last": "Mccallum", "suffix": "" }, { "first": "", "middle": [], "last": "Pereira", "suffix": "" } ], "year": 2001, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "282--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "John D Lafferty, Andrew Mccallum, and Fernando Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and labeling se- quence data. 
International Conference on Machine Learning, pages 282-289.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Neural architectures for named entity recognition. North American Chapter of the Association for Computational Linguistics", "authors": [ { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Miguel", "middle": [], "last": "Ballesteros", "suffix": "" }, { "first": "Sandeep", "middle": [], "last": "Subramanian", "suffix": "" }, { "first": "Kazuya", "middle": [], "last": "Kawakami", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "260--270", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. North American Chapter of the Association for Com- putational Linguistics., pages 260-270.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Empower sequence labeling with task-aware neural language model. National Conference on Artificial Intelligence", "authors": [ { "first": "Liyuan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jingbo", "middle": [], "last": "Shang", "suffix": "" }, { "first": "F", "middle": [], "last": "Frank", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Gui", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Peng", "suffix": "" }, { "first": "", "middle": [], "last": "Han", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "5253--5260", "other_ids": {}, "num": null, "urls": [], "raw_text": "Liyuan Liu, Jingbo Shang, Frank F Xu, Xiang Ren, Huan Gui, Jian Peng, and Jiawei Han. 2018. Em- power sequence labeling with task-aware neural lan- guage model. National Conference on Artificial In- telligence, pages 5253-5260.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf. Meeting of the Association for Computational Linguistics", "authors": [ { "first": "Xuezhe", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Eduard", "middle": [ "H" ], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "1064--1074", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xuezhe Ma and Eduard H Hovy. 2016. End-to-end sequence labeling via bi-directional lstm-cnns-crf. Meeting of the Association for Computational Lin- guistics., pages 1064-1074.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Are sixteen heads really better than one. arXiv: Computation and Language", "authors": [ { "first": "Paul", "middle": [], "last": "Michel", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one. arXiv: Computation and Language.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Improving multi-turn dialogue modelling with utterance rewriter. 
arXiv: Computation and Language", "authors": [ { "first": "Hui", "middle": [], "last": "Su", "suffix": "" }, { "first": "Xiaoyu", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Rongzhi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Pengwei", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Cheng", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hui Su, Xiaoyu Shen, Rongzhi Zhang, Fei Sun, Peng- wei Hu, Cheng Niu, and Jie Zhou. 2019. Improv- ing multi-turn dialogue modelling with utterance rewriter. arXiv: Computation and Language.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "An introduction to conditional random fields. Foundations and Trends R in Machine Learning", "authors": [ { "first": "Charles", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2012, "venue": "", "volume": "4", "issue": "", "pages": "267--373", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles Sutton, Andrew McCallum, et al. 2012. An introduction to conditional random fields. Founda- tions and Trends R in Machine Learning, 4(4):267- 373.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Dynamic conditional random fields: factorized probabilistic models for labeling and segmenting sequence data", "authors": [ { "first": "A", "middle": [], "last": "Charles", "suffix": "" }, { "first": "Khashayar", "middle": [], "last": "Sutton", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Rohanimanesh", "suffix": "" }, { "first": "", "middle": [], "last": "Mccallum", "suffix": "" } ], "year": 2004, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles A Sutton, Khashayar Rohanimanesh, and An- drew Mccallum. 2004. Dynamic conditional ran- dom fields: factorized probabilistic models for la- beling and segmenting sequence data. International Conference on Machine Learning, page 99.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Dropped pronoun recovery in chinese conversations with knowledge-enriched neural network", "authors": [ { "first": "Jianzhuo", "middle": [], "last": "Tong", "suffix": "" }, { "first": "Jingxuan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Si", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "Chinese Computational Linguistics", "volume": "", "issue": "", "pages": "545--557", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianzhuo Tong, Jingxuan Yang, Si Li, and Sheng Gao. 2019. Dropped pronoun recovery in chinese conver- sations with knowledge-enriched neural network. In Chinese Computational Linguistics, pages 545-557, Cham. 
Springer International Publishing.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Neural Information Processing Systems, pages 5998-6008.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A multiscale visualization of attention in the transformer model. arXiv: Human-Computer Interaction", "authors": [ { "first": "Jesse", "middle": [], "last": "Vig", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jesse Vig. 2019. A multiscale visualization of attention in the transformer model. arXiv: Human-Computer Interaction.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Translating pro-drop languages with reconstruction models", "authors": [ { "first": "Longyue", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhaopeng", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Shuming", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Longyue Wang, Zhaopeng Tu, Shuming Shi, Tong Zhang, Yvette Graham, and Qun Liu. 2018. Trans- lating pro-drop languages with reconstruction mod- els. AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A novel approach to dropped pronoun translation. North American Chapter", "authors": [ { "first": "Longyue", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Zhaopeng", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Way", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Longyue Wang, Zhaopeng Tu, Xiaojun Zhang, Hang Li, Andy Way, and Qun Liu. 2016a. A novel ap- proach to dropped pronoun translation. 
North Amer- ican Chapter of the Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Dropped pronoun generation for dialogue machine translation", "authors": [ { "first": "Longyue", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhaopeng", "middle": [], "last": "Tu", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "International Conference on Acoustics, Speech and Signal Processing", "volume": "", "issue": "", "pages": "6110--6114", "other_ids": {}, "num": null, "urls": [], "raw_text": "Longyue Wang, Xiaojun Zhang, Zhaopeng Tu, Hang Li, and Qun Liu. 2016b. Dropped pronoun gener- ation for dialogue machine translation. In Interna- tional Conference on Acoustics, Speech and Signal Processing, pages 6110-6114.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Deep learning for matching in search and recommendation", "authors": [ { "first": "Jun", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xiangnan", "middle": [], "last": "He", "suffix": "" }, { "first": "Hang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "Foundations and Trends R in Information Retrieval", "volume": "14", "issue": "2-3", "pages": "102--288", "other_ids": { "DOI": [ "10.1561/1500000076" ] }, "num": null, "urls": [], "raw_text": "Jun Xu, Xiangnan He, and Hang Li. 2020. Deep learning for matching in search and recommenda- tion. Foundations and Trends R in Information Re- trieval, 14(2-3):102-288.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Annotating the discourse and dialogue structure of sms message conversations", "authors": [ { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Qishen", "middle": [], "last": "Su", "suffix": "" }, { "first": "Sooyoung", "middle": [], "last": "Jeong", "suffix": "" } ], "year": 2016, "venue": "Linguistic Annotation Workshop on Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "180--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nianwen Xue, Qishen Su, and Sooyoung Jeong. 2016. Annotating the discourse and dialogue structure of sms message conversations. Linguistic Annotation Workshop on Meeting of the Association for Compu- tational Linguistics, pages 180-187.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Recovering dropped pronouns in chinese conversations via modeling their referents", "authors": [ { "first": "Jingxuan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jianzhuo", "middle": [], "last": "Tong", "suffix": "" }, { "first": "Si", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "North American Chapter", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingxuan Yang, Jianzhuo Tong, Si Li, Sheng Gao, Jun Guo, and Nianwen Xue. 2019. Recovering dropped pronouns in chinese conversations via mod- eling their referents. 
In North American Chapter of the Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Recovering dropped pronouns from chinese text messages", "authors": [ { "first": "Yaqin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yalin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" } ], "year": 2015, "venue": "Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "309--313", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaqin Yang, Yalin Liu, and Nianwen Xue. 2015. Re- covering dropped pronouns from chinese text mes- sages. In Meeting of the Association for Computa- tional Linguistics., pages 309-313.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Chinese zero pronoun resolution with deep memory network", "authors": [ { "first": "Qingyu", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Weinan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1309--1318", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qingyu Yin, Yu Zhang, Weinan Zhang, and Ting Liu. 2017. Chinese zero pronoun resolution with deep memory network. In Conference on Empirical Meth- ods in Natural Language Processing., pages 1309- 1318.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Deep reinforcement learning for chinese zero pronoun resolution", "authors": [ { "first": "Qingyu", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Weinan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "569--578", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qingyu Yin, Yu Zhang, Weinan Zhang, Ting Liu, and William Yang Wang. 2018. Deep reinforcement learning for chinese zero pronoun resolution. Meet- ing of the Association for Computational Linguistics, pages 569-578.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Neural recovery machine for chinese dropped pronoun. arXiv: Computation and Language", "authors": [ { "first": "Weinan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Qingyu", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Weinan Zhang, Ting Liu, Qingyu Yin, and Yu Zhang. 2016. Neural recovery machine for chinese dropped pronoun. 
arXiv: Computation and Language.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Identification and resolution of chinese zero pronouns: A machine learning approach", "authors": [ { "first": "Shanheng", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Hwee Tou", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2007, "venue": "Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "541--550", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shanheng Zhao and Hwee Tou Ng. 2007. Identifi- cation and resolution of chinese zero pronouns: A machine learning approach. In Conference on Em- pirical Methods in Natural Language Processing., pages 541-550.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "2d conditional random fields for web information extraction. International Conference on Machine Learning", "authors": [ { "first": "Jun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Zaiqing", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Jirong", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Weiying", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "1044--1051", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jun Zhu, Zaiqing Nie, Jirong Wen, Bo Zhang, and Weiying Ma. 2005. 2d conditional random fields for web information extraction. International Con- ference on Machine Learning, pages 1044-1051.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "text": "Overall architecture of our Transformer-GCRF model.", "uris": null }, "FIGREF1": { "type_str": "figure", "num": null, "text": "Example results of NDPR and NDPR-GCRF. The recovered pronouns are marked with red color and shown in brackets.", "uris": null }, "FIGREF2": { "type_str": "figure", "num": null, "text": "Visualization of multi-head attention in Transformer-GCRF and structured attention in NDPR.", "uris": null }, "TABREF0": { "text": "2017, * Corresponding author", "html": null, "content": "
A 1 : (你) 去 巴西 的 时候 需要 提供 回程 机票 吗 ？
Do (you) need to provide the return ticket when you go to Brazil?
B 1 : (我) 需要
Yes, (I) do.
B 2 : (我) 要 把 行程单 和 邀请函 打印 出来 带着 。
(I) need to print out the itinerary and invitation letter and bring them along.
A 2 : 电子 行程单 可以 吗 ？
Is an electronic itinerary OK?
B 3 : (你) 最好 还是 打印 一 份 吧 ， 省得 麻烦 。
(You)'d better print one copy to save trouble.
A 3 : (previous utterance) 是的
Fine.
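To make the annotation in this example concrete, the following is a minimal sketch (not the paper's implementation) of how a single utterance, e.g. B 2 above, can be represented for dropped-pronoun recovery as sequence labeling: each token carries a tag naming the pronoun dropped immediately before it, or "None". The token list, the tag name 我, and the helper recovered are illustrative assumptions; the actual 17-type tag inventory and data format used in the paper differ.

# Illustrative sketch only: per-token dropped-pronoun tags for utterance B2.
from typing import List

tokens: List[str] = ["要", "把", "行程单", "和", "邀请函", "打印", "出来", "带着", "。"]
tags: List[str] = ["我", "None", "None", "None", "None", "None", "None", "None", "None"]

def recovered(tokens: List[str], tags: List[str]) -> str:
    # Re-insert each recovered pronoun (tag != "None") before its host token.
    pieces: List[str] = []
    for tok, tag in zip(tokens, tags):
        if tag != "None":
            pieces.append(f"({tag})")
        pieces.append(tok)
    return " ".join(pieces)

print(recovered(tokens, tags))  # -> (我) 要 把 行程单 和 邀请函 打印 出来 带着 。

Under this view, recovering a dropped pronoun amounts to predicting one tag per token (a pronoun type or "None"); in the sketch, B 2 receives a single non-"None" tag on its first token.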