ACL-OCL / Base_JSON /prefixE /json /eval4nlp /2020.eval4nlp-1.6.json
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:38:42.336505Z"
},
"title": "Improving Text Generation Evaluation with Batch Centering and Tempered Word Mover Distance",
"authors": [
{
"first": "Xi",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {},
"email": "chenx@g.harvard.edu"
},
{
"first": "Nan",
"middle": [],
"last": "Ding",
"suffix": "",
"affiliation": {},
"email": "dingnan@google.com"
},
{
"first": "Tomer",
"middle": [],
"last": "Levinboim",
"suffix": "",
"affiliation": {},
"email": "tomerl@google.com"
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": "",
"affiliation": {},
"email": "rsoricut@google.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent advances in automatic evaluation metrics for text have shown that deep contextualized word representations, such as those generated by BERT encoders, are helpful for designing metrics that correlate well with human judgements. At the same time, it has been argued that contextualized word representations exhibit sub-optimal statistical properties for encoding the true similarity between words or sentences. In this paper, we present two techniques for improving encoding representations for similarity metrics: a batch-mean centering strategy that improves statistical properties; and a computationally efficient tempered Word Mover Distance, for better fusion of the information in the contextualized word representations. We conduct numerical experiments that demonstrate the robustness of our techniques, reporting results over various BERTbackbone learned metrics and achieving state of the art correlation with human ratings on several benchmarks.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent advances in automatic evaluation metrics for text have shown that deep contextualized word representations, such as those generated by BERT encoders, are helpful for designing metrics that correlate well with human judgements. At the same time, it has been argued that contextualized word representations exhibit sub-optimal statistical properties for encoding the true similarity between words or sentences. In this paper, we present two techniques for improving encoding representations for similarity metrics: a batch-mean centering strategy that improves statistical properties; and a computationally efficient tempered Word Mover Distance, for better fusion of the information in the contextualized word representations. We conduct numerical experiments that demonstrate the robustness of our techniques, reporting results over various BERTbackbone learned metrics and achieving state of the art correlation with human ratings on several benchmarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic evaluation metrics play an important role in comparing candidate sentences generated by machines against human references. Firstgeneration metrics such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) use predefined handcrafted rules to measure surface similarity between sentences and have no ability, or very little ability (Banerjee and Lavie, 2005) , to go beyond word surface level. To address this problem, later work (Kusner et al., 2015; Zhelezniak et al., 2019) utilize static embedding techniques such as word2vec (Mikolov et al., 2013) and Glove (Pennington et al., 2014) to represent the words in sentences as vectors in a low-dimensional continuous space, so that word-to-word correlation can be measured by their cosine similarity. However, static embeddings cannot capture the rich syntactic, semantic, and pragmatic aspects of word usage across sentences and paragraphs.",
"cite_spans": [
{
"start": 170,
"end": 193,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF13"
},
{
"start": 204,
"end": 215,
"text": "(Lin, 2004)",
"ref_id": "BIBREF9"
},
{
"start": 341,
"end": 367,
"text": "(Banerjee and Lavie, 2005)",
"ref_id": "BIBREF2"
},
{
"start": 439,
"end": 460,
"text": "(Kusner et al., 2015;",
"ref_id": "BIBREF6"
},
{
"start": 461,
"end": 485,
"text": "Zhelezniak et al., 2019)",
"ref_id": "BIBREF22"
},
{
"start": 539,
"end": 561,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF11"
},
{
"start": 572,
"end": 597,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Modern deep learning models based on the Transformer (Vaswani et al., 2017 ) utilize a multilayered self-attention structure that encodes not only a global representation of each word (a word embedding), but also its contextualized information within the context considered. Such contextualized word representations have yielded significant improvements on various tasks, including machine translation (Vaswani et al., 2017) , NLU tasks (Devlin et al., 2019; Lan et al., 2020) , summarization (Zhang et al., 2019a) , and automatic evaluation metrics (Reimers and Gurevych, 2019; Zhang et al., 2019b; Zhao et al., 2019; Sellam et al., 2020) .",
"cite_spans": [
{
"start": 53,
"end": 74,
"text": "(Vaswani et al., 2017",
"ref_id": "BIBREF17"
},
{
"start": 402,
"end": 424,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF17"
},
{
"start": 437,
"end": 458,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 459,
"end": 476,
"text": "Lan et al., 2020)",
"ref_id": "BIBREF7"
},
{
"start": 493,
"end": 514,
"text": "(Zhang et al., 2019a)",
"ref_id": "BIBREF18"
},
{
"start": 550,
"end": 578,
"text": "(Reimers and Gurevych, 2019;",
"ref_id": "BIBREF15"
},
{
"start": 579,
"end": 599,
"text": "Zhang et al., 2019b;",
"ref_id": "BIBREF19"
},
{
"start": 600,
"end": 618,
"text": "Zhao et al., 2019;",
"ref_id": "BIBREF21"
},
{
"start": 619,
"end": 639,
"text": "Sellam et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we investigate how to better use BERT-based contextualized embeddings in order to arrive at effective evaluation metrics for generated text. We formalize a unified family of text similarity metrics, which operate either at the word/token or sentence level, and show how a number of existing embedding-based similarity metrics belong to this family. In this context, we present a tempered Word Mover Distance (TWMD) formulation by utilizing the Sinkhorn distance (Cuturi, 2013) , which adds an entropy regularizer to the objective of WMD (Kusner et al., 2015) . Compared to WMD, our TWMD formulation allows for a more efficient optimization using the iterative Sinkhorn algorithm (Cuturi, 2013) . Although in theory the Sinkhorn algorithm may require a number of iterations to converge, we find that a single iteration is sufficient and surprisingly effective for TWMD.",
"cite_spans": [
{
"start": 477,
"end": 491,
"text": "(Cuturi, 2013)",
"ref_id": "BIBREF3"
},
{
"start": 552,
"end": 573,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 694,
"end": 708,
"text": "(Cuturi, 2013)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Moreover, we follow (Ethayarajh, 2019) and carefully analyze the similarity between contextualized word representations along the different layers of a BERT model. We posit three properties that multi-layered contextualized word representations should have (Section 5): (1) zero expected similarity between random words, (2) decreasing out-of-context self-similarity, and (3) increasing in-context similarity between words. As already shown by Ethayarajh (2019) , cosine similarity between BERT word-embeddings does not satisfy some of these properties. To address these issues, we design and analyze several centering techniques and find one that satisfies the three properties above. The usefulness of the centering technique and TWMD formulation is validated by our empirical studies over several well-known benchmarks, where we obtain significant numerical improvements and SoTA correlations with human ratings.",
"cite_spans": [
{
"start": 20,
"end": 38,
"text": "(Ethayarajh, 2019)",
"ref_id": "BIBREF5"
},
{
"start": 444,
"end": 461,
"text": "Ethayarajh (2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent work on learned automatic evaluation metrics leverage pretrained contextualized embeddings by building on top of BERT (Devlin et al., 2019) or variant representations.",
"cite_spans": [
{
"start": 125,
"end": 146,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "SentenceBERT (Reimers and Gurevych, 2019) uses cosine similarity of two mean-pooled sentence embedding from the top layer of BERT. BERTscore (Zhang et al., 2019b) computes the similarity of two sentences as a sum of cosine similarities between maximum-matching tokens embeddings. Mover-Score (Zhao et al., 2019) measures word distance using BERT embeddings and computes the Word Mover Distance (WMD) (Kusner et al., 2015) from the word distribution of the system text to that of the human reference.",
"cite_spans": [
{
"start": 13,
"end": 41,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF15"
},
{
"start": 141,
"end": 162,
"text": "(Zhang et al., 2019b)",
"ref_id": "BIBREF19"
},
{
"start": 292,
"end": 311,
"text": "(Zhao et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 400,
"end": 421,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In the next section we propose an abstract framework of embedding-based similarity metrics and show that it contains the metrics mentioned above. We then extend this family of metrics with our own improved evaluation metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We consider a family of normalized similarity metrics for both word-level and sentence-level representations parameterized by a function C, as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sim(x 1 , x 2 ) = C(x 1 , x 2 ) C(x 1 , x 1 )C(x 2 , x 2 ) .",
"eq_num": "(1)"
}
],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "Clearly, Sim(x, x) = 1, and furthermore, if",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "C(x 1 , x 2 ) 2 \u2264 C(x 1 , x 1 )C(x 2 , x 2 ), then Sim(x 1 , x 2 ) \u2208 [\u22121, 1].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "For word similarity, x represents a single word vector. A standard choice is defining C(x 1 , x 2 ) = x 1 , x 2 , the inner product between the two vectors. The resulting word similarity metric",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "Sim(x 1 , x 2 ) = x 1 x 1 , x 2 x 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "becomes the cosine similarity between the two word vectors. If the word vectors are pre-normalized such that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "x = 1, then Sim(x 1 , x 2 ) = x 1 , x 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
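As a quick illustration of this family, the following sketch (our own illustration, not code from the paper) instantiates Eq. (1) with the inner-product choice of C, which recovers cosine similarity at the word level:

```python
import numpy as np

def sim(x1, x2, C):
    # Eq. (1): normalized similarity for any symmetric function C
    return C(x1, x2) / np.sqrt(C(x1, x1) * C(x2, x2))

def inner(a, b):
    # word-level choice: C(x1, x2) = <x1, x2>
    return float(np.dot(a, b))

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])
print(sim(a, b, inner))  # parallel vectors -> 1.0
```

Any other C from the family (Sentence-BERT, CKA, the WMD variants below) can be dropped into `sim` unchanged.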
{
"text": "For sentence similarity, we use X = x 1 , x 2 , . . . , x L to denote a D \u00d7 L matrix composed by L word vectors belonging to the sentence embedded in a D-dimensional space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "In what follows, we briefly review existing sentence similarity metrics and show that they belong to our family of similarity metrics Eq.(1) with different choices of C(X 1 , X 2 ) (with L 1 and L 2 denoting the sentence length for X 1 and X 2 , respectively). Note that we do not consider word re-weighting schemes (e.g. by IDF as in (Zhang et al., 2019b) ) in this paper, as their contribution does not appear to be consistent over various tasks. In addition, we assume that all word vectors are already pre-normalized.",
"cite_spans": [
{
"start": 335,
"end": 356,
"text": "(Zhang et al., 2019b)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "Sentence-BERT Sentence-BERT (Reimers and Gurevych, 2019) uses the cosine-similarity between two mean-pooling sentence embeddings. This is the same as Eq.(1) when",
"cite_spans": [
{
"start": 28,
"end": 56,
"text": "(Reimers and Gurevych, 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "C(X 1 , X 2 ) = 1 L 1 L 1 i=1 x i 1 , 1 L 2 L 2 j=1 x j 2 = 1 L 1 L 2 L 1 i=1 L 2 j=1 x i 1 , x j 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "Wordset-CKA Wordset-CKA (Zhelezniak et al., 2019) uses the centered kernel alignment between the two sentences represented as word sets, where",
"cite_spans": [
{
"start": 24,
"end": 49,
"text": "(Zhelezniak et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "C(X 1 , X 2 ) = Tr X 1 X 1 X 2 X 2 = L 1 i=1 L 2 j=1 x i 1 , x j 2 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "Here we assume each word embedding x is precentered by the mean of its own dimensions. We refer to this centering method as dimension-mean centering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
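This choice of C can be sketched in a few lines (our own helper names, not the original implementation; columns of each D \u00d7 L matrix are word vectors, dimension-mean centered as described above):

```python
import numpy as np

def cka_C(X1, X2):
    # C(X1, X2) = sum_ij <x1_i, x2_j>^2 = ||X1^T X2||_F^2
    return float(np.sum((X1.T @ X2) ** 2))

def wordset_cka(X1, X2):
    # dimension-mean centering: subtract each word vector's own mean
    X1 = X1 - X1.mean(axis=0, keepdims=True)
    X2 = X2 - X2.mean(axis=0, keepdims=True)
    # Eq. (1) with the CKA choice of C
    return cka_C(X1, X2) / np.sqrt(cka_C(X1, X1) * cka_C(X2, X2))
```

Because cka_C is non-negative, this similarity lies in [0, 1] rather than [\u22121, 1].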
{
"text": "MoverScore MoverScore (Zhao et al., 2019) measures the sentence similarity using the Word Mover Distance (Kusner et al., 2015) from the word distribution of the hypothesis to that of the gold reference:",
"cite_spans": [
{
"start": 22,
"end": 41,
"text": "(Zhao et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 105,
"end": 126,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C(X 1 , X 2 ) = max \u03c0 L 1 i=1 L 2 j=1 \u03c0 ij x i 1 , x j 2 s.t. L 1 i=1 \u03c0 ij = 1 L 2 , L 2 j=1 \u03c0 ij = 1 L 1 .",
"eq_num": "(2)"
}
],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "The original MoverScore does not normalize",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "C(X 1 , X 2 ) by C(X 1 , X 1 )C(X 2 , X 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "In practice, we find the performance to be similar with or without such normalization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "BERTscore BERTscore (Zhang et al., 2019b) introduces three metrics corresponding to recall, precision, and F1 score. We focus the discussion here on BERTscore-Recall, as it performs most consistently across all tasks (see discussions of the precision and F1 scores in Appendix C). BERTscore-Recall uses the sum of cosine similarities between maximum-matching tokens embeddings:",
"cite_spans": [
{
"start": 20,
"end": 41,
"text": "(Zhang et al., 2019b)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "C(X 1 , X 2 ) = 1 L 1 L 1 i=1 max j=1...L 2 x i 1 , x j 2 . (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "For BERTscore, since the words are prenormalized, we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "C(X 1 , X 1 ) = C(X 2 , X 2 ) = 1 and therefore Sim(X 1 , X 2 ) = C(X 1 , X 2 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
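Since Sim(X_1, X_2) = C(X_1, X_2) here, Eq. (3) amounts to averaging, over reference tokens, each token's best-matching inner product. A minimal sketch (our own naming; columns of X1 and X2 are pre-normalized word vectors, with X1 the reference):

```python
import numpy as np

def bertscore_recall(X1, X2):
    # Eq. (3): for each reference token i, take its best-matching
    # candidate token j, then average over the reference tokens
    S = X1.T @ X2                 # S[i, j] = <x1_i, x2_j>
    return float(S.max(axis=1).mean())
```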
{
"text": "Note that BERTscore is closely related to Mover-Score, since Eq.(3) is the solution of the Relaxed-WMD (Kusner et al., 2015) :",
"cite_spans": [
{
"start": 103,
"end": 124,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C(X 1 , X 2 ) = max \u03c0 L 1 i=1 L 2 j=1 \u03c0 ij x i 1 , x j 2 s.t. L 2 j=1 \u03c0 ij = 1 L 1 .",
"eq_num": "(4)"
}
],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "which is the same as Eq. (2) but without the first constraint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Family of Similarity Metrics",
"sec_num": "3"
},
{
"text": "Word Mover Distance (Kusner et al., 2015) used in MoverScore (Zhao et al., 2019) is rooted in the classical optimal transport distance for probability measures and histograms of features. Despite its excellent performance and intuitive formulation, its computation involves a linear programming solver whose cost scales as O(L 3 log L) and becomes prohibitive for long sentences or documents with more than a few hundreds of words/tokens. For this reason, (Kusner et al., 2015) proposed a Relaxed-WMD (RWMD) with only one constraint (see Eq.(4)), which can be evaluated in O(L 2 ). However, RWMD uses the closest distance without considering there may be multiple words transforming to single words. Inspired by the Sinkhorn distance (Cuturi, 2013 ) which smooths the classic optimal transport problem with an entropic regularization term, we introduce the following formulation, which we refer to as tempered-WMD (TWMD):",
"cite_spans": [
{
"start": 20,
"end": 41,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 61,
"end": 80,
"text": "(Zhao et al., 2019)",
"ref_id": "BIBREF21"
},
{
"start": 456,
"end": 477,
"text": "(Kusner et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 734,
"end": 747,
"text": "(Cuturi, 2013",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tempered Word Mover Distance",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "max \u03c0 L 1 i=1 L 2 j=1 \u03c0 ij x i 1 , x j 2 \u2212 T L 1 i=1 L 2 j=1 \u03c0 ij log \u03c0 ij s.t. L 1 i=1 \u03c0 ij = 1 L 2 , L 2 j=1 \u03c0 ij = 1 L 1 .",
"eq_num": "(5)"
}
],
"section": "Tempered Word Mover Distance",
"sec_num": "4"
},
{
"text": "The temperature parameter T \u2265 0 determines the trade-off between the two terms. When T = 0, Eq. 5reduce to the original WMD as in Eq. 2. When T is larger, (5) encourages more homogeneous distributions. The added entropy term makes Eq.(5) a strictly concave problem, which can be solved using a matrix scaling algorithm with a linear convergence rate. For example, the Sinkhorn algorithm (Cuturi, 2013) uses the initial condition",
"cite_spans": [
{
"start": 387,
"end": 401,
"text": "(Cuturi, 2013)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tempered Word Mover Distance",
"sec_num": "4"
},
{
"text": "\u03c0 0 ij = exp \u2212 1 T x i 1 , x j 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tempered Word Mover Distance",
"sec_num": "4"
},
{
"text": "and alternates between",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tempered Word Mover Distance",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03be t ij = \u03c0 t\u22121 ij L 2 i \u03c0 t\u22121 ij , \u03c0 t ij = \u03be t ij L 1 j \u03be t ij .",
"eq_num": "(6)"
}
],
"section": "Tempered Word Mover Distance",
"sec_num": "4"
},
{
"text": "The computational cost for each iteration is O(L 2 ), which is more efficient than to that of WMD. Although in theory this iterative algorithm may require a few of iterations to converge, our experiments show that a single iteration (i.e., t = 1) is sufficient and surprisingly effective. Similarly, a tempered-RWMD (TRWMD) can be obtained by adding an entropy term to Eq.(4):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tempered Word Mover Distance",
"sec_num": "4"
},
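The update in Eq. (6) is just a pair of marginal rescalings of the transport plan. A compact sketch of single-iteration TWMD (our own illustration, not the paper's code; columns of X1 and X2 are assumed pre-normalized word vectors):

```python
import numpy as np

def twmd_C(X1, X2, T=0.1, iters=1):
    S = X1.T @ X2                    # similarity matrix <x1_i, x2_j>
    L1, L2 = S.shape
    pi = np.exp(S / T)               # initial condition pi^0
    for _ in range(iters):           # Sinkhorn updates, Eq. (6)
        pi /= L2 * pi.sum(axis=0, keepdims=True)  # column sums -> 1/L2
        pi /= L1 * pi.sum(axis=1, keepdims=True)  # row sums -> 1/L1
    return float((pi * S).sum())     # objective value (entropy term dropped)
```

After the row rescaling the plan has total mass 1, so the returned value is a weighted average of the pairwise similarities and thus lies in [\u22121, 1].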
{
"text": "max \u03c0 L 1 i=1 L 2 j=1 \u03c0 ij x i 1 , x j 2 \u2212 T L 1 i=1 L 2 j=1 \u03c0 ij log \u03c0 ij s.t. L 2 j=1 \u03c0 ij = 1 L 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tempered Word Mover Distance",
"sec_num": "4"
},
{
"text": "By taking the derivative of the Lagrangian of the above objective, the following closed-form solution is obtained:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tempered Word Mover Distance",
"sec_num": "4"
},
{
"text": "\u03c0 * ij = 1 L 1 softmax j 1 T x i 1 , x j 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tempered Word Mover Distance",
"sec_num": "4"
},
{
"text": "Plugging in the optimal \u03c0 * ij back into the objective yields the following metric:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tempered Word Mover Distance",
"sec_num": "4"
},
{
"text": "C(X 1 , X 2 ) = = T L 1 L 1 i=1 log \uf8eb \uf8ed L 2 j=1 exp 1 T x i 1 , x j 2 \uf8f6 \uf8f8 . (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tempered Word Mover Distance",
"sec_num": "4"
},
{
"text": "We note that as T \u2192 0, T log j exp(f j /T ) \u2192 max j (f j ), and therefore Eq. 7reduces to Eq.(3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tempered Word Mover Distance",
"sec_num": "4"
},
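The closed form in Eq. (7) is a tempered log-sum-exp per reference token. A sketch (our own naming), stabilized by factoring out the row maximum so that small T does not overflow:

```python
import numpy as np

def trwmd_C(X1, X2, T=0.1):
    # Eq. (7): C = (T / L1) * sum_i log sum_j exp(<x1_i, x2_j> / T)
    S = X1.T @ X2                     # pairwise inner products
    m = S.max(axis=1, keepdims=True)  # row maxima, factored out for stability
    lse = m[:, 0] + T * np.log(np.exp((S - m) / T).sum(axis=1))
    return float(lse.mean())
```

As T shrinks, the log-sum-exp collapses onto each row's maximum, recovering the max-matching sum of Eq. (3).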
{
"text": "Ethayarajh 2019reports that representations obtained by deep models such as BERT exhibit high cosine similarity between any two random words in a corpus, especially at higher layers. They attribute this phenomenon to a highly anisotropic distribution of the word vectors, and further argue that such high similarity represents a bias that blurs the true similarity relationship between word (and sentence) representations and hampers performance in NLP tasks (Mu and Viswanath, 2018) . We reproduce here the main results of (Ethayarajh, 2019) , including the cosine similarity between two random words (baseline), same words in two different sentences (self-similarity) and two random words in the same sentence (intra-similarity). Figure 1 shows these results for several BERT and BERTlike models. As the leftmost figure shows, most of Figure 1 : Cosine similarity between two random words (baseline), same words in two different sentences (selfsimilarity) and two random words in the same sentence (intra-similarity) for five base models, using the original layer representation of words.",
"cite_spans": [
{
"start": 459,
"end": 483,
"text": "(Mu and Viswanath, 2018)",
"ref_id": "BIBREF12"
},
{
"start": 524,
"end": 542,
"text": "(Ethayarajh, 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 732,
"end": 740,
"text": "Figure 1",
"ref_id": null
},
{
"start": 837,
"end": 845,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "these models indeed have a high baseline similarity that quickly increases with layer depth. Ethayarajh (2019) proposes to mitigate this bias by subtracting the baseline similarity from the self-similarity and intra-similarity values (per layer). However, the mathematical and statistical meaning of this solution remains unclear.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "In this context, we posit the following three properties that are desirable for word vector representations in context:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "1. Zero expected similarity: The word similarity between two random word vectors in the corpus is approximately zero, which indicates random words are unrelated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "2. Decreasing self-similarity: The word similarity between representations of the same word taken from different sentences decreases in higher layers, as each representation encodes more contextual information about its respective sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "3. Increasing intra-similarity: The word similarity between different words within the same sentence increases in higher layers, as the words encode more common information about the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "Besides their intuitive appeal, our empirical results (in Section 6) do validate that word representations that obey these properties result in higher performance with respect to modeling similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "Since the original word representations does not satisfy these three properties, we explore three methods for centering the word vectors distribution. Consider a corpus C containing M sentences",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "{s i }, each of length N i . Each word vector is D-dimensional, w i,j = [w (1) i,j , ..., w (D) i,j ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "We propose three candidate word distribution centering approaches:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "\u2022 Dimension mean centering: centering a word by subtracting the mean of the dimensions within each word vector,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "v i,j = w i,j \u2212 1 D D l=1 w (l) i,j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "The second term on the RHS is a scalar, which broadcasts to all dimensions of w i,j .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "\u2022 Sentence mean centering: centering a word by subtracting the mean of the words within the corresponding sentence,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "v i,j = w i,j \u2212 1 N N k=1 w i,k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "\u2022 Corpus mean centering: centering a word by subtracting the mean of the words in the entire",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "corpus, v i,j = w i,j \u2212 1 i N i M i=1 N i k=1 w i,k .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
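The three centerings amount to subtracting different means. A minimal numpy sketch (our own illustration; rows of each array are word vectors, and a list of per-sentence arrays stands in for the corpus or batch):

```python
import numpy as np

def dimension_mean_center(W):
    # subtract each word vector's mean over its own D dimensions
    return W - W.mean(axis=1, keepdims=True)

def sentence_mean_center(W):
    # subtract the mean word vector of the sentence (W holds one sentence)
    return W - W.mean(axis=0, keepdims=True)

def corpus_mean_center(sentences):
    # subtract the mean over all word vectors in the corpus; batch-mean
    # centering is the same operation restricted to the current batch
    mu = np.concatenate(sentences, axis=0).mean(axis=0)
    return [W - mu for W in sentences]
```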
{
"text": "We compare these three centering approaches in Figure 2 . Due to the layer norm operation in the BERT models, the dimension mean is a small constant that has little effect after subtraction, and therefore it fails on properties 1 and 2 above. The sentence mean centering achieves approximately zero baseline (property 1), but it also reduces the intra-sim to approximately zero (failing property 3). This indicates the subtraction of sentence mean removes the common knowledge of the words about the sentence, which can have a detrimental effect on modeling similarity. Lastly, corpus-mean centering fulfills all three properties above (Fig. 2, bottom row) . In this context, we note that, after applying corpus mean centering, cosine-similarity function is reduced to Pearson's correlation.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 55,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 636,
"end": 657,
"text": "(Fig. 2, bottom row)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "Since the computational cost of corpus mean centering can be prohibitive for a large dataset, we consider a batch-mean centering approach, which would be especially useful for fine-tuning tasks. In practice, we find that the values obtained from batch-mean-centered word vectors are very close to those of corpus-mean-centered word vectors. Therefore and henceforth, we use batch-mean centering to approximate the effect of corpus-mean centering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "Finally, it is worth noting that corpus (batch)mean centering has recently been applied in normalizing multilingual representations (Libovick\u1ef3 et al., 2019; Zhao et al., 2020) . However, we are the first to demonstrate its superiority over various other centering methods in single-language by analyzing the inter-layer representation similarities.",
"cite_spans": [
{
"start": 132,
"end": 156,
"text": "(Libovick\u1ef3 et al., 2019;",
"ref_id": "BIBREF8"
},
{
"start": 157,
"end": 175,
"text": "Zhao et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Centered Word Vectors",
"sec_num": "5"
},
{
"text": "In order to demonstrate the effectiveness of our newly proposed approaches, we conduct extensive numerical experiments based on two commonlyused benchmarks: Semantic Textual Similarity (STS), and WMT 17-18 metrics shared task. Our experiments are designed to answer the following questions: (1) Are corpus (batch) centered word vectors better than other centered and un-centered word vectors, across different sentence similarity metrics? (2) How do tempered WMD and RWMD compare to their family-relatives MoverScore and BERTscore? (3) How do the temperature hyperparameter and the Sinkhorn iterations affect the performance, and how sensitive are they?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "To show that our results are consistent across different BERT variants, we analyze our similarity metrics over four backbone models: bertbase-uncased, bert-large-uncased, roberta-base and roberta-large, all obtained from the Huggingface * Transformers package. (Zhang et al., 2019b) found that the better layers for evaluation metric are usually not the top layer, since the top one is greatly impacted by the pretraining task. In particular, (Zhang et al., 2019b) perform an an extensive layer sweep analysis and report that the better layers were always around Layer-10 for the base models, and Layer-19 for the large models. Therefore, in our * https://huggingface.co/models experiments, we used Layer-10 for all base models, and Layer-19 for all large models. We also present the results of evaluation metrics using different layers in the Appendix A and confirm that our main conclusion is not affected by the choice of layer.",
"cite_spans": [
{
"start": 261,
"end": 282,
"text": "(Zhang et al., 2019b)",
"ref_id": "BIBREF19"
},
{
"start": 443,
"end": 464,
"text": "(Zhang et al., 2019b)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "The STS benchmark (Agirre et al., 2016) contains sentence pairs and human evaluated scores between 0 and 5 for each pair, with higher scores indicating higher semantic relatedness or similarity for the pair. From 2012 to 2016, it contains 3108, 1500, 3750, 3000, and 1186 records, respectively.",
"cite_spans": [
{
"start": 18,
"end": 39,
"text": "(Agirre et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Textual Similarity (STS)",
"sec_num": "6.1"
},
{
"text": "We answer the first question by comparing batchcentered word vectors with other centered and uncentered word vectors using several sentence similarity metrics, including Sentence-BERT, Wordset-CKA, BERTscore and MoverScore.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Textual Similarity (STS)",
"sec_num": "6.1"
},
{
"text": "The results on STS 12-16 for various metrics are shown in Table 1 . In general, for all four models (per column: base and large version of BERT and RoBERTa), batch centering gets higher Pearson and Spearman's correlation of sentencemean cosine similarity (SBERT) and BERTscore. Dimension-mean centering has very little effect on performance, while sentence-mean improves performance for a few methods. Since Sentence-BERT uses the mean-pooling of the sentence (which would become zero after sentence mean centering), we exclude sentence-mean centering from Sentence-BERT. Overall, batch-mean centering brings an averaged +3.41 / +3.02 improvement, and sentence-mean centering brings an averaged -0.02 / +0.55 on Pearson and Spearman coefficients, across different metrics and models.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 65,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Semantic Textual Similarity (STS)",
"sec_num": "6.1"
},
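The Pearson and Spearman coefficients reported throughout can be computed as follows. This is a minimal NumPy sketch (tie handling in the Spearman ranks is ignored), with function names of our choosing:

```python
import numpy as np

def pearson(x, y):
    # Pearson correlation: cosine similarity of the mean-centered vectors.
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc)))

def spearman(x, y):
    # Spearman's rank correlation = Pearson computed on the ranks
    # (this sketch assumes no ties among the scores).
    rank = lambda a: np.argsort(np.argsort(a)).astype(float)
    return pearson(rank(x), rank(y))
```

In practice one would compare a metric's scores over a dataset (e.g. STS sentence pairs) against the human scores with both functions.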
{
"text": "The WMT metrics shared task is an annual competition for comparing translation metrics against human assessments on machine-translated sentences. We use years 2017 and 2018 of the official WMT test set for evaluation. The 2017 test data includes 3,920 pairs of sentences from the news domain (including a system generated sentence and a groundtruth sentence by human) with human ratings. Similarly, the 2018 test data includes 138,188 pairs of sentences with human ratings but is reported to be much noisier (Sellam et al., 2020) .",
"cite_spans": [
{
"start": 508,
"end": 529,
"text": "(Sellam et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "WMT metrics shared task",
"sec_num": "6.2"
},
{
"text": "We compare the Tempered WMD (TWMD) and TRWMD with the original WMD (Moverscore) and RWMD (BERTscore) as well as SBERT and WordSet-CKA on WMT 17 and 18. We report the results of RoBERTa-base and RoBERTa-large for WMT 17 and WMT 18, because they appear to be the best performing backbone models for these tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics without fine-tuning",
"sec_num": null
},
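To make the relationship to RWMD/BERTscore concrete, a tempered relaxed WMD can be sketched as follows: the hard nearest-neighbour alignment of greedy matching is replaced by a temperature-T softmax over word similarities. This is an illustrative formulation under our own assumptions (uniform word weights, cosine similarity), not necessarily the paper's exact definition:

```python
import numpy as np

def tempered_rwmd(X1, X2, T=0.1):
    # X1: (n, d), X2: (m, d) word vectors (batch-mean centered for TRWMD-b).
    X1 = X1 / np.linalg.norm(X1, axis=1, keepdims=True)
    X2 = X2 / np.linalg.norm(X2, axis=1, keepdims=True)
    S = X1 @ X2.T                          # cosine similarities
    W = np.exp(S / T)
    W = W / W.sum(axis=1, keepdims=True)   # soft alignment for each word in X1
    # Softmax-weighted similarity per word, averaged over the sentence.
    return float(np.mean((W * S).sum(axis=1)))
```

As T approaches 0 the softmax concentrates on the single most similar word, recovering hard (BERTscore-style) greedy matching.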
{
"text": "To choose reasonable temperatures for TWMD and TRWMD, we tried a few values between 0.001 and 0.15 on WMT 15-16, and chose for each method based on the best averaged performance (details in Appendix B). The resulting temperatures for TWMD, TRWMD, TWMD-b (where \"-b\" stands for batch centering of word vectors) and TRWMD-b are T = 0.02, 0.02, 0.10, 0.15 respectively. We used a single Sinkhorn iteration for TWMD (-b) .",
"cite_spans": [
{
"start": 412,
"end": 416,
"text": "(-b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics without fine-tuning",
"sec_num": null
},
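The tempered WMD with a small number of Sinkhorn iterations can be sketched as follows. This is an illustrative entropy-regularized optimal-transport implementation with uniform word weights and cosine-distance costs, not necessarily the paper's exact formulation:

```python
import numpy as np

def tempered_wmd(X1, X2, T=0.02, n_iter=1):
    # X1: (n, d), X2: (m, d) word vectors (batch-mean centered for TWMD-b).
    X1 = X1 / np.linalg.norm(X1, axis=1, keepdims=True)
    X2 = X2 / np.linalg.norm(X2, axis=1, keepdims=True)
    C = 1.0 - X1 @ X2.T                          # cosine-distance cost matrix
    a = np.full(X1.shape[0], 1.0 / X1.shape[0])  # uniform word weights
    b = np.full(X2.shape[0], 1.0 / X2.shape[0])
    K = np.exp(-C / T)                           # Gibbs kernel at temperature T
    u = np.ones_like(a)
    for _ in range(n_iter):                      # Sinkhorn scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]              # approximate transport plan
    return 1.0 - float((P * C).sum())            # similarity = 1 - transport cost
```

With `n_iter=1` this matches the single-iteration setting used for TWMD(-b); low temperatures make the plan sharper but can require more iterations to converge.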
{
"text": "The main results of WMT 17 and 18 are summarized in Table 2 and 3. Batch-mean centering appears to be helpful in improving the scores for all methods. TWMD-b performs the best in most of the cases. In particular, it is on average +2.3 / +2.8 higher than the WMD-based Moverscore-b in WMT-17 and +1.1 / +1.9 higher in WMT-18.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 59,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation metrics without fine-tuning",
"sec_num": null
},
{
"text": "We also test the effectiveness of batch-mean centering and TWMD in the fine-tuning process. Similar to (Sellam et al., 2020) , we make use of the human ratings from WMT 15-16 for training, and evaluate the fine-tuned models on WMT 17 and 18. We use the L2 loss function during fine-tuning,",
"cite_spans": [
{
"start": 103,
"end": 124,
"text": "(Sellam et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics with fine-tuning",
"sec_num": null
},
{
"text": "Loss =MSE(Sim(X 1 , X 2 ),\u0177),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics with fine-tuning",
"sec_num": null
},
{
"text": "where X 1 , X 2 denotes two sentences, and\u0177 is the human score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation metrics with fine-tuning",
"sec_num": null
},
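The fine-tuning loss above is a plain mean squared error between the metric's similarity and the human rating; a minimal sketch (names ours):

```python
import numpy as np

def mse_loss(sim_scores, human_scores):
    # L2 fine-tuning loss: mean squared error between the metric's
    # similarity Sim(X1, X2) and the human rating y-hat, over a batch.
    diff = np.asarray(sim_scores, float) - np.asarray(human_scores, float)
    return float(np.mean(diff ** 2))
```

In an actual fine-tuning run this loss would be backpropagated through the similarity computation into the backbone encoder's parameters.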
{
"text": "We present the result of TWMD based on the RoBERTa-base and RoBERTa-large backbones in Table 4 . We compare the result with that of state-of-the-art BLEURT (Sellam et al., 2020) models. BLEURTbase-pre and BLEURT-pre are directly fine-tuned on WMT 15-16 (with 5344 records in total), while BLEURTbase and BLEURT are additionally pretrained on a large amount of synthetic data from Wikipedia. The scores obtained by TWMD-b not only clearly outperform BLEURTbase-pre and BLEURT-pre with the same training setting, but are comparable or better than the performance of BLEURT with the extra pretraining stage, on both base and large conditions. This last result is especially notable considering that the synthetic data and the task setup used to further pretrain BLUERT were designed with metric similarity in mind (by leveraging on classical evaluation metrics for MT such as BLEU and ROUGE), whereas TWMD owes its performance solely to a better use of the representations.",
"cite_spans": [
{
"start": 156,
"end": 177,
"text": "(Sellam et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 87,
"end": 94,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation metrics with fine-tuning",
"sec_num": null
},
{
"text": "The results of TWMD, TRWMD, TWMDb, TRWMD-b in Table 2 and 3 used the fixed temperature (tuned in WMT15-16) T = 0.02, 0.02, 0.10, 0.15 for evaluation. A natural question to ask is how sensitive does the result depend on these hyperparameters. Figure 3 shows the Pearson correlation vs. temperature for all four models and metrics with different temperature hyperparameters in WMT 15-18. We can see that the TWMD-b and TRWMD-b methods are robust with temperature. In compar- 3: Correlation with human scores on the WMT18 Metrics Shared Task. '-b' stands for batch centering of word vectors. Table 4 : Correlation with human scores on the WMT17-18 after fine-tuning on WMT15-16. BLEURTbase and BLEURT have an extra pretraining step, as described in (Sellam et al., 2020) .",
"cite_spans": [
{
"start": 746,
"end": 767,
"text": "(Sellam et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 242,
"end": 250,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 589,
"end": 596,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Temperature dependence",
"sec_num": null
},
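The temperature selection procedure (tuning on a development set such as WMT 15-16 and keeping the value with the best Pearson correlation) can be sketched as follows; `metric_fn` and the grid values are illustrative assumptions:

```python
import numpy as np

def pick_temperature(metric_fn, dev_pairs, human_scores,
                     grid=(0.001, 0.01, 0.02, 0.05, 0.1, 0.15)):
    # metric_fn(x1, x2, T) -> similarity score; dev_pairs is a list of
    # (x1, x2) inputs with aligned human_scores. Returns the temperature
    # with the highest Pearson correlation on the dev set.
    def pearson(x, y):
        x, y = np.asarray(x, float), np.asarray(y, float)
        xc, yc = x - x.mean(), y - y.mean()
        return float(xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc)))
    best_T, best_r = None, -np.inf
    for T in grid:
        scores = [metric_fn(a, b, T) for a, b in dev_pairs]
        r = pearson(scores, human_scores)
        if r > best_r:
            best_T, best_r = T, r
    return best_T, best_r
```

The chosen temperature is then held fixed when evaluating on the test years (WMT 17-18), as done in the experiments above.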
{
"text": "WMT ison, TWMD and TRWMD without batch-mean centering appears sensitive to the temperature. The Kendall \u03c4 correlation follow a similar trend. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Metric",
"sec_num": null
},
{
"text": "We also investigate how the Sinkhorn iterations affect the TWMD-b. (Figure 4, left) shows the Pearson correlation vs the number of Sinkhorn iterations in four different temperatures. Somewhat surprisingly, although Sinkhorn algorithm needs more iterations to converge especially for low temperatures (Figure 4, right) , the Pearson correlation of TWMD with only 1 iteration is the highest \u2020 of the Sinkhorn update for various temperatures.",
"cite_spans": [],
"ref_spans": [
{
"start": 67,
"end": 83,
"text": "(Figure 4, left)",
"ref_id": "FIGREF2"
},
{
"start": 300,
"end": 317,
"text": "(Figure 4, right)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Sinkhorn iteration dependence for Tempered-WMD",
"sec_num": null
},
{
"text": "Designing automatic evaluation metrics for text is a challenging task. Recent advances in the field leverage contextualized word representations, which are in turn generated by deep neural network models such as BERT and its variants. We present two techniques for improving such similarity metrics: a batch-mean centering strategy for word representations which addresses the statistical biases within deep contextualized word representations, and a computationally efficient tempered Word Mover Distance. Numerical experiments conducted using representations obtained from a range of BERT-like models confirm that our proposed metric consistently improves the correlation with human judgements. \u2020 A minor exception appears to be for T = 0.01, where the 1-iter TWMD-b is slightly worse than the 10-iter TWMD-b.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez Agirre",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "German",
"middle": [
"Rigau"
],
"last": "Claramunt",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2016,
"venue": "SemEval-2016. 10th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "16--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez Agirre, Rada Mihalcea, Ger- man Rigau Claramunt, and Janyce Wiebe. 2016. Semeval-2016 task 1: Semantic textual similar- ity, monolingual and cross-lingual evaluation. In SemEval-2016. 10th International Workshop on Se- mantic Evaluation; 2016 Jun 16-17;",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "ACL (Association for Computational Linguistics)",
"authors": [
{
"first": "C",
"middle": [
"A"
],
"last": "San Diego",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "497--511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "San Diego, CA. Stroudsburg (PA): ACL; 2016. p. 497-511. ACL (As- sociation for Computational Linguistics).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL Workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with im- proved correlation with human judgments. In Pro- ceedings of the ACL Workshop on intrinsic and ex- trinsic evaluation measures for machine translation and/or summarization.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Sinkhorn distances: Lightspeed computation of optimal transport",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Cuturi",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2292--2300",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in neural information processing systems, pages 2292- 2300.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "How contextual are contextualized word representations? IJCNLP",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh. 2019. How contextual are contextu- alized word representations? IJCNLP.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "From word embeddings to document distances",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Kusner",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Kolkin",
"suffix": ""
},
{
"first": "Kilian",
"middle": [],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2015,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "957--966",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to doc- ument distances. In International conference on ma- chine learning, pages 957-966.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learn- ing of language representations.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "How language-neutral is multilingual bert? arXiv preprint",
"authors": [
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Libovick\u1ef3",
"suffix": ""
},
{
"first": "Rudolf",
"middle": [],
"last": "Rosa",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Fraser",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.03310"
]
},
"num": null,
"urls": [],
"raw_text": "Jind\u0159ich Libovick\u1ef3, Rudolf Rosa, and Alexander Fraser. 2019. How language-neutral is multilingual bert? arXiv preprint arXiv:1911.03310.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Rouge: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.11692"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeff Dean. 2013. Efficient estimation of word represen- tations in vector space. CoRR, abs/1301.3781.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "All-but-thetop: Simple and effective postprocessing for word representations",
"authors": [
{
"first": "Jiaqi",
"middle": [],
"last": "Mu",
"suffix": ""
},
{
"first": "Pramod",
"middle": [],
"last": "Viswanath",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaqi Mu and Pramod Viswanath. 2018. All-but-the- top: Simple and effective postprocessing for word representations. In International Conference on Learning Representations.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bleu: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: A method for automatic eval- uation of machine translation. In Proceedings of ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of EMNLP.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Sentencebert: Sentence embeddings using siamese bertnetworks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.10084"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- bert: Sentence embeddings using siamese bert- networks. arXiv preprint arXiv:1908.10084.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bleurt: Learning robust metrics for text generation",
"authors": [
{
"first": "Thibault",
"middle": [],
"last": "Sellam",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ankur",
"middle": [
"P"
],
"last": "Parikh",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thibault Sellam, Dipanjan Das, and Ankur P. Parikh. 2020. Bleurt: Learning robust metrics for text gen- eration.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NeurIPS.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization",
"authors": [
{
"first": "Jingqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Saleh",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.08777"
]
},
"num": null,
"urls": [],
"raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Pe- ter J Liu. 2019a. Pegasus: Pre-training with ex- tracted gap-sentences for abstractive summarization. arXiv preprint arXiv:1912.08777.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Bertscore: Evaluating text generation with bert",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Kilian",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Weinberger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09675"
]
},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019b. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Johannes Bjerva, and Isabelle Augenstein. 2020. Inducing languageagnostic multilingual representations",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.09112"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Steffen Eger, Johannes Bjerva, and Is- abelle Augenstein. 2020. Inducing language- agnostic multilingual representations. arXiv preprint arXiv:2008.09112.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Moverscore: Text generation evaluating with contextualized embeddings and earth mover distance",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Maxime",
"middle": [],
"last": "Peyrard",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Christian",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Meyer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Eger",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.02622"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Chris- tian M Meyer, and Steffen Eger. 2019. Moverscore: Text generation evaluating with contextualized em- beddings and earth mover distance. arXiv preprint arXiv:1909.02622.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Aleksandar Savkov, and Nils Hammerla",
"authors": [
{
"first": "Vitalii",
"middle": [],
"last": "Zhelezniak",
"suffix": ""
},
{
"first": "April",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Busbridge",
"suffix": ""
}
],
"year": 2019,
"venue": "Correlations between word vector sets",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.02902"
]
},
"num": null,
"urls": [],
"raw_text": "Vitalii Zhelezniak, April Shen, Daniel Busbridge, Alek- sandar Savkov, and Nils Hammerla. 2019. Corre- lations between word vector sets. arXiv preprint arXiv:1910.02902.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Comparison of the three centering approaches. Dimension mean centering has very little effect. Sentence mean centering removes too much common sentence information. Corpus mean centering shows the correct word contextualization.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "Pearson correlation vs. temperature in evaluation of WMT 17-18.",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "Left: Pearson correlation as a function of the number of iteration in Sinkhorn algorithm. Right: the convergence rate of the Sinkhorn algorithm. The underlying model in this figure is roberta-base with batch mean centered word vectors.",
"num": null,
"type_str": "figure"
},
"TABREF0": {
"html": null,
"text": "Experimental results for various metrics on STS 12-16 datasets (averaged) with BERT/Roberta pretrained checkpoints. The correlations are Pearson (left) and Spearman's rank (right).",
"type_str": "table",
"content": "<table><tr><td>Metric</td><td>bert-base-uncased r / \u03c1</td><td>bert-large-uncased r / \u03c1</td><td>roberta-base r / \u03c1</td><td>roberta-large r / \u03c1</td></tr><tr><td>SBERT</td><td>58.7 / 58.9</td><td>56.9 / 57.3</td><td>58.0 / 59.6</td><td>58.5 / 60.2</td></tr><tr><td>SBERT-batch</td><td>63.8 / 62.8</td><td>62.8 / 62.3</td><td>65.9 / 65.1</td><td>67.1 / 66.3</td></tr><tr><td>SBERT-dim</td><td>58.7 / 58.9</td><td>56.9 / 57.3</td><td>58.0 / 59.6</td><td>58.5 / 60.2</td></tr><tr><td>CKA</td><td>59.8 / 59.5</td><td>58.7 / 58.9</td><td>58.6 / 59.9</td><td>59.1 / 60.4</td></tr><tr><td>CKA-batch</td><td>60.3 / 61.1</td><td>58.9 / 60.0</td><td>61.1 / 61.5</td><td>62.3 / 62.5</td></tr><tr><td>CKA-sent</td><td>58.6 / 59.8</td><td>59.1 / 60.5</td><td>58.7 / 59.2</td><td>60.6 / 61.0</td></tr><tr><td>CKA-dim</td><td>59.8 / 59.5</td><td>58.7 / 58.9</td><td>58.6 / 59.9</td><td>59.1 / 60.4</td></tr><tr><td>MoverScore</td><td>56.3 / 58.2</td><td>54.4 / 56.7</td><td>54.8 / 56.2</td><td>54.5 / 56.0</td></tr><tr><td>MoverScore-batch</td><td>58.0 / 60.1</td><td>56.2 / 58.6</td><td>57.2 / 59.0</td><td>57.7 / 59.3</td></tr><tr><td>MoverScore-sent</td><td>54.2 / 57.4</td><td>54.9 / 58.3</td><td>54.1 / 56.5</td><td>55.9 / 58.1</td></tr><tr><td>MoverScore-dim</td><td>56.3 / 58.2</td><td>54.4 / 56.7</td><td>54.8 / 56.2</td><td>54.5 / 56.0</td></tr><tr><td>BERTscore</td><td>59.3 / 59.0</td><td>57.7 / 57.8</td><td>57.3 / 57.2</td><td>57.0 / 57.1</td></tr><tr><td>BERTscore-batch</td><td>61.1 / 60.9</td><td>59.6 / 59.7</td><td>60.6 / 60.6</td><td>61.5 / 61.4</td></tr><tr><td>BERTscore-sent</td><td>57.3 / 57.6</td><td>58.1 / 58.6</td><td>56.8 / 57.2</td><td>59.0 / 59.2</td></tr><tr><td>BERTscore-dim</td><td>59.3 / 59.0</td><td>57.7 / 57.8</td><td>57.3 / 57.2</td><td>57.0 / 57.1</td></tr></table>",
"num": null
},
"TABREF1": {
"html": null,
"text": "Correlation with human scores on the WMT17 Metrics Shared Task. '-b' stands for batch centering of word vectors.",
"type_str": "table",
"content": "<table><tr><td>Metric</td><td>cs-en \u03c4 / r</td><td>de-en \u03c4 / r</td><td>fi-en \u03c4 / r</td><td>lv-en \u03c4 / r</td><td>ru-en \u03c4 / r</td><td>tr-en \u03c4 / r</td><td>zh-en \u03c4 / r</td><td>Avg. \u03c4 / r</td></tr><tr><td>roberta-base</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>SBERT</td><td>45.1 / 60.0</td><td>44.6 / 58.3</td><td>58.4 / 69.6</td><td>42.9 / 60.6</td><td>45.8 / 63.1</td><td>46.3 / 52.9</td><td>46.0 / 62.0</td><td>47.0 / 60.9</td></tr><tr><td>SBERT-b</td><td>45.2 / 63.4</td><td>45.8 / 64.1</td><td>56.8 / 74.6</td><td>45.1 / 64.9</td><td>44.9 / 64.0</td><td>47.8 / 63.4</td><td>45.4 / 66.1</td><td>47.3 / 65.8</td></tr><tr><td>CKA</td><td>45.0 / 60.5</td><td>44.8 / 58.8</td><td>58.3 / 70.5</td><td>42.8 / 61.0</td><td>45.9 / 63.4</td><td>46.3 / 53.9</td><td>46.1 / 62.4</td><td>47.0 / 61.5</td></tr><tr><td>CKA-b</td><td>48.8 / 68.4</td><td>49.1 / 69.1</td><td>61.3 / 81.3</td><td>48.5 / 69.6</td><td>49.6 / 69.6</td><td>52.1 / 71.7</td><td>49.6 / 70.8</td><td>51.3 / 71.5</td></tr><tr><td>MoverScore</td><td>48.5 / 66.0</td><td>47.1 / 65.9</td><td>61.6 / 80.9</td><td>48.9 / 68.2</td><td>51.6 / 69.8</td><td>53.8 / 74.2</td><td>53.4 / 74.0</td><td>52.1 / 71.3</td></tr><tr><td>MoverScore-b</td><td>47.9 / 66.3</td><td>47.3 / 66.1</td><td>61.6 / 81.2</td><td>48.6 / 68.6</td><td>51.4 / 69.8</td><td>54.3 / 74.9</td><td>52.2 / 72.8</td><td>51.9 / 71.3</td></tr><tr><td>BERTscore</td><td>47.4 / 64.7</td><td>48.0 / 66.9</td><td>61.9 / 79.9</td><td>49.7 / 69.6</td><td>50.8 / 69.5</td><td>53.4 / 71.3</td><td>50.8 / 71.7</td><td>51.7 / 70.5</td></tr><tr><td>BERTscore-b</td><td>47.5 / 66.4</td><td>48.8 / 68.7</td><td>61.7 / 81.3</td><td>49.9 / 70.6</td><td>50.7 / 69.8</td><td>53.8 / 73.2</td><td>49.1 / 70.1</td><td>51.6 / 71.5</td></tr><tr><td>TWMD</td><td>48.3 / 65.8</td><td>49.6 / 68.8</td><td>62.5 / 81.2</td><td>51.3 / 70.5</td><td>52.1 / 71.2</td><td>54.6 / 73.8</td><td>54.7 / 75.5</td><td>53.3 / 72.3</td></tr><tr><td>TWMD-b</td><td>50.0 / 68.5</td><td>51.5 / 70.8</td><td>63.0 / 82.8</td><td>51.9 / 72.3</td><td>53.5 / 73.2</td><td>56.6 / 77.0</td><td>54.0 / 75.0</td><td>54.4 / 74.3</td></tr><tr><td>TRWMD</td><td>47.4 / 64.9</td><td>47.9 / 67.0</td><td>61.8 / 80.1</td><td>49.5 / 69.3</td><td>50.9 / 69.5</td><td>53.4 / 71.8</td><td>50.7 / 71.7</td><td>51.7 / 70.7</td></tr><tr><td>TRWMD-b</td><td>48.5 / 66.8</td><td>49.0 / 68.5</td><td>61.1 / 81.3</td><td>49.5 / 69.3</td><td>51.4 / 69.8</td><td>54.3 / 74.7</td><td>50.2 / 70.8</td><td>52.0 / 71.6</td></tr><tr><td>roberta-large</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>SBERT</td><td>50.9 / 67.2</td><td>53.1 / 70.8</td><td>61.3 / 73.6</td><td>51.6 / 70.5</td><td>51.4 / 69.0</td><td>52.4 / 61.4</td><td>51.9 / 68.0</td><td>53.2 / 68.6</td></tr><tr><td>SBERT-b</td><td>47.6 / 66.9</td><td>50.7 / 69.5</td><td>56.8 / 74.1</td><td>47.9 / 67.8</td><td>47.3 / 66.4</td><td>48.5 / 65.2</td><td>47.6 / 67.5</td><td>49.5 / 68.2</td></tr><tr><td>CKA</td><td>51.4 / 68.7</td><td>53.4 / 71.3</td><td>61.5 / 74.5</td><td>51.8 / 71.1</td><td>51.8 / 69.3</td><td>52.7 / 62.7</td><td>52.1 / 68.8</td><td>53.5 / 69.5</td></tr><tr><td>CKA-b</td><td>51.6 / 72.3</td><td>54.4 / 74.2</td><td>61.8 / 81.6</td><td>52.5 / 73.7</td><td>53.2 / 73.0</td><td>53.6 / 73.8</td><td>52.7 / 73.5</td><td>54.3 / 74.6</td></tr><tr><td>MoverScore</td><td>51.6 / 68.8</td><td>53.9 / 71.8</td><td>62.0 / 81.1</td><td>53.4 / 71.7</td><td>54.5 / 71.8</td><td>56.3 / 76.2</td><td>56.3 / 76.1</td><td>55.5 / 73.9</td></tr><tr><td>MoverScore-b</td><td>51.2 / 69.6</td><td>53.2 / 71.7</td><td>63.1 / 82.1</td><td>53.3 / 72.7</td><td>54.5 / 72.8</td><td>56.8 / 76.9</td><td>55.1 / 75.4</td><td>55.3 / 74.5</td></tr><tr><td>BERTscore</td><td>50.9 / 66.9</td><td>53.4 / 72.3</td><td>61.7 / 79.6</td><td>53.5 / 71.6</td><td>53.8 / 71.5</td><td>54.8 / 71.7</td><td>53.9 / 74.4</td><td>54.6 / 72.6</td></tr><tr><td>BERTscore-b</td><td>51.7 / 71.2</td><td>53.9 / 74.1</td><td>63.6 / 82.5</td><td>54.8 / 75.1</td><td>54.8 / 73.7</td><td>55.6 / 75.0</td><td>52.7 / 73.6</td><td>55.3 / 75.0</td></tr><tr><td>TWMD</td><td>52.3 / 69.1</td><td>55.7 / 74.4</td><td>63.1 / 81.5</td><td>54.1 / 72.6</td><td>56.0 / 74.1</td><td>55.7 / 74.5</td><td>57.5 / 77.7</td><td>56.3 / 74.9</td></tr><tr><td>TWMD-b</td><td>53.9 / 73.3</td><td>56.4 / 75.9</td><td>64.4 / 83.5</td><td>55.2 / 75.1</td><td>56.9 / 76.2</td><td>57.9 / 78.1</td><td>56.8 / 77.4</td><td>57.4 / 77.1</td></tr><tr><td>TRWMD</td><td>50.8 / 67.3</td><td>53.3 / 72.1</td><td>61.5 / 79.7</td><td>53.1 / 71.3</td><td>54.0 / 71.5</td><td>54.5 / 72.0</td><td>54.0 / 74.3</td><td>54.5 / 72.6</td></tr><tr><td>TRWMD-b</td><td>52.5 / 71.2</td><td>53.9 / 73.4</td><td>62.7 / 82.0</td><td>53.8 / 73.4</td><td>54.8 / 72.8</td><td>55.7 / 76.1</td><td>53.4 / 74.1</td><td>55.3 / 74.7</td></tr></table>",
"num": null
},
"TABREF2": {
"html": null,
"text": "",
"type_str": "table",
"content": "<table/>",
"num": null
}
}
}
}