{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:10:27.353419Z" }, "title": "Training and Domain Adaptation for Supervised Text Segmentation", "authors": [ { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Mannheim", "location": {} }, "email": "" }, { "first": "Ananya", "middle": [], "last": "Ganesh", "suffix": "", "affiliation": {}, "email": "aganesh002@ets.org" }, { "first": "Swapna", "middle": [], "last": "Somasundaran", "suffix": "", "affiliation": {}, "email": "ssomasundaran@ets.org" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Unlike traditional unsupervised text segmentation methods, recent supervised segmentation models rely on Wikipedia as the source of large-scale segmentation supervision. These models have, however, predominantly been evaluated on the in-domain (Wikipedia-based) test sets, preventing conclusions about their general segmentation efficacy. In this work, we focus on the domain transfer performance of supervised neural text segmentation in the educational domain. To this end, we first introduce K12SEG, a new dataset for evaluation of supervised segmentation, created from educational reading material for grade-1 to collegelevel students. We then benchmark a hierarchical text segmentation model (HITS), based on RoBERTa, in both in-domain and domaintransfer segmentation experiments. While HITS produces state-of-the-art in-domain performance (on three Wikipedia-based test sets), we show that, subject to the standard fullblown fine-tuning, it is susceptible to domain overfitting. We identify adapter-based finetuning as a remedy that substantially improves transfer performance.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Unlike traditional unsupervised text segmentation methods, recent supervised segmentation models rely on Wikipedia as the source of large-scale segmentation supervision. These models have, however, predominantly been evaluated on the in-domain (Wikipedia-based) test sets, preventing conclusions about their general segmentation efficacy. In this work, we focus on the domain transfer performance of supervised neural text segmentation in the educational domain. To this end, we first introduce K12SEG, a new dataset for evaluation of supervised segmentation, created from educational reading material for grade-1 to collegelevel students. We then benchmark a hierarchical text segmentation model (HITS), based on RoBERTa, in both in-domain and domaintransfer segmentation experiments. While HITS produces state-of-the-art in-domain performance (on three Wikipedia-based test sets), we show that, subject to the standard fullblown fine-tuning, it is susceptible to domain overfitting. We identify adapter-based finetuning as a remedy that substantially improves transfer performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Organizing long texts into coherent segments facilitates human text comprehension as well as downstream tasks like text summarization (Angheluta et al., 2002; Bokaei et al., 2016) , passage retrieval (Huang et al., 2003; Prince and Labadi\u00e9, 2007; Shtekh et al., 2018) , and sentiment analysis (Xia et al., 2010; Li et al., 2020) . Text segmentation is very important in the educational domain as it enables large-scale passage extraction. 
Educators, for example, need to extract coherent passage segments from books to create reading materials for students. Similarly, test developers must create reading assessments at scale by extracting coherent segments from a variety of sources.", "cite_spans": [ { "start": 134, "end": 158, "text": "(Angheluta et al., 2002;", "ref_id": "BIBREF0" }, { "start": 159, "end": 179, "text": "Bokaei et al., 2016)", "ref_id": "BIBREF3" }, { "start": 200, "end": 220, "text": "(Huang et al., 2003;", "ref_id": "BIBREF15" }, { "start": 221, "end": 246, "text": "Prince and Labadi\u00e9, 2007;", "ref_id": "BIBREF22" }, { "start": 247, "end": 267, "text": "Shtekh et al., 2018)", "ref_id": "BIBREF26" }, { "start": 293, "end": 311, "text": "(Xia et al., 2010;", "ref_id": "BIBREF29" }, { "start": 312, "end": 328, "text": "Li et al., 2020)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most segmentation models allow for (i.e., sequential) segmentation (Hearst, 1994; Choi, 2000; Riedl and Biemann, 2012; Glava\u0161 et al., 2016; Koshorek et al., 2018; Glava\u0161 and Somasundaran, 2020) , though methods for hierarchical segmentation have been proposed as well (Eisenstein, 2009; Du et al., 2013; Bayomi and Lawless, 2018) . Owing to the absence of large annotated datasets, (linear) text segmentation has long been limited to unsupervised models, relying on various measures of lexical and semantic sentence overlap (Hearst, 1994; Choi, 2000; Utiyama and Isahara, 2001; Fragkou et al., 2004; Glava\u0161 et al., 2016) and topic modeling (Brants et al., 2002; Misra et al., 2009; Riedl and Biemann, 2012) . More recently, Koshorek et al. (2018) automatically created a large segment-annotated dataset WIKI727 by leveraging the headings structure in Wikipedia articles, effectively enabling supervised text segmentation; they then trained a hierarchical recurrent neural segmentation model on WIKI727. In subsequent work, Glava\u0161 and Somasundaran (2020) improved on their segmentation performance by (i) replacing recurrent components of the hierarchical model with transformer networks (Vaswani et al., 2017) and (ii) adding an auxiliary self-supervised coherence objective. Although both Koshorek et al. 
(2018) and Glava\u0161 and Somasundaran (2020) report massive gains over unsupervised baselines, their models have mostly been subject to in-domain evaluation on test sets also derived from Wikipedia.", "cite_spans": [ { "start": 67, "end": 81, "text": "(Hearst, 1994;", "ref_id": "BIBREF12" }, { "start": 82, "end": 93, "text": "Choi, 2000;", "ref_id": "BIBREF6" }, { "start": 94, "end": 118, "text": "Riedl and Biemann, 2012;", "ref_id": "BIBREF24" }, { "start": 119, "end": 139, "text": "Glava\u0161 et al., 2016;", "ref_id": "BIBREF10" }, { "start": 140, "end": 162, "text": "Koshorek et al., 2018;", "ref_id": "BIBREF17" }, { "start": 163, "end": 193, "text": "Glava\u0161 and Somasundaran, 2020)", "ref_id": "BIBREF11" }, { "start": 268, "end": 286, "text": "(Eisenstein, 2009;", "ref_id": "BIBREF8" }, { "start": 287, "end": 303, "text": "Du et al., 2013;", "ref_id": "BIBREF7" }, { "start": 304, "end": 329, "text": "Bayomi and Lawless, 2018)", "ref_id": "BIBREF1" }, { "start": 524, "end": 538, "text": "(Hearst, 1994;", "ref_id": "BIBREF12" }, { "start": 539, "end": 550, "text": "Choi, 2000;", "ref_id": "BIBREF6" }, { "start": 551, "end": 577, "text": "Utiyama and Isahara, 2001;", "ref_id": "BIBREF27" }, { "start": 578, "end": 599, "text": "Fragkou et al., 2004;", "ref_id": "BIBREF9" }, { "start": 600, "end": 620, "text": "Glava\u0161 et al., 2016)", "ref_id": "BIBREF10" }, { "start": 640, "end": 661, "text": "(Brants et al., 2002;", "ref_id": "BIBREF4" }, { "start": 662, "end": 681, "text": "Misra et al., 2009;", "ref_id": "BIBREF20" }, { "start": 682, "end": 706, "text": "Riedl and Biemann, 2012)", "ref_id": "BIBREF24" }, { "start": 724, "end": 746, "text": "Koshorek et al. (2018)", "ref_id": "BIBREF17" }, { "start": 1023, "end": 1053, "text": "Glava\u0161 and Somasundaran (2020)", "ref_id": "BIBREF11" }, { "start": 1187, "end": 1209, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF28" }, { "start": 1290, "end": 1312, "text": "Koshorek et al. (2018)", "ref_id": "BIBREF17" }, { "start": 1317, "end": 1347, "text": "Glava\u0161 and Somasundaran (2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this work, in contrast, we concern ourselves with domain transfer of supervised text segmentation models, with a focus on the educational domain. To investigate the effects of domain transfer in supervised text segmentation, we first introduce K12SEG -a segment-annotated dataset which we automatically created from educational texts designed for grade-1 to college-level student population. We then benchmark a hierarchical neural token transformer text segmentation model (HITS) in a range of indomain and domain-transfer segmentation experiments involving WIKI727 (Koshorek et al., 2018) and our new K12SEG dataset. Our HITS model, illustrated in Figure 1 , though similar to the hierarchical segmentation models of Glava\u0161 and Somasundaran (2020) , differs in two crucial aspects. First, we initialize the parameters of the lower (token-level) transformer with the weights of the pretrained RoBERTa (Liu et al., 2019) . Secondly, aiming to prevent both (1) forgetting of distributional information captured in RoBERTa's parameters and (2) overfitting to the training domain -we augment the layers of the token-level transformer with adapter parameters (Rebuffi et al., 2018; Houlsby et al., 2019) before segmentation training. 
Adapter-based fine-tuning updates only the additional adapter parameters, while the original transformer parameters remain frozen: this fully preserves the distributional knowledge obtained in the transformer's pretraining. Encoding out-of-domain segmentation knowledge (e.g., from the WIKI727 dataset) separately from the distributional information (original RoBERTa parameters) allows us to combine the two types of information more flexibly during the secondary segmentation training in the target domain (e.g., on K12SEG), resulting in more effective domain transfer.", "cite_spans": [ { "start": 570, "end": 593, "text": "(Koshorek et al., 2018)", "ref_id": "BIBREF17" }, { "start": 722, "end": 752, "text": "Glava\u0161 and Somasundaran (2020)", "ref_id": "BIBREF11" }, { "start": 905, "end": 923, "text": "(Liu et al., 2019)", "ref_id": "BIBREF19" }, { "start": 1158, "end": 1180, "text": "(Rebuffi et al., 2018;", "ref_id": "BIBREF23" }, { "start": 1181, "end": 1202, "text": "Houlsby et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 653, "end": 661, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experimental results confirm the above expectations. Our adapter-augmented HITS model trained on WIKI727, besides yielding state-of-the-art in-domain (Wikipedia) segmentation performance, facilitates domain transfer and leads to substantial gains in our target educational domain (K12SEG).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Hierarchical Transformer-Based Model. Our base segmentation model ( Figure 1 ) consists of two hierarchically linked transformers: the lower transformer contextualizes tokens within sentences and yields sentence embeddings; the upper transformer then contextualizes sentence representations. An individual training instance is a sequence of N sentences, {s 1 , . . . , s N }, each consisting of T (subword) tokens, s i = {w i,1 , . . . , w i,T }. We initialize the lower transformer with the pretrained RoBERTa weights (Liu et al., 2019) . 1 We then use the transformed vector of the sentence start token (<s>), s i , as the embedding of the sentence s i . The purpose of the randomly initialized upper sentence-level transformer is to contextualize the sentences in the sequence with one another. Let x i be the transformed representation of the sentence s i , produced by the upper transformer. The segmentation prediction for the sentence s i is then made by a simple feed-forward softmax classifier:", "cite_spans": [ { "start": 519, "end": 537, "text": "(Liu et al., 2019)", "ref_id": "BIBREF19" }, { "start": 540, "end": 541, "text": "1", "ref_id": null } ], "ref_spans": [ { "start": 68, "end": 76, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Segmentation Model", "sec_num": "2" }, { "text": "y i = softmax (Wx i + b).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation Model", "sec_num": "2" }, { "text": "We minimize the binary cross-entropy loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation Model", "sec_num": "2" }, { "text": "Adapter-Based Training. Unlike Koshorek et al. 
(2018) and Glava\u0161 and Somasundaran (2020), we initialize the lower transformer with RoBERTa weights, encoding general-purpose distributional knowledge. Full fine-tuning, in which all transformer parameters are updated in downstream training, may overwrite useful distributional signal with domain-specific artefacts, overfit the model to the training domain, and impede domain transfer for the downstream task (R\u00fcckl\u00e9 et al., 2020) . Alternatively, adapter-based fine-tuning (Rebuffi et al., 2018; Houlsby et al., 2019) injects additional adapter parameters into the transformer layers and updates only them during downstream fine-tuning, keeping the original transformer parameters unchanged. We adopt the bottleneck adapter architecture of Houlsby et al. (2019), reported effective for a wide range of downstream tasks. Concretely, in each layer of the lower transformer, we insert two bottleneck adapters: one after the multi-head attention sublayer and another after the feed-forward sublayer. Let X \u2208 R T \u00d7H stack contextualized vectors for the sequence of T tokens in one of the transformer layers, input to the adapter layer. The adapter then yields the following output:", "cite_spans": [ { "start": 31, "end": 53, "text": "Koshorek et al. (2018)", "ref_id": "BIBREF17" }, { "start": 455, "end": 476, "text": "(R\u00fcckl\u00e9 et al., 2020)", "ref_id": "BIBREF25" }, { "start": 518, "end": 540, "text": "(Rebuffi et al., 2018;", "ref_id": "BIBREF23" }, { "start": 541, "end": 562, "text": "Houlsby et al., 2019;", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Segmentation Model", "sec_num": "2" }, { "text": "X = X + g (XW d + b d ) W u + b u .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation Model", "sec_num": "2" }, { "text": "The parameter matrix W d \u2208 R H\u00d7a down-projects the token vectors from X to the adapter size a < H, and W u \u2208 R a\u00d7H up-projects the activated down-projections back to the transformer's hidden size H; g is the non-linear activation function.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation Model", "sec_num": "2" }, { "text": "Training Instances and Inference. We train the model on sequences of N sentences as instances, which we create by sliding a window of size N over the document's sentences with a step of N/2. At inference, for each sentence s, we make predictions for all of the windows that contain s. This means that we obtain (at most) N segmentation probabilities for each sentence (for the i-th sentence, we get predictions from windows", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation Model", "sec_num": "2" }, { "text": "[i\u2212N +1 : i], [i\u2212N +2 : i+1], . . . , [i : i+N \u22121]).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation Model", "sec_num": "2" }, { "text": "We average the sentence's segmentation probabilities obtained across different windows and predict that it starts a new segment if the average is above the threshold t. We treat the sequence length N and threshold t as hyperparameters and optimize them using the development datasets. For brevity, we describe the optimization details in the Appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Segmentation Model", "sec_num": "2" }, { "text": "3.1 Data WIKI727. To the best of our knowledge, WIKI727 (Koshorek et al., 2018) is the only large segment-annotated dataset designed for supervised text segmentation. 
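For illustration only, and not part of the original paper or its parse, the bottleneck adapter defined in Section 2 above can be sketched in a few lines of PyTorch; the class and argument names (BottleneckAdapter, hidden_size, adapter_size) are our own, and only the residual form X + g (XW d + b d ) W u + b u , the GeLU activation g, the hidden size H = 768, and the adapter size a = 64 are taken from the text.

    import torch
    import torch.nn as nn

    class BottleneckAdapter(nn.Module):
        # Residual bottleneck adapter (Houlsby et al., 2019): down-project the token
        # vectors to the adapter size a, apply the non-linearity g, up-project back
        # to the hidden size H, and add the result to the input.
        def __init__(self, hidden_size=768, adapter_size=64):
            super().__init__()
            self.down = nn.Linear(hidden_size, adapter_size)  # W_d, b_d
            self.up = nn.Linear(adapter_size, hidden_size)    # W_u, b_u
            self.g = nn.GELU()                                # activation g

        def forward(self, x):
            # x: contextualized token vectors of shape (T, H) or (batch, T, H)
            return x + self.up(self.g(self.down(x)))

In the setup described above, two such adapters would be inserted into every layer of the lower transformer (one after the multi-head attention sublayer, one after the feed-forward sublayer), and only their parameters, together with the upper transformer and the classifier, would be updated during segmentation training.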
It consists of 727K Wikipedia articles (train portion: 582K articles), automatically segmented according to the articles' heading structure.", "cite_spans": [ { "start": 56, "end": 79, "text": "(Koshorek et al., 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3" }, { "text": "K12SEG. To empirically evaluate domain transfer in supervised text segmentation, we introduce a new dataset, dubbed K12SEG, created from educational reading material designed for grade-1 to college-level students (Zeno et al., 1995) . The original dataset, the Educator's Word Frequency Guide, was created by standardized sampling of reading materials from a variety of content areas (e.g., science, social science, home economics, fine arts, health, business, etc.). Each sample is 250-325 words long. We create one synthetic K12SEG instance by selecting and concatenating two samples from (a) the same book, (b) different books from the same content area (e.g., science), or (c) different books from different content areas. In contrast to WIKI727, in which the number and sizes of segments greatly vary across Wikipedia articles, K12SEG documents are more uniform: each has two segments (samples), with only minor variation in document length (mean: 30 sentences). Besides the difference in genre between WIKI727 and K12SEG, this stark difference between their distributions of segment numbers and sizes poses an additional challenge for domain transfer. We split the total of 18,906 K12SEG documents into train (15,570 documents), development (3,000), and test portions (336). An example 2-segment document from K12SEG is shown in Table 1 .", "cite_spans": [ { "start": 213, "end": 232, "text": "(Zeno et al., 1995)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 1324, "end": 1331, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Evaluation", "sec_num": "3" }, { "text": "Wikipedia-Based Test Sets. For the in-domain (Wikipedia) evaluation, we use three small-sized test sets. WIKI50 is an additional test set consisting of 50 documents, created by Koshorek et al. (2018) ", "cite_spans": [ { "start": 177, "end": 199, "text": "Koshorek et al. (2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3" }, { "text": "Experiments. We carry out two sets of experiments. We first benchmark the performance of our HITS model \"in domain\", i.e., by training it on WIKI727 and evaluating it on WIKI50, ELEMENTS, and CITIES. Here we directly compare HITS (with full and adapter-based fine-tuning) with state-of-the-art segmentation models: the hierarchical Bi-LSTM (HBi-LSTM) model of Koshorek et al. (2018) , and two hierarchical transformer variants of Glava\u0161 and Somasundaran (2020), with (CATS) and without (TLT-TS) the auxiliary coherence objective. The second set of experiments quantifies the efficacy of HITS in transfer to the educational domain. We compare the performance of \"in-domain\" training on K12SEG with transfer strategies: (i) zero-shot transfer: HITS variants (full and adapter-based fine-tuning) trained on WIKI727 and evaluated on the K12SEG test set, and (ii) sequential training: HITS variants sequentially trained first on WIKI727 and then on the train portion of K12SEG.", "cite_spans": [ { "start": 361, "end": 383, "text": "Koshorek et al. (2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.2" }, { "text": "Training and Optimization Details. 
We initialize the weights of the lower transformer network in all HITS variants with the pretrained RoBERTa Base model, having L L = 12 layers (with 12 attention heads each) and hidden representations", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.2" }, { "text": "Second segment", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First segment", "sec_num": null }, { "text": "Traveling familiar routes in our family cars we grow so accustomed to crossing small bridges and viaducts that we forget they are there. We have to stop and think to remember how often they come along. Only when a bridge is closed for repairs and we have to take a long detour do we realize how difficult life would be without it. Try to imagine our world with all the bridges removed. In many places life would be seriously disrupted, traffic would be paralyzed, and business would be terrible. Bridges bring us together and keep us together. They are a basic necessity for civilization. The first structures human beings built were bridges. Before prehistoric people began to build even the crudest shelter for themselves, they bridged streams. Early prehistoric tribes were wanderers. Since they did not stay in one place they did not think of building themselves houses. But they could not wander far without finding a stream in their way. Nature provided the first bridges. Finding themselves confronted with some narrow but rapid river, humans noticed a tree that had fallen across the river from bank to bank. The person who first scrambled across a fallen log, perhaps after watching monkeys run across it, was the first human being to cross a bridge. Eventually, when they had learned how to chop down a tree, they also learned how to make a tree fall in the direction they wanted it to fall. The wandering tribe that first deliberately made a tree fall across a stream were the first bridge-builders.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First segment", "sec_num": null }, { "text": "Working in the mud and water of a river bottom was difficult and dangerous. People were often crushed or maimed by the pile driver or the piles. But the work on the foundations is the most important part of bridgebuilding. The part of a bridge that is underwater, the part we never see, is more important than the part we do see, because no matter how well made the superstructure may be, if the foundation is not solid the bridge will fall. Not only did the pier foundations have to be solid, they also had to be protected as much as possible from wear. A flowing river constantly stirs up the bottom, so that the water's lower depths are a thick soup filled with mud and sand and pebbles which grind against anything in the path of the current. This action is called scour. to reduce the wear and tear of the current, the Romans built the fronts of their piers in the shape of a boat's prow. The Romans used only one kind of arch, the semicircular. The arch describes a full half-circle from pier to pier. Each end of the half-circle rests on a pier, and the two piers will hold the arch up by themselves, even before the rest of the bridge is built, provided each pier is at least one third as thick as the width of the arch. Thus a bridge could be built one arch at a time, and if the work had to stop the partial structure would stay in place until work could be resumed. 
The Romans built their bridges during the summer and fall, when the weather was best and the water level was generally lowest, and stopped during winter and spring.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "First segment", "sec_num": null }, { "text": "of size H = 768. Our upper-level transformer for sentence contextualization has L U = 6 layers (with 6 attention heads each), and the same hidden size H = 768. We apply dropout (p = 0.1) to the outputs of both the lower and the upper transformer. In adapter-based fine-tuning we set the adapter size to a = 64 and use GeLU (Hendrycks and Gimpel, 2016) as the activation function. In all experiments, we limit the sentence input to T = 128 subword tokens (shorter sentences are padded, longer sentences trimmed). We optimize the model parameters using the Adam algorithm (Kingma and Ba, 2015) with the initial learning rate of 10 \u22125 . We train for at most 30 epochs over the respective training set (WIKI727 or K12SEG), with early stopping based on the loss on the respective development set. We train in batches of 32 instances (i.e., 32 sequences of N sentences) and have found (via cross-validation on the respective development sets) the optimal sequence length to be N = 16 sentences and the optimal average segmentation probability threshold at inference time to be t = 0.35.", "cite_spans": [ { "start": 323, "end": 351, "text": "(Hendrycks and Gimpel, 2016)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3.2" }, { "text": "We report the results in terms of P K , the standard evaluation metric for text segmentation (Beeferman et al., 1999) . P K is the percentage of wrong predictions on whether or not the first and last sentence in a sequence of K consecutive sentences belong to the same segment. As in previous work (Koshorek et al., 2018; Glava\u0161 and Somasundaran, 2020) , we set K to one half of the average gold-standard segment size of the evaluation dataset.", "cite_spans": [ { "start": 93, "end": 117, "text": "(Beeferman et al., 1999)", "ref_id": "BIBREF2" }, { "start": 298, "end": 321, "text": "(Koshorek et al., 2018;", "ref_id": "BIBREF17" }, { "start": 322, "end": 352, "text": "Glava\u0161 and Somasundaran, 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Results and Discussion", "sec_num": "3.3" }, { "text": "In-Domain Wikipedia Evaluation. We report the results of the in-domain Wikipedia evaluation on WIKI50, CITIES, and ELEMENTS in Table 2 . Our HITS model variants, which we start training with the pretrained RoBERTa as the token-level transformer, outperform the hierarchical neural models from (Koshorek et al., 2018) and (Glava\u0161 and Somasundaran, 2020), which start from a randomly initialized token-level encoder. This is consistent with findings from many other tasks: fine-tuning pretrained transformers yields better results than task-specific training from scratch, even if the dataset is large (as is the case with WIKI727). Full fine-tuning produces better results on WIKI50, whereas adapter-based fine-tuning exhibits stronger performance on CITIES and ELEMENTS. Since the articles in WIKI50 come from a range of Wikipedia categories, much like in the training set WIKI727, whereas CITIES and ELEMENTS each contain articles from a single category, we believe these results already indicate that full fine-tuning is more prone to domain (genre, topic) overfitting than adapter-based tuning. 
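As a reading aid only, and not something provided by the authors, the P K error measure described above can be computed with a simple sliding comparison over the two segmentations; the function name p_k and the list-of-segment-ids input format are our own assumptions.

    def p_k(reference, hypothesis, k=None):
        # reference and hypothesis hold one segment id per sentence.
        # P_k (Beeferman et al., 1999): slide a probe of width k over the document
        # and count how often the two segmentations disagree on whether the sentences
        # at the two ends of the probe belong to the same segment.
        if k is None:
            # half of the average gold-standard segment size, as in the paper
            k = max(1, round(len(reference) / (2 * len(set(reference)))))
        disagreements = 0
        comparisons = 0
        for i in range(len(reference) - k):
            same_ref = reference[i] == reference[i + k]
            same_hyp = hypothesis[i] == hypothesis[i + k]
            if same_ref != same_hyp:
                disagreements += 1
            comparisons += 1
        return disagreements / comparisons if comparisons else 0.0

Lower values mean fewer disagreements with the gold segmentation, which is why P K is reported as an error measure.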
Remarkably, HITS (Full) surpasses the human WIKI50 performance, reported to stand at 14.97 P K points (Koshorek et al., 2018) .", "cite_spans": [ { "start": 293, "end": 316, "text": "(Koshorek et al., 2018)", "ref_id": "BIBREF17" }, { "start": 1192, "end": 1215, "text": "(Koshorek et al., 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 127, "end": 134, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "3.3" }, { "text": "Transfer Results. Table 3 shows the performance of both in-domain and transferred HITS model variants on the K12SEG test set. Interestingly, with Full fine-tuning, we observe the same performance regardless of whether we train the model on the out-of-domain (but much larger) WIKI727 dataset or the (smaller) in-domain K12SEG training set. More interestingly, adapter-based fine-tuning in the zero-shot domain transfer yields better performance than in-domain adapter fine-tuning. The poor performance of in-domain training could mean that K12SEG is either (a) insufficiently large or (b) composed of segmentation examples too diverse to generalize over. Gains from sequential domain transfer, in which the model is exposed to exactly the same K12SEG training set but only after it was trained on the much larger out-of-domain WIKI727 dataset, point to (a) as the more likely explanation. In both in-domain and zero-shot setups, adapter-based fine-tuning produces better segmentation than full fine-tuning, contributing to the conclusion that adapter-based fine-tuning curbs overfitting to domain-specific artefacts, improving the model's generalization ability. Finally, the sequential training in which we freeze the lower transformer's parameters (including the adapters) during the (secondary) in-domain training gives the best result overall. We speculate that the relatively small K12SEG train set gives the advantage to the model variant that uses that limited-size data to fine-tune fewer parameters (i.e., only the upper, sentence-level transformer).", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 25, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Results and Discussion", "sec_num": "3.3" }, { "text": "Table 3: Segmentation performance in domain transfer. Evaluation on the K12SEG test set. In domain: training on the K12SEG train set; Zero-shot: training on the train portion of WIKI727; Sequential: sequential training, first on WIKI727 and then on the train portion of K12SEG. For Sequential training, the column Freeze specifies whether the lower transformer's parameters were frozen during the secondary, in-domain fine-tuning (on the train portion of K12SEG).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain", "sec_num": null }, { "text": "Supervised text segmentation has been limited to a single large-scale segmentation dataset, WIKI727, built automatically from Wikipedia. In this work, we study domain transfer for supervised text segmentation: we introduce K12SEG, a new dataset for supervised text segmentation built from educational reading materials (for grade-1 to college-level students), and use it together with WIKI727 in our domain transfer experiments. Our hierarchical segmentation model (HITS) couples the pretrained RoBERTa with the upper-level transformer that provides sentence contextualization. 
We show that HITS obtains state-of-the-art performance on standard Wikipedia-based evaluation datasets, but overfits to the training domain (Wikipedia). We finally substantially improve model's transfer capabilities through adapter-based fine-tuning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The use of topic segmentation for automatic summarization", "authors": [ { "first": "Roxana", "middle": [], "last": "Angheluta", "suffix": "" }, { "first": "Rik", "middle": [ "De" ], "last": "Busser", "suffix": "" }, { "first": "Marie-Francine", "middle": [], "last": "Moens", "suffix": "" } ], "year": 2002, "venue": "Proc. of the ACL-2002 Workshop on Automatic Summarization", "volume": "", "issue": "", "pages": "11--12", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roxana Angheluta, Rik De Busser, and Marie-Francine Moens. 2002. The use of topic segmentation for au- tomatic summarization. In Proc. of the ACL-2002 Workshop on Automatic Summarization, pages 11- 12.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "C-hts: A concept-based hierarchical text segmentation approach", "authors": [ { "first": "Mostafa", "middle": [], "last": "Bayomi", "suffix": "" }, { "first": "S\u00e9amus", "middle": [], "last": "Lawless", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mostafa Bayomi and S\u00e9amus Lawless. 2018. C-hts: A concept-based hierarchical text segmentation ap- proach. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Eval- uation (LREC 2018).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Statistical models for text segmentation. Machine learning", "authors": [ { "first": "Doug", "middle": [], "last": "Beeferman", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Berger", "suffix": "" }, { "first": "John", "middle": [], "last": "Lafferty", "suffix": "" } ], "year": 1999, "venue": "", "volume": "34", "issue": "", "pages": "177--210", "other_ids": {}, "num": null, "urls": [], "raw_text": "Doug Beeferman, Adam Berger, and John Lafferty. 1999. Statistical models for text segmentation. Ma- chine learning, 34(1-3):177-210.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Extractive summarization of multi-party meetings through discourse segmentation", "authors": [ { "first": "Mohammad", "middle": [], "last": "Hadi Bokaei", "suffix": "" }, { "first": "Hossein", "middle": [], "last": "Sameti", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2016, "venue": "Natural Language Engineering", "volume": "22", "issue": "1", "pages": "41--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohammad Hadi Bokaei, Hossein Sameti, and Yang Liu. 2016. Extractive summarization of multi-party meetings through discourse segmentation. Natural Language Engineering, 22(1):41-72.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Topic-based document segmentation with probabilistic latent semantic analysis", "authors": [ { "first": "Thorsten", "middle": [], "last": "Brants", "suffix": "" }, { "first": "Francine", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Tsochantaridis", "suffix": "" } ], "year": 2002, "venue": "Proc. 
of CIKM", "volume": "", "issue": "", "pages": "211--218", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorsten Brants, Francine Chen, and Ioannis Tsochan- taridis. 2002. Topic-based document segmentation with probabilistic latent semantic analysis. In Proc. of CIKM, pages 211-218. ACM.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Global models of document structure using latent permutations", "authors": [ { "first": "", "middle": [], "last": "Harr Chen", "suffix": "" }, { "first": "Regina", "middle": [], "last": "Srk Branavan", "suffix": "" }, { "first": "David", "middle": [ "R" ], "last": "Barzilay", "suffix": "" }, { "first": "", "middle": [], "last": "Karger", "suffix": "" } ], "year": 2009, "venue": "Proc. of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "371--379", "other_ids": {}, "num": null, "urls": [], "raw_text": "Harr Chen, SRK Branavan, Regina Barzilay, and David R Karger. 2009. Global models of document structure using latent permutations. In Proc. of Hu- man Language Technologies: The 2009 Annual Con- ference of the North American Chapter of the Associ- ation for Computational Linguistics, pages 371-379. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Advances in domain independent linear text segmentation", "authors": [ { "first": "Y", "middle": [ "Y" ], "last": "Freddy", "suffix": "" }, { "first": "", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2000, "venue": "1st Meeting of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Freddy YY Choi. 2000. Advances in domain indepen- dent linear text segmentation. In 1st Meeting of the North American Chapter of the Association for Com- putational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Topic segmentation with a structured topic model", "authors": [ { "first": "Lan", "middle": [], "last": "Du", "suffix": "" }, { "first": "Wray", "middle": [], "last": "Buntine", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2013, "venue": "Proc. of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "190--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lan Du, Wray Buntine, and Mark Johnson. 2013. Topic segmentation with a structured topic model. In Proc. of the 2013 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 190-200.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Hierarchical text segmentation from multi-scale lexical cohesion", "authors": [ { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" } ], "year": 2009, "venue": "Association for Computational Linguistics", "volume": "", "issue": "", "pages": "353--361", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Eisenstein. 2009. Hierarchical text segmentation from multi-scale lexical cohesion. In Proc. of HLT- NAACL, pages 353-361. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A dynamic programming algorithm for linear text segmentation", "authors": [ { "first": "Pavlina", "middle": [], "last": "Fragkou", "suffix": "" }, { "first": "Vassilios", "middle": [], "last": "Petridis", "suffix": "" }, { "first": "Ath", "middle": [], "last": "Kehagias", "suffix": "" } ], "year": 2004, "venue": "Journal of Intelligent Information Systems", "volume": "23", "issue": "2", "pages": "179--197", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pavlina Fragkou, Vassilios Petridis, and Ath Kehagias. 2004. A dynamic programming algorithm for linear text segmentation. Journal of Intelligent Informa- tion Systems, 23(2):179-197.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Unsupervised text segmentation using semantic relatedness graphs", "authors": [ { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Federico", "middle": [], "last": "Nanni", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" } ], "year": 2016, "venue": "Proc. of the Fifth Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "125--130", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goran Glava\u0161, Federico Nanni, and Simone Paolo Ponzetto. 2016. Unsupervised text segmentation us- ing semantic relatedness graphs. In Proc. of the Fifth Joint Conference on Lexical and Computational Se- mantics, pages 125-130.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Twolevel transformer and auxiliary coherence modeling for improved text segmentation", "authors": [ { "first": "Goran", "middle": [], "last": "Glava\u0161", "suffix": "" }, { "first": "Swapna", "middle": [], "last": "Somasundaran", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "7797--7804", "other_ids": {}, "num": null, "urls": [], "raw_text": "Goran Glava\u0161 and Swapna Somasundaran. 2020. Two- level transformer and auxiliary coherence modeling for improved text segmentation. In Proceedings of the The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI), pages 7797-7804.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Multi-paragraph segmentation of expository text", "authors": [ { "first": "A", "middle": [], "last": "Marti", "suffix": "" }, { "first": "", "middle": [], "last": "Hearst", "suffix": "" } ], "year": 1994, "venue": "Proc. of the 32nd annual meeting on Association for Computational Linguistics", "volume": "", "issue": "", "pages": "9--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marti A Hearst. 1994. Multi-paragraph segmentation of expository text. In Proc. of the 32nd annual meet- ing on Association for Computational Linguistics, pages 9-16. Association for Computational Linguis- tics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Gaussian error linear units (GELUs). CoRR", "authors": [ { "first": "Dan", "middle": [], "last": "Hendrycks", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Gimpel", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. Gaussian er- ror linear units (GELUs). 
CoRR, abs/1606.08415.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Parameter-efficient transfer learning for nlp", "authors": [ { "first": "Neil", "middle": [], "last": "Houlsby", "suffix": "" }, { "first": "Andrei", "middle": [], "last": "Giurgiu", "suffix": "" }, { "first": "Stanislaw", "middle": [], "last": "Jastrzebski", "suffix": "" }, { "first": "Bruna", "middle": [], "last": "Morrone", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "De Laroussilhe", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Gesmundo", "suffix": "" }, { "first": "Mona", "middle": [], "last": "Attariyan", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Gelly", "suffix": "" } ], "year": 2019, "venue": "International Conference on Machine Learning", "volume": "", "issue": "", "pages": "2790--2799", "other_ids": {}, "num": null, "urls": [], "raw_text": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790-2799.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Applying machine learning to text segmentation for information retrieval", "authors": [ { "first": "Xiangji", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Fuchun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Dale", "middle": [], "last": "Schuurmans", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Cercone", "suffix": "" }, { "first": "Stephen", "middle": [ "E" ], "last": "Robertson", "suffix": "" } ], "year": 2003, "venue": "Information Retrieval", "volume": "6", "issue": "3-4", "pages": "333--362", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiangji Huang, Fuchun Peng, Dale Schuurmans, Nick Cercone, and Stephen E Robertson. 2003. Apply- ing machine learning to text segmentation for infor- mation retrieval. Information Retrieval, 6(3-4):333- 362.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ICLR", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Text segmentation as a supervised learning task", "authors": [ { "first": "Omri", "middle": [], "last": "Koshorek", "suffix": "" }, { "first": "Adir", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Mor", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Rotman", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Berant", "suffix": "" } ], "year": 2018, "venue": "Proc. of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "2", "issue": "", "pages": "469--473", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omri Koshorek, Adir Cohen, Noam Mor, Michael Rot- man, and Jonathan Berant. 2018. Text segmentation as a supervised learning task. In Proc. 
of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 469-473.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Neural text segmentation and its application to sentiment analysis", "authors": [ { "first": "Jing", "middle": [], "last": "Li", "suffix": "" }, { "first": "Billy", "middle": [], "last": "Chiu", "suffix": "" }, { "first": "Shuo", "middle": [], "last": "Shang", "suffix": "" }, { "first": "Ling", "middle": [], "last": "Shao", "suffix": "" } ], "year": 2020, "venue": "IEEE Transactions on Knowledge and Data Engineering", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Li, Billy Chiu, Shuo Shang, and Ling Shao. 2020. Neural text segmentation and its application to sen- timent analysis. IEEE Transactions on Knowledge and Data Engineering.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Text segmentation via topic modeling: An analytical study", "authors": [ { "first": "Hemant", "middle": [], "last": "Misra", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Yvon", "suffix": "" }, { "first": "M", "middle": [], "last": "Joemon", "suffix": "" }, { "first": "Olivier", "middle": [], "last": "Jose", "suffix": "" }, { "first": "", "middle": [], "last": "Cappe", "suffix": "" } ], "year": 2009, "venue": "Proc. of CIKM", "volume": "", "issue": "", "pages": "1553--1556", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hemant Misra, Fran\u00e7ois Yvon, Joemon M Jose, and Olivier Cappe. 2009. Text segmentation via topic modeling: An analytical study. In Proc. of CIKM, pages 1553-1556. ACM.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Mad-x: An adapter-based framework for multi-task cross-lingual transfer", "authors": [ { "first": "Jonas", "middle": [], "last": "Pfeiffer", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.00052" ] }, "num": null, "urls": [], "raw_text": "Jonas Pfeiffer, Ivan Vuli\u0107, Iryna Gurevych, and Sebas- tian Ruder. 2020. Mad-x: An adapter-based frame- work for multi-task cross-lingual transfer. 
arXiv preprint arXiv:2005.00052.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Text segmentation based on document understanding for information retrieval", "authors": [ { "first": "Violaine", "middle": [], "last": "Prince", "suffix": "" }, { "first": "Alexandre", "middle": [], "last": "Labadi\u00e9", "suffix": "" } ], "year": 2007, "venue": "International Conference on Application of Natural Language to Information Systems", "volume": "", "issue": "", "pages": "295--304", "other_ids": {}, "num": null, "urls": [], "raw_text": "Violaine Prince and Alexandre Labadi\u00e9. 2007. Text segmentation based on document understanding for information retrieval. In International Conference on Application of Natural Language to Information Systems, pages 295-304. Springer.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Efficient parametrization of multidomain deep neural networks", "authors": [ { "first": "Hakan", "middle": [], "last": "Sylvestre-Alvise Rebuffi", "suffix": "" }, { "first": "Andrea", "middle": [], "last": "Bilen", "suffix": "" }, { "first": "", "middle": [], "last": "Vedaldi", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. 2018. Efficient parametrization of multi- domain deep neural networks. In CVPR.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Topictiling: a text segmentation algorithm based on lda", "authors": [ { "first": "Martin", "middle": [], "last": "Riedl", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Biemann", "suffix": "" } ], "year": 2012, "venue": "Proc. of ACL 2012 Student Research Workshop", "volume": "", "issue": "", "pages": "37--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Riedl and Chris Biemann. 2012. Topictiling: a text segmentation algorithm based on lda. In Proc. of ACL 2012 Student Research Workshop, pages 37- 42. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Multicqa: Zero-shot transfer of selfsupervised text matching models on a massive scale", "authors": [ { "first": "Andreas", "middle": [], "last": "R\u00fcckl\u00e9", "suffix": "" }, { "first": "Jonas", "middle": [], "last": "Pfeiffer", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Andreas R\u00fcckl\u00e9, Jonas Pfeiffer, and Iryna Gurevych. 2020. Multicqa: Zero-shot transfer of self- supervised text matching models on a massive scale.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Exploring influence of topic segmentation on information retrieval quality", "authors": [ { "first": "Gennady", "middle": [], "last": "Shtekh", "suffix": "" }, { "first": "Polina", "middle": [], "last": "Kazakova", "suffix": "" }, { "first": "Nikita", "middle": [], "last": "Nikitinsky", "suffix": "" }, { "first": "Nikolay", "middle": [], "last": "Skachkov", "suffix": "" } ], "year": 2018, "venue": "International Conference on Internet Science", "volume": "", "issue": "", "pages": "131--140", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gennady Shtekh, Polina Kazakova, Nikita Nikitinsky, and Nikolay Skachkov. 2018. Exploring influence of topic segmentation on information retrieval qual- ity. In International Conference on Internet Science, pages 131-140. 
Springer.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A statistical model for domain-independent text segmentation", "authors": [ { "first": "Masao", "middle": [], "last": "Utiyama", "suffix": "" }, { "first": "Hitoshi", "middle": [], "last": "Isahara", "suffix": "" } ], "year": 2001, "venue": "Proc. of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Masao Utiyama and Hitoshi Isahara. 2001. A statis- tical model for domain-independent text segmenta- tion. In Proc. of ACL.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Sentiment text classification of customers reviews on the web based on svm", "authors": [ { "first": "Huosong", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Min", "middle": [], "last": "Tao", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2010, "venue": "Sixth International Conference on Natural Computation", "volume": "7", "issue": "", "pages": "3633--3637", "other_ids": {}, "num": null, "urls": [], "raw_text": "Huosong Xia, Min Tao, and Yi Wang. 2010. Senti- ment text classification of customers reviews on the web based on svm. In 2010 Sixth International Con- ference on Natural Computation, volume 7, pages 3633-3637. IEEE.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "The educator's word frequency guide", "authors": [ { "first": "Susan", "middle": [], "last": "Zeno", "suffix": "" }, { "first": "H", "middle": [], "last": "Stephen", "suffix": "" }, { "first": "", "middle": [], "last": "Ivens", "suffix": "" }, { "first": "T", "middle": [], "last": "Robert", "suffix": "" }, { "first": "Raj", "middle": [], "last": "Millard", "suffix": "" }, { "first": "", "middle": [], "last": "Duvvuri", "suffix": "" } ], "year": 1995, "venue": "Touchstone Applied Science Associates", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Susan Zeno, Stephen H Ivens, Robert T Millard, and Raj Duvvuri. 1995. The educator's word frequency guide. Touchstone Applied Science Associates.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Architecture of the adapter-augmented hierarchical model for supervised text segmentation.", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "in the same manner as WIKI727. Chen et al. 
(2009) similarly created the CITIES (64 articles) and ELEMENTS (117) from Wikipedia pages of world cities and chemical elements, respectively.", "uris": null, "type_str": "figure" }, "TABREF0": { "num": null, "content": "", "html": null, "text": "An example 2-segment document from the K12SEG dataset.", "type_str": "table" }, "TABREF2": { "num": null, "content": "
: \"In-domain\" performance of hierarchical neu-
ral segmentation models, trained on the large WIKI727
dataset, on three Wikipedia-based test sets (smaller val-
ues of the error measure P K mean better performance).
", "html": null, "text": "", "type_str": "table" } } } }