Sentence Identification with BOS and EOS Label Combinations

Takuma Udagawa, Hiroshi Kanayama, Issei Yoshida
IBM Research - Tokyo, Japan
Takuma.Udagawa@ibm.com, {hkana,issei}@jp.ibm.com

Abstract

The sentence is a fundamental unit in many NLP applications. Sentence segmentation is widely used as the first preprocessing task, where an input text is split into consecutive sentences considering the end of the sentence (EOS) as their boundaries. This task formulation relies on a strong assumption that the input text consists only of sentences, or what we call the sentential units (SUs). However, real-world texts often contain non-sentential units (NSUs) such as metadata, sentence fragments, nonlinguistic markers, etc., which are unreasonable or undesirable to treat as part of an SU. To tackle this issue, we formulate a novel task of sentence identification, where the goal is to identify SUs while excluding NSUs in a given text. To conduct sentence identification, we propose a simple yet effective method which combines the beginning of the sentence (BOS) and EOS labels to determine the most probable SUs and NSUs based on dynamic programming. To evaluate this task, we design an automatic, language-independent procedure to convert the Universal Dependencies corpora into sentence identification benchmarks. Finally, our experiments on the sentence identification task demonstrate that our proposed method generally outperforms sentence segmentation baselines which only utilize EOS labels.
1 Introduction

The sentence, which we refer to as the sentential unit (SU), is a fundamental unit of processing in many NLP applications including syntactic parsing (Dozat and Manning, 2017), semantic parsing (Dozat and Manning, 2018), and machine translation (Liu et al., 2020). Existing works mostly rely on sentence segmentation (a.k.a. sentence boundary detection) as the first preprocessing task, where we predict the end of the sentence (EOS) to split a text into consecutive SUs (Kiss and Strunk, 2006; Gillick, 2009). This approach relies on a strong assumption that the text only consists of SUs; however, real-world texts like web contents often contain non-sentential units (NSUs) such as the metadata of attachments embedded in the email body, repetition of symbols for separating texts, irregular series of nouns, etc. (just to name a few). Such NSUs may cause detrimental or unexpected results in the downstream tasks if considered as parts of the SUs, and are better distinguished from SUs in the first preprocessing step.

To tackle this problem, we formulate a novel task of sentence identification, where the goal is to identify SUs while excluding NSUs in a given text (§3). This can be regarded as an SU span extraction task, where each SU span is represented by the beginning of the sentence (BOS) and the EOS labels.[1] We illustrate the difference between sentence segmentation and sentence identification in Table 1. In sentence segmentation, the text fragment of an embedded file ("- TEXT.htm << File: TEXT.htm >>") needs to be considered as a part of an SU. In contrast, sentence identification can regard it as an NSU and exclude it for downstream applications such as dependency parsing.

To conduct sentence identification, we propose a simple method which effectively combines the BOS and EOS probabilities to determine both SUs and NSUs (§4). To be specific, we first train the BOS and EOS labeling models based on either a sentence identification dataset (with SUs and NSUs) or a sentence segmentation dataset (only SUs). Then, we search for the most probable spans of SUs and NSUs using a simple dynamic programming framework. Theoretically, our method can be considered as a natural generalization of existing sentence segmentation algorithms.

Input Text (from EWT): Thank you. - TEXT.htm << File: TEXT.htm >> I was thinking of converting it to a hover vehicle. I might just sell the car and get you to drive me around all winter.
Sentence Segmentation: Thank you. [E] - TEXT.htm << File: TEXT.htm >> I was thinking of converting it to a hover vehicle. [E] I might just sell the car and get you to drive me around all winter. [E]
Sentence Identification: [B] Thank you. [E] - TEXT.htm << File: TEXT.htm >> [B] I was thinking of converting it to a hover vehicle. [E] [B] I might just sell the car and get you to drive me around all winter. [E]
Table 1: Illustration of sentence segmentation and sentence identification. In sentence segmentation, EOS labels (E) are used to segment the input text into consecutive SUs. In sentence identification, only the spans bracketed by the BOS (B) and EOS labels are extracted as SUs, while the rest can be excluded as NSUs.

[1] For simplicity, we assume that the input text can be segmented into consecutive, non-overlapping units of SUs and NSUs. This way, we can also represent and evaluate SU extraction as an equivalent BIO labeling task (§5-§7).
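As footnote 1 notes, SU extraction under this assumption can equivalently be cast as BIO labeling. The following minimal sketch (our own illustration; the span representation and tag names are assumptions, not code from the paper) converts extracted SU spans into per-word BIO tags, with NSU words falling out as O:

```python
def su_spans_to_bio(num_words, su_spans):
    """Convert SU spans, given as (start, end) word-index pairs with an
    exclusive end, into per-word BIO tags; words outside every SU (i.e.
    words belonging to NSUs) are tagged 'O'."""
    tags = ["O"] * num_words
    for start, end in su_spans:
        tags[start] = "B"
        for k in range(start + 1, end):
            tags[k] = "I"
    return tags

# First part of the identification row of Table 1:
# "Thank you." is an SU, the embedded-file fragment is left as an NSU.
words = "Thank you . - TEXT.htm << File : TEXT.htm >>".split()
print(su_spans_to_bio(len(words), [(0, 3)]))
# ['B', 'I', 'I', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
```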
To evaluate this task, we design an automatic procedure to convert the Universal Dependencies (UD) corpora (de Marneffe et al., 2021) into sentence identification benchmarks (§5). To be specific, (i) we use the original sentence boundaries in UD as the unit (SU and NSU) boundaries and (ii) classify each unit as an SU iff it contains at least one clausal predicate with a core/non-core argument. Importantly, our classification rule follows the definition of lexical sentence in linguistics (Nunberg, 1990), is easily customizable with language-independent rules, and makes reasonable classification within the scope of our experiments.

To conduct our experiments, we focus on the English Web Treebank (Silveira et al., 2014) as the primary benchmark for sentence identification and train the BOS/EOS labeling models by finetuning RoBERTa (Liu et al., 2019) (§6). We also propose techniques to develop these models using a standard sentence segmentation dataset, i.e. the Wall Street Journal corpus (Marcus et al., 1993), which only contains clean, edited SUs without any NSUs.

Based on our experimental results, we demonstrate that our proposed method generally outperforms sentence segmentation baselines which only utilize EOS labels (§7). These results highlight the importance of combining the BOS labels in addition to the EOS labels for accurate sentence identification under various conditions.
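As a rough illustration of the word-level BOS/EOS labelers mentioned above (our own sketch, not the paper's §6 setup: the checkpoint name, binary label scheme, and first-subword pooling are assumptions), such a labeler can be built by finetuning a RoBERTa token classifier, one model per label type:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# One binary token classifier per label type (one model for BOS, one for EOS).
tokenizer = AutoTokenizer.from_pretrained("roberta-base", add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained("roberta-base", num_labels=2)

def label_probabilities(words):
    """Return one probability per word of carrying the label (BOS or EOS,
    depending on which finetuned checkpoint is loaded)."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits[0]           # (num_subwords, 2)
    probs = logits.softmax(dim=-1)[:, 1]          # P(label = 1) per subword
    # Pool subword probabilities back to words: keep the first subword of each word.
    word_probs = {}
    for idx, wid in enumerate(enc.word_ids(0)):
        if wid is not None and wid not in word_probs:
            word_probs[wid] = probs[idx].item()
    return [word_probs[i] for i in range(len(words))]

print(label_probabilities("Thank you . I was thinking .".split()))
```

In practice one such model would be finetuned on gold BOS labels and another on gold EOS labels, and their word-level probabilities feed the search formulated in §3 and §4.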
2 Background

Sentence segmentation, a.k.a. sentence boundary detection, is the task of segmenting an input text into the unit of sentences. Despite the long history of study (Riley, 1989) and its importance in the entire NLP pipeline (Walker et al., 2001), this area has received relatively little attention. For one reason, the task has been recognized as "long solved" (Read et al., 2012), with the most recent approach reporting a 99.8% F1 score on the standard English Wall Street Journal (WSJ) dataset (Wicks and Post, 2021). Their state-of-the-art method ERSATZ combines (i) a regular-expression based detector of candidate sentence boundaries, followed by (ii) a Transformer-based (Vaswani et al., 2017) binary classifier which predicts whether the candidate boundary is EOS based on the local context, i.e. the surrounding few words. This modern context-based approach has been shown to outperform competitive, widely used baselines such as SPLITTA (Gillick, 2009), PUNKT (Kiss and Strunk, 2006), and MOSES (Koehn et al., 2007).
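As a schematic illustration of this two-stage recipe (our own sketch, not the actual ERSATZ implementation; the regular expression, window size, and the trivial stand-in classifier are all assumptions), candidate boundaries are proposed by a punctuation-based pattern, and a context classifier then accepts or rejects each candidate based on the surrounding few words:

```python
import re

# Candidate EOS positions: a sentence-ending punctuation mark followed by whitespace.
CANDIDATE = re.compile(r"[.!?]\s")

def segment(text, is_eos, window=5):
    """Split `text` at candidate boundaries accepted by `is_eos`, which sees
    only the few words on either side of the candidate (the local context)."""
    sentences, start = [], 0
    for m in CANDIDATE.finditer(text):
        left = text[start:m.end()].split()[-window:]
        right = text[m.end():].split()[:window]
        if is_eos(left, right):
            sentences.append(text[start:m.end()].strip())
            start = m.end()
    if text[start:].strip():
        sentences.append(text[start:].strip())
    return sentences

# A trivial stand-in classifier that accepts every candidate; a trained
# context model would instead reject the spurious boundary after "Dr.".
print(segment("Dr. Smith arrived. He sat down.", lambda left, right: True))
# ['Dr.', 'Smith arrived.', 'He sat down.']
```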
However, two important aspects are not fully addressed in the current literature. The first is the coverage of diverse domains, genres, and writing styles. Existing works (including Wicks and Post, 2021) focus on formal/edited text and assume the existence of sentence-ending punctuation (e.g. full stops) at the sentence boundaries. However, social media texts often lack such punctuation and contain various types of non-linguistic noise, which can lead to a substantial degradation in segmentation performance (Read et al., 2012; Rudrapal et al., 2015). Speech transcription texts also usually contain disfluent, ungrammatical, or fragmented structures and lack both punctuation and casing (Wang et al., 2019; Rehbein et al., 2020). Considering the amount of such informal or non-standard text in the real world, it is compelling to expand the capability of sentence segmentation beyond formal, standardized text.

The second aspect is the coverage of multiple languages. Different languages involve different complexities in sentence segmentation: e.g. Chinese requires the disambiguation of commas as sentence-ending punctuation (Xue and Yang, 2011), and Thai does not mark EOS with any type of punctuation (Aroonmanakun et al., 2007; Zhou et al., 2016). To advance NLP from a multilingual perspective, it is crucial to develop and evaluate models in multiple languages: Wicks and Post (2021) make an important step in this direction, proposing a language-agnostic, unified sentence segmentation model covering a total of 87 languages.
Based on these observations, we first propose to extend the task of sentence segmentation to sentence identification, which expands the capability of sentence segmentation beyond formal, standardized text (§3, §4). Secondly, we propose a cross-lingual method of benchmarking sentence identification based on the UD corpora, considering every word or character as a candidate boundary to cover diverse domains, genres, and languages that lack sentence-ending punctuation (§5). Finally, we follow Wicks and Post (2021) to develop modern neural-based models that require no language-specific engineering and can be developed for different languages in a unified manner (§6).

3 Task Formulation

3.1 Sentence Segmentation Task

First, we introduce a precise (re-)formulation of the sentence segmentation task. Let W = (w_0, w_1, ..., w_{N-1}) represent the input text, where each w_i denotes a word (but can also be a subword or character). We also define the text span W[i:j] = (w_i, ..., w_{j-1}), the concatenation of spans W[i:j] ⊕ W[j:k] = W[i:k], and the SU boundary indices B = (b_0, b_1, ..., b_M), where b_0 = 0, b_M = N, and W[b_0:b_1] ⊕ W[b_1:b_2] ⊕ ... ⊕ W[b_{M-1}:b_M] = W (i.e. the concatenation of all SUs recovers the input text). Next, we introduce the SU probability p_SU(W[i:j]), which corresponds to the probability of the text span W[i:j] being an SU.
Based on this probability, the task of sentence segmentation can be formalized as searching for the boundaries B which maximize the following probability:[2]

$$\arg\max_{B} \prod_{i=1}^{M} p_{\mathrm{SU}}(W[b_{i-1}:b_i]) \tag{1}$$

The most standard approach is to define p_SU(W[i:j]) based on a pretrained EOS labeling model, as we describe in §4.1. However, our (re-)formulation as Eq. (1) is more general and permits other definitions of the SU probability as well.

[2] M is a variable and need not be fixed during the search.

3.2 Sentence Identification Task

In sentence identification, we consider the input text W can be segmented into consecutive, non-overlapping units of SUs and NSUs. Hence, we regard B = (b_0, b_1, ..., b_M) as the unit (SU and NSU) boundaries and define the unit indicators A = (a_1, a_2, ..., a_M) for each unit as follows:

$$a_i = \begin{cases} 1 & \text{if } W[b_{i-1}:b_i] \text{ is an SU} \\ 0 & \text{if } W[b_{i-1}:b_i] \text{ is an NSU} \end{cases}$$

Next, we introduce the NSU probability p_NSU(W[i:j]), which corresponds to the probability of the text span W[i:j] being an NSU. Based on p_SU and p_NSU, we can formalize the task of sentence identification as searching for the unit boundaries B and unit indicators A which maximize the following probability:

$$\arg\max_{B,A} \prod_{i=1}^{M} p_{\mathrm{SU}}(W[b_{i-1}:b_i])^{a_i} \, p_{\mathrm{NSU}}(W[b_{i-1}:b_i])^{1-a_i} \tag{2}$$

Note that this strictly generalizes the sentence segmentation task in Eq. (1), which is a special case where a_i = 1, ∀ a_i ∈ A. Based on this task formulation, we discuss how we can define p_SU(W[i:j]) and p_NSU(W[i:j]) to derive our sentence identification algorithm in §4.2.
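Before turning to the concrete probability definitions, note that the maximization in Eq. (2) can be solved exactly by a simple dynamic program over prefix scores. The following is a minimal sketch of such a search (our own illustration, working in log space; the scoring functions log_p_su and log_p_nsu and the max_len span cap are assumed inputs, not part of the paper's formulation):

```python
import math

def identify(words, log_p_su, log_p_nsu, max_len=200):
    """Exact search for Eq. (2): best[j] is the maximum log-probability of
    segmenting the prefix words[:j] into SU/NSU units; back[j] remembers the
    start of the last unit and whether that unit is an SU."""
    n = len(words)
    best = [0.0] + [-math.inf] * n
    back = [None] * (n + 1)
    for j in range(1, n + 1):
        for i in range(max(0, j - max_len), j):
            for is_su, score in ((True, log_p_su(i, j)), (False, log_p_nsu(i, j))):
                if best[i] + score > best[j]:
                    best[j], back[j] = best[i] + score, (i, is_su)
    units, j = [], n
    while j > 0:                      # recover the unit boundaries B and indicators A
        i, is_su = back[j]
        units.append((i, j, is_su))
        j = i
    return list(reversed(units))      # [(start, end, is_SU), ...]
```

Supplying a log_p_nsu that always returns -inf forces every unit to be an SU, which recovers the plain sentence segmentation search of Eq. (1).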
4 Methods

4.1 Sentence Segmentation Method

In the most standard approach, sentence segmentation employs an EOS labeling model p_EOS to define the SU probability p_SU in Eq. (1). To be specific, let p_EOS(w_i | W; θ) denote the EOS labeling model, which computes the probability of w_i being EOS in W (θ denotes the model parameters). Typically, it is straightforward to train this model in a supervised learning setup using a dataset annotated with gold EOS boundaries (Wicks and Post, 2021). For brevity, we use the notation p_EOS(w_i) as a shorthand for p_EOS(w_i | W; θ), i.e. we omit W and θ (unless required) in the rest of this paper. Based on the pretrained model p_EOS, we can define the SU probability as

$$p_{\mathrm{SU}}(W[i:j]) = p_{\mathrm{EOS}}(w_{j-1}) \prod_{i \le k < j-1} \bigl(1 - p_{\mathrm{EOS}}(w_k)\bigr)$$
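As a concrete reading of this definition (our own sketch; p_eos here stands for precomputed per-word EOS probabilities from a pretrained EOS labeler): the span's last word must be EOS and no earlier word inside the span may be. Computed in log space, this plugs directly into the dynamic program sketched at the end of §3.2:

```python
import math

def make_log_p_su(p_eos, eps=1e-12):
    """log p_SU(W[i:j]) = log p_EOS(w_{j-1}) + sum_{i<=k<j-1} log(1 - p_EOS(w_k)):
    the last word of the span is EOS and no earlier word in the span is."""
    def log_p_su(i, j):
        score = math.log(p_eos[j - 1] + eps)
        for k in range(i, j - 1):
            score += math.log(1.0 - p_eos[k] + eps)
        return score
    return log_p_su

# Example: four words whose EOS probabilities peak on the second and fourth word.
p_eos = [0.01, 0.95, 0.02, 0.90]
log_p_su = make_log_p_su(p_eos)
print(log_p_su(0, 2), log_p_su(0, 4))  # the two-word span scores much higher than the four-word span
```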