{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:35:46.006594Z"
},
"title": "Related Named Entities Classification in the Economic-Financial Context",
"authors": [
{
"first": "Daniel",
"middle": [
"De",
"Los"
],
"last": "Reyes",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pontifical Catholic University of Rio Grande do Sul",
"location": {
"addrLine": "Porto Alegre",
"country": "Brazil"
}
},
"email": "daniel.reyes@edu.pucrs.br"
},
{
"first": "Allan",
"middle": [],
"last": "Barcelos\u00b9",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "\u00b9Pontifical Catholic University of Rio Grande do Sul",
"location": {
"addrLine": "University of\u00c9vora",
"country": "Portugal"
}
},
"email": ""
},
{
"first": "Renata",
"middle": [],
"last": "Vieira\u00b2",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "\u00b9Pontifical Catholic University of Rio Grande do Sul",
"location": {
"addrLine": "University of\u00c9vora",
"country": "Portugal"
}
},
"email": "renatav@uevora.pt"
},
{
"first": "Isabel",
"middle": [
"H"
],
"last": "Manssour\u00b9",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "\u00b9Pontifical Catholic University of Rio Grande do Sul",
"location": {
"addrLine": "University of\u00c9vora",
"country": "Portugal"
}
},
"email": "isabel.manssour@pucrs.br"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The present work uses the Bidirectional Encoder Representations from Transformers (BERT) to process a sentence and its entities and indicate whether two named entities present in a sentence are related or not, constituting a binary classification problem. It was developed for the Portuguese language, considering the financial domain and exploring deep linguistic representations to identify a relation between entities without using other lexical-semantic resources. The results of the experiments show an accuracy of 86% of the predictions.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The present work uses the Bidirectional Encoder Representations from Transformers (BERT) to process a sentence and its entities and indicate whether two named entities present in a sentence are related or not, constituting a binary classification problem. It was developed for the Portuguese language, considering the financial domain and exploring deep linguistic representations to identify a relation between entities without using other lexical-semantic resources. The results of the experiments show an accuracy of 86% of the predictions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In the context of the financial market, the news bring information regarding sectors economy, industrial policies, acquisitions and partnerships of companies, among others. The analysis of this data, in the form of financial reports, headlines and corporate announcements, can support personal and corporate economic decision making (Zhou and Zhang, 2018) . However, thousands of news items are published every day and this number continues to increase, which makes the task of using and interpreting this huge amount of data impossible through manual means.",
"cite_spans": [
{
"start": 333,
"end": 355,
"text": "(Zhou and Zhang, 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Information Extraction (IE) can contribute with tools that allow the monitoring of these news items in a faster way and with less effort, through automation of the extraction and structuring of information. IE is the technology based on natural language, that receives text as input and generates results in a predefined format (Cvita\u0161, 2011) . Among the tasks of the IE area, it is possible to highlight both Named Entity Recognition (NER) and Relation Extraction (RE). For example, it is possible to extract that a given organization (first entity) was purchased (relation) by another organization (second entity) (Sarawagi, 2008) .",
"cite_spans": [
{
"start": 328,
"end": 342,
"text": "(Cvita\u0161, 2011)",
"ref_id": "BIBREF4"
},
{
"start": 616,
"end": 632,
"text": "(Sarawagi, 2008)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A model based on the BERT language model (Devlin et al., 2018) is proposed to classify whether a sentence containing a tuple entity 1 and entity 2 (e1,e2), expresses a relation among them. Leveraging the power of BERT networks, the semantics of the sentence can be obtained without using enhanced feature selection or other external resources.",
"cite_spans": [
{
"start": 41,
"end": 62,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The contribution of this work is in building an approach for extracting entity relations for the Portuguese language on the financial context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this work is organized as follows. Section 2 presents news processing for the Competitive Intelligence (CI) area. Section 3 presents the related work. Section 4 provides a detailed description of the proposed solution. Section 5 explains the experimental process in detail, followed by section 6, which shows the relevant experimental results. Finally, section 7 presents our conclusions, as well as future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Some of the largest companies in the financial segment have a Competitive Intelligence (CI) sector where information from different sources is strategically analyzed, allowing to anticipate market trends, enabling the evolution of the business compared to its competitors. This sector is usually formed by one or more professionals dedicated specifically to monitor the movements of the competition. In a time of competitiveness that is based on knowledge and innovation, CI allows companies to exercise pro-activity. The conclusions obtained through this process allow the company to know if it really remains competitive and if there is sustainability for its business model. CI can provide some advantages to companies that use it, such as: minimizing surprises from competitors, identify-ing opportunities and threats, obtaining relevant knowledge to formulate strategic planning, understanding the repercussions of their actions in the market, among others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Competitive Intelligence and News Processing",
"sec_num": "2"
},
{
"text": "The process of capturing information through news still requires a lot of manual effort, as it often depends on a professional responsible for carefully reading numerous news about organizations to highlight possible market movements that also retain this knowledge. It is then estimated that a system, that automatically filters the relations between financial market entities, can reduce the effort and the time spent on these tasks. Another benefit is that this same system can feed the Business Intelligence (BI) systems and, thus, establish a historical database with market events. Thus, knowledge about market movements can be stored and organized more efficiently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Competitive Intelligence and News Processing",
"sec_num": "2"
},
{
"text": "ER is a task that has been the subject of many studies, especially now when information and communication technologies allow the storage of and processing of massive data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Zhang (Zhang et al., 2017) proposes to incorporate the position of words and entities into an approach employing combinations of N-grams for extracting relations. Presenting a different methodology to extract the relations, Wu (Wu and He, 2019) proposed to use a pre-trained BERT language model and the entity types for RE on the English language. In order to circumvent the problem of lack of memory for very large sequences in convolutional networks, some authors (Li et al., 2018; Florez et al., 2019; Pandey et al., 2017) have adopted an approach using memory cells for neural networks, Long short-term memory (LSTM). In this sense, Qingqing's Li work (Li et al., 2018) uses a Bidirectional Long Short-Term Memory (Bi-LSTM) network, which are an extension of traditional LSTMs, for its multitasking model, and features a version with attention that considerably improves the results in all tested datasets. Also using Bi-LSTM networks, Florez (Florez et al., 2019) differs from other authors in that it uses types of entities and the words of the entities being considered for a relation in addition to using information such as number of entities and distances, measured by the number of words and phrases between the pair of entities. The entry of the Bi-LSTM layer is concatenation of words and relations, with all words between the candidate entities (included), provided by a pre-trained interpolation layer. Yi (Yi and Hu, 2019) proposes to join a BERT language model and a Bidirectional Gated Recurrent Unit (Bi-GRU) network, which is a version of Bi-LSTM with a lower computational cost. Finally, they train their model based on a pre-trained BERT network, instead of training from the beginning, to speed up coverage. Some works (Qin et al., 2017; GAN et al., 2019; Zhou and Zhang, 2018) use attention mechanisms to improve the performance of their neural network models. Such mechanisms assist in the automatic information filtering step that helps to find the most appropriate sentence section to distinguish named entities. Thus, it is possible that even in a very long sentence, and due to its size being considered complex, the model can capture the context information of each token in the sentence, being able to concentrate more in these terms the weights of influence. Pengda Qin (Qin et al., 2017) proposes a method using Bi-GRU with an attention mechanism that can automatically focus on valuable words, also using the pairs of entities and adding information related to them.",
"cite_spans": [
{
"start": 6,
"end": 26,
"text": "(Zhang et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 227,
"end": 244,
"text": "(Wu and He, 2019)",
"ref_id": "BIBREF18"
},
{
"start": 466,
"end": 483,
"text": "(Li et al., 2018;",
"ref_id": "BIBREF9"
},
{
"start": 484,
"end": 504,
"text": "Florez et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 505,
"end": 525,
"text": "Pandey et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 656,
"end": 673,
"text": "(Li et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 947,
"end": 968,
"text": "(Florez et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 1421,
"end": 1438,
"text": "(Yi and Hu, 2019)",
"ref_id": "BIBREF19"
},
{
"start": 1742,
"end": 1760,
"text": "(Qin et al., 2017;",
"ref_id": "BIBREF20"
},
{
"start": 1761,
"end": 1778,
"text": "GAN et al., 2019;",
"ref_id": "BIBREF8"
},
{
"start": 1779,
"end": 1800,
"text": "Zhou and Zhang, 2018)",
"ref_id": "BIBREF21"
},
{
"start": 2302,
"end": 2320,
"text": "(Qin et al., 2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Tao Gan (GAN et al., 2019) also addresses RE with an attention method to capture important parts of the sentence and for that, it uses an LSTM attention network for entities at the subsequent level. In this way, he focuses more on important contextual information between two entities. Zhou (Zhou and Zhang, 2018 ) also implement a model based on RNN Bi-GRU with an attention mechanism to focus on the most important assumptions of the sentences for the financial market.",
"cite_spans": [
{
"start": 4,
"end": 26,
"text": "Gan (GAN et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 291,
"end": 312,
"text": "(Zhou and Zhang, 2018",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Despite having great importance, the financial domain, specifically, has been little explored in the literature. The authors at (Zhou and Zhang, 2018) created a corpus collecting 3000 sentence records manually from the main news sites, which was used to recognize the entity and extract relations such as learning and training as a whole.",
"cite_spans": [
{
"start": 128,
"end": 150,
"text": "(Zhou and Zhang, 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "Most studies present RE solutions for English texts, and, in this way, it is also possible to identify a larger number of data sets in this language. There are few data sets available in the Portuguese language, such as the Golden Collection HAREM, which is widely used in the literature (Chaves, 2008; Cardoso, 2008; Collovini et al., 2016) . HAREM is a joint assessment event for the Portuguese language, organized by Linguateca (Santos and Cardoso, 2007) . Its objective is to evaluate recognizing systems of NE (Santos and Cabral, 2009) . The Golden Collection (GC) is a subset of the HAREM collection, being used for the task of evaluating the systems that deal with Recognition of Named Entities.",
"cite_spans": [
{
"start": 288,
"end": 302,
"text": "(Chaves, 2008;",
"ref_id": "BIBREF1"
},
{
"start": 303,
"end": 317,
"text": "Cardoso, 2008;",
"ref_id": "BIBREF0"
},
{
"start": 318,
"end": 341,
"text": "Collovini et al., 2016)",
"ref_id": "BIBREF3"
},
{
"start": 431,
"end": 457,
"text": "(Santos and Cardoso, 2007)",
"ref_id": "BIBREF14"
},
{
"start": 515,
"end": 540,
"text": "(Santos and Cabral, 2009)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "The lack of this type of resource forces researchers to develop their own research corpus. In most cases, it is necessary to first create a set with the sentences and write them down when the classification is supervised to proceed with the RE task. Besides, the lack of public data sets also makes it difficult to fairly compare related work, as well as requires more time and effort from the researcher.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "It is possible to observe that there are works that discuss the task of extracting relations between NE and that already employ machine learning techniques for this purpose. However, although we found some works for the RE task, few of them are suitable for the Portuguese language, and none of them are related to the financial context. Considering other languages, The work of Zhou (Zhou and Zhang, 2018) was the only one that came closest to our goals. However, there is a gap in the literature for works that address such tasks using deep learning techniques and Portuguese as the main language, especially in the financial-economic context as addressed in this work.",
"cite_spans": [
{
"start": 384,
"end": 406,
"text": "(Zhou and Zhang, 2018)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "In this section, we present our BERT-based model in detail. As shown in Figure 1 , it contains three parts: (1) Input layer; (2) BERT layer; and (3) Output layer, which is composed of a Sigmoid activation function and two neurons that represent the classes to be predicted.",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 80,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Architecture",
"sec_num": "4"
},
{
"text": "The input layer consists of a BERT encoder used for input sentence tokenization and produces a tuple of arrays (token, mask, sequence ids), which were used as input to the second layer that is the Portuguese BERT language model (Souza et al., 2020) 1 from Huggingface python package 2 (Wolf et al., 2020) . Figure 2 illustrates the input layer of the proposed model. The entry consists of (1) the original sentence with the mentioned entities and (2) the entities to be verified concatenated. A special token [cls] and a token [sep] are added at the beginning and end of the input string respectively, as mentioned in the original BERT implementation (Devlin et al., 2018) .",
"cite_spans": [
{
"start": 285,
"end": 304,
"text": "(Wolf et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 527,
"end": 532,
"text": "[sep]",
"ref_id": null
},
{
"start": 651,
"end": 672,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 307,
"end": 315,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Architecture",
"sec_num": "4"
},
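{
"text": "The following is a minimal sketch (not part of the original paper) of how this input layer could be reproduced with the Huggingface tokenizer, assuming the BERTimbau checkpoint neuralmind/bert-base-portuguese-cased (Souza et al., 2020); the example sentence and the exact concatenation format of the entity pair are illustrative assumptions:\n\nfrom transformers import AutoTokenizer\n\n# BERTimbau, the Portuguese BERT model used in this work\ntokenizer = AutoTokenizer.from_pretrained(\"neuralmind/bert-base-portuguese-cased\")\n\nsentence = \"A Havanna fecha parceria com o Santander.\"  # hypothetical input sentence\nentities = \"Havanna Santander\"  # candidate entity pair, concatenated (assumed format)\n\n# Passing two strings makes the tokenizer add [CLS] and [SEP] automatically and\n# return the (token ids, attention mask, sequence ids) arrays fed to the BERT layer.\nencoded = tokenizer(sentence, entities, padding=\"max_length\", truncation=True, max_length=128, return_tensors=\"pt\")\nprint(encoded[\"input_ids\"].shape, encoded[\"attention_mask\"].shape, encoded[\"token_type_ids\"].shape)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "4"
},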
{
"text": "The third layer of the model architecture is identified as the output layer. This layer is fully connected with a tangent activation function. The output of this layer is propagated to a new fully connected layer, with a Sigmoid activation function, whose characteristic is the mapping of input values to 0 or 1. In this model, these values represent non-relation and relation, respectively. As shown in Figure 1 , this layer still has two output neurons, which indicate the respective classes to be predicted by the model. In the end, we added a dropout layer with a 0.1 rate to avoid model overfitting, which happens when the model memorizes the training data and thereby loses the power of generalization.",
"cite_spans": [],
"ref_spans": [
{
"start": 404,
"end": 412,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Architecture",
"sec_num": "4"
},
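{
"text": "As a concrete illustration, the architecture just described could be sketched in PyTorch as below; this is an assumption-laden reconstruction rather than the authors' code, and the intermediate layer size simply reuses BERT's hidden size:\n\nimport torch\nfrom torch import nn\nfrom transformers import BertModel\n\nclass RelationClassifier(nn.Module):\n    def __init__(self, model_name=\"neuralmind/bert-base-portuguese-cased\"):\n        super().__init__()\n        self.bert = BertModel.from_pretrained(model_name)  # (2) BERT layer\n        hidden = self.bert.config.hidden_size  # 768 for the base model\n        self.dense = nn.Linear(hidden, hidden)  # fully connected layer with tanh\n        self.dropout = nn.Dropout(0.1)  # dropout rate reported in the paper\n        self.out = nn.Linear(hidden, 2)  # two neurons: non-relation / relation\n\n    def forward(self, input_ids, attention_mask, token_type_ids=None):\n        pooled = self.bert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids).pooler_output\n        x = torch.tanh(self.dense(pooled))\n        x = self.dropout(x)\n        return torch.sigmoid(self.out(x))  # (3) output layer, per-class scores in (0, 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "4"
},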
{
"text": "The purpose of this section is to verify the proposed model performance thought experiments on the financial domain corpus. The proposed study follows the classic methodology of Knowledge Discovery in Databases (KDD) (Fayyad et al., 1996) , which contains 5 phases that range from data collection to the evaluation of the results.",
"cite_spans": [
{
"start": 217,
"end": 238,
"text": "(Fayyad et al., 1996)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "The following subsections aim to indicate how each step of the methodology was applied in the context of our work. Subsection 5.1 refers to the Selection step and seeks to indicate what data will be used during the experiments for the RE task. Subsection 5.2 addresses the Pre-processing step, indicating procedures for quality checking, cleaning, correction, or removal of inconsistent or missing data. Subsection 5.3 reports the Transformation phase, where the transformation processes applied to the data set in the context of our work are explored. Subsection 5.4 brings the penultimate phase, of Mining, where the data mining process is presented. Finally, the last phase of the methodology is presented in the subsection 5.5, which consists of evaluating the performance of the model applied on top of the data that were not used in the training or mining phase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "As indicated in section 3, there was no evidence of open data sets in the context of extracting relations in the financial field for the Portuguese language. Therefore, for this work, a corpus was created with 3,288 tuples annotated manually. These tuples originate from more than 4,000 paragraphs of financial : Examples of data transformations in the input layer of the model. The entities to be evaluated appear in bold, and the text that represents the semantic relation between them is underlined. market news, provided by a partner company that collected them in various communication vehicles such as financial market websites, newspapers, and corporate balance sheets. Sentences that include co-referral are also removed because co-reference treatment would require additional processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection",
"sec_num": "5.1"
},
{
"text": "The next step concerns the data pre-processing and cleaning. This step occurs through the manual process of spelling correction of each sentence. Acronyms are also extended, as well as the standardization of different ways of indicating the same named entity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "5.2"
},
{
"text": "The standardization can be done manually, but in a real work scenario, this task becomes massive and can be automated by creating a base of named entities and their acronyms. Thus, it is possible to elaborate a process that validates the acronyms contained in the sentence and replace them with their extensions or even with an approach that focuses on only a few specific entities informed by the CI analyst himself.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "5.2"
},
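{
"text": "A minimal sketch of such an automated expansion step (the acronym base and the example sentence are hypothetical; a real base would be curated with the CI analyst):\n\n# Hypothetical base mapping acronyms to standardized entity names\nACRONYMS = {\n    \"BB\": \"Banco do Brasil\",\n    \"CEF\": \"Caixa Economica Federal\",\n}\n\ndef expand_acronyms(sentence: str) -> str:\n    # Token-level replacement; punctuation handling is omitted for brevity\n    return \" \".join(ACRONYMS.get(token, token) for token in sentence.split())\n\nprint(expand_acronyms(\"O BB divulgou o balanco trimestral\"))\n# -> O Banco do Brasil divulgou o balanco trimestral",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "5.2"
},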
{
"text": "The data cleaning process is also done manually, where special characters and acronyms that follow the description itself are removed. Sentences containing less than 4 tokens will also be removed, as they can be considered irrelevant to the context of the approach. At the end of this cleaning step, just over 2500 sentences are filtered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "5.2"
},
{
"text": "In this same phase, the identification of named entities will also occur, through a single NER tool, called SpaCy 3 , ensuring that the same criterion was used for all sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "5.2"
},
{
"text": "The named entities in question are those related to the categories person, location, and organization. The focal point is information about the organizations, as well as its relations with other organizations, persons, and locations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "5.2"
},
{
"text": "After identifying all named entities, sentences that have less than 2 entities are discarded. At the end of this new disposal, the corpus consists of 1292 unique sentences that move on to the next stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "5.2"
},
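{
"text": "The two filtering criteria above (at least 4 tokens and at least 2 named entities of the chosen categories) could be implemented as in the following sketch, assuming spaCy's Portuguese model pt_core_news_sm; the sample sentences are hypothetical:\n\nimport spacy\n\n# Requires: python -m spacy download pt_core_news_sm\nnlp = spacy.load(\"pt_core_news_sm\")\nKEPT_LABELS = {\"PER\", \"LOC\", \"ORG\"}  # person, location, organization\n\ndef keep_sentence(sentence: str) -> bool:\n    doc = nlp(sentence)\n    if len(doc) < 4:  # fewer than 4 tokens: considered irrelevant\n        return False\n    entities = [ent for ent in doc.ents if ent.label_ in KEPT_LABELS]\n    return len(entities) >= 2  # at least one candidate entity pair is required\n\nsentences = [\"Bom dia.\", \"A Havanna fecha parceria com o Santander.\"]\ncorpus = [s for s in sentences if keep_sentence(s)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-processing",
"sec_num": "5.2"
},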
{
"text": "With the identification of the Named Entities in the previous phase, a combination of all the entities present in the sentence is made and a triple (sentence, entity, entity) is formed for each combination, which can generate several records for the same sentence. After this creation of records with the combination of entities, manual annotation of records that have a semantic relation between the highlighted named entities is made manually.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation",
"sec_num": "5.3"
},
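{
"text": "A sketch of this record-creation step, assuming the entities were already extracted; every pair of entities in a sentence yields one (sentence, entity, entity) triple to be annotated:\n\nfrom itertools import combinations\n\ndef make_triples(sentence: str, entities: list) -> list:\n    # One record per unordered entity pair found in the sentence\n    return [(sentence, e1, e2) for e1, e2 in combinations(entities, 2)]\n\n# Hypothetical example: three entities produce three candidate records\nfor triple in make_triples(\"A Caixa e controladora do Pan, ao lado do BTG.\", [\"Caixa\", \"Pan\", \"BTG\"]):\n    print(triple)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation",
"sec_num": "5.3"
},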
{
"text": "After the end of the manual annotation of the relations between the entities, the corpus consists of 3288 records. Of this total, 1485 (45%) are positive tuples, that is, it contains a relation between the highlighted entities, and 1803 (55%) are negative tuples, where there is no relation between the entities. Finally, the two named entities are concatenated at the end of the sentence. The data set is available at https://github.com/DanielReeyes/ relation-extraction-deep-learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation",
"sec_num": "5.3"
},
{
"text": "The relation annotating process did not consider the past defined classes or relations. A positive tuple is considered when there is any semantic relation between two named entities of the categories defined in 5.1. Here are some examples of positive annotated tuples that contain relation between 3 https://spacy.io/ named entities of type organization:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation",
"sec_num": "5.3"
},
{
"text": "\u2022 A Abra\u00e7o\u00e9 uma Institui\u00e7\u00e3o Particular de Solidariedade Social.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation",
"sec_num": "5.3"
},
{
"text": "\u2022 A Caixa\u00e9 controladora do Pan , ao lado do BTG , com 32,8% do neg\u00f3cio.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation",
"sec_num": "5.3"
},
{
"text": "\u2022 A Havanna fecha parceria com o Santander para inaugurar um novo modelo de neg\u00f3cios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation",
"sec_num": "5.3"
},
{
"text": "\u2022 A partir de agora , a NET est\u00e1 na Claro.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation",
"sec_num": "5.3"
},
{
"text": "As sentences are naturally composed of words and characters, then the transformation step in the present study also consists of transforming the tokens into numerical representations by the BERT encoder. As stated in past sections, the special tokens [CLS] and [SEP] are also added and encoded properly on each sentence, finalizing the composition of the input layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformation",
"sec_num": "5.3"
},
{
"text": "The predictive task is characterized by the search for a behavioral pattern that can predict the behavior of a future entity (Fayyad et al., 1996) . The corpus data are randomly divided into two parts, 80% of which are used for training the model and 20% for testing. The part for the test is still divided equally into 2, where they are used as validation and test sets to test the generalization of the model. The first set is used so that the algorithm can search for this particular pattern in the data concerning the relation label. Thus, after the training stage where the model can recognize this pattern, it is possible to apply it to the validation data and later on the test set, simulating a real environment. In this step, the original balance level is also maintained in all sets created, being able to rule out that the model contains any bias to learn a certain type of complexity.",
"cite_spans": [
{
"start": 125,
"end": 146,
"text": "(Fayyad et al., 1996)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mining",
"sec_num": "5.4"
},
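{
"text": "A sketch of this 80/10/10 stratified split using scikit-learn (a plausible implementation, not necessarily the authors'); stratifying on the label preserves the original positive/negative balance in every set:\n\nfrom sklearn.model_selection import train_test_split\n\n# Placeholder corpus: records are the (sentence, entity, entity) inputs, labels the relation flags\nrecords = [f\"record {i}\" for i in range(20)]\nlabels = [i % 2 for i in range(20)]\n\nX_train, X_rest, y_train, y_rest = train_test_split(records, labels, test_size=0.20, stratify=labels, random_state=42)\nX_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=42)\nprint(len(X_train), len(X_val), len(X_test))  # 16 2 2 on this toy corpus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining",
"sec_num": "5.4"
},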
{
"text": "The adjustment of hyper-parameters of the BERT used was due to the combination of all values indicated by Jacob Devlin in (Devlin et al., 2018) , in addition to the standard values for the Simple Transformers library model. In this work, Jacob used most of the hyper-parameters with default values except for the lot size, learning rate, and the number of training epochs. The dropout rate was always maintained at 0.1. Thus, the values tested for this task were:",
"cite_spans": [
{
"start": 122,
"end": 143,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mining",
"sec_num": "5.4"
},
{
"text": "\u2022 Batch Size: 16, 32;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining",
"sec_num": "5.4"
},
{
"text": "Hyper-parameter Value Batch Size 32 Learning Rate 5e-5 Epochs 4 \u2022 Learning Rate (AdamW): 5e-5, 3e-5, 2e-5;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining",
"sec_num": "5.4"
},
{
"text": "\u2022 Epochs: 2, 3, 4, 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining",
"sec_num": "5.4"
},
{
"text": "In the end, we did a total of 24 experiments with all the possible combinations of the above described parameters. After analyzing the results, the model with the values was selected according to Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 203,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Mining",
"sec_num": "5.4"
},
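{
"text": "The 24 runs correspond to the full grid over these values (2 batch sizes x 3 learning rates x 4 epoch counts). A sketch of the loop, assuming the Simple Transformers ClassificationModel wrapper mentioned in the footnotes (the training and evaluation calls are indicated as comments):\n\nfrom itertools import product\n\nbatch_sizes = [16, 32]\nlearning_rates = [5e-5, 3e-5, 2e-5]\nepoch_counts = [2, 3, 4, 5]\n\nfor bs, lr, ep in product(batch_sizes, learning_rates, epoch_counts):\n    args = {\"train_batch_size\": bs, \"learning_rate\": lr, \"num_train_epochs\": ep}\n    # model = ClassificationModel(\"bert\", \"neuralmind/bert-base-portuguese-cased\", args=args)\n    # model.train_model(train_df); result, _, _ = model.eval_model(val_df)\n    print(f\"run: batch={bs} lr={lr} epochs={ep}\")  # 2 x 3 x 4 = 24 runs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mining",
"sec_num": "5.4"
},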
{
"text": "To evaluate the model, metrics such as Accuracy, Recall, Precision, and F1-Measure were provided. According to Table 2 , each set maintained the original imbalance of the data set according to the target variable, in this case, indicating whether or not there is a relation between the entities assessed. In this way, the model is evaluated for the ability to indicate whether a given pair of entities contained in a sentence has a relation or not, configuring a binary classification problem, whose positive class refers to entities that have a semantic relation.",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 118,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.5"
},
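{
"text": "These metrics can be computed with scikit-learn, as in the sketch below; y_true and y_pred are hypothetical stand-ins for the test labels and the model predictions:\n\nfrom sklearn.metrics import accuracy_score, classification_report\n\ny_true = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical gold labels (1 = relation)\ny_pred = [1, 0, 1, 0, 0, 0, 1, 1]  # hypothetical model predictions\n\nprint(\"Accuracy:\", accuracy_score(y_true, y_pred))\n# Precision, Recall and F1-Measure per class, plus averages\nprint(classification_report(y_true, y_pred, target_names=[\"no relation\", \"relation\"]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.5"
},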
{
"text": "After the training stage of the model, it was applied to the test data set. In this evaluation step, the model obtained reasonable results, achieving an overall accuracy and F-Measure of 86%. An important observation to make is that results are also good when it comes to the target class, that is, when the label is positive, as can be seen in Table 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 345,
"end": 353,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "As indicated in Section 3, the vast majority of studies present RE solutions for texts in English or a domain other than finance. Thus, it is difficult to compare the results of the proposed method with state-of-the-art approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Nevertheless, it is shown that the proposed model was able to recognize patterns and indicate when two entities are semantically related in the same sentence in the financial domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The process of finding the best parameters for BERT is time-consuming as the predictions made by the network. The time might not be a constraint to using the RE task model applied to the context of the financial domain considering that this demand does not require the processing time to be real-time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "We believe that if the data set is increased with more samples, the model may have a performance gain. Also, we can notice that the data set has a small unbalanced distribution rate, with a greater number of negative samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "This imbalance can help explain the difference in precision and F-measure between the positive and negative class indicated in Table 3 , where it is possible to see that the model gets more right when the tested entities had no relation in the sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 127,
"end": 134,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Regarding Recall, the study indicates that, even with the imbalance of the data, the proposed model achieved a very good performance of approximately 90% when it comes to the positive class (it has a relation). That is, when it really belongs to the positive class, in approximately 90% of the cases, it identifies correctly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "It is also possible to carry out tests with adjustments of more hyper-parameters such as loss function, optimizers, among others. In addition to adjustments to the hyper-parameters of the approach, more contextual information of the samples can be added, such as the type of the named entity, whether it is an organization, person, or place, and scope adopted for the task being worked on. In this way, it is possible to delimit the types of relations between 2 entities, excluding, for example, an acquisition relation between two entities of the person type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The present work proposed an approach to extract relations between named entities, in the financialeconomic context, based on the Portuguese BERT language model, to our best knowledge, different from what is already in the literature. Thus, it provides an insight into the use of pre-trained deep language models for extracting relations for the Portuguese language financial market.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future works",
"sec_num": "7"
},
{
"text": "From the related work section, it is possible to verify that there is little research on the technology for extracting the relation between named entities for the financial domain, for the Portuguese language. This domain lacks practical solutions, given a large amount of information in the financial field, and manual analysis becomes difficult to meet the needs and make full use of that information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future works",
"sec_num": "7"
},
{
"text": "A model of classification of relations between named entities based on BERT was proposed, which replaces explicit linguistic resources, required by previous methods. This approach uses the information from the sentence and the concatenated entity pair, which allows more than one entry to be sent since a sentence can have N pairs of named entities. Therefore, the adopted approach allows the sentence and the pair of entities to be inferred to be sent separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future works",
"sec_num": "7"
},
{
"text": "The results demonstrate that the approach used can bring satisfactory results, reaching an accuracy of 86%. During the discussion of results, some adjustments were made to try to improve accuracy, such as testing other combinations of hyper-parameters and also the increase in the corpus. However, the development of memory improvements and optimizations are still in need, especially in the training period, due to the complexity of the pre-trained BERT model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future works",
"sec_num": "7"
},
{
"text": "As a natural continuation of this work, we will proceed with tests with other combinations of hyper-parameters as indicated in Section 6. To try to reduce the chance of the model being surprised with some non-standard samples, new data will be annotated and added to the research corpus. Thus, the model can be trained with a greater amount of data and a greater diversity of data patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future works",
"sec_num": "7"
},
{
"text": "As a continuity, a second model will also be developed, with sequential classification, so that it is possible to highlight the parts of the sentences that represent or describe the relation between the named entities verified. To achieve this goal, this second model will be trained only with the tuples that contain the annotated relation. Thus, the output of the model proposed in this work will be the input of the sequential classifier model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future works",
"sec_num": "7"
},
{
"text": "Available at https://simpletransformers. ai/ 2 Available at https://github.com/ huggingface/transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partially funded by the Portuguese Foundation for Science and Technology, project UIDB/00057/2020.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Rembrandt-reconhecimento de entidades mencionadas baseado em rela\u00e7oes e an\u00e1lise detalhada do texto. quot; Encontro do Segundo HAREM",
"authors": [
{
"first": "Nuno",
"middle": [],
"last": "Cardoso",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nuno Cardoso. 2008. Rembrandt-reconhecimento de entidades mencionadas baseado em rela\u00e7oes e an\u00e1lise detalhada do texto. quot; Encontro do Se- gundo HAREM (Universidade de Aveiro Portugal 7 de Setembro de 2008).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Geo-ontologias e padr\u00f5es para reconhecimento de locais e de suas rela\u00e7\u00f5es em textos: o sei-geo no segundo harem. quot",
"authors": [
{
"first": "Marc\u00edrio",
"middle": [],
"last": "Chaves",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc\u00edrio Chaves. 2008. Geo-ontologias e padr\u00f5es para reconhecimento de locais e de suas rela\u00e7\u00f5es em tex- tos: o sei-geo no segundo harem. quot; In Cristina Mota;",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Desafios na avalia\u00e7\u00e3o conjunta do reconhecimento de entidades mencionadas: O Segundo HAREM Linguateca",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Santos",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana Santos (ed) Desafios na avalia\u00e7\u00e3o con- junta do reconhecimento de entidades mencionadas: O Segundo HAREM Linguateca 2008.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A sequence model approach to relation extraction in portuguese",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "Collovini",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Machado",
"suffix": ""
},
{
"first": "Renata",
"middle": [],
"last": "Vieira",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "1908--1912",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra Collovini, Gabriel Machado, and Renata Vieira. 2016. A sequence model approach to relation extrac- tion in portuguese. In Proceedings of the Tenth In- ternational Conference on Language Resources and Evaluation (LREC'16), pages 1908-1912.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Relation extraction from text documents",
"authors": [
{
"first": "",
"middle": [],
"last": "Cvita\u0161",
"suffix": ""
}
],
"year": 2011,
"venue": "2011 Proceedings of the 34th International Convention MIPRO",
"volume": "",
"issue": "",
"pages": "1565--1570",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Cvita\u0161. 2011. Relation extraction from text docu- ments. In 2011 Proceedings of the 34th Interna- tional Convention MIPRO, pages 1565-1570. IEEE.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "From data mining to knowledge discovery in databases",
"authors": [
{
"first": "Usama",
"middle": [],
"last": "Fayyad",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Piatetsky-Shapiro",
"suffix": ""
},
{
"first": "Padhraic",
"middle": [],
"last": "Smyth",
"suffix": ""
}
],
"year": 1996,
"venue": "AI magazine",
"volume": "17",
"issue": "3",
"pages": "",
"other_ids": {
"DOI": [
"10.1609/aimag.v17i3.1230"
]
},
"num": null,
"urls": [],
"raw_text": "Usama Fayyad, Gregory Piatetsky-Shapiro, and Padhraic Smyth. 1996. From data mining to knowl- edge discovery in databases. AI magazine, 17(3):37. GS Search.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deep learning for identification of adverse drug reaction relations",
"authors": [
{
"first": "Edson",
"middle": [],
"last": "Florez",
"suffix": ""
},
{
"first": "Frederic",
"middle": [],
"last": "Precioso",
"suffix": ""
},
{
"first": "Romaric",
"middle": [],
"last": "Pighetti",
"suffix": ""
},
{
"first": "Michel",
"middle": [],
"last": "Riveill",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 International Symposium on Signal Processing Systems",
"volume": "",
"issue": "",
"pages": "149--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edson Florez, Frederic Precioso, Romaric Pighetti, and Michel Riveill. 2019. Deep learning for identifica- tion of adverse drug reaction relations. In Proceed- ings of the 2019 International Symposium on Signal Processing Systems, pages 149-153.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Subsequence-level entity attention lstm for relation extraction",
"authors": [
{
"first": "",
"middle": [],
"last": "Tao",
"suffix": ""
},
{
"first": "Yunqiang",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Yanmin",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 16th International Computer Conference on Wavelet Active Media Technology and Information Processing",
"volume": "",
"issue": "",
"pages": "262--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "TAO GAN, YUNQIANG GAN, and YANMIN HE. 2019. Subsequence-level entity attention lstm for re- lation extraction. In 2019 16th International Com- puter Conference on Wavelet Active Media Tech- nology and Information Processing, pages 262-265. IEEE.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A multi-task learning based approach to biomedical entity relation extraction",
"authors": [
{
"first": "Qingqing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zhihao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ling",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hongfei",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Kan",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yijia",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)",
"volume": "",
"issue": "",
"pages": "680--682",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qingqing Li, Zhihao Yang, Ling Luo, Lei Wang, Yin Zhang, Hongfei Lin, Jian Wang, Liang Yang, Kan Xu, and Yijia Zhang. 2018. A multi-task learning based approach to biomedical entity relation extrac- tion. In 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 680- 682. IEEE.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Improving rnn with attention and embedding for adverse drug reactions",
"authors": [
{
"first": "Chandra",
"middle": [],
"last": "Pandey",
"suffix": ""
},
{
"first": "Zina",
"middle": [],
"last": "Ibrahim",
"suffix": ""
},
{
"first": "Honghan",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 International Conference on Digital Health",
"volume": "",
"issue": "",
"pages": "67--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chandra Pandey, Zina Ibrahim, Honghan Wu, Ehte- sham Iqbal, and Richard Dobson. 2017. Improving rnn with attention and embedding for adverse drug reactions. In Proceedings of the 2017 International Conference on Digital Health, pages 67-71.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Designing an adaptive attention mechanism for relation classification",
"authors": [
{
"first": "Pengda",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Weiran",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 International Joint Conference on Neural Networks (IJCNN)",
"volume": "",
"issue": "",
"pages": "4356--4362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengda Qin, Weiran Xu, and Jun Guo. 2017. De- signing an adaptive attention mechanism for rela- tion classification. In 2017 International Joint Con- ference on Neural Networks (IJCNN), pages 4356- 4362. IEEE.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Gikiclef: Crosscultural issues in an international setting: asking non-english-centered questions to wikipedia",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Santos",
"suffix": ""
},
{
"first": "Lu\u00eds Miguel",
"middle": [],
"last": "Cabral",
"suffix": ""
}
],
"year": 2009,
"venue": "quot; In Francesca Borri; Alessandro Nardi",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana Santos and Lu\u00eds Miguel Cabral. 2009. Giki- clef: Crosscultural issues in an international setting: asking non-english-centered questions to wikipedia. In quot; In Francesca Borri; Alessandro Nardi;",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Cross Language Evaluation Forum: Working notes for CLEF",
"authors": [
{
"first": "Carol",
"middle": [],
"last": "Peters",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carol Peters (ed) Cross Language Evaluation Fo- rum: Working notes for CLEF 2009 (Corfu 30",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Reconhecimento de entidades mencionadas em portugu\u00eas: Documenta\u00e7\u00e3o e actas do harem",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Santos",
"suffix": ""
},
{
"first": "Nuno",
"middle": [],
"last": "Cardoso",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana Santos and Nuno Cardoso. 2007. Reconhec- imento de entidades mencionadas em portugu\u00eas: Documenta\u00e7\u00e3o e actas do harem, a primeira avalia\u00e7\u00e3o conjunta na\u00e1rea.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Information extraction",
"authors": [
{
"first": "Sunita",
"middle": [],
"last": "Sarawagi",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunita Sarawagi. 2008. Information extraction. Now Publishers Inc.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "BERTimbau: pretrained BERT models for Brazilian Portuguese",
"authors": [
{
"first": "F\u00e1bio",
"middle": [],
"last": "Souza",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Nogueira",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Lotufo",
"suffix": ""
}
],
"year": 2020,
"venue": "9th Brazilian Conference on Intelligent Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F\u00e1bio Souza, Rodrigo Nogueira, and Roberto Lotufo. 2020. BERTimbau: pretrained BERT models for Brazilian Portuguese. In 9th Brazilian Conference on Intelligent Systems, BRACIS, Rio Grande do Sul, Brazil, October 20-23 (to appear).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R\u00e9mi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Joe",
"middle": [],
"last": "Davison",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Shleifer",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Patrick Von Platen",
"suffix": ""
},
{
"first": "Yacine",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Jernite",
"suffix": ""
},
{
"first": "Canwen",
"middle": [],
"last": "Plu",
"suffix": ""
},
{
"first": "Teven",
"middle": [
"Le"
],
"last": "Xu",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Scao",
"suffix": ""
},
{
"first": "Mariama",
"middle": [],
"last": "Gugger",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Drame",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Lhoest",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "38--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Enriching pretrained language model with entity information for relation classification",
"authors": [
{
"first": "Shanchan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 28th ACM International Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "2361--2364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shanchan Wu and Yifan He. 2019. Enriching pre- trained language model with entity information for relation classification. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 2361-2364.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Pre-trained bert-gru model for relation extraction",
"authors": [
{
"first": "Rongli",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "Wenxin",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 8th International Conference on Computing and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "453--457",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rongli Yi and Wenxin Hu. 2019. Pre-trained bert-gru model for relation extraction. In Proceedings of the 2019 8th International Conference on Computing and Pattern Recognition, pages 453-457.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A convolutional neural network method for relation classification",
"authors": [
{
"first": "Qin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianhua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhixiong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 International Conference on Progress in Informatics and Computing (PIC)",
"volume": "",
"issue": "",
"pages": "440--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qin Zhang, Jianhua Liu, Ying Wang, and Zhixiong Zhang. 2017. A convolutional neural network method for relation classification. In 2017 Interna- tional Conference on Progress in Informatics and Computing (PIC), pages 440-444. IEEE.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Research on entity relationship extraction in financial and economic field based on deep learning",
"authors": [
{
"first": "Zhenyu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Haiyang",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 IEEE 4th International Conference on Computer and Communications (ICCC)",
"volume": "",
"issue": "",
"pages": "2430--2435",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenyu Zhou and Haiyang Zhang. 2018. Research on entity relationship extraction in financial and eco- nomic field based on deep learning. In 2018 IEEE 4th International Conference on Computer and Com- munications (ICCC), pages 2430-2435. IEEE.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Complete model architecture with its 3 layers: (1) Input layer; (2) BERT layer; (3) Output layer.",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Figure 2: Examples of data transformations in the input layer of the model. The entities to be evaluated appear in bold, and the text that represents the semantic relation between them is underlined.",
"type_str": "figure"
},
"TABREF0": {
"content": "<table><tr><td/><td/><td>Positive</td><td>Positive</td></tr><tr><td>Set</td><td colspan=\"2\">Samples Class</td><td>Samples</td></tr><tr><td/><td/><td>Distribution (%)</td><td/></tr><tr><td>Original</td><td>3288</td><td>45.16</td><td>1485</td></tr><tr><td>Training</td><td>2630</td><td>45.17</td><td>1188</td></tr><tr><td colspan=\"2\">Validation 329</td><td>45.28</td><td>149</td></tr><tr><td>Test</td><td>329</td><td>45.98</td><td>148</td></tr></table>",
"num": null,
"type_str": "table",
"html": null,
"text": "Combination of hyper-parameters that presented better results."
},
"TABREF1": {
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null,
"text": "Sample composition of each data set used in the experiments."
},
"TABREF3": {
"content": "<table/>",
"num": null,
"type_str": "table",
"html": null,
"text": "Precision, Recall and F-Measure calculated for each class and Accuracy and general F-Measure of the model."
}
}
}
}