{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:33:06.625538Z"
},
"title": "CoVA: Context-aware Visual Attention for Webpage Information Extraction",
"authors": [
{
"first": "Anurendra",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {}
},
"email": ""
},
{
"first": "Keval",
"middle": [],
"last": "Morabia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {}
},
"email": "morabia2@illinois.edu"
},
{
"first": "Jingjin",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {}
},
"email": "jingjin9@illinois.edu"
},
{
"first": "Kevin Chen-Chuan",
"middle": [],
"last": "Chang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {}
},
"email": "kcchang@illinois.edu"
},
{
"first": "Alexander",
"middle": [],
"last": "Schwing",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {}
},
"email": "aschwing@illinois.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Webpage information extraction (WIE) is an important step to create knowledge bases. For this, classical WIE methods leverage the Document Object Model (DOM) tree of a website. However, use of the DOM tree poses significant challenges as context and appearance are encoded in an abstract manner. To address this challenge we propose to reformulate WIE as a context-aware Webpage Object Detection task. Specifically, we develop a Contextaware Visual Attention-based (CoVA) detection pipeline which combines appearance features with syntactical structure from the DOM tree. To study the approach we collect a new large-scale dataset 1 of e-commerce websites for which we manually annotate every web element with four labels: product price, product title, product image and others. On this dataset we show that the proposed CoVA approach is a new challenging baseline which improves upon prior state-of-the-art methods.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Webpage information extraction (WIE) is an important step to create knowledge bases. For this, classical WIE methods leverage the Document Object Model (DOM) tree of a website. However, use of the DOM tree poses significant challenges as context and appearance are encoded in an abstract manner. To address this challenge we propose to reformulate WIE as a context-aware Webpage Object Detection task. Specifically, we develop a Contextaware Visual Attention-based (CoVA) detection pipeline which combines appearance features with syntactical structure from the DOM tree. To study the approach we collect a new large-scale dataset 1 of e-commerce websites for which we manually annotate every web element with four labels: product price, product title, product image and others. On this dataset we show that the proposed CoVA approach is a new challenging baseline which improves upon prior state-of-the-art methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Webpage information extraction (WIE) is an important step when creating a large-scale knowledge base (Chang et al., 2006; Azir and Ahmad, 2017) which has many downstream applications such as knowledge-aware question answering (Lin et al., 2019) and recommendation systems (Ma et al., 2019; Lin et al., 2020) .",
"cite_spans": [
{
"start": 101,
"end": 121,
"text": "(Chang et al., 2006;",
"ref_id": "BIBREF6"
},
{
"start": 122,
"end": 143,
"text": "Azir and Ahmad, 2017)",
"ref_id": null
},
{
"start": 226,
"end": 244,
"text": "(Lin et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 272,
"end": 289,
"text": "(Ma et al., 2019;",
"ref_id": "BIBREF32"
},
{
"start": 290,
"end": 307,
"text": "Lin et al., 2020)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Classical methods for WIE, like Wrapper Induction (Soderland, 1999; Muslea et al., 1998; Chang and Lui, 2001) , rely on the publicly available source code of websites. The code is commonly parsed into a document object model (DOM) tree. The DOM tree is a programming language independent tree representation of any website, which contains all its elements. It can be obtained using 1 CoVA dataset and code are available at github.com/kevalmorabia97/CoVA-Web-Object-Detection * These authors contributed equally to this work various libraries like Puppeteer. These elements contain information about their location in the rendered webpage, styling like font size, etc., and text if it is a leaf node. State of the art method in WIE (Lin et al., 2020 ) uses text and markup information and employ CNN-BiLSTM encoder (Rhanoui et al., 2019) on the sequence of HTML nodes obtained from DOM to learn the embedding of each node. However, using only the DOM tree for WIE is increasingly challenging for a variety of reasons: 1) Webpages are programmed to be aesthetically pleasing; 2) Oftentimes content and style is separated in website code and hence the DOM tree; 3) The same visual result can be obtained in a plethora of ways; 4) Branding banners and advertisements are interspersed with information of interest.",
"cite_spans": [
{
"start": 50,
"end": 67,
"text": "(Soderland, 1999;",
"ref_id": "BIBREF41"
},
{
"start": 68,
"end": 88,
"text": "Muslea et al., 1998;",
"ref_id": "BIBREF36"
},
{
"start": 89,
"end": 109,
"text": "Chang and Lui, 2001)",
"ref_id": "BIBREF7"
},
{
"start": 731,
"end": 748,
"text": "(Lin et al., 2020",
"ref_id": "BIBREF26"
},
{
"start": 814,
"end": 836,
"text": "(Rhanoui et al., 2019)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For this reason, recently, WIE applied optical character recognition (OCR) on rendered websites followed by word embedding-based natural language extraction (Staar et al., 2018) . However, as mentioned before, recent webpages are highly enriched with visual content, and classical word embeddings don't capture this contextual information. For instance, text in advertising banners may be interpreted as valuable information. For this reason, a simple OCR detection followed by natural language processing techniques is a suboptimal for WIE (Vishwanath et al., 2018) .",
"cite_spans": [
{
"start": 157,
"end": 177,
"text": "(Staar et al., 2018)",
"ref_id": "BIBREF42"
},
{
"start": 541,
"end": 566,
"text": "(Vishwanath et al., 2018)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In response to these challenges we develop WIE based on a visual representation of a web element and its context. This permits to address the aforementioned four challenges. Moreover, visual features are independent of the programming language (e.g., HTML for webpages, Dart for Android or iOS apps) and partially also the website language (e.g., Arabic, Chinese, English). Intuitively, we aim to mimic the ability of humans to detect the location of target elements like product price, product title and product image on a webpage in a foreign language like the one shown in Fig. 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 576,
"end": 582,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For this, we develop a context-aware Webpage Object Detection (WOD), which we refer to as Context-aware Visual Attention-based detection (CoVA), where entities like prices are objects. Somewhat differently from an object in natural images which can be detected largely based on its appearance, objects on a webpage are strongly defined by contextual information. e.g., a cat's appearance is largely independent of its nearby objects, whereas a product price is a highly ambiguous object ( Fig. 2) . It refers to the price of a product only when it is contextually related to a product title and a product image. The developed WOD uses a graph attention based architecture, which leverages the underlying syntactic DOM tree (Zhou et al., 2021) to focus on important context (Zhu et al., 2005) while classifying an element on a webpage. Once these web elements are identified, the relevant information e.g. price and title can be obtained from the corresponding DOM nodes. These information can then be indexed and used for applications like product search and price comparison across online retailers.",
"cite_spans": [
{
"start": 723,
"end": 742,
"text": "(Zhou et al., 2021)",
"ref_id": "BIBREF50"
},
{
"start": 773,
"end": 791,
"text": "(Zhu et al., 2005)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 489,
"end": 496,
"text": "Fig. 2)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To facilitate this task we create a dataset of 7.7k English product webpage screenshots along with DOM information spanning 408 different websites (domains). We compare the results of CoVA with existing and newly created baselines that take visual features into account. We show that CoVA leads to substantial improvements while yielding interpretable contextual representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, we make the following contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We formulate WIE as a context-aware WOD problem. 2. We develop a Context-aware Visual Attentionbased (CoVA) detection pipeline, which is end-to-end trainable and exploits syntactic structure from the DOM tree along with screenshots. CoVA improves recent state-ofthe-art baselines by a significant margin. 3. We create the largest public dataset of 7.7k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "English product webpage screenshots from 408 online retailers for Object Detection from product webpages. Our dataset is \u223c 10\u00d7 larger than existing datasets. 4. We show the interpretability of CoVA using attention visualizations (Sec. 6.5) 5. We claim and validate that visual features (without textual content) along with DOM information are sufficient for many tasks while allowing cross-domain and cross-language generalizability. CoVA trained on English webpages perform well on Chinese Webpages (Sec. 6.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Webpage information extraction (WIE) has been mainly addressed with Wrapper Induction (WI). WI aims to learn a set of extraction rules from HTML code or text, using manually labeled examples and counter-examples (Soderland, 1999; Muslea et al., 1998; Chang and Lui, 2001 ). These often require human intervention which is timeconsuming, error-prone (Vadrevu et al., 2005) , and does not generalize to new templates. Supervised learning, which treats WIE as a classification task has also garnered significant attention. Traditionally, natural language processing techniques are employed over HTML or DOM information. Structural and semantic features (Ibrahim et al., 2008; Gibson et al., 2007) are obtained for each part of a webpage to predict categories like title, author, etc. Wu et al. (2015) casts WIE as a HTML node selection problem using features such as positions, areas, fonts, text, tags, and links. Lin et al. (2020) proposes a neural network to learn representation of a DOM node by combining text and markup information. A CNN-BiLSTM encoder is employed to learn the embeddings for HTML node. Hwang et al. (2020) develops a transformer architecture to learn spatial dependency between DOM nodes. Unlike these work which depends on text information, we aim to learn representation of a DOM node using only visual cues. Joshi and Liu (2009) develop a semantic similarity between blocks of webpages using textual and DOM features to extract the key article on a webpage.",
"cite_spans": [
{
"start": 212,
"end": 229,
"text": "(Soderland, 1999;",
"ref_id": "BIBREF41"
},
{
"start": 230,
"end": 250,
"text": "Muslea et al., 1998;",
"ref_id": "BIBREF36"
},
{
"start": 251,
"end": 270,
"text": "Chang and Lui, 2001",
"ref_id": "BIBREF7"
},
{
"start": 349,
"end": 371,
"text": "(Vadrevu et al., 2005)",
"ref_id": "BIBREF44"
},
{
"start": 650,
"end": 672,
"text": "(Ibrahim et al., 2008;",
"ref_id": "BIBREF19"
},
{
"start": 673,
"end": 693,
"text": "Gibson et al., 2007)",
"ref_id": "BIBREF9"
},
{
"start": 781,
"end": 797,
"text": "Wu et al. (2015)",
"ref_id": "BIBREF48"
},
{
"start": 912,
"end": 929,
"text": "Lin et al. (2020)",
"ref_id": "BIBREF26"
},
{
"start": 1108,
"end": 1127,
"text": "Hwang et al. (2020)",
"ref_id": null
},
{
"start": 1347,
"end": 1353,
"text": "(2009)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Visual features have been extensively employed to generate visual wrappers for pattern extraction. Mostly, these utilize hand-crafted visual features from a webpage, e.g., area size, font size, and type. Cai et al. (2003) develop a visual block tree of a webpage using visual and layout features along with the DOM tree information. Subsequent works use this tree for tasks like webpage segmentation, visual wrapper generation, and web record extraction (Cai et al., 2004; Liu et al., 2003; Simon and Lausen, 2005; Burget and Rudolfova, 2009) . Gogar et al. (2016) aims to develop domain-specific wrappers which generalize across unseen templates and don't need manual intervention. They develop a unified model that encodes visual, textual, and positional features using a single CNN.",
"cite_spans": [
{
"start": 204,
"end": 221,
"text": "Cai et al. (2003)",
"ref_id": "BIBREF5"
},
{
"start": 454,
"end": 472,
"text": "(Cai et al., 2004;",
"ref_id": "BIBREF4"
},
{
"start": 473,
"end": 490,
"text": "Liu et al., 2003;",
"ref_id": "BIBREF28"
},
{
"start": 491,
"end": 514,
"text": "Simon and Lausen, 2005;",
"ref_id": "BIBREF40"
},
{
"start": 515,
"end": 542,
"text": "Burget and Rudolfova, 2009)",
"ref_id": "BIBREF2"
},
{
"start": 545,
"end": 564,
"text": "Gogar et al. (2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Object detection (OD) techniques in Computer Vision, which aims to detect and classify all objects, has been extensively studied for natural images. Deep learning methods such as YOLO (Redmon and Farhadi, 2018) , R-CNN variants (Girshick et al., 2014; Girshick, 2015; He et al., 2017) , etc. yielded state-of-the-art results in OD.",
"cite_spans": [
{
"start": 184,
"end": 210,
"text": "(Redmon and Farhadi, 2018)",
"ref_id": "BIBREF37"
},
{
"start": 228,
"end": 251,
"text": "(Girshick et al., 2014;",
"ref_id": "BIBREF11"
},
{
"start": 252,
"end": 267,
"text": "Girshick, 2015;",
"ref_id": "BIBREF10"
},
{
"start": 268,
"end": 284,
"text": "He et al., 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "OD methods that can capture contextual information are of particular interest here. Murphy et al. (2006) learn local and global context by object presence and localization and use a product of experts model (Hinton, 2002) to combine them. Kong et al. (2021) proposes a short path context module which transforms the integrated feature maps by considering local feature affinities.",
"cite_spans": [
{
"start": 84,
"end": 104,
"text": "Murphy et al. (2006)",
"ref_id": "BIBREF35"
},
{
"start": 207,
"end": 221,
"text": "(Hinton, 2002)",
"ref_id": "BIBREF16"
},
{
"start": 239,
"end": 257,
"text": "Kong et al. (2021)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Graph Convolutional Networks (GCN) (Kipf and Welling, 2016) was proposed to learn a node representation while taking neighbors of a node into account. Using it, Liu et al. 2019represent a visually rich document as a complete graph of text content obtained by passing OCR (Mithe et al., 2013) . They employ GCN to learn node representations for each web element.",
"cite_spans": [
{
"start": 271,
"end": 291,
"text": "(Mithe et al., 2013)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently, Attention mechanisms have also shown remarkable ability in capturing contextual information (Bahdanau et al., 2014) . Vaswani et al. (2017) propose a transformer architecture for language modeling. Luo et al. (2018) use attention over a BiLSTM-CRF layer for Named Entity Recognition (NER) on biomedical data. Word vectors learned on BERT (Devlin et al., 2018) , which use self-attention, have yielded state-of-the-art results on 11 NLP tasks.",
"cite_spans": [
{
"start": 102,
"end": 125,
"text": "(Bahdanau et al., 2014)",
"ref_id": "BIBREF1"
},
{
"start": 128,
"end": 149,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF45"
},
{
"start": 208,
"end": 225,
"text": "Luo et al. (2018)",
"ref_id": "BIBREF31"
},
{
"start": 348,
"end": 369,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Separately, attention has been used for contextual learning in OD (Li et al., 2013; Hsieh et al., 2019; Morabia et al., 2020) and image captioning (You et al., 2016) . Attention mechanisms have also been employed over graphs to learn an optimal representation of nodes while taking graph structure into account (Veli\u010dkovi\u0107 et al., 2017) . Moreover, attention permits to interpret result, which is often desired in many applications. We show our visualizations depicting this advantage below (Sec. 6.5).",
"cite_spans": [
{
"start": 66,
"end": 83,
"text": "(Li et al., 2013;",
"ref_id": "BIBREF24"
},
{
"start": 84,
"end": 103,
"text": "Hsieh et al., 2019;",
"ref_id": "BIBREF17"
},
{
"start": 104,
"end": 125,
"text": "Morabia et al., 2020)",
"ref_id": "BIBREF34"
},
{
"start": 147,
"end": 165,
"text": "(You et al., 2016)",
"ref_id": "BIBREF49"
},
{
"start": 311,
"end": 336,
"text": "(Veli\u010dkovi\u0107 et al., 2017)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The DOM tree captures the syntactical structure of a webpage similar to a parse tree of a natural language. Our goal is to extract semantic information exploiting this syntactic structure. We view a leaf web element as a word and the webpage as a document with the DOM tree as its underlying parse tree. Formally, we represent a webpage W as the set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem formulation",
"sec_num": "3"
},
{
"text": "W = {v 1 , v 2 , . . . , v i , . . . , v N , D}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem formulation",
"sec_num": "3"
},
{
"text": "where v i denotes the visual representation of the i-th web element, N denotes number of web elements, and D refers to the DOM tree which contains the relations between the web elements. Our goal is to learn a parametric function f \u03b8 (y i |W, i) which extracts a visual representation v i of the i-th web element from website W so as to accurately predict label y i of the web element. In the following we consider four labels for a product, i.e., y i \u2208 {product price, title, image, others}. The parameters \u03b8 are obtained by minimizing the following supervised classification loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem formulation",
"sec_num": "3"
},
{
"text": "\u03b8 * = argmin \u03b8 E i,W \u223cP W [L(f \u03b8 (y i |W, i), y * i )] ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem formulation",
"sec_num": "3"
},
{
"text": "where E denotes an expectation, y i and y * i denote the predicted and ground truth labels and P W denotes a probability distribution over webpages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem formulation",
"sec_num": "3"
},
{
"text": "Information of a webpage is present in the leaves of the DOM tree, i.e., the web elements i. Web elements are an atomic entity which is characterized by a rectangular bounding box. We can extract the target information y i from the DOM tree if we know the exact leaf bounding boxes of the desired element. Therefore, we can view WIE as an object detection (OD) task where objects are leaf elements and might contain the desired entity (target). However, identity y i of a web element is heavily dependent on its context, e.g., price, title, and image of a product are most likely to be in same or nearby sub-tree in comparison to unrelated web elements such as advertisements. Similarly, there can be multiple instances of price-like elements. However, the correct price would be contextually positioned with product title and image (Fig. 2) . Therefore, we formulate WIE as a context-aware OD.",
"cite_spans": [],
"ref_spans": [
{
"start": 833,
"end": 841,
"text": "(Fig. 2)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Problem formulation",
"sec_num": "3"
},
{
"text": "We use the DOM tree to identify context for a web element. We represent the syntactic closeness between web elements through edges in the graph (discussed in next section). We then employ a graph attention mechanism (Veli\u010dkovi\u0107 et al., 2017) to attend to the most important contexts.",
"cite_spans": [
{
"start": 216,
"end": 241,
"text": "(Veli\u010dkovi\u0107 et al., 2017)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem formulation",
"sec_num": "3"
},
{
"text": "In this section, we present our Context-Aware Visual Attention-based end-to-end pipeline for Webpage Object Detection (CoVA) which aims to learn function f to predict labels y = [y 1 , y 2 , . . . , y N ] for a webpage. The input to CoVA consists of 1. a screenshot of a webpage, 2. list of bounding boxes [x, y, w, h] of the web elements, and 3. neighborhood information for each element obtained from DOM. It should be noted that bounding boxes of the web elements are relatively accurate and doesn't pose challenges similar to OD for natural images.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed End-to-End Pipeline -CoVA",
"sec_num": "4"
},
{
"text": ".. As illustrated in Fig. 3 , this information is processed by CoVA in four stages: 1. the graph representation extraction for the webpage, 2. the Representation Network (RN), 3. the Graph Attention Network (GAT), and 4. a fully connected (FC) layer. The graph representation extraction computes for every web element i its set of neighboring web elements N i . The RN consists of a Convolutional Neural Net (CNN) and a positional encoder aimed to learn a visual representation v i for each web element i \u2208 {1, . . . , N }. The GAT combines the visual representation v i of the web element i to be classified and those of its neighbors, i.e., v k \u2200k \u2208 N i to compute the contextual representation c i for web element i. Finally, the visual and contextual representations of the web element are concatenated and passed through the FC layer to obtain the classification output. We describe each of the components next.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 27,
"text": "Fig. 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "GAT v0",
"sec_num": null
},
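{
"text": "The following is a minimal PyTorch sketch of this four-stage flow, not the released implementation: the CoVA class, its constructor arguments, and the rn/gat sub-modules (sketched in the following subsections) are illustrative names.\n\nimport torch\nimport torch.nn as nn\n\nclass CoVA(nn.Module):\n    def __init__(self, rn, gat, feat_dim, ctx_dim, num_classes=4):\n        super().__init__()\n        # Stage 2 and 3 sub-modules: Representation Network and GAT.\n        self.rn, self.gat = rn, gat\n        # Stage 4: FC layer over concatenated visual + contextual features.\n        self.fc = nn.Linear(feat_dim + ctx_dim, num_classes)\n\n    def forward(self, screenshot, boxes, neighbors):\n        # neighbors comes from stage 1, the DOM-based graph extraction.\n        v = self.rn(screenshot, boxes)            # visual representations v_i\n        c = self.gat(v, neighbors)                # contextual representations c_i\n        return self.fc(torch.cat([v, c], dim=1))  # per-element class scores",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed End-to-End Pipeline -CoVA",
"sec_num": "4"
},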
{
"text": "We represent a webpage as a graph where nodes are leaf web elements and an edge indicates that the corresponding web elements are contextually relevant to each other. A naive way to create graph is by putting edge between every pair of nodes . An alternative way of creating a graph is to add edges to nearby nodes based on spatial distance. However, web elements vary greatly in shapes & sizes, and two web elements might have small distance but they're contextually irrelevant since they lie in different DOM subtrees. For this, we use the K nearest leaf elements in the DOM tree as the neighbors N i a web element i. An edge within the graph denotes the syntactic closeness in the DOM tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Webpage as a Graph",
"sec_num": "4.1"
},
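{
"text": "A minimal sketch of this neighbor construction, assuming the DOM is available as a tree of nodes with a children attribute; the function names are illustrative, and treating preorder distance between leaves as a proxy for closeness in the DOM tree is our reading, not a detail stated in the paper.\n\ndef preorder_leaves(root):\n    # Collect leaf web elements in DOM preorder.\n    leaves, stack = [], [root]\n    while stack:\n        node = stack.pop()\n        if not node.children:\n            leaves.append(node)\n        else:\n            stack.extend(reversed(node.children))\n    return leaves\n\ndef k_nearest_leaf_neighbors(root, k=24):\n    # Neighbors N_i of leaf i are the k leaves closest in preorder position.\n    leaves = preorder_leaves(root)\n    neighbors = {}\n    for i in range(len(leaves)):\n        order = sorted(range(len(leaves)), key=lambda j: abs(j - i))\n        neighbors[i] = [j for j in order if j != i][:k]\n    return leaves, neighbors",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Webpage as a Graph",
"sec_num": "4.1"
},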
{
"text": "The goal of the Representation Network (RN) is to learn a fixed size visual representation v i of any web element i \u2208 {1, . . . , N }. This is important since web elements have different sizes, aspect ratios, and content type (image or text). To achieve this the RN consists of a CNN operating on the screenshot of a webpage, followed by a Region of Interest (RoI) pooling layer (Girshick, 2015 ) and a positional encoder. Specifically, RoI pooling is performed to obtain a fixed size representation for all web elements. To capture the spatial layout, we learn a P dimensional positional feature which is obtained by passing the bounding box features [x, y, w, h, w h ] through a positional encoder implemented by a single layer neural net. Finally, we concatenate the flattened output of the RoI pooling with positional features to obtain the visual representation v i .",
"cite_spans": [
{
"start": 379,
"end": 394,
"text": "(Girshick, 2015",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representation Network (RN)",
"sec_num": "4.2"
},
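{
"text": "A minimal PyTorch sketch of such an RN; the layer choices follow the hyperparameters reported in Sec. 6.2 (first 5 layers of ResNet18, 3x3 RoI pooling, P = 32), but the module itself is illustrative rather than the released code.\n\nimport torch\nimport torch.nn as nn\nimport torchvision\n\nclass RepresentationNetwork(nn.Module):\n    def __init__(self, pos_dim=32, roi_size=3):\n        super().__init__()\n        backbone = torchvision.models.resnet18(pretrained=True)\n        # First 5 children of ResNet18 yield a 64-channel feature map.\n        self.cnn = nn.Sequential(*list(backbone.children())[:5])\n        self.roi_size = roi_size\n        # Positional encoder: single-layer net over [x, y, w, h, w/h].\n        self.pos_encoder = nn.Linear(5, pos_dim)\n\n    def forward(self, screenshot, boxes):\n        # screenshot: (1, 3, H, W); boxes: (N, 4) as [x, y, w, h] in pixels.\n        fmap = self.cnn(screenshot)\n        scale = fmap.shape[-1] / screenshot.shape[-1]\n        xyxy = torch.stack([boxes[:, 0], boxes[:, 1],\n                            boxes[:, 0] + boxes[:, 2],\n                            boxes[:, 1] + boxes[:, 3]], dim=1)\n        pooled = torchvision.ops.roi_pool(fmap, [xyxy], output_size=self.roi_size, spatial_scale=scale)\n        pos = self.pos_encoder(torch.cat([boxes, boxes[:, 2:3] / boxes[:, 3:4]], dim=1))\n        # v_i: flattened RoI features concatenated with positional features.\n        return torch.cat([pooled.flatten(1), pos], dim=1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representation Network (RN)",
"sec_num": "4.2"
},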
{
"text": "The goal of the graph attention network is to compute a contextual representation c i for each web element i which takes visual information v i from neighboring web elements into account. However, out of multiple neighbors for a web element, only a few are informative, e.g., a web element having a currency symbol near a set of digits seems relevant. To identify the relational importance we use a Graph Attention Network (GAT) (Veli\u010dkovi\u0107 et al., 2017) . We transform each of the input features by learning projection matrices W 1 and W 2 applied at every node and its neighbors. We then employ self-attention (Lin et al., 2017) to compute the importance score,",
"cite_spans": [
{
"start": 429,
"end": 454,
"text": "(Veli\u010dkovi\u0107 et al., 2017)",
"ref_id": "BIBREF46"
},
{
"start": 612,
"end": 630,
"text": "(Lin et al., 2017)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Attention Network (GAT)",
"sec_num": "4.3"
},
{
"text": "\u03b1 ij = exp(LeakyReLU(a T [W 1 v i ||W 2 v j ])) k\u2208N i exp(LeakyReLU(a T [W 1 v i ||W 2 v k ]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Attention Network (GAT)",
"sec_num": "4.3"
},
{
"text": ", where \u2022 T represents transposition, || is the concatenation operation, N i denotes the neighbors of web element i. The weights \u03b1 ij are non-negative attention scores for neighboring web elements of web element i. Finally, we obtain the contextual representation c i for a web element i as a weighted combination of projected visual representations of its neighbors, i.e., via",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Attention Network (GAT)",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "c i = j\u2208N i \u03b1 ij W 2 v j .",
"eq_num": "(1)"
}
],
"section": "Graph Attention Network (GAT)",
"sec_num": "4.3"
},
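{
"text": "A minimal single-head PyTorch sketch of this attention step, directly following Eq. (1); the 384-dimensional projections match Sec. 6.2, while the class and variable names are illustrative.\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass ContextAttention(nn.Module):\n    def __init__(self, in_dim, out_dim=384):\n        super().__init__()\n        self.W1 = nn.Linear(in_dim, out_dim, bias=False)\n        self.W2 = nn.Linear(in_dim, out_dim, bias=False)\n        self.a = nn.Linear(2 * out_dim, 1, bias=False)\n\n    def forward(self, v, neighbors):\n        # v: (N, in_dim) visual representations; neighbors[i]: neighbor ids of i.\n        h1, h2 = self.W1(v), self.W2(v)\n        context = []\n        for i, nbrs in enumerate(neighbors):\n            idx = torch.tensor(nbrs)\n            # e_ij = LeakyReLU(a^T [W_1 v_i || W_2 v_j]) for each neighbor j.\n            e = F.leaky_relu(self.a(torch.cat([h1[i].expand(len(idx), -1), h2[idx]], dim=1))).squeeze(1)\n            alpha = F.softmax(e, dim=0)        # attention scores alpha_ij\n            context.append(alpha @ h2[idx])    # c_i = sum_j alpha_ij W_2 v_j\n        return torch.stack(context)            # (N, out_dim) contextual reps",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Attention Network (GAT)",
"sec_num": "4.3"
},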
{
"text": "In scenarios where additional features (e.g., text content, HTML tag information, etc.) are available, CoVA can be easily extended to incorporate those. These features can be concatenated with visual representations obtained from the RN without modifying the pipeline in any other way. We refer to this extended pipeline as CoVA++. However, making the model dependent on these features might lead to constraints regarding the programming language (HTML tags) or text language. In Sec. 6.4, we show that CoVA trained on English webpages (without additional features) generalizes well to Chinese webpages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Augmenting CoVA with extra features",
"sec_num": "4.4"
},
{
"text": "To the best of our knowledge there is no large-scale dataset for WIE with visual annotations for object detection. So far, the Structured Web Data Extraction (SWDE) dataset (Hao et al., 2011) is the only known large dataset that can be used for training deep neural networks for WIE (Lin et al., 2020; Lockard et al., 2019) . SWDE dataset contains webpage HTML codes which is not sufficient to render it into a screenshot (since it contains links to old and non-existent URLs). Because of this we create a new large-scale labeled dataset for object detection on English product webpage screenshots along with DOM information. We chose e-commerce websites since those have been a de-facto standard for WIE (Gogar et al., 2016; Zhu et al., 2005) . Our dataset generation consists of two steps: 1. search the web with 'shopping' keywords to aggregate diverse webpages and employ heuristics to automate labeling of product price, title, and image, 2. manual correction of incorrect labels. We discuss both steps next. Web scraping and coarse labeling. To scrape websites, we use Google shopping 2 which aggregates links to multiple online retailers (domains) for the same product. These links are uploaded by the merchants of the respective domains. We do a keyword search for various categories, like electronics, food, cosmetics. For each search result, we record the price and title from Google shopping. Then, we navigate through the links to specific product websites and save a 1280 \u00d7 1280 screenshot. To extract a bounding box for each web element, we store a pruned DOM tree. Price and title candidates are labeled by comparing with the recorded values using heuristics. For product images, we always choose the DOM element having the largest bounding box area among all the elements with an <img> HTML tag, although this might not be true for many websites. We correct this issue in the next step.",
"cite_spans": [
{
"start": 173,
"end": 191,
"text": "(Hao et al., 2011)",
"ref_id": "BIBREF13"
},
{
"start": 283,
"end": 301,
"text": "(Lin et al., 2020;",
"ref_id": "BIBREF26"
},
{
"start": 302,
"end": 323,
"text": "Lockard et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 705,
"end": 725,
"text": "(Gogar et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 726,
"end": 743,
"text": "Zhu et al., 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Generation",
"sec_num": "5"
},
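{
"text": "An illustrative sketch of such coarse-labeling heuristics; the exact matching rules are not specified in the paper, so the string-matching logic and field names below are assumptions.\n\ndef coarse_label(leaves, recorded_price, recorded_title):\n    # leaves: list of dicts with 'text', 'tag', and 'bbox' = (x, y, w, h).\n    labels = ['others'] * len(leaves)\n    for i, leaf in enumerate(leaves):\n        text = (leaf['text'] or '').strip()\n        if recorded_price and recorded_price in text:\n            labels[i] = 'price'    # price candidate by string match (assumed rule)\n        elif recorded_title and recorded_title.lower() in text.lower():\n            labels[i] = 'title'\n    # Product image: <img> element with the largest bounding-box area (Sec. 5).\n    imgs = [i for i, l in enumerate(leaves) if l['tag'] == 'img']\n    if imgs:\n        biggest = max(imgs, key=lambda i: leaves[i]['bbox'][2] * leaves[i]['bbox'][3])\n        labels[biggest] = 'image'\n    return labels",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Generation",
"sec_num": "5"
},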
{
"text": "Label correction. The coarse labeling is only \u223c60% accurate because 1. price on webpages keeps changing and might differ from the Google shopping price, and 2. many bounding boxes have the same content. To correct for these mistakes, we manually inspected and correct labeling errors. We obtained 7,740 webpages spanning 408 domains. Each of these webpages contains exactly one labeled price, title, and image. All other web elements are labeled as 'others'. On average, there are \u223c90 leaf web elements on a webpage. Train-Val-Test split. We create a cross-domain split which ensures that each of the train, val and test sets contains webpages from different domains. We observed that the top-5 frequent domains were Amazon, EBay, Walmart, Etsy, and Target. So, we created 5 different splits for 5-Fold Cross Validation such that each of the major domains is present in one of the test splits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Generation",
"sec_num": "5"
},
{
"text": "6 Experimental Setup & Results",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset Generation",
"sec_num": "5"
},
{
"text": "We compare the results of our end-to-end pipeline CoVA with other existing and newly created base- lines summarized below. Our newly created baselines combine existing object detection and graph based models to identify the importance of visual features and contextual representations. (Gogar et al., 2016) : This method identifies product price, title, and image from the visual and textual representation of the web elements. Random Forest on Heuristic features: We train a Random Forest classifier with 100 trees using various HTML tags, text, and bounding box features as shown in Fig. 4 . Fast R-CNN*: We compare with Fast R-CNN (Girshick, 2015) to quantify the importance of contextual representations in CoVA. We use the DOM tree instead of selective search (Uijlings et al., 2013) for bounding box proposals. We also use positional features as described when discussing the representation network (Sec. 4.2) for a fair comparison with CoVA. We will refer to this baseline as 'Fast R-CNN*.' Fast R-CNN* + GCN (Kipf and Welling, 2016):",
"cite_spans": [
{
"start": 286,
"end": 306,
"text": "(Gogar et al., 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 585,
"end": 591,
"text": "Fig. 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Baseline Methods",
"sec_num": "6.1"
},
{
"text": "We use GCN on our graph formulation where node features are the visual representations obtained from Fast R-CNN*. Fast R-CNN* + Bi-LSTM (Schuster and Paliwal, 1997) :",
"cite_spans": [
{
"start": 136,
"end": 164,
"text": "(Schuster and Paliwal, 1997)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Methods",
"sec_num": "6.1"
},
{
"text": "We train a bidirectional LSTM on visual representations of web elements in preorder traversal of the DOM tree. We use its output as the contextual representation and concatenate it with the visual representation of the web element obtained from Fast R-CNN*.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Methods",
"sec_num": "6.1"
},
{
"text": "In each training epoch, we randomly sample 90% from others. This increases the diversity in training data by providing different contexts for web-pages with exactly the same template. We use batch normalization (Ioffe and Szegedy, 2015) between consecutive layers, Adam optimizer for updating model parameters and minimize cross-entropy loss.",
"cite_spans": [
{
"start": 211,
"end": 236,
"text": "(Ioffe and Szegedy, 2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training, Inference and Evaluation",
"sec_num": "6.2"
},
{
"text": "During inference, the model detects one web element with highest probability for each class. Once the web element is identified, the corresponding text content can be extracted from the DOM tree or by using OCR for downstream tasks. For CoVA++ we use as additional information the same heuristic features used to train the Random Forest classifier baseline. Unless specified otherwise, all results of CoVA and baselines use the following hyperparameters where applicable: learning rate = 5e-4, batch size = 5 screenshot images, K = 24 neighbor elements in the graph, RoI pool output size (H \u00d7 W ) = (3 \u00d7 3), dropout = 0.2, P = 32 dimensional positional features, output dimension for projection matrix W 1 , W 2 is 384, weight decay = 1e-3. We use the first 5 layers of a pre-trained ResNet18 (He et al., 2016) in the representation network (RN), which yields a 64 channel feature map. This significantly reduces the parameters in the RN from 12m to 0.2m and speeds up training at the same time. The evaluation is performed using Cross-domain Accuracy for each class, i.e., the fraction of webpages of new domains with correct class. All the experiments are performed on Tesla V100-SXM2-16GB GPUs.",
"cite_spans": [
{
"start": 793,
"end": 810,
"text": "(He et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training, Inference and Evaluation",
"sec_num": "6.2"
},
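{
"text": "A short sketch of this setup under stated assumptions: cova stands for the full pipeline and train_loader for a loader yielding one page at a time with the 'others' sampling already applied; both are placeholders rather than the released code.\n\nimport torch\n\ndef train(cova, train_loader, num_epochs):\n    # Hyperparameters from Sec. 6.2: Adam, lr 5e-4, weight decay 1e-3.\n    optimizer = torch.optim.Adam(cova.parameters(), lr=5e-4, weight_decay=1e-3)\n    criterion = torch.nn.CrossEntropyLoss()\n    for epoch in range(num_epochs):\n        # 'others' elements are re-sampled each epoch to diversify contexts.\n        for screenshot, boxes, neighbors, labels in train_loader:\n            logits = cova(screenshot, boxes, neighbors)\n            loss = criterion(logits, labels)\n            optimizer.zero_grad()\n            loss.backward()\n            optimizer.step()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training, Inference and Evaluation",
"sec_num": "6.2"
},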
{
"text": "As shown in Table 1 , our method outperforms all baselines by a considerable margin especially for price prediction. CoVA learns visual features which are significantly better than the heuristic feature baseline that uses predefined tag, textual and visual features. Fig. 4 shows the importance of different heuristic based features in a webpage. We observe that a heuristic feature based method has similar performance to methods which don't use contextual features. Moreover, CoVA++ which also uses heuristic features, doesn't lead to statistically significant improvements. This shows that visual features learnt by CoVA are more general for tasks like price & title detection. Context information is particularly important for price (in comparison to title and image) since it's highly ambiguous and occurs in different locations with varying contexts (Fig. 2) . This is evident from the \u223c8.9% improvement in price accuracy compared to the Fast R-CNN*. Unless stated other-",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 267,
"end": 273,
"text": "Fig. 4",
"ref_id": "FIGREF3"
},
{
"start": 856,
"end": 864,
"text": "(Fig. 2)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.3"
},
{
"text": "Title Acc Image Acc Gogar et al. (2016) 1.8m 78.1 \u00b1 17.2 91.5 \u00b1 1.3 93.2 \u00b1 1.9 Random Forest using Heuristic features -87.4 \u00b1 10.4 93.5 \u00b1 5.3 97.2 \u00b1 3.8 Fast R-CNN* (Girshick, 2015) 0.5m 86.6 \u00b1 7.3 93.7 \u00b1 2.2 97.0 \u00b1 3.6 Fast R-CNN* + GCN 1.4m 90.0 \u00b1 11.0 95.4 \u00b1 1.5 98.2 \u00b1 2.8 Fast R-CNN* + Bi-LSTM 5.1m 92.9 \u00b1 4.6 94.0 \u00b1 2.1 97.6 \u00b1 3.6 CoVA 1.6m 95.5 \u00b1 3.8 95.7 \u00b1 1.2 98.8 \u00b1 1.5 CoVA++ 1.7m 96.1 \u00b1 3.0 96.7 \u00b1 2.2 99.6 \u00b1 0.3 We also obtained top-3 accuracy for CoVA, which are 98.6%, 99.4%, and 99.9% for price, title and image respectively.",
"cite_spans": [
{
"start": 20,
"end": 39,
"text": "Gogar et al. (2016)",
"ref_id": "BIBREF12"
},
{
"start": 165,
"end": 181,
"text": "(Girshick, 2015)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method Params Price Acc",
"sec_num": null
},
{
"text": "To validate our claim that visual features (without textual or HTML tag information) can capture cross-lingual information, we test our model on webpages in a foreign language. In particular, we evaluated CoVA (trained on English product webpages) using 100 Chinese product webpages spanning across 25 unique domains. CoVA achieves 92%, 90%, and 99% accuracy for product price, title, and image. It should be noted that image has the same accuracy as for English pages. This is expected since images have no language components that the model can attend to. Table 1 shows that attention significantly improves performance for all the three targets. As discussed earlier, only few of the contexts are important which are effectively learnt by Graph Attention Network (GAT). We observed that on average, \u223c20% of context elements were activated (score above 0.05 threshold) by GAT. We also study a multihead attention instead of single head following (Vaswani et al., 2017) , which didn't yield significant improvements in our case. Fig. 5 shows visualizations of attention scores learnt by GAT. Fig. 5(a) shows an example where title and image have more weight than other contexts when learning a context representation for price. This shows that attention is able to focus on important web elements and discards others. Similarly, Fig. 5(b) shows that price has a much higher score than other contexts for learning contextual representation for title.",
"cite_spans": [
{
"start": 948,
"end": 970,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [
{
"start": 558,
"end": 565,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1030,
"end": 1036,
"text": "Fig. 5",
"ref_id": null
},
{
"start": 1093,
"end": 1102,
"text": "Fig. 5(a)",
"ref_id": null
},
{
"start": 1330,
"end": 1339,
"text": "Fig. 5(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Cross-lingual Evaluation of CoVA",
"sec_num": "6.4"
},
{
"text": "Importance of Positional features: Table 2 shows that positional features can significantly improve accuracy for price, title, and image prediction. This also validates that for webpage OD, location and size of a bounding box carries significant information, making it different from classical OD. Dependence on number of neighbors in graph: Fig. 6 shows the variation in cross domain accuracy of CoVA with respect to the number of neighboring elements K. Note that having 0 context elements is equivalent to our baseline Fast R-CNN*. We observe that, unlike title and image, price accuracy can significantly be improved by considering larger contexts. This is due to the fact that price is highly ambiguous (Fig. 2) . We also study the graph construction described by where all nodes are considered in the neighborhood of a particular node. This significantly reduced the performance for price (90.7%) and title (92.7%).",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 42,
"text": "Table 2",
"ref_id": null
},
{
"start": 342,
"end": 348,
"text": "Fig. 6",
"ref_id": null
},
{
"start": 708,
"end": 716,
"text": "(Fig. 2)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Ablation Studies",
"sec_num": "7"
},
{
"text": "In this paper, we reformulated the problem of webpage IE (WIE) as a context-aware webpage object detection. We created a large-scale dataset for this task and is available publicly. We proposed CoVA Figure 5 : Attention Visualizations where red border denotes web element to be classified, and its contexts have green shade whose intensity denotes score. Price in (a) get much more score than other contexts. Title and image in (b) are scored higher than other contexts for price.",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 207,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion & Future Work",
"sec_num": "8"
},
{
"text": "Price Accuracy Title Accuracy Image Accuracy CoVA without positional features 89.2 \u00b1 10.3 91.9 \u00b1 1.4 95.9 \u00b1 1.8 CoVA 95.5 \u00b1 3.8 95.7 \u00b1 1.2 98.8 \u00b1 1.5 Table 2 : Importance of positional features in RN Figure 6 : Comparison of context size with accuracy which uses i) a graph representation of a webpage, ii) a Representation Network (RN) to learn visual representation for a web element, and iii) a Graph Attention Network (GAT) for contextual learning.",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 157,
"text": "Table 2",
"ref_id": null
},
{
"start": 200,
"end": 208,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "CoVA improves upon state-of-the-art results and newly created baselines by considerable margins. Our visualizations show that CoVA is able to attend to the most important contexts. In the future, we plan to adapt this method to other tasks such as identifying malicious web elements. Our works shows the importance of visual features of WIE which is traditionally overlooked. We hope that our work will motivate researchers in WIE to employ CV alongwith NLP techniques to solve this important problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "Jun Zhu, Zaiqing Nie, Ji-Rong Wen, Bo Zhang, andWei-YingMa. 2005. 2d conditional random fields for web information extraction. In Proceedings of the 22nd international conference on Machine learning, pages 1044-1051.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Wrapper approaches for web data extraction: A review",
"authors": [],
"year": 2017,
"venue": "2017 6th International Conference on Electrical Engineering and Informatics (ICEEI)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohd Amir Bin Mohd Azir and Kamsuriah Binti Ah- mad. 2017. Wrapper approaches for web data extrac- tion: A review. In 2017 6th International Conference on Electrical Engineering and Informatics (ICEEI), pages 1-6. IEEE.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Web page element classification based on visual features",
"authors": [
{
"first": "Radek",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "Ivana",
"middle": [],
"last": "Rudolfova",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "88",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radek Burget and Ivana Rudolfova. 2009. Web page element classification based on visual features. In 88",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "First Asian Conference on Intelligent Information and Database Systems",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "First Asian Conference on Intelligent Informa- tion and Database Systems, pages 67-72. IEEE.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Block-level link analysis",
"authors": [
{
"first": "Deng",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Xiaofei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Ji-Rong",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Wei-Ying",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deng Cai, Xiaofei He, Ji-Rong Wen, and Wei-Ying Ma. 2004. Block-level link analysis. In Proceedings of the 27th annual international ACM SIGIR confer- ence on Research and development in information retrieval, pages 440-447.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Vips: a vision-based page segmentation algorithm",
"authors": [
{
"first": "Deng",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Shipeng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ji-Rong",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "Wei-Ying",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deng Cai, Shipeng Yu, Ji-Rong Wen, and Wei-Ying Ma. 2003. Vips: a vision-based page segmentation algorithm.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A survey of web information extraction systems",
"authors": [
{
"first": "Chia-Hui",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [],
"last": "Kayed",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Moheb",
"suffix": ""
},
{
"first": "Khaled F",
"middle": [],
"last": "Girgis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shaalan",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE transactions on knowledge and data engineering",
"volume": "18",
"issue": "10",
"pages": "1411--1428",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chia-Hui Chang, Mohammed Kayed, Moheb R Girgis, and Khaled F Shaalan. 2006. A survey of web in- formation extraction systems. IEEE transactions on knowledge and data engineering, 18(10):1411-1428.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Iepad: information extraction based on pattern discovery",
"authors": [
{
"first": "Chia-Hui",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Shao-Chen",
"middle": [],
"last": "Lui",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 10th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "681--688",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chia-Hui Chang and Shao-Chen Lui. 2001. Iepad: in- formation extraction based on pattern discovery. In Proceedings of the 10th international conference on World Wide Web, pages 681-688.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adaptive web-page content identification",
"authors": [
{
"first": "John",
"middle": [],
"last": "Gibson",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Wellner",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Lubar",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 9th annual ACM international workshop on Web information and data management",
"volume": "",
"issue": "",
"pages": "105--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Gibson, Ben Wellner, and Susan Lubar. 2007. Adaptive web-page content identification. In Pro- ceedings of the 9th annual ACM international work- shop on Web information and data management, pages 105-112.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Fast r-cnn",
"authors": [
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE international conference on computer vision",
"volume": "",
"issue": "",
"pages": "1440--1448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ross Girshick. 2015. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 1440-1448.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Rich feature hierarchies for accurate object detection and semantic segmentation",
"authors": [
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
},
{
"first": "Jitendra",
"middle": [],
"last": "Malik",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "580--587",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ross Girshick, Jeff Donahue, Trevor Darrell, and Ji- tendra Malik. 2014. Rich feature hierarchies for ac- curate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580-587.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Deep neural networks for web page information extraction",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Gogar",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Hubacek",
"suffix": ""
}
],
"year": 2016,
"venue": "IFIP International Conference on Artificial Intelligence Applications and Innovations",
"volume": "",
"issue": "",
"pages": "154--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Gogar, Ondrej Hubacek, and Jan Sedivy. 2016. Deep neural networks for web page information ex- traction. In IFIP International Conference on Artifi- cial Intelligence Applications and Innovations, pages 154-163. Springer.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "From one tree to a forest: a unified solution for structured web data extraction",
"authors": [
{
"first": "Qiang",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Yanwei",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "775--784",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiang Hao, Rui Cai, Yanwei Pang, and Lei Zhang. 2011. From one tree to a forest: a unified solution for struc- tured web data extraction. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, pages 775- 784.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Mask r-cnn",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Georgia",
"middle": [],
"last": "Gkioxari",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE international conference on computer vision",
"volume": "",
"issue": "",
"pages": "2961--2969",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Georgia Gkioxari, Piotr Doll\u00e1r, and Ross Girshick. 2017. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961-2969.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770- 778.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Training products of experts by minimizing contrastive divergence",
"authors": [
{
"first": "E",
"middle": [],
"last": "Geoffrey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2002,
"venue": "Neural computation",
"volume": "14",
"issue": "8",
"pages": "1771--1800",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey E Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural com- putation, 14(8):1771-1800.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "One-shot object detection with co-attention and co-excitation",
"authors": [
{
"first": "-I",
"middle": [],
"last": "Ting",
"suffix": ""
},
{
"first": "Yi-Chen",
"middle": [],
"last": "Hsieh",
"suffix": ""
},
{
"first": "Hwann-Tzong",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Tyng-Luh",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "2725--2734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ting-I Hsieh, Yi-Chen Lo, Hwann-Tzong Chen, and Tyng-Luh Liu. 2019. One-shot object detection with co-attention and co-excitation. In Advances in Neural Information Processing Systems, pages 2725-2734.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Seunghyun Park, Sohee Yang, and Minjoon Seo. 2020. Spatial dependency parsing for semi-structured document information extraction",
"authors": [
{
"first": "Wonseok",
"middle": [],
"last": "Hwang",
"suffix": ""
},
{
"first": "Jinyeong",
"middle": [],
"last": "Yim",
"suffix": ""
},
{
"first": "Seunghyun",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Sohee",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Minjoon",
"middle": [],
"last": "Seo",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.00642"
]
},
"num": null,
"urls": [],
"raw_text": "Wonseok Hwang, Jinyeong Yim, Seunghyun Park, So- hee Yang, and Minjoon Seo. 2020. Spatial depen- dency parsing for semi-structured document informa- tion extraction. arXiv preprint arXiv:2005.00642.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Automatic extraction of textual elements from news web pages",
"authors": [
{
"first": "Hossam",
"middle": [],
"last": "Ibrahim",
"suffix": ""
},
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "Abdel-Rahim",
"middle": [],
"last": "Madany",
"suffix": ""
}
],
"year": 2008,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hossam Ibrahim, Kareem Darwish, and Abdel-Rahim Madany. 2008. Automatic extraction of textual ele- ments from news web pages. In LREC.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Ioffe",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1502.03167"
]
},
"num": null,
"urls": [],
"raw_text": "Sergey Ioffe and Christian Szegedy. 2015. Batch nor- malization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Web document text and images extraction using dom analysis and natural language processing",
"authors": [
{
"first": "Parag",
"middle": [
"Mulendra"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 9th ACM symposium on Document engineering",
"volume": "",
"issue": "",
"pages": "218--221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parag Mulendra Joshi and Sam Liu. 2009. Web docu- ment text and images extraction using dom analysis and natural language processing. In Proceedings of the 9th ACM symposium on Document engineering, pages 218-221.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Semisupervised classification with graph convolutional networks",
"authors": [
{
"first": "Thomas",
"middle": [
"N"
],
"last": "Kipf",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.02907"
]
},
"num": null,
"urls": [],
"raw_text": "Thomas N Kipf and Max Welling. 2016. Semi- supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Spatial contextaware network for salient object detection",
"authors": [
{
"first": "Yuqiu",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Mengyang",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Huchuan",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Xiuping",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Baocai",
"middle": [],
"last": "Yin",
"suffix": ""
}
],
"year": 2021,
"venue": "Pattern Recognition",
"volume": "114",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuqiu Kong, Mengyang Feng, Xin Li, Huchuan Lu, Xiuping Liu, and Baocai Yin. 2021. Spatial context- aware network for salient object detection. Pattern Recognition, 114:107867.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Web data extraction based on tag path clustering",
"authors": [
{
"first": "Gui",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zheng",
"middle": [
"Yu"
],
"last": "Li",
"suffix": ""
},
{
"first": "Zi",
"middle": [
"Yang"
],
"last": "Han",
"suffix": ""
},
{
"first": "Ping",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2013,
"venue": "Advanced Materials Research",
"volume": "756",
"issue": "",
"pages": "1590--1594",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gui Li, Cheng Chen, Zheng Yu Li, Zi Yang Han, and Ping Sun. 2013. Web data extraction based on tag path clustering. In Advanced Materials Research, volume 756, pages 1590-1594. Trans Tech Publ.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Kagnet: Knowledge-aware graph networks for commonsense reasoning",
"authors": [
{
"first": "Bill",
"middle": [
"Yuchen"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Xinyue",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jamin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.02151"
]
},
"num": null,
"urls": [],
"raw_text": "Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. Kagnet: Knowledge-aware graph net- works for commonsense reasoning. arXiv preprint arXiv:1909.02151.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Freedom: A transferable neural architecture for structured information extraction on web documents",
"authors": [
{
"first": "Bill",
"middle": [
"Yuchen"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Sheng",
"suffix": ""
},
{
"first": "Nguyen",
"middle": [],
"last": "Vo",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Tata",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining",
"volume": "",
"issue": "",
"pages": "1092--1102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill Yuchen Lin, Ying Sheng, Nguyen Vo, and Sandeep Tata. 2020. Freedom: A transferable neural architec- ture for structured information extraction on web doc- uments. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1092-1102.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A structured self-attentive sentence embedding",
"authors": [
{
"first": "Zhouhan",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Minwei",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Cicero",
"middle": [
"Nogueira"
],
"last": "dos Santos",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1703.03130"
]
},
"num": null,
"urls": [],
"raw_text": "Zhouhan Lin, Minwei Feng, Cicero Nogueira dos San- tos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. arXiv preprint arXiv:1703.03130.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Mining data records in web pages",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Grossman",
"suffix": ""
},
{
"first": "Yanhong",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "601--606",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu, Robert Grossman, and Yanhong Zhai. 2003. Mining data records in web pages. In Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 601- 606.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Graph convolution for multimodal information extraction from visually rich documents",
"authors": [
{
"first": "Xiaojing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Feiyu",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Qiong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Huasha",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.11279"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaojing Liu, Feiyu Gao, Qiong Zhang, and Huasha Zhao. 2019. Graph convolution for multimodal in- formation extraction from visually rich documents. arXiv preprint arXiv:1903.11279.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "OpenCeres: When open information extraction meets the semi-structured web",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Lockard",
"suffix": ""
},
{
"first": "Prashant",
"middle": [],
"last": "Shiralkar",
"suffix": ""
},
{
"first": "Xin Luna",
"middle": [],
"last": "Dong",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3047--3056",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1309"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Lockard, Prashant Shiralkar, and Xin Luna Dong. 2019. OpenCeres: When open information extrac- tion meets the semi-structured web. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3047-3056, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "An attention-based bilstm-crf approach to documentlevel chemical named entity recognition",
"authors": [
{
"first": "Ling",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Zhihao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Pei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hongfei",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Bioinformatics",
"volume": "34",
"issue": "8",
"pages": "1381--1388",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ling Luo, Zhihao Yang, Pei Yang, Yin Zhang, Lei Wang, Hongfei Lin, and Jian Wang. 2018. An attention-based bilstm-crf approach to document- level chemical named entity recognition. Bioinfor- matics, 34(8):1381-1388.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Jointly learning explainable rules for recommendation with knowledge graph",
"authors": [
{
"first": "Weizhi",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Woojeong",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Chenyang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yiqun",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shaoping",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2019,
"venue": "The World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "1210--1221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weizhi Ma, Min Zhang, Yue Cao, Woojeong Jin, Chenyang Wang, Yiqun Liu, Shaoping Ma, and Xi- ang Ren. 2019. Jointly learning explainable rules for recommendation with knowledge graph. In The World Wide Web Conference, pages 1210-1221.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Optical character recognition. International journal of recent technology and engineering (IJRTE)",
"authors": [
{
"first": "Ravina",
"middle": [],
"last": "Mithe",
"suffix": ""
},
{
"first": "Supriya",
"middle": [],
"last": "Indalkar",
"suffix": ""
},
{
"first": "Nilam",
"middle": [],
"last": "Divekar",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "2",
"issue": "",
"pages": "72--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ravina Mithe, Supriya Indalkar, and Nilam Divekar. 2013. Optical character recognition. Interna- tional journal of recent technology and engineering (IJRTE), 2(1):72-75.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Attention-based joint detection of object and semantic part",
"authors": [
{
"first": "Keval",
"middle": [],
"last": "Morabia",
"suffix": ""
},
{
"first": "Jatin",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Tara",
"middle": [],
"last": "Vijaykumar",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.02419"
]
},
"num": null,
"urls": [],
"raw_text": "Keval Morabia, Jatin Arora, and Tara Vijaykumar. 2020. Attention-based joint detection of object and seman- tic part. arXiv preprint arXiv:2007.02419.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Object detection and localization using local and global features",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Eaton",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Freeman",
"suffix": ""
}
],
"year": 2006,
"venue": "Toward Category-Level Object Recognition",
"volume": "",
"issue": "",
"pages": "382--400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Murphy, Antonio Torralba, Daniel Eaton, and William Freeman. 2006. Object detection and local- ization using local and global features. In Toward Category-Level Object Recognition, pages 382-400. Springer.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Stalker: Learning extraction rules for semistructured, web-based information sources",
"authors": [
{
"first": "Ion",
"middle": [],
"last": "Muslea",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Minton",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Knoblock",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of AAAI-98 Workshop on AI and Information Integration",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ion Muslea, Steve Minton, and Craig Knoblock. 1998. Stalker: Learning extraction rules for semistructured, web-based information sources. In Proceedings of AAAI-98 Workshop on AI and Information Integra- tion, pages 74-81. AAAI Press.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Yolov3: An incremental improvement",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Redmon",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.02767"
]
},
"num": null,
"urls": [],
"raw_text": "Joseph Redmon and Ali Farhadi. 2018. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A cnn-bilstm model for document-level sentiment analysis",
"authors": [
{
"first": "Maryem",
"middle": [],
"last": "Rhanoui",
"suffix": ""
},
{
"first": "Mounia",
"middle": [],
"last": "Mikram",
"suffix": ""
},
{
"first": "Siham",
"middle": [],
"last": "Yousfi",
"suffix": ""
},
{
"first": "Soukaina",
"middle": [],
"last": "Barzali",
"suffix": ""
}
],
"year": 2019,
"venue": "Machine Learning and Knowledge Extraction",
"volume": "1",
"issue": "3",
"pages": "832--847",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maryem Rhanoui, Mounia Mikram, Siham Yousfi, and Soukaina Barzali. 2019. A cnn-bilstm model for document-level sentiment analysis. Machine Learn- ing and Knowledge Extraction, 1(3):832-847.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Kuldip",
"middle": [
"K"
],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE transactions on Signal Processing",
"volume": "45",
"issue": "11",
"pages": "2673--2681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Schuster and Kuldip K Paliwal. 1997. Bidirec- tional recurrent neural networks. IEEE transactions on Signal Processing, 45(11):2673-2681.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Viper: augmenting automatic information extraction with visual perceptions",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Simon",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Lausen",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 14th ACM international conference on Information and knowledge management",
"volume": "",
"issue": "",
"pages": "381--388",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kai Simon and Georg Lausen. 2005. Viper: augmenting automatic information extraction with visual percep- tions. In Proceedings of the 14th ACM international conference on Information and knowledge manage- ment, pages 381-388.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Learning information extraction rules for semi-structured and free text",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine learning",
"volume": "34",
"issue": "1-3",
"pages": "233--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Soderland. 1999. Learning information extrac- tion rules for semi-structured and free text. Machine learning, 34(1-3):233-272.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Corpus conversion service: A machine learning platform to ingest documents at scale",
"authors": [
{
"first": "Peter",
"middle": [
"WJ"
],
"last": "Staar",
"suffix": ""
},
{
"first": "Michele",
"middle": [],
"last": "Dolfi",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Auer",
"suffix": ""
},
{
"first": "Costas",
"middle": [],
"last": "Bekas",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining",
"volume": "",
"issue": "",
"pages": "774--782",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter WJ Staar, Michele Dolfi, Christoph Auer, and Costas Bekas. 2018. Corpus conversion service: A machine learning platform to ingest documents at scale. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 774-782.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Selective search for object recognition",
"authors": [
{
"first": "Jasper",
"middle": [
"RR"
],
"last": "Uijlings",
"suffix": ""
},
{
"first": "Koen",
"middle": [
"EA"
],
"last": "Van De Sande",
"suffix": ""
},
{
"first": "Theo",
"middle": [],
"last": "Gevers",
"suffix": ""
},
{
"first": "Arnold",
"middle": [
"WM"
],
"last": "Smeulders",
"suffix": ""
}
],
"year": 2013,
"venue": "International journal of computer vision",
"volume": "104",
"issue": "2",
"pages": "154--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jasper RR Uijlings, Koen EA Van De Sande, Theo Gev- ers, and Arnold WM Smeulders. 2013. Selective search for object recognition. International journal of computer vision, 104(2):154-171.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Automated metadata and instance extraction from news web sites",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Vadrevu",
"suffix": ""
},
{
"first": "Saravanakumar",
"middle": [],
"last": "Nagarajan",
"suffix": ""
},
{
"first": "Fatih",
"middle": [],
"last": "Gelgi",
"suffix": ""
},
{
"first": "Hasan",
"middle": [],
"last": "Davulcu",
"suffix": ""
}
],
"year": 2005,
"venue": "The 2005 IEEE/WIC/ACM International Conference on Web Intelligence (WI'05)",
"volume": "",
"issue": "",
"pages": "38--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srinivas Vadrevu, Saravanakumar Nagarajan, Fatih Gelgi, and Hasan Davulcu. 2005. Automated meta- data and instance extraction from news web sites. In The 2005 IEEE/WIC/ACM International Conference on Web Intelligence (WI'05), pages 38-41. IEEE.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Graph attention networks",
"authors": [
{
"first": "Petar",
"middle": [],
"last": "Veli\u010dkovi\u0107",
"suffix": ""
},
{
"first": "Guillem",
"middle": [],
"last": "Cucurull",
"suffix": ""
},
{
"first": "Arantxa",
"middle": [],
"last": "Casanova",
"suffix": ""
},
{
"first": "Adriana",
"middle": [],
"last": "Romero",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Lio",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.10903"
]
},
"num": null,
"urls": [],
"raw_text": "Petar Veli\u010dkovi\u0107, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Deep reader: Information extraction from document images via relation extraction and natural language",
"authors": [
{
"first": "D",
"middle": [],
"last": "Vishwanath",
"suffix": ""
},
{
"first": "Rohit",
"middle": [],
"last": "Rahul",
"suffix": ""
},
{
"first": "Gunjan",
"middle": [],
"last": "Sehgal",
"suffix": ""
},
{
"first": "Arindam",
"middle": [],
"last": "Chowdhury",
"suffix": ""
},
{
"first": "Monika",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Lovekesh",
"middle": [],
"last": "Vig",
"suffix": ""
},
{
"first": "Gautam",
"middle": [],
"last": "Shroff",
"suffix": ""
},
{
"first": "Ashwin",
"middle": [],
"last": "Srinivasan",
"suffix": ""
}
],
"year": 2018,
"venue": "Asian Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "186--201",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D Vishwanath, Rohit Rahul, Gunjan Sehgal, Arindam Chowdhury, Monika Sharma, Lovekesh Vig, Gau- tam Shroff, Ashwin Srinivasan, et al. 2018. Deep reader: Information extraction from document im- ages via relation extraction and natural language. In Asian Conference on Computer Vision, pages 186- 201. Springer.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Automatic web content extraction by combination of learning and grouping",
"authors": [
{
"first": "Shanchan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jerry",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Fan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 24th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "1264--1274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shanchan Wu, Jerry Liu, and Jian Fan. 2015. Automatic web content extraction by combination of learning and grouping. In Proceedings of the 24th interna- tional conference on World Wide Web, pages 1264- 1274.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Image captioning with semantic attention",
"authors": [
{
"first": "Quanzeng",
"middle": [],
"last": "You",
"suffix": ""
},
{
"first": "Hailin",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Zhaowen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Jiebo",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "4651--4659",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quanzeng You, Hailin Jin, Zhaowen Wang, Chen Fang, and Jiebo Luo. 2016. Image captioning with seman- tic attention. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4651-4659.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Simplified dom trees for transferable attribute extraction from the web",
"authors": [
{
"first": "Yichao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ying",
"middle": [],
"last": "Sheng",
"suffix": ""
},
{
"first": "Nguyen",
"middle": [],
"last": "Vo",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Edmonds",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Tata",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2101.02415"
]
},
"num": null,
"urls": [],
"raw_text": "Yichao Zhou, Ying Sheng, Nguyen Vo, Nick Edmonds, and Sandeep Tata. 2021. Simplified dom trees for transferable attribute extraction from the web. arXiv preprint arXiv:2101.02415.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "A person can detect the element for product price, title, and image, w/o knowing (a) Arabic or (b) Chinese",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Example webpage showing multiple possible prices (red), but relatively fewer possible title (green) or image (purple)",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "CoVA end-to-end training pipeline (for a single web element). CoVA takes a webpage screenshot and list of bounding boxes along with K neighbors for each web element (obtained from DOM). RN learns visual representation (v 0 ) while GAT learns contextual representation (c 0 ) from its neighbor's visual representations.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "Gini impurity-based importance of features in RF",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"text": "Cross Domain Accuracy (mean \u00b1 standard deviation) for 5-fold cross validation. \u223c6.3% improvement in comparison to Fast RCNN*. CoVA outperforms Fast RCNN* with Bi-LSTM by \u223c2.6% with much fewer number of parameters while also yielding interpretable results.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>wise, we will discuss results with respect to price</td></tr><tr><td>accuracy. We observe that CoVA yields stable re-</td></tr><tr><td>sults across folds (\u223c3.5% reduction in standard</td></tr><tr><td>deviation). This shows that CoVA learns features</td></tr><tr><td>which are generalizable and which have less depen-</td></tr><tr><td>dence on the training data. Using GCN with Fast</td></tr><tr><td>R-CNN* leads to unstable results with 11% stan-</td></tr><tr><td>dard deviation while yielding a 3.4% improvement</td></tr><tr><td>over Fast R-CNN*. Fast R-CNN* with Bi-LSTM is</td></tr><tr><td>able to summarize the contextual features by yield-</td></tr><tr><td>ing a</td></tr></table>",
"num": null
}
}
}
}