{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:52:41.359267Z"
},
"title": "xER: An Explainable Model for Entity Resolution using an Efficient Solution for the Clique Partitioning Problem",
"authors": [
{
"first": "Samhita",
"middle": [],
"last": "Vadrevu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {}
},
"email": "samhita3@illinois.edu"
},
{
"first": "Wen-Mei",
"middle": [],
"last": "Hwu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {}
},
"email": "w-hwu@illinois.edu"
},
{
"first": "Rakesh",
"middle": [],
"last": "Nagi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {}
},
"email": "nagi@illinois.edu"
},
{
"first": "Jinjun",
"middle": [],
"last": "Xiong",
"suffix": "",
"affiliation": {},
"email": "jinjun@us.ibm.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we propose a global, selfexplainable solution to solve a prominent NLP problem: Entity Resolution (ER). We formulate ER as a graph partitioning problem. Every mention of a real-world entity is represented by a node in the graph, and the pairwise similarity scores between the mentions are used to associate these nodes to exactly one clique, which represents a real-world entity in the ER domain. In this paper, we use Clique Partitioning Problem (CPP), which is an Integer Program (IP) to formulate ER as a graph partitioning problem and then highlight the explainable nature of this method. Since CPP is NP-Hard, we introduce an efficient solution procedure, the xER algorithm, to solve CPP as a combination of finding maximal cliques in the graph and then performing generalized set packing using a novel formulation. We discuss the advantages of using xER over the traditional methods and provide the computational experiments and results of applying this method to ER data sets.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we propose a global, selfexplainable solution to solve a prominent NLP problem: Entity Resolution (ER). We formulate ER as a graph partitioning problem. Every mention of a real-world entity is represented by a node in the graph, and the pairwise similarity scores between the mentions are used to associate these nodes to exactly one clique, which represents a real-world entity in the ER domain. In this paper, we use Clique Partitioning Problem (CPP), which is an Integer Program (IP) to formulate ER as a graph partitioning problem and then highlight the explainable nature of this method. Since CPP is NP-Hard, we introduce an efficient solution procedure, the xER algorithm, to solve CPP as a combination of finding maximal cliques in the graph and then performing generalized set packing using a novel formulation. We discuss the advantages of using xER over the traditional methods and provide the computational experiments and results of applying this method to ER data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Entity Resolution (ER) is a prominent NLP problem, also referred to as co-reference resolution, de-duplication and record linkage, depending on the the problem set up. Irrespective of the name, the objective is to combine and cluster multiple mentions of a real-world entity from various data sources into their respective real-world entities and remove duplicates. Various techniques such as clustering (Aslam et al., 2004) , (Saeedi et al., 2017) , rule-based methods (Aum\u00fcller and Rahm, 2009) , mathematical programming, and combinatorial optimization (Tauer et al., 2019) have previously been applied to ER. In this paper, we formulate and solve ER as a graph partitioning problem.",
"cite_spans": [
{
"start": 404,
"end": 424,
"text": "(Aslam et al., 2004)",
"ref_id": "BIBREF3"
},
{
"start": 427,
"end": 448,
"text": "(Saeedi et al., 2017)",
"ref_id": "BIBREF40"
},
{
"start": 470,
"end": 495,
"text": "(Aum\u00fcller and Rahm, 2009)",
"ref_id": "BIBREF4"
},
{
"start": 555,
"end": 575,
"text": "(Tauer et al., 2019)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Representing ER as a graph partitioning problem The transformation from the real-world ER problem domain to the mathematical Integer Programming (IP) formulation setup is essential to understand the model's explainable nature and the solution procedure. A node in the graph represents a mention in the ER domain. An edge between any two nodes has a weight associated with it, representing the similarity score between the two mentions in consideration. This similarity score indicates the probability that these mentions are associated with the same entity. The goal is to ensure that based on the weights, the nodes are optimally allotted to their respective clusters. From a combinatorial perspective, this problem is known as the Clique Partitioning Problem (CPP). A clique is a complete subgraph in which all its nodes are pairwise connected. The weight of a clique is defined as the sum of all its edges' weights. The objective of this mathematical formulation is to find disjoint cliques in the graph such that the total weight of all the cliques is maximized, which, in the ER domain, translates to associating each mention to a single real-world entity with the highest probability association. The constraints in this mathematical formulation enforce that a particular node is mapped to just one clique and ensure that the mentions' transitivity conditions are obeyed. Bhattacharya and Getoor (2004) was one of the earlier papers that formulated ER as a graphical problem and Bansal et al. (2004) proposed a correlation clustering method for the graphical problem. ER was also approached as a graph partitioning problem in (Nicolae and Nicolae, 2006) , (Chen and Ji, 2009) , (Chen and Ji, 2010) and the CPP approach outperformed other solution methods for ER (Finkel et al., 2005) , (Klenner and Ailloud, 2009) . Tauer et al. (2019) formulated ER as CPP, where an incremental graph partitioning approach was applied and solved using a heuristic. Lokhande et al. (2020) formulated ER as a set packing problem by considering the sets of all possible combinations of mentions and then choosing the best combination, based on the weights of the sets. ER has also been approached as a clustering problem. Saeedi et al. (2017) conducted an extensive survey on the clustering methods that had been applied to the entity resolution problem. von Luxburg (2007) solved ER as a spectral graph clustering problem, which is based on the graph's Laplacian matrix. Star Clustering (Aslam et al., 2004) formalizes clustering as graph covering and assigns each node to the highest probabilistic cluster. k-means is also a common technique to solve ER as a clustering problem. However, the mathematical formulation based methods come with a guarantee of optimality. Furthermore, it is easy to obtain an upper bound to these problems by relaxing the integer constraints. These upper bounds provide a guarantee on any feasible solution. In typical clustering algorithms, the number of clusters to produce in the output needs to be provided upfront, while it is decided by the model intrinsically in the CPP framework. The long convergence times and the iterations pose a disadvantage for them to be used as a solution technique for entity resolution (Saeedi et al., 2017) . 
Moreover, from an explainability perspective, in the formulation-based methods proposed in this paper, the explanation is substantiated with mathematical guarantees, while the clustering-based approaches lack this mathematical precision and their heuristic nature further confounds explainability. Ribeiro et al. (2016) , Ribeiro et al. (2018) , Letham et al. (2015) and Choudhary et al. (2018) have proposed explainable systems for ER using local and if-then-else based global explanations. Ebaid et al. (2019) present a tool that provides explanations at different granularity levels.",
"cite_spans": [
{
"start": 1378,
"end": 1408,
"text": "Bhattacharya and Getoor (2004)",
"ref_id": "BIBREF8"
},
{
"start": 1485,
"end": 1505,
"text": "Bansal et al. (2004)",
"ref_id": "BIBREF5"
},
{
"start": 1632,
"end": 1659,
"text": "(Nicolae and Nicolae, 2006)",
"ref_id": "BIBREF35"
},
{
"start": 1662,
"end": 1681,
"text": "(Chen and Ji, 2009)",
"ref_id": "BIBREF10"
},
{
"start": 1684,
"end": 1703,
"text": "(Chen and Ji, 2010)",
"ref_id": "BIBREF11"
},
{
"start": 1768,
"end": 1789,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF17"
},
{
"start": 1792,
"end": 1819,
"text": "(Klenner and Ailloud, 2009)",
"ref_id": "BIBREF28"
},
{
"start": 1822,
"end": 1841,
"text": "Tauer et al. (2019)",
"ref_id": "BIBREF41"
},
{
"start": 2209,
"end": 2229,
"text": "Saeedi et al. (2017)",
"ref_id": "BIBREF40"
},
{
"start": 2346,
"end": 2360,
"text": "Luxburg (2007)",
"ref_id": null
},
{
"start": 2464,
"end": 2495,
"text": "Clustering (Aslam et al., 2004)",
"ref_id": null
},
{
"start": 3239,
"end": 3260,
"text": "(Saeedi et al., 2017)",
"ref_id": "BIBREF40"
},
{
"start": 3559,
"end": 3580,
"text": "Ribeiro et al. (2016)",
"ref_id": "BIBREF36"
},
{
"start": 3583,
"end": 3604,
"text": "Ribeiro et al. (2018)",
"ref_id": "BIBREF37"
},
{
"start": 3607,
"end": 3627,
"text": "Letham et al. (2015)",
"ref_id": "BIBREF32"
},
{
"start": 3632,
"end": 3655,
"text": "Choudhary et al. (2018)",
"ref_id": "BIBREF12"
},
{
"start": 3753,
"end": 3772,
"text": "Ebaid et al. (2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Since CPP is NP-hard (Gr\u00f6tschel and Wakabayashi, 1989 ), a novel two-phase solution is proposed, in this paper, to solve CPP optimally. This solution method can be easily accelerated and scaled to handle large-sized datasets. As a part of this two-phased approach, new and creative formulations for the generalized set packing problem are also proposed. The formulations and the approach to obtain the optimal solution provide a mathemati-cal guarantee on the output, and the results are easily interpretable and explainable. The constraints and objective function mathematically support the explanation behind the predicted output.",
"cite_spans": [
{
"start": 21,
"end": 53,
"text": "(Gr\u00f6tschel and Wakabayashi, 1989",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. In Section 2, entity resolution is formulated as CPP. In Section 3, explainability and interpretability of this method is discussed. Section 4 then introduces the two-phase solution approach proposed for solving the NP-hard CPP. Sections 5 and 6 go over the computational experiments and the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As discussed in Section 1, the entity resolution problem is transformed to a graph where each mention is represented by nodes and the weight on an edge between the nodes is the similarity score between the mentions. To obtain the pairwise similarity scores, we use an open-source entity resolution library called Dedupe (Gregg and Eder, 2019) , which applies blocking and a logistic regression based model to obtain the similarity scores between mentions. See Section 5 for more details about this.",
"cite_spans": [
{
"start": 320,
"end": 342,
"text": "(Gregg and Eder, 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mathematical Formulation of CPP",
"sec_num": "2"
},
{
"text": "In this section, the graph partitioning setup is formally represented by a mathematical formulation. Let i, j (i < j) be two nodes in the graph (representing two mentions) and w ij be the weight of the edge between these nodes. x ij is a binary variable that denotes whether i, j are associated or co-referent (belong to the same clique).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mathematical Formulation of CPP",
"sec_num": "2"
},
{
"text": "x ij = 1 if nodes i, j are associated 0 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mathematical Formulation of CPP",
"sec_num": "2"
},
{
"text": "The \"traditional\" math formulation of CPP is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mathematical Formulation of CPP",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "CP P (w) = max N \u22121 i=1 N j=i+1 wijxij; s.t.",
"eq_num": "(1)"
}
],
"section": "Mathematical Formulation of CPP",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x_{ij} + x_{ik} - x_{jk} \\le 1, \\; \\forall 1 \\le i < j < k \\le N, \\quad (2) \\qquad -x_{ij} + x_{ik} + x_{jk} \\le 1, \\; \\forall 1 \\le i < j < k \\le N, \\quad (3) \\qquad x_{ij} - x_{ik} + x_{jk} \\le 1, \\; \\forall 1 \\le i < j < k \\le N, \\quad (4) \\qquad x_{ij} \\in \\{0, 1\\}, \\; \\forall 1 \\le i < j \\le N.",
"eq_num": "(5)"
}
],
"section": "Mathematical Formulation of CPP",
"sec_num": "2"
},
{
"text": "Constraints (2), (3), and (4) are the transitivity constraints enforced among the nodes. These three constraints ensure that if mention a is the same as b and b is the same as c, then it must also be that a is the same as c. The graph is assumed to be directed to avoid duplication of cliques and memory exhaustion. An optimal solution to this problem results in the best possible solution to the ER for the given similarity scores. However, due to cubic number of constraints, this particular formulation for CPP, does not scale with the number of nodes. Hence, heuristics are prevalent to find an approximate solution to CPP; see Section 4 for details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mathematical Formulation of CPP",
"sec_num": "2"
},
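For concreteness, a minimal sketch of the traditional CPP formulation (1)-(5) in an off-the-shelf solver is given below. It uses gurobipy (the paper later mentions Gurobi for Phase 2); the function name and the weight dictionary `w` are illustrative, not the paper's code.

```python
# Minimal sketch of the traditional CPP integer program (eqs. 1-5).
# Assumes gurobipy; `w` maps pairs (i, j) with i < j to similarity scores.
import gurobipy as gp
from gurobipy import GRB

def solve_cpp(w, n):
    m = gp.Model("cpp")
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    x = m.addVars(pairs, vtype=GRB.BINARY, name="x")            # eq. (5)
    m.setObjective(gp.quicksum(w.get(p, 0.0) * x[p] for p in pairs),
                   GRB.MAXIMIZE)                                 # eq. (1)
    # Transitivity constraints (2)-(4) for every triple i < j < k: this
    # O(N^3) loop is exactly why the formulation does not scale.
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                m.addConstr(x[i, j] + x[i, k] - x[j, k] <= 1)    # eq. (2)
                m.addConstr(-x[i, j] + x[i, k] + x[j, k] <= 1)   # eq. (3)
                m.addConstr(x[i, j] - x[i, k] + x[j, k] <= 1)    # eq. (4)
    m.optimize()
    return {p: x[p].X for p in pairs}
```

The cubic constraint loop makes memory use and solve time blow up quickly, which is what motivates the two-phase xER approach described in Section 4.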
{
"text": "Before we discuss our solution approaches, the explainable nature of this method is highlighted. The definitions of explainability have been studied in various works (Guidotti et al., 2019) , (Arya et al., 2019) . As defined in Danilevsky et al. (2020) and Guidotti et al. (2019) , understanding the level of explainability of models can be interpreted as outcome explanation problems, where the emphasis lies in understanding the rationale behind the prediction of a specific output or all outputs in general. In this paper, the definitions and categorizations of explanations are based on the definitions in Danilevsky et al. (2020) . Two major categorizations of explanations are emphasized. The first is based on the explanation process's target set, and divided into two types: Local and Global. Suppose the explanation is for a particular individual output. In that case, the explanation type is referred to as Local. On the other hand, if the explanation is for the whole model in itself, then it is a Global explanation. The second categorization is based on the origin of the explanation process. If the explanation is from the prediction process itself, then it belongs to the Self Explaining or the Directly Interpretable category (Arya et al., 2019) . Otherwise, if post prediction processing is required to explain the output, it can be categorized as Post-hoc explanation. As seen in Tauer et al. (2019) , mathematical formulation based methods have a notion of optimality infused in the problems. The design of NLP problems like ER as mathematical formulations ensures that various constraints are met simultaneously, and hence making the output and the prediction process trustworthy and reliable. Since the constraints and the objective function are enforced into the mathematical formulation, the explanation behind any output comes directly from the model itself, making it a self-explainable model model. Moreover, the explanation behind any output is only dependent on the formulation and not on the output itself. This makes the model globally explainable. Therefore, by applying an efficient approach based on mathematical formulations, the solution method discussed in this paper presents an easily interpretable and explainable model for ER.",
"cite_spans": [
{
"start": 166,
"end": 189,
"text": "(Guidotti et al., 2019)",
"ref_id": "BIBREF24"
},
{
"start": 192,
"end": 211,
"text": "(Arya et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 228,
"end": 252,
"text": "Danilevsky et al. (2020)",
"ref_id": null
},
{
"start": 257,
"end": 279,
"text": "Guidotti et al. (2019)",
"ref_id": "BIBREF24"
},
{
"start": 610,
"end": 634,
"text": "Danilevsky et al. (2020)",
"ref_id": null
},
{
"start": 1242,
"end": 1261,
"text": "(Arya et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 1398,
"end": 1417,
"text": "Tauer et al. (2019)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Explainability and Interpretability",
"sec_num": "3"
},
{
"text": "As discussed in Section 2, CPP is NP-hard. In this paper, an efficient and scalable solution approach is proposed to solve the CPP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Approach for CPP",
"sec_num": "4"
},
{
"text": "The solution procedure is divided into two phases: Phase 1 involves finding the maximal cliques in the graph. A maximal clique is a clique that is not a sub-clique of a larger clique (Akkoyunlu, 1973) . For Phase 2, we propose a novel generalized set packing formulation that not only ensures that each node belongs to a single clique, but it is able to break larger cliques into smaller sub-cliques if necessary. The formulation enables to find the optimal combinations of the cliques, that maximize the weight of the system. The algorithm (Phase 1 + Phase 2), is referred to as xER (Explainable ER).",
"cite_spans": [
{
"start": 183,
"end": 200,
"text": "(Akkoyunlu, 1973)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Solution Approach for CPP",
"sec_num": "4"
},
{
"text": "In this phase, all the maximal cliques in a graph are found and stored. There are many approaches to find maximal cliques, but the most prominent and efficient approach is the Bron-Kerbosch (BK) algorithm (Bron and Kerbosch, 1973) . There are multiple variants of BK, and in this paper, we adopt the pivot-based BK algorithm with node ordering. For simplicity, a recursion-based sequential implementation is used for BK. However, a scalable GPUaccelerated implementation for maximal clique listing is currently in progress based on (Almasri et al., 2021) .",
"cite_spans": [
{
"start": 205,
"end": 230,
"text": "(Bron and Kerbosch, 1973)",
"ref_id": "BIBREF9"
},
{
"start": 532,
"end": 554,
"text": "(Almasri et al., 2021)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phase 1: Finding Maximal Cliques",
"sec_num": "4.1"
},
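As an illustration, Phase 1 can be reproduced with any maximal clique enumerator; the sketch below uses networkx, whose find_cliques routine implements pivot-based Bron-Kerbosch. The graph construction and the score dictionary are illustrative and not the paper's implementation.

```python
# Sketch of Phase 1: enumerate maximal cliques with pivot-based Bron-Kerbosch.
# networkx.find_cliques implements BK with pivoting; the inputs are illustrative.
import networkx as nx

def maximal_cliques(similarity_scores):
    """similarity_scores: dict mapping (i, j) pairs to edge weights that
    survived blocking."""
    G = nx.Graph()
    for (i, j), w in similarity_scores.items():
        G.add_edge(i, j, weight=w)
    # Each clique is a list of node ids; the cliques may overlap.
    return list(nx.find_cliques(G))

# Example: a triangle plus a pendant edge yields two maximal cliques.
scores = {(0, 1): 0.8, (0, 2): 0.6, (1, 2): 0.7, (2, 3): 0.4}
print(maximal_cliques(scores))  # e.g. [[0, 1, 2], [2, 3]]
```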
{
"text": "The output of Phase 1 is a list of cliques that are not disjoint. This phase aims to find the optimal combination of these cliques such that the cliques are disjoint and the total weight of all these disjoint cliques is maximized. Thus, Phase 2 is a maximum weighted Set Packing Problem (SPP). The original SPP is formulated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Phase 2: Set Packing",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(SPP) max W T x (6) s.t Ax = 1 (7) x \u2208 {0, 1}.",
"eq_num": "(8)"
}
],
"section": "Phase 2: Set Packing",
"sec_num": "4.2"
},
{
"text": "Here, S is the list of sets (cliques) and V is the set of nodes in the graph. W denotes the weight vector, where each entry is the weight of a clique. The binary variable x t denotes if a set t \u2208 S is chosen or not, A : V \u00d7 S is the incidence matrix indicating the presence of a node in a set. a it \u2208 A is 1 if node i \u2208 V is in the set t \u2208 S and 0, otherwise. The formulation of the original set packing problem is designed to choose the optimal packing of sets that maximizes the system's overall weight. Multiple solution procedures have been developed to solve this set packing problem, and these procedures can be categorized as either exact or approximate algorithms. Rossi and Smriglio (2001) proposed a branch-and-cut approach for solving the SPP. Landete et al. 2013proposed alternate formulations for SPP in higher dimensions and then added valid inequalities that were facets to the lifted polytope. Kwon et al. (2008) and Kolokolov and Zaozerskaya (2009) also proposed new facets that strengthen the relaxed formulations of SPP. Li et al. (2020) encoded SPP as a maximum weighted independent set and then used a Diversion Local Search based on the Weighted Configuration Checking (DLSWCC) algorithm to solve it. Since SPP is NPhard (Garey and Johnson, 2009), many heuristics have also been proposed to obtain a solution for SPP in a reasonable amount of time. R\u00f6nnqvist (1995) proposed a Lagrangian relaxation based method and Delorme et al. 2004used a greedy randomized adaptive search procedure (GRASP) to solve SPP. proposed an ant colony heuristic for SPP.",
"cite_spans": [
{
"start": 673,
"end": 698,
"text": "Rossi and Smriglio (2001)",
"ref_id": "BIBREF38"
},
{
"start": 910,
"end": 928,
"text": "Kwon et al. (2008)",
"ref_id": "BIBREF30"
},
{
"start": 933,
"end": 965,
"text": "Kolokolov and Zaozerskaya (2009)",
"ref_id": "BIBREF29"
},
{
"start": 1040,
"end": 1056,
"text": "Li et al. (2020)",
"ref_id": "BIBREF33"
},
{
"start": 1371,
"end": 1387,
"text": "R\u00f6nnqvist (1995)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Phase 2: Set Packing",
"sec_num": "4.2"
},
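A minimal sketch of the traditional SPP (6)-(8), using the notation above (weight vector W, incidence information A), is shown below in gurobipy; the function name and inputs are illustrative, and the equality constraint mirrors (7) exactly as written.

```python
# Sketch of the traditional set packing IP (eqs. 6-8): pick whole cliques so
# that every node is covered exactly once and total weight is maximized.
import gurobipy as gp
from gurobipy import GRB

def solve_spp(cliques, weights, nodes):
    """cliques: list of node lists; weights: one weight per clique."""
    m = gp.Model("spp")
    T = range(len(cliques))
    x = m.addVars(T, vtype=GRB.BINARY, name="x")                   # eq. (8)
    m.setObjective(gp.quicksum(weights[t] * x[t] for t in T),
                   GRB.MAXIMIZE)                                   # eq. (6)
    for i in nodes:  # the Ax = 1 rows of eq. (7)
        m.addConstr(gp.quicksum(x[t] for t in T if i in cliques[t]) == 1)
    m.optimize()
    return [t for t in T if x[t].X > 0.5]
```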
{
"text": "Lokhande et al. (2020) has recently formulated ER as a set packing problem. All possible combinations of groups of mentions are given as an input to the SPP. Each of these groups is referred to as a hypothesis. Every hypothesis has a weight associated with it, which is computed as the sum of weights on a pair of nodes in that hypothesis. The best combination of the sets is chosen based on the weights. A major drawback of formulating and solving ER as a traditional set packing problem is the huge input size even for considerably small graphs. Table 1 shows a comparison between the number of cliques (|C|) and the number of maximal cliques (|MC|) in small-sized graphs, with number of edges denoted as |E|. The number of maximal cliques is significantly less than the total number of cliques. The number of all the cliques in the graph grows exponentially, much faster than the number of maximal cliques as the graph's size increases. In this paper, our proposed formulation for set packing can break a large set into smaller ones if required. Therefore, it only needs the maximal cliques as an input, contrary to SPP, which requires all the cliques as an input. 38 147 70 528 8 38 203 101 801 8 38 379 433 5619 13 46 223 87 2466 28 46 317 162 3264 20 46 556 829 17114 21 ",
"cite_spans": [],
"ref_spans": [
{
"start": 548,
"end": 555,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Phase 2: Set Packing",
"sec_num": "4.2"
},
{
"text": "As discussed in Section 4.2, the formulation of the original set packing problem is designed to choose the combination of sets that are disjoint and maximize the problem's overall weight. Thus, it requires the power set of cliques as an input. In this paper, the traditional set packing formulation is modified to fit the ER problem's requirements and made it more efficient and scalable to handle large datasets. Our novel formulation for set packing is introduced in Section 4.2 requires a much smaller input size. The formulation itself is enabled to carve out sub-cliques of a larger clique while keeping them disjoint. Eventually, the same optimal solution would be found, but the difference is in the manageable input size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed SPP Formulation",
"sec_num": "4.2.1"
},
{
"text": "Notation: Here, K is the total number of maximal cliques in the input. Each set of index k, is denoted by S k (cliques and sets are used interchangeably to accommodate the notation of both the traditional set packing and the new proposed formulation). The inputs to the problem is a set of incidence matrices {A k } corresponding to each set S k , and W , the weight matrix of arcs in the original graph. The graph is directed, and an edge can only exist between two nodes i, j, with i < j and weight W ij . Each set can be broken down into multiple partitions, and M is the upper bound on the total number of partitions any set can be broken down into. The index for each partition of a set is m and is local to a set S k , where 0 \u2264 m \u2264 M \u2212 1. z ij denotes the connection between two nodes i, j in the optimal solution and y imk denotes if node i is assigned to partition m of set S k . Decision Variables: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed SPP Formulation",
"sec_num": "4.2.1"
},
{
"text": "y imk = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed SPP Formulation",
"sec_num": "4.2.1"
},
{
"text": "The new set packing formulation is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quadratic Set Packing",
"sec_num": "4.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(QSP) max N \u22121 i=0 N \u22121 j=i+1 Wijzij; s.t. (9) zij \u2212 k m y imk \u00d7 y jmk = 0, \u2200i, j \u2208 V,",
"eq_num": "(10)"
}
],
"section": "Quadratic Set Packing",
"sec_num": "4.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\sum_{k} \\sum_{m} y_{imk} \\le 1, \\; \\forall i \\in V,",
"eq_num": "(11)"
}
],
"section": "Quadratic Set Packing",
"sec_num": "4.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "0 \u2264 zij \u2264 1, y imk \u2208 {0, 1}, \u2200i, j \u2208 V, m \u2208 M, k \u2208 K.",
"eq_num": "(12)"
}
],
"section": "Quadratic Set Packing",
"sec_num": "4.2.2"
},
{
"text": "QSP stands for Quadratic Set Packing, deriving the name from the quadratic nature of the constraints. It can be observed that the notation of the variables in this formulation is different from the traditional set packing formulation. In the traditional set packing formulation, the decision variable is the binary variable x t , denoting the presence of a set t in the optimal solution. However, in QSP, the decision variable y imk denotes the presence of a node i in the partition m of set S k . If a node i from set S k should belong to partition m, the value of y imk = 1 and 0 otherwise. This shows that y imk is modified to remove nodes from the maximal cliques if necessary, eliminating the need to provide the power set of the maximal cliques as an input to the original SPP formulation. As mentioned before, this ordering avoids duplication of nodes and saves memory. Moreover, due to the nature of the formulation, even though z ij is not explicitly assigned to be an integral solution, solving the QSP optimally results in an integer solution for z ij . An off-the-shelf optimization solver, Gurobi (Gurobi Optimization, 2021) was used to solve the problem optimally. z ij is used to compute the precision, recall and the F1 scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quadratic Set Packing",
"sec_num": "4.2.2"
},
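For illustration, a sketch of QSP (9)-(12) in gurobipy is given below. The bilinear equality in constraint (10) is passed to the solver directly, which in Gurobi 9+ requires the NonConvex parameter; the function and input names are assumptions for the sketch rather than the paper's implementation.

```python
# Sketch of QSP (eqs. 9-12): y[i, m, k] assigns node i to partition m of
# maximal clique k; z[i, j] is recovered through the bilinear constraint (10).
import gurobipy as gp
from gurobipy import GRB

def solve_qsp(maximal_cliques, W, nodes, M=3):
    m = gp.Model("qsp")
    m.Params.NonConvex = 2  # allow the bilinear equality in constraint (10)
    y = m.addVars(
        [(i, p, k) for k, clq in enumerate(maximal_cliques)
         for p in range(M) for i in clq],
        vtype=GRB.BINARY, name="y")
    pairs = [(i, j) for i in nodes for j in nodes if i < j]
    z = m.addVars(pairs, lb=0.0, ub=1.0, name="z")              # eq. (12)
    m.setObjective(gp.quicksum(W.get(p, 0.0) * z[p] for p in pairs),
                   GRB.MAXIMIZE)                                # eq. (9)
    for (i, j) in pairs:
        m.addConstr(z[i, j] == gp.quicksum(                     # eq. (10)
            y[i, p, k] * y[j, p, k]
            for k, clq in enumerate(maximal_cliques)
            if i in clq and j in clq for p in range(M)))
    for i in nodes:                                             # eq. (11)
        m.addConstr(gp.quicksum(
            y[i, p, k] for k, clq in enumerate(maximal_cliques)
            if i in clq for p in range(M)) <= 1)
    m.optimize()
    return {p: z[p].X for p in pairs}
```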
{
"text": "Result: Resolved datasets with no duplicate mentions Step 1 : Perform blocking and compute pairwise similarity scores ( \u00a75);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: xER Algorithm",
"sec_num": null
},
{
"text": "Step 2 : Construct a directed graph with the mentions as nodes and similarity scores as weights on the edges. ( \u00a74);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: xER Algorithm",
"sec_num": null
},
{
"text": "Step 3 : Find maximal cliques in the graph using BK ( \u00a74.1);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: xER Algorithm",
"sec_num": null
},
{
"text": "Step 4 : Perform Set Packing using the QSP formulation ( \u00a74);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: xER Algorithm",
"sec_num": null
},
{
"text": "Step 5 : Use the output of z to compute precision, recall and F1 ( \u00a75);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: xER Algorithm",
"sec_num": null
},
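Chained together, Steps 1-5 amount to a short driver. The sketch below reuses the illustrative helpers from the earlier sketches (maximal_cliques, solve_qsp) and computes standard pairwise precision, recall, and F1 from the z output; the metric definition is assumed here, not taken from the paper's code.

```python
# Sketch of the end-to-end xER driver (Algorithm 1, Steps 3-5) plus pairwise
# precision/recall/F1 computed from z; helpers come from the earlier sketches.
def xer_pipeline(similarity_scores, nodes, gold_pairs, M=3):
    cliques = maximal_cliques(similarity_scores)          # Step 3 (Phase 1)
    z = solve_qsp(cliques, similarity_scores, nodes, M)   # Step 4 (Phase 2)
    predicted = {p for p, val in z.items() if val > 0.5}  # co-referent pairs
    tp = len(predicted & gold_pairs)                      # Step 5: evaluation
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold_pairs) if gold_pairs else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return predicted, precision, recall, f1
```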
{
"text": "Currently, we are working on developing scalable heuristics for the xER algorithm. As mentioned in Sec 4.1, a GPU accelerated version for Phase 1 is currently in progress based on Almasri et al. (2021) . For Phase 2, an accelerated and scalable approach is being developed. The QSP formulation is linearized to provide the Linearized Set Packing (LSP) formulation. We are working on the linear relaxations of LSP and using accelerated computing to solve this and a family of relaxations. Subsequently, one can develop branch-and-bound approaches for solving the integer programming problem to optimality.",
"cite_spans": [
{
"start": 180,
"end": 201,
"text": "Almasri et al. (2021)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm 1: xER Algorithm",
"sec_num": null
},
{
"text": "In this section, the xER algorithm's performance is evaluated through experiments on different ER datasets. In this paper, two primary data sources considered: benchmarking datasets (Saeedi et al., 2017) and ECB+ (Cybulska and Vossen, 2014) . Datasets from both these sources are used to test the algorithm and analyze the algorithm's performance in terms of the F1 scores, solution times and their potential for scalability. Different blocking and scoring techniques have been applied to both these datasets, and are discussed in detail.",
"cite_spans": [
{
"start": 182,
"end": 203,
"text": "(Saeedi et al., 2017)",
"ref_id": "BIBREF40"
},
{
"start": 213,
"end": 240,
"text": "(Cybulska and Vossen, 2014)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computational Experiments",
"sec_num": "5"
},
{
"text": "Blocking is a pre-processing technique applied to the datasets. The purpose is to eliminate the need to store similarity scores between those pairs of mentions that are extremely unlikely to being associated to the same entity. This increases the sparsity in the graph, making it easier to process the graph and perform computations. Blocking and similarity score computation techniques are different for different data sources and are discussed below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Computational Experiments",
"sec_num": "5"
},
{
"text": "Saeedi et al. (2017) provides benchmark datasets, three of which are used in this paper. Table 2 shows the statistics for these benchmarking datasets. An open-source entity resolution library called Dedupe (Gregg and Eder, 2019) is used to preprocess these datasets by applying blocking techniques and generating similarity scores. The blocking technique and the scoring scheme are obtained from the code base of Lokhande et al. (2020). The dataset is divided into training and validation sets, with a split ratio of 50%. Our similarity scores for the benchmark datasets are obtained from the Dedupe library by training a ridge regression model.",
"cite_spans": [
{
"start": 206,
"end": 228,
"text": "(Gregg and Eder, 2019)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 89,
"end": 96,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Benchmarking Datasets",
"sec_num": "5.1"
},
{
"text": "Event Coreference Bank (ECB) (Bejan and Harabagiu, 2010) ) is an event coreference resolu-",
"cite_spans": [
{
"start": 29,
"end": 56,
"text": "(Bejan and Harabagiu, 2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ECB+ Corpus",
"sec_num": "5.2"
},
{
"text": "Entities Matches Clusters patent_example 2379 293785 102 csv_example 3337 6608 1162 settlements 3054 4388 820 Table 2 : Statistics of the benchmarking datasets tion dataset that includes a collection of documents found through Google Search. ECB+ (Cybulska and Vossen, 2014) is an extension of this dataset with newly added documents. Table 3 shows the statistics for this dataset. The ECB+ dataset comes with the gold standard or the Ground Truth (GT) values used to generate the similarity scores. The ground truth values for two connected (or co-referent) and not connected mentions are +1 and \u22121, respectively. The \"synthetic\" similarity scores are generated from a normal distribution with a fixed mean and an added noise. If the ground-truth is +1 then \u00b5 = 0.5 and if it is \u22121, then \u00b5 = \u22120.5. A variance of 0.3 is added to the generated scores using this distribution. Once the similarity scores are computed, a blocking threshold T is applied to these scores. A pair of mentions with a similarity score less than T is blocked, and the edge between these nodes is removed from the original graph. The mentions in this dataset could belong to the event class or the entity class. The mention pairs are taken from the same class for the experiments, and xER is indifferent to the class. This dataset is broken down into smaller graphs using topic modelling from (Barhom et al., 2019) . It facilitated the use of these different sized graphs to experiment with the blocking thresholds, analyze the F1 scores, and understand the xER algorithm's performance.",
"cite_spans": [
{
"start": 262,
"end": 289,
"text": "(Cybulska and Vossen, 2014)",
"ref_id": "BIBREF13"
},
{
"start": 1381,
"end": 1402,
"text": "(Barhom et al., 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 9,
"end": 132,
"text": "Matches Clusters patent_example 2379 293785 102 csv_example 3337 6608 1162 settlements 3054 4388 820 Table 2",
"ref_id": "TABREF1"
},
{
"start": 350,
"end": 357,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
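A sketch of this synthetic scoring and threshold blocking follows. The text states a variance of 0.3; whether the normal distribution's scale parameter should be interpreted as a variance or a standard deviation is an assumption here, as are the function and variable names.

```python
# Sketch: synthetic similarity scores for mention pairs from N(mu, sigma),
# mu = +0.5 for co-referent pairs and -0.5 otherwise, then blocking at T.
import numpy as np

def synthetic_scores(all_pairs, gold_pairs, T=-0.3, variance=0.3, seed=0):
    rng = np.random.default_rng(seed)
    sigma = variance ** 0.5          # assumption: 0.3 is treated as a variance
    scores = {}
    for pair in all_pairs:
        mu = 0.5 if pair in gold_pairs else -0.5   # ground truth +1 / -1
        s = rng.normal(loc=mu, scale=sigma)
        if s >= T:                                  # blocking threshold
            scores[pair] = s
    return scores
```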
{
"text": "The experiments are performed on an Intel i5 processor with 8GB RAM. The datasets from both sources are preprocessed and converted into graphs given as an input to the xER algorithm. These graphs have mentions as nodes and the pairwise similarity scores as the edges' weight. As shown in the xER algorithm (1), this graph is first passed through Phase 1, which is the Bron-Kerbosch algorithm with pivoting (Bron and Kerbosch, 1973) . This step's output is a set of maximal cliques that are not disjoint and passed on to Phase 2 for the set packing step. QSP formulation is modelled using Gurobi (Gurobi Optimization, 2021) and solved optimally. The solution for the z variable from the optimally solved model is used to compute F1 scores. The xER algorithm is applied to all the datasets listed above and is evaluated in terms of F1 scores and computation times, and compared to the other competing algorithms. xER is also compared with the traditional set packing algorithm and the difference in the input sizes between SPP and QSP is highlighted through experiments. Also, to demonstrate the quality of the xER algorithm, the weights on the edges are replaced with Ground Truth (GT) values (+1 and \u22121) instead of similarity scores and tested. This helps in analyzing and confirming the model's consistency and accuracy, irrespective of the method used to compute similarity scores.",
"cite_spans": [
{
"start": 406,
"end": 431,
"text": "(Bron and Kerbosch, 1973)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Dedupe is used to perform blocking and compute the similarity scores as mentioned in Section 5.1. First, Dedupe employs specific blocking techniques on the data. A ridge regression model is then trained and used to compute the scores on the validation dataset. The pairwise nodes and the scores are passed on to the xER algorithm, and F1 scores are computed using the solutions from the z variable. These scores are obtained from the code base of (Lokhande et al., 2020) for a fair comparison and the performance of xER is compared with F-MWSP in (Lokhande et al., 2020) and a standard Hierarchical Clustering (HC) approach (Hastie et al., 2009) . As mentioned before, M is a hyperparameter, and for these three datasets, we set it to 10. Table 4 shows that xER is at least as good as the other algorithms. For the settlements dataset, xER outperforms both F-MWSP and HC. For csv_example, xER has the same F1 score as F-MWSP, which is better than that of HC. For patent_sample, the F1 score for xER is less than HC and F-MWSP. However, since xER is designed to provide an optimal solution to a graph with a given set of nodes and weights, it is possible that the blocking techniques were too severe or the computational scores were not the best, leading to a lower F1 score. As discussed before, a high-quality blocking technique and similarity scores will lead to high-quality F1 scores, since the xER algorithm is designed to give the best possible solution to a given input. Another comparison factor considered is the size of the input between SPP and QSP. The size of the input cliques required for a traditional set packing based formulation (F-MWSP) is significantly greater compared to that of the QSP formulation, which can be seen in the Table 1. Thus, a scalable xER algorithm can be useful to produce optimal outputs in lesser time. Moreover, with xER, the outputs and the explanations are supported by mathematical guarantees. In addition to the F1 scores, other metrics have also been used to evaluate and compare the algorithms' performance. The dataset settlements is considered to analyze the algorithms in terms of all the evaluation metrics and is shown in Table 5 . As discussed in Section 5, smaller datasets are constructed from the ECB+ dataset by performing topic wise modelling from (Barhom et al., 2019) . Moreover, instead of performing entity resolution on the whole corpus, a subset of documents from the topics is considered as the input. Smaller datasets of different sizes are generated this way and are used to test and assess the xER algorithm. After the similarity scores are computed, blocking techniques are applied based on a threshold of T on the similarity scores, in contrast to the blocking before the similarity score generation technique in the benchmarking datasets. The number of edges and the tightness among the nodes, measured by the Clustering Coefficient (CC) (Wang et al., 2017) , is varied by varying this threshold T . The xER algorithm is also tested with the groundtruth values as weights. These tests are listed below and analyzed.",
"cite_spans": [
{
"start": 624,
"end": 645,
"text": "(Hastie et al., 2009)",
"ref_id": "BIBREF26"
},
{
"start": 2308,
"end": 2329,
"text": "(Barhom et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 2911,
"end": 2930,
"text": "(Wang et al., 2017)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 739,
"end": 746,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 2176,
"end": 2183,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Testing xER on benchmarking datasets",
"sec_num": "6.1"
},
{
"text": "As described in Section 5.2, the similarity scores are generated from the normal distribution with means 0.5 and \u22120.5 depending on the ground truth, and the threshold values belong to the range [\u22120.7, \u22120.2] . As the threshold T increases, the graph's size becomes smaller due to the removal of edges with a weight less than the T . To demonstrate the impact of thresholding, a graph of 49 nodes is considered, and different graphs are generated from it by applying varying T values and the results are presented in Table 6 . The graph is denser and tightly connected with a tight threshold. The number of edges (|E|), the clustering coefficient (CC), the number of maximal cliques (|MC|) and the number of all the cliques in the graph (|C|) decrease with increasing T . For a particular T , the input size of SPP (|C|) compared to the input size of QSP (|MC|) is almost exponential and only increases with the graph's size. This difference is reflected in the solution times and can be seen that the SPP solution time is quite large when compared to the xER solution time. With larger graphs, the formulations will be unable to handle this large SPP input size. For the largest graph with T = \u22120.7, the computation time exceeded the time limit and was terminated. Another observation is that tighter thresholds lead to higher computation times for both phase 1 and phase 2. Thus, a higher T value is preferred in terms of solution time and memory management. However, it is possible that blocking with a higher threshold value might lead to a reduction in the recall and affect the F1 scores. So, a moderate threshold is preferred to balance both the F1 scores and the memory issues. T is treated as a hyperparameter, and the optimal T value can be chosen so that the graph size is small enough to handle, and the F1 scores are acceptable. When testing with ground truth values as weights, all the above graphs resulted in a 100% F1 score.",
"cite_spans": [
{
"start": 194,
"end": 206,
"text": "[\u22120.7, \u22120.2]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 515,
"end": 522,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Tests Based on Thresholds",
"sec_num": "6.1.2"
},
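The effect of the blocking threshold T on graph density can be inspected directly; the sketch below reports |E|, the average clustering coefficient, and |MC| over a range of thresholds using networkx calls on an illustrative score dictionary.

```python
# Sketch: sweep the blocking threshold T and report |E|, the clustering
# coefficient (CC), and the number of maximal cliques |MC| for each setting.
import networkx as nx

def threshold_sweep(similarity_scores, thresholds=(-0.7, -0.5, -0.3, -0.2)):
    rows = []
    for T in thresholds:
        G = nx.Graph()
        for (i, j), w in similarity_scores.items():
            if w >= T:                       # keep only edges above threshold
                G.add_edge(i, j, weight=w)
        rows.append({
            "T": T,
            "|E|": G.number_of_edges(),
            "CC": nx.average_clustering(G) if G.number_of_nodes() else 0.0,
            "|MC|": sum(1 for _ in nx.find_cliques(G)),
        })
    return rows
```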
{
"text": "In addition to the thresholding tests, the xER algorithm is tested on other graphs generated using the same approach described above. The threshold value T is set to \u22120.3. The F1 scores for these graphs are reported in Table 7 . As mentioned previously, xER is also tested using the groundtruth values as weights on the edges. xER results in a 100% F1 score when using the groundtruth, in all these datasets, which is also shown in Table 7 . This implies that with the best possible scores (groundtruth), the algorithm works perfectly, which highlights the significance of high-quality similarity scores. M is set to 3 for all these graphs, and when the groundtruth is being used as weights, the value of M = 10. This is because the input graph is fully connected because of no thresholding. So Phase 1 returns the whole graph as the maximal clique and phase 2 is responsible for partitiong the whole graph into smaller cliques, which is done using the M value. So a larger value of M enabled the graph to be partitioned into smaller sets as per the weights. Table 7 : F1 scores of graphs from the ECB+ dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 219,
"end": 226,
"text": "Table 7",
"ref_id": null
},
{
"start": 432,
"end": 440,
"text": "Table 7",
"ref_id": null
},
{
"start": 1060,
"end": 1067,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation: F1 scores",
"sec_num": "6.1.3"
},
{
"text": "We now understand the model's explainable nature in an intuitive way with an example. The dataset with 49 nodes and T = \u22120.3 in Table 6 is considered. Three nodes: (7, 12, 20) that form a 3-clique or a triangle in the groundtruth are picked and analyzed. When xER is executed with weights, the thresholding does not remove one node: 25, that is connected to all these three nodes, thus having the potential to form a 4-clique. However, from the Table 8, the total weight that node 25 brings into the triangle is negative (\u22123.0) and thus, this 4-clique is not a good choice to be included in the optimal solution. Thus, the model automatically prevents this node from forming a 4-clique with the three nodes, thus ensuring that the precision wouldn't decrease. Another important observation is that blocking with a threshold of T = \u22120.2 would have removed the edge between the nodes 20 and 25, thus totally eliminating the potential of forming a 4-clique. Table 8 : Weights on the edges of nodes (7, 12, 20, 25) Another example of explainable ER and the importance of having high-quality scores, is considered for the same graph. Four edges: (3-15), (19-29), (21-25), (23-40) with weights 0.316, 0.095, 0.232, 0.046, respectively, were included in the optimal solution, while these nodes are not connected in the ground truth. The \"noisy\" weights between these nodes which should have been negative per the ground truth. This shows that a poor scoring scheme can lead to a low quality solution.",
"cite_spans": [
{
"start": 995,
"end": 998,
"text": "(7,",
"ref_id": null
},
{
"start": 999,
"end": 1002,
"text": "12,",
"ref_id": null
},
{
"start": 1003,
"end": 1006,
"text": "20,",
"ref_id": null
},
{
"start": 1007,
"end": 1010,
"text": "25)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 128,
"end": 135,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 955,
"end": 962,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Explainability of xER",
"sec_num": "6.2"
},
{
"text": "As explained in Danilevsky et al. (2020) , the explainability of a model can be evaluated in three ways: Comparison with the groundtruth, Informal explanations and Human evaluation. We compared the model with ground truth values and obtained the F1 scores. In addition to it, we also performed experiments with the groundtruth scores and the similarity scores to argue the reasoning behind a particular solution. For evaluation through informal explanation, we considered examples from graphs and understood the reasoning behind this output produced by the model. For future work, we plan to include a viable human evaluation technique for the ER problem.",
"cite_spans": [
{
"start": 16,
"end": 40,
"text": "Danilevsky et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Explainability of xER",
"sec_num": "6.2"
},
{
"text": "In this paper, we compared our model to an existing approach for ER from Lokhande et al. (2020). As future direction of research, we aim to develop a scalable approach to handle large datasets that would not depend on an off-the-shelf solver to obtain optimal and explainable solutions(with mathematical guarantee), enabling us to compare the performance of xER with more approaches that have been used for ER.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Explainability of xER",
"sec_num": "6.2"
},
{
"text": "A graph partitioning based approach is proposed to solve the entity resolution problem and is formulated as a clique partitioning problem. A node in the graph represents each mention, and the objective was to assign nodes to cliques optimally, and each clique represents a real-world entity. This mathematical formulation based model is inherently explainable. Since CPP is NP-Hard, a twophased algorithm called xER is proposed and tested on multiple datasets. Phase 1 of xER finds all the graph's maximal cliques, which is much more practical than finding all the cliques in the graph. Phase 2 is a generalized set packing formulation and has a much smaller input size than the traditional set packing problem. These contributions help develop a practical and easily parallelizable implementation for xER. xER shows promising performance in terms of accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "A GPU accelerated approach for xER is in progress and will provide a scalable and practical model. Also, xER can be extended to other applications such as Topic modelling, Community Detection, Temporal Analysis. We believe this paper will lead the way to more mathematical formulationbased approaches and NLP problems can be solved using such highly explainable models, thus reducing the dependency on black-box models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "This work is supported by the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) -a research collaboration as a part of the IBM AI Horizons Network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Enumeration of Maximal Cliques of Large Graphs",
"authors": [
{
"first": "E",
"middle": [
"A"
],
"last": "Akkoyunlu",
"suffix": ""
}
],
"year": 1973,
"venue": "SIAM J. Comput",
"volume": "2",
"issue": "1",
"pages": "1--6",
"other_ids": {
"DOI": [
"10.1137/0202001"
]
},
"num": null,
"urls": [],
"raw_text": "E. A. Akkoyunlu. 1973. The Enumeration of Maximal Cliques of Large Graphs. SIAM J. Comput., 2(1):1- 6.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Accelerating K-Clique Counting on GPUs",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Almasri",
"suffix": ""
},
{
"first": "Izzat",
"middle": [
"El"
],
"last": "Hajj",
"suffix": ""
},
{
"first": "Rakesh",
"middle": [],
"last": "Nagi",
"suffix": ""
},
{
"first": "Jinjun",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Mei Hwu",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 547th International Conference on Very Large Data Bases (VLDB)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Almasri, Izzat El Hajj, Rakesh Nagi, Jin- jun Xiong, and Wen mei Hwu. 2021. Accelerating K-Clique Counting on GPUs. In Proceedings of the 547th International Conference on Very Large Data Bases (VLDB), page submitted.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques",
"authors": [
{
"first": "Vijay",
"middle": [],
"last": "Arya",
"suffix": ""
},
{
"first": "K",
"middle": [
"E"
],
"last": "Rachel",
"suffix": ""
},
{
"first": "Pin-Yu",
"middle": [],
"last": "Bellamy",
"suffix": ""
},
{
"first": "Amit",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Dhurandhar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hind",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Stephanie",
"middle": [],
"last": "Hoffman",
"suffix": ""
},
{
"first": "Q",
"middle": [
"Vera"
],
"last": "Houde",
"suffix": ""
},
{
"first": "Ronny",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Aleksandra",
"middle": [],
"last": "Luss",
"suffix": ""
},
{
"first": "Sami",
"middle": [],
"last": "Mojsilovi\u0107",
"suffix": ""
},
{
"first": "Pablo",
"middle": [],
"last": "Mourad",
"suffix": ""
},
{
"first": "Ramya",
"middle": [],
"last": "Pedemonte",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Raghavendra",
"suffix": ""
},
{
"first": "Prasanna",
"middle": [],
"last": "Richards",
"suffix": ""
},
{
"first": "Karthikeyan",
"middle": [],
"last": "Sattigeri",
"suffix": ""
},
{
"first": "Moninder",
"middle": [],
"last": "Shanmugam",
"suffix": ""
},
{
"first": "Kush",
"middle": [
"R"
],
"last": "Singh",
"suffix": ""
},
{
"first": "Dennis",
"middle": [],
"last": "Varshney",
"suffix": ""
},
{
"first": "Yunfeng",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.03012"
]
},
"num": null,
"urls": [],
"raw_text": "Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Alek- sandra Mojsilovi\u0107, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sat- tigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, and Yunfeng Zhang. 2019. One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques. arXiv:1909.03012 [cs, stat]. ArXiv: 1909.03012.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Star Clustering Algorithm for Static and Dynamic Information Organization",
"authors": [
{
"first": "A",
"middle": [],
"last": "Javed",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Aslam",
"suffix": ""
},
{
"first": "Daniela",
"middle": [],
"last": "Pelekhov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rus",
"suffix": ""
}
],
"year": 2004,
"venue": "JGAA",
"volume": "8",
"issue": "1",
"pages": "95--129",
"other_ids": {
"DOI": [
"10.7155/jgaa.00084"
]
},
"num": null,
"urls": [],
"raw_text": "Javed A. Aslam, Ekaterina Pelekhov, and Daniela Rus. 2004. The Star Clustering Algorithm for Static and Dynamic Information Organization. JGAA, 8(1):95-129.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Web-based affiliation matching",
"authors": [
{
"first": "David",
"middle": [],
"last": "Aum\u00fcller",
"suffix": ""
},
{
"first": "Erhard",
"middle": [],
"last": "Rahm",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "246--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Aum\u00fcller and Erhard Rahm. 2009. Web-based affiliation matching. pages 246-256.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Correlation Clustering. Machine Learning",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Avrim",
"middle": [],
"last": "Blum",
"suffix": ""
},
{
"first": "Shuchi",
"middle": [],
"last": "Chawla",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "56",
"issue": "",
"pages": "89--113",
"other_ids": {
"DOI": [
"10.1023/B:MACH.0000033116.57574.95"
]
},
"num": null,
"urls": [],
"raw_text": "Nikhil Bansal, Avrim Blum, and Shuchi Chawla. 2004. Correlation Clustering. Machine Learning, 56(1- 3):89-113.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Revisiting joint modeling of cross-document entity and event coreference resolution",
"authors": [
{
"first": "Shany",
"middle": [],
"last": "Barhom",
"suffix": ""
},
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Eirew",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bugert",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4179--4189",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1409"
]
},
"num": null,
"urls": [],
"raw_text": "Shany Barhom, Vered Shwartz, Alon Eirew, Michael Bugert, Nils Reimers, and Ido Dagan. 2019. Revis- iting joint modeling of cross-document entity and event coreference resolution. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4179-4189, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unsupervised event coreference resolution with rich linguistic features",
"authors": [
{
"first": "Cosmin",
"middle": [],
"last": "Bejan",
"suffix": ""
},
{
"first": "Sanda",
"middle": [],
"last": "Harabagiu",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1412--1422",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cosmin Bejan and Sanda Harabagiu. 2010. Unsuper- vised event coreference resolution with rich linguis- tic features. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1412-1422, Uppsala, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Iterative record linkage for cleaning and integration",
"authors": [
{
"first": "Indrajit",
"middle": [],
"last": "Bhattacharya",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 9th ACM SIGMOD workshop on Research issues in data mining and knowledge discovery -DMKD '04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/1008694.1008697"
]
},
"num": null,
"urls": [],
"raw_text": "Indrajit Bhattacharya and Lise Getoor. 2004. Itera- tive record linkage for cleaning and integration. In Proceedings of the 9th ACM SIGMOD workshop on Research issues in data mining and knowledge discovery -DMKD '04, page 11, Paris, France. ACM Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Algorithm 457: finding all cliques of an undirected graph",
"authors": [
{
"first": "Coen",
"middle": [],
"last": "Bron",
"suffix": ""
},
{
"first": "Joep",
"middle": [],
"last": "Kerbosch",
"suffix": ""
}
],
"year": 1973,
"venue": "Commun. ACM",
"volume": "16",
"issue": "9",
"pages": "575--577",
"other_ids": {
"DOI": [
"10.1145/362342.362367"
]
},
"num": null,
"urls": [],
"raw_text": "Coen Bron and Joep Kerbosch. 1973. Algorithm 457: finding all cliques of an undirected graph. Commun. ACM, 16(9):575-577.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Graph-based event coreference resolution",
"authors": [
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing -TextGraphs-4",
"volume": "54",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/1708124.1708135"
]
},
"num": null,
"urls": [],
"raw_text": "Zheng Chen and Heng Ji. 2009. Graph-based event coreference resolution. In Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing -TextGraphs-4, page 54, Sun- tec, Singapore. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Graph-based clustering for computational linguistics: A survey",
"authors": [
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2010,
"venue": "Workshop on Graph-based Methods for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng Chen and Heng Ji. 2010. Graph-based cluster- ing for computational linguistics: A survey. 2010 Workshop on Graph-based Methods for Natural Language Processing, (July):1-9.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "and datascience.com team",
"authors": [
{
"first": "Pramit",
"middle": [],
"last": "Choudhary",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Kramer",
"suffix": ""
}
],
"year": 2018,
"venue": "datascienceinc/Skater: Enable Interpretability via Rule Extraction(BRL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.5281/zenodo.1198885"
]
},
"num": null,
"urls": [],
"raw_text": "Pramit Choudhary, Aaron Kramer, and data- science.com team. 2018. datascienceinc/Skater: Enable Interpretability via Rule Extraction(BRL).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Using a sledgehammer to crack a nut? lexical diversity and event coreference resolution",
"authors": [
{
"first": "Agata",
"middle": [],
"last": "Cybulska",
"suffix": ""
},
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)",
"volume": "",
"issue": "",
"pages": "4545--4552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agata Cybulska and Piek Vossen. 2014. Using a sledgehammer to crack a nut? lexical diversity and event coreference resolution. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 4545- 4552, Reykjavik, Iceland. European Language Re- sources Association (ELRA).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Yannis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A Survey of the State of Explainable AI for Natural Language Processing",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Danilevsky",
"suffix": ""
},
{
"first": "Ranit",
"middle": [],
"last": "Kun Qian",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Aharonov",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.00711[cs].ArXiv:2010.00711"
]
},
"num": null,
"urls": [],
"raw_text": "Marina Danilevsky, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A Sur- vey of the State of Explainable AI for Natural Lan- guage Processing. arXiv:2010.00711 [cs]. ArXiv: 2010.00711.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "GRASP for set packing problems",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Delorme",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Gandibleux",
"suffix": ""
},
{
"first": "Joaquin",
"middle": [],
"last": "Rodriguez",
"suffix": ""
}
],
"year": 2004,
"venue": "European Journal of Operational Research",
"volume": "153",
"issue": "3",
"pages": "564--580",
"other_ids": {
"DOI": [
"10.1016/S0377-2217(03)00263-7"
]
},
"num": null,
"urls": [],
"raw_text": "Xavier Delorme, Xavier Gandibleux, and Joaquin Ro- driguez. 2004. GRASP for set packing prob- lems. European Journal of Operational Research, 153(3):564-580.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Explainer: Entity resolution explanations",
"authors": [
{
"first": "Amr",
"middle": [],
"last": "Ebaid",
"suffix": ""
},
{
"first": "Saravanan",
"middle": [],
"last": "Thirumuruganathan",
"suffix": ""
},
{
"first": "Walid",
"middle": [
"G"
],
"last": "Aref",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elmagarmid",
"suffix": ""
},
{
"first": "Mourad",
"middle": [],
"last": "Ouzzani",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE 35th International Conference on Data Engineering (ICDE)",
"volume": "",
"issue": "",
"pages": "2000--2003",
"other_ids": {
"DOI": [
"10.1109/ICDE.2019.00224"
]
},
"num": null,
"urls": [],
"raw_text": "Amr Ebaid, Saravanan Thirumuruganathan, Walid G. Aref, Ahmed Elmagarmid, and Mourad Ouzzani. 2019. Explainer: Entity resolution explanations. In 2019 IEEE 35th International Conference on Data Engineering (ICDE), pages 2000-2003.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Incorporating non-local information into information extraction systems by Gibbs sampling",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/1219840.1219885"
]
},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christo- pher Manning. 2005. Incorporating non-local in- formation into information extraction systems by Gibbs sampling. In Proceedings of the 43rd",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Annual Meeting on Association for Computational Linguistics -ACL '05",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "363--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting on Association for Computational Linguistics -ACL '05, pages 363-370, Ann Arbor, Michigan. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "An Ant Colony Optimisation Algorithm for the Set Packing Problem",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Gandibleux",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Delorme",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "T'Kindt",
"suffix": ""
}
],
"year": 2004,
"venue": "Ant Colony Optimization and Swarm Intelligence",
"volume": "3172",
"issue": "",
"pages": "49--60",
"other_ids": {
"DOI": [
"10.1007/978-3-540-28646-2_5"
]
},
"num": null,
"urls": [],
"raw_text": "Xavier Gandibleux, Xavier Delorme, and Vincent T'Kindt. 2004. An Ant Colony Optimisation Algo- rithm for the Set Packing Problem. In David Hutchi- son, Takeo Kanade, Josef Kittler, Jon M. Klein- berg, Friedemann Mattern, John C. Mitchell, Moni Naor, Oscar Nierstrasz, C. Pandu Rangan, Bern- hard Steffen, Madhu Sudan, Demetri Terzopoulos, Dough Tygar, Moshe Y. Vardi, Gerhard Weikum, Marco Dorigo, Mauro Birattari, Christian Blum, Luca Maria Gambardella, Francesco Mondada, and Thomas St\u00fctzle, editors, Ant Colony Optimization and Swarm Intelligence, volume 3172, pages 49-60.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Computers and intractability: a guide to the theory of NP-completeness, 27. print edition. A series of books in the mathematical sciences",
"authors": [
{
"first": "Michael",
"middle": [
"R"
],
"last": "Garey",
"suffix": ""
},
{
"first": "David",
"middle": [
"S"
],
"last": "Johnson",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael R. Garey and David S. Johnson. 2009. Computers and intractability: a guide to the theory of NP-completeness, 27. print edition. A series of books in the mathematical sciences. Freeman, New York [u.a]. OCLC: 551912424.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A cutting plane algorithm for a clustering problem",
"authors": [
{
"first": "M",
"middle": [],
"last": "Gr\u00f6tschel",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Wakabayashi",
"suffix": ""
}
],
"year": 1989,
"venue": "Mathematical Programming",
"volume": "45",
"issue": "1-3",
"pages": "59--96",
"other_ids": {
"DOI": [
"10.1007/BF01589097"
]
},
"num": null,
"urls": [],
"raw_text": "M. Gr\u00f6tschel and Y. Wakabayashi. 1989. A cut- ting plane algorithm for a clustering problem. Mathematical Programming, 45(1-3):59-96.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A Survey of Methods for Explaining Black Box Models",
"authors": [
{
"first": "Riccardo",
"middle": [],
"last": "Guidotti",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Monreale",
"suffix": ""
},
{
"first": "Salvatore",
"middle": [],
"last": "Ruggieri",
"suffix": ""
},
{
"first": "Franco",
"middle": [],
"last": "Turini",
"suffix": ""
},
{
"first": "Fosca",
"middle": [],
"last": "Giannotti",
"suffix": ""
},
{
"first": "Dino",
"middle": [],
"last": "Pedreschi",
"suffix": ""
}
],
"year": 2019,
"venue": "ACM Computing Surveys",
"volume": "51",
"issue": "5",
"pages": "1--42",
"other_ids": {
"DOI": [
"10.1145/3236009"
]
},
"num": null,
"urls": [],
"raw_text": "Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2019. A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys, 51(5):1- 42.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Gurobi optimizer reference manual",
"authors": [
{
"first": "",
"middle": [],
"last": "Llc Gurobi Optimization",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "LLC Gurobi Optimization. 2021. Gurobi optimizer ref- erence manual.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The Elements of Statistical Learning",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Hastie",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Tibshirani",
"suffix": ""
},
{
"first": "Jerome",
"middle": [],
"last": "Friedman",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1007/978-0-387-84858-7"
]
},
"num": null,
"urls": [],
"raw_text": "Trevor Hastie, Robert Tibshirani, and Jerome Fried- man. 2009. The Elements of Statistical Learning.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Optimization in coreference resolution is not needed: A nearly-optimal algorithm with intensional constraints",
"authors": [
{
"first": "Manfred",
"middle": [],
"last": "Klenner",
"suffix": ""
},
{
"first": "\u00c9tienne",
"middle": [],
"last": "Ailloud",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, EACL '09",
"volume": "",
"issue": "",
"pages": "442--450",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manfred Klenner and \u00c9tienne Ailloud. 2009. Opti- mization in coreference resolution is not needed: A nearly-optimal algorithm with intensional con- straints. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, EACL '09, page 442-450, USA. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "On Average Number of Iterations of Some Algorithms for Solving the Set Packing Problem",
"authors": [
{
"first": "Alexander",
"middle": [
"A"
],
"last": "Kolokolov",
"suffix": ""
},
{
"first": "Lidia",
"middle": [
"A"
],
"last": "Zaozerskaya",
"suffix": ""
}
],
"year": 2009,
"venue": "IFAC Proceedings Volumes",
"volume": "42",
"issue": "",
"pages": "1510--1513",
"other_ids": {
"DOI": [
"10.3182/20090603-3-RU-2001.0519"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander A. Kolokolov and Lidia A. Zaozerskaya. 2009. On Average Number of Iterations of Some Algorithms for Solving the Set Packing Problem. IFAC Proceedings Volumes, 42(4):1510-1513.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "On a posterior evaluation of a simple greedy method for set packing",
"authors": [
{
"first": "Roy",
"middle": [
"H"
],
"last": "Kwon",
"suffix": ""
},
{
"first": "Georgios",
"middle": [
"V"
],
"last": "Dalakouras",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2008,
"venue": "Optim Lett",
"volume": "2",
"issue": "4",
"pages": "587--597",
"other_ids": {
"DOI": [
"10.1007/s11590-008-0085-6"
]
},
"num": null,
"urls": [],
"raw_text": "Roy H. Kwon, Georgios V. Dalakouras, and Cheng Wang. 2008. On a posterior evaluation of a sim- ple greedy method for set packing. Optim Lett, 2(4):587-597.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Alternative formulations for the Set Packing Problem and their application to the Winner Determination Problem",
"authors": [
{
"first": "Mercedes",
"middle": [],
"last": "Landete",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Francisco Monge",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"M"
],
"last": "Rodr\u00edguez-Ch\u00eda",
"suffix": ""
}
],
"year": 2013,
"venue": "Ann Oper Res",
"volume": "207",
"issue": "1",
"pages": "137--160",
"other_ids": {
"DOI": [
"10.1007/s10479-011-1039-4"
]
},
"num": null,
"urls": [],
"raw_text": "Mercedes Landete, Juan Francisco Monge, and Anto- nio M. Rodr\u00edguez-Ch\u00eda. 2013. Alternative formula- tions for the Set Packing Problem and their appli- cation to the Winner Determination Problem. Ann Oper Res, 207(1):137-160.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Interpretable classifiers using rules and bayesian analysis: Building a better stroke prediction model",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Letham",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [],
"last": "Rudin",
"suffix": ""
},
{
"first": "Tyler",
"middle": [
"H"
],
"last": "McCormick",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Madigan",
"suffix": ""
}
],
"year": 2015,
"venue": "The Annals of Applied Statistics",
"volume": "9",
"issue": "3",
"pages": "",
"other_ids": {
"DOI": [
"10.1214/15-aoas848"
]
},
"num": null,
"urls": [],
"raw_text": "Benjamin Letham, Cynthia Rudin, Tyler H. Mc- Cormick, and David Madigan. 2015. Interpretable classifiers using rules and bayesian analysis: Build- ing a better stroke prediction model. The Annals of Applied Statistics, 9(3).",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Solving the Set Packing Problem via a Maximum Weighted Independent Set Heuristic",
"authors": [
{
"first": "Ruizhi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yupan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shuli",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Jianhua",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Dantong",
"middle": [],
"last": "Ouyang",
"suffix": ""
},
{
"first": "Minghao",
"middle": [],
"last": "Yin",
"suffix": ""
}
],
"year": 2020,
"venue": "Mathematical Problems in Engineering",
"volume": "2020",
"issue": "",
"pages": "1--11",
"other_ids": {
"DOI": [
"10.1155/2020/3050714"
]
},
"num": null,
"urls": [],
"raw_text": "Ruizhi Li, Yupan Wang, Shuli Hu, Jianhua Jiang, Dan- tong Ouyang, and Minghao Yin. 2020. Solving the Set Packing Problem via a Maximum Weighted In- dependent Set Heuristic. Mathematical Problems in Engineering, 2020:1-11.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Accelerating Column Generation via Flexible Dual Optimal Inequalities with Application to Entity Resolution",
"authors": [
{
"first": "Vishnu",
"middle": [
"Suresh"
],
"last": "Lokhande",
"suffix": ""
},
{
"first": "Shaofei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Maneesh",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Yarkony",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1909.05460[cs].ArXiv:1909.05460"
]
},
"num": null,
"urls": [],
"raw_text": "Vishnu Suresh Lokhande, Shaofei Wang, Maneesh Singh, and Julian Yarkony. 2020. Accelerating Column Generation via Flexible Dual Optimal In- equalities with Application to Entity Resolution. arXiv:1909.05460 [cs]. ArXiv: 1909.05460.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "BestCut: a graph algorithm for coreference resolution",
"authors": [
{
"first": "Cristina",
"middle": [],
"last": "Nicolae",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Nicolae",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing -EMNLP '06",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/1610075.1610115"
]
},
"num": null,
"urls": [],
"raw_text": "Cristina Nicolae and Gabriel Nicolae. 2006. BestCut: a graph algorithm for coreference resolution. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing -EMNLP '06, page 275, Sydney, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "why should i trust you?\": Explaining the predictions of any classifier. KDD '16",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "1135--1144",
"other_ids": {
"DOI": [
"10.1145/2939672.2939778"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. \"why should i trust you?\": Explain- ing the predictions of any classifier. KDD '16, page 1135-1144, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Anchors: High-precision modelagnostic explanations",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI Conference on Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Anchors: High-precision model- agnostic explanations. In AAAI Conference on Artificial Intelligence (AAAI).",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "A set packing model for the ground holding problem in congested networks",
"authors": [
{
"first": "Fabrizio",
"middle": [],
"last": "Rossi",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Smriglio",
"suffix": ""
}
],
"year": 2001,
"venue": "European Journal of Operational Research",
"volume": "131",
"issue": "2",
"pages": "400--416",
"other_ids": {
"DOI": [
"10.1016/S0377-2217(00)00064-3"
]
},
"num": null,
"urls": [],
"raw_text": "Fabrizio Rossi and Stefano Smriglio. 2001. A set pack- ing model for the ground holding problem in con- gested networks. European Journal of Operational Research, 131(2):400-416.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "A method for the cutting stock problem with different qualities",
"authors": [
{
"first": "Mikael",
"middle": [],
"last": "R\u00f6nnqvist",
"suffix": ""
}
],
"year": 1995,
"venue": "European Journal of Operational Research",
"volume": "83",
"issue": "1",
"pages": "57--68",
"other_ids": {
"DOI": [
"10.1016/0377-2217(94)00023-6"
]
},
"num": null,
"urls": [],
"raw_text": "Mikael R\u00f6nnqvist. 1995. A method for the cutting stock problem with different qualities. European Journal of Operational Research, 83(1):57-68.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Comparative Evaluation of Distributed Clustering Schemes for Multi-source Entity Resolution",
"authors": [
{
"first": "Alieh",
"middle": [],
"last": "Saeedi",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Peukert",
"suffix": ""
},
{
"first": "Erhard",
"middle": [],
"last": "Rahm",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Databases and Information Systems",
"volume": "10509",
"issue": "",
"pages": "278--293",
"other_ids": {
"DOI": [
"10.1007/978-3-319-66917-5_19"
]
},
"num": null,
"urls": [],
"raw_text": "Alieh Saeedi, Eric Peukert, and Erhard Rahm. 2017. Comparative Evaluation of Distributed Clustering Schemes for Multi-source Entity Resolution. In M\u0101r\u012bte Kirikova, Kjetil N\u00f8rv\u00e5g, and George A. Pa- padopoulos, editors, Advances in Databases and Information Systems, volume 10509, pages 278- 293. Springer International Publishing, Cham. Se- ries Title: Lecture Notes in Computer Science.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "An incremental graph-partitioning algorithm for entity resolution",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Tauer",
"suffix": ""
},
{
"first": "Ketan",
"middle": [],
"last": "Date",
"suffix": ""
},
{
"first": "Rakesh",
"middle": [],
"last": "Nagi",
"suffix": ""
},
{
"first": "Moises",
"middle": [],
"last": "Sudit",
"suffix": ""
}
],
"year": 2019,
"venue": "Information Fusion",
"volume": "46",
"issue": "",
"pages": "171--183",
"other_ids": {
"DOI": [
"10.1016/j.inffus.2018.06.001"
]
},
"num": null,
"urls": [],
"raw_text": "Gregory Tauer, Ketan Date, Rakesh Nagi, and Moises Sudit. 2019. An incremental graph-partitioning al- gorithm for entity resolution. Information Fusion, 46:171-183.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Comparison of Different Generalizations of Clustering Coefficient and Local Efficiency for Weighted Undirected Graphs",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Eshwar",
"middle": [],
"last": "Ghumare",
"suffix": ""
},
{
"first": "Rik",
"middle": [],
"last": "Vandenberghe",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Dupont",
"suffix": ""
}
],
"year": 2017,
"venue": "Neural Computation",
"volume": "29",
"issue": "2",
"pages": "313--331",
"other_ids": {
"DOI": [
"10.1162/NECO_a_00914"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Wang, Eshwar Ghumare, Rik Vandenberghe, and Patrick Dupont. 2017. Comparison of Different Generalizations of Clustering Coefficient and Local Efficiency for Weighted Undirected Graphs. Neural Computation, 29(2):313-331.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "An example of converting a text into a graph."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "if node i is chosen for partition m in set S k 0 otherwise zij = 1 if nodes i, j belong to the same partition 0 otherwise All the nodes in V are ordered. E represents the edge set of the graph. E = {(i, j) : i < j}."
},
"TABREF1": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Statistics of small graphs and their associated edges."
},
"TABREF3": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Statistics of ECB+ datasets"
},
"TABREF5": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "Dedupe F1 scores"
},
"TABREF7": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table><tr><td>6.1.1 Performance of xER on ECB+ Datasets</td></tr></table>",
"text": "Evaluation metrics for settlements dataset"
},
"TABREF9": {
"type_str": "table",
"num": null,
"html": null,
"content": "<table/>",
"text": "F1 for varying T on a graph of 49 nodes"
}
}
}
}