{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:57:59.162200Z" }, "title": "Graph-based Aspect Representation Learning for Entity Resolution", "authors": [ { "first": "Zhenqi", "middle": [], "last": "Zhao", "suffix": "", "affiliation": { "laboratory": "", "institution": "eBay Inc. {zhenqzhao", "location": {} }, "email": "" }, { "first": "Yuchen", "middle": [], "last": "Guo", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nanjing University", "location": {} }, "email": "" }, { "first": "Dingxian", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "eBay Inc. {zhenqzhao", "location": {} }, "email": "diwang@ebay.com" }, { "first": "Yufan", "middle": [], "last": "Huang", "suffix": "", "affiliation": { "laboratory": "", "institution": "eBay Inc. {zhenqzhao", "location": {} }, "email": "yufhuang@ebay.com" }, { "first": "Xiangnan", "middle": [], "last": "He", "suffix": "", "affiliation": {}, "email": "xiangnanhe@gmail.com" }, { "first": "Bin", "middle": [], "last": "Gu", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Entity Resolution (ER) identifies records that refer to the same real-world entity. Deep learning approaches improved the generalization ability of entity matching models, but hardly overcame the impact of noisy or incomplete data sources. In real scenes, an entity usually consists of multiple semantic facets, called aspects. In this paper, we focus on entity augmentation, namely retrieving the values of missing aspects. The relationship between aspects is naturally suitable to be represented by a knowledge graph, where entity augmentation can be modeled as a link prediction problem. Our paper proposes a novel graph-based approach to solve entity augmentation. Specifically, we apply a dedicated random walk algorithm, which uses node types to limit the traversal length, and encodes graph structure into low-dimensional embeddings. Thus, the missing aspects could be retrieved by a link prediction model. Furthermore, the augmented aspects with fixed orders are served as the input of a deep Siamese BiLSTM network for entity matching. We compared our method with state-of-the-art methods through extensive experiments on downstream ER tasks. According to the experiment results, our model outperforms other methods on evaluation metrics (accuracy, precision, recall, and f1-score) to a large extent, which demonstrates the effectiveness of our method.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Entity Resolution (ER) identifies records that refer to the same real-world entity. Deep learning approaches improved the generalization ability of entity matching models, but hardly overcame the impact of noisy or incomplete data sources. In real scenes, an entity usually consists of multiple semantic facets, called aspects. In this paper, we focus on entity augmentation, namely retrieving the values of missing aspects. The relationship between aspects is naturally suitable to be represented by a knowledge graph, where entity augmentation can be modeled as a link prediction problem. Our paper proposes a novel graph-based approach to solve entity augmentation. Specifically, we apply a dedicated random walk algorithm, which uses node types to limit the traversal length, and encodes graph structure into low-dimensional embeddings. 
The missing aspects can then be retrieved by a link prediction model. Furthermore, the augmented aspects, in a fixed order, serve as the input of a deep Siamese BiLSTM network for entity matching. We compared our method with state-of-the-art methods through extensive experiments on downstream ER tasks. The results show that our model outperforms the other methods on all evaluation metrics (accuracy, precision, recall, and f1-score) by a large margin, which demonstrates the effectiveness of our method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Entity resolution has a tremendous impact on applications and research, such as deduplication, record linkage and canonicalization. It is a common challenge in various domains including digital libraries, E-commerce, natural language understanding, etc. Applying deep learning methods to ER problems has become a research hotspot, as such approaches generalize well and improve prediction accuracy on unseen data. One of the remaining challenges in tackling ER tasks is poor data quality, such as missing values and ambiguity, which makes pairwise distance-measure approaches less effective on noisy content and context. In real-world applications, different types of aspects often interact with each other to form heterogeneous relations (Shi et al., 2018) in almost all networks. Another challenge of ER lies in how to express the relationships among heterogeneous aspects with a proper data structure.", "cite_spans": [ { "start": 801, "end": 819, "text": "(Shi et al., 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Advanced Graph Representation Learning (GRL), also called graph embedding, which aims to learn low-dimensional representations of nodes in networks, has attracted considerable attention in many real-world network applications (Perozzi et al., 2014) . The common pattern of these learning approaches is to employ some type of random walk to generate node sequences and then apply language models to map the nodes into the same semantic vector space. In a Knowledge Graph (KG), several related nodes often jointly represent a structural identity. An example of graph embedding in an aspect-based KG is shown in Figure 1 . The edges between aspects represent their co-occurrence in entities. Node colors in the graph represent different aspect types, and the thickness of an edge represents its weight. This method learns a latent space representation of aspects, which can be used by downstream machine learning tasks.", "cite_spans": [ { "start": 219, "end": 241, "text": "(Perozzi et al., 2014)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 601, "end": 609, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "An entity is composed of a set of aspects, and the relationships among aspects are naturally represented in the form of a graph. In this paper, GRL is introduced to resolve entity augmentation in ER problems. We apply to GRL a heuristic feedback mechanism that has long been proven successful in handling combinatorial optimization problems. This mechanism can significantly reduce the aggregation phenomenon caused by the long-tail distribution of aspects, and generate more diverse and reasonable traversal sequences. 
We develop an algorithm (ASPECT2VEC) that learns latent representations of aspects in a KG by modeling a stream of random walks. ASPECT2VEC applies neural language models to process a special language composed of a set of heuristically generated walks. The resulting latent space representation of aspects captures neighborhood similarity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We apply ASPECT2VEC to resolve entity augmentation. First, link prediction in the KG is implemented and used to estimate the likelihood of links between aspects. This step retrieves missing aspects of entities. Then, deep Siamese networks are constructed to generate high-quality hash codes based on semantics-preserving vectors of aspect sequences. Finally, the hashing method is employed to evaluate the performance of pairwise matching in ER.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our main contributions are as follows: * ASPECT2VEC. We propose a flexible aspect representation learning framework. The framework adopts a novel heuristic feedback method to generate reasonable subgraphs in an aspect-based KG, while preventing the long-tail phenomenon caused by high-frequency aspects. Moreover, we encode aspects into a continuous vector space while preserving their semantic associations. This enriches the expressiveness of representation learning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "* Novel Problem Modeling. We model entity augmentation in ER as a link prediction task in a KG. Normally the KG is constructed from the observed interactions between aspects, which may be incomplete or inaccurate. Thus the challenge of data augmentation lies in measuring the likelihood of links between aspects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "* Evaluation. We evaluate the quality of our aspect representations on downstream pairwise matching problems. The method shows significant improvements over several state-of-the-art methodologies on real public E-commerce data sets. This opens new directions for exploring data quality problems in the E-commerce field.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given a set of entities U and a set of aspects A, each u_i \u2208 U corresponds to a series of aspects {a_1, a_2, ..., a_m} \u2282 A. In this paper, entity resolution is cast as a pairwise matching problem. The target of ER is to employ aspects to generate discriminative hash codes so that similar pairs can be easily distinguished from dissimilar ones. Let G = (V, E, W) denote a weighted undirected graph, where V, E and W represent the node set, edge set and weight set, respectively. Each node v \u2208 V refers to an aspect, and each edge refers to the co-occurrence of two aspects in entities. Each weight w \u2208 W represents the number of times the two aspects co-occur. A pairwise labeled dataset T is created with triples {(u_i, u_j, y)}, where u_i, u_j \u2208 U is a pair of entities and y is a boolean label representing whether the two entities match.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, 
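{ "text": "To make the construction above concrete, the following minimal sketch builds the weighted co-occurrence graph from entity records; it is an illustration under our notation, not the paper's implementation, and it assumes networkx plus a simple list-of-aspect-lists record layout:

import networkx as nx
from itertools import combinations

def build_aspect_graph(entities):
    # entities: iterable of aspect lists, one list per entity record
    G = nx.Graph()
    for aspects in entities:
        for a, b in combinations(set(aspects), 2):
            # edge weight counts how many entities the two aspects co-occur in
            w = G[a][b]['weight'] + 1 if G.has_edge(a, b) else 1
            G.add_edge(a, b, weight=w)
    return G

G = build_aspect_graph([['brand:dyson', 'type:upright', 'color:grey'],
                        ['brand:dyson', 'type:upright', 'color:red']])", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, 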
{ "text": "Aspect representation learning encodes aspects into a continuous vector space while preserving the semantic associations in the graph. To dig deeper into the problem, we propose ASPECT2VEC, which leverages a dedicated random walk to learn latent representations of nodes in the aspect-based KG. Figure 2 (a) shows a schematic diagram of ASPECT2VEC. Considering that aspects often have fixed types, the dedicated random walk helps generate reasonable sequences of aspects and avoids exhaustive search. This method lays the foundation for downstream entity augmentation.", "cite_spans": [], "ref_spans": [ { "start": 291, "end": 299, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Aspect Representation Learning", "sec_num": "2.2" }, { "text": "Swarm intelligence algorithms such as ant colony optimization (ACO) (Dorigo and St\u00fctzle, 2019) have excellent performance in solving combinatorial optimization problems. Artificial ants in ACO communicate with each other via pheromones, leading to a heuristic positive feedback mechanism. Inspired by this idea, we propose a novel traversal approach, which applies a heuristic feedback mechanism and tabu search to generate reasonable subgraphs. A walk \u03c9 = v_0, . . . , v_n is defined as a sequence of nodes where (v_i, v_{i+1}) \u2208 E. Specifically, the k-th walk moves from node v_i to node v_j with probability p^k_{i,j}, as defined in equation (1):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dedicated Walk", "sec_num": "2.2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p^k_{i,j} = (\u03c4_{i,j}^\u03b1 \u00b7 \u03b7_{i,j}^\u03b2) / \u2211_{r \u2208 \u0393(i)} (\u03c4_{i,r}^\u03b1 \u00b7 \u03b7_{i,r}^\u03b2)", "eq_num": "(1)" } ], "section": "Dedicated Walk", "sec_num": "2.2.1" }, { "text": "where \u03c4_{i,j} represents the degree of freshness of the hop from node v_i to node v_j, and \u03b1 \u2265 0 is a parameter that controls the influence of \u03c4_{i,j}. Freshness is initialized with a constant \u03c4_0 and reflects how often edge (v_i, v_j) has been visited during traversal. \u03b7_{i,j} describes the attractiveness of the hop from v_i to v_j, which is typically set to w_{i,j}, and \u03b2 \u2265 1 is a parameter that controls the influence of \u03b7_{i,j}. \u0393(i) is the set of 1-hop neighbors of v_i. Degrees of freshness are updated when a walk is completed, decreasing the values corresponding to its moves.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dedicated Walk", "sec_num": "2.2.1" }, 
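{ "text": "As a concrete illustration of the transition rule in equation (1), the sketch below computes the hop distribution over the 1-hop neighbors of a node; all names are illustrative, and the freshness and attractiveness tables are assumed to be dictionaries keyed by edge:

def hop_probabilities(G, i, tau, eta, alpha=1.0, beta=1.0):
    # Normalized scores tau^alpha * eta^beta over the 1-hop neighbors of i (equation 1).
    neighbors = list(G[i])
    scores = [(tau[(i, j)] ** alpha) * (eta[(i, j)] ** beta) for j in neighbors]
    total = sum(scores)
    return {j: s / total for j, s in zip(neighbors, scores)}

The roulette wheel selection of Section 2.2.2 then samples the next hop from this distribution while skipping forbidden node types.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dedicated Walk", "sec_num": "2.2.1" }, 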
{ "text": "An example of a global freshness updating rule is: \u03c4_{i,j} \u2190 (1 \u2212 \u03c1) \u00b7 \u03c4_{i,j} if (v_i, v_j) belongs to the k-th walk, and \u03c4_{i,j} is left unchanged otherwise. (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dedicated Walk", "sec_num": "2.2.1" }, { "text": "where \u03c1 is the freshness decay coefficient. The value of \u03c1 depends on \u2211_k d^k_{i,j}, the total length of the k-th walk, where d_{i,j} = 1/w_{i,j} is the shortest distance between v_i and v_j: \u03c1 = 1 / (1 + \u2211_k d^k_{i,j}). (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dedicated Walk", "sec_num": "2.2.1" }, { "text": "To guarantee the stochastic properties of the walk, a roulette wheel selection method is adopted to choose the next hop in a walk, as shown in Algorithm 1. This method keeps the algorithm from falling into greedy search.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Roulette Wheel Selection", "sec_num": "2.2.2" }, { "text": "Input: v_x: current node; \u0393(x): one-hop neighbors of v_x; N_k: forbidden nodes in the k-th walk; Output: \u03d5: the next hop node;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1: Roulette Wheel Selection", "sec_num": null }, { "text": "1 \u03d5 = \u22121; 2 \u00b5 = random(0.0, 1.0); 3 for each v_z \u2208 \u0393(x) do 4 \u00b5 \u2190 \u00b5 \u2212 p_{x,z}; 5 if (\u00b5 < 0) && (v_z \u2209 N_k) then 6 \u03d5 = v_z; 7 break; 8 if (\u03d5 == \u22121) then 9 \u03d5 = random(v \u2208 \u0393(x));", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1: Roulette Wheel Selection", "sec_num": null }, { "text": "10 update(N_k); 11 return \u03d5;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1: Roulette Wheel Selection", "sec_num": null }, { "text": "Aspects that connect to similar neighbors and have the same types in a graph are considered structurally equivalent. Here, each entity owns only one value for a given aspect type, so we restrict the walk length to the number of aspect types. If an aspect is visited during a walk, the nodes with the same type are added to the forbidden node set N_k.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1: Roulette Wheel Selection", "sec_num": null }, { "text": "Algorithm 2 shows how the dedicated walk generates all subgraphs. At the start of the algorithm, all parameters are initialized, including the distance matrix and the freshness matrix. In this method, degrees of freshness are the key to the heuristic, and the randomness of the algorithm is achieved through roulette wheel selection. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1: Roulette Wheel Selection", "sec_num": null }, { "text": "2 for each v \u2208 V do 3 \u03c9 = dedicatedRandomWalk(v, \u03c7); 4 updateGlobalFreshness(); 5 \u03bb.add(\u03c9); 6 return \u03bb 2.2.3 ASPECT2VEC", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1: Roulette Wheel Selection", "sec_num": null }, { "text": "SkipGram works as a language model that maximizes the co-occurrence probability among the words appearing within a window. Compared to continuous bag-of-words (CBOW), SkipGram weighs nearby context words more heavily than distant context words. In ASPECT2VEC, SkipGram is applied to convert the aspects into a low-dimensional vector space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1: Roulette Wheel Selection", "sec_num": null }, 
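{ "text": "As a sketch of this embedding step, the heuristically generated walks can be fed to an off-the-shelf SkipGram trainer; gensim's Word2Vec is an assumed stand-in for the paper's own trainer (sg=1 selects SkipGram, hs=1 the Hierarchical Softmax discussed below), and the walks and dimensions are illustrative:

from gensim.models import Word2Vec

# walks: output of Algorithm 2, one aspect sequence per walk
walks = [['brand:dyson', 'type:upright', 'color:grey'],
         ['brand:dyson', 'type:upright', 'color:red']]
model = Word2Vec(sentences=walks, vector_size=128, window=5, min_count=0, sg=1, hs=1, epochs=10)
aspect_vec = model.wv['brand:dyson']  # a 128-dimensional aspect embedding", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ASPECT2VEC", "sec_num": "2.2.3" }, 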
{ "text": "Algorithm 2 generates almost all reasonable aspect sequences. After that, each aspect node is encoded into a corresponding representation vector. Moreover, to maximize the probability of a node's neighbors appearing in the walk, Hierarchical Softmax is used to approximate the probability distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithm 1: Roulette Wheel Selection", "sec_num": null }, { "text": "Entity augmentation is modeled as a link prediction (LP) problem, namely predicting whether two nodes in a graph should have a link. The challenge lies in identifying spurious interactions and predicting missing links. The original connection information between aspects can be obtained from the KG and utilized to train a supervised model for LP.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Augmentation", "sec_num": "2.2.4" }, { "text": "We complete the entity augmentation task with a two-step solution: recall and classification. The original aspects are mapped into the vector space, and the nearest neighbors that belong to the missing aspect types are recalled as candidates (the default size of the recall set is 10). Then the neighbors that are most likely to have connections with the query aspects are selected as supplementary aspects. We build the LP model with a Siamese MLP structure. The input of the model is two aspect vectors, and the objective function is the contrastive loss. Accurate aspect representations facilitate entity augmentation, which greatly helps resolve downstream ER problems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Augmentation", "sec_num": "2.2.4" }, 
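{ "text": "A minimal sketch of this LP model, assuming PyTorch and illustrative layer sizes (the paper does not specify them), is given below; the contrastive loss matches the form stated later in equation (4):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseMLP(nn.Module):
    # Shared-weight encoder applied to both aspect vectors of a candidate link.
    def __init__(self, dim=128, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))

    def forward(self, a, b):
        return self.encoder(a), self.encoder(b)

def contrastive_loss(za, zb, y, margin=1.0):
    # y is 1 for linked aspect pairs and 0 otherwise.
    eps = F.pairwise_distance(za, zb)
    return 0.5 * torch.mean(y * eps.pow(2) + (1 - y) * torch.clamp(margin - eps, min=0).pow(2))

At inference time, the recalled candidates are ranked by the distance between their encoded vectors, and the closest ones are kept as supplementary aspects.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entity Augmentation", "sec_num": "2.2.4" }, 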
{ "text": "Deep semantic hashing uses deep neural networks to generate discriminative hash codes so that similar pairs can be easily distinguished from dissimilar ones (Suthee et al., 2018). Our semantic hashing method is implemented by a deep Siamese network and vector quantization.", "cite_spans": [ { "start": 159, "end": 180, "text": "(Suthee et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Deep Semantic Hashing", "sec_num": "2.3" }, { "text": "In the pairwise-preserving hashing method, the Siamese network is applied to explore the inner representation of symmetrical objects. We construct a deep bidirectional long short-term memory (BiLSTM) network with hierarchical attention (Yang et al., 2016) as the base structure. This model takes symmetrical input, as shown in Figure 2 (b). During the training process, the symmetrical parts share the neural weights of the network. The loss function applied here is the contrastive loss (Nicosia and Moschitti, 2017) based on Euclidean distance, which can be defined as:", "cite_spans": [ { "start": 236, "end": 253, "text": "(Yang et al., 2016)", "ref_id": "BIBREF18" }, { "start": 483, "end": 512, "text": "(Nicosia and Moschitti, 2017)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 325, "end": 337, "text": "Figure 2 (b)", "ref_id": null } ], "eq_spans": [], "section": "Siamese Network", "sec_num": "2.3.1" }, { "text": "L = (1/2N) \u2211_{n=1}^{N} [ y_n \u03b5_n^2 + (1 \u2212 y_n) max(margin \u2212 \u03b5_n, 0)^2 ], where \u03b5_n = ||a_n \u2212 b_n||_2 (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Siamese Network", "sec_num": "2.3.1" }, { "text": "where y_n denotes whether the pair matches, \u03b5_n is the Euclidean distance between the two output vectors a_n and b_n, and margin is the default threshold. The loss function learns a mapping from the high-dimensional to the low-dimensional space that places similar input vectors at nearby points on the output manifold and dissimilar vectors at distant points. In the deepest layer of the Siamese network, we apply a fully connected layer with the Softsign activation function, which polarizes the activation values so that they are easily converted to binary codes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Siamese Network", "sec_num": "2.3.1" }, { "text": "Hash codes are widely used in information retrieval for their O(1) lookup complexity and data compression. Vector quantization works by dividing a large set of vectors into groups, each represented by its centroid. Using the output of the last layer of the network, we obtain the vectors corresponding to the aspect sequences. We apply k-means clustering to every dimension of the output vectors to fit the distribution of binary codes; that is, each dimension yields two clusters. For a multidimensional vector, this dimension-independent quantization divides the values into discrete groups.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector Quantization", "sec_num": "2.3.2" }, 
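{ "text": "The dimension-independent quantization described above admits a short sketch; sklearn's KMeans is an assumed stand-in for the paper's clustering code, and the array layout is our assumption:

import numpy as np
from sklearn.cluster import KMeans

def quantize_to_bits(Z):
    # Z: (n_items, n_bits) activations from the deepest Siamese layer.
    codes = np.zeros(Z.shape, dtype=np.uint8)
    for d in range(Z.shape[1]):
        km = KMeans(n_clusters=2, n_init=10).fit(Z[:, d].reshape(-1, 1))
        hi = int(np.argmax(km.cluster_centers_.ravel()))  # larger centroid maps to bit 1
        codes[:, d] = (km.labels_ == hi).astype(np.uint8)
    return codes

Matching then reduces to comparing 64-bit codes, for example by Hamming distance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vector Quantization", "sec_num": "2.3.2" }, 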
{ "text": "Our experiments on ASPECT2VEC consist of two parts, namely link prediction and entity resolution. Each experiment compares ASPECT2VEC with several state-of-the-art graph embedding methods, including DEEPWALK (Perozzi et al., 2014), LINE (Tang et al., 2015), NODE2VEC (Grover and Leskovec, 2016) and STRUC2VEC (Ribeiro et al., 2017), on two E-commerce datasets. The comparison includes link prediction as well as pairwise matching by hash codes.", "cite_spans": [ { "start": 208, "end": 230, "text": "(Perozzi et al., 2014)", "ref_id": "BIBREF8" }, { "start": 238, "end": 257, "text": "(Tang et al., 2015)", "ref_id": "BIBREF14" }, { "start": 269, "end": 296, "text": "(Grover and Leskovec, 2016)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Evaluation", "sec_num": "3" }, { "text": "We select two public E-commerce datasets with different sizes and sparsity for the experiments. The Flipkart dataset i contains 20000 products; the density of its aspect data is 0.08% (32569 nodes, 426202 edges). The eBay dataset ii contains more than 8000 vacuum cleaner items; the density of its aspect data is 0.15% (22841 nodes, 401973 edges). More than one hundred thousand entity pairs are constructed from each dataset, where the label is generated from the UPC/EAN iii in eBay and the item title in Flipkart. The ratio of the training set to the test set is kept at four to one by random sampling.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "3.1" }, { "text": "Each kind of product entity has its own main aspect types, so the length and order of the generated sequences can be determined by restricting the aspect types, which is also utilized in tabu search. For ASPECT2VEC, \u03b1 and \u03b2 are both set to 1, balancing the heuristic weight between \u03c4_{i,j} and \u03b7_{i,j}. \u03c4_0 is set to 1 to initialize the freshness matrix. For a fair comparison, the parameters of the neural networks used by the different algorithms are the same. The deep models for link prediction and entity resolution are a Siamese network with dense layers and a deep Siamese BiLSTM, respectively. The hash code length in pairwise matching is set to 64 bits, corresponding to the dimensionality of the output vector. Table 1 shows the evaluation results on link prediction between aspects, and ASPECT2VEC clearly outperforms all other methods on the accuracy, precision, recall, and f1-score metrics. In ASPECT2VEC, the dedicated random walk takes the co-occurrence between aspects as the heuristic factor to choose the next hop, and captures deep potential connections rather than hopping randomly. Higher accuracy indicates that the method can not only recover missing links, but also identify spurious or incorrect links. Accurate link prediction facilitates entity augmentation. Table 2 shows the results of the different methods on pairwise matching. The attention-based BiLSTM can accurately capture the contribution of different aspect types to the final result, and the pairwise learning method fully exploits the symmetric and asymmetric information between pairs. Compared to the other methods, ASPECT2VEC sacrifices a little precision but greatly improves recall. The improved accuracy demonstrates the ability to distinguish different kinds of entities, and the hashing method enables very fast matching. The experimental results show the effectiveness of our method for entity augmentation and the resulting increase in overall performance on entity resolution. (Footnotes: i https://www.kaggle.com/PromptCloudHQ/flipkart-products; ii https://www.kaggle.com/zhenqizhao/ebay-vacuum-cleaner-products; iii UPC stands for Universal Product Code and EAN stands for European Article Number, both used for product identification.)", "cite_spans": [], "ref_spans": [ { "start": 712, "end": 719, "text": "Table 1", "ref_id": "TABREF2" }, { "start": 1515, "end": 1522, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experiment Setting", "sec_num": "3.2" }, 
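{ "text": "For reference, the settings reported in this section can be collected into a single configuration sketch; the variable names are ours, not from the paper's code:

SETTINGS = {
    'alpha': 1,                  # influence of freshness tau in equation (1)
    'beta': 1,                   # influence of attractiveness eta in equation (1)
    'tau_0': 1,                  # initial freshness value
    'recall_size': 10,           # candidate neighbors per missing aspect type
    'hash_bits': 64,             # hash code length / output dimensionality
    'train_test_ratio': (4, 1),  # random split of labeled entity pairs
}", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setting", "sec_num": "3.2" }, 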
{ "text": "Entity resolution has attracted the interest of a large number of researchers in recent years. With the development of deep learning (DL), a growing number of DL methods are applied to solve ER problems (Mudgal et al., 2018). End-to-end deep matching models (Nie et al., 2019; Fu et al., 2020; Zhao and He, 2019) adopt similarity measures or semantic features of attributes for ER, especially for dealing with heterogeneous entities. DL often requires large amounts of labeled training data, which are expensive to obtain. Therefore, transfer learning methods, based on a pre-trained model, are employed to solve ER tasks with little or no training data (Zhao and He, 2019). In addition, many unsupervised methods address the data labeling problem, particularly focusing on machine labeling and label error correction (Hou et al., 2019; Wu et al., 2020; Chen et al., 2020). Some of the methods above focus on handling dirty or heterogeneous data. However, how to deal with incomplete data and improve data quality in ER still needs further research. Our method applies graph representation learning to resolve this problem. Graph representation learning is dedicated to mapping nodes in networks into the same vector space, while maintaining the semantic associations between nodes (Perozzi et al., 2014; Grover and Leskovec, 2016; Ribeiro et al., 2017; Shi et al., 2018; Tang et al., 2015; Wang et al., 2016; Ristoski and Paulheim, 2016). This kind of technique has received significant attention in the last few years with the development of natural language processing. The quality of the generated vectors is often measured by link prediction and node classification (Zhang and Chen, 2018; Ying et al., 2018; Trouillon et al., 2016). Previous researchers focused on the breadth and depth of graph traversal, but few took the node type into consideration during the random walk. In addition, how to avoid the long-tail phenomenon while generating reasonable sequences during traversal is also a problem worth exploring.", "cite_spans": [ { "start": 203, "end": 224, "text": "(Mudgal et al., 2018)", "ref_id": "BIBREF5" }, { "start": 259, "end": 277, "text": "(Nie et al., 2019;", "ref_id": "BIBREF7" }, { "start": 278, "end": 294, "text": "Fu et al., 2020;", "ref_id": "BIBREF3" }, { "start": 295, "end": 313, "text": "Zhao and He, 2019)", "ref_id": "BIBREF20" }, { "start": 651, "end": 670, "text": "(Zhao and He, 2019)", "ref_id": "BIBREF20" }, { "start": 829, "end": 846, "text": "(Hou et al., 2019;", "ref_id": "BIBREF0" }, { "start": 847, "end": 863, "text": "Wu et al., 2020;", "ref_id": "BIBREF9" }, { "start": 864, "end": 882, "text": "Chen et al., 2020)", "ref_id": "BIBREF1" }, { "start": 1328, "end": 1350, "text": "(Perozzi et al., 2014;", "ref_id": "BIBREF8" }, { "start": 1351, "end": 1377, "text": "Grover and Leskovec, 2016;", "ref_id": "BIBREF4" }, { "start": 1378, "end": 1399, "text": "Ribeiro et al., 2017;", "ref_id": "BIBREF10" }, { "start": 1400, "end": 1417, "text": "Shi et al., 2018;", "ref_id": "BIBREF12" }, { "start": 1418, "end": 1436, "text": "Tang et al., 2015;", "ref_id": "BIBREF14" }, { "start": 1437, "end": 1455, "text": "Wang et al., 2016;", "ref_id": "BIBREF16" }, { "start": 1456, "end": 1484, "text": "Ristoski and Paulheim, 2016)", "ref_id": "BIBREF11" }, { "start": 1718, "end": 1740, "text": "(Zhang and Chen, 2018;", "ref_id": "BIBREF19" }, { "start": 1741, "end": 1759, "text": "Ying et al., 2018;", "ref_id": "BIBREF17" }, { "start": 1760, "end": 1783, "text": "Trouillon et al., 2016)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "4" }, { "text": "In this paper, we proposed a novel aspect representation learning framework, ASPECT2VEC, which resolves the entity augmentation problem in ER by modeling it as a link prediction problem in a KG. ASPECT2VEC collaboratively explores dedicated random walks and captures semantic information between nodes in a network. Moreover, through extensive experiments on link prediction and deep semantic hashing, we demonstrated the superiority of the proposed framework over several state-of-the-art methods. 
Furthermore, the dedicated random walk is flexible and offers great potential for parallelism, which is worth exploring in future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [ { "text": "The research is supported by the Key Projects of Philosophy and Social Sciences Research of the Chinese Ministry of Education under Grant 19JZD021. Assistance provided by eBay Ads Shanghai Director Hua Yang, Director Wei Fang, and Manager Hansi Wu was greatly appreciated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Gradual machine learning for entity resolution", "authors": [ { "first": "B", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Q", "middle": [], "last": "Chen", "suffix": "" }, { "first": "J", "middle": [], "last": "Shen", "suffix": "" }, { "first": "X", "middle": [], "last": "Liu", "suffix": "" }, { "first": "P", "middle": [], "last": "Zhong", "suffix": "" }, { "first": "Y", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Z", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Z", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "WWW 2019", "volume": "", "issue": "", "pages": "3526--3530", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hou B., Chen Q., Shen J., Liu X., Zhong P., Wang Y., Chen Z., and Li Z. 2019. Gradual machine learning for entity resolution. In WWW 2019, pages 3526-3530.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Towards interpretable and learnable risk analysis for entity resolution", "authors": [ { "first": "Zhaoqiang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Qun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Boyi", "middle": [], "last": "Hou", "suffix": "" }, { "first": "Zhanhuai", "middle": [], "last": "Li", "suffix": "" }, { "first": "Guoliang", "middle": [], "last": "Li", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data", "volume": "", "issue": "", "pages": "1165--1180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhaoqiang Chen, Qun Chen, Boyi Hou, Zhanhuai Li, and Guoliang Li. 2020. Towards interpretable and learnable risk analysis for entity resolution. In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, pages 1165-1180.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Ant colony optimization: overview and recent advances", "authors": [ { "first": "Marco", "middle": [], "last": "Dorigo", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "St\u00fctzle", "suffix": "" } ], "year": 2019, "venue": "Handbook of metaheuristics", "volume": "", "issue": "", "pages": "311--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marco Dorigo and Thomas St\u00fctzle. 2019. Ant colony optimization: overview and recent advances. In Handbook of metaheuristics, pages 311-351. 
Springer.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Hierarchical matching network for heterogeneous entity resolution", "authors": [ { "first": "Cheng", "middle": [], "last": "Fu", "suffix": "" }, { "first": "Xianpei", "middle": [], "last": "Han", "suffix": "" }, { "first": "Jiaming", "middle": [], "last": "He", "suffix": "" }, { "first": "Le", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cheng Fu, Xianpei Han, Jiaming He, and Le Sun. 2020. Hierarchical matching network for heterogeneous entity resolution. pages 3637-3643, 07.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "node2vec: Scalable feature learning for networks", "authors": [ { "first": "Aditya", "middle": [], "last": "Grover", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "855--864", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 855-864.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Deep learning for entity matching: A design space exploration", "authors": [ { "first": "Sidharth", "middle": [], "last": "Mudgal", "suffix": "" }, { "first": "Han", "middle": [], "last": "Li", "suffix": "" }, { "first": "Theodoros", "middle": [], "last": "Rekatsinas", "suffix": "" }, { "first": "Anhai", "middle": [], "last": "Doan", "suffix": "" }, { "first": "Youngchoon", "middle": [], "last": "Park", "suffix": "" }, { "first": "Ganesh", "middle": [], "last": "Krishnan", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 International Conference on Management of Data", "volume": "", "issue": "", "pages": "19--34", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sidharth Mudgal, Han Li, Theodoros Rekatsinas, AnHai Doan, Youngchoon Park, Ganesh Krishnan, Rohit Deep, Esteban Arcaute, and Vijay Raghavendra. 2018. Deep learning for entity matching: A design space exploration. In Proceedings of the 2018 International Conference on Management of Data, pages 19-34.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Accurate sentence matching with hybrid siamese networks", "authors": [ { "first": "Massimo", "middle": [], "last": "Nicosia", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 ACM on Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "2235--2238", "other_ids": {}, "num": null, "urls": [], "raw_text": "Massimo Nicosia and Alessandro Moschitti. 2017. Accurate sentence matching with hybrid siamese networks. 
In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 2235-2238.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Deep sequence-to-sequence entity matching for heterogeneous entity resolution", "authors": [ { "first": "Hao", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Xianpei", "middle": [], "last": "Han", "suffix": "" }, { "first": "Ben", "middle": [], "last": "He", "suffix": "" }, { "first": "Le", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Bo", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Suhui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Kong", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM)", "volume": "", "issue": "", "pages": "629--638", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Nie, Xianpei Han, Ben He, Le Sun, Bo Chen, Wei Zhang, Suhui Wu, and Hao Kong. 2019. Deep sequence-to-sequence entity matching for heterogeneous entity resolution. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM), pages 629-638.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Deepwalk: Online learning of social representations", "authors": [ { "first": "Bryan", "middle": [], "last": "Perozzi", "suffix": "" }, { "first": "Rami", "middle": [], "last": "Al-Rfou", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Skiena", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "701--710", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701-710.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "ZeroER: Entity resolution using zero labeled examples", "authors": [ { "first": "R", "middle": [], "last": "Wu", "suffix": "" }, { "first": "S", "middle": [], "last": "Chaba", "suffix": "" }, { "first": "S", "middle": [], "last": "Sawlani", "suffix": "" }, { "first": "X", "middle": [], "last": "Chu", "suffix": "" }, { "first": "S", "middle": [], "last": "Thirumuruganathan", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data", "volume": "", "issue": "", "pages": "1149--1164", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wu R., Chaba S., Sawlani S., Chu X., and Thirumuruganathan S. 2020. ZeroER: Entity resolution using zero labeled examples. 
In Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, pages 1149-1164.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "struc2vec: Learning node representations from structural identity", "authors": [ { "first": "Leonardo", "middle": [ "F", "R" ], "last": "Ribeiro", "suffix": "" }, { "first": "Pedro", "middle": [ "H", "P" ], "last": "Saverese", "suffix": "" }, { "first": "Daniel", "middle": [ "R" ], "last": "Figueiredo", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining", "volume": "2", "issue": "", "pages": "385--394", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leonardo FR Ribeiro, Pedro HP Saverese, and Daniel R Figueiredo. 2017. struc2vec: Learning node representations from structural identity. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pages 385-394.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Rdf2vec: Rdf graph embeddings for data mining", "authors": [ { "first": "Petar", "middle": [], "last": "Ristoski", "suffix": "" }, { "first": "Heiko", "middle": [], "last": "Paulheim", "suffix": "" } ], "year": 2016, "venue": "International Semantic Web Conference", "volume": "", "issue": "", "pages": "498--514", "other_ids": {}, "num": null, "urls": [], "raw_text": "Petar Ristoski and Heiko Paulheim. 2016. Rdf2vec: Rdf graph embeddings for data mining. In International Semantic Web Conference, pages 498-514. Springer.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Aspem: Embedding learning by aspects in heterogeneous information networks", "authors": [ { "first": "Yu", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Huan", "middle": [], "last": "Gui", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Kaplan", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 SIAM International Conference on Data Mining", "volume": "", "issue": "", "pages": "144--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yu Shi, Huan Gui, Qi Zhu, Lance Kaplan, and Jiawei Han. 2018. Aspem: Embedding learning by aspects in heterogeneous information networks. In Proceedings of the 2018 SIAM International Conference on Data Mining, pages 144-152. SIAM.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Deep semantic text hashing with weak supervision", "authors": [ { "first": "Chaidaroon", "middle": [], "last": "Suthee", "suffix": "" }, { "first": "Ebesu", "middle": [], "last": "Travis", "suffix": "" }, { "first": "Fang", "middle": [], "last": "Yi", "suffix": "" } ], "year": 2018, "venue": "The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval", "volume": "", "issue": "", "pages": "1109--1112", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chaidaroon Suthee, Ebesu Travis, and Fang Yi. 2018. Deep semantic text hashing with weak supervision. 
In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1109-1112.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Line: Large-scale information network embedding", "authors": [ { "first": "Jian", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Mingzhe", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Yan", "suffix": "" }, { "first": "Qiaozhu", "middle": [], "last": "Mei", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 24th international conference on world wide web", "volume": "", "issue": "", "pages": "1067--1077", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. Line: Large-scale information network embedding. In Proceedings of the 24th international conference on world wide web, pages 1067-1077.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Complex embeddings for simple link prediction", "authors": [ { "first": "Th\u00e9o", "middle": [], "last": "Trouillon", "suffix": "" }, { "first": "Johannes", "middle": [], "last": "Welbl", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Riedel", "suffix": "" }, { "first": "\u00c9ric", "middle": [], "last": "Gaussier", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Bouchard", "suffix": "" } ], "year": 2016, "venue": "International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Th\u00e9o Trouillon, Johannes Welbl, Sebastian Riedel, \u00c9ric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. International Conference on Machine Learning (ICML).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Structural deep network embedding", "authors": [ { "first": "Daixin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Wenwu", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining", "volume": "", "issue": "", "pages": "1225--1234", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daixin Wang, Peng Cui, and Wenwu Zhu. 2016. Structural deep network embedding. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1225-1234.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Hierarchical graph representation learning with differentiable pooling", "authors": [ { "first": "Zhitao", "middle": [], "last": "Ying", "suffix": "" }, { "first": "Jiaxuan", "middle": [], "last": "You", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Morris", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Will", "middle": [], "last": "Hamilton", "suffix": "" }, { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" } ], "year": 2018, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "4800--4810", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, and Jure Leskovec. 2018. 
Hierarchical graph representation learning with differentiable pooling. In Advances in neural information processing systems, pages 4800-4810.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Hierarchical attention networks for document classification", "authors": [ { "first": "Z", "middle": [], "last": "Yang", "suffix": "" }, { "first": "D", "middle": [], "last": "Yang", "suffix": "" }, { "first": "C", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "X", "middle": [], "last": "He", "suffix": "" }, { "first": "A", "middle": [], "last": "Smola", "suffix": "" }, { "first": "E", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies", "volume": "", "issue": "", "pages": "1480--1489", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang Z., Yang D., Dyer C., He X., Smola A., and Hovy E. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies, pages 1480-1489.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Link prediction based on graph neural networks", "authors": [ { "first": "Muhan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yixin", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2018, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5165--5175", "other_ids": {}, "num": null, "urls": [], "raw_text": "Muhan Zhang and Yixin Chen. 2018. Link prediction based on graph neural networks. In Advances in Neural Information Processing Systems, pages 5165-5175.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Auto-em: End-to-end fuzzy entity-matching using pre-trained deep models and transfer learning", "authors": [ { "first": "Chen", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Yeye", "middle": [], "last": "He", "suffix": "" } ], "year": 2019, "venue": "The World Wide Web Conference", "volume": "", "issue": "", "pages": "2413--2424", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chen Zhao and Yeye He. 2019. Auto-em: End-to-end fuzzy entity-matching using pre-trained deep models and transfer learning. In The World Wide Web Conference, pages 2413-2424.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Schema of graph embedding in an aspect-based KG.", "uris": null, "type_str": "figure", "num": null }, "TABREF1": { "text": "Algorithm 2: Dedicated Random Walk. Input: V: node set; \u03c7: node type constraints; Output: \u03bb: all walks; 1 Initialize all parameters;", "html": null, "content": "", "type_str": "table", "num": null }, "TABREF2": { "text": "Comparison of aspect representation methods on link prediction", "html": null, "content": "
DatasetsMethodsaccuracy precision recall f1-score
DEEPWALK0.79640.94240.6315 0.7562
LINE0.78310.92450.6166 0.7398
FlipkartNODE2VEC0.67480.93730.3745 0.5352
STRUC2VEC0.72250.91410.4911 0.6390
ASPECT2VEC0.81830.95250.6701 0.7867
DEEPWALK0.67970.97560.3596 0.5255
LINE0.81900.97090.6524 0.7804
eBayNODE2VEC0.70120.97640.4039 0.5714
STRUC2VEC0.67460.96320.3536 0.5173
ASPECT2VEC0.85340.98250.7155 0.8280
", "type_str": "table", "num": null }, "TABREF3": { "text": "Comparison of aspect representation methods on pairwise matching", "html": null, "content": "
DatasetsMethodsaccuracy precision recall f1-score
DEEPWALK0.91510.96230.4889 0.6484
LINE0.94720.97860.6855 0.8062
FlipkartNODE2VEC0.92040.96940.5192 0.6762
STRUC2VEC0.91070.96880.4570 0.6210
ASPECT2VEC0.96080.95120.7959 0.8667
DEEPWALK0.89870.99090.6232 0.7652
LINE0.91400.99090.6815 0.8076
eBayNODE2VEC0.91240.98870.6771 0.8038
STRUC2VEC0.89430.99150.6063 0.7524
ASPECT2VEC0.91870.99000.7001 0.8201
", "type_str": "table", "num": null } } } }