Dataset schema (field: type, observed length range):
aid: string, 9 to 15 characters
mid: string, 7 to 10 characters
abstract: string, 78 to 2.56k characters
related_work: string, 92 to 1.77k characters
ref_abstract: dict
1907.07349
2960672606
Edge computing in the Internet of Things brings applications and content closer to the users by introducing an additional computational layer in the network infrastructure, between the cloud and the resource-constrained data-producing devices and user equipment. This way, the opportunistic nature of the operational environment is addressed by introducing computational power at locations with low latency and high bandwidth. However, location-aware deployment of edge computing infrastructure requires a careful placement scheme for edge servers. To provide the best possible Quality of Service (QoS) for user applications, their proximity needs to be optimized. Moreover, the deployment faces practical constraints in budget, in the hardware requirements of servers, and in online load balancing between servers. To address these challenges, we formulate edge server placement as a capacitated location-allocation problem, minimizing the distance between servers and the access points of a real city-wide Wi-Fi network deployment. In our algorithm, we utilize both upper and lower server capacity constraints for load balancing. Furthermore, we enable sharing of workload between servers to facilitate deployments with low-capacity servers. The performance of the algorithm is demonstrated in placement scenarios, exemplified by high-capacity servers for edge computing and low-capacity servers for fog computing, with different parameters on a real-world data set. The data set consists of both dense deployment of access points in central areas and sparse deployment in suburban areas within the same network infrastructure. In comparison, we show that previous approaches do not sufficiently address such deployments. The presented algorithm provides optimal placements that minimize the distances and balance the workload with sharing, while following the capacity constraints.
A related problem, after the edge server placement has been completed, is how the runtime online workload is handled. This virtual machine allocation problem has been considered in a number of studies, e.g., @cite_10 @cite_7 @cite_23 @cite_46 @cite_26 , but this online problem is outside the focus of this paper.
{ "cite_N": [ "@cite_26", "@cite_7", "@cite_23", "@cite_46", "@cite_10" ], "mid": [ "2606297994", "2745755793", "2747744888", "2626044724", "2498207281" ], "abstract": [ "Fog computing, an extension of cloud computing services to the edge of the network to decrease latency and network congestion, is a relatively recent research trend. Although both cloud and fog offer similar resources and services, the latter is characterized by low latency with a wider spread and geographically distributed nodes to support mobility and real-time interaction. In this paper, we describe the fog computing architecture and review its different services and applications. We then discuss security and privacy issues in fog computing, focusing on service and resource availability. Virtualization is a vital technology in both fog and cloud computing that enables virtual machines (VMs) to coexist in a physical server (host) to share resources. These VMs could be subject to malicious attacks or the physical server hosting it could experience system failure, both of which result in unavailability of services and resources. Therefore, a conceptual smart pre-copy live migration approach is presented for VM migration. Using this approach, we can estimate the downtime after each iteration to determine whether to proceed to the stop-and-copy stage during a system failure or an attack on a fog computing node. This will minimize both the downtime and the migration time to guarantee resource and service availability to the end users of fog computing. Last, future research directions are outlined.", "Fog computing provides a decentralized approach to data processing and resource provisioning in the Internet of Things (IoT). Particular challenges of adopting fog-based computational resources are the adherence to geographical distribution of IoT data sources, the delay sensitivity of IoT services, and the potentially very large amounts of data emitted and consumed by IoT devices. Despite existing foundations, research on fog computing is still at its very beginning. A major research question is how to exploit the ubiquitous presence of small and cheap computing devices at the edge of the network in order to successfully execute IoT services. Therefore, in this paper, we study the placement of IoT services on fog resources, taking into account their QoS requirements. We show that our optimization model prevents QoS violations and leads to 35 less cost of execution if compared to a purely cloud-based approach.", "Internet of Things (IoT) will be one of the driving application for digital data generation in the next years as more than 50 billions of objects will be connected by 2020. IoT data can be processed and used by different devices spread all over the network. The traditional way of centralizing data processing in the Cloud can hardly scale because it cannot satisfy many of the latency critical IoT applications. In addition, it generates a too high network traffic when the number of objects and services increase. Fog infrastructure provides a beginning of an answer to such an issue. In this paper, we present a data placement strategy for Fog infrastructures called iFogStor. The objective of iFogStor is to take profit of the heterogeneity and location of Fog nodes to reduce the overall latency of storing and retrieving data in a Fog. 
We formulated the data placement problem as a Generalized Assignment Problem (GAP) and proposed two ways to solve it: 1) an exact solution using integer programming and 2) a heuristic one based on geographical zoning to reduce the solving time. Both solutions proved very good performance as they reduced the latency by more than 86 as compared to a Cloud based solution and by 60 as compared to a naive Fog solution. Using geographical zoning heuristic can allow solving problems with large number of Fog nodes efficiently and in a couple of seconds making iFogStor feasible in runtime and scalable.", "Mobile edge clouds (MECs) bring the benefits of the cloud closer to the user, by installing small cloud infrastructures at the network edge. This enables a new breed of real-time applications, such as instantaneous object recognition and safety assistance in intelligent transportation systems, that require very low latency. One key issue that comes with proximity is how to ensure that users always receive good performance as they move across different locations. Migrating services between MECs is seen as the means to achieve this. This article presents a layered framework for migrating active service applications that are encapsulated either in virtual machines (VMs) or containers. This layering approach allows a substantial reduction in service downtime. The framework is easy to implement using readily available technologies, and one of its key advantages is that it supports containers, which is a promising emerging technology that offers tangible benefits over VMs. The migration performance of various real applications is evaluated by experiments under the presented framework. Insights drawn from the experimentation results are discussed.", "Due to the growing demand of cloud services, allocation of energy efficient resources (CPU, memory, storage, etc.) and resources utilization are the major challenging issues of a large cloud data center. In this paper, we propose an Euclidean distance based multi-objective resources allocation in the form of virtual machines (VMs) and designed the VM migration policy at the data center. Further the allocation of VMs to Physical Machines (PMs) is carried out by our proposed hybrid approach of Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) referred to as HGAPSO. The proposed HGAPSO based resources allocation and VMs migration not only saves the energy consumption and minimizes the wastage of resources but also avoids SLA violation at the cloud data center. To check the performance of the proposed HGAPSO algorithm and VMs migration technique in the form of energy consumption, resources utilization and SLA violation, we performed the extended amount of experiment in both heterogeneous and homogeneous data center environments. To check the performance of proposed HGAPSO with VM migration, we compared our proposed work with branch-and-bound based exact algorithm. The experimental results show the superiority of HGAPSO and VMs migration technique over exact algorithm in terms of energy efficiency, optimal resources utilization, and SLA violation." ] }
We approximate the network topology using geospatial distances, where proximity is measured with squared Euclidean distances. This produces k-means-type clustering with spherical-like clusters and centralized cluster heads @cite_19 . The result is a star-like topology with spatially centralized edge servers in both dense and sparse areas, which contributes towards better proximity, i.e., QoS, particularly for the otherwise remote access points.
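To make the placement idea concrete, here is a minimal sketch of k-means-style server placement under squared Euclidean distance, assuming scikit-learn; the access-point coordinates and the server count k are invented for illustration and are not the paper's data or algorithm.

```python
# Minimal sketch (not the paper's exact algorithm): edge servers are
# placed at the centroids of access-point clusters, since k-means
# minimizes the sum of squared Euclidean distances within clusters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical access-point coordinates: a dense core plus a sparse suburb.
access_points = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(80, 2)),   # dense center
    rng.normal(loc=[5.0, 5.0], scale=2.0, size=(20, 2)),   # sparse suburb
])

k = 5  # assumed number of edge servers
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(access_points)

server_locations = km.cluster_centers_   # spatially centralized cluster heads
assignment = km.labels_                  # star-like AP -> server mapping
print(server_locations)
```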
{ "cite_N": [ "@cite_19" ], "mid": [ "2172264744" ], "abstract": [ "The capacitated centred clustering problem (CCCP) consists of defining a set of clusters with limited capacity and maximum proper similarity per cluster. Each cluster is composed of individuals from whom we can compute a centre value and hence, determine a similarity measure. The clusters must cover the demands of their individuals. This problem can be applied to the design of garbage collection zones, defining salesmen areas, etc. In this work, we present two variations (p-CCCP and Generic CCCP) of this problem and their mathematical programming formulations. We first focus our attention on the Generic CCCP, and then we create a new procedure for p-CCCP. These problems being NP-HARD, we propose a two-phase polynomial heuristic algorithm. The first phase is a constructive phase for which we will propose two variants: the first technique uses known cluster procedures oriented by a log-polynomial geometric tree search, the other one uses unconstrained to constrained clustering. The second phase is a refinement of the variable neighbourhood search (VNS). We also show the results we have obtained for tests from the CCP literature, the design of garbage collection zones, and salesmen areas distribution using the approach implemented for the SISROT® system." ] }
With limited server capacity, the access-point workload resulting from clustering may exceed the servers' capacity, as demonstrated in related work @cite_30 . Therefore, assigning an access point to exactly one server may reduce the QoS, and sharing of workload between servers should be enabled. Henceforth, we refer to this sharing of workload as fractional membership, in contrast to hard membership, as sketched below.
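As a rough illustration of fractional membership, the following sketch splits an access point's workload across several servers once the nearest one is full. The greedy nearest-first rule and all distances, loads, and capacities are invented assumptions, not the paper's algorithm.

```python
# Sketch of fractional membership: when the nearest server is at capacity,
# the remainder of an access point's workload spills over to the
# next-nearest server instead of being refused under a hard assignment.
import numpy as np

def assign_with_sharing(dist, load, capacity):
    """dist: (n_aps, n_servers) distances; load: per-AP workload;
    capacity: per-server upper capacity. Returns fractional shares."""
    n_aps, n_servers = dist.shape
    share = np.zeros((n_aps, n_servers))
    remaining = capacity.astype(float)
    # Greedy: handle APs closest to any server first; split any overflow.
    for ap in np.argsort(dist.min(axis=1)):
        need = float(load[ap])
        for s in np.argsort(dist[ap]):       # nearest servers first
            take = min(need, remaining[s])
            share[ap, s] = take
            remaining[s] -= take
            need -= take
            if need <= 0:
                break
    return share

dist = np.array([[1.0, 2.0], [1.5, 1.0], [0.5, 3.0]])
share = assign_with_sharing(dist, load=np.array([4, 4, 4]),
                            capacity=np.array([6, 6]))
print(share)  # rows sum to each AP's load; columns respect capacity
```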
{ "cite_N": [ "@cite_30" ], "mid": [ "2792782202" ], "abstract": [ "Mobile edge computing (MEC) is an emerging technology that aims at pushing applications and content close to the users (e.g., at base stations, access points, and aggregation networks) to reduce latency, improve quality of experience, and ensure highly efficient network operation and service delivery. It principally relies on virtualization-enabled MEC servers with limited capacity at the edge of the network. One key issue is to dimension such systems in terms of server size, server number, and server operation area to meet MEC goals. In this paper, we formulate this problem as a mixed integer linear program. We then propose a graph-based algorithm that, taking into account a maximum MEC server capacity, provides a partition of MEC clusters, which consolidates as many communications as possible at the edge. We use a dataset of mobile communications to extensively evaluate them with real world spatio-temporal human dynamics. In addition to quantifying macroscopic MEC benefits, the evaluation shows that our algorithm provides MEC area partitions that largely offload the core, thus pushing the load at the edge (e.g., with 10 small MEC servers between 55 and 64 of the traffic stay at the edge), and that are well balanced through time." ] }
1907.07274
2958781163
Multi-label classification plays an important role in perceiving the intricate contents of an aerial image and has triggered several related studies over the last few years. However, most of them make little effort to exploit label relations, although such dependencies are crucial for making accurate predictions. Although an LSTM layer can be introduced to model such label dependencies in a chain propagation manner, its efficiency might be questioned when certain labels are improperly inferred. To address this, we propose a novel aerial image multi-label classification network, the attention-aware label relational reasoning network. In particular, our network consists of three elemental modules: 1) a label-wise feature parcel learning module, 2) an attentional region extraction module, and 3) a label relational inference module. To be more specific, the label-wise feature parcel learning module is designed for extracting high-level label-specific features. The attentional region extraction module aims at localizing discriminative regions in these features and yielding attentional label-specific features. The label relational inference module finally predicts label existence using label relations reasoned from the outputs of the previous module. The proposed network is characterized by its capacity to extract discriminative label-wise features in a proposal-free way and to reason about label relations naturally and interpretably. In our experiments, we evaluate the proposed model on the UCM multi-label dataset and a newly produced dataset, the AID multi-label dataset. Quantitative and qualitative results on these two datasets demonstrate the effectiveness of our model. To facilitate progress in multi-label aerial image classification, the AID multi-label dataset will be made publicly available.
Zegeye and Demir @cite_38 propose a multi-label active learning framework built on a multi-label support vector machine (SVM), relying on both multi-label uncertainty and multi-label diversity. The authors of @cite_15 introduce a spatial and structured SVM for multi-label classification that considers spatial relations between a given patch and its neighbors. Similarly, @cite_4 employs a conditional random field (CRF) framework to model spatial contextual information among adjacent patches in order to improve the performance of classifying multiple object labels.
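A toy sketch of the margin-based multi-label uncertainty idea follows, under the assumption that one-vs-rest linear SVMs stand in for the ML-SVM of @cite_38 ; the data and labels are synthetic, and the "count of margins a sample falls inside" is one plausible reading of the uncertainty criterion.

```python
# Toy sketch of multi-label margin sampling: a sample is treated as more
# uncertain the more one-vs-rest SVM margins (|decision| < 1) it falls inside.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
Y = (rng.random((200, 4)) < 0.3).astype(int)   # hypothetical multi-labels

clf = OneVsRestClassifier(LinearSVC()).fit(X, Y)
scores = clf.decision_function(X)              # (n_samples, n_labels)
uncertainty = (np.abs(scores) < 1.0).sum(axis=1)

# An active learner would query the most uncertain images first.
query_order = np.argsort(-uncertainty)
print(query_order[:10])
```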
{ "cite_N": [ "@cite_38", "@cite_15", "@cite_4" ], "mid": [ "2897372286", "2804982862", "2794176907" ], "abstract": [ "This paper presents a novel multi-label active learning (MLAL) technique in the framework of multi-label remote sensing (RS) image scene classification problems. The proposed MLAL technique is developed in the framework of the multi-label SVM classifier (ML-SVM). Unlike the standard AL methods, the proposed MLAL technique redefines active learning by evaluating the informativeness of each image based on its multiple land-cover classes. Accordingly, the proposed MLAL technique is based on the joint evaluation of two criteria for the selection of the most informative images: i) multi-label uncertainty and ii) multi-label diversity. The multi-label uncertainty criterion is associated to the confidence of the multi-label classification algorithm in correctly assigning multi-labels to each image, whereas multi-label diversity criterion aims at selecting a set of un-annotated images that are as more diverse as possible to reduce the redundancy among them. In order to evaluate the multi-label uncertainty of each image, we propose a novel multi-label margin sampling strategy that: 1) considers the functional distances of each image to all ML-SVM hyperplanes; and then 2) estimates the occurrence on how many times each image falls inside the margins of ML-SVMs. If the occurrence is small, the classifiers are confident to correctly classify the considered image, and vice versa. In order to evaluate the multi-label diversity of each image, we propose a novel clustering-based strategy that clusters all the images inside the margins of the ML-SVMs and avoids selecting the uncertain images from the same clusters. The joint use of the two criteria allows one to enrich the training set of images with multi-labels. Experimental results obtained on a benchmark archive with 2100 images with their multi-labels show the effectiveness of the proposed MLAL method compared to the standard AL methods that neglect the evaluation of the uncertainty and diversity on multi-labels.", "We describe a novel multilabel classification approach based on a support vector machine (SVM) for the extremely high-resolution remote sensing images. Its underlying ideas consist to: 1) exploit inter-label relationships by means of a structured SVM and 2) incorporate spatial contextual information by adding to the cost function a term that encourages spatial smoothness into the structural SVM optimization process. The resulting formulation appears as an extension of the traditional SVM learning, in which our proposed model integrates the output structure and spatial information simultaneously during the training. Numerical experiments conducted on two different UAV- and airborne-acquired sets of images show the interesting properties of the proposed model, in particular, in terms of classification accuracy.", "In this letter, we formulate the multilabeling classification problem of unmanned aerial vehicle (UAV) imagery within a conditional random field (CRF) framework with the aim of exploiting simultaneously spatial contextual information and cross-correlation between labels. The pipeline of the framework consists of two main phases. First, the considered input UAV image is subdivided into a grid of tiles, which are processed thanks to an opportune representation and a multilayer perceptron classifier providing thus tile-wise multilabel prediction probabilities. 
In the second phase, a multilabel CRF model is applied to integrate spatial correlation between adjacent tiles and the correlation between labels within the same tile, with the objective to improve iteratively the multilabel classification map associated with the considered input UAV image. Experimental results achieved on two different UAV image data sets are reported and discussed." ] }
With the development of computational resources and deep learning, very recent approaches mainly resort to deep networks for multi-label classification. In @cite_25 , the authors make use of a standard CNN architecture to extract feature representations and then feed them into a multi-label classification layer, composed of customized thresholding operations, for predicting multiple labels. In @cite_42 , the authors demonstrate that training a CNN for multi-label classification with a limited amount of labeled data usually leads to a model with underwhelming performance, and propose a dynamic data augmentation method for enlarging training sets. More recently, Sumbul and Demir @cite_28 propose a CNN-RNN method for identifying labels in multi-spectral images, where a bidirectional LSTM is employed to model spatial relationships among image patches. In order to explore inherent correlations among object labels, @cite_67 proposes a CNN-LSTM hybrid network architecture to learn label dependencies for classifying object labels of aerial images.
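For concreteness, here is a minimal PyTorch sketch of the generic deep multi-label setup these works build on: a CNN backbone with per-label sigmoid outputs trained with binary cross-entropy. The tiny backbone, the label count, and the 0.5 threshold are illustrative assumptions, not any of the cited architectures.

```python
# Minimal sketch of deep multi-label classification: a CNN backbone feeds
# a linear head; each label gets an independent sigmoid output.
import torch
import torch.nn as nn

n_labels = 17  # assumed label count for illustration

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, n_labels),
)

x = torch.randn(8, 3, 64, 64)                    # dummy aerial image batch
targets = torch.randint(0, 2, (8, n_labels)).float()
logits = model(x)
loss = nn.BCEWithLogitsLoss()(logits, targets)   # per-label binary loss
loss.backward()

# Multiple labels are predicted by thresholding the sigmoid outputs.
preds = (torch.sigmoid(logits) > 0.5).int()
```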
{ "cite_N": [ "@cite_28", "@cite_67", "@cite_42", "@cite_25" ], "mid": [ "", "2884821995", "2914311543", "2602837914" ], "abstract": [ "", "Abstract Aerial image classification is of great significance in the remote sensing community, and many researches have been conducted over the past few years. Among these studies, most of them focus on categorizing an image into one semantic label, while in the real world, an aerial image is often associated with multiple labels, e.g., multiple object-level labels in our case. Besides, a comprehensive picture of present objects in a given high-resolution aerial image can provide a more in-depth understanding of the studied region. For these reasons, aerial image multi-label classification has been attracting increasing attention. However, one common limitation shared by existing methods in the community is that the co-occurrence relationship of various classes, so-called class dependency, is underexplored and leads to an inconsiderate decision. In this paper, we propose a novel end-to-end network, namely class-wise attention-based convolutional and bidirectional LSTM network (CA-Conv-BiLSTM), for this task. The proposed network consists of three indispensable components: (1) a feature extraction module, (2) a class attention learning layer, and (3) a bidirectional LSTM-based sub-network. Particularly, the feature extraction module is designed for extracting fine-grained semantic feature maps, while the class attention learning layer aims at capturing discriminative class-specific features. As the most important part, the bidirectional LSTM-based sub-network models the underlying class dependency in both directions and produce structured multiple object labels. Experimental results on UCM multi-label dataset and DFC15 multi-label dataset validate the effectiveness of our model quantitatively and qualitatively.", "Land cover classification is a flourishing research topic in the field of remote sensing. Conventional methodologies mainly focus either on the simplified single-label case or on the pixel-based approaches that cannot efficiently handle high-resolution images. On the other hand, the problem of multilabel land cover scene categorization remains, to this day, fairly unexplored. While deep learning and convolutional neural networks have demonstrated an astounding capacity at handling challenging machine learning tasks, such as image classification, they exhibit an underwhelming performance when trained with a limited amount of annotated examples. To overcome this issue, this paper proposes a data augmentation technique that can drastically increase the size of a smaller data set to copious amounts. Our experiments on a multilabel variation of the UC Merced Land Use data set demonstrate the potential of the proposed methodology, which outperforms the current state of the art by more than 6 in terms of the F-score metric.", "In this letter, we face the problem of multilabeling unmanned aerial vehicle (UAV) imagery, typically characterized by a high level of information content, by proposing a novel method based on convolutional neural networks. These are exploited as a means to yield a powerful description of the query image, which is analyzed after subdividing it into a grid of tiles. The multilabel classification task of each tile is performed by the combination of a radial basis function neural network and a multilabeling layer (ML) composed of customized thresholding operations. 
Experiments conducted on two different UAV image data sets demonstrate the promising capability of the proposed method compared to the state of the art, at the expense of a higher but still contained computation time." ] }
1907.07381
2960520355
Most network data are collected from only partially observable networks with both missing nodes and edges, for example, due to limited resources and privacy settings specified by users on social media. Thus, it stands to reason that inferring the missing parts of the networks by performing completion should precede downstream mining or learning tasks on the networks. However, despite this need, the recovery of missing nodes and edges in such incomplete networks is an insufficiently explored problem. In this paper, we present DeepNC, a novel method for inferring the missing parts of a network based on a deep generative graph model. Specifically, our model first learns a likelihood over edges via a recurrent neural network (RNN)-based generative graph model, and then identifies the graph that maximizes the learned likelihood conditioned on the observable graph topology. Moreover, we propose a computationally efficient DeepNC algorithm that consecutively finds a single node to maximize the probability in each node generation step, whose runtime complexity is almost linear in the number of nodes in the network. We empirically show the superiority of DeepNC over state-of-the-art network completion approaches on a variety of synthetic and real-world networks.
Network completion. Observing a partial sample of a network and inferring the remainder of the network is referred to as network completion. As the most influential study, Kim and Leskovec @cite_3 suggested KronEM, an approach based on Kronecker graphs that solves the network completion problem by applying the expectation-maximization (EM) algorithm. For cases in which only a small number of edges are missing, vertex similarity @cite_5 was shown to be useful in recovering the underlying true network. Another method for inferring missing edges in social networks based on shared node neighborhoods was investigated in @cite_18 . MISC was developed to tackle the missing node identification problem when information on the connections between missing nodes and observable nodes is assumed to be available @cite_11 . A follow-up study of MISC @cite_7 attempted to incorporate side information, such as demographic information and the nodes' historical behavior, into the inference process.
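The shared-neighborhood idea behind @cite_18 and vertex-similarity scoring @cite_5 can be sketched in a few lines with networkx; the graph, the artificially removed edge, and the plain common-neighbor count used as the score are illustrative assumptions, not the cited methods themselves.

```python
# Sketch of common-neighborhood edge inference: score each unobserved
# node pair by its number of shared neighbors and propose the
# top-scoring pairs as missing edges.
import networkx as nx

G = nx.karate_club_graph()        # stand-in for the observed network
G.remove_edge(0, 1)               # pretend this edge went unobserved

scores = [(u, v, len(list(nx.common_neighbors(G, u, v))))
          for u, v in nx.non_edges(G)]
scores.sort(key=lambda t: -t[2])
print(scores[:5])                 # the removed pair (0, 1) ranks near the top
```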
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_3", "@cite_5", "@cite_11" ], "mid": [ "326443074", "2105250718", "1595449516", "2001406888", "2051475803" ], "abstract": [ "Distinct social networks are interconnected via membership overlap, which plays a key role when crossing information is investigated in the context of multiple-social-network analysis. Unfortunately, users do not always make their membership to two distinct social networks explicit, by specifying the so-called me edge (practically, corresponding to a link between the two accounts), thus missing a potentially very useful information. As a consequence, discovering missing me edges is an important problem to address in this context with potential powerful applications. In this paper, we propose a common-neighbor approach to detecting missing me edges, which returns good results in real-life settings. Indeed, an experimental campaign shows both that the state-of-the-art common-neighbor approaches cannot be effectively applied to our problem and, conversely, that our approach returns precise and complete results.", "An important area of social networks research is identifying missing information which is not explicitly represented in the network, or is not visible to all. Recently, the Missing Node Identification problem was introduced where missing members in the social network structure must be identified. However, previous works did not consider the possibility that information about specific users (nodes) within the network could be useful in solving this problem. In this paper, we present two algorithms: SAMI--A and SAMI--N. Both of these algorithms use the known nodes' specific information, such as demographic information and the nodes' historical behavior in the network. We found that both SAMI--A and SAMI--N perform significantly better than other missing node algorithms. However, as each of these algorithms and the parameters within these algorithms often perform better in specific problem instances, a mechanism is needed to select the best algorithm and the best variation within that algorithm. Towards this challenge, we also present OASCA, a novel online selection algorithm. We present results that detail the success of the algorithms presented within this paper.", "Network structures, such as social networks, web graphs and networks from systems biology, play important roles in many areas of science and our everyday lives. In order to study the networks one needs to first collect reliable large scale network data. While the social and information networks have become ubiquitous, the challenge of collecting complete network data still persists. Many times the collected network data is incomplete with nodes and edges missing. Commonly, only a part of the network can be observed and we would like to infer the unobserved part of the network. We address this issue by studying the Network Completion Problem: Given a network with missing nodes and edges, can we complete the missing part? We cast the problem in the Expectation Maximization (EM) framework where we use the observed part of the network to fit a model of network structure, and then we estimate the missing part of the network using the model, re-estimate the parameters and so on. We combine the EM with the Kronecker graphs model and design a scalable Metropolized Gibbs sampling approach that allows for the estimation of the model parameters as well as the inference about missing nodes and edges of the network. 
Experiments on synthetic and several real-world networks show that our approach can effectively recover the network even when about half of the nodes in the network are missing. Our algorithm outperforms not only classical link-prediction approaches but also the state of the art Stochastic block modeling approach. Furthermore, our algorithm easily scales to networks with tens of thousands of nodes.", "We introduce the graph vertex similarity measure, Relation Strength Similarity (RSS), that utilizes a network's topology to discover and capture similar vertices. The RSS has the advantage that it is asymmetric; can be used in a weighted network; and has an adjustable \"discovery range\" parameter that enables exploration of friend of friend connections in a social network. To evaluate RSS we perform experiments on a coauthorship network from the CiteSeerX database. Our method significantly outperforms other vertex similarity measures in terms of the ability to predict future coauthoring behavior among authors in the CiteSeerX database for the near future 0 to 4 years out and reasonably so for 4 to 6 years out.", "In recent years, social networks have surged in popularity. One key aspect of social network research is identifying important missing information that is not explicitly represented in the network, or is not visible to all. To date, this line of research typically focused on finding the connections that are missing between nodes, a challenge typically termed as the link prediction problem. This article introduces the missing node identification problem, where missing members in the social network structure must be identified. In this problem, indications of missing nodes are assumed to exist. Given these indications and a partial network, we must assess which indications originate from the same missing node and determine the full network structure. Toward solving this problem, we present the missing node identification by spectral clustering algorithm (MISC), an approach based on a spectral clustering algorithm, combined with nodes’ pairwise affinity measures that were adopted from link prediction research. We evaluate the performance of our approach in different problem settings and scenarios, using real-life data from Facebook. The results show that our approach has beneficial results and can be effective in solving the missing node identification problem. In addition, this article also presents R-MISC, which uses a sparse matrix representation, efficient algorithms for calculating the nodes’ pairwise affinity, and a proprietary dimension reduction technique to enable scaling the MISC algorithm to large networks of more than 100,000 nodes. Last, we consider problem settings where some of the indications are unknown. Two algorithms are suggested for this problem: speculative MISC, based on MISC, and missing link completion, based on classical link prediction literature. We show that speculative MISC outperforms missing link completion." ] }
Discussions. Despite these contributions, no prior work in the literature has exploited the power of deep generative models in the context of network completion. We find that generative graph models such as GraphRNN can themselves be used for network completion, but only with a nontrivial extra task. More specifically, a graph generated by a deep generative graph model needs to undergo a graph matching process due to the lack of correspondence between generated nodes and observable nodes. Since graph matching is computationally expensive (e.g., a typical method has a complexity of @math , where @math denotes the number of nodes in a graph @cite_32 ), incorporating such an idea into GraphRNN would be highly inefficient. Furthermore, MISC and other follow-up studies do not truly address network completion, since they solve the node identification problem under the assumption that the connections between missing nodes and observable nodes are known beforehand, which is not feasible in our partial-observation setting.
{ "cite_N": [ "@cite_32" ], "mid": [ "2161444532" ], "abstract": [ "A linear programming (LP) approach is proposed for the weighted graph matching problem. A linear program is obtained by formulating the graph matching problem in L sub 1 norm and then transforming the resulting quadratic optimization problem to a linear one. The linear program is solved using a simplex-based algorithm. Then, approximate 0-1 integer solutions are obtained by applying the Hungarian method on the real solutions of the linear program. The complexity of the proposed algorithm is polynomial time, and it is O(n sup 6 L) for matching graphs of size n. The developed algorithm is compared to two other algorithms. One is based on an eigendecomposition approach and the other on a symmetric polynomial transform. Experimental results showed that the LP approach is superior in matching graphs than both other methods. >" ] }
1907.07157
2958440724
Privacy has raised considerable concerns recently, especially with the advent of the information explosion and the numerous data mining techniques for exploring the information inside large volumes of data. In this context, a new distributed learning paradigm termed federated learning has recently become prominent for tackling the privacy issues in distributed learning: only learning models are transmitted from the distributed nodes to servers, without revealing users' own data, hence protecting the privacy of users. In this paper, we propose a horizontal federated XGBoost algorithm to solve the federated anomaly detection problem, where anomaly detection aims to identify abnormalities in extremely unbalanced datasets and can be considered a special classification problem. Our proposed federated XGBoost algorithm incorporates data aggregation and sparse federated update processes to balance the tradeoff between privacy and learning performance. In particular, we introduce virtual data samples by aggregating a group of users' data together at a single distributed node. We compute parameters based on these virtual data samples in the local nodes and aggregate the learning model in the central server. In the model update process, we focus more on the data in the virtual samples that were previously misclassified, and hence generate sparse learning model parameters. By carefully controlling the size of these groups of samples, we can achieve a tradeoff between privacy and learning performance. Our experimental results show the effectiveness of our proposed scheme in comparison with existing state-of-the-art methods.
The transfer of raw data brings the problem of data leakage @cite_0 . Consequently, decentralized methods (i.e., data is stored only locally) are used to process the data, reducing the risk of data leakage @cite_16 . Some works use encryption-based federated learning frameworks, such as homomorphic encryption @cite_14 . Homomorphic encryption means that certain operations can act directly on encrypted data without decrypting it. However, homomorphic encryption has its disadvantages. Taking Paillier-based encryption schemes as an example, the cost of generating threshold decryption keys is very high @cite_6 .
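To illustrate what "operations act directly on encrypted data" means, here is a small additive-homomorphism example, assuming the python-paillier (phe) package is available; the encrypted values stand in for model updates and are not tied to any specific federated framework.

```python
# Additive homomorphism with Paillier encryption (pip install phe):
# the server can sum ciphertexts without ever seeing the plaintexts.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Two parties encrypt local model updates with the shared public key.
enc_a = public_key.encrypt(0.25)
enc_b = public_key.encrypt(-0.10)
enc_sum = enc_a + enc_b            # addition performed on encrypted data

print(private_key.decrypt(enc_sum))  # ~0.15; only the key holder sees it
```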
{ "cite_N": [ "@cite_0", "@cite_14", "@cite_16", "@cite_6" ], "mid": [ "2053637704", "2435473771", "2125858711", "2767079719" ], "abstract": [ "Deep learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of deep learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries of this trend since the success of deep learning techniques is directly proportional to the amount of data available for training. Massive data collection required for deep learning presents obvious privacy issues. Users' personal, highly sensitive data such as photos and voice recordings is kept indefinitely by the companies that collect it. Users can neither delete it, nor restrict the purposes for which it is used. Furthermore, centrally kept data is subject to legal subpoenas and extra-judicial surveillance. Many data owners--for example, medical institutions that may want to apply deep learning methods to clinical records--are prevented by privacy and confidentiality concerns from sharing the data and thus benefitting from large-scale deep learning. In this paper, we design, implement, and evaluate a practical system that enables multiple parties to jointly learn an accurate neural-network model for a given objective without sharing their input datasets. We exploit the fact that the optimization algorithms used in modern deep learning, namely, those based on stochastic gradient descent, can be parallelized and executed asynchronously. Our system lets participants train independently on their own datasets and selectively share small subsets of their models' key parameters during training. This offers an attractive point in the utility privacy tradeoff space: participants preserve the privacy of their respective data while still benefitting from other participants' models and thus boosting their learning accuracy beyond what is achievable solely on their own inputs. We demonstrate the accuracy of our privacy-preserving deep learning on benchmark datasets.", "Applying machine learning to a problem which involves medical, financial, or other types of sensitive data, not only requires accurate predictions but also careful attention to maintaining data privacy and security. Legal and ethical requirements may prevent the use of cloud-based machine learning solutions for such tasks. In this work, we will present a method to convert learned neural networks to CryptoNets, neural networks that can be applied to encrypted data. This allows a data owner to send their data in an encrypted form to a cloud service that hosts the network. The encryption ensures that the data remains confidential since the cloud does not have access to the keys needed to decrypt it. Nevertheless, we will show that the cloud service is capable of applying the neural network to the encrypted data to make encrypted predictions, and also return them in encrypted form. These encrypted predictions can be sent back to the owner of the secret key who can decrypt them. Therefore, the cloud service does not gain any information about the raw data nor about the prediction it made. We demonstrate CryptoNets on the MNIST optical character recognition tasks. CryptoNets achieve 99 accuracy and can make around 59000 predictions per hour on a single PC. 
Therefore, they allow high-throughput, accurate, and private predictions.", "Cloud Computing is the long-dreamed vision of computing as a utility, where users can remotely store their data in the cloud so as to enjoy on-demand high-quality applications and services from a shared pool of configurable computing resources. By data outsourcing, users can be relieved from the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the possibly large amount of outsourced data makes data integrity protection in Cloud Computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities. Thus, enabling public auditability for cloud data storage security is of critical importance so that users can resort to an external audit party to check the integrity of outsourced data when needed. To securely introduce an effective third party auditor (TPA), the following two fundamental requirements have to be met: 1) the TPA should be able to efficiently audit the cloud data storage without demanding a local copy of the data, and introduce no additional online burden to the cloud user; 2) the third party auditing process should bring in no new vulnerabilities towards user data privacy. In this paper, we utilize and uniquely combine the public key based homomorphic authenticator with random masking to achieve a privacy-preserving public cloud data auditing system, which meets all the above requirements. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signatures to extend our main result into a multi-user setting, where the TPA can perform multiple auditing tasks simultaneously. Extensive security and performance analysis shows the proposed schemes are provably secure and highly efficient.", "We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e., without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers 1.73x communication expansion for 2^10 users and 2^20-dimensional vectors, and 1.98x expansion for 2^14 users and 2^24-dimensional vectors over sending data in the clear." ] }
Federated learning is a new distributed learning paradigm proposed recently to utilize user-end computing resources and preserve user privacy by transmitting only model parameters, instead of raw data, to the server @cite_12 @cite_2 . In federated learning, a general model is first trained and then distributed to each node to act as a local model @cite_22 @cite_20 @cite_17 . Three categories were put forward in @cite_22 : horizontal federated learning, vertical federated learning, and federated transfer learning. A federated secure XGBoost framework using vertical federated learning was proposed in @cite_3 .
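A minimal sketch of the parameter-averaging step that underlies this paradigm, in the spirit of federated averaging @cite_20 : only parameter vectors travel to the server. The node count, sample counts, and parameter values below are invented for illustration.

```python
# Sketch of the server-side aggregation step in federated learning:
# average per-node parameter vectors, weighted by local sample counts.
import numpy as np

def federated_average(local_params, n_samples):
    """Weighted average of per-node parameter vectors."""
    weights = np.asarray(n_samples, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, local_params))

# Three nodes train locally and send only their parameters, not raw data.
local_params = [np.array([0.9, 1.1]), np.array([1.2, 0.8]),
                np.array([1.0, 1.0])]
global_params = federated_average(local_params, n_samples=[100, 50, 50])
print(global_params)   # redistributed to the nodes as the new local model
```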
{ "cite_N": [ "@cite_22", "@cite_3", "@cite_2", "@cite_12", "@cite_20", "@cite_17" ], "mid": [ "2912213068", "2914853145", "2912139568", "2903890850", "2283463896", "2535838896" ], "abstract": [ "Today’s artificial intelligence still faces two major challenges. One is that, in most industries, data exists in the form of isolated islands. The other is the strengthening of data privacy and security. We propose a possible solution to these challenges: secure federated learning. Beyond the federated-learning framework first proposed by Google in 2016, we introduce a comprehensive secure federated-learning framework, which includes horizontal federated learning, vertical federated learning, and federated transfer learning. We provide definitions, architectures, and applications for the federated-learning framework, and provide a comprehensive survey of existing works on this subject. In addition, we propose building data networks among organizations based on federated mechanisms as an effective solution to allowing knowledge to be shared without compromising user privacy.", "The protection of user privacy is an important concern in machine learning, as evidenced by the rolling out of the General Data Protection Regulation (GDPR) in the European Union (EU) in May 2018. The GDPR is designed to give users more control over their personal data, which motivates us to explore machine learning frameworks with data sharing without violating user privacy. To meet this goal, in this paper, we propose a novel lossless privacy-preserving tree-boosting system known as SecureBoost in the setting of federated learning. This federated-learning system allows a learning process to be jointly conducted over multiple parties with partially common user samples but different feature sets, which corresponds to a vertically partitioned virtual data set. An advantage of SecureBoost is that it provides the same level of accuracy as the non-privacy-preserving approach while at the same time, reveal no information of each private data provider. We theoretically prove that the SecureBoost framework is as accurate as other non-federated gradient tree-boosting algorithms that bring the data into one place. In addition, along with a proof of security, we discuss what would be required to make the protocols completely secure.", "In reinforcement learning, building policies of high-quality is challenging when the feature space of states is small and the training data is limited. Directly transferring data or knowledge from an agent to another agent will not work due to the privacy requirement of data and models. In this paper, we propose a novel reinforcement learning approach to considering the privacy requirement and building Q-network for each agent with the help of other agents, namely federated reinforcement learning (FRL). To protect the privacy of data and models, we exploit Gausian differentials on the information shared with each other when updating their local models. In the experiment, we evaluate our FRL framework in two diverse domains, Grid-world and Text2Action domains, by comparing to various baselines.", "Machine learning relies on the availability of a vast amount of data for training. However, in reality, most data are scattered across different organizations and cannot be easily integrated under many legal and practical constraints. In this paper, we introduce a new technique and framework, known as federated transfer learning (FTL), to improve statistical models under a data federation. 
The federation allows knowledge to be shared without compromising user privacy, and enables complimentary knowledge to be transferred in the network. As a result, a target-domain party can build more flexible and powerful models by leveraging rich labels from a source-domain party. A secure transfer cross validation approach is also proposed to guard the FTL performance under the federation. The framework requires minimal modifications to the existing model structure and provides the same level of accuracy as the non-privacy-preserving approach. This framework is very flexible and can be effectively adapted to various secure multi-party machine learning tasks.", "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data-center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks that proves robust to the unbalanced and non-IID data distributions that naturally arise. This method allows high-quality models to be trained in relatively few rounds of communication, the principal constraint for federated learning. The key insight is that despite the non-convex loss functions we optimize, parameter averaging over updates from multiple clients produces surprisingly good results, for example decreasing the communication needed to train an LSTM language model by two orders of magnitude.", "Federated Learning is a machine learning setting where the goal is to train a high-quality centralized model while training data remains distributed over a large number of clients each with unreliable and relatively slow network connections. We consider learning algorithms for this setting where on each round, each client independently computes an update to the current model based on its local data, and communicates this update to a central server, where the client-side updates are aggregated to compute a new global model. The typical clients in this setting are mobile phones, and communication efficiency is of the utmost importance. In this paper, we propose two ways to reduce the uplink communication costs: structured updates, where we directly learn an update from a restricted space parametrized using a smaller number of variables, e.g. either low-rank or a random mask; and sketched updates, where we learn a full model update and then compress it using a combination of quantization, random rotations, and subsampling before sending it to the server. Experiments on both convolutional and recurrent networks show that the proposed methods can reduce the communication cost by two orders of magnitude." ] }
1907.07157
2958440724
Privacy has raised considerable concern recently, especially with the explosion of information and the numerous data mining techniques available to explore large volumes of data. In this context, a new distributed learning paradigm termed federated learning has recently become prominent for tackling privacy issues in distributed learning: only learning models are transmitted from the distributed nodes to the servers, without revealing users' own data, hence protecting user privacy. In this paper, we propose a horizontal federated XGBoost algorithm to solve the federated anomaly detection problem, where anomaly detection aims to identify abnormalities in extremely unbalanced datasets and can be considered a special classification problem. Our proposed federated XGBoost algorithm incorporates data aggregation and sparse federated update processes to balance the tradeoff between privacy and learning performance. In particular, we introduce virtual data samples by aggregating a group of users' data at a single distributed node. We compute parameters based on these virtual data samples at the local nodes and aggregate the learning model at the central server. When updating the learning model, we focus more on the previously misclassified data within the virtual samples, which yields sparse learning-model parameters. By carefully controlling the size of these groups of samples, we can trade off privacy against learning performance. Our experimental results show the effectiveness of the proposed scheme in comparison with existing state-of-the-art methods.
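To make the aggregation idea in this abstract concrete, here is a minimal sketch of computing XGBoost-style gradient statistics over virtual data samples formed by grouping users. It illustrates the privacy/performance tradeoff controlled by the group size; it is not the authors' exact protocol, and all function names are illustrative.

```python
import numpy as np

def logistic_grad_hess(preds, labels):
    """First- and second-order gradients of the logistic loss (as XGBoost uses)."""
    p = 1.0 / (1.0 + np.exp(-preds))
    return p - labels, p * (1.0 - p)

def virtual_sample_stats(preds, labels, group_size):
    """Aggregate per-user gradients into virtual samples of `group_size` users.

    Larger groups reveal less about individuals (more privacy) but blur the
    gradient signal (less accuracy): the tradeoff discussed in the abstract.
    """
    g, h = logistic_grad_hess(preds, labels)
    n_groups = len(g) // group_size
    g = g[: n_groups * group_size].reshape(n_groups, group_size).sum(axis=1)
    h = h[: n_groups * group_size].reshape(n_groups, group_size).sum(axis=1)
    return g, h

# Each distributed node ships only these summed statistics; a central
# server can then combine them to grow the next boosting tree.
preds = np.zeros(8)                      # initial raw scores
labels = np.array([0, 0, 1, 0, 0, 0, 1, 0], dtype=float)
print(virtual_sample_stats(preds, labels, group_size=4))
```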
Anomaly detection @cite_9 is the identification of events or observations that do not match the expected pattern or other items in the dataset (i.e., outliers) during data mining. Outliers can be divided into point anomalies, contextual anomalies, and collective anomalies @cite_4 . Anomaly detection methods include the SMOTE oversampling algorithm @cite_21 and various machine learning models, such as the K-Nearest Neighbors algorithm @cite_10 , Random Forests @cite_15 , Support Vector Machines (SVM) @cite_11 , Gradient Boosting Classification Trees (GBT) @cite_5 , XGBoost @cite_19 , and deep neural network models @cite_8 . This paper focuses on a point-anomaly problem, fraud detection in credit card transactions, and uses a dataset of credit card transactions to train the model.
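Since the paragraph above frames fraud detection as learning from an extremely unbalanced dataset, a short sketch of the standard SMOTE-plus-boosting pipeline it mentions may help. It assumes the imbalanced-learn and scikit-learn packages and uses synthetic data as a stand-in for the credit card transactions.

```python
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an extremely unbalanced fraud dataset (~1% positives).
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.99],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Synthesize minority-class examples on the training split only.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
print(Counter(y_tr), "->", Counter(y_res))

# A gradient-boosted tree classifier trained on the rebalanced data.
clf = GradientBoostingClassifier().fit(X_res, y_res)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```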
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_9", "@cite_21", "@cite_19", "@cite_5", "@cite_15", "@cite_10", "@cite_11" ], "mid": [ "2613480438", "2120617515", "2007087405", "2148143831", "2295598076", "2342352817", "2100537916", "1977838479", "2159637520" ], "abstract": [ "Abstract In the present world huge amounts of data are stored and transferred from one location to another. The data when transferred or stored is primed exposed to attack. Although various techniques or applications are available to protect data, loopholes exist. Thus to analyze data and to determine various kind of attack data mining techniques have emerged to make it less vulnerable. Anomaly detection uses these data mining techniques to detect the surprising behaviour hidden within data increasing the chances of being intruded or attacked. Various hybrid approaches have also been made in order to detect known and unknown attacks more accurately. This paper reviews various data mining techniques for anomaly detection to provide better understanding among the existing techniques that may help interested researchers to work future in this direction.", "Information security is an issue of serious global concern. The complexity, accessibility, and openness of the Internet have served to increase the security risk of information systems tremendously. This paper concerns intrusion detection. We describe approaches to intrusion detection using neural networks and support vector machines. The key ideas are to discover useful patterns or features that describe user behavior on a system, and use the set of relevant features to build classifiers that can recognize anomalies and known intrusions, hopefully in real time. Using a set of benchmark data from a KDD (knowledge discovery and data mining) competition designed by DARPA, we demonstrate that efficient and accurate classifiers can be built to detect intrusions. We compare the performance of neural networks based, and support vector machine based, systems for intrusion detection.", "As advances in networking technology help to connect the distant corners of the globe and as the Internet continues to expand its influence as a medium for communications and commerce, the threat from spammers, attackers and criminal enterprises has also grown accordingly. It is the prevalence of such threats that has made intrusion detection systems-the cyberspace's equivalent to the burglar alarm-join ranks with firewalls as one of the fundamental technologies for network security. However, today's commercially available intrusion detection systems are predominantly signature-based intrusion detection systems that are designed to detect known attacks by utilizing the signatures of those attacks. Such systems require frequent rule-base updates and signature updates, and are not capable of detecting unknown attacks. In contrast, anomaly detection systems, a subset of intrusion detection systems, model the normal system network behavior which enables them to be extremely effective in finding and foiling both known as well as unknown or ''zero day'' attacks. While anomaly detection systems are attractive conceptually, a host of technological problems need to be overcome before they can be widely adopted. These problems include: high false alarm rate, failure to scale to gigabit speeds, etc. In this paper, we provide a comprehensive survey of anomaly detection systems and hybrid intrusion detection systems of the recent past and present. 
We also discuss recent technological trends in anomaly detection and identify open problems and challenges in this area.", "An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of \"normal\" examples with only a small percentage of \"abnormal\" or \"interesting\" examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of over-sampling the minority (abnormal) class and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space) than only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.", "Tree boosting is a highly effective and widely used machine learning method. In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges. We propose a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning. More importantly, we provide insights on cache access patterns, data compression and sharding to build a scalable tree boosting system. By combining these insights, XGBoost scales beyond billions of examples using far fewer resources than existing systems.", "In recent years, machine learning research has gained momentum: new developments in the field of deep learning allow for multiple levels of abstraction and are starting to supersede well-known and powerful tree-based techniques mainly operating on the original feature space. All these methods can be applied to various fields, including finance. This paper implements and analyzes the effectiveness of deep neural networks (DNN), gradient-boosted-trees (GBT), random forests (RAF), and several ensembles of these methods in the context of statistical arbitrage. Each model is trained on lagged returns of all stocks in the S&P 500, after elimination of survivor bias. From 1992 to 2015, daily one-day-ahead trading signals are generated based on the probability forecast of a stock to outperform the general market. The highest k probabilities are converted into long and the lowest k probabilities into short positions, thus censoring the less certain middle part of the ranking. Empirical findings are promising. A simple, equal-weighted ensemble (ENS1) consisting of one deep neural network, one gradient-boosted tree, and one random forest produces out-of-sample returns exceeding 0.45 percent per day for k=10, prior to transaction costs. 
Irrespective of the fact that profits are declining in recent years, our findings pose a severe challenge to the semi-strong form of market efficiency.", "Prevention of security breaches completely using the existing security technologies is unrealistic. As a result, intrusion detection is an important component in network security. However, many current intrusion detection systems (IDSs) are rule-based systems, which have limitations to detect novel intrusions. Moreover, encoding rules is time-consuming and highly depends on the knowledge of known intrusions. Therefore, we propose new systematic frameworks that apply a data mining algorithm called random forests in misuse, anomaly, and hybrid-network-based IDSs. In misuse detection, patterns of intrusions are built automatically by the random forests algorithm over training data. After that, intrusions are detected by matching network activities against the patterns. In anomaly detection, novel intrusions are detected by the outlier detection mechanism of the random forests algorithm. After building the patterns of network services by the random forests algorithm, outliers related to the patterns are determined by the outlier detection algorithm. The hybrid detection system improves the detection performance by combining the advantages of the misuse and anomaly detection. We evaluate our approaches over the knowledge discovery and data mining 1999 (KDD'99) dataset. The experimental results demonstrate that the performance provided by the proposed misuse approach is better than the best KDD'99 result; compared to other reported unsupervised anomaly detection approaches, our anomaly detection approach achieves higher detection rate when the false positive rate is low; and the presented hybrid system can improve the overall performance of the aforementioned IDSs.", "A new approach, based on the k-Nearest Neighbor (kNN) classifier, is used to classify program behavior as normal or intrusive. Program behavior, in turn, is represented by frequencies of system calls. Each system call is treated as a word and the collection of system calls over each program execution as a document. These documents are then classified using kNN classifier, a popular method in text categorization. This method seems to offer some computational advantages over those that seek to characterize program behavior with short sequences of system calls and generate individual program profiles. Preliminary experiments with 1998 DARPA BSM audit data show that the kNN classifier can effectively detect intrusive attacks and achieve a low false positive rate.", "With the tremendous growth of the Internet, information system security has become an issue of serious global concern due to the rapid connection and accessibility. Developing effective methods for intrusion detection, therefore, is an urgent task for assuring computer & information system security. Since most attacks and misuses can be recognized through the examination of system audit log files and pattern analysis therein, an approach for intrusion detection can be built on them. First we have made deep analysis on attacks and misuses patterns in log files; and then proposed an approach using support vector machines for anomaly detection. It is a one-class SVM based approach, trained with abstracted user audit logs data from 1999 DARPA." ] }
1907.07263
2957764552
Optimizing the caching locations of popular content has received significant research attention over the last few years. This paper targets the optimization of caching locations by proposing a novel transformation of the optimization problem into a grey-scale image that is fed to a deep convolutional neural network (CNN). The rationale for the proposed modeling comes from the CNN's superior ability to capture features in grey-scale images, reaching human-level performance in image recognition problems. The CNN has been trained with optimal solutions, and the numerical investigations and analyses demonstrate the promising performance of the proposed method. Therefore, to enable real-time decision making, we move away from a strictly optimization-based framework to an amalgamation of optimization with a data-driven approach.
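A minimal sketch of the transformation described in this abstract: a caching configuration encoded as a one-channel grey-scale image and scored by a small CNN. The layer sizes and the per-(node, content) output head are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

# Toy encoding: rows = candidate caching locations, columns = contents; each
# pixel intensity is a normalized content popularity at that location.
popularity = torch.rand(1, 1, 16, 16)  # (batch, channel, nodes, contents)

class CachePlacementCNN(nn.Module):
    """Scores per-location caching decisions from the grey-scale encoding."""
    def __init__(self, n_nodes=16, n_contents=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16 * n_nodes * n_contents, n_nodes * n_contents)

    def forward(self, x):
        z = self.features(x).flatten(1)
        # One probability per (node, content) pair: cache it there or not.
        return torch.sigmoid(self.head(z)).view(-1, x.size(2), x.size(3))

model = CachePlacementCNN()
placement_probs = model(popularity)
print(placement_probs.shape)  # torch.Size([1, 16, 16])
```

Trained on optimal solutions as the abstract describes, such a network would replace a per-instance optimization solve with a single forward pass at decision time.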
The proliferation of popular content on the Internet has created a massive amount of aggregate data that needs to be transported across congested links. Bringing popular content closer to the end users via caching, in order to ease congestion episodes and increase user experience, has received significant research attention over the last few years @cite_3 . At the same time, a transformation is taking place in the network architecture towards a service-oriented design based on the SDN and NFV paradigms @cite_16 , @cite_7 .
{ "cite_N": [ "@cite_16", "@cite_7", "@cite_3" ], "mid": [ "2788163338", "2762414772", "2603810864" ], "abstract": [ "Abstract Today’s networks are filled with a massive and ever-growing variety of network functions that coupled with proprietary devices, which leads to network ossification and difficulty in network management and service provision. Network Function Virtualization (NFV) is a promising paradigm to change such situation by decoupling network functions from the underlying dedicated hardware and realizing them in the form of software, which are referred to as Virtual Network Functions (VNFs). Such decoupling introduces many benefits which include reduction of Capital Expenditure (CAPEX) and Operation Expense (OPEX), improved flexibility of service provision, etc. In this paper, we intend to present a comprehensive survey on NFV, which starts from the introduction of NFV motivations. Then, we explain the main concepts of NFV in terms of terminology, standardization and history, and how NFV differs from traditional middle-box based network. After that, the standard NFV architecture is introduced using a bottom up approach, based on which the corresponding use cases and solutions are also illustrated. In addition, due to the decoupling of network functionalities and hardware, people’s attention is gradually shifted to the VNFs. Next, we provide an extensive and in-depth discussion on state-of-the-art VNF algorithms including VNF placement, scheduling, migration, chaining and multicast. Finally, to accelerate the NFV deployment and avoid pitfalls as far as possible, we survey the challenges faced by NFV and the trend for future directions. In particular, the challenges are discussed from bottom up, which include hardware design, VNF deployment, VNF life cycle control, service chaining, performance evaluation, policy enforcement, energy efficiency, reliability and security, and the future directions are discussed around the current trend towards network softwarization.", "Communication networks are undergoing their next evolutionary step toward 5G. The 5G networks are envisioned to provide a flexible, scalable, agile, and programmable network platform over which different services with varying requirements can be deployed and managed within strict performance bounds. In order to address these challenges, a paradigm shift is taking place in the technologies that drive the networks, and thus their architecture. Innovative concepts and techniques are being developed to power the next generation mobile networks. At the heart of this development lie Network Function Virtualization and Software Defined Networking technologies, which are now recognized as being two of the key technology enablers for realizing 5G networks, and which have introduced a major change in the way network services are deployed and operated. For interested readers that are new to the field of SDN and NFV, this paper provides an overview of both these technologies with reference to the 5G networks. Most importantly, it describes how the two technologies complement each other and how they are expected to drive the networks of near future.", "As the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to heavy burden on the backhaul links and long latency. 
Therefore, new architectures, which bring network functions and contents to the network edge, are proposed, i.e., mobile edge computing and caching. Mobile edge networks provide cloud computing and caching capabilities at the edge of cellular networks. In this survey, we make an exhaustive review on the state-of-the-art research efforts on mobile edge networks. We first give an overview of mobile edge networks, including definition, architecture, and advantages. Next, a comprehensive survey of issues on computing, caching, and communication techniques at the network edge is presented. The applications and use cases of mobile edge networks are discussed. Subsequently, the key enablers of mobile edge networks, such as cloud technology, SDN NFV, and smart devices are discussed. Finally, open research challenges and future directions are presented as well." ] }
1901.05807
2963179857
In many robotic applications, especially autonomous driving, understanding both the semantic information and the geometric structure of the surroundings is essential. Semantic 3D maps, as carriers of environmental knowledge, have therefore been intensively studied for their capabilities and applications. However, it is still challenging to produce a dense outdoor semantic map from a monocular image stream. Motivated by this goal, in this paper we propose a method for large-scale 3D reconstruction from consecutive monocular images. First, exploiting the correlation between the underlying information for depth and semantic prediction, a novel multi-task Convolutional Neural Network (CNN) is designed for joint prediction. Given a single image, the network learns low-level information with a shared encoder and predicts separately with decoders containing additional Atrous Spatial Pyramid Pooling (ASPP) layers and residual connections that let the disparity and semantic branches benefit each other. To overcome the inconsistency of monocular depth prediction for reconstruction, post-processing steps based on superpixelization and an effective 3D representation approach are applied to give the final semantic map. Experiments compare our method with others on both semantic labeling and depth prediction. We also qualitatively demonstrate maps reconstructed from large-scale, difficult monocular image sequences to show the effectiveness and superiority of the approach.
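A minimal PyTorch sketch of the shared-encoder, dual-decoder layout this abstract describes; the real network uses a deeper backbone, ASPP layers, and cross-decoder residual connections, so this shows only the skeleton of the idea, with illustrative layer sizes.

```python
import torch
import torch.nn as nn

class JointDepthSegNet(nn.Module):
    """Shared encoder with separate depth and segmentation decoders."""
    def __init__(self, n_classes=19):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        def decoder(out_ch):
            return nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1),
            )
        self.depth_decoder = decoder(1)        # per-pixel disparity
        self.seg_decoder = decoder(n_classes)  # per-pixel class logits

    def forward(self, image):
        shared = self.encoder(image)  # low-level features used by both tasks
        return self.depth_decoder(shared), self.seg_decoder(shared)

depth, seg = JointDepthSegNet()(torch.rand(1, 3, 64, 128))
print(depth.shape, seg.shape)  # (1, 1, 64, 128) and (1, 19, 64, 128)
```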
Depth prediction for scene understanding used to rely heavily on stereo vision @cite_8 @cite_10 @cite_12 . Recent studies have made progress in geometric scene understanding from a monocular camera. An encoder-decoder architecture based on ResNet is proposed in @cite_14 , which performs residual learning to predict dense depth maps from a single image; its new, effective up-projection structure avoids checkerboard artifacts during feature map upsampling. Scene segmentation is another active field. The current leading segmentation network @cite_25 , called DeepLab v3+, is based on the authors' former work @cite_11 and extracts boundaries unambiguously by referring to the recovered structural information. For semantic reconstruction, which requires both depth and semantic prediction, applying these methods separately would incur higher computational cost and neglect the information shared between the two tasks.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_10", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "2963591054", "", "55377555", "2787091153", "", "2630837129" ], "abstract": [ "This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.", "", "In this paper we propose a slanted plane model for jointly recovering an image segmentation, a dense depth estimate as well as boundary labels (such as occlusion boundaries) from a static scene given two frames of a stereo pair captured from a moving vehicle. Towards this goal we propose a new optimization algorithm for our SLIC-like objective which preserves connecteness of image segments and exploits shape regularization in the form of boundary length. We demonstrate the performance of our approach in the challenging stereo and flow KITTI benchmarks and show superior results to the state-of-the-art. Importantly, these results can be achieved an order of magnitude faster than competing approaches.", "Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0 and 82.1 without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at this https URL .", "", "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. 
To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context to further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed DeepLabv3 system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-the-art models on the PASCAL VOC 2012 semantic image segmentation benchmark." ] }
Multi-task learning techniques are designed to exploit features transferred between different tasks by jointly predicting labels from a single model. Multi-task networks have been adopted in face attribute estimation, contour detection, semantic segmentation @cite_4 , etc. One network @cite_0 combines scene segmentation, instance segmentation, and depth prediction into an integrated model based on ENet. Another approach @cite_21 uses DeepLab v3 as the fundamental architecture and conducts experiments with uncertainty-weighted losses, demonstrating the superiority and effectiveness of homoscedastic uncertainty for multi-task learning. However, these networks use the same decoder architecture for every task, ignoring the differences between the outputs. In the proposed network, specialized decoders are designed to deal with the different tasks, with connections between the two decoders to transfer useful information.
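The uncertainty-weighted loss of @cite_21 mentioned above has a compact, commonly used practical form: with a learnable log-variance s_i per task, the combined loss is sum_i exp(-s_i) * L_i + s_i. A small PyTorch sketch of that form (a simplification of the paper's derivation, with illustrative names):

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Weights task losses by learned homoscedastic uncertainty.

    Each task i keeps a learnable log-variance s_i; the combined loss is
    sum_i exp(-s_i) * L_i + s_i, so noisier tasks are automatically
    down-weighted while the s_i regularizer prevents trivial solutions.
    """
    def __init__(self, n_tasks=2):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))

    def forward(self, task_losses):
        total = 0.0
        for s, loss in zip(self.log_vars, task_losses):
            total = total + torch.exp(-s) * loss + s
        return total

# The log-variances are optimized jointly with the network weights.
criterion = UncertaintyWeightedLoss(n_tasks=2)
depth_loss, seg_loss = torch.tensor(0.8), torch.tensor(2.3)
print(criterion([depth_loss, seg_loss]))
```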
{ "cite_N": [ "@cite_0", "@cite_21", "@cite_4" ], "mid": [ "2745074940", "2963677766", "" ], "abstract": [ "Most approaches for instance-aware semantic labeling traditionally focus on accuracy. Other aspects like runtime and memory footprint are arguably as important for real-time applications such as autonomous driving. Motivated by this observation and inspired by recent works that tackle multiple tasks with a single integrated architecture, in this paper we present a real-time efficient implementation based on ENet that solves three autonomous driving related tasks at once: semantic scene segmentation, instance segmentation and monocular depth estimation. Our approach builds upon a branched ENet architecture with a shared encoder but different decoder branches for each of the three tasks. The presented method can run at 21 fps at a resolution of 1024x512 on the Cityscapes dataset without sacrificing accuracy compared to running each task separately.", "Numerous deep learning applications benefit from multitask learning with multiple regression and classification objectives. In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task's loss. Tuning these weights by hand is a difficult and expensive process, making multi-task learning prohibitive in practice. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task. This allows us to simultaneously learn various quantities with different units or scales in both classification and regression settings. We demonstrate our model learning per-pixel depth regression, semantic and instance segmentation from a monocular input image. Perhaps surprisingly, we show our model can learn multi-task weightings and outperform separate models trained individually on each task.", "" ] }
Semantic reconstruction methods fall broadly into two categories. The first inherits 2D semantic segmentation results @cite_13 @cite_16 . For monocular reconstruction, one approach @cite_13 jointly infers geometric structure and 3D semantic labeling with a CRF model. The experimental results are good but have limitations: owing to the resolution and expressiveness of the volumetric occupancy map, the output misses structural details. The second category provides semantic and geometric information simultaneously in 3D space. The incremental semantic reconstruction approach of @cite_19 builds the urban environment on a hash-based fusion technique and a mean-field inference algorithm, adopting traditional stereo matching and visual odometry to obtain the basic 3D geometry.
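A toy sketch of fusing per-frame semantic predictions into a voxel map, in the spirit of the incremental reconstruction described above. Accumulating per-class log-probabilities is a simple Bayesian-style stand-in for the mean-field CRF inference of @cite_19 ; the data layout and names are illustrative assumptions.

```python
from collections import defaultdict
import numpy as np

def fuse_semantic_voxels(observations, n_classes):
    """Fuse per-frame semantic predictions into a voxel label map.

    observations: iterable of (voxel_index, class_probs) pairs, e.g. produced
    by back-projecting each frame's 2D segmentation into a 3D grid.
    """
    log_scores = defaultdict(lambda: np.zeros(n_classes))
    for voxel, probs in observations:
        # Independent observations multiply, i.e. log-probabilities add.
        log_scores[voxel] += np.log(np.asarray(probs) + 1e-9)
    return {v: int(np.argmax(s)) for v, s in log_scores.items()}

obs = [((3, 1, 0), [0.7, 0.2, 0.1]),   # frame 1 sees the voxel as class 0
       ((3, 1, 0), [0.2, 0.7, 0.1]),   # frame 2 disagrees
       ((3, 1, 0), [0.6, 0.3, 0.1])]   # frame 3 breaks the tie
print(fuse_semantic_voxels(obs, n_classes=3))  # {(3, 1, 0): 0}
```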
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_13" ], "mid": [ "2167687475", "2890090517", "801273237" ], "abstract": [ "Our abilities in scene understanding, which allow us to perceive the 3D structure of our surroundings and intuitively recognise the objects we see, are things that we largely take for granted, but for robots, the task of understanding large scenes quickly remains extremely challenging. Recently, scene understanding approaches based on 3D reconstruction and semantic segmentation have become popular, but existing methods either do not scale, fail outdoors, provide only sparse reconstructions or are rather slow. In this paper, we build on a recent hash-based technique for large-scale fusion and an efficient mean-field inference algorithm for densely-connected CRFs to present what to our knowledge is the first system that can perform dense, large-scale, outdoor semantic reconstruction of a scene in (near) real time. We also present a ‘semantic fusion’ approach that allows us to handle dynamic objects more effectively than previous approaches. We demonstrate the effectiveness of our approach on the KITTI dataset, and provide qualitative and quantitative results showing high-quality dense reconstruction and labelling of a number of scenes.", "We present a stereo-based dense mapping algorithm for large-scale dynamic urban environments. In contrast to other existing methods, we simultaneously reconstruct the static background, the moving objects, and the potentially moving but currently stationary objects separately, which is desirable for high-level mobile robotic tasks such as path planning in crowded environments. We use both instance-aware semantic segmentation and sparse scene flow to classify objects as either background, moving, or potentially moving, thereby ensuring that the system is able to model objects with the potential to transition from static to dynamic, such as parked cars. Given camera poses estimated from visual odometry, both the background and the (potentially) moving objects are reconstructed separately by fusing the depth maps computed from the stereo input. In addition to visual odometry, sparse scene flow is also used to estimate the 3D motions of the detected moving objects, in order to reconstruct them accurately. A map pruning technique is further developed to improve reconstruction accuracy and reduce memory consumption, leading to increased scalability. We evaluate our system thoroughly on the well-known KITTI dataset. Our system is capable of running on a PC at approximately 2.5Hz, with the primary bottleneck being the instance-aware semantic segmentation, which is a limitation we hope to address in future work. The source code is available from the project websiteaahttp: andreibarsan.github.io dynslam.", "We present an approach for joint inference of 3D scene structure and semantic labeling for monocular video. Starting with monocular image stream, our framework produces a 3D volumetric semantic + occupancy map, which is much more useful than a series of 2D semantic label images or a sparse point cloud produced by traditional semantic segmentation and Structure from Motion(SfM) pipelines respectively. We derive a Conditional Random Field (CRF) model defined in the 3D space, that jointly infers the semantic category and occupancy for each voxel. Such a joint inference in the 3D CRF paves the way for more informed priors and constraints, which is otherwise not possible if solved separately in their traditional frameworks. 
We make use of class-specific semantic cues that constrain the 3D structure in areas where multi-view constraints are weak. Our model comprises higher-order factors, which helps when the depth is unobservable. We also make use of class-specific semantic cues to reduce either the degree of such higher-order factors, or to approximately model them with unaries if possible. We demonstrate improved 3D structure and temporally consistent semantic segmentation for difficult, large-scale, forward-moving monocular image sequences." ] }
Superpixel segmentation @cite_15 @cite_6 has been applied to improve stereo matching results. The matching algorithm of @cite_10 , called SPS-St, is formulated on a slanted-plane model with a plane-fitting technique. To reduce the memory footprint of large-scale reconstruction while maintaining pixel-level details, the map is stored as the vertices of superpixels after they are converted into polygons @cite_6 . Improvements to the depth refinement and the representation method are also evaluated in the experiments.
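A brief sketch of per-superpixel slanted-plane fitting in the spirit of @cite_10 , using scikit-image's SLIC implementation; the synthetic depth map and the parameters are illustrative, and the full SPS-St method additionally optimizes the segmentation and boundary labels jointly.

```python
import numpy as np
from skimage.segmentation import slic

def planefit_depth(image, depth, n_segments=200):
    """Refine a noisy depth map with per-superpixel slanted-plane fitting.

    Segments the image into superpixels, then fits z = a*x + b*y + c to each
    segment's depths by least squares and replaces them with the fitted plane.
    """
    labels = slic(image, n_segments=n_segments, compactness=10)
    refined = depth.copy()
    ys, xs = np.mgrid[: depth.shape[0], : depth.shape[1]]
    for seg in np.unique(labels):
        m = labels == seg
        A = np.column_stack([xs[m], ys[m], np.ones(m.sum())])
        coeffs, *_ = np.linalg.lstsq(A, depth[m], rcond=None)
        refined[m] = A @ coeffs
    return refined

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))                 # stand-in RGB image
ramp = np.tile(np.linspace(1.0, 5.0, 64), (64, 1))
noisy_depth = ramp + rng.normal(0.0, 0.1, (64, 64))
print(planefit_depth(img, noisy_depth).shape)  # (64, 64)
```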
{ "cite_N": [ "@cite_15", "@cite_10", "@cite_6" ], "mid": [ "", "55377555", "2612271937" ], "abstract": [ "", "In this paper we propose a slanted plane model for jointly recovering an image segmentation, a dense depth estimate as well as boundary labels (such as occlusion boundaries) from a static scene given two frames of a stereo pair captured from a moving vehicle. Towards this goal we propose a new optimization algorithm for our SLIC-like objective which preserves connecteness of image segments and exploits shape regularization in the form of boundary length. We demonstrate the performance of our approach in the challenging stereo and flow KITTI benchmarks and show superior results to the state-of-the-art. Importantly, these results can be achieved an order of magnitude faster than competing approaches.", "We present an improved version of the Simple Linear Iterative Clustering (SLIC) superpixel segmentation. Unlike SLIC, our algorithm is non-iterative, enforces connectivity from the start, requires lesser memory, and is faster. Relying on the superpixel boundaries obtained using our algorithm, we also present a polygonal partitioning algorithm. We demonstrate that our superpixels as well as the polygonal partitioning are superior to the respective state-of-the-art algorithms on quantitative benchmarks." ] }
1901.05574
2963924969
Deep Recurrent Neural Networks (RNNs) have gained popularity in many sequence classification tasks. Beyond predicting a correct class for each data instance, data scientists also want to understand which differentiating factors in the data have contributed to the classification during the learning process. We present a visual analytics approach that facilitates this task by revealing the RNN attention for all data instances, their temporal positions in the sequences, and the attribution of variables at each value level. We demonstrate with real-world datasets that our approach can help data scientists understand such dynamics in deep RNNs from the training results, hence guiding their modeling process.
Many visualization techniques have been developed to facilitate the DNN model building process, covering domains such as image understanding @cite_18 @cite_5 and natural language processing @cite_7 . Techniques such as hierarchical correlation matrices @cite_2 , edge-bundled DAGs @cite_9 , parallel coordinates @cite_14 and co-clustering @cite_3 were introduced for interpreting model specifics. Systems like ActiVis @cite_13 and @cite_4 reveal the links between filters and patterns at the data-instance level. On the RNN side, much of the literature focuses on models' attention mechanisms to evaluate how well the inputs relate to the outputs. Xu et al. @cite_6 introduced an attention-based model and visualized the saliency over images corresponding to caption words. Bahdanau et al. @cite_0 visualized heatmap-like word attentions extracted by RNN models for natural language analysis. Yang et al. @cite_1 visualize word-level and sentence-level attentions over texts. Our work falls into this category in the spirit of analyzing attention from RNNs and visualizing the saliency of data instances. Moreover, our approach visualizes patterns from the whole training dataset, rather than a few data instances, to derive meaningful conclusions.
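A minimal PyTorch sketch of an attention-equipped RNN classifier that exposes its per-timestep attention weights, which is the quantity such visual analytics approaches render. The additive scoring layer here is a simplified stand-in for the attention variants cited above, and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class AttentiveRNNClassifier(nn.Module):
    """GRU classifier with additive attention; returns the per-timestep
    attention weights so they can be visualized over all data instances."""
    def __init__(self, n_features, hidden=32, n_classes=2):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)        # additive attention scorer
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (batch, time, features)
        states, _ = self.rnn(x)                  # (batch, time, hidden)
        attn = torch.softmax(self.score(states).squeeze(-1), dim=1)
        context = (attn.unsqueeze(-1) * states).sum(dim=1)
        return self.out(context), attn           # logits + saliency weights

logits, attn = AttentiveRNNClassifier(n_features=4)(torch.rand(8, 20, 4))
print(attn.shape, attn.sum(dim=1))  # (8, 20); each row sums to 1
```

Collecting `attn` for every training instance yields the instance-by-timestep saliency matrix that an attention-centric visual analytics view aggregates.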
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_7", "@cite_9", "@cite_1", "@cite_3", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_13" ], "mid": [ "2149482703", "2752194699", "2751298778", "2964159778", "", "2470673105", "2766243150", "1514535095", "2964308564", "2751746637", "", "2607223307" ], "abstract": [ "Artificial neural networks are computer software or hardware models inspired by the structure and behavior of neurons in the human nervous system. As a powerful learning tool, increasingly neural networks have been adopted by many large-scale information processing applications but there is no a set of well defined criteria for choosing a neural network. The user mostly treats a neural network as a black box and cannot explain how learning from input data was done nor how performance can be consistently ensured. We have experimented with several information visualization designs aiming to open the black box to possibly uncover underlying dependencies between the input data and the output data of a neural network. In this paper, we present our designs and show that the visualizations not only help us design more efficient neural networks, but also assist us in the process of using neural networks for problem solving such as performing a classification task.", "Recurrent neural networks, and in particular long short-term memory (LSTM) networks, are a remarkably effective tool for sequence modeling that learn a dense black-box hidden representation of their sequential input. Researchers interested in better understanding these models have studied the changes in hidden state representations over time and noticed some interpretable patterns but also significant noise. In this work, we present LSTMVis, a visual analysis tool for recurrent neural networks with a focus on understanding these hidden state dynamics. The tool allows users to select a hypothesis input range to focus on local state changes, to match these states changes to similar patterns in a large data set, and to align these results with structural annotations from their domain. We show several use cases of the tool for analyzing specific hidden state properties on dataset containing nesting, phrase structure, and chord progressions, and demonstrate how the tool can be used to isolate patterns for further statistical analysis. We characterize the domain, the different stakeholders, and their goals and tasks. Long-term usage data after putting the tool online revealed great interest in the machine learning community.", "Deep neural networks are now rivaling human accuracy in several pattern recognition problems. Compared to traditional classifiers, where features are handcrafted, neural networks learn increasingly complex features directly from the data. Instead of handcrafting the features, it is now the network architecture that is manually engineered. The network architecture parameters such as the number of layers or the number of filters per layer and their interconnections are essential for good performance. Even though basic design guidelines exist, designing a neural network is an iterative trial-and-error process that takes days or even weeks to perform due to the large datasets used for training. In this paper, we present DeepEyes, a Progressive Visual Analytics system that supports the design of neural networks during training. We present novel visualizations, supporting the identification of layers that learned a stable set of patterns and, therefore, are of interest for a detailed analysis. 
The system facilitates the identification of problems, such as superfluous filters or layers, and information that is not being captured by the network. We demonstrate the effectiveness of our system through multiple use cases, showing how a trained network can be compressed, reshaped and adapted to different problems.", "While neural networks have been successfully applied to many NLP tasks, the resulting vector-based models are very difficult to interpret. For example, it's not clear how they achieve compositionality, building sentence meaning from the meanings of words and phrases. In this paper we describe strategies for visualizing compositionality in neural models for NLP, inspired by similar work in computer vision. We first plot unit values to visualize compositionality of negation, intensification, and concessive clauses, allowing us to see well-known markedness asymmetries in negation. We then introduce methods for visualizing a unit's salience, the amount that it contributes to the final composed meaning from first-order derivatives. Our general-purpose methods may have wide applications for understanding compositionality and other semantic properties of deep networks.", "", "We propose a hierarchical attention network for document classification. Our model has two distinctive characteristics: (i) it has a hierarchical structure that mirrors the hierarchical structure of documents; (ii) it has two levels of attention mechanisms applied at the word and sentence level, enabling it to attend differentially to more and less important content when constructing the document representation. Experiments conducted on six large scale text classification tasks demonstrate that the proposed architecture outperforms previous methods by a substantial margin. Visualization of the attention layers illustrates that the model selects qualitatively informative words and sentences.", "Recurrent neural networks (RNNs) have been successfully applied to various natural language processing (NLP) tasks and achieved better results than conventional methods. However, the lack of understanding of the mechanisms behind their effectiveness limits further improvements on their architectures. In this paper, we present a visual analytics method for understanding and comparing RNN models for NLP tasks. We propose a technique to explain the function of individual hidden state units based on their expected response to input texts. We then co-cluster hidden state units and words based on the expected response and visualize co-clustering results as memory chips and word clouds to provide more structured knowledge on RNNs' hidden states. We also propose a glyph-based sequence visualization based on aggregate information to analyze the behavior of an RNN's hidden state at the sentence-level. The usability and effectiveness of our method are demonstrated through case studies and reviews from domain experts.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. 
We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consist of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the latter layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.", "", "While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they used, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ActiVis, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture, and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance- and subset-level. ActiVis has been deployed on Facebook's machine learning platform. 
We present case studies with Facebook researchers and engineers, and usage scenarios of how ActiVis may work with different models." ] }
Multivariate data visualizations have been developed in numerous fields of analysis @cite_9 . We summarize the related work based on the visualization layout. Matrix-based methods arrange variables in matrices to support pairwise comparison of attribute relationships. GPLOM @cite_23 extends the Scatterplot Matrix @cite_26 to generalized plot matrices. EnsembleMatrix @cite_19 enables interaction in matrices that helps users understand the classifiers in ensemble learning. Yuan et al. @cite_21 insert multidimensional scaling plots between neighboring parallel coordinate axes. Other methods encode multidimensional data with value-interpolated geometries @cite_15 . DICON @cite_12 uses a treemap-like icon to encode a data cluster, depicting multiple attributes and the quality of the cluster. Irimia et al. @cite_17 adopted connectograms to visualize relationships between multidimensional neuron connectivities. Since matrix-based and parallel coordinate plot (PCP) based methods are more scalable to larger datasets, we investigate both and further discuss the trade-offs in later chapters.
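A short sketch of a parallel coordinates plot, one of the PCP-based layouts discussed above, using pandas' built-in helper on the iris data as a stand-in multivariate dataset; column and label names follow scikit-learn's bundled version of the data.

```python
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates
from sklearn.datasets import load_iris

# Build a small labeled multivariate table.
iris = load_iris(as_frame=True)
df = iris.frame.rename(columns={"target": "species"})
df["species"] = df["species"].map(dict(enumerate(iris.target_names)))

# One polyline per data instance, one axis per variable; class color-coding
# makes per-class value patterns visible across all attributes at once.
parallel_coordinates(df, "species", alpha=0.3)
plt.title("Parallel coordinates view of the iris measurements")
plt.tight_layout()
plt.show()
```

PCPs scale to many instances by lowering line opacity or aggregating lines into bands, which is one of the trade-offs against matrix layouts mentioned above.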
{ "cite_N": [ "@cite_26", "@cite_9", "@cite_21", "@cite_19", "@cite_23", "@cite_15", "@cite_12", "@cite_17" ], "mid": [ "2136435231", "", "2097089704", "2127058057", "2029143245", "2137224043", "2157530472", "" ], "abstract": [ "We introduce a method for organizing multivariate displays and for guiding interactive exploration through high-dimensional data. The method is based on nine characterizations of the 2D distributions of orthogonal pairwise projections on a set of points in multidimensional Euclidean space. These characterizations include such measures as density, skewness, shape, outliers, and texture. Statistical analysis of these measures leads to ways for 1) organizing 2D scatterplots of points for coherent viewing, 2) locating unusual (outlying) marginal 2D distributions of points for anomaly detection and 3) sorting multivariate displays based on high-dimensional data, such as trees, parallel coordinates, and glyphs", "", "In this paper, we present a novel parallel coordinates design integrated with points (scattering points in parallel coordinates, SPPC), by taking advantage of both parallel coordinates and scatterplots. Different from most multiple views visualization frameworks involving parallel coordinates where each visualization type occupies an individual window, we convert two selected neighboring coordinate axes into a scatterplot directly. Multidimensional scaling is adopted to allow converting multiple axes into a single subplot. The transition between two visual types is designed in a seamless way. In our work, a series of interaction tools has been developed. Uniform brushing functionality is implemented to allow the user to perform data selection on both points and parallel coordinate polylines without explicitly switching tools. A GPU accelerated dimensional incremental multidimensional scaling (DIMDS) has been developed to significantly improve the system performance. Our case study shows that our scheme is more efficient than traditional multi-view methods in performing visual analysis tasks.", "Machine learning is an increasingly used computational tool within human-computer interaction research. While most researchers currently utilize an iterative approach to refining classifier models and performance, we propose that ensemble classification techniques may be a viable and even preferable alternative. In ensemble learning, algorithms combine multiple classifiers to build one that is superior to its components. In this paper, we present EnsembleMatrix, an interactive visualization system that presents a graphical view of confusion matrices to help users understand relative merits of various classifiers. EnsembleMatrix allows users to directly interact with the visualizations in order to explore and build combination models. We evaluate the efficacy of the system and the approach in a user study. Results show that users are able to quickly combine multiple classifiers operating on multiple feature sets to produce an ensemble classifier with accuracy that approaches best-reported performance classifying images in the CalTech-101 dataset.", "Scatterplot matrices (SPLOMs), parallel coordinates, and glyphs can all be used to visualize the multiple continuous variables (i.e., dependent variables or measures) in multidimensional multivariate data. However, these techniques are not well suited to visualizing many categorical variables (i.e., independent variables or dimensions). 
To visualize multiple categorical variables, 'hierarchical axes' that 'stack dimensions' have been used in systems like Polaris and Tableau. However, this approach does not scale well beyond a small number of categorical variables. [8] extend the matrix paradigm of the SPLOM to simultaneously visualize several categorical and continuous variables, displaying many kinds of charts in the matrix depending on the kinds of variables involved. We propose a variant of their technique, called the Generalized Plot Matrix (GPLOM). The GPLOM restricts 's technique to only three kinds of charts (scatterplots for pairs of continuous variables, heatmaps for pairs of categorical variables, and barcharts for pairings of categorical and continuous variable), in an effort to make it easier to understand. At the same time, the GPLOM extends 's work by demonstrating interactive techniques suited to the matrix of charts. We discuss the visual design and interactive features of our GPLOM prototype, including a textual search feature allowing users to quickly locate values or variables by name. We also present a user study that compared performance with Tableau and our GPLOM prototype, that found that GPLOM is significantly faster in certain cases, and not significantly slower in other cases.", "Scatterplots remain a powerful tool to visualize multidimensional data. However, accurately understanding the shape of multidimensional points from 2D projections remains challenging due to overlap. Consequently, there are a lot of variations on the scatterplot as a visual metaphor for this limitation. An important aspect often overlooked in scatterplots is the issue of sensitivity or local trend, which may help in identifying the type of relationship between two variables. However, it is not well known how or what factors influence the perception of trends from 2D scatterplots. To shed light on this aspect, we conducted an experiment where we asked people to directly draw the perceived trends on a 2D scatterplot. We found that augmenting scatterplots with local sensitivity helps to fill the gaps in visual perception while retaining the simplicity and readability of a 2D scatterplot. We call this augmentation the generalized sensitivity scatterplot (GSS). In a GSS, sensitivity coefficients are visually depicted as flow lines, which give a sense of continuity and orientation of the data that provide cues about the way data points are scattered in a higher dimensional space. We introduce a series of glyphs and operations that facilitate the analysis of multidimensional data sets using GSS, and validate with a number of well-known data sets for both regression and classification tasks.", "Clustering as a fundamental data analysis technique has been widely used in many analytic applications. However, it is often difficult for users to understand and evaluate multidimensional clustering results, especially the quality of clusters and their semantics. For large and complex data, high-level statistical information about the clusters is often needed for users to evaluate cluster quality while a detailed display of multidimensional attributes of the data is necessary to understand the meaning of clusters. In this paper, we introduce DICON, an icon-based cluster visualization that embeds statistical information into a multi-attribute display to facilitate cluster interpretation, evaluation, and comparison. 
We design a treemap-like icon to represent a multidimensional cluster, and the quality of the cluster can be conveniently evaluated with the embedded statistical information. We further develop a novel layout algorithm which can generate similar icons for similar clusters, making comparisons of clusters easier. User interaction and clutter reduction are integrated into the system to help users more effectively analyze and refine clustering results for large datasets. We demonstrate the power of DICON through a user study and a case study in the healthcare domain. Our evaluation shows the benefits of the technique, especially in support of complex multidimensional cluster analysis.", "" ] }
1901.05574
2963924969
Deep Recurrent Neural Network (RNN) has gained popularity in many sequence classification tasks. Beyond predicting a correct class for each data instance, data scientists also want to understand what differentiating factors in the data have contributed to the classification during the learning process. We present a visual analytics approach to facilitate this task by revealing the RNN attention for all data instances, their temporal positions in the sequences, and the attribution of variables at each value level. We demonstrate with real-world datasets that our approach can help data scientists to understand such dynamics in deep RNNs from the training results, hence guiding their modeling process.
We summarize temporal sequence visualizations into the following categories, based on which features of event transfer they emphasize. One line of work visualizes the flow between events: Alluvial diagrams @cite_10 reveal how network structures change over time, and Outflow @cite_25 visualizes temporal event sequences as pathways similar to parallel coordinates. Another line of work emphasizes the links between events. For example, MatrixWave @cite_11 aligns and compares differences in the occurrence of clickstreams, augmented by event states. Liu et al. @cite_20 presented an analytic pipeline for pattern mining, pattern pruning, and coordinated exploration between patterns and sequences. ViDX @cite_24 extends Marey's graph with a time-aware, outlier-preserving design to support fault detection and troubleshooting on assembly lines. The temporal sequence design in our work aims to visualize and compare the temporal patterns of all data instances from multiple predicted classes. We also adopt a juxtaposed design to depict the temporal changes within the user-specified range.
{ "cite_N": [ "@cite_24", "@cite_10", "@cite_25", "@cite_20", "@cite_11" ], "mid": [ "2507539066", "2155369095", "2113081107", "2510476599", "2032779105" ], "abstract": [ "Visual analytics plays a key role in the era of connected industry (or industry 4.0, industrial internet) as modern machines and assembly lines generate large amounts of data and effective visual exploration techniques are needed for troubleshooting, process optimization, and decision making. However, developing effective visual analytics solutions for this application domain is a challenging task due to the sheer volume and the complexity of the data collected in the manufacturing processes. We report the design and implementation of a comprehensive visual analytics system, ViDX. It supports both real-time tracking of assembly line performance and historical data exploration to identify inefficiencies, locate anomalies, and form hypotheses about their causes and effects. The system is designed based on a set of requirements gathered through discussions with the managers and operators from manufacturing sites. It features interlinked views displaying data at different levels of detail. In particular, we apply and extend the Marey's graph by introducing a time-aware outlier-preserving visual aggregation technique to support effective troubleshooting in manufacturing processes. We also introduce two novel interaction techniques, namely the quantiles brush and samples brush, for the users to interactively steer the outlier detection algorithms. We evaluate the system with example use cases and an in-depth user interview, both conducted together with the managers and operators from manufacturing plants. The result demonstrates its effectiveness and reports a successful pilot application of visual analytics for manufacturing in smart factories.", "Change is a fundamental ingredient of interaction patterns in biology, technology, the economy, and science itself: Interactions within and between organisms change; transportation patterns by air, land, and sea all change; the global financial flow changes; and the frontiers of scientific research change. Networks and clustering methods have become important tools to comprehend instances of these large-scale structures, but without methods to distinguish between real trends and noisy data, these approaches are not useful for studying how networks change. Only if we can assign significance to the partitioning of single networks can we distinguish meaningful structural changes from random fluctuations. Here we show that bootstrap resampling accompanied by significance clustering provides a solution to this problem. To connect changing structures with the changing function of networks, we highlight and summarize the significant structural changes with alluvial diagrams and realize de Solla Price's vision of mapping change in science: studying the citation pattern between about 7000 scientific journals over the past decade, we find that neuroscience has transformed from an interdisciplinary specialty to a mature and stand-alone discipline.", "Event sequence data is common in many domains, ranging from electronic medical records (EMRs) to sports events. Moreover, such sequences often result in measurable outcomes (e.g., life or death, win or loss). Collections of event sequences can be aggregated together to form event progression pathways. These pathways can then be connected with outcomes to model how alternative chains of events may lead to different results. 
This paper describes the Outflow visualization technique, designed to (1) aggregate multiple event sequences, (2) display the aggregate pathways through different event states with timing and cardinality, (3) summarize the pathways' corresponding outcomes, and (4) allow users to explore external factors that correlate with specific pathway state transitions. Results from a user study with twelve participants show that users were able to learn how to use Outflow easily with limited training and perform a range of tasks both accurately and rapidly.", "Modern web clickstream data consists of long, high-dimensional sequences of multivariate events, making it difficult to analyze. Following the overarching principle that the visual interface should provide information about the dataset at multiple levels of granularity and allow users to easily navigate across these levels, we identify four levels of granularity in clickstream analysis: patterns, segments, sequences and events. We present an analytic pipeline consisting of three stages: pattern mining, pattern pruning and coordinated exploration between patterns and sequences. Based on this approach, we discuss properties of maximal sequential patterns, propose methods to reduce the number of patterns and describe design considerations for visualizing the extracted sequential patterns and the corresponding raw sequences. We demonstrate the viability of our approach through an analysis scenario and discuss the strengths and limitations of the methods based on user feedback.", "Event sequence data analysis is common in many domains, including web and software development, transportation, and medical care. Few have investigated visualization techniques for comparative analysis of multiple event sequence datasets. Grounded in the real-world characteristics of web clickstream data, we explore visualization techniques for comparison of two clickstream datasets collected on different days or from users with different demographics. Through iterative design with web analysts, we designed MatrixWave, a matrix-based representation that allows analysts to get an overview of differences in traffic patterns and interactively explore paths through the website. We use color to encode differences and size to offer context over traffic volume. User feedback on MatrixWave is positive. Our study participants made fewer errors with MatrixWave and preferred it over the more familiar Sankey diagram." ] }
1901.05856
2909668595
This paper proposes a new reinforcement learning (RL) algorithm that enhances exploration by amplifying the imitation effect (AIE). This algorithm consists of self-imitation learning and random network distillation algorithms. We argue that these two algorithms complement each other and that combining these two algorithms can amplify the imitation effect for exploration. In addition, by adding an intrinsic penalty reward to the state that the RL agent frequently visits and using replay memory for learning the feature state when using an exploration bonus, the proposed approach leads to deep exploration and deviates from the current converged policy. We verified the exploration performance of the algorithm through experiments in a two-dimensional grid environment. In addition, we applied the algorithm to a simulated environment of unmanned combat aerial vehicle (UCAV) mission execution, and the empirical results show that AIE is very effective for finding the UCAV's shortest flight path to avoid an enemy's missiles.
Experience replay @cite_17 is a technique for exploiting past experiences, and Deep Q-Network (DQN) has exhibited human-level performance on Atari games using it @cite_8 @cite_6 . Prioritized experience replay @cite_4 samples past experiences with probability based on their temporal-difference error. ACER @cite_2 and Reactor @cite_7 utilize a replay memory within the actor-critic algorithm @cite_1 @cite_31 . However, such reuse of past experience might not be efficient if the past policy is too different from the current policy @cite_19 . Self-imitation learning (SIL) is immune to this disadvantage because it exploits only past experiences whose returns were higher than the current value estimate.
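The replay mechanisms above all revolve around a buffer of stored transitions. As a minimal illustrative sketch (not the implementation of any cited paper; the class name and capacity are arbitrary choices), a uniform replay buffer of the kind DQN uses can be written as follows:

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal uniform experience replay buffer (illustrative sketch)."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted first

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling replays transitions at the frequency they were
        # stored; prioritized replay would instead weight them by TD error.
        return random.sample(self.buffer, batch_size)
```

Prioritized replay and SIL differ mainly in how `sample` and the stored per-transition statistics are defined, which makes the buffer a convenient shared abstraction.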
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_1", "@cite_6", "@cite_19", "@cite_2", "@cite_31", "@cite_17" ], "mid": [ "2201581102", "2612610049", "1757796397", "2155027007", "2145339207", "", "2556958149", "", "2141559645" ], "abstract": [ "Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.", "", "We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.", "Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented by its own function approximator, independent of the value function, and is updated according to the gradient of expected reward with respect to the policy parameters. Williams's REINFORCE method and actor-critic methods are examples of this approach. Our main new result is to show that the gradient can be written in a form suitable for estimation from experience aided by an approximate action-value or advantage function. Using this result, we prove for the first time that a version of policy iteration with arbitrary differentiable function approximation is convergent to a locally optimal policy.", "An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.", "", "This paper presents an actor-critic deep reinforcement learning agent with experience replay that is stable, sample efficient, and performs remarkably well on challenging environments, including the discrete 57-game Atari domain and several continuous control problems. To achieve this, the paper introduces several innovations, including truncated importance sampling with bias correction, stochastic dueling network architectures, and a new trust region policy optimization method.", "", "To date, reinforcement learning has mostly been studied solving simple learning tasks. Reinforcement learning methods that have been studied so far typically converge slowly. 
The purpose of this work is thus two-fold: 1) to investigate the utility of reinforcement learning in solving much more complicated learning tasks than previously studied, and 2) to investigate methods that will speed up reinforcement learning. This paper compares eight reinforcement learning frameworks: adaptive heuristic critic (AHC) learning due to Sutton, Q-learning due to Watkins, and three extensions to both basic methods for speeding up learning. The three extensions are experience replay, learning action models for planning, and teaching. The frameworks were investigated using connectionism as an approach to generalization. To evaluate the performance of different frameworks, a dynamic environment was used as a testbed. The environment is moderately complex and nondeterministic. This paper describes these frameworks and algorithms in detail and presents empirical evaluation of the frameworks." ] }
1901.05856
2909668595
This paper proposes a new reinforcement learning (RL) algorithm that enhances exploration by amplifying the imitation effect (AIE). This algorithm consists of self-imitation learning and random network distillation algorithms. We argue that these two algorithms complement each other and that combining these two algorithms can amplify the imitation effect for exploration. In addition, by adding an intrinsic penalty reward to the state that the RL agent frequently visits and using replay memory for learning the feature state when using an exploration bonus, the proposed approach leads to deep exploration and deviates from the current converged policy. We verified the exploration performance of the algorithm through experiments in a two-dimensional grid environment. In addition, we applied the algorithm to a simulated environment of unmanned combat aerial vehicle (UCAV) mission execution, and the empirical results show that AIE is very effective for finding the UCAV's shortest flight path to avoid an enemy's missiles.
Exploration has been a central challenge for RL, and many studies have proposed methods to enhance it. The count-based exploration bonus @cite_30 is an intuitive and effective method in which an agent receives a bonus for visiting a novel state; the bonus decreases as the state is visited more frequently. Several studies estimate the density of states to provide such a bonus in large state spaces @cite_27 @cite_15 @cite_22 @cite_28 . More recent studies introduce a prediction error (curiosity), the difference between the predicted next state and the actual next state, for exploration @cite_10 @cite_12 @cite_18 @cite_20 @cite_5 . These studies design the prediction error as an exploration bonus ( @math ) that rewards the agent for performing unexpected behaviors.
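To make the count-based bonus concrete, here is a minimal sketch under the assumption that states can be discretized into hashable keys (e.g., grid cells); the bonus form `beta / sqrt(N(s))` follows the MBIE-EB style, and `beta` is an illustrative hyperparameter:

```python
from collections import defaultdict

class CountBonus:
    """Count-based exploration bonus that decays with visit frequency."""

    def __init__(self, beta=0.1):
        self.beta = beta
        self.counts = defaultdict(int)  # N(s) per discretized state

    def bonus(self, state_key):
        self.counts[state_key] += 1
        # Novel states (low N) receive a large bonus; frequently
        # visited states receive a vanishing one.
        return self.beta / self.counts[state_key] ** 0.5
```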
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_22", "@cite_28", "@cite_27", "@cite_5", "@cite_15", "@cite_20", "@cite_10", "@cite_12" ], "mid": [ "1988526405", "2751973545", "", "2886012730", "2963276097", "2788093588", "2596982695", "2885550588", "2626429629", "779494576" ], "abstract": [ "Several algorithms for learning near-optimal policies in Markov Decision Processes have been analyzed and proven efficient. Empirical results have suggested that Model-based Interval Estimation (MBIE) learns efficiently in practice, effectively balancing exploration and exploitation. This paper presents a theoretical analysis of MBIE and a new variation called MBIE-EB, proving their efficiency even under worst-case conditions. The paper also introduces a new performance metric, average loss, and relates it to its less ''online'' cousins from the literature.", "", "", "The problem of exploration in reinforcement learning is well-understood in the tabular case and many sample-efficient algorithms are known. Nevertheless, it is often unclear how the algorithms in the tabular setting can be extended to tasks with large state-spaces where generalization is required. Recent promising developments generally depend on problem-specific density models or handcrafted features. In this paper we introduce a simple approach for exploration that allows us to develop theoretically justified algorithms in the tabular case but that also give us intuitions for new algorithms applicable to settings where function approximation is required. Our approach and its underlying theory is based on the substochastic successor representation, a concept we develop here. While the traditional successor representation is a representation that defines state generalization by the similarity of successor states, the substochastic successor representation is also able to implicitly count the number of times each state (or feature) has been observed. This extension connects two until now disjoint areas of research. We show in traditional tabular domains (RiverSwim and SixArms) that our algorithm empirically performs as well as other sample-efficient algorithms. We then describe a deep reinforcement learning algorithm inspired by these ideas and show that it matches the performance of recent pseudo-count-based methods in hard exploration Atari 2600 games.", "We consider an agent's uncertainty about its environment and the problem of generalizing this uncertainty across states. Specifically, we focus on the problem of exploration in non-tabular reinforcement learning. Drawing inspiration from the intrinsic motivation literature, we use density models to measure uncertainty, and propose a novel algorithm for deriving a pseudo-count from an arbitrary density model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing sensible pseudo-counts from raw pixels. We transform these pseudo-counts into exploration bonuses and obtain significantly improved exploration in a number of hard games, including the infamously difficult MONTEZUMA'S REVENGE.", "Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to mathematically formalize these abilities using a neural network that implements curiosity-driven intrinsic motivation. 
Using a simple but ecologically naturalistic simulated environment in which an agent can move and interact with objects it sees, we propose a \"world-model\" network that learns to predict the dynamic consequences of the agent's actions. Simultaneously, we train a separate explicit \"self-model\" that allows the agent to track the error map of its own world-model, and then uses the self-model to adversarially challenge the developing world-model. We demonstrate that this policy causes the agent to explore novel and informative interactions with its environment, leading to the generation of a spectrum of complex behaviors, including ego-motion prediction, object attention, and object gathering. Moreover, the world-model that the agent learns supports improved performance on object dynamics prediction, detection, localization and recognition tasks. Taken together, our results are initial steps toward creating flexible autonomous agents that self-supervise in complex novel physical environments.", "(2016) introduced the notion of a pseudo-count, derived from a density model, to generalize count-based exploration to non-tabular reinforcement learning. This pseudo-count was used to generate an exploration bonus for a DQN agent and combined with a mixed Monte Carlo update was sufficient to achieve state of the art on the Atari 2600 game Montezuma's Revenge. We consider two questions left open by their work: First, how important is the quality of the density model for exploration? Second, what role does the Monte Carlo update play in exploration? We answer the first question by demonstrating the use of PixelCNN, an advanced neural density model for images, to supply a pseudo-count. In particular, we examine the intrinsic difficulties in adapting 's approach when assumptions about the model are violated. The result is a more practical and general algorithm requiring no special apparatus. We combine PixelCNN pseudo-counts with different agent architectures to dramatically improve the state of the art on several hard Atari games. One surprising finding is that the mixed Monte Carlo update is a powerful facilitator of exploration in the sparsest of settings, including Montezuma's Revenge.", "Reinforcement learning algorithms rely on carefully engineering environment rewards that are extrinsic to the agent. However, annotating each environment with hand-designed, dense rewards is not scalable, motivating the need for developing reward functions that are intrinsic to the agent. Curiosity is a type of intrinsic reward function which uses prediction error as reward signal. In this paper: (a) We perform the first large-scale study of purely curiosity-driven learning, i.e. without any extrinsic rewards, across 54 standard benchmark environments, including the Atari game suite. Our results show surprisingly good performance, and a high degree of alignment between the intrinsic curiosity objective and the hand-designed extrinsic rewards of many game environments. (b) We investigate the effect of using different feature spaces for computing prediction error and show that random features are sufficient for many popular RL game benchmarks, but learned features appear to generalize better (e.g. to novel game levels in Super Mario Bros.). (c) We demonstrate limitations of the prediction-based rewards in stochastic setups. Game-play videos and code are at this https URL", "", "Achieving efficient and scalable exploration in complex domains poses a major challenge in reinforcement learning. 
While Bayesian and PAC-MDP approaches to the exploration problem offer strong formal guarantees, they are often impractical in higher dimensions due to their reliance on enumerating the state-action space. Hence, exploration in complex domains is often performed with simple epsilon-greedy methods. In this paper, we consider the challenging Atari games domain, which requires processing raw pixel inputs and delayed rewards. We evaluate several more sophisticated exploration strategies, including Thompson sampling and Boltzman exploration, and propose a new exploration method based on assigning exploration bonuses from a concurrently learned model of the system dynamics. By parameterizing our learned model with a neural network, we are able to develop a scalable and efficient approach to exploration bonuses that can be applied to tasks with complex, high-dimensional state spaces. In the Atari domain, our method provides the most consistent improvement across a range of games that pose a major challenge for prior methods. In addition to raw game-scores, we also develop an AUC-100 metric for the Atari Learning domain to evaluate the impact of exploration on this benchmark." ] }
1901.05856
2909668595
This paper proposes a new reinforcement learning (RL) algorithm that enhances exploration by amplifying the imitation effect (AIE). This algorithm consists of self-imitation learning and random network distillation algorithms. We argue that these two algorithms complement each other and that combining these two algorithms can amplify the imitation effect for exploration. In addition, by adding an intrinsic penalty reward to the state that the RL agent frequently visits and using replay memory for learning the feature state when using an exploration bonus, the proposed approach leads to deep exploration and deviates from the current converged policy. We verified the exploration performance of the algorithm through experiments in a two-dimensional grid environment. In addition, we applied the algorithm to a simulated environment of unmanned combat aerial vehicle (UCAV) mission execution, and the empirical results show that AIE is very effective for finding the UCAV's shortest flight path to avoid an enemy's missiles.
However, the prediction error is stochastic because the target function itself is stochastic. In addition, the architecture of the predictor network may be too limited to generalize across the states of the environment. To address these problems, RND @cite_29 makes the target network deterministic by fixing it with randomized weights and gives the predictor network the same architecture as the target network. Other methods for efficient exploration include adding parameter noise to the network @cite_30 @cite_14 , maximizing entropy policies @cite_26 @cite_0 , adversarial self-play @cite_24 , and learning diverse policies @cite_9 @cite_3 .
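The RND idea reduces to two networks of identical architecture, one frozen and one trained. A minimal PyTorch sketch (layer sizes and names are illustrative assumptions, not those of the cited paper):

```python
import torch
import torch.nn as nn

class RND(nn.Module):
    """Random-network-distillation bonus: prediction error against a
    fixed, randomly initialized target network (illustrative sketch)."""

    def __init__(self, obs_dim, feat_dim=64):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim))
        self.target = mlp()      # frozen: makes the target deterministic
        self.predictor = mlp()   # same architecture as the target
        for p in self.target.parameters():
            p.requires_grad = False

    def intrinsic_reward(self, obs):
        with torch.no_grad():
            target_feat = self.target(obs)
        error = (self.predictor(obs) - target_feat).pow(2).mean(dim=-1)
        return error  # detach for the reward; use as-is as the predictor loss
```

Because the target is deterministic, the error shrinks only where the predictor has seen data, so it behaves like a learned novelty measure.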
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_26", "@cite_29", "@cite_9", "@cite_3", "@cite_0", "@cite_24" ], "mid": [ "1988526405", "2623491082", "2949561945", "2899205164", "2788741142", "", "64088143", "2603088459" ], "abstract": [ "Several algorithms for learning near-optimal policies in Markov Decision Processes have been analyzed and proven efficient. Empirical results have suggested that Model-based Interval Estimation (MBIE) learns efficiently in practice, effectively balancing exploration and exploitation. This paper presents a theoretical analysis of MBIE and a new variation called MBIE-EB, proving their efficiency even under worst-case conditions. The paper also introduces a new performance metric, average loss, and relates it to its less ''online'' cousins from the literature.", "Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space. An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors. Methods such as evolutionary strategies use parameter perturbations, but discard all temporal structure in the process and require significantly more samples. Combining parameter noise with traditional RL methods allows to combine the best of both worlds. We demonstrate that both off- and on-policy methods benefit from this approach through experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete action environments as well as continuous control tasks. Our results show that RL with parameter noise learns more efficiently than traditional RL with action space noise and evolutionary strategies individually.", "We propose a method for learning expressive energy-based policies for continuous states and actions, which has been feasible only in tabular domains before. We apply our method to learning maximum entropy policies, resulting into a new algorithm, called soft Q-learning, that expresses the optimal policy via a Boltzmann distribution. We use the recently proposed amortized Stein variational gradient descent to learn a stochastic sampling network that approximates samples from this distribution. The benefits of the proposed algorithm include improved exploration and compositionality that allows transferring skills between tasks, which we confirm in simulated experiments with swimming and walking robots. We also draw a connection to actor-critic methods, which can be viewed performing approximate inference on the corresponding energy-based model.", "We introduce an exploration bonus for deep reinforcement learning methods that is easy to implement and adds minimal overhead to the computation performed. The bonus is the error of a neural network predicting features of the observations given by a fixed randomly initialized neural network. We also introduce a method to flexibly combine intrinsic and extrinsic rewards. We find that the random network distillation (RND) bonus combined with this increased flexibility enables significant progress on several hard exploration Atari games. In particular we establish state of the art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods. 
To the best of our knowledge, this is the first method that achieves better than average human performance on this game without using demonstrations or having access to the underlying state of the game, and occasionally completes the first level.", "Intelligent creatures can explore their environments and learn useful skills without supervision. In this paper, we propose DIAYN (\"Diversity is All You Need\"), a method for learning useful skills without a reward function. Our proposed method learns skills by maximizing an information theoretic objective using a maximum entropy policy. On a variety of simulated robotic tasks, we show that this simple objective results in the unsupervised emergence of diverse skills, such as walking and jumping. In a number of reinforcement learning benchmark environments, our method is able to learn a skill that solves the benchmark task despite never receiving the true task reward. In these environments, some of the learned skills correspond to solving the task, and each skill that solves the task does so in a distinct manner. Our results suggest that unsupervised discovery of skills can serve as an effective pretraining mechanism for overcoming challenges of exploration and data efficiency in reinforcement learning", "", "Predicting human behavior from a small amount of training examples is a challenging machine learning problem. In this thesis, we introduce the principle of maximum causal entropy, a general technique for applying information theory to decision-theoretic, game-theoretic, and control settings where relevant information is sequentially revealed over time. This approach guarantees decision-theoretic performance by matching purposeful measures of behavior (Abbeel & Ng, 2004), and or enforces game-theoretic rationality constraints (Aumann, 1974), while otherwise being as uncertain as possible, which minimizes worst-case predictive log-loss (Grunwald & Dawid, 2003). We derive probabilistic models for decision, control, and multi-player game settings using this approach. We then develop corresponding algorithms for efficient inference that include relaxations of the Bellman equation (Bellman, 1957), and simple learning algorithms based on convex optimization. We apply the models and algorithms to a number of behavior prediction tasks. Specifically, we present empirical evaluations of the approach in the domains of vehicle route preference modeling using over 100,000 miles of collected taxi driving data, pedestrian motion modeling from weeks of indoor movement data, and robust prediction of game play in stochastic multi-player games.", "We describe a simple scheme that allows an agent to learn about its environment in an unsupervised manner. Our scheme pits two versions of the same agent, Alice and Bob, against one another. Alice proposes a task for Bob to complete; and then Bob attempts to complete the task. In this work we will focus on two kinds of environments: (nearly) reversible environments and environments that can be reset. Alice will \"propose\" the task by doing a sequence of actions and then Bob must undo or repeat them, respectively. Via an appropriate reward structure, Alice and Bob automatically generate a curriculum of exploration, enabling unsupervised training of the agent. When Bob is deployed on an RL task within the environment, this unsupervised training reduces the number of supervised episodes needed to learn, and in some cases converges to a higher reward." ] }
1901.05856
2909668595
This paper proposes a new reinforcement learning (RL) algorithm that enhances exploration by amplifying the imitation effect (AIE). This algorithm consists of self-imitation learning and random network distillation algorithms. We argue that these two algorithms complement each other and that combining these two algorithms can amplify the imitation effect for exploration. In addition, by adding an intrinsic penalty reward to the state that the RL agent frequently visits and using replay memory for learning the feature state when using an exploration bonus, the proposed approach leads to deep exploration and deviates from the current converged policy. We verified the exploration performance of the algorithm through experiments in a two-dimensional grid environment. In addition, we applied the algorithm to a simulated environment of unmanned combat aerial vehicle (UCAV) mission execution, and the empirical results show that AIE is very effective for finding the UCAV's shortest flight path to avoid an enemy's missiles.
where @math and @math and @math are the policy (i.e., the actor) and the value function parameterized by @math , and @math is a hyperparameter weighting the value loss. Intuitively, for the same state, if the past return is greater than the current value estimate ( @math ), the past behavior was a good decision, and imitating it is desirable. However, if the past return is less than the current value ( @math ), imitating the behavior is not desirable. The authors focus on combining SIL with advantage actor-critic (A2C) @cite_16 and show significant performance gains on hard-exploration Atari games.
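This decision rule translates almost directly into code. Below is a hedged sketch in the spirit of the SIL objective (the function name, tensor shapes, and default `beta` are assumptions; the @math placeholders above correspond to the clipped advantage, the policy, the value function, and the value-loss weight):

```python
import torch

def sil_loss(log_prob, value, past_return, beta=0.01):
    """SIL-style objective sketch: imitate only past behavior whose
    return exceeded the current value estimate."""
    advantage = (past_return - value).clamp(min=0)         # (R - V)_+
    policy_loss = -(log_prob * advantage.detach()).mean()  # imitate good actions
    value_loss = 0.5 * advantage.pow(2).mean()             # pull V up toward R
    return policy_loss + beta * value_loss                 # beta weights the value term
```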
{ "cite_N": [ "@cite_16" ], "mid": [ "2964043796" ], "abstract": [ "We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input." ] }
1901.05743
2910630904
Abstract This paper presents a robust and comprehensive graph-based rank aggregation approach, used to combine results of isolated ranker models in retrieval tasks. The method follows an unsupervised scheme, which is independent of how the isolated ranks are formulated. Our approach is able to combine arbitrary models, defined in terms of different ranking criteria, such as those based on textual, image or hybrid content representations. We reformulate the ad-hoc retrieval problem as a document retrieval based on fusion graphs , which we propose as a new unified representation model capable of merging multiple ranks and expressing inter-relationships of retrieval results automatically. By doing so, we claim that the retrieval system can benefit from learning the manifold structure of datasets, thus leading to more effective results. Another contribution is that our graph-based aggregation formulation, unlike existing approaches, allows for encapsulating contextual information encoded from multiple ranks, which can be directly used for ranking, without further computations and post-processing steps over the graphs. Based on the graphs, a novel similarity retrieval score is formulated using an efficient computation of minimum common subgraphs. Finally, another benefit over existing approaches is the absence of hyperparameters. A comprehensive experimental evaluation was conducted considering diverse well-known public datasets, composed of textual, image, and multimodal documents. Performed experiments demonstrate that our method reaches top performance, yielding better effectiveness scores than state-of-the-art baseline methods and promoting large gains over the rankers being fused, thus demonstrating the successful capability of the proposal in representing queries based on a unified graph-based model of rank fusions.
Rank aggregation can be seen as the task of finding a good permutation of the retrieved objects obtained from different input ranks. The Kemeny rank aggregation problem consists of finding the optimal such permutation and is NP-hard for more than three input ranks @cite_12 . In practice, rank aggregation methods compute approximate solutions that aim to yield better results than the isolated input ranks.
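A tiny sketch makes the objective concrete: the Kemeny-optimal permutation minimizes the total Kendall-tau distance to the input ranks, and the exhaustive search below (assuming all input ranks are permutations of the same item set) is feasible only on toy inputs, which is exactly the NP-hardness point:

```python
from itertools import permutations

def kendall_tau(rank_a, rank_b):
    """Number of item pairs ordered differently by the two ranks."""
    pos_b = {item: i for i, item in enumerate(rank_b)}
    return sum(1
               for i in range(len(rank_a))
               for j in range(i + 1, len(rank_a))
               if pos_b[rank_a[i]] > pos_b[rank_a[j]])

def kemeny_aggregate(ranks):
    """Exact Kemeny-optimal permutation by brute force: exponential in
    the number of items, hence only approximated in practice."""
    return min(permutations(ranks[0]),
               key=lambda perm: sum(kendall_tau(perm, r) for r in ranks))
```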
{ "cite_N": [ "@cite_12" ], "mid": [ "2051834357" ], "abstract": [ "We consider the problem of combining ranking results from various sources. In the context of the Web, the main applications include building meta-search engines, combining ranking functions, selecting documents based on multiple criteria, and improving search precision through word associations. We develop a set of techniques for the rank aggregation problem and compare their performance to that of well-known methods. A primary goal of our work is to design rank aggregation techniques that can e ectively combat ,\" a serious problem in Web searches. Experiments show that our methods are simple, e cient, and e ective." ] }
1901.05743
2910630904
Abstract This paper presents a robust and comprehensive graph-based rank aggregation approach, used to combine results of isolated ranker models in retrieval tasks. The method follows an unsupervised scheme, which is independent of how the isolated ranks are formulated. Our approach is able to combine arbitrary models, defined in terms of different ranking criteria, such as those based on textual, image or hybrid content representations. We reformulate the ad-hoc retrieval problem as a document retrieval based on fusion graphs , which we propose as a new unified representation model capable of merging multiple ranks and expressing inter-relationships of retrieval results automatically. By doing so, we claim that the retrieval system can benefit from learning the manifold structure of datasets, thus leading to more effective results. Another contribution is that our graph-based aggregation formulation, unlike existing approaches, allows for encapsulating contextual information encoded from multiple ranks, which can be directly used for ranking, without further computations and post-processing steps over the graphs. Based on the graphs, a novel similarity retrieval score is formulated using an efficient computation of minimum common subgraphs. Finally, another benefit over existing approaches is the absence of hyperparameters. A comprehensive experimental evaluation was conducted considering diverse well-known public datasets, composed of textual, image, and multimodal documents. Performed experiments demonstrate that our method reaches top performance, yielding better effectiveness scores than state-of-the-art baseline methods and promoting large gains over the rankers being fused, thus demonstrating the successful capability of the proposal in representing queries based on a unified graph-based model of rank fusions.
Related to the rank aggregation task, re-ranking refers to a prior family of methods that also aim to improve results but do not explore the inter-relationships between the ranks of the response objects. Re-ranking approaches are either feature-based @cite_40 or rank-based @cite_15 . In this sense, the exploitation of inter-relationships between ranks is, by definition, a potential advantage of rank aggregation methods. Besides, the main advantage of rank-based approaches for improved retrieval over feature-based approaches is that, while digital objects are typically modeled in high-dimensional spaces, they often live on a much lower-dimensional intrinsic manifold @cite_38 . For this reason, rank-based approaches can be more efficient while requiring less input data.
{ "cite_N": [ "@cite_38", "@cite_40", "@cite_15" ], "mid": [ "2768318902", "2774746953", "2242818826" ], "abstract": [ "Abstract As network analysis methods prevail, more metrics are applied to co-word networks to reveal hot topics in a field. However, few studies have examined the relationships among these metrics. To bridge this gap, this study explores the relationships among different ranking metrics, including one frequency-based and six network-based metrics, in order to understand the impact of network structural features on ranking themes on co-word networks. We collected bibliographic data from three disciplines from Web of Science (WoS), and generated 40 simulation networks following the preferential attachment assumption. Correlation analysis on the empirical and simulated networks shows strong relationships among the metrics. Their relationships are consistent across disciplines. The metrics can be categorized into three groups according to the strength of their correlations, where Degree Centrality, H-index, and Coreness are in one group, Betweenness Centrality, Clustering Coefficient, and frequency in another, and Weighted PageRank by itself. Regression analysis on the simulation networks reveals that network topology properties, such as connectivity, sparsity, and aggregation, influence the relationships among selected metrics. In addition, when comparing the top keywords ranked by the metrics in the three disciplines, we found the metrics exhibit different discriminative capacity. Coreness and H-index may be better suited for categorizing keywords rather than ranking keywords. Findings from this study contribute to a better understanding of the relationships among different metrics and provide guidance for using them effectively in different contexts.", "Abstract Numerous feature-based models have been recently proposed by the information retrieval community. The capability of features to express different relevance facets (query- or document-dependent) can explain such a success story. Such models are most of the time supervised, thus requiring a learning phase. To leverage the advantages of feature-based representations of documents, we propose TournaRank , an unsupervised approach inspired by real-life game and sport competition principles. Documents compete against each other in tournaments using features as evidences of relevance. Tournaments are modeled as a sequence of matches, which involve pairs of documents playing in turn their features. Once a tournament is ended, documents are ranked according to their number of won matches during the tournament. This principle is generic since it can be applied to any collection type. It also provides great flexibility since different alternatives can be considered by changing the tournament type, the match rules, the feature set, or the strategies adopted by documents during matches. TournaRank was experimented on several collections to evaluate our model in different contexts and to compare it with related approaches such as Learning To Rank and fusion ones: the TREC Robust2004 collection for homogeneous documents, the TREC Web2014 (ClueWeb12) collection for heterogeneous web documents, and the LETOR3.0 collection for comparison with supervised feature-based models.", "In this paper, we propose an extremely efficient algorithm for visual re-ranking. 
By considering the original pairwise distance in the contextual space, we develop a feature vector called sparse contextual activation (SCA) that encodes the local distribution of an image. Hence, re-ranking task can be simply accomplished by vector comparison under the generalized Jaccard metric, which has its theoretical meaning in the fuzzy set theory. In order to improve the time efficiency of re-ranking procedure, inverted index is successfully introduced to speed up the computation of generalized Jaccard metric. As a result, the average time cost of re-ranking for a certain query can be controlled within 1 ms. Furthermore, inspired by query expansion, we also develop an additional method called local consistency enhancement on the proposed SCA to improve the retrieval performance in an unsupervised manner. On the other hand, the retrieval performance using a single feature may not be satisfactory enough, which inspires us to fuse multiple complementary features for accurate retrieval. Based on SCA, a robust feature fusion algorithm is exploited that also preserves the characteristic of high time efficiency. We assess our proposed method in various visual re-ranking tasks. Experimental results on Princeton shape benchmark (3D object), WM-SRHEC07 (3D competition), YAEL data set B (face), MPEG-7 data set (shape), and Ukbench data set (image) manifest the effectiveness and efficiency of SCA." ] }
1901.05743
2910630904
Abstract This paper presents a robust and comprehensive graph-based rank aggregation approach, used to combine results of isolated ranker models in retrieval tasks. The method follows an unsupervised scheme, which is independent of how the isolated ranks are formulated. Our approach is able to combine arbitrary models, defined in terms of different ranking criteria, such as those based on textual, image or hybrid content representations. We reformulate the ad-hoc retrieval problem as a document retrieval based on fusion graphs , which we propose as a new unified representation model capable of merging multiple ranks and expressing inter-relationships of retrieval results automatically. By doing so, we claim that the retrieval system can benefit from learning the manifold structure of datasets, thus leading to more effective results. Another contribution is that our graph-based aggregation formulation, unlike existing approaches, allows for encapsulating contextual information encoded from multiple ranks, which can be directly used for ranking, without further computations and post-processing steps over the graphs. Based on the graphs, a novel similarity retrieval score is formulated using an efficient computation of minimum common subgraphs. Finally, another benefit over existing approaches is the absence of hyperparameters. A comprehensive experimental evaluation was conducted considering diverse well-known public datasets, composed of textual, image, and multimodal documents. Performed experiments demonstrate that our method reaches top performance, yielding better effectiveness scores than state-of-the-art baseline methods and promoting large gains over the rankers being fused, thus demonstrating the successful capability of the proposal in representing queries based on a unified graph-based model of rank fusions.
Median Rank Aggregation (MRA) @cite_50 is an order-based method. It traverses the ranks, counting the number of occurrences of the retrieved objects. The first object that occurs in more than half of the ranks is taken as the first object of the final rank; then the second object that occurs in more than half of the ranks is taken as the second, and so on.
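The MRA procedure is simple enough to state in a few lines. Here is a sketch under the assumption that ranks are lists of hashable items, possibly of different lengths (the function name is our own):

```python
def median_rank_aggregation(ranks):
    """Scan rank positions left to right; emit an item once it has
    appeared in more than half of the input ranks."""
    n_ranks = len(ranks)
    counts, result, emitted = {}, [], set()
    for depth in range(max(len(r) for r in ranks)):
        for rank in ranks:
            if depth < len(rank):
                item = rank[depth]
                counts[item] = counts.get(item, 0) + 1
                if counts[item] > n_ranks / 2 and item not in emitted:
                    result.append(item)
                    emitted.add(item)
    return result

# median_rank_aggregation([["a","b","c"], ["b","a","c"], ["a","c","b"]])
# -> ["a", "b", "c"]
```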
{ "cite_N": [ "@cite_50" ], "mid": [ "2142385580" ], "abstract": [ "We propose a novel approach to performing efficient similarity search and classification in high dimensional data. In this framework, the database elements are vectors in a Euclidean space. Given a query vector in the same space, the goal is to find elements of the database that are similar to the query. In our approach, a small number of independent \"voters\" rank the database elements based on similarity to the query. These rankings are then combined by a highly efficient aggregation algorithm. Our methodology leads both to techniques for computing approximate nearest neighbors and to a conceptually rich alternative to nearest neighbors.One instantiation of our methodology is as follows. Each voter projects all the vectors (database elements and the query) on a random line (different for each voter), and ranks the database elements based on the proximity of the projections to the projection of the query. The aggregation rule picks the database element that has the best median rank. This combination has several appealing features. On the theoretical side, we prove that with high probability, it produces a result that is a (1 + e) factor approximation to the Euclidean nearest neighbor. On the practical side, it turns out to be extremely efficient, often exploring no more than 5 of the data to obtain very high-quality results. This method is also database-friendly, in that it accesses data primarily in a pre-defined order without random accesses, and, unlike other methods for approximate nearest neighbors, requires almost no extra storage. Also, we extend our approach to deal with the k nearest neighbors.We conduct two sets of experiments to evaluate the efficacy of our methods. Our experiments include two scenarios where nearest neighbors are typically employed---similarity search and classification problems. In both cases, we study the performance of our methods with respect to several evaluation criteria, and conclude that they are uniformly excellent, both in terms of quality of results and in terms of efficiency." ] }
1901.05743
2910630904
Abstract This paper presents a robust and comprehensive graph-based rank aggregation approach, used to combine results of isolated ranker models in retrieval tasks. The method follows an unsupervised scheme, which is independent of how the isolated ranks are formulated. Our approach is able to combine arbitrary models, defined in terms of different ranking criteria, such as those based on textual, image or hybrid content representations. We reformulate the ad-hoc retrieval problem as a document retrieval based on fusion graphs , which we propose as a new unified representation model capable of merging multiple ranks and expressing inter-relationships of retrieval results automatically. By doing so, we claim that the retrieval system can benefit from learning the manifold structure of datasets, thus leading to more effective results. Another contribution is that our graph-based aggregation formulation, unlike existing approaches, allows for encapsulating contextual information encoded from multiple ranks, which can be directly used for ranking, without further computations and post-processing steps over the graphs. Based on the graphs, a novel similarity retrieval score is formulated using an efficient computation of minimum common subgraphs. Finally, another benefit over existing approaches is the absence of hyperparameters. A comprehensive experimental evaluation was conducted considering diverse well-known public datasets, composed of textual, image, and multimodal documents. Performed experiments demonstrate that our method reaches top performance, yielding better effectiveness scores than state-of-the-art baseline methods and promoting large gains over the rankers being fused, thus demonstrating the successful capability of the proposal in representing queries based on a unified graph-based model of rank fusions.
Six score-based methods were proposed in prior work: CombSUM, CombMAX, CombMIN, CombMED, CombMNZ, and CombANZ, each based on a distinct prior. For these methods, each rank must first be normalized with respect to its scores. Related to these methods, RLSim @cite_21 is a score-based technique, inspired by the Naive Bayes classifier, that assigns the final score of an object as the product of its scores across the ranks.
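A sketch of these score-based rules, assuming scores are already normalized to [0, 1] and each ranker is supplied as a `{doc_id: score}` mapping (the RLSim-style product is included as an extra rule for comparison; the function name is our own):

```python
import math
import statistics
from collections import defaultdict

def fuse_scores(score_lists, rule="SUM"):
    """Score-based fusion over pre-normalized scores; documents absent
    from a rank simply contribute no score to the pool."""
    pooled = defaultdict(list)
    for scores in score_lists:
        for doc, s in scores.items():
            pooled[doc].append(s)

    def fuse(vals):
        if rule == "SUM":
            return sum(vals)                # CombSUM
        if rule == "MAX":
            return max(vals)                # CombMAX
        if rule == "MIN":
            return min(vals)                # CombMIN
        if rule == "MED":
            return statistics.median(vals)  # CombMED
        if rule == "MNZ":
            return sum(vals) * len(vals)    # CombMNZ: sum x number of ranks hit
        if rule == "ANZ":
            return sum(vals) / len(vals)    # CombANZ: average over ranks hit
        if rule == "PROD":
            return math.prod(vals)          # RLSim-style product of scores
        raise ValueError(f"unknown rule: {rule}")

    return sorted(pooled, key=lambda d: fuse(pooled[d]), reverse=True)
```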
{ "cite_N": [ "@cite_21" ], "mid": [ "2177038588" ], "abstract": [ "In Content-based Image Retrieval (CBIR) systems, ranking accurately collection images is of great relevance. Users are interested in the returned images placed at the first positions, which usually are the most relevant ones. Collection images are ranked in increasing order of their distance to the query pattern (e.g., query image) defined by users. Therefore, the effectiveness of these systems is very dependent on the accuracy of the distance function adopted. In this paper, we present a novel context-based approach for redefining distances and later re-ranking images aiming to improve the effectiveness of CBIR systems. In our approach, distances among images are redefined based on the similarity of their ranked lists. Conducted experiments involving shape, color, and texture descriptors demonstrate the effectiveness of our method." ] }
1901.05743
2910630904
Abstract This paper presents a robust and comprehensive graph-based rank aggregation approach, used to combine results of isolated ranker models in retrieval tasks. The method follows an unsupervised scheme, which is independent of how the isolated ranks are formulated. Our approach is able to combine arbitrary models, defined in terms of different ranking criteria, such as those based on textual, image or hybrid content representations. We reformulate the ad-hoc retrieval problem as a document retrieval based on fusion graphs , which we propose as a new unified representation model capable of merging multiple ranks and expressing inter-relationships of retrieval results automatically. By doing so, we claim that the retrieval system can benefit from learning the manifold structure of datasets, thus leading to more effective results. Another contribution is that our graph-based aggregation formulation, unlike existing approaches, allows for encapsulating contextual information encoded from multiple ranks, which can be directly used for ranking, without further computations and post-processing steps over the graphs. Based on the graphs, a novel similarity retrieval score is formulated using an efficient computation of minimum common subgraphs. Finally, another benefit over existing approaches is the absence of hyperparameters. A comprehensive experimental evaluation was conducted considering diverse well-known public datasets, composed of textual, image, and multimodal documents. Performed experiments demonstrate that our method reaches top performance, yielding better effectiveness scores than state-of-the-art baseline methods and promoting large gains over the rankers being fused, thus demonstrating the successful capability of the proposal in representing queries based on a unified graph-based model of rank fusions.
Some graph-based approaches to rank fusion are based on Markov chains, where the objects retrieved in the various ranks are represented as vertices in a graph, with transition probabilities between vertices defined by the relative rankings of the items in the various ranks @cite_54 @cite_12 .
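As an illustration, here is a small sketch loosely in the spirit of the Markov-chain rules of @cite_12 : from an item the chain moves to another item when a majority of the input ranks prefer it, and the stationary distribution of the chain gives the aggregate ordering. The teleportation constant `eps` and the fixed-iteration power method are implementation choices of this sketch, not taken from the cited papers.

```python
import numpy as np

def mc_aggregate(ranks, eps=0.05, iters=1000):
    """Aggregate ranked lists (best first) via a majority-preference Markov chain."""
    docs = sorted(set().union(*map(set, ranks)))
    idx = {d: k for k, d in enumerate(docs)}
    n = len(docs)
    pos = [{d: r.index(d) for d in r} for r in ranks]  # rank position per list

    P = np.zeros((n, n))
    for i in docs:
        for j in docs:
            if i == j:
                continue
            votes = [p[j] < p[i] for p in pos if i in p and j in p]
            if votes and sum(votes) > len(votes) / 2:  # majority prefers j over i
                P[idx[i], idx[j]] = 1.0 / n
        P[idx[i], idx[i]] = 1.0 - P[idx[i]].sum()      # stay put otherwise

    P = (1 - eps) * P + eps / n                        # teleport: keep chain ergodic
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):                             # power iteration: pi = pi P
        pi = pi @ P
    return sorted(docs, key=lambda d: -pi[idx[d]])     # high stationary mass = preferred
```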
{ "cite_N": [ "@cite_54", "@cite_12" ], "mid": [ "87579370", "2051834357" ], "abstract": [ "The problem of combining the ranked preferences of many experts is an old and surprisingly deep problem that has gained renewed importance in many machine learning, data mining, and information retrieval applications. Effective rank aggregation becomes difficult in real-world situations in which the rankings are noisy, incomplete, or even disjoint. We address these difficulties by extending several standard methods of rank aggregation to consider similarity between items in the various ranked lists, in addition to their rankings. The intuition is that similar items should receive similar rankings, given an appropriate measure of similarity for the domain of interest. In this paper, we propose several algorithms for merging ranked lists of items with defined similarity. We establish evaluation criteria for these algorithms by extending previous definitions of distance between ranked lists to include the role of similarity between items. Finally, we test these new methods on both synthetic and real-world data, including data from an application in", "We consider the problem of combining ranking results from various sources. In the context of the Web, the main applications include building meta-search engines, combining ranking functions, selecting documents based on multiple criteria, and improving search precision through word associations. We develop a set of techniques for the rank aggregation problem and compare their performance to that of well-known methods. A primary goal of our work is to design rank aggregation techniques that can e ectively combat ,\" a serious problem in Web searches. Experiments show that our methods are simple, e cient, and e ective." ] }
1901.05657
2972975999
One of the successful approaches in semi-supervised learning is based on the consistency loss between different predictions under random perturbations. Typically, a student model is trained to be consistent with the teacher's prediction for the inputs under different perturbations. However, to be successful, the teacher's pseudo labels must have good quality, otherwise the whole learning process will fail. Unfortunately, existing methods do not assess the quality of the teacher's pseudo labels. In this paper, we propose a novel certainty-driven consistency loss (CCL) that exploits the predictive uncertainty information in the consistency loss to let the student dynamically learn from reliable targets. Specifically, we propose two approaches, i.e., Filtering CCL and Temperature CCL, to either filter out uncertain predictions or pay less attention to the uncertain ones in the consistency regularization. We combine the two approaches, which we call FT-CCL, to further improve the consistency learning framework. Based on our experiments, FT-CCL shows improvements on a general semi-supervised learning task and robustness to noisy labels. We further introduce a novel mutual learning method, where one student is decoupled from its teacher and learns from the other student's teacher, in order to acquire additional knowledge. Experimental results demonstrate the advantages of our method over state-of-the-art semi-supervised deep learning methods.
@cite_4 apply the concept of temperature in model distillation, which aims to distill the knowledge from a large pre-trained network into a much smaller network without losing much of the generalization ability. The temperature, a hyperparameter inside the softmax function, is used to soften the probability distributions produced by the softmax, which encourages the small model to learn the "dark knowledge" distributions of the large model rather than only the hard labels. However, the method requires setting the temperature empirically, and the same value is shared by all training samples. Our method automatically determines the temperature of each training sample according to its uncertainty, and uses this per-sample temperature to decide how much influence the sample has on training the student model.
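The temperature-softened objective described above can be sketched in a few lines of PyTorch; the scalar `T` below is the empirically chosen, globally shared temperature that this paragraph criticizes, and a per-sample variant would replace it with a value derived from each sample's predictive uncertainty.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Hinton-style distillation: KL divergence between softened softmaxes.

    A larger T flattens the teacher distribution, exposing the relative
    probabilities of the wrong classes ("dark knowledge"); the T**2
    factor keeps gradient magnitudes comparable across temperatures.
    """
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```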
{ "cite_N": [ "@cite_4" ], "mid": [ "1821462560" ], "abstract": [ "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel." ] }
1901.05602
2909240857
Face anti-spoofing (a.k.a presentation attack detection) has drawn growing attention due to the high-security demand in face authentication systems. Existing CNN-based approaches usually well recognize the spoofing faces when training and testing spoofing samples display similar patterns, but their performance would drop drastically on testing spoofing faces of unseen scenes. In this paper, we try to boost the generalizability and applicability of these methods by designing a CNN model with two major novelties. First, we propose a simple yet effective Total Pairwise Confusion (TPC) loss for CNN training, which enhances the generalizability of the learned Presentation Attack (PA) representations. Secondly, we incorporate a Fast Domain Adaptation (FDA) component into the CNN model to alleviate negative effects brought by domain changes. Besides, our proposed model, which is named Generalizable Face Authentication CNN (GFA-CNN), works in a multi-task manner, performing face anti-spoofing and face recognition simultaneously. Experimental results show that GFA-CNN outperforms previous face anti-spoofing approaches and also well preserves the identity information of input face images.
Most previous approaches to face anti-spoofing exploit texture differences between live and spoofing faces with pre-defined features such as LBP @cite_25, HoG @cite_13, and SURF @cite_19, which are subsequently fed to a supervised classifier (e.g., SVM, LDA) for binary classification. However, such handcrafted features are very sensitive to varying illumination conditions, camera devices, specific identities, etc. Though noticeable performance has been achieved under the intra-dataset protocol, samples from a different environment may fail the model. In order to obtain features with better generalizability, some approaches leverage temporal information, making use of the spontaneous motions of live faces, such as eye-blinking @cite_14 and lip motion @cite_23. Though these methods are effective against photo attacks, they become vulnerable when attackers simulate these motions through a paper mask with the eye and mouth regions cut out.
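To make the classical pipeline concrete, the following is a hedged sketch of an LBP-histogram-plus-SVM spoof detector of the kind cited above, using scikit-image and scikit-learn; the parameters (`P`, `R`, RBF kernel) are illustrative defaults, not the settings of @cite_25.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face, P=8, R=1):
    """Uniform-LBP histogram of a grayscale face crop."""
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    n_bins = P + 2                     # P+1 uniform patterns + one catch-all bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def train_antispoof(faces, labels):
    """faces: list of grayscale face crops; labels: 1 = live, 0 = spoof."""
    X = np.stack([lbp_histogram(f) for f in faces])
    return SVC(kernel="rbf").fit(X, labels)
```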
{ "cite_N": [ "@cite_14", "@cite_19", "@cite_23", "@cite_13", "@cite_25" ], "mid": [ "2151343288", "2551249768", "2129622867", "2095252718", "2163487272" ], "abstract": [ "We present a real-time liveness detection approach against photograph spoofing in face recognition, by recognizing spontaneous eyeblinks, which is a non-intrusive manner. The approach requires no extra hardware except for a generic webcamera. Eyeblink sequences often have a complex underlying structure. We formulate blink detection as inference in an undirected conditional graphical framework, and are able to learn a compact and efficient observation and transition potentials from data. For purpose of quick and accurate recognition of the blink behavior, eye closity, an easily-computed discriminative measure derived from the adaptive boosting algorithm, is developed, and then smoothly embedded into the conditional model. An extensive set of experiments are presented to show effectiveness of our approach and how it outperforms the cascaded Adaboost and HMM in task of eyeblink detection.", "The vulnerabilities of face biometric authentication systems to spoofing attacks have received a significant attention during the recent years. Some of the proposed countermeasures have achieved impressive results when evaluated on intratests, i.e., the system is trained and tested on the same database. Unfortunately, most of these techniques fail to generalize well to unseen attacks, e.g., when the system is trained on one database and then evaluated on another database. This is a major concern in biometric antispoofing research that is mostly overlooked. In this letter, we propose a novel solution based on describing the facial appearance by applying Fisher vector encoding on speeded-up robust features extracted from different color spaces. The evaluation of our countermeasure on three challenging benchmark face-spoofing databases, namely the CASIA face antispoofing database, the replay-attack database, and MSU mobile face spoof database, showed excellent and stable performance across all the three datasets. Most importantly, in interdatabase tests, our proposed approach outperforms the state of the art and yields very promising generalization capabilities, even when only limited training data are used.", "A robust face detection technique along with mouth localization, processing every frame in real time (video rate), is presented. Moreover, it is exploited for motion analysis onsite to verify \"liveness\" as well as to achieve lip reading of digits. A methodological novelty is the suggested quantized angle features (\"quangles\") being designed for illumination invariance without the need for preprocessing (e.g., histogram equalization). This is achieved by using both the gradient direction and the double angle direction (the structure tensor angle), and by ignoring the magnitude of the gradient. Boosting techniques are applied in a quantized feature space. A major benefit is reduced processing time (i.e., that the training of effective cascaded classifiers is feasible in very short time, less than 1 h for data sets of order 104). Scale invariance is implemented through the use of an image scale pyramid. We propose \"liveness\" verification barriers as applications for which a significant amount of computation is avoided when estimating motion. Novel strategies to avert advanced spoofing attempts (e.g., replayed videos which include person utterances) are demonstrated. 
We present favorable results on face detection for the YALE face test set and competitive results for the CMU-MIT frontal face test set as well as on \"liveness\" verification barriers.", "The face recognition community has finally started paying more attention to the long-neglected problem of spoofing attacks and the number of countermeasures is gradually increasing. Fairly good results have been reported on the publicly available databases but it is reasonable to assume that there exists no superior anti-spoofing technique due to the varying nature of attack scenarios and acquisition conditions. Therefore, we propose to approach the problem of face spoofing as a set of attack-specific subproblems that are solvable with a proper combination of complementary countermeasures. Inspired by how we humans can perform reliable spoofing detection only based on the available scene and context information, this work provides the first investigation in research literature that attempts to detect the presence of spoofing medium in the observed scene. We experiment with two publicly available databases consisting of several fake face attacks of different nature under varying conditions and imaging qualities. The experiments show excellent results beyond the state of the art. More importantly, our cross-database evaluation depicts that the proposed approach has promising generalization capabilities.", "Spoofing attacks are one of the security traits that biometric recognition systems are proven to be vulnerable to. When spoofed, a biometric recognition system is bypassed by presenting a copy of the biometric evidence of a valid user. Among all biometric modalities, spoofing a face recognition system is particularly easy to perform: all that is needed is a simple photograph of the user. In this paper, we address the problem of detecting face spoofing attacks. In particular, we inspect the potential of texture features based on Local Binary Patterns (LBP) and their variations on three types of attacks: printed photographs, and photos and videos displayed on electronic screens of different sizes. For this purpose, we introduce REPLAY-ATTACK, a novel publicly available face spoofing database which contains all the mentioned types of attacks. We conclude that LBP, with ∼15 Half Total Error Rate, show moderate discriminability when confronted with a wide set of attack types." ] }
1901.05602
2909240857
Face anti-spoofing (a.k.a presentation attack detection) has drawn growing attention due to the high-security demand in face authentication systems. Existing CNN-based approaches usually well recognize the spoofing faces when training and testing spoofing samples display similar patterns, but their performance would drop drastically on testing spoofing faces of unseen scenes. In this paper, we try to boost the generalizability and applicability of these methods by designing a CNN model with two major novelties. First, we propose a simple yet effective Total Pairwise Confusion (TPC) loss for CNN training, which enhances the generalizability of the learned Presentation Attack (PA) representations. Secondly, we incorporate a Fast Domain Adaptation (FDA) component into the CNN model to alleviate negative effects brought by domain changes. Besides, our proposed model, which is named Generalizable Face Authentication CNN (GFA-CNN), works in a multi-task manner, performing face anti-spoofing and face recognition simultaneously. Experimental results show that GFA-CNN outperforms previous face anti-spoofing approaches and also well preserves the identity information of input face images.
Recently, deep learning based methods @cite_6 @cite_1 have been proposed to address face anti-spoofing. They use CNNs to learn highly discriminative representations by treating face anti-spoofing as a binary classification problem. However, most of them easily suffer from overfitting, since current publicly available face anti-spoofing datasets are too limited to cover the various potential spoofing types. A very recent work @cite_2 by Liu et al. leverages the depth map and rPPG signal as auxiliary supervision to train the CNN, instead of treating face anti-spoofing as a simple binary classification problem, in order to avoid overfitting. Another critical issue for face anti-spoofing is domain shift. To bridge the gap between training and testing domains, @cite_1 generalizes the CNN to unknown conditions by minimizing the feature distribution dissimilarity across domains, i.e., minimizing the Maximum Mean Discrepancy (MMD) distance among representations.
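For reference, the squared MMD term that such domain-generalization approaches add to the task loss can be sketched as follows (PyTorch, RBF kernel; the bandwidth `sigma` is an assumed hyperparameter, and keeping the diagonal terms makes this the simple biased estimate).

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Squared Maximum Mean Discrepancy between feature batches x (n, d) and y (m, d)."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2                  # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))     # RBF kernel
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```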
{ "cite_N": [ "@cite_1", "@cite_6", "@cite_2" ], "mid": [ "2798097728", "1704933117", "2963656031" ], "abstract": [ "In this paper, we propose a novel framework leveraging the advantages of the representational ability of deep learning and domain generalization for face spoofing detection. In particular, the generalized deep feature representation is achieved by taking both spatial and temporal information into consideration, and a 3D convolutional neural network architecture tailored for the spatial-temporal input is proposed. The network is first initialized by training with augmented facial samples based on cross-entropy loss and further enhanced with a specifically designed generalization loss, which coherently serves as the regularization term. The training samples from different domains can seamlessly work together for learning the generalized feature representation by manipulating their feature distribution distances. We evaluate the proposed framework with different experimental setups using various databases. Experimental results indicate that our method can learn more discriminative and generalized information compared with the state-of-the-art methods.", "Though having achieved some progresses, the hand-crafted texture features, e.g., LBP [23], LBP-TOP [11] are still unable to capture the most discriminative cues between genuine and fake faces. In this paper, instead of designing feature by ourselves, we rely on the deep convolutional neural network (CNN) to learn features of high discriminative ability in a supervised manner. Combined with some data pre-processing, the face anti-spoofing performance improves drastically. In the experiments, over 70 relative decrease of Half Total Error Rate (HTER) is achieved on two challenging datasets, CASIA [36] and REPLAY-ATTACK [7] compared with the state-of-the-art. Meanwhile, the experimental results from inter-tests between two datasets indicates CNN can obtain features with better generalization ability. Moreover, the nets trained using combined data from two datasets have less biases between two datasets.", "Face anti-spoofing is crucial to prevent face recognition systems from a security breach. Previous deep learning approaches formulate face anti-spoofing as a binary classification problem. Many of them struggle to grasp adequate spoofing cues and generalize poorly. In this paper, we argue the importance of auxiliary supervision to guide the learning toward discriminative and generalizable cues. A CNN-RNN model is learned to estimate the face depth with pixel-wise supervision, and to estimate rPPG signals with sequence-wise supervision. The estimated depth and rPPG are fused to distinguish live vs. spoof faces. Further, we introduce a new face anti-spoofing database that covers a large range of illumination, subject, and pose variations. Experiments show that our model achieves the state-of-the-art results on both intra- and cross-database testing." ] }
1901.05635
2910234583
Spatio-temporal information is very important to capture the discriminative cues between genuine and fake faces from video sequences. To explore such a temporal feature, the fine-grained motions (e.g., eye blinking, mouth movements and head swing) across video frames are very critical. In this paper, we propose a joint CNN-LSTM network for face anti-spoofing, focusing on the motion cues across video frames. We first extract the high discriminative features of video frames using the conventional Convolutional Neural Network (CNN). Then we leverage Long Short-Term Memory (LSTM) with the extracted features as inputs to capture the temporal dynamics in videos. To ensure the fine-grained motions more easily to be perceived in the training process, the eulerian motion magnification is used as the preprocessing to enhance the facial expressions exhibited by individuals, and the attention mechanism is embedded in LSTM to ensure the model learn to focus selectively on the dynamic frames across the video clips. Experiments on Replay Attack and MSU-MFSD databases show that the proposed method yields state-of-the-art performance with better generalization ability compared with several other popular algorithms.
Recently, a large number of approaches have been proposed in the literature to detect spoofing attacks based on photographs, replayed videos and forged masks @cite_27 @cite_14 @cite_8 @cite_19 @cite_3 @cite_12 . Depending on the cues used, existing face anti-spoofing methods can be roughly categorized into two groups: static approaches and dynamic approaches.
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_3", "@cite_19", "@cite_27", "@cite_12" ], "mid": [ "", "2554020789", "1810943226", "2617869948", "2003092530", "2005708641" ], "abstract": [ "", "A face-spoofing attack occurs when an imposter manipulates a face recognition and verification system to gain access as a legitimate user by presenting a 2D printed image or recorded video to the face sensor. This paper presents an efficient and non-intrusive method to counter face-spoofing attacks that uses a single image to detect spoofing attacks. We apply a nonlinear diffusion based on an additive operator splitting scheme. Additionally, we propose a specialized deep convolution neural network that can extract the discriminative and high-level features of the input diffused image to differentiate between a fake face and a real face. Our proposed method is both efficient and convenient compared with the previously implemented state-of-the-art methods described in the literature review. We achieved the highest reported accuracy of 99 on the widely used NUAA dataset. In addition, we tested our method on the Replay Attack dataset which consists of 1200 short videos of both real access and spoofing attacks. An extensive experimental analysis was conducted that demonstrated better results when compared to previous static algorithms results. However, this result can be improved by applying a sparse autoencoder learning algorithm to obtain a more distinguishable diffused image.", "This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time. The approach is demonstrated for text (where the data are discrete) and online handwriting (where the data are real-valued). It is then extended to handwriting synthesis by allowing the network to condition its predictions on a text sequence. The resulting system is able to generate highly realistic cursive handwriting in a wide variety of styles.", "Face recognition systems are gaining momentum with current developments in computer vision. At the same time, tactics to mislead these systems are getting more complex, and counter-measure approaches are necessary. Following the current progress with convolutional neural networks (CNN) in classification tasks, we present an approach based on transfer learning using a pre-trained CNN model using only static features to recognize photo, video or mask attacks. We tested our approach on the REPLAY-ATTACK and 3DMAD public databases. On the REPLAY-ATTACK database our accuracy was 99.04 and the half total error rate (HTER) of 1.20 . For the 3DMAD, our accuracy was of 100.00 and HTER 0.00 . Our results are comparable to the state-of-the-art.", "Automatic face recognition is now widely used in applications ranging from deduplication of identity to authentication of mobile payment. This popularity of face recognition has raised concerns about face spoof attacks (also known as biometric sensor presentation attacks), where a photo or video of an authorized person’s face could be used to gain access to facilities or services. While a number of face spoof detection techniques have been proposed, their generalization ability has not been adequately addressed. We propose an efficient and rather robust face spoof detection algorithm based on image distortion analysis (IDA). Four different features (specular reflection, blurriness, chromatic moment, and color diversity) are extracted to form the IDA feature vector. 
An ensemble classifier, consisting of multiple SVM classifiers trained for different face spoof attacks (e.g., printed photo and replayed video), is used to distinguish between genuine (live) and spoof faces. The proposed approach is extended to multiframe face spoof detection in videos using a voting-based scheme. We also collect a face spoof database, MSU mobile face spoofing database (MSU MFSD), using two mobile devices (Google Nexus 5 and MacBook Air) with three types of spoof attacks (printed photo, replayed video with iPhone 5S, and replayed video with iPad Air). Experimental results on two public-domain face spoof databases (Idiap REPLAY-ATTACK and CASIA FASD), and the MSU MFSD database show that the proposed approach outperforms the state-of-the-art methods in spoof detection. Our results also highlight the difficulty in separating genuine and spoof faces, especially in cross-database and cross-device scenarios.", "Deep Bidirectional LSTM (DBLSTM) recurrent neural networks have recently been shown to give state-of-the-art performance on the TIMIT speech database. However, the results in that work relied on recurrent-neural-network-specific objective functions, which are difficult to integrate with existing large vocabulary speech recognition systems. This paper investigates the use of DBLSTM as an acoustic model in a standard neural network-HMM hybrid system. We find that a DBLSTM-HMM hybrid gives equally good results on TIMIT as the previous work. It also outperforms both GMM and deep network benchmarks on a subset of the Wall Street Journal corpus. However the improvement in word error rate over the deep network is modest, despite a great increase in framelevel accuracy. We conclude that the hybrid approach with DBLSTM appears to be well suited for tasks where acoustic modelling predominates. Further investigation needs to be conducted to understand how to better leverage the improvements in frame-level accuracy towards better word error rates." ] }
1901.05599
2963795705
Robots hold promise in many scenarios involving outdoor use, such as search-and-rescue, wildlife management, and collecting data to improve environment, climate, and weather forecasting. However, autonomous navigation of outdoor trails remains a challenging problem. Recent work has sought to address this issue using deep learning. Although this approach has achieved state-of-the-art results, the deep learning paradigm may be limited due to a reliance on large amounts of annotated training data. Collecting and curating training datasets may not be feasible or practical in many situations, especially as trail conditions may change due to seasonal weather variations, storms, and natural erosion. In this paper, we explore an approach to address this issue through virtual-to-real-world transfer learning using a variety of deep learning models trained to classify the direction of a trail in an image. Our approach utilizes synthetic data gathered from virtual environments for model training, bypassing the need to collect a large amount of real images of the outdoors. We validate our approach in three main ways. First, we demonstrate that our models achieve classification accuracies upwards of 95 on our synthetic data set. Next, we utilize our classification models in the control system of a simulated robot to demonstrate feasibility. Finally, we evaluate our models on real-world trail data and demonstrate the potential of virtual-to-real-world transfer learning.
This dataset was used to train a convolutional neural network that learned to discriminate on salient features that best predict the most likely classification of the image. This method achieved classification accuracies of 85.2%. While it demonstrated promising results, the approach relies on real-world data collection and may thus be limited by battery life, human fatigue, data collection errors due to incorrect head orientation, mislabeling of data, or seasonal availability and safety. In addition, this approach may not extend to inaccessible, novel, or dangerous environments, such as rugged winter trails or extraterrestrial environments (e.g., for use in robotic space exploration on Mars or the Lunar surface). A possible solution to these challenges is transfer learning, an active area of research within the deep learning community, where knowledge representations are learned in one domain and utilized to accelerate learning in a related domain. For instance, research has revealed that convolutional neural networks trained on natural images learn generalizable features, such as Gabor filters and color blobs @cite_3 , that transfer across datasets and tasks, such as ImageNet @cite_6 .
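A common way to exploit such generalizable features is to reuse a pretrained trunk and retrain only the classification head. The sketch below assumes a recent torchvision and an illustrative three-way trail-direction label set (left/straight/right); it is not a description of the exact models evaluated in the paper above.

```python
import torch.nn as nn
from torchvision import models

def trail_classifier(n_classes=3, freeze=True):
    """ImageNet-pretrained ResNet-18 reused for trail-direction classification."""
    net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    if freeze:                         # keep the generic Gabor-like trunk fixed
        for p in net.parameters():
            p.requires_grad = False
    net.fc = nn.Linear(net.fc.in_features, n_classes)  # new head: left/straight/right
    return net
```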
{ "cite_N": [ "@cite_6", "@cite_3" ], "mid": [ "2117539524", "2149933564" ], "abstract": [ "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the 5 years of the challenge, and propose future directions and improvements.", "Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset." ] }
1901.05599
2963795705
Robots hold promise in many scenarios involving outdoor use, such as search-and-rescue, wildlife management, and collecting data to improve environment, climate, and weather forecasting. However, autonomous navigation of outdoor trails remains a challenging problem. Recent work has sought to address this issue using deep learning. Although this approach has achieved state-of-the-art results, the deep learning paradigm may be limited due to a reliance on large amounts of annotated training data. Collecting and curating training datasets may not be feasible or practical in many situations, especially as trail conditions may change due to seasonal weather variations, storms, and natural erosion. In this paper, we explore an approach to address this issue through virtual-to-real-world transfer learning using a variety of deep learning models trained to classify the direction of a trail in an image. Our approach utilizes synthetic data gathered from virtual environments for model training, bypassing the need to collect a large amount of real images of the outdoors. We validate our approach in three main ways. First, we demonstrate that our models achieve classification accuracies upwards of 95 on our synthetic data set. Next, we utilize our classification models in the control system of a simulated robot to demonstrate feasibility. Finally, we evaluate our models on real-world trail data and demonstrate the potential of virtual-to-real-world transfer learning.
Our approach is inspired by transfer learning; however, instead of transferring from one real-world domain to another, we are interested in the notion of transferring knowledge learned in virtual environments to the real world. For example, prior work has developed a mapless motion planner for real environments by training a deep reinforcement learning model in synthetic settings @cite_2 . After training in a well-defined simulation, the system converges upon an optimal set of navigational policies that are then transferred to a real-world robot capable of navigating a room with static obstacles. This work highlights the potential of virtual-to-real transfer learning in domains where a well-defined simulation is available. However, it does not address the challenges of perception and navigation in complex environments where simulations may be lacking or non-existent. Our work in this paper further explores the potential of virtual-to-real-world transfer learning to address the challenges raised by complex domains, such as wilderness trails.
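As a rough illustration of the planner interface described above, the toy actor network below maps sparse range findings plus the relative target position to continuous steering commands; the layer sizes are invented for this sketch and do not come from @cite_2.

```python
import torch
import torch.nn as nn

class MaplessPlanner(nn.Module):
    """Tiny actor: 10-d range findings + (distance, angle) target -> velocities."""
    def __init__(self, n_ranges=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_ranges + 2, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 2), nn.Tanh(),   # bounded linear & angular velocity
        )

    def forward(self, ranges, target):
        return self.net(torch.cat([ranges, target], dim=-1))
```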
{ "cite_N": [ "@cite_2" ], "mid": [ "2963428623" ], "abstract": [ "We present a learning-based mapless motion planner by taking the sparse 10-dimensional range findings and the target position with respect to the mobile robot coordinate frame as input and the continuous steering commands as output. Traditional motion planners for mobile ground robots with a laser range sensor mostly depend on the obstacle map of the navigation environment where both the highly precise laser sensor and the obstacle map building work of the environment are indispensable. We show that, through an asynchronous deep reinforcement learning method, a mapless motion planner can be trained end-to-end without any manually designed features and prior demonstrations. The trained planner can be directly applied in unseen virtual and real environments. The experiments show that the proposed mapless motion planner can navigate the nonholonomic mobile robot to the desired targets without colliding with any obstacles." ] }
1901.05744
2910903466
We present a novel technique based on deep learning and set theory which yields exceptional classification and prediction results. Having access to a sufficiently large amount of labelled training data, our methodology is capable of predicting the labels of the test data almost always even if the training data is entirely unrelated to the test data. In other words, we prove in a specific setting that as long as one has access to enough data points, the quality of the data is irrelevant.
We will recall a couple of relevant articles that highlight the current efficiency of deep neural networks and then cite a number of our articles for no apparent reason. Neural networks were initially introduced in the 1940s by McCulloch and Pitts @cite_30 in an attempt to mathematically model the human brain. Later, this framework was identified as a flexible and powerful computational architecture which then led to the field of deep learning @cite_21 @cite_17 @cite_10 . Deep learning, roughly speaking, deals with the data-driven manipulation of neural networks. These methods turned out to be highly efficient to the extent that deep learning based methods are state-of-the-art technology in all image classification tasks @cite_31 @cite_0 @cite_6 . They have revolutionised the field of speech recognition @cite_19 @cite_32 @cite_42 and have achieved a level of skill in playing games that humans or the best alternative algorithms cannot match anymore @cite_4 @cite_12 @cite_2 .
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_21", "@cite_42", "@cite_32", "@cite_6", "@cite_0", "@cite_19", "@cite_2", "@cite_31", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "1995341919", "2772709170", "2963446085", "", "", "", "", "2160815625", "", "2963446712", "", "", "" ], "abstract": [ "Abstract Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. It is found that the behavior of every net can be described in these terms, with the addition of more complicated logical means for nets containing circles; and that for any logical expression satisfying certain conditions, one can find a net behaving in the fashion it describes. It is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under the other and gives the same results, although perhaps not in the same time. Various applications of the calculus are discussed.", "The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.", "In this paper, we prove a conjecture published in 1989 and also partially address an open problem announced at the Conference on Learning Theory (COLT) 2015. For an expected loss function of a deep nonlinear neural network, we prove the following statements under the independence assumption adopted from recent work: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) the property of saddle points differs for shallow networks (with three layers) and deeper networks (with more than three layers). Moreover, we prove that the same four statements hold for deep linear neural networks with any depth, any widths and no unrealistic assumptions. As a result, we present an instance, for which we can answer to the following question: how difficult to directly train a deep model in theory? It is more difficult than the classical machine learning models (because of the non-convexity), but not too difficult (because of the nonexistence of poor local minima and the property of the saddle points). We note that even though we have advanced the theoretical foundations of deep learning, there is still a gap between theory and practice.", "", "", "", "", "Most current speech recognition systems use hidden Markov models (HMMs) to deal with the temporal variability of speech and Gaussian mixture models (GMMs) to determine how well each state of each HMM fits a frame or a short window of frames of coefficients that represents the acoustic input. 
An alternative way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition benchmarks, sometimes by a large margin. This article provides an overview of this progress and represents the shared views of four research groups that have had recent successes in using DNNs for acoustic modeling in speech recognition.", "", "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https: github.com liuzhuang13 DenseNet.", "", "", "" ] }
1907.06745
2958670508
Humanitarian disasters have been on the rise in recent years due to the effects of climate change and socio-political situations such as the refugee crisis. Technology can be used to best mobilize resources such as food and water in the event of a natural disaster, by semi-automatically flagging tweets and short messages as indicating an urgent need. The problem is challenging not just because of the sparseness of data in the immediate aftermath of a disaster, but because of the varying characteristics of disasters in developing countries (making it difficult to train just one system) and the noise and quirks in social media. In this paper, we present a robust, low-supervision social media urgency system that adapts to arbitrary crises by leveraging both labeled and unlabeled data in an ensemble setting. The system is also able to adapt to new crises where an unlabeled background corpus may not be available yet by utilizing a simple and effective transfer learning methodology. Experimentally, our transfer learning and low-supervision approaches are found to outperform viable baselines with high significance on myriad disaster datasets.
Other lines of work relevant to this paper involve minimally supervised machine learning, representation learning and transfer learning. Concerning minimally supervised machine learning (ML) in general, ML techniques where there are few, and in the case of zero-shot learning @cite_0 @cite_22 no, observed instances for a label have been a popular research agenda for many years @cite_26 @cite_20 . In addition to weak supervision approaches @cite_20 , both semi-supervised and active learning have been studied in great depth, with surveys provided by @cite_23 @cite_28 . However, to the best of our knowledge, a successful systems-level conjunction of various minimally supervised ML techniques has not been achieved for the task of short-text urgency detection. Such an empirical assessment is an important goal of this paper.
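As one concrete instance of the minimally supervised techniques surveyed above, here is a plain self-training loop, one of the simplest semi-supervised schemes; the base classifier, confidence threshold, and round count are arbitrary choices for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, thresh=0.9, rounds=5):
    """Repeatedly add confidently pseudo-labeled unlabeled points to the training set."""
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        proba = clf.predict_proba(pool)
        keep = proba.max(axis=1) >= thresh           # confident predictions only
        if not keep.any():
            break
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, clf.classes_[proba[keep].argmax(axis=1)]])
        pool = pool[~keep]
    return LogisticRegression(max_iter=1000).fit(X, y)
```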
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_28", "@cite_0", "@cite_23", "@cite_20" ], "mid": [ "1812986701", "652269744", "2903158431", "2150295085", "2136504847", "" ], "abstract": [ "The main contribution of this paper is a systematic analysis of a minimally supervised machine learning method for relation extraction grammars. The method is based on a bootstrapping approach in which the bootstrapping is triggered by semantic seeds. The starting point of our analysis is the pattern-learning graph which is a subgraph of the bipartite graph representing all connections between linguistic patterns and relation instances exhibited by the data. It is shown that the performance of such general learning framework for actual tasks is dependent on certain properties of the data and on the selection of seeds. Several experiments have been conducted to gain explanatory insights into the interaction of these two factors. From the investigation of more effective seeds and benevolent data we understand how to improve the learning in less fortunate configurations. A relation extraction method only based on positive examples cannot avoid all false positives, especially when the data properties yield a high recall. Therefore, negative seeds are employed to learn negative patterns, which boost precision.", "Zero-shot learning consists in learning how to recognise new concepts by just having a description of them. Many sophisticated approaches have been proposed to address the challenges this problem comprises. In this paper we describe a zero-shot learning approach that can be implemented in just one line of code, yet it is able to outperform state of the art approaches on standard datasets. The approach is based on a more general framework which models the relationships between features, attributes, and classes as a two linear layers network, where the weights of the top layer are not learned but are given by the environment. We further provide a learning bound on the generalisation error of this kind of approaches, by casting them as domain adaptation methods. In experiments carried out on three standard real datasets, we found that our approach is able to perform significantly better than the state of art on all of them, obtaining a ratio of improvement up to 17 .", "", "We consider the problem of zero-shot learning, where the goal is to learn a classifier f : X → Y that must predict novel values of Y that were omitted from the training set. To achieve this, we define the notion of a semantic output code classifier (SOC) which utilizes a knowledge base of semantic properties of Y to extrapolate to novel classes. We provide a formalism for this type of classifier and study its theoretical properties in a PAC framework, showing conditions under which the classifier can accurately predict novel classes. As a case study, we build a SOC classifier for a neural decoding task and show that it can often predict words that people are thinking about from functional magnetic resonance images (fMRI) of their neural activity, even without training examples for those words.", "Door lock apparatus in which a door latch mechanism is operated by inner and outer door handles coupled to a latch shaft extending through the latch mechanism. Handles are coupled to ends of latch shaft by coupling devices enabling door to be locked from the inside to prevent entry from the outside but can still be opened from the inside by normal operation of outside handle. 
Inside coupling device has limited lost-motion which is used to operate cam device to unlock the door on actuation of inner handles.", "" ] }
1907.06989
2958962552
In this technical report we investigate speed estimation of the ego-vehicle on the KITTI benchmark using state-of-the-art deep neural network based optical flow and single-view depth prediction methods. Using a straightforward intuitive approach and approximating a single scale factor, we evaluate several application schemes of the deep networks and formulate meaningful conclusions such as: combining depth information with optical flow improves speed estimation accuracy as opposed to using optical flow alone; the quality of the deep neural network methods influences speed estimation performance; using the depth and optical flow results from smaller crops of wide images degrades performance. With these observations in mind, we achieve an RMSE of less than 1 m/s for vehicle speed estimation using monocular images as input from recordings of the KITTI benchmark. Limitations and possible future directions are discussed as well.
In @cite_6 the authors used sparse optical flow to track feature points on images from a downward-looking camera mounted on the rear axle of the car, and achieved a mean error relative to GPS measurements of 0.121 m/s. However, the method works only in restricted conditions and was evaluated on self-collected data at low speed values. Han @cite_15 used projective geometry concepts to estimate relative and absolute speed in different case studies using black box footage, reporting a maximum error of 3%. The authors of @cite_23 used a rather complicated neural network architecture trained on self-collected data and reported an RMSE of 10 mph on the KITTI benchmark @cite_24 .
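The intuitive flow-plus-depth scheme can be sketched roughly as follows: back-project each pixel and its flow target with the (scaled) predicted depth, then take the median 3-D displacement per frame. This sketch assumes a mostly static scene, negligible rotation, known intrinsics `K`, and a single metric scale factor for the up-to-scale depth network; it only approximates the report's actual pipeline.

```python
import numpy as np

def ego_speed(flow, depth, K, fps, scale=1.0):
    """Crude ego-speed (m/s) from dense optical flow (h, w, 2) and depth (h, w)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix0 = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T.astype(float)
    pix1 = pix0.copy()
    pix1[0] += flow[..., 0].ravel()           # pixel positions advected by the flow
    pix1[1] += flow[..., 1].ravel()

    Kinv = np.linalg.inv(K)
    z = (scale * depth).ravel()
    p0 = (Kinv @ pix0) * z                    # 3-D points at frame t
    p1 = (Kinv @ pix1) * z                    # same depth reused at t+1 (approximation)
    disp = np.linalg.norm(p1 - p0, axis=0)
    return float(np.median(disp)) * fps       # robust per-frame displacement -> speed
```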
{ "cite_N": [ "@cite_24", "@cite_15", "@cite_23", "@cite_6" ], "mid": [ "2115579991", "", "2741366276", "2061055677" ], "abstract": [ "We present a novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research. In total, we recorded 6 hours of traffic scenarios at 10-100 Hz using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras, a Velodyne 3D laser scanner and a high-precision GPS IMU inertial navigation system. The scenarios are diverse, capturing real-world traffic situations, and range from freeways over rural areas to inner-city scenes with many static and dynamic objects. Our data is calibrated, synchronized and timestamped, and we provide the rectified and raw image sequences. Our dataset also contains object labels in the form of 3D tracklets, and we provide online benchmarks for stereo, optical flow, object detection and other tasks. This paper describes our recording platform, the data format and the utilities that we provide.", "", "We aim to determine the speed of ego-vehicle motion from a video stream. Previous work by [1] has shown that motion can be detected and quantified with the help of a synchrony autoencoder, which has multiplicative gating interactions introduced between its hidden units, and hence, across video frames. In this work we modify their synchrony autoencoder method to achieve a ”real time” performance in a wide variety of driving environments. Our modifications led to a model which is 1.5 times faster and uses only half of the total memory by comparison with the original. We also benchmark the speed estimation performance against a model based on CaffeNet. CaffeNet is known for visual classification and localization but we employ its architecture with a little tweak for speed determination using sequential video frames and blur patterns. We evaluate our models on self-collected data, KITTI, and other standard sets.", "It has great significance to acquire vehicle speed for active safety system. This paper presents a methodology for identifying vehicle speed by obtaining a sparse optical flow from image sequences. Distinct corners can be detected by Harris corner detector after image enhancement. Then, Lucas-Kanade method for optical flow calculation is utilized to match the sparse feature set of one frame on the consecutive frame. In order to improve the accuracy of optical flow, RANSAC algorithm is introduced to optimize the matched corners. Finally, the vehicle speed can be determined by averaging all the speeds estimated by every optimized matched corner. The results of field test indicated that the computation time of the developed method to execute for one time was 59ms, and the mean error of speed estimation relative to the measurement of GPS was 0.121 m s. The developed method can achieve satisfying performance, such as accuracy and output frequency." ] }
1907.07011
2956444090
Introducing explicit constraints on the structural predictions has been an effective way to improve the performance of semantic segmentation models. Existing methods are mainly based on insufficient hand-crafted rules that only partially capture the image structure, and some methods can also suffer from the efficiency issue. As a result, most of the state-of-the-art fully convolutional networks did not adopt these techniques. In this work, we propose a simple, fast yet effective method that exploits structural information through direct supervision with minor additional expense. To be specific, our method explicitly requires the network to predict semantic segmentation as well as dilated affinity, which is a sparse version of pair-wise pixel affinity. The capability of telling the relationships between pixels are directly built into the model and enhance the quality of segmentation in two stages. 1) Joint training with dilated affinity can provide robust feature representations and thus lead to finer segmentation results. 2) The extra output of affinity information can be further utilized to refine the original segmentation with a fast propagation process. Consistent improvements are observed on various benchmark datasets when applying our framework to the existing state-of-the-art model. Codes will be released soon.
The fully convolutional network @cite_23 is one of the pioneering works that introduced deep learning into semantic segmentation and achieved impressive performance on benchmark datasets. Two important techniques were proposed there and have been explored extensively afterward. First, networks originally designed for image recognition are adapted into a fully convolutional fashion so that they emit dense output directly. Later works found that dilated convolution @cite_18 @cite_27 alleviates the loss of precision caused by excessive spatial downsampling. In addition, increasing the receptive field @cite_8 @cite_2 @cite_6 can give an extra performance boost. Second, a skip architecture was proposed to refine the segmentation results with features from multiple levels, and various substitutes @cite_28 @cite_1 @cite_0 @cite_20 @cite_15 have been investigated since.
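In code, the resolution-preserving trick is a one-argument change: a dilated 3x3 convolution enlarges the receptive field without extra downsampling or extra weights (PyTorch; channel sizes are illustrative).

```python
import torch.nn as nn

# Same parameter count, but the dilated filter covers a 5x5 neighborhood,
# so context grows without pooling away spatial resolution.
conv_plain   = nn.Conv2d(256, 256, kernel_size=3, padding=1)
conv_dilated = nn.Conv2d(256, 256, kernel_size=3, padding=2, dilation=2)
```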
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_28", "@cite_1", "@cite_6", "@cite_0", "@cite_27", "@cite_23", "@cite_2", "@cite_15", "@cite_20" ], "mid": [ "2952865063", "1817277359", "1901129140", "2508741746", "2630837129", "2787091153", "2286929393", "1903029394", "2952596663", "2895420332", "2963815618" ], "abstract": [ "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed \"DeepLab\" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "We present a technique for adding global context to deep convolutional networks for semantic segmentation. The approach is simple, using the average feature for a layer to augment the features at each location. In addition, we study several idiosyncrasies of training, significantly increasing the performance of baseline networks (e.g. from FCN). When we add our proposed global feature, and a technique for learning normalization parameters, accuracy increases consistently even over our improved versions of the baselines. Our proposed approach, ParseNet, achieves state-of-the-art performance on SiftFlow and PASCAL-Context with small additional computational cost over baselines, and near current state-of-the-art performance on PASCAL VOC 2012 semantic segmentation with a simple approach. Code is available at this https URL .", "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. 
Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net .", "CNN architectures have terrific recognition performance but rely on spatial pooling which makes it difficult to adapt them to tasks that require dense, pixel-accurate labeling. This paper makes two contributions: (1) We demonstrate that while the apparent spatial resolution of convolutional feature maps is low, the high-dimensional feature representation contains significant sub-pixel localization information. (2) We describe a multi-resolution reconstruction architecture based on a Laplacian pyramid that uses skip connections from higher resolution feature maps and multiplicative gating to successively refine segment boundaries reconstructed from lower-resolution maps. This approach yields state-of-the-art semantic segmentation results on the PASCAL VOC and Cityscapes segmentation benchmarks without resorting to more complex random-field inference or instance detection driven architectures.", "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed DeepLabv3' system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.", "Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0 and 82.1 without any post-processing. 
Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at this https URL .", "State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.", "Accurate semantic image segmentation requires the joint consideration of local appearance, semantic information, and global scene context. In today’s age of pre-trained deep networks and their powerful convolutional features, state-of-the-art semantic segmentation approaches differ mostly in how they choose to combine together these different kinds of information. 
In this work, we propose a novel scheme for aggregating features from different scales, which we refer to as Multi-Scale Context Intertwining (MSCI). In contrast to previous approaches, which typically propagate information between scales in a one-directional manner, we merge pairs of feature maps in a bidirectional and recurrent fashion, via connections between two LSTM chains. By training the parameters of the LSTM units on the segmentation task, the above approach learns how to extract powerful and effective features for pixel-level semantic segmentation, which are then combined hierarchically. Furthermore, rather than using fixed information propagation routes, we subdivide images into super-pixels, and use the spatial relationship between them in order to perform image-adapted context aggregation. Our extensive evaluation on public benchmarks indicates that all of the aforementioned components of our approach increase the effectiveness of information propagation throughout the network, and significantly improve its eventual segmentation accuracy.", "Modern semantic segmentation frameworks usually combine low-level and high-level features from pre-trained backbone convolutional models to boost performance. In this paper, we first point out that a simple fusion of low-level and high-level features could be less effective because of the gap in semantic levels and spatial resolution. We find that introducing semantic information into low-level features and high-resolution details into high-level features is more effective for the later fusion. Based on this observation, we propose a new framework, named ExFuse, to bridge the gap between low-level and high-level features thus significantly improve the segmentation quality by 4.0 in total. Furthermore, we evaluate our approach on the challenging PASCAL VOC 2012 segmentation benchmark and achieve 87.9 mean IoU, which outperforms the previous state-of-the-art results." ] }
1907.07011
2956444090
Introducing explicit constraints on the structural predictions has been an effective way to improve the performance of semantic segmentation models. Existing methods are mainly based on insufficient hand-crafted rules that only partially capture the image structure, and some methods can also suffer from efficiency issues. As a result, most state-of-the-art fully convolutional networks do not adopt these techniques. In this work, we propose a simple, fast yet effective method that exploits structural information through direct supervision at minor additional expense. To be specific, our method explicitly requires the network to predict semantic segmentation as well as dilated affinity, which is a sparse version of pair-wise pixel affinity. The capability of telling the relationships between pixels is directly built into the model and enhances the quality of segmentation in two stages. 1) Joint training with dilated affinity can provide robust feature representations and thus lead to finer segmentation results. 2) The extra output of affinity information can be further utilized to refine the original segmentation with a fast propagation process. Consistent improvements are observed on various benchmark datasets when applying our framework to existing state-of-the-art models. Code will be released soon.
Works focusing on image structural information have also been developed. Ronneberger et al. @cite_28 assign higher weights to samples on object boundaries. Ke et al. @cite_14 customize the loss function to pull similar pixels together and push different ones apart. Several post-processing methods refine the predictions by aggregating the outputs at the image level. The conditional random field (CRF) @cite_22 is one of the earliest attempts in this direction, and many subsequent methods extend its capability, e.g., CRF as a recurrent neural network @cite_19 , Markov random fields @cite_10 and spatial propagation @cite_5 . However, these approaches usually incur considerable additional cost, in both time and memory, and fail to yield better performance when the backbone methods are sufficiently strong @cite_0 @cite_20 .
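For intuition only, the pull/push idea over neighboring pixel embeddings can be sketched as a toy PyTorch loss; the tensor shapes and margin are assumptions, and this is not the exact objective of any cited paper:

import torch
import torch.nn.functional as F

def pairwise_pull_push_loss(emb, labels, margin=1.0):
    # emb: (B, C, H, W) per-pixel embeddings; labels: (B, H, W) class ids.
    # Squared distance between horizontally adjacent pixel embeddings.
    d2 = (emb[..., :, 1:] - emb[..., :, :-1]).pow(2).sum(dim=1)  # (B, H, W-1)
    same = (labels[..., 1:] == labels[..., :-1]).float()
    pull = same * d2                                  # pull same-class pairs together
    push = (1.0 - same) * F.relu(margin - (d2 + 1e-8).sqrt()).pow(2)  # push others apart
    return (pull + push).mean()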
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_28", "@cite_0", "@cite_19", "@cite_5", "@cite_10", "@cite_20" ], "mid": [ "2888340395", "2161236525", "1901129140", "2787091153", "2124592697", "2964242696", "", "2963815618" ], "abstract": [ "Semantic segmentation has made much progress with increasingly powerful pixel-wise classifiers and incorporating structural priors via Conditional Random Fields (CRF) or Generative Adversarial Networks (GAN). We propose a simpler alternative that learns to verify the spatial structure of segmentation during training only. Unlike existing approaches that enforce semantic labels on individual pixels and match labels between neighbouring pixels, we propose the concept of Adaptive Affinity Fields (AAF) to capture and match the semantic relations between neighbouring pixels in the label space. We use adversarial learning to select the optimal affinity field size for each semantic category. It is formulated as a minimax problem, optimizing our segmentation neural network in a best worst-case learning scenario. AAF is versatile for representing structures as a collection of pixel-centric relations, easier to train than GAN and more efficient than CRF without run-time inference. Our extensive evaluations on PASCAL VOC 2012, Cityscapes, and GTA5 datasets demonstrate its above-par segmentation performance and robust generalization across domains.", "Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy.", "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net .", "Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. 
The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0 and 82.1 without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at this https URL .", "Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.", "In this paper, we propose a spatial propagation networks for learning affinity matrix. We show that by constructing a row column linear propagation model, the spatially variant transformation matrix constitutes an affinity matrix that models dense, global pairwise similarities of an image. Specifically, we develop a three-way connection for the linear propagation model, which (a) formulates a sparse transformation matrix where all elements can be the output from a deep CNN, but (b) results in a dense affinity matrix that is effective to model any task-specific pairwise similarity. Instead of designing the similarity kernels according to image features of two points, we can directly output all similarities in a pure data-driven manner. The spatial propagation network is a generic framework that can be applied to numerous tasks, which traditionally benefit from designed affinity, e.g., image matting, colorization, and guided filtering, to name a few. Furthermore, the model can also learn semantic-aware affinity for high-level vision tasks due to the learning capability of the deep model. We validate the proposed framework by refinement of object segmentation. 
Experiments on the HELEN face parsing and PASCAL VOC-2012 semantic segmentation tasks show that the spatial propagation network provides general, effective and efficient solutions for generating high-quality segmentation results.", "", "Modern semantic segmentation frameworks usually combine low-level and high-level features from pre-trained backbone convolutional models to boost performance. In this paper, we first point out that a simple fusion of low-level and high-level features could be less effective because of the gap in semantic levels and spatial resolution. We find that introducing semantic information into low-level features and high-resolution details into high-level features is more effective for the later fusion. Based on this observation, we propose a new framework, named ExFuse, to bridge the gap between low-level and high-level features thus significantly improve the segmentation quality by 4.0 in total. Furthermore, we evaluate our approach on the challenging PASCAL VOC 2012 segmentation benchmark and achieve 87.9 mean IoU, which outperforms the previous state-of-the-art results." ] }
1907.07011
2956444090
Introducing explicit constraints on the structural predictions has been an effective way to improve the performance of semantic segmentation models. Existing methods are mainly based on insufficient hand-crafted rules that only partially capture the image structure, and some methods can also suffer from efficiency issues. As a result, most state-of-the-art fully convolutional networks do not adopt these techniques. In this work, we propose a simple, fast yet effective method that exploits structural information through direct supervision at minor additional expense. To be specific, our method explicitly requires the network to predict semantic segmentation as well as dilated affinity, which is a sparse version of pair-wise pixel affinity. The capability of telling the relationships between pixels is directly built into the model and enhances the quality of segmentation in two stages. 1) Joint training with dilated affinity can provide robust feature representations and thus lead to finer segmentation results. 2) The extra output of affinity information can be further utilized to refine the original segmentation with a fast propagation process. Consistent improvements are observed on various benchmark datasets when applying our framework to existing state-of-the-art models. Code will be released soon.
Pair-wise pixel affinity is a fundamental concept in computer vision and has been widely used in deep learning scenarios. Maire et al. @cite_26 utilize affinity relations for spectral embedding, while Liu et al. @cite_5 construct a linear propagation module to learn a pair-wise similarity matrix. Recently, pixel affinity @cite_4 and pixel link @cite_7 model the problem as deciding whether two pixels belong to the same instance and have shown effectiveness in various practical scenes. We draw on their experience and modify current state-of-the-art methods so that the model tells whether two adjacent pixels belong to the same class rather than the same instance.
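To make the notion concrete, sparse affinity targets for a few dilation offsets can be derived directly from a label map; the offsets below are illustrative assumptions rather than the paper's exact scheme:

import torch

def dilated_affinity_targets(labels, dilations=(1, 2, 4)):
    # labels: (B, H, W) integer class map. For each dilation d, the target
    # is 1 where a pixel and its neighbor d pixels to the right (or below)
    # share the same class, and 0 otherwise.
    targets = []
    for d in dilations:
        right = (labels[:, :, :-d] == labels[:, :, d:]).float()
        down = (labels[:, :-d, :] == labels[:, d:, :]).float()
        targets.append((right, down))
    return targets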
{ "cite_N": [ "@cite_5", "@cite_26", "@cite_4", "@cite_7" ], "mid": [ "2964242696", "2963630186", "2895065325", "2962810613" ], "abstract": [ "In this paper, we propose a spatial propagation networks for learning affinity matrix. We show that by constructing a row column linear propagation model, the spatially variant transformation matrix constitutes an affinity matrix that models dense, global pairwise similarities of an image. Specifically, we develop a three-way connection for the linear propagation model, which (a) formulates a sparse transformation matrix where all elements can be the output from a deep CNN, but (b) results in a dense affinity matrix that is effective to model any task-specific pairwise similarity. Instead of designing the similarity kernels according to image features of two points, we can directly output all similarities in a pure data-driven manner. The spatial propagation network is a generic framework that can be applied to numerous tasks, which traditionally benefit from designed affinity, e.g., image matting, colorization, and guided filtering, to name a few. Furthermore, the model can also learn semantic-aware affinity for high-level vision tasks due to the learning capability of the deep model. We validate the proposed framework by refinement of object segmentation. Experiments on the HELEN face parsing and PASCAL VOC-2012 semantic segmentation tasks show that the spatial propagation network provides general, effective and efficient solutions for generating high-quality segmentation results.", "Spectral embedding provides a framework for solving perceptual organization problems, including image segmentation and figure ground organization. From an affinity matrix describing pairwise relationships between pixels, it clusters pixels into regions, and, using a complex-valued extension, orders pixels according to layer. We train a convolutional neural network (CNN) to directly predict the pair-wise relationships that define this affinity matrix. Spectral embedding then resolves these predictions into a globally-consistent segmentation and figure ground organization of the scene. Experiments demonstrate significant benefit to this direct coupling compared to prior works which use explicit intermediate stages, such as edge detection, on the pathway from image to affinities. Our results suggest spectral embedding as a powerful alternative to the conditional random field (CRF)-based globalization schemes typically coupled to deep neural networks.", "We present an instance segmentation scheme based on pixel affinity information, which is the relationship of two pixels belonging to the same instance. In our scheme, we use two neural networks with similar structures. One predicts the pixel level semantic score and the other is designed to derive pixel affinities. Regarding pixels as the vertexes and affinities as edges, we then propose a simple yet effective graph merge algorithm to cluster pixels into instances. Experiments show that our scheme generates fine grained instance masks. With Cityscape training data, the proposed scheme achieves 27.3 AP on test set.", "" ] }
1907.06968
2959103553
We present a deep learning-based multitask framework for joint 3D human pose estimation and action recognition from RGB video sequences. Our approach proceeds along two stages. In the first, we run a real-time 2D pose detector to determine the precise pixel location of important keypoints of the body. A two-stream neural network is then designed and trained to map detected 2D keypoints into 3D poses. In the second, we deploy the Efficient Neural Architecture Search (ENAS) algorithm to find an optimal network architecture that is used for modeling the spatio-temporal evolution of the estimated 3D poses via an image-based intermediate representation and performing action recognition. Experiments on Human3.6M, MSR Action3D and SBU Kinect Interaction datasets verify the effectiveness of the proposed method on the targeted tasks. Moreover, we show that our method requires a low computational budget for training and inference.
To the best of our knowledge, several studies @cite_26 @cite_63 @cite_66 stated that regressing the 3D pose from 2D joint locations is difficult and often inaccurate. However, motivated by Martinez et al. @cite_1 , we believe that a simple neural network can effectively learn the mapping from 2D joint locations to 3D poses. Therefore, this paper aims at proposing a simple, effective and real-time approach for 3D human pose estimation that benefits action recognition. To this end, we design and optimize a two-stream deep neural network that predicts 3D poses from 2D human poses. These 2D poses are generated by a state-of-the-art 2D detector that is able to run in real time for multiple people. We empirically show that although the proposed approach is computationally inexpensive, it is still able to improve on the state of the art.
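A minimal sketch of such a lifting network in PyTorch, in the spirit of simple residual-MLP baselines; the joint count, hidden width and regularization choices are illustrative assumptions, not the exact architecture of any cited work:

import torch
import torch.nn as nn

class Lifter2Dto3D(nn.Module):
    def __init__(self, n_joints=16, hidden=1024):
        super().__init__()
        self.inp = nn.Linear(n_joints * 2, hidden)
        self.block = nn.Sequential(
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden),
            nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden),
            nn.ReLU(), nn.Dropout(0.5),
        )
        self.out = nn.Linear(hidden, n_joints * 3)

    def forward(self, kp2d):               # kp2d: (B, n_joints*2)
        h = torch.relu(self.inp(kp2d))
        h = h + self.block(h)              # residual connection
        return self.out(h)                 # (B, n_joints*3) 3D pose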
{ "cite_N": [ "@cite_26", "@cite_1", "@cite_66", "@cite_63" ], "mid": [ "2554247908", "2612706635", "2785641712", "2611932403" ], "abstract": [ "This paper addresses the challenge of 3D human pose estimation from a single color image. Despite the general success of the end-to-end learning paradigm, top performing approaches employ a two-step solution consisting of a Convolutional Network (ConvNet) for 2D joint localization and a subsequent optimization step to recover 3D pose. In this paper, we identify the representation of 3D pose as a critical issue with current ConvNet approaches and make two important contributions towards validating the value of end-to-end learning for this task. First, we propose a fine discretization of the 3D space around the subject and train a ConvNet to predict per voxel likelihoods for each joint. This creates a natural representation for 3D pose and greatly improves performance over the direct regression of joint coordinates. Second, to further improve upon initial estimates, we employ a coarse-to-fine prediction scheme. This step addresses the large dimensionality increase and enables iterative refinement and repeated processing of the image features. The proposed approach outperforms all state-of-the-art methods on standard benchmarks achieving a relative error reduction greater than 30 on average. Additionally, we investigate using our volumetric representation in a related architecture which is suboptimal compared to our end-to-end approach, but is of practical interest, since it enables training when no image with corresponding 3D groundtruth is available, and allows us to present compelling results for in-the-wild images.", "Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3- dimensional positions.,,With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, \"lifting\" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30 on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results – this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation.", "Most recent approaches to monocular 3D pose estimation rely on Deep Learning. They either train a Convolutional Neural Network to directly regress from an image to a 3D pose, which ignores the dependencies between human joints, or model these dependencies via a max-margin structured learning framework, which involves a high computational cost at inference time. 
In this paper, we introduce a Deep Learning regression architecture for structured prediction of 3D human pose from monocular images or 2D joint location heatmaps that relies on an overcomplete autoencoder to learn a high-dimensional latent pose representation and accounts for joint dependencies. We further propose an efficient Long Short-Term Memory network to enforce temporal consistency on 3D pose predictions. We demonstrate that our approach achieves state-of-the-art performance both in terms of structure preservation and prediction accuracy on standard 3D human pose estimation benchmarks.", "We present the first real-time method to capture the full global 3D skelet al pose of a human in a stable, temporally consistent manner using a single RGB camera. Our method combines a new convolutional neural network (CNN) based pose regressor with kinematic skeleton fitting. Our novel fully-convolutional pose formulation regresses 2D and 3D joint positions jointly in real time and does not require tightly cropped input frames. A real-time kinematic skeleton fitting method uses the CNN output to yield temporally stable 3D global pose reconstructions on the basis of a coherent kinematic skeleton. This makes our approach the first monocular RGB method usable in real-time applications such as 3D character control---thus far, the only monocular methods for such applications employed specialized RGB-D cameras. Our method's accuracy is quantitatively on par with the best offline 3D monocular RGB pose estimation methods. Our results are qualitatively comparable to, and sometimes better than, results from monocular RGB-D approaches, such as the Kinect. However, we show that our approach is more broadly applicable than RGB-D solutions, i.e., it works for outdoor scenes, community videos, and low quality commodity RGB cameras." ] }
1907.06968
2959103553
We present a deep learning-based multitask framework for joint 3D human pose estimation and action recognition from RGB video sequences. Our approach proceeds along two stages. In the first, we run a real-time 2D pose detector to determine the precise pixel location of important keypoints of the body. A two-stream neural network is then designed and trained to map detected 2D keypoints into 3D poses. In the second, we deploy the Efficient Neural Architecture Search (ENAS) algorithm to find an optimal network architecture that is used for modeling the spatio-temporal evolution of the estimated 3D poses via an image-based intermediate representation and performing action recognition. Experiments on Human3.6M, MSR Action3D and SBU Kinect Interaction datasets verify the effectiveness of the proposed method on the targeted tasks. Moreover, we show that our method requires a low computational budget for training and inference.
Human action recognition from skeletal data or 3D poses is a challenging task. Previous works on this topic can be divided into two main groups of methods. The first group @cite_33 @cite_51 @cite_29 extracts hand-crafted features and uses probabilistic graphical models such as the Hidden Markov Model (HMM) @cite_33 or the Conditional Random Field (CRF) @cite_72 to recognize actions. However, almost all of these approaches require extensive feature engineering. The second group @cite_38 @cite_30 @cite_42 treats 3D pose-based action recognition as a time-series problem and proposes to use Recurrent Neural Networks with Long Short-Term Memory units (RNN-LSTMs) @cite_27 to model the dynamics of the skeletons. Although RNN-LSTMs are able to model the long-term temporal characteristics of motion and have advanced the state of the art, this approach feeds raw 3D poses directly into the network and treats them merely as low-level features. The large number of input features makes RNNs very complex and can easily lead to overfitting. Moreover, many RNN-LSTMs act merely as classifiers and cannot extract high-level features for recognition tasks @cite_2 .
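For reference, the second group's basic recipe can be sketched as a toy LSTM classifier over pose sequences; joint count, hidden size and class count are illustrative assumptions:

import torch
import torch.nn as nn

class SkeletonLSTM(nn.Module):
    def __init__(self, n_joints=20, hidden=128, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(n_joints * 3, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, poses):              # poses: (B, T, n_joints*3)
        _, (h_n, _) = self.lstm(poses)     # last hidden state summarizes the sequence
        return self.head(h_n[-1])          # action logits: (B, n_classes)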
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_33", "@cite_29", "@cite_42", "@cite_27", "@cite_72", "@cite_2", "@cite_51" ], "mid": [ "1950788856", "2510185399", "", "2048821851", "2964134613", "2947169932", "2068915126", "", "2143267104" ], "abstract": [ "Human actions can be represented by the trajectories of skeleton joints. Traditional methods generally model the spatial structure and temporal dynamics of human skeleton with hand-crafted features and recognize human actions by well-designed classifiers. In this paper, considering that recurrent neural network (RNN) can model the long-term contextual information of temporal sequences well, we propose an end-to-end hierarchical RNN for skeleton based action recognition. Instead of taking the whole skeleton as the input, we divide the human skeleton into five parts according to human physical structure, and then separately feed them to five subnets. As the number of layers increases, the representations extracted by the subnets are hierarchically fused to be the inputs of higher layers. The final representations of the skeleton sequences are fed into a single-layer perceptron, and the temporally accumulated output of the perceptron is the final decision. We compare with five other deep RNN architectures derived from our model to verify the effectiveness of the proposed network, and also compare with several other methods on three publicly available datasets. Experimental results demonstrate that our model achieves the state-of-the-art performance with high computational efficiency.", "3D action recognition – analysis of human actions based on 3D skeleton data – becomes popular recently due to its succinctness, robustness, and view-invariant representation. Recent attempts on this problem suggested to develop RNN-based learning methods to model the contextual dependency in the temporal domain. In this paper, we extend this idea to spatio-temporal domains to analyze the hidden sources of action-related information within the input data over both domains concurrently. Inspired by the graphical structure of the human skeleton, we further propose a more powerful tree-structure based traversal method. To handle the noise and occlusion in 3D skeleton data, we introduce new gating mechanism within LSTM to learn the reliability of the sequential input data and accordingly adjust its effect on updating the long-term context information stored in the memory cell. Our method achieves state-of-the-art performance on 4 challenging benchmark datasets for 3D human action analysis.", "", "Recently introduced cost-effective depth sensors coupled with the real-time skeleton estimation algorithm of [16] have generated a renewed interest in skeleton-based human action recognition. Most of the existing skeleton-based approaches use either the joint locations or the joint angles to represent a human skeleton. In this paper, we propose a new skelet al representation that explicitly models the 3D geometric relationships between various body parts using rotations and translations in 3D space. Since 3D rigid body motions are members of the special Euclidean group SE(3), the proposed skelet al representation lies in the Lie group SE(3)×…×SE(3), which is a curved manifold. Using the proposed representation, human actions can be modeled as curves in this Lie group. Since classification of curves in this Lie group is not an easy task, we map the action curves from the Lie group to its Lie algebra, which is a vector space. 
We then perform classification using a combination of dynamic time warping, Fourier temporal pyramid representation and linear SVM. Experimental results on three action datasets show that the proposed representation performs better than many existing skeletal representations. The proposed approach also outperforms various state-of-the-art skeleton-based human action recognition approaches.", "Recent approaches in depth-based human activity analysis achieved outstanding performance and proved the effectiveness of 3D representation for classification of action classes. Currently available depth-based and RGB+D-based action recognition benchmarks have a number of limitations, including the lack of training samples, distinct class labels, camera views and variety of subjects. In this paper we introduce a large-scale dataset for RGB+D human action recognition with more than 56 thousand video samples and 4 million frames, collected from 40 distinct subjects. Our dataset contains 60 different action classes including daily, mutual, and health-related actions. In addition, we propose a new recurrent neural network structure to model the long-term temporal correlation of the features for each body part, and utilize them for better action classification. Experimental results show the advantages of applying deep learning methods over state-of-the-art handcrafted features on the suggested cross-subject and cross-view evaluation criteria for our dataset. The introduction of this large scale dataset will enable the community to apply, develop and adapt various data-hungry learning techniques for the task of depth-based and RGB+D-based human activity analysis.", "", "In this paper, we propose a hierarchical discriminative approach for human action recognition. It consists of feature extraction with mutual motion pattern analysis and discriminative action modeling in the hierarchical manifold space. Hierarchical Gaussian Process Latent Variable Model (HGPLVM) is employed to learn the hierarchical manifold space in which motion patterns are extracted. A cascade CRF is also presented to estimate the motion patterns in the corresponding manifold subspace, and the trained SVM classifier predicts the action label for the current observation. Using motion capture data, we test our method and evaluate how body parts make effect on human action recognition. The results on our test set of synthetic images are also presented to demonstrate the robustness.", "", "Human action recognition is an important yet challenging task. The recently developed commodity depth sensors open up new possibilities of dealing with this problem but also present some unique challenges. The depth maps captured by the depth cameras are very noisy and the 3D positions of the tracked joints may be completely wrong if serious occlusions occur, which increases the intra-class variations in the actions. In this paper, an actionlet ensemble model is learnt to represent each action and to capture the intra-class variance. In addition, novel features that are suitable for depth data are proposed. They are robust to noise, invariant to translational and temporal misalignments, and capable of characterizing both the human motion and the human-object interactions. The proposed approach is evaluated on two challenging action recognition datasets captured by commodity depth cameras, and another dataset captured by a MoCap system.
The experimental evaluations show that the proposed approach achieves superior performance to the state of the art algorithms." ] }
1907.06968
2959103553
We present a deep learning-based multitask framework for joint 3D human pose estimation and action recognition from RGB video sequences. Our approach proceeds along two stages. In the first, we run a real-time 2D pose detector to determine the precise pixel location of important keypoints of the body. A two-stream neural network is then designed and trained to map detected 2D keypoints into 3D poses. In the second, we deploy the Efficient Neural Architecture Search (ENAS) algorithm to find an optimal network architecture that is used for modeling the spatio-temporal evolution of the estimated 3D poses via an image-based intermediate representation and performing action recognition. Experiments on Human3.6M, MSR Action3D and SBU Kinect Interaction datasets verify the effectiveness of the proposed method on the targeted tasks. Moreover, we show that our method requires a low computational budget for training and inference.
In the literature, 3D human pose estimation and action recognition are closely related. However, both problems are generally treated as two distinct tasks @cite_44 . Although some approaches have been proposed for jointly predicting 3D poses and recognizing actions in RGB images or video sequences @cite_14 @cite_35 @cite_67 , they are data-dependent and require extensive feature engineering, with the exception of the work of Luvizon et al. @cite_67 . Unlike previous studies, we propose a multitask learning framework for 3D pose-based action recognition that reconstructs 3D skeletons from RGB images and exploits them for action recognition in a joint way. Experimental results on public and challenging datasets show that our framework is able to solve the two tasks effectively.
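A joint objective of this kind can be reduced to a short sketch; the loss terms and linear weighting below are illustrative assumptions, not the exact formulation of any cited work:

import torch.nn.functional as F

def multitask_loss(pred_pose, gt_pose, action_logits, action_label,
                   pose_weight=1.0, action_weight=1.0):
    # Pose regression plus action classification, combined linearly.
    pose_loss = F.mse_loss(pred_pose, gt_pose)
    action_loss = F.cross_entropy(action_logits, action_label)
    return pose_weight * pose_loss + action_weight * action_loss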
{ "cite_N": [ "@cite_44", "@cite_35", "@cite_67", "@cite_14" ], "mid": [ "1744759976", "1912967058", "2963304956", "2046589395" ], "abstract": [ "This work targets human action recognition in video. While recent methods typically represent actions by statistics of local video features, here we argue for the importance of a representation derived from human pose. To this end we propose a new Pose-based Convolutional Neural Network descriptor (P-CNN) for action recognition. The descriptor aggregates motion and appearance information along tracks of human body parts. We investigate different schemes of temporal aggregation and experiment with P-CNN features obtained both for automatically estimated and manually annotated human poses. We evaluate our method on the recent and challenging JHMDB and MPII Cooking datasets. For both datasets our method shows consistent improvement over the state of the art.", "Action recognition and pose estimation from video are closely related tasks for understanding human motion, most methods, however, learn separate models and combine them sequentially. In this paper, we propose a framework to integrate training and testing of the two tasks. A spatial-temporal And-Or graph model is introduced to represent action at three scales. Specifically the action is decomposed into poses which are further divided to mid-level ST-parts and then parts. The hierarchical structure of our model captures the geometric and appearance variations of pose at each frame and lateral connections between ST-parts at adjacent frames capture the action-specific motion information. The model parameters for three scales are learned discriminatively, and action labels and poses are efficiently inferred by dynamic programming. Experiments demonstrate that our approach achieves state-of-art accuracy in action recognition while also improving pose estimation.", "Action recognition and human pose estimation are closely related but both problems are generally handled as distinct tasks in the literature. In this work, we propose a multitask framework for jointly 2D and 3D pose estimation from still images and human action recognition from video sequences. We show that a single architecture can be used to solve the two problems in an efficient way and still achieves state-of-the-art results. Additionally, we demonstrate that optimization from end-to-end leads to significantly higher accuracy than separated learning. The proposed architecture can be trained with data from different categories simultaneously in a seamlessly way. The reported results on four datasets (MPII, Human3.6M, Penn Action and NTU) demonstrate the effectiveness of our method on the targeted tasks.", "Detecting objects in cluttered scenes and estimating articulated human body parts are two challenging problems in computer vision. The difficulty is particularly pronounced in activities involving human-object interactions (e.g. playing tennis), where the relevant object tends to be small or only partially visible, and the human body parts are often self-occluded. We observe, however, that objects and human poses can serve as mutual context to each other – recognizing one facilitates the recognition of the other. In this paper we propose a new random field model to encode the mutual context of objects and human poses in human-object interaction activities. 
We then cast the model learning task as a structure learning problem, of which the structural connectivity between the object, the overall human pose, and different body parts are estimated through a structure search approach, and the parameters of the model are estimated by a new max-margin algorithm. On a sports data set of six classes of human-object interactions [12], we show that our mutual context model significantly outperforms state-of-the-art in detecting very difficult objects and human poses." ] }
1907.06870
2960799269
Model compression has become necessary when applying neural networks (NN) into many real application tasks that can accept slightly-reduced model accuracy with strict tolerance to model complexity. Recently, Knowledge Distillation, which distills the knowledge from well-trained and highly complex teacher model into a compact student model, has been widely used for model compression. However, under the strict requirement on the resource cost, it is quite challenging to achieve comparable performance with the teacher model, essentially due to the drastically-reduced expressiveness ability of the compact student model. Inspired by the nature of the expressiveness ability in Neural Networks, we propose to use multi-segment activation, which can significantly improve the expressiveness ability with very little cost, in the compact student model. Specifically, we propose a highly efficient multi-segment activation, called Light Multi-segment Activation (LMA), which can rapidly produce multiple linear regions with very few parameters by leveraging the statistical information. With using LMA, the compact student model is capable of achieving much better performance effectively and efficiently, than the ReLU-equipped one with same model scale. Furthermore, the proposed method is compatible with other model compression techniques, such as quantization, which means they can be used jointly for better compression performance. Experiments on state-of-the-art NN architectures over the real-world tasks demonstrate the effectiveness and extensibility of the LMA.
Besides, using distillation for size reduction has been suggested in prior work, which gives a new direction for training compact student models. Training against a weighted average of the teacher's soft output distribution and the ground truth has proven useful, and several practices have been developed for training compressed compact models. Moreover, recent works proposed to combine quantization with distillation, producing better compression results. Among these, @cite_7 used knowledge distillation for low-precision models, showing that distillation can also help train quantized models. @cite_20 proposed a more in-depth combination of these two methods, named Quantized Distillation. In addition, some works further reduce the model size by combining multiple compression techniques such as quantization, weight sharing and weight coding. The combination of our method with such techniques is also shown in this paper.
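The weighted soft/hard objective mentioned above is the classic distillation loss; a minimal sketch, with the temperature and mixing weight as assumed hyperparameters:

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft term: KL between temperature-softened student and teacher
    # distributions (scaled by T^2); hard term: usual cross-entropy.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard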
{ "cite_N": [ "@cite_20", "@cite_7" ], "mid": [ "2964203871", "2963723401" ], "abstract": [ "Deep neural networks (DNNs) continue to make significant advances, solving tasks from image classification to translation or reinforcement learning. One aspect of the field receiving considerable attention is efficiently executing deep models in resource-constrained environments, such as mobile or embedded devices. This paper focuses on this problem, and proposes two new compression methods, which jointly leverage weight quantization and distillation of larger teacher networks into smaller student networks. The first method we propose is called quantized distillation and leverages distillation during the training process, by incorporating distillation loss, expressed with respect to the teacher, into the training of a student network whose weights are quantized to a limited set of levels. The second method, differentiable quantization, optimizes the location of quantization points through stochastic gradient descent, to better fit the behavior of the teacher model. We validate both methods through experiments on convolutional and recurrent architectures. We show that quantized shallow students can reach similar accuracy levels to full-precision teacher models, while providing order of magnitude compression, and inference speedup that is linear in the depth reduction. In sum, our results enable DNNs for resource-constrained environments to leverage architecture and accuracy advances developed on more powerful devices.", "Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems -- the models (often deep networks or wide networks or both) are compute and memory intensive. Low precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low precision networks can be significantly improved by using knowledge distillation techniques. We call our approach Apprentice and show state-of-the-art accuracies using ternary precision and 4-bit precision for many variants of ResNet architecture on ImageNet dataset. We study three schemes in which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline." ] }
1907.06870
2960799269
Model compression has become necessary when applying neural networks (NN) into many real application tasks that can accept slightly-reduced model accuracy with strict tolerance to model complexity. Recently, Knowledge Distillation, which distills the knowledge from well-trained and highly complex teacher model into a compact student model, has been widely used for model compression. However, under the strict requirement on the resource cost, it is quite challenging to achieve comparable performance with the teacher model, essentially due to the drastically-reduced expressiveness ability of the compact student model. Inspired by the nature of the expressiveness ability in Neural Networks, we propose to use multi-segment activation, which can significantly improve the expressiveness ability with very little cost, in the compact student model. Specifically, we propose a highly efficient multi-segment activation, called Light Multi-segment Activation (LMA), which can rapidly produce multiple linear regions with very few parameters by leveraging the statistical information. With using LMA, the compact student model is capable of achieving much better performance effectively and efficiently, than the ReLU-equipped one with same model scale. Furthermore, the proposed method is compatible with other model compression techniques, such as quantization, which means they can be used jointly for better compression performance. Experiments on state-of-the-art NN architectures over the real-world tasks demonstrate the effectiveness and extensibility of the LMA.
A piecewise linear function is composed of multiple linear segments. It is continuous when the functions on adjacent intervals agree at their shared boundary, and discontinuous otherwise. Benefiting from its simplicity and its ability to fit any function given enough segments, it is widely used in machine learning models, especially as activations in neural networks. Theoretically, @cite_21 @cite_19 studied the number of linear regions produced in neural networks by piecewise linear activation functions (PLAs), which can be used to measure the expressiveness of the networks.
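As an illustration of a multi-segment PLA (not the LMA proposed in this paper), a continuous piecewise linear activation can be built as a sum of shifted ReLUs with learnable slope changes; the breakpoints and initialization are assumptions:

import torch
import torch.nn as nn

class MultiSegmentActivation(nn.Module):
    def __init__(self, breakpoints=(-1.0, 0.0, 1.0)):
        super().__init__()
        self.register_buffer("b", torch.tensor(breakpoints))
        # one base slope plus one slope change per breakpoint
        self.slopes = nn.Parameter(0.1 * torch.randn(len(breakpoints) + 1))

    def forward(self, x):
        y = self.slopes[0] * x
        for i, bp in enumerate(self.b):
            # each shifted ReLU adds a kink at bp, keeping the function
            # continuous while contributing one more linear segment
            y = y + self.slopes[i + 1] * torch.relu(x - bp)
        return y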
{ "cite_N": [ "@cite_19", "@cite_21" ], "mid": [ "1981530182", "2157509251" ], "abstract": [ "This paper explores the complexity of deep feedforward networks with linear pre-synaptic couplings and rectified linear activations. This is a contribution to the growing body of work contrasting the representational power of deep and shallow network architectures. In particular, we offer a framework for comparing deep and shallow models that belong to the family of piecewise linear functions based on computational geometry. We look at a deep rectifier multi-layer perceptron (MLP) with linear outputs units and compare it with a single layer version of the model. In the asymptotic regime, when the number of inputs stays constant, if the shallow model has @math hidden units and @math inputs, then the number of linear regions is @math . For a @math layer model with @math hidden units on each layer it is @math . The number @math grows faster than @math when @math tends to infinity or when @math tends to infinity and @math . Additionally, even when @math is small, if we restrict @math to be @math , we can show that a deep model has considerably more linear regions that a shallow one. We consider this as a first step towards understanding the complexity of these models and specifically towards providing suitable mathematical tools for future analysis.", "The film-diffusion and the intraparticle-diffusion models are widely used to analyze the mechanism of adsorption. The plots of these models often have a multi-linear nature, and in general, the graphical method is employed to analyze the data in which the linear segments are determined visually. This method suffers from subjectivity and therefore its estimated diffusion parameters are not very reliable. An alternative statistical method, piecewise linear regression (PLR) is presented and applied to experimental data. The results demonstrate that the use of PLR is practical and leads to diffusion estimates that may be quite different from the graphical method. PLR also determined the exact time periods for each diffusion regime, which opens new possibilities for analyzing and understanding the mechanism of diffusion. In order to encourage the testing and application of PLR, an easy to use Microsoft® Excel™ spreadsheet is made available." ] }
1907.07023
2962145851
Training convolutional networks for semantic segmentation with strong (per-pixel) and weak (per-bounding-box) supervision requires a large amount of weakly labeled data. We propose two methods for selecting the most relevant data with weak supervision. The first method is designed for finding visually similar images without the need of labels and is based on modeling image representations with a Gaussian Mixture Model (GMM). As a byproduct of GMM modeling, we present useful insights on characterizing the data generating distribution. The second method aims at finding images with high object diversity and requires only the bounding box labels. Both methods are developed in the context of automated driving and experimentation is conducted on Cityscapes and Open Images datasets. We demonstrate performance gains by reducing the amount of employed weakly labeled images up to 100 times for Open Images and up to 20 times for Cityscapes.
Our trained convolutional networks, the GMM models, and the two selection algorithms will be made available to the research community @cite_28 .
{ "cite_N": [ "@cite_28" ], "mid": [ "2902142571" ], "abstract": [ "Modern computer vision algorithms often rely on very large training datasets. However, it is conceivable that a carefully selected subsample of the dataset is sufficient for training. In this paper, we propose a gradient-based importance measure that we use to empirically analyze relative importance of training images in four datasets of varying complexity. We find that in some cases, a small subsample is indeed sufficient for training. For other datasets, however, the relative differences in importance are negligible. These results have important implications for active learning on deep networks. Additionally, our analysis method can be used as a general tool to better understand diversity of training examples in datasets." ] }
1907.06778
2960282122
Location-based queries enable fundamental services for mobile road network travelers. While the benefits of location-based services (LBS) are numerous, exposure of mobile travelers' location information to untrusted LBS providers may lead to privacy breaches. In this paper, we propose StarCloak, a utility-aware and attack-resilient approach to building a privacy-preserving query system for mobile users traveling on road networks. StarCloak has several desirable properties. First, StarCloak supports user-defined k-user anonymity and l-segment indistinguishability, along with user-specified spatial and temporal utility constraints, for utility-aware and personalized location privacy. Second, unlike conventional solutions which are indifferent to underlying road network structure, StarCloak uses the concept of stars and proposes cloaking graphs for effective location cloaking on road networks. Third, StarCloak achieves strong attack-resilience against replay and query injection-based attacks through randomized star selection and pruning. Finally, to enable scalable query processing with high throughput, StarCloak makes cost-aware star selection decisions by considering query evaluation and network communication costs. We evaluate StarCloak on two real-world road network datasets under various privacy and utility constraints. Results show that StarCloak achieves improved query success rate and throughput, reduced anonymization time and network usage, and higher attack-resilience in comparison to XStar, its most relevant competitor.
StarCloak falls under the location obfuscation category. Under this category, Mouratidis and Yiu @cite_10 provide @math -anonymity for road network travelers under the reciprocity requirement. @cite_6 support personalized privacy specifications such that a cloaked region satisfies @math -anonymity and includes a total minimum segment length of @math . Li and Palanisamy @cite_1 propose cloaking schemes such that anonymity levels can be reduced to accommodate multi-level privacy and selective de-anonymization. @cite_28 study the orthogonal problem of path privacy for continuous queries, and define the M-cut requirement to achieve path privacy. A similar path privacy problem is studied in @cite_24 . Another orthogonal problem is semantic-aware and privacy-preserving sharing of sensitive locations under road network constraints @cite_3 @cite_19 . In contrast, StarCloak does not require semantic annotation. Most closely related to our work under this category is XStar @cite_25 . We empirically compare StarCloak against XStar and show that StarCloak is superior in several aspects.
{ "cite_N": [ "@cite_28", "@cite_1", "@cite_6", "@cite_3", "@cite_24", "@cite_19", "@cite_10", "@cite_25" ], "mid": [ "2143591878", "1992466934", "1999070506", "2011196257", "2510075483", "2409467540", "2146905023", "2112469755" ], "abstract": [ "The spatial query has been one of the highly demanded services in mobile computing system recently. To protect users' location privacy, existing architecture provides a trustworthy anonymizer to blur users' location from the service provider. However, with mobile capability, users' location extends from one spot to a continuous traveling route. For such continuous spatial query, it raises much more challenges for an anonymizer to protect users' continuous privacy. This paper conducts research on ensuring users' location privacy under the network-constrained road network environments. We first argue that the concept of continuous location privacy should be transferred to users' path privacy, which are consecutive road segments that needs to be protected. A novel M-cut requirement is proposed to achieve the goal of user path privacy. Mobile users can customize their privacy level through M-cut requirement. Last, two methods of constructing the cloaked spatial region are provided in our research, namely Random Selection and Junction Sharing. These algorithms support path privacy and also take system computation and communication overhead into consideration.", "With advances in sensing and positioning technology, fueled by the ubiquitous deployment of wireless networks, location-aware computing has become a fundamental model for offering a wide range of life enhancing services. However, the ability to locate users and mobile objects opens doors for new threats - the intrusion of location privacy. Location anonymization refers to the process of perturbing the exact location of users as a cloaking region such that a user's location becomes indistinguishable from the location of a set of other users. A fundamental limitation of existing location anonymization techniques is that location information once perturbed to provide a certain anonymity level cannot be reversed to reduce anonymity or the degree of perturbation. This is especially a serious limiting factor in multi-level privacy-controlled scenarios where different users of the location information have different levels of access. This paper presents ReverseCloak, a new class of reversible location cloaking mechanisms that effectively support multi-level location privacy, allowing selective de-anonymization of the cloaking region to reduce the granularity of the perturbed location when suitable access credentials are provided. We evaluate the ReverseCloak techniques through extensive experiments on realistic road network traces generated by GTMobiSim. Our experiments show that the proposed techniques are efficient, scalable and provide the required level of privacy.", "Recently, several techniques have been proposed to protect the user location privacy for location-based services in the Euclidean space. Applying these techniques directly to the road network environment would lead to privacy leakage and inefficient query processing. In this paper, we propose a new location anonymization algorithm that is designed specifically for the road network environment. Our algorithm relies on the commonly used concept of spatial cloaking, where a user location is cloaked into a set of connected road segments of a minimum total length @math including at least @math users. 
Our algorithm is \"query-aware\" as it takes into account the query execution cost at a database server and the query quality, i.e., the number of objects returned to users by the database server, during the location anonymization process. In particular, we develop a new cost function that balances between the query execution cost and the query quality. Then, we introduce two versions of our algorithm, namely, pure greedy and randomized greedy, that aim to minimize the developed cost function and satisfy the user specified privacy requirements. To accommodate intervals with a high workload, we introduce a shared execution paradigm that boosts the scalability of our location anonymization algorithm and the database server to support large numbers of queries received in a short time period. Extensive experimental results show that our algorithms are more efficient and scalable than the state-of-the-art technique, in terms of both query execution cost and query quality. The results also show that our algorithms have very strong resilience to two privacy attacks, namely, the replay attack and the center-of-cloaked-area attack.", "This paper presents a privacy-preserving framework for the protection of sensitive positions in real time trajectories. We assume a scenario in which the sensitivity of user's positions is space-varying, and so depends on the spatial context, while the user's movement is confined to road networks and places. Typical users are the non-anonymous members of a geo-social network who agree to share their exact position whenever such position does not fall within a sensitive place, e.g. a hospital. Suspending location sharing while the user is inside a sensitive place is not an appropriate solution because the user's stopovers can be easily inferred from the user's trace. In this paper we present an extension of the semantic location cloaking model [1] originally developed for the cloaking of non-correlated positions in an unconstrained space. We investigate different algorithms for the generation of cloaked regions over the graph representing the urban setting. We also integrate methods to prevent velocity based linkage attacks. Finally we evaluate experimentally the algorithms using a real data set.", "Many applications of location based services (LBSs), it is useful or even necessary to ensure that LBSs services determine their location. For continuous queries where users report their locations periodically, attackers can infer more about users' privacy by analyzing the correlations of their query samples. The causes of path privacy problems, which emerge because the communication by different users in road network using location based services so, attacker can track continuous query information. LBSs, albeit useful and convenient, pose a serious threat to users' path privacy as they are enticed to reveal their locations to LBS providers via their queries for location-based information. Traditional path privacy solutions designed in Euclidean space can be hardly applied to road network environment because of their ignorance of network topological properties. In this paper, we proposed a novel dynamic path privacy protection scheme for continuous query service in road networks. Our scheme also conceals DPP (Dynamic Path Privacy) users' identities from adversaries; this is provided in initiator untraceability property of the scheme. 
We choose the different attack as our defending target because it is a particularly challenging attack that can be successfully launched without compromising any user or having access to any cryptographic keys. The security analysis shows that the model can effectively protect the user identity anonymous, location information and service content in LBSs. All simulation results confirm that our Dynamic Path Privacy scheme is not only more accurate than the related schemes, but also provide better locatable ratio where the highest it can be around 95 of unknown nodes those can estimate their position. Furthermore, the scheme has good computation cost as well as communication and storage costs.Simulation results show that Dynamic Path Privacy has better performances compared to some related region based algorithms such as IAPIT scheme, half symmetric lens based localization algorithm (HSL) and sequential approximate maximum a posteriori (AMAP) estimator scheme.", "In this paper, we address the topic of location privacy preservation of mobile users on road networks. Most existing techniques of privacy preservation rely on structure-based spatial cloaking, but pay little attention to location semantic information. Yet, location semantic information may disclose sensitive information about mobile users. Thus, we propose CloSed, a semantic-awareness privacy preservation model to protect users' privacy from violation. We design cloaked sets that should cover different semantic regions of road networks as well as satisfy quality of service QoS. As the problem of calculating the optimal cloaked set is NP-hard, we design a greedy algorithm that balances QoS and privacy requirements. Extensive experiments evaluations demonstrate the efficiency and effectiveness of our proposed algorithm in providing privacy guarantees on large real-world datasets.", "The increasing availability of location-aware mobile devices has given rise to a flurry of location-based services (LBSs). Due to the nature of spatial queries, an LBS needs the user position in order to process her requests. On the other hand, revealing exact user locations to a (potentially untrusted) LBS may pinpoint their identities and breach their privacy. To address this issue, spatial anonymity techniques obfuscate user locations, forwarding to the LBS a sufficiently large region instead. Existing methods explicitly target processing in the euclidean space and do not apply when proximity to the users is defined according to network distance (e.g., driving time through the roads of a city). In this paper, we propose a framework for anonymous query processing in road networks. We design location obfuscation techniques that: (1) provide anonymous LBS access to the users and (2) allow efficient query processing at the LBS side. Our techniques exploit existing network database infrastructure, requiring no specialized storage schemes or functionalities. We experimentally compare alternative designs in real road networks and demonstrate the effectiveness of our techniques.", "Consider a mobile client who travels over roads and wishes to receive location-based services (LBS) from untrusted service providers. How might the user obtain such services without exposing her private position information? Meanwhile, how could the privacy protection mechanism incur no disincentive, e.g., excessive computation or communication cost, for any service provider or mobile user to participate in such a scheme? 
We detail this problem and present a general model for privacy-aware mobile services. A series of key features distinguish our solution from existing ones: a) it adopts the network-constrained mobility model (instead of the conventional random-waypoint model) to capture the privacy vulnerability of mobile users; b) it regards the attack resilience (for mobile users) and the query-processing cost (for service providers) as two critical measures for designing location privatization solutions, and provides corresponding analytical models; c) it proposes a robust and scalable location anonymization model, XStar, which best leverages the two measures; d) it introduces multi-folded optimizations in implementing XStar, which lead to further performance improvement. A comprehensive experimental evaluation is conducted to validate the analytical models and the efficacy of XStar." ] }
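The segment-based cloaking requirement attributed to @cite_6 above (at least k users plus a minimum total segment length) can be illustrated with a small greedy sketch; the graph representation, breadth-first growth order, and all names here are hypothetical, not the cited algorithm:

```python
def cloak(start_seg, adj, users_on, length_of, k, min_len):
    """adj: seg -> neighboring segs; users_on: seg -> user count;
    length_of: seg -> segment length (all assumed precomputed)."""
    cloaked = {start_seg}
    frontier = list(adj[start_seg])
    users = users_on[start_seg]
    total_len = length_of[start_seg]
    # Grow the connected segment set until both privacy constraints
    # (k-anonymity, minimum total length) are met or no segments remain.
    while (users < k or total_len < min_len) and frontier:
        seg = frontier.pop(0)  # breadth-first growth (a design choice)
        if seg in cloaked:
            continue
        cloaked.add(seg)
        users += users_on[seg]
        total_len += length_of[seg]
        frontier.extend(adj[seg])
    return cloaked
```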
1907.06890
2956805085
End-to-end learning has recently emerged as a promising technique to tackle the problem of autonomous driving. Existing works show that learning a navigation policy from raw sensor data may reduce the system's reliance on external sensing systems (e.g. GPS) and/or outperform traditional methods based on state estimation and planning. However, existing end-to-end methods generally trade off performance for safety, hindering their diffusion to real-life applications. For example, when confronted with an input which is radically different from the training data, end-to-end autonomous driving systems are likely to fail, compromising the safety of the vehicle. To detect such failure cases, this work proposes a general framework for uncertainty estimation which enables a policy trained end-to-end to predict not only action commands, but also a confidence about its own predictions. In contrast to previous works, our framework can be applied to any existing neural network and task, without the need to change the network's architecture or loss, or to train the network. In order to do so, we generate confidence levels by forward propagation of input and model uncertainties using Bayesian inference. We test our framework on the task of steering angle regression for an autonomous car, and compare our approach to existing methods with both qualitative and quantitative results on a real dataset. Finally, we show an interesting by-product of our framework: robustness against adversarial attacks.
To account for model uncertainty in deep learning, a distribution is placed over the neural network (NN) weights @math , defining a Bayesian neural network @cite_0 @cite_8 @cite_13 . The work of @cite_17 provides a mathematically grounded framework to capture model uncertainty by leveraging dropout at test-time @cite_5 . Specifically, they propose to approximate the intractable posterior distribution over network weights @math given a specific training set @math by collecting multiple predictions for a single input, each with a different realization of the weights due to dropout. This method is often referred to as Monte Carlo (MC) dropout.
{ "cite_N": [ "@cite_8", "@cite_0", "@cite_5", "@cite_13", "@cite_17" ], "mid": [ "2111051539", "2127538960", "2095705004", "1567512734", "" ], "abstract": [ "A quantitative and practical Bayesian framework is described for learning of mappings in feedforward networks. The framework makes possible (1) objective comparisons between solutions using alternative network architectures, (2) objective stopping rules for network pruning or growing procedures, (3) objective choice of magnitude and type of weight decay terms or additive regularizers (for penalizing large weights, etc.), (4) a measure of the effective number of well-determined parameters in a model, (5) quantified estimates of the error bars on network parameters and on network output, and (6) objective comparisons with alternative learning and interpolation models such as splines and radial basis functions. The Bayesian \"evidence\" automatically embodies \"Occam's razor,\" penalizing overflexible and overcomplex models. The Bayesian approach helps detect poor underlying assumptions in learning models. For learning models well matched to a problem, a good correlation between generalization ability and the Bayesian evidence is obtained.", "(1) The outputs of a typical multi-output classification network do not satisfy the axioms of probability; probabilities should be positive and sum to one. This problem can be solved by treating the trained network as a preprocessor that produces a feature vector that can be further processed, for instance by classical statistical estimation techniques. (2) We present a method for computing the first two moments of the probability distribution indicating the range of outputs that are consistent with the input and the training data. It is particularly useful to combine these two ideas: we implement the ideas of section 1 using Parzen windows, where the shape and relative size of each window is computed using the ideas of section 2. This allows us to make contact between important theoretical ideas (e.g. the ensemble formalism) and practical techniques (e.g. back-prop). Our results also shed new light on and generalize the well-known \"softmax\" scheme.", "Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. 
We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.", "From the Publisher: Artificial \"neural networks\" are now widely used as flexible models for regression classification applications, but questions remain regarding what these models mean, and how they can safely be used when training data is limited. Bayesian Learning for Neural Networks shows that Bayesian methods allow complex neural network models to be used without fear of the \"overfitting\" that can occur with traditional neural network learning methods. Insight into the nature of these complex Bayesian models is provided by a theoretical investigation of the priors over functions that underlie them. Use of these models in practice is made possible using Markov chain Monte Carlo techniques. Both the theoretical and computational aspects of this work are of wider statistical interest, as they contribute to a better understanding of how Bayesian methods can be applied to complex problems. Presupposing only the basic knowledge of probability and statistics, this book should be of interest to many researchers in statistics, engineering, and artificial intelligence. Software for Unix systems that implements the methods described is freely available over the Internet.", "" ] }
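A minimal sketch of the MC dropout procedure summarized in the related-work paragraph above, assuming a PyTorch model containing nn.Dropout layers:

```python
import torch

def mc_dropout_predict(model, x, n_samples=20):
    # train() keeps dropout sampling random masks at test time; note it
    # also switches BatchNorm to train mode, so a careful implementation
    # would enable only the dropout modules.
    model.train()
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    mean = samples.mean(dim=0)  # predictive mean
    var = samples.var(dim=0)    # spread across weight samples: model uncertainty
    return mean, var
```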
1907.06890
2956805085
End-to-end learning has recently emerged as a promising technique to tackle the problem of autonomous driving. Existing works show that learning a navigation policy from raw sensor data may reduce the system's reliance on external sensing systems (e.g. GPS) and/or outperform traditional methods based on state estimation and planning. However, existing end-to-end methods generally trade off performance for safety, hindering their diffusion to real-life applications. For example, when confronted with an input which is radically different from the training data, end-to-end autonomous driving systems are likely to fail, compromising the safety of the vehicle. To detect such failure cases, this work proposes a general framework for uncertainty estimation which enables a policy trained end-to-end to predict not only action commands, but also a confidence about its own predictions. In contrast to previous works, our framework can be applied to any existing neural network and task, without the need to change the network's architecture or loss, or to train the network. In order to do so, we generate confidence levels by forward propagation of input and model uncertainties using Bayesian inference. We test our framework on the task of steering angle regression for an autonomous car, and compare our approach to existing methods with both qualitative and quantitative results on a real dataset. Finally, we show an interesting by-product of our framework: robustness against adversarial attacks.
A further step towards total uncertainty estimation was made by @cite_9 , who proposed a framework to jointly estimate both data and model uncertainty under the assumption that some input points carry different noise levels than others. The data uncertainty is learned by training the NN under the heteroscedastic loss: where the input noise @math has been made input-dependent, and @math is the output of the NN with parameters @math for input @math . By training a neural network with the heteroscedastic loss (Eq. ) and taking multiple forward samples with dropout applied at test-time as in Sec. , the total variance is recovered as: with @math a set of @math sampled outputs for randomly masked weights @math . However, this approach requires modifying the network structure by splitting its head into two parts to learn the data uncertainty. It also forces re-training under the heteroscedastic loss (Eq. ) to retrieve an uncertainty estimate, which often results in a performance drop.
{ "cite_N": [ "@cite_9" ], "mid": [ "2600383743" ], "abstract": [ "There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model - uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks." ] }
1907.06890
2956805085
End-to-end learning has recently emerged as a promising technique to tackle the problem of autonomous driving. Existing works show that learning a navigation policy from raw sensor data may reduce the system's reliance on external sensing systems (e.g. GPS) and/or outperform traditional methods based on state estimation and planning. However, existing end-to-end methods generally trade off performance for safety, hindering their diffusion to real-life applications. For example, when confronted with an input which is radically different from the training data, end-to-end autonomous driving systems are likely to fail, compromising the safety of the vehicle. To detect such failure cases, this work proposes a general framework for uncertainty estimation which enables a policy trained end-to-end to predict not only action commands, but also a confidence about its own predictions. In contrast to previous works, our framework can be applied to any existing neural network and task, without the need to change the network's architecture or loss, or to train the network. In order to do so, we generate confidence levels by forward propagation of input and model uncertainties using Bayesian inference. We test our framework on the task of steering angle regression for an autonomous car, and compare our approach to existing methods with both qualitative and quantitative results on a real dataset. Finally, we show an interesting by-product of our framework: robustness against adversarial attacks.
Sampling approaches are often too slow for practical scenarios. @cite_6 introduced a lightweight approach to recover uncertainty while maintaining the same network architecture, requiring only minor changes to propagate both the mean and the variance of the input distribution. They propose to replace every intermediate network activation by a distribution, following the work of @cite_14 on non-linear Gaussian belief networks. Moreover, the distribution is propagated through the network in a single pass using Assumed Density Filtering (ADF) @cite_1 @cite_10 (Fig. ).
{ "cite_N": [ "@cite_10", "@cite_14", "@cite_1", "@cite_6" ], "mid": [ "2437421599", "2010629420", "1575388622", "2964339591" ], "abstract": [ "Buoyed by the success of deep multilayer neural networks, there is renewed interest in scalable learning of Bayesian neural networks. Here, we study algorithms that utilize recent advances in Bayesian inference to efficiently learn distributions over network weights. In particular, we focus on recently proposed assumed density filtering based methods for learning Bayesian neural networks – Expectation and Probabilistic backpropagation. Apart from scaling to large datasets, these techniques seamlessly deal with non-differentiable activation functions and provide parameter (learning rate, momentum) free learning. In this paper, we first rigorously compare the two algorithms and in the process develop several extensions, including a version of EBP for continuous regression problems and a PBP variant for binary classification. Next, we extend both algorithms to deal with multiclass classification and count regression problems. On a variety of diverse real world benchmarks, we find our extensions to be effective, achieving results competitive with the state-of-the-art.", "We view perceptual tasks such as vision and speech recognition as inference problems where the goal is to estimate the posterior distribution over latent variables (e.g., depth in stereo vision) given the sensory input. The recent flurry of research in independent component analysis exemplifies the importance of inferring the continuousvalued latent variables of input data. The latent variables found by this method are linearly related to the input, but perception requires nonlinear inferences such as classification and depth estimation. In this paper, we present a unifying framework for stochastic neural networks with nonlinear latent variables. Nonlinear units are obtained by passing the outputs of linear Gaussian units through various nonlinearities. We present a general variational method that maximizes a lower bound on the likelihood of a training set and give results on two visual feature extraction problems. We also show how the variational method can be used for pattern classification and compare the performance of these nonlinear networks with other methods on the problem of handwritten digit recognition.", "The monitoring and control of any dynamic system depends crucially on the ability to reason about its current status and its future trajectory. In the case of a stochastic system, these tasks typically involve the use of a belief state--a probability distribution over the state of the process at a given point in time. Unfortunately, the state spaces of complex processes are very large, making an explicit representation of a belief state intractable. Even in dynamic Bayesian networks (DBNs), where the process itself can be represented compactly, the representation of the belief state is intractable. We investigate the idea of maintaining a compact approximation to the true belief state, and analyze the conditions under which the errors due to the approximations taken over the lifetime of the process do not accumulate to make our answers completely irrelevant. We show that the error in a belief state contracts exponentially as the process evolves. Thus, even with multiple approximations, the error in our process remains bounded indefinitely. We show how the additional structure of a DBN can be used to design our approximation scheme, improving its performance significantly. 
We demonstrate the applicability of our ideas in the context of a monitoring task, showing that orders of magnitude faster inference can be achieved with only a small degradation in accuracy.", "Even though probabilistic treatments of neural networks have a long history, they have not found widespread use in practice. Sampling approaches are often too slow already for simple networks. The size of the inputs and the depth of typical CNN architectures in computer vision only compound this problem. Uncertainty in neural networks has thus been largely ignored in practice, despite the fact that it may provide important information about the reliability of predictions and the inner workings of the network. In this paper, we introduce two lightweight approaches to making supervised learning with probabilistic deep networks practical: First, we suggest probabilistic output layers for classification and regression that require only minimal changes to existing networks. Second, we employ assumed density filtering and show that activation uncertainties can be propagated in a practical fashion through the entire network, again with minor changes. Both probabilistic networks retain the predictive power of the deterministic counterpart, but yield uncertainties that correlate well with the empirical error induced by their predictions. Moreover, the robustness to adversarial examples is significantly increased." ] }
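A hedged sketch of one ADF step as referenced above: a diagonal Gaussian (mean, variance) is propagated through a linear layer and a ReLU by moment matching, using the standard Gaussian moments of the rectifier; treating inputs as independent is part of the approximation:

```python
import torch

def adf_linear(mu, var, weight, bias):
    out_mu = mu @ weight.t() + bias
    out_var = var @ (weight ** 2).t()  # inputs treated as independent
    return out_mu, out_var

def adf_relu(mu, var):
    std = var.clamp(min=1e-12).sqrt()
    t = mu / std
    normal = torch.distributions.Normal(0.0, 1.0)
    cdf = normal.cdf(t)
    pdf = torch.exp(normal.log_prob(t))
    new_mu = mu * cdf + std * pdf                            # E[ReLU(x)]
    new_var = (mu ** 2 + var) * cdf + mu * std * pdf - new_mu ** 2
    return new_mu, new_var.clamp(min=0.0)
```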
1907.06823
2960410483
In this paper, a stereo-based traversability analysis approach for all terrains in off-road mobile robotics, e.g. Unmanned Ground Vehicles (UGVs), is proposed. This approach reformulates the problem of terrain traversability analysis into two main problems: (1) 3D terrain reconstruction and (2) detection and analysis of all terrain surfaces. The proposed approach uses a stereo camera for perception and 3D reconstruction of the terrain. In order to detect all the existing surfaces in the 3D reconstructed terrain as superpixel surfaces (i.e. segments), an image segmentation technique is applied using geometry-based features (pixel-based surface normals). Having detected all the surfaces, the Superpixel Surface Traversability Analysis (SSTA) approach is applied to all of the detected surfaces (superpixel segments) in order to classify them based on their traversability index. The proposed SSTA approach is based on: (1) superpixel surface normal and plane estimation, and (2) traversability analysis using superpixel surface planes. Having analyzed all the superpixel surfaces based on their traversability, these surfaces are finally classified into five main categories: traversable, semi-traversable, non-traversable, unknown and undecided.
Our recent work @cite_1 on terrain traversability estimation mainly proposed geometry-based features, such as pixel-based surface normals computed from a stereo camera, for environment perception. It explained how these pixel-based surface normals can drive segmentation of the generated terrain point cloud and terrain classification based on traversability criteria, although the classification results were not accurate and robust enough in some cases, such as: low quality of the generated point cloud; lack of a dominant ground plane; false classification of segments with respect to traversability criteria such as the maximum traversable step and slope; and step analysis that was not clear enough or was ignored.
{ "cite_N": [ "@cite_1" ], "mid": [ "2239990147" ], "abstract": [ "A stereo-based terrain classification for traversability estimation of all terrains in offroad mobile robots is presented. The proposed method defines the roughness of the surrounding terrain for every single pixel in the image or point in a point cloud using surface normals and also explains how this is applied to all terrain by knowing the kinematics capability, mechanical constraints and size of the UGV. Roughness estimation using surface normals helps us model the whole terrain with a virtual function which increases the accuracy of the resulting terrain travesability map considerably." ] }
1907.06823
2960410483
In this paper, an stereo-based traversability analysis approach for all terrains in off-road mobile robotics, e.g. Unmanned Ground Vehicles (UGVs) is proposed. This approach reformulates the problem of terrain traversability analysis into two main problems: (1) 3D terrain reconstruction and (2) terrain all surfaces detection and analysis. The proposed approach is using stereo camera for perception and 3D reconstruction of the terrain. In order to detect all the existing surfaces in the 3D reconstructed terrain as superpixel surfaces (i.e. segments), an image segmentation technique is applied using geometry-based features (pixel-based surface normals). Having detected all the surfaces, Superpixel Surface Traversability Analysis approach (SSTA) is applied on all of the detected surfaces (superpixel segments) in order to classify them based on their traversability index. The proposed SSTA approach is based on: (1) Superpixel surface normal and plane estimation, (2) Traversability analysis using superpixel surface planes. Having analyzed all the superpixel surfaces based on their traversability, these surfaces are finally classified into five main categories as following: traversable, semi-traversable, non-traversable, unknown and undecided.
In @cite_11 , a similar approach has been proposed for terrain traversability analysis using a Kinect on mobile robots. The approach is mainly based on geometry-based, pixel-based surface normals and considers the kinematic capability of the vehicle, such as the maximum height, maximum slope and maximum step. It was applied to very rough, disaster-like terrain and produced reasonable results using only a Kinect, which makes it hard to know how it would perform with a stereo camera in outdoor terrain, since the point cloud generated by a Kinect is much cleaner and denser than that generated by a stereo camera.
{ "cite_N": [ "@cite_11" ], "mid": [ "2071548090" ], "abstract": [ "For autonomous robots, the ability to classify their local surroundings into traversable and non-traversable areas is crucial for navigation. In this paper, we address the problem of online traversability analysis for robots that are only equipped with a Kinect-style sensor. Our approach processes the depth data at 10 fps-25 fps on a standard notebook computer without using the GPU and allows for robustly identifying the areas in front of the sensor that are safe for navigation. The component presented here is one of the building blocks of the EU project ROVINA that aims at the exploration and digital preservation of hazardous archeological sites with mobile robots. Real world evaluations have been conducted in controlled lab environments, in an outdoor scene, as well as in a real, partially unexplored, and roughly 1700 year old Roman catacomb." ] }
1907.06823
2960410483
In this paper, a stereo-based traversability analysis approach for all terrains in off-road mobile robotics, e.g. Unmanned Ground Vehicles (UGVs), is proposed. This approach reformulates the problem of terrain traversability analysis into two main problems: (1) 3D terrain reconstruction and (2) detection and analysis of all terrain surfaces. The proposed approach uses a stereo camera for perception and 3D reconstruction of the terrain. In order to detect all the existing surfaces in the 3D reconstructed terrain as superpixel surfaces (i.e. segments), an image segmentation technique is applied using geometry-based features (pixel-based surface normals). Having detected all the surfaces, the Superpixel Surface Traversability Analysis (SSTA) approach is applied to all of the detected surfaces (superpixel segments) in order to classify them based on their traversability index. The proposed SSTA approach is based on: (1) superpixel surface normal and plane estimation, and (2) traversability analysis using superpixel surface planes. Having analyzed all the superpixel surfaces based on their traversability, these surfaces are finally classified into five main categories: traversable, semi-traversable, non-traversable, unknown and undecided.
In @cite_6 and @cite_4 , a geometry-based feature called the Unevenness Point Descriptor (UPD) is proposed for roughness estimation. This feature describes the unevenness and roughness at a point by measuring pixel-based normals and averaging them over a k-neighborhood; it thus characterizes surface unevenness from the average surface normal of the points in a neighborhood. The approach is not robust enough in very rough environments, since it does not include the kinematic capabilities of different UGVs in the traversability analysis.
{ "cite_N": [ "@cite_4", "@cite_6" ], "mid": [ "2044903630", "2098373244" ], "abstract": [ "Purpose – This research aims to address the issue of safe navigation for autonomous vehicles in highly challenging outdoor environments. Indeed, robust navigation of autonomous mobile robots over long distances requires advanced perception means for terrain traversability assessment. Design methodology approach – The use of visual systems may represent an efficient solution. This paper discusses recent findings in terrain traversability analysis from RGB-D images. In this context, the concept of point as described only by its Cartesian coordinates is reinterpreted in terms of local description. As a result, a novel descriptor for inferring the traversability of a terrain through its 3D representation, referred to as the unevenness point descriptor (UPD), is conceived. This descriptor features robustness and simplicity. Findings – The UPD-based algorithm shows robust terrain perception capabilities in both indoor and outdoor environment. The algorithm is able to detect obstacles and terrain irregularities....", "In recent years, the use of imaging sensors that produce a three-dimensional representation of the environment has become an efficient solution to increase the degree of perception of autonomous mobile robots. Accurate and dense 3D point clouds can be generated from traditional stereo systems and laser scanners or from the new generation of RGB-D cameras, representing a versatile, reliable and cost-effective solution that is rapidly gaining interest within the robotics community. For autonomous mobile robots, it is critical to assess the traversability of the surrounding environment, especially when driving across natural terrain. In this paper, a novel approach to detect traversable and non-traversable regions of the environment from a depth image is presented that could enhance mobility and safety through integration with localization, control and planning methods. The proposed algorithm is based on the analysis of the normal vector of a surface obtained through Principal Component Analysis and it leads to the definition of a novel, so defined, Unevenness Point Descriptor. Experimental results, obtained with vehicles operating in indoor and outdoor environments, are presented to validate this approach." ] }
1907.06777
2961980567
Accurately estimating the orientation of pedestrians is an important and challenging task for autonomous driving because this information is essential for tracking and predicting pedestrian behavior. This paper presents a flexible Virtual Multi-View Synthesis module that can be adopted into 3D object detection methods to improve orientation estimation. The module uses a multi-step process to acquire the fine-grained semantic information required for accurate orientation estimation. First, the scene's point cloud is densified using a structure preserving depth completion algorithm and each point is colorized using its corresponding RGB pixel. Next, virtual cameras are placed around each object in the densified point cloud to generate novel viewpoints, which preserve the object's appearance. We show that this module greatly improves the orientation estimation on the challenging pedestrian class on the KITTI benchmark. When used with the open-source 3D detector AVOD-FPN, we outperform all other published methods on the pedestrian Orientation, 3D, and Bird's Eye View benchmarks.
Previous works have recognized that accurate orientation estimation requires a feature extraction process that captures fine-grained semantic information of objects. @cite_4 identify that standard Faster R-CNN feature maps are too low resolution for pedestrians and instead use atrous convolutions @cite_10 and pool features from shallower layers. Pyramid structures, including image pyramids @cite_19 and feature pyramids @cite_26 , have also been leveraged to obtain information from multiple scales. SubCNN @cite_0 use image pyramids to handle scale changes of objects, and @cite_12 highlight the importance of a pyramid structure for small classes such as pedestrians. Moreover, @cite_39 @cite_43 @cite_8 have shown the importance of how methods crop ROI features. @cite_42 note that the standard ROI crop can warp the appearance of shape and pose, and propose the use of a virtual ROI camera to address this. We instead obtain fine-grained details by using multiple virtual ROI cameras placed canonically around each object's detected centroid. Compared to @cite_42 , we not only use 2D RGB data, but also use a 3D point cloud to produce realistic novel viewpoints that maintain consistent object appearance.
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_8", "@cite_42", "@cite_39", "@cite_0", "@cite_19", "@cite_43", "@cite_10", "@cite_12" ], "mid": [ "2565639579", "2497039038", "", "2799123546", "2798505423", "2342242867", "8437397", "", "2412782625", "2963400571" ], "abstract": [ "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.", "Detecting pedestrian has been arguably addressed as a special topic beyond general object detection. Although recent deep learning object detectors such as Fast Faster R-CNN have shown excellent performance for general object detection, they have limited success for detecting pedestrian, and previous leading pedestrian detectors were in general hybrid methods combining hand-crafted and deep convolutional features. In this paper, we investigate issues involving Faster R-CNN for pedestrian detection. We discover that the Region Proposal Network (RPN) in Faster R-CNN indeed performs well as a stand-alone pedestrian detector, but surprisingly, the downstream classifier degrades the results. We argue that two reasons account for the unsatisfactory accuracy: (i) insufficient resolution of feature maps for handling small instances, and (ii) lack of any bootstrapping strategy for mining hard negative examples. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection, using an RPN followed by boosted forests on shared, high-resolution convolutional feature maps. We comprehensively evaluate this method on several benchmarks (Caltech, INRIA, ETH, and KITTI), presenting competitive accuracy and good speed. Code will be made publicly available.", "", "We present a fast inverse-graphics framework for instance-level 3D scene understanding. We train a deep convolutional network that learns to map image regions to the full 3D shape and pose of all object instances in the image. Our method produces a compact 3D representation of the scene, which can be readily used for applications like autonomous driving. Many traditional 2D vision outputs, like instance segmentations and depth-maps, can be obtained by simply rendering our output 3D scene model. We exploit class-specific shape priors by learning a low dimensional shape-space from collections of CAD models. We present novel representations of shape and pose, that strive towards better 3D equivariance and generalization. 
In order to exploit rich supervisory signals in the form of 2D annotations like segmentation, we propose a differentiable Render-and-Compare loss that allows 3D shape and pose to be learned with 2D supervision. We evaluate our method on the challenging real-world datasets of Pascal3D+ and KITTI, where we achieve state-of-the-art results.", "We present a system that transforms a monocular video of a soccer game into a moving 3D reconstruction, in which the players and field can be rendered interactively with a 3D viewer or through an Augmented Reality device. At the heart of our paper is an approach to estimate the depth map of each player, using a CNN that is trained on 3D player data extracted from soccer video games. We compare with state of the art body pose and depth estimation techniques, and show results on both synthetic ground truth benchmarks, and real YouTube soccer footage.", "In Convolutional Neural Network (CNN)-based object detection methods, region proposal becomes a bottleneck when objects exhibit significant scale variation, occlusion or truncation. In addition, these methods mainly focus on 2D object detection and cannot estimate detailed properties of objects. In this paper, we propose subcategory-aware CNNs for object detection. We introduce a novel region proposal network that uses subcategory information to guide the proposal generating process, and a new detection network for joint detection and subcategory classification. By using subcategories related to object pose, we achieve state of-the-art performance on both detection and pose estimation on commonly used benchmarks.", "", "", "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.", "We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios. 
The proposed neural network architecture uses LIDAR point clouds and RGB images to generate features that are shared by two subnetworks: a region proposal network (RPN) and a second stage detector network. The proposed RPN uses a novel architecture capable of performing multimodal feature fusion on high resolution feature maps to generate reliable 3D object proposals for multiple object classes in road scenes. Using these proposals, the second stage detection network performs accurate oriented 3D bounding box regression and category classification to predict the extents, orientation, and classification of objects in 3D space. Our proposed architecture is shown to produce state of the art results on the KITTI 3D object detection benchmark [1] while running in real time with a low memory footprint, making it a suitable candidate for deployment on autonomous vehicles. Code is available at" ] }
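A hedged sketch of placing virtual cameras canonically around a detected centroid, as described in the related-work paragraph above; the look-at construction and all parameters are illustrative, and rendering the colorized point cloud from these poses is assumed to happen elsewhere:

```python
import numpy as np

def virtual_camera_poses(centroid, radius=2.0, n_views=5, height=0.0):
    # Cameras sit on a circle of the given radius around the centroid,
    # each looking at it; returns world-to-camera (R, t) pairs.
    poses = []
    for theta in np.linspace(0, 2 * np.pi, n_views, endpoint=False):
        eye = centroid + np.array([radius * np.cos(theta),
                                   radius * np.sin(theta), height])
        fwd = (centroid - eye) / np.linalg.norm(centroid - eye)
        right = np.cross(fwd, np.array([0.0, 0.0, 1.0]))  # z-up assumed
        right /= np.linalg.norm(right)
        up = np.cross(right, fwd)
        R = np.stack([right, -up, fwd])  # rows: camera x (right), y (down), z (fwd)
        t = -R @ eye                     # so p_cam = R @ p_world + t
        poses.append((R, t))
    return poses
```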
1907.06777
2961980567
Accurately estimating the orientation of pedestrians is an important and challenging task for autonomous driving because this information is essential for tracking and predicting pedestrian behavior. This paper presents a flexible Virtual Multi-View Synthesis module that can be adopted into 3D object detection methods to improve orientation estimation. The module uses a multi-step process to acquire the fine-grained semantic information required for accurate orientation estimation. First, the scene's point cloud is densified using a structure preserving depth completion algorithm and each point is colorized using its corresponding RGB pixel. Next, virtual cameras are placed around each object in the densified point cloud to generate novel viewpoints, which preserve the object's appearance. We show that this module greatly improves the orientation estimation on the challenging pedestrian class on the KITTI benchmark. When used with the open-source 3D detector AVOD-FPN, we outperform all other published methods on the pedestrian Orientation, 3D, and Bird's Eye View benchmarks.
Keypoint detections @cite_31 @cite_40 @cite_45 @cite_36 and CAD models @cite_44 @cite_29 have been shown to be effective in gaining semantic understanding of objects of interest. The use of 2D keypoint detections to estimate pose has been well studied as the Perspective-n-Point (PnP) problem, with many proposed solutions @cite_21 @cite_5 @cite_27 . More recently, @cite_44 @cite_29 use 3D CAD models and convolutional neural networks (CNNs) to detect keypoints and learn 3D pose. Within the autonomous driving context, Deep MANTA @cite_41 predicts vehicle part coordinates and uses a vehicle CAD model dataset to estimate 3D poses. CAD models have also been used to create additional ground truth labels. @cite_1 argue that the scarcity of training data with viewpoint annotations hinders viewpoint estimation performance, so they use 3D models to generate accurate ground truth data. Unlike the above methods, we do not require additional keypoint labels or external CAD datasets. We propose a general pipeline that leverages available data, and with our virtual cameras we generate novel high resolution viewpoints.
{ "cite_N": [ "@cite_36", "@cite_41", "@cite_29", "@cite_21", "@cite_1", "@cite_44", "@cite_40", "@cite_45", "@cite_27", "@cite_5", "@cite_31" ], "mid": [ "", "2605189827", "", "1991544872", "1591870335", "2600447016", "", "", "", "", "2113325037" ], "abstract": [ "", "In this paper, we present a novel approach, called Deep MANTA (Deep Many-Tasks), for many-task vehicle analysis from a given image. A robust convolutional network is introduced for simultaneous vehicle detection, part localization, visibility characterization and 3D dimension estimation. Its architecture is based on a new coarse-to-fine object proposal that boosts the vehicle detection. Moreover, the Deep MANTA network is able to localize vehicle parts even if these parts are not visible. In the inference, the networks outputs are used by a real time robust pose estimation algorithm for fine orientation estimation and 3D vehicle localization. We show in experiments that our method outperforms monocular state-of-the-art approaches on vehicle detection, orientation and 3D location tasks on the very challenging KITTI benchmark.", "", "We propose a non-iterative solution to the PnP problem--the estimation of the pose of a calibrated camera from n 3D-to-2D point correspondences--whose computational complexity grows linearly with n. This is in contrast to state-of-the-art methods that are O(n 5) or even O(n 8), without being more accurate. Our method is applicable for all n?4 and handles properly both planar and non-planar configurations. Our central idea is to express the n 3D points as a weighted sum of four virtual control points. The problem then reduces to estimating the coordinates of these control points in the camera referential, which can be done in O(n) time by expressing these coordinates as weighted sum of the eigenvectors of a 12×12 matrix and solving a small constant number of quadratic equations to pick the right weights. Furthermore, if maximal precision is required, the output of the closed-form solution can be used to initialize a Gauss-Newton scheme, which improves accuracy with negligible amount of additional time. The advantages of our method are demonstrated by thorough testing on both synthetic and real-data.", "Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.", "This paper presents a novel approach to estimating the continuous six degree of freedom (6-DoF) pose (3D translation and rotation) of an object from a single RGB image. The approach combines semantic keypoints predicted by a convolutional network (convnet) with a deformable shape model. 
Unlike prior work, we are agnostic to whether the object is textured or textureless, as the convnet learns the optimal representation from the available training image data. Furthermore, the approach can be applied to instance- and class-based pose recovery. Empirically, we show that the proposed approach can accurately recover the 6-DoF object pose for both instance- and class-based scenarios with a cluttered background. For class-based object pose estimation, state-of-the-art accuracy is shown on the large-scale PASCAL3D+ dataset.", "", "", "", "", "We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regres- sors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formula- tion which capitalizes on recent advances in Deep Learn- ing. We present a detailed empirical analysis with state-of- art or better performance on four academic benchmarks of diverse real-world images." ] }
1907.06777
2961980567
Accurately estimating the orientation of pedestrians is an important and challenging task for autonomous driving because this information is essential for tracking and predicting pedestrian behavior. This paper presents a flexible Virtual Multi-View Synthesis module that can be adopted into 3D object detection methods to improve orientation estimation. The module uses a multi-step process to acquire the fine-grained semantic information required for accurate orientation estimation. First, the scene's point cloud is densified using a structure preserving depth completion algorithm and each point is colorized using its corresponding RGB pixel. Next, virtual cameras are placed around each object in the densified point cloud to generate novel viewpoints, which preserve the object's appearance. We show that this module greatly improves the orientation estimation on the challenging pedestrian class on the KITTI benchmark. When used with the open-source 3D detector AVOD-FPN, we outperform all other published methods on the pedestrian Orientation, 3D, and Bird's Eye View benchmarks.
Most similar to our work are 3D pose estimation methods designed for autonomous driving scenarios. These methods have mainly focused on the representation of orientation and on designing new loss functions. Pose-RCNN @cite_13 uses a Biternion representation for orientation as recommended by @cite_11 . The monocular 3D object detection method Deep3DBox @cite_33 proposes a formulation that frames orientation estimation as a hybrid classification-regression problem. Here, orientation is discretized into several bins, and the network is tasked to classify the correct bin and to predict a regression offset. This formulation has been adopted by LiDAR methods including @cite_6 . @cite_12 identify an ambiguity issue where identical 3D boxes are created despite having orientation estimates differing by @math radians. They solve this by parameterizing orientation as an angle vector, while @cite_23 address the same ambiguity with a sine error loss. We show in our ablation studies (Sec. ) that parameterizing orientation as an angle vector while using the discrete-continuous angle bin formulation as an auxiliary loss is most effective.
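As an illustration of the two orientation parameterizations discussed above, the following sketch contrasts an angle-vector regression loss with a discrete-continuous bin loss. This is a simplified PyTorch rendition under our own naming, not the exact losses of the cited papers.

import math
import torch
import torch.nn.functional as F

def angle_vector_loss(pred_vec, gt_angle):
    # Regress a unit (cos, sin) vector instead of the raw angle; this removes
    # the wraparound ambiguity of angles that differ by 2*pi.
    gt_vec = torch.stack([torch.cos(gt_angle), torch.sin(gt_angle)], dim=-1)
    return F.mse_loss(F.normalize(pred_vec, dim=-1), gt_vec)

def bin_residual_loss(bin_logits, residual_pred, gt_angle, num_bins=8):
    # Discrete-continuous formulation: classify the angle bin, then regress an
    # offset from the bin center (used here as an auxiliary loss).
    bin_width = 2 * math.pi / num_bins
    wrapped = gt_angle % (2 * math.pi)
    gt_bin = (wrapped / bin_width).long().clamp(max=num_bins - 1)
    centers = (gt_bin.float() + 0.5) * bin_width
    cls = F.cross_entropy(bin_logits, gt_bin)
    offset = residual_pred.gather(1, gt_bin[:, None]).squeeze(1)
    reg = F.smooth_l1_loss(offset, wrapped - centers)
    return cls + reg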
{ "cite_N": [ "@cite_33", "@cite_6", "@cite_23", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2560544142", "2964062501", "2897529137", "2562663242", "2963400571", "2275395544" ], "abstract": [ "We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark [2] both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors [4] and sub-category detection [23][24]. Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset[26].", "In this work, we study 3D object detection from RGBD data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. However, a key challenge of this approach is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). Instead of solely relying on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects. Benefited from learning directly in raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability.", "", "This paper presents a novel approach for joint object detection and orientation estimation in a single deep convolutional neural network utilizing proposals calculated from 3D data. For orientation estimation, we extend a R-CNN like architecture by several carefully designed layers. Two new object proposal methods are introduced, to make use of stereo as well as lidar data. Our experiments on the KITTI dataset show that by combining proposals of both domains, high recall can be achieved while keeping the number of proposals low. Furthermore, our method for joint detection and orientation estimation outperforms state of the art approaches for cyclists on the easy test scenario of the KITTI test dataset.", "We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios. 
The proposed neural network architecture uses LIDAR point clouds and RGB images to generate features that are shared by two subnetworks: a region proposal network (RPN) and a second stage detector network. The proposed RPN uses a novel architecture capable of performing multimodal feature fusion on high resolution feature maps to generate reliable 3D object proposals for multiple object classes in road scenes. Using these proposals, the second stage detection network performs accurate oriented 3D bounding box regression and category classification to predict the extents, orientation, and classification of objects in 3D space. Our proposed architecture is shown to produce state of the art results on the KITTI 3D object detection benchmark [1] while running in real time with a low memory footprint, making it a suitable candidate for deployment on autonomous vehicles. Code is available at", "While head pose estimation has been studied for some time, continuous head pose estimation is still an open problem. Most approaches either cannot deal with the periodicity of angular data or require very fine-grained regression labels. We introduce biternion nets, a CNN-based approach that can be trained on very coarse regression labels and still estimate fully continuous ( 360 ^ ) head poses. We show state-of-the-art results on several publicly available datasets. Finally, we demonstrate how easy it is to record and annotate a new dataset with coarse orientation labels in order to obtain continuous head pose estimates using our biternion nets." ] }
1907.06796
2957060066
Augmented Reality (AR) brings immersive experiences to users. With recent advances in computer vision and mobile computing, AR has scaled across platforms, and has increased adoption in major products. One of the key challenges in enabling AR features is proper anchoring of the virtual content to the real world, a process referred to as tracking. In this paper, we present a system for motion tracking, which is capable of robustly tracking planar targets and performing relative-scale 6DoF tracking without calibration. Our system runs in real-time on mobile phones and has been deployed in multiple major products on hundreds of millions of devices.
Accurate initialization improves the resilience of SLAM algorithms and makes optimization converge faster. Researchers have relied on Structure-from-Motion (SfM) techniques, rotation averaging @cite_15 @cite_3 , or closed-form solutions @cite_9 to initialize the camera trajectories and the world map. However, these techniques still require parallax-inducing motion and accurate calibration, rendering them problematic for instant AR placement.
{ "cite_N": [ "@cite_9", "@cite_15", "@cite_3" ], "mid": [ "2910764107", "", "1536617987" ], "abstract": [ "The initialization is one of the less reliable pieces of Visual-Inertial SLAM (VI-SLAM) and Odometry (VI-O). The estimation of the initial state (camera poses, IMU states and landmark positions) from the first data readings lacks the accuracy and robustness of other parts of the pipeline, and most algorithms have high failure rates and or initialization delays up to tens of seconds. Such initialization is critical for AR systems, as the failures and delays of the current approaches can ruin the user experience or mandate impractical guided calibration. In this paper we address the state initialization problem using a monocular-inertial sensor setup, the most common in AR platforms. Our contributions are 1) a general linear formulation to obtain an initialization seed, and 2) a non-linear optimization scheme, including gravity, to refine the seed. Our experimental results, in a public dataset, show that our approach improves the accuracy and robustness of current VI state initialization schemes.", "", "Pose graph optimization is the non-convex optimization problem underlying pose-based Simultaneous Localization and Mapping (SLAM). If robot orientations were known, pose graph optimization would be a linear least-squares problem, whose solution can be computed efficiently and reliably. Since rotations are the actual reason why SLAM is a difficult problem, in this work we survey techniques for 3D rotation estimation. Rotation estimation has a rich history in three scientific communities: robotics, computer vision, and control theory. We review relevant contributions across these communities, assess their practical use in the SLAM domain, and benchmark their performance on representative SLAM problems (Fig. 1). We show that the use of rotation estimation to bootstrap iterative pose graph solvers entails significant boost in convergence speed and robustness." ] }
1907.06796
2957060066
Augmented Reality (AR) brings immersive experiences to users. With recent advances in computer vision and mobile computing, AR has scaled across platforms, and has increased adoption in major products. One of the key challenges in enabling AR features is proper anchoring of the virtual content to the real world, a process referred to as tracking. In this paper, we present a system for motion tracking, which is capable of robustly tracking planar targets and performing relative-scale 6DoF tracking without calibration. Our system runs in real-time on mobile phones and has been deployed in multiple major products on hundreds of millions of devices.
Planar trackers are widely used in SfM applications and panoramic image registration @cite_20 . @cite_21 studied planar tracking for augmented reality applications. Direct region tracking algorithms typically use a homography to warp an image patch from the template to the source and minimize the difference @cite_4 . @cite_11 proposed a region-based planar tracker using a second-order optimization method to minimize sum-of-squared-difference (SSD) errors. @cite_13 is another region tracker using second-order optimization, in this case to minimize the sum of conditional variances. Lucas-Kanade and compositional trackers @cite_4 require re-evaluating the Hessian of the loss function at every iteration, whereas inverse compositional trackers @cite_4 speed up tracking by avoiding this re-evaluation.
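For intuition, here is a minimal sketch of direct, region-based planar alignment with a homography motion model, using OpenCV's ECC alignment as a stand-in for the SSD-minimizing trackers cited above; the image file names are hypothetical.

import numpy as np
import cv2

template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # hypothetical files
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Initial guess: identity homography (the API requires a 3x3 float32 matrix).
H = np.eye(3, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)

# Iteratively refine H so that the frame region warped by H matches the template.
score, H = cv2.findTransformECC(template, frame, H,
                                cv2.MOTION_HOMOGRAPHY, criteria)

# Warp the frame back into template coordinates to inspect the alignment.
aligned = cv2.warpPerspective(frame, H, template.shape[::-1],
                              flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)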
{ "cite_N": [ "@cite_4", "@cite_21", "@cite_13", "@cite_20", "@cite_11" ], "mid": [ "2035379092", "2156118476", "2163806380", "2097062750", "2130502925" ], "abstract": [ "Since the Lucas-Kanade algorithm was proposed in 1981 image alignment has become one of the most widely used techniques in computer vision. Applications range from optical flow and tracking to layered motion, mosaic construction, and face coding. Numerous algorithms have been proposed and a wide variety of extensions have been made to the original formulation. We present an overview of image alignment, describing most of the algorithms and their extensions in a consistent framework. We concentrate on the inverse compositional algorithm, an efficient algorithm that we recently proposed. We examine which of the extensions to Lucas-Kanade can be used with the inverse compositional algorithm without any significant loss of efficiency, and which cannot. In this paper, Part 1 in a series of papers, we cover the quantity approximated, the warp update rule, and the gradient descent approximation. In future papers, we will cover the choice of the error function, how to allow linear appearance variation, and how to impose priors on the parameters.", "To realistically integrate 3D graphics into an unprepared environment, camera position must be estimated by tracking natural image features. We apply our technique to cases where feature positions in adjacent frames of an image sequence are related by a homography, or projective transformation. We describe this transformation's computation and demonstrate several applications. First, we use an augmented notice board to explain how a homography, between two images of a planar scene, completely determines the relative camera positions. Second, we show that the homography can also recover pure camera rotations, and we use this to develop an outdoor AR tracking system. Third, we use the system to measure head rotation and form a simple low-cost virtual reality (VR) tracking solution.", "The goal of this paper is to introduce a direct visual tracking method based on an image similarity measure called the sum of conditional variance (SCV). The SCV was originally proposed in the medical imaging domain for registering multi-modal images. In the context of visual tracking, the SCV is invariant to non-linear illumination variations, multi-modal and computationally inexpensive. Compared to information theoretic tracking methods, it requires less iterations to converge and has a significantly larger convergence radius. The novelty in this paper is a generalization of the efficient second-order minimization formulation for tracking using the SCV, allowing us to combine the efficient second-order approximation of the Hessian with a similarity metric invariant to non-linear illumination variations. The result is a visual tracking method that copes with non-linear illumination variations without requiring the estimation of photometric correction parameters at every iteration. We demonstrate the superior performance of the proposed method through comparative studies and tracking experiments under challenging illumination conditions and rapid motions.", "We present a novel method for the real-time creation and tracking of panoramic maps on mobile phones. The maps generated with this technique are visually appealing, very accurate and allow drift-free rotation tracking. 
This method runs on mobile phones at 30Hz and has applications in the creation of panoramic images for offline browsing, for visual enhancements through environment mapping and for outdoor Augmented Reality on mobile phones.", "The tracking algorithm presented in this paper is based on minimizing the sum-of-squared-difference between a given template and the current image. Theoretically, amongst all standard minimization algorithms, the Newton method has the highest local convergence rate since it is based on a second-order Taylor series of the sum-of-squared-differences. However, the Newton method is time consuming since it needs the computation of the Hessian. In addition, if the Hessian is not positive definite, convergence problems can occur. That is why several methods use an approximation of the Hessian. The price to pay is the loss of the high convergence rate. The aim of this paper is to propose a tracking algorithm based on a second-order minimization method which does not need to compute the Hessian." ] }
1907.06796
2957060066
Augmented Reality (AR) brings immersive experiences to users. With recent advances in computer vision and mobile computing, AR has scaled across platforms, and has increased adoption in major products. One of the key challenges in enabling AR features is proper anchoring of the virtual content to the real world, a process referred to as tracking. In this paper, we present a system for motion tracking, which is capable of robustly tracking planar targets and performing relative-scale 6DoF tracking without calibration. Our system runs in real-time on mobile phones and has been deployed in multiple major products on hundreds of millions of devices.
In @cite_12 , the authors propose a homography-based planar detection and tracking algorithm to estimate 6DoF camera poses. Recently, @cite_6 used detected surfaces from an image retrieval pipeline to initialize depth from the surface map. @cite_19 adopted gradient orientation for direct surface tracking. Correlation filters have also been utilized to estimate rotation as well as scale for 4DoF tracking @cite_0 . @cite_2 built a planar object tracking dataset and surveyed related work in planar tracking. @cite_8 proposed a model selection algorithm to determine which model, a homography or an essential matrix, better describes the motion.
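A hedged sketch of the homography-versus-essential-matrix selection idea follows; it decides by RANSAC inlier counts, which is a simplification of the criterion used in the cited work, and the function name select_motion_model is our own.

import numpy as np
import cv2

def select_motion_model(pts1, pts2, K):
    # pts1, pts2: Nx2 arrays of matched keypoints in two frames; K: intrinsics.
    H, h_mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)
    E, e_mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
    h_inliers = int(h_mask.sum()) if h_mask is not None else 0
    e_inliers = int(e_mask.sum()) if e_mask is not None else 0
    # Rotation-only or planar scenes favor the homography; parallax-inducing
    # general motion favors the essential matrix.
    if h_inliers >= e_inliers:
        return "homography", H
    return "essential", E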
{ "cite_N": [ "@cite_8", "@cite_6", "@cite_0", "@cite_19", "@cite_2", "@cite_12" ], "mid": [ "1987855078", "2278591674", "2963251831", "2910170440", "2604588829", "1995478368" ], "abstract": [ "We present an approach to real-time tracking and mapping that supports any type of camera motion in 3D environments, that is, general (parallax-inducing) as well as rotation-only (degenerate) motions. Our approach effectively generalizes both a panorama mapping and tracking system and a keyframe-based Simultaneous Localization and Mapping (SLAM) system, behaving like one or the other depending on the camera movement. It seamlessly switches between the two and is thus able to track and map through arbitrary sequences of general and rotation-only camera movements.", "Accurately estimating a robot's pose relative to a global scene model and precisely tracking the pose in real-time is a fundamental problem for navigation and obstacle avoidance tasks. Due to the computational complexity of localization against a large map and the memory consumed by the model, state-of-the-art approaches are either limited to small workspaces or rely on a server-side system to query the global model while tracking the pose locally. The latter approaches face the problem of smoothly integrating the server's pose estimates into the trajectory computed locally to avoid temporal discontinuities. In this paper, we demonstrate that large-scale, real-time pose estimation and tracking can be performed on mobile platforms with limited resources without the use of an external server. This is achieved by employing map and descriptor compression schemes as well as efficient search algorithms from computer vision. We derive a formulation for integrating the global pose information into a local state estimator that produces much smoother trajectories than current approaches. Through detailed experiments, we evaluate each of our design choices individually and document its impact on the overall system performance, demonstrating that our approach outperforms state-of-the-art algorithms for localization at scale.", "", "", "Planar object tracking is an actively studied problem in vision-based robotic applications. While several benchmarks have been constructed for evaluating state-of-the-art algorithms, there is a lack of video sequences captured in the wild rather than in constrained laboratory environment. In this paper, we present a carefully designed planar object tracking benchmark containing 210 videos of 30 planar objects sampled in the natural environment. In particular, for each object, we shoot seven videos involving various challenging factors, namely scale change, rotation, perspective distortion, motion blur, occlusion, out-of-view, and unconstrained. The ground truth is carefully annotated semi-manually to ensure the quality. Moreover, eleven state-of-the-art algorithms are evaluated on the benchmark using two evaluation metrics, with detailed analysis provided for the evaluation results. We expect the proposed benchmark to benefit future studies on planar object tracking.", "We present a real-time camera pose tracking and mapping system which uses the assumption of a planar scene to implement a highly efficient mapping algorithm. Our light-weight mapping approach is based on keyframes and plane-induced homographies between them. We solve the planar reconstruction problem of estimating the keyframe poses with an efficient image rectification algorithm. 
Camera pose tracking uses continuously extended and refined planar point maps and delivers robustly estimated 6DOF poses. We compare system and method with bundle adjustment and monocular SLAM on synthetic and indoor image sequences. We demonstrate large savings in computational effort compared to the monocular SLAM system while the reduction in accuracy remains acceptable." ] }
1907.06881
2956902387
Recent researches attempt to improve the detection performance by adopting the idea of cascade for single-stage detectors. In this paper, we analyze and discover that inconsistency is the major factor limiting the performance. The refined anchors are associated with the feature extracted from the previous location and the classifier is confused by misaligned classification and localization. Further, we point out two main designing rules for the cascade manner: improving consistency between classification confidence and localization performance, and maintaining feature consistency between different stages. A multistage object detector named Cas-RetinaNet, is then proposed for reducing the misalignments. It consists of sequential stages trained with increasing IoU thresholds for improving the correlation, and a novel Feature Consistency Module for mitigating the feature inconsistency. Experiments show that our proposed Cas-RetinaNet achieves stable performance gains across different models and input scales. Specifically, our method improves RetinaNet from 39.1 AP to 41.1 AP on the challenging MS COCO dataset without any bells or whistles.
Before the wide adoption of deep convolutional networks, the sliding-window paradigm dominated the field of object detection for years. Most progress was driven by handcrafted image descriptors such as HOG @cite_6 and SIFT @cite_13 . Built on these powerful features, DPMs @cite_3 helped extend dense detectors to more general object categories and achieved top results on PASCAL VOC @cite_29 .
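As a reminder of what this pre-deep-learning pipeline looks like, here is a minimal sketch of extracting a HOG descriptor for one sliding window with scikit-image; the window content and parameter values are illustrative assumptions.

import numpy as np
from skimage.feature import hog

window = np.random.rand(128, 64)  # hypothetical 128x64 grayscale detection window
descriptor = hog(window,
                 orientations=9,          # fine orientation binning
                 pixels_per_cell=(8, 8),  # relatively coarse spatial binning
                 cells_per_block=(2, 2),  # overlapping blocks for normalization
                 block_norm="L2-Hys")
# In a full detector, 'descriptor' would be scored by a linear SVM at every
# window position and scale.
print(descriptor.shape)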
{ "cite_N": [ "@cite_29", "@cite_3", "@cite_13", "@cite_6" ], "mid": [ "2031489346", "2168356304", "2124386111", "2161969291" ], "abstract": [ "The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "An object recognition system has been developed that uses a new class of local image features. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. These features share similar properties with neurons in inferior temporal cortex that are used for object recognition in primate vision. Features are efficiently detected through a staged filtering approach that identifies stable points in scale space. Image keys are created that allow for local geometric deformations by representing blurred image gradients in multiple orientation planes and at multiple scales. The keys are used as input to a nearest neighbor indexing method that identifies candidate object matches. Final verification of each match is achieved by finding a low residual least squares solution for the unknown model parameters. Experimental results show that robust object recognition can be achieved in cluttered partially occluded images with a computation time of under 2 seconds.", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. 
We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds." ] }
1907.06881
2956902387
Recent researches attempt to improve the detection performance by adopting the idea of cascade for single-stage detectors. In this paper, we analyze and discover that inconsistency is the major factor limiting the performance. The refined anchors are associated with the feature extracted from the previous location and the classifier is confused by misaligned classification and localization. Further, we point out two main designing rules for the cascade manner: improving consistency between classification confidence and localization performance, and maintaining feature consistency between different stages. A multistage object detector named Cas-RetinaNet, is then proposed for reducing the misalignments. It consists of sequential stages trained with increasing IoU thresholds for improving the correlation, and a novel Feature Consistency Module for mitigating the feature inconsistency. Experiments show that our proposed Cas-RetinaNet achieves stable performance gains across different models and input scales. Specifically, our method improves RetinaNet from 39.1 AP to 41.1 AP on the challenging MS COCO dataset without any bells or whistles.
Compared with two-stage methods, one-stage approaches aim at achieving real-time speed while maintaining great performance. OverFeat @cite_9 is one of the first modern single-stage object detectors based on deep networks. YOLO @cite_23 @cite_26 and SSD @cite_11 have renewed interest in one-stage approaches by skipping the region proposal generation step and directly predicting classification scores and bounding box regression offsets. Recently, Lin et al. point out that the extreme foreground-background class imbalance limits the performance and propose Focal Loss @cite_22 to boost accuracy. Generally speaking, most one-stage detectors follow the sliding window scheme and rely on fully convolutional networks to predict scores and offsets at each location, which helps reduce the computational complexity.
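The focal loss idea can be summarized in a few lines: the standard cross entropy is scaled down for well-classified examples so that easy negatives do not dominate dense training. Below is a common minimal PyTorch formulation, not the reference implementation of @cite_22.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # logits, targets: tensors of shape (N,) for binary classification.
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)           # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # (1 - p_t)^gamma -> near 0 for easy examples, near 1 for hard ones.
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()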
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_9", "@cite_23", "@cite_11" ], "mid": [ "", "2743473392", "1487583988", "2572745118", "2193145675" ], "abstract": [ "", "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL", "We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.", "In recent years, we have seen tremendous progress in the field of object detection. Most of the recent improvements have been achieved by targeting deeper feedforward networks. However, many hard object categories such as bottle, remote, etc. require representation of fine details and not just coarse, semantic representations. But most of these fine details are lost in the early convolutional layers. What we need is a way to incorporate finer details from lower layers into the detection architecture. Skip connections have been proposed to combine high-level and low-level features, but we argue that selecting the right features from low-level requires top-down contextual information. Inspired by the human visual pathway, in this paper we propose top-down modulations as a way to incorporate fine details into the detection framework. Our approach supplements the standard bottom-up, feedforward ConvNet with a top-down modulation (TDM) network, connected using lateral connections. These connections are responsible for the modulation of lower layer filters, and the top-down network handles the selection and integration of contextual information and low-level features. 
The proposed TDM architecture provides a significant boost on the COCO testdev benchmark, achieving 28.6 AP for VGG16, 35.2 AP for ResNet101, and 37.3 for InceptionResNetv2 network, without any bells and whistles (e.g., multi-scale, iterative box refinement, etc.).", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd." ] }
1907.06881
2956902387
Recent researches attempt to improve the detection performance by adopting the idea of cascade for single-stage detectors. In this paper, we analyze and discover that inconsistency is the major factor limiting the performance. The refined anchors are associated with the feature extracted from the previous location and the classifier is confused by misaligned classification and localization. Further, we point out two main designing rules for the cascade manner: improving consistency between classification confidence and localization performance, and maintaining feature consistency between different stages. A multistage object detector named Cas-RetinaNet, is then proposed for reducing the misalignments. It consists of sequential stages trained with increasing IoU thresholds for improving the correlation, and a novel Feature Consistency Module for mitigating the feature inconsistency. Experiments show that our proposed Cas-RetinaNet achieves stable performance gains across different models and input scales. Specifically, our method improves RetinaNet from 39.1 AP to 41.1 AP on the challenging MS COCO dataset without any bells or whistles.
Non-maximum suppression (NMS) has been an essential component for removing duplicate bounding boxes in most object detectors since @cite_6 . It works in an iterative manner: at each iteration, the bounding box with the maximum classification confidence is selected and its neighboring boxes are suppressed using a predefined IoU threshold. As noted in @cite_2 , the misalignment between classification confidence and localization accuracy may lead to accurately localized bounding boxes being suppressed by less accurate ones in the NMS procedure. IoU-Net @cite_2 therefore predicts IoU scores for the proposals to reduce this misalignment.
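For reference, greedy NMS as described above amounts to the following short routine; this is a standard textbook rendition rather than any detector's exact code.

import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    # boxes: (N, 4) as [x1, y1, x2, y2]; scores: (N,).
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # indices sorted by descending confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the current top box with the remaining candidates.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Suppress neighbors above the IoU threshold, keep the rest.
        order = order[1:][iou <= iou_thresh]
    return keep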
{ "cite_N": [ "@cite_6", "@cite_2" ], "mid": [ "2161969291", "2886904239" ], "abstract": [ "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "Modern CNN-based object detectors rely on bounding box regression and non-maximum suppression to localize objects. While the probabilities for class labels naturally reflect classification confidence, localization confidence is absent. This makes properly localized bounding boxes degenerate during iterative regression or even suppressed during NMS. In the paper we propose IoU-Net learning to predict the IoU between each detected bounding box and the matched ground-truth. The network acquires this confidence of localization, which improves the NMS procedure by preserving accurately localized bounding boxes. Furthermore, an optimization-based bounding box refinement method is proposed, where the predicted IoU is formulated as the objective. Extensive experiments on the MS-COCO dataset show the effectiveness of IoU-Net, as well as its compatibility with and adaptivity to several state-of-the-art object detectors." ] }
1901.05282
2909775062
Using generative adversarial networks (GANs), we investigate the possibility of creating large amounts of analysis-specific simulated LHC events at limited computing cost. This kind of generative model is analysis specific in the sense that it directly generates the high-level features used in the last stage of a given physics analysis, learning the N-dimensional distribution of relevant features in the context of a specific analysis selection. We apply this idea to the generation of muon four-momenta in @math events at the LHC. We highlight how use-case specific issues emerge when the distributions of the considered quantities exhibit particular features. We show how substantial performance improvements and convergence speed-up can be obtained by including regression terms in the loss function of the generator. We develop an objective criterion to assess the generator performance in a quantitative way. With further development, a generalization of this approach could substantially reduce the needed amount of centrally produced fully simulated events in large particle physics experiments.
Generative adversarial networks @cite_17 have been investigated for LHC applications to simulate the energy deposits of individual particles @cite_18 @cite_0 @cite_22 and jets @cite_27 @cite_1 , as well as to accelerate Matrix-Element methods @cite_12 . Recently, a GAN-based generator was developed to simulate data collected in test-beam studies for the future CMS Highly Granular Calorimeter @cite_14 . A similar study was carried out in the context of cosmic-ray detection @cite_2 . A discussion of how GAN models could be relevant to event simulation in future HEP experiments can be found in Ref. @cite_3 .
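To fix ideas, a minimal GAN training step in this spirit might look as follows: a generator maps noise to a low-dimensional event-feature vector and a discriminator separates generated from fully simulated events. All shapes, network sizes, and data here are hypothetical placeholders, not the architecture of any cited work.

import torch
import torch.nn as nn

n_features, n_noise = 8, 64
G = nn.Sequential(nn.Linear(n_noise, 128), nn.ReLU(),
                  nn.Linear(128, n_features))
D = nn.Sequential(nn.Linear(n_features, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(256, n_features)  # placeholder for fully simulated events

# Discriminator step: distinguish real from generated events.
fake = G(torch.randn(256, n_noise)).detach()
d_loss = bce(D(real), torch.ones(256, 1)) + bce(D(fake), torch.zeros(256, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: produce events the discriminator labels as real.
gen = G(torch.randn(256, n_noise))
g_loss = bce(D(gen), torch.ones(256, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()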
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_1", "@cite_3", "@cite_0", "@cite_27", "@cite_2", "@cite_12", "@cite_17" ], "mid": [ "2775970449", "2809880449", "", "2798474886", "2789729968", "2614083378", "2581875816", "2739988885", "2728024376", "1710476689" ], "abstract": [ "Physicists at the Large Hadron Collider (LHC) rely on detailed simulations of particle collisions to build expectations of what experimental data may look like under different theory modeling assumptions. Petabytes of simulated data are needed to develop analysis techniques, though they are expensive to generate using existing algorithms and computing resources. The modeling of detectors and the precise description of particle cascades as they interact with the material in the calorimeter are the most computationally demanding steps in the simulation pipeline. We therefore introduce a deep neural network-based generative model to enable high-fidelity, fast, electromagnetic calorimeter simulation. There are still challenges for achieving precision across the entire phase space, but our current solution can reproduce a variety of particle shower properties while achieving speed-up factors of up to 100,000 @math . This opens the door to a new era of fast simulation that could save significant computing time and disk space, while extending the reach of physics searches and precision measurements at the LHC and beyond.", "Simulations of particle showers in calorimeters are computationally time-consuming, as they have to reproduce both energy depositions and their considerable fluctuations. A new approach to ultra-fast simulations is generative models where all calorimeter energy depositions are generated simultaneously. We use GEANT4 simulations of an electron beam impinging on a multi-layer electromagnetic calorimeter for adversarial training of a generator network and a critic network guided by the Wasserstein distance. The generator is constrained during the training such that the generated showers show the expected dependency on the initial energy and the impact position. It produces realistic calorimeter energy depositions, fluctuations and correlations which we demonstrate in distributions of typical calorimeter observables. In most aspects, we observe that generated calorimeter showers reach the level of showers as simulated with the GEANT4 program.", "", "Deep generative models parametrised by neural networks have recently started to provide accurate results in modeling natural images. In particular, generative adversarial networks provide an unsupervised solution to this problem. In this work, we apply this kind of technique to the simulation of particle detector response to hadronic jets. We show that deep neural networks can achieve high fidelity in this task, while attaining a speed increase of several orders of magnitude with respect to traditional algorithms.", "A working group on detector simulation was formed as part of the high-energy physics (HEP) Software Foundation's initiative to prepare a Community White Paper that describes the main software challenges and opportunities to be faced in the HEP field over the next decade. The working group met over a period of several months in order to review the current status of the Full and Fast simulation applications of HEP experiments and the improvements that will need to be made in order to meet the goals of future HEP experimental programmes. 
The scope of the topics covered includes the main components of a HEP simulation application, such as MC truth handling, geometry modeling, particle propagation in materials and fields, physics modeling of the interactions of particles with matter, the treatment of pileup and other backgrounds, as well as signal processing and digitisation. The resulting work programme described in this document focuses on the need to improve both the software performance and the physics of detector simulation. The goals are to increase the accuracy of the physics models and expand their applicability to future physics programmes, while achieving large factors in computing performance gains consistent with projections on available computing resources.", "The precise modeling of subatomic particle interactions and propagation through matter is paramount for the advancement of nuclear and particle physics searches and precision measurements. The most computationally expensive step in the simulation pipeline of a typical experiment at the Large Hadron Collider (LHC) is the detailed modeling of the full complexity of physics processes that govern the motion and evolution of particle showers inside calorimeters. We introduce , a new fast simulation technique based on generative adversarial networks (GANs). We apply these neural networks to the modeling of electromagnetic showers in a longitudinally segmented calorimeter, and achieve speedup factors comparable to or better than existing full simulation techniques on CPU ( @math - @math ) and even faster on GPU (up to @math ). There are still challenges for achieving precision across the entire phase space, but our solution can reproduce a variety of geometric shower shape properties of photons, positrons and charged pions. This represents a significant stepping stone toward a full neural network-based detector simulation that could save significant computing time and enable many analyses now and in the future.", "We provide a bridge between generative modeling in the Machine Learning community and simulated physical processes in high energy particle physics by applying a novel Generative Adversarial Network (GAN) architecture to the production of jet images—2D representations of energy depositions from particles interacting with a calorimeter. We propose a simple architecture, the Location-Aware Generative Adversarial Network, that learns to produce realistic radiation patterns from simulated high energy particle collisions. The pixel intensities of GAN-generated images faithfully span over many orders of magnitude and exhibit the desired low-dimensional physical properties (i.e., jet mass, n-subjettiness, etc.). We shed light on limitations, and provide a novel empirical validation of image quality and validity of GAN-produced simulations of the natural world. This work provides a base for further explorations of GANs for use in faster simulation in high energy particle physics.", "Abstract We describe a method of reconstructing air showers induced by cosmic rays using deep learning techniques. We simulate an observatory consisting of ground-based particle detectors with fixed locations on a regular grid. The detector’s responses to traversing shower particles are signal amplitudes as a function of time, which provide information on transverse and longitudinal shower properties. In order to take advantage of convolutional network techniques specialized in local pattern recognition, we convert all information to the image-like grid of the detectors. 
In this way, multiple features, such as arrival times of the first particles and optimized characterizations of time traces, are processed by the network. The reconstruction quality of the cosmic ray arrival direction turns out to be competitive with an analytic reconstruction algorithm. The reconstructed shower direction, energy and shower depth show the expected improvement in resolution for higher cosmic ray energy.", "New machine learning based algorithms have been developed and tested for Monte Carlo integration based on generative Boosted Decision Trees and Deep Neural Networks. Both of these algorithms exhibit substantial improvements compared to existing algorithms for non-factorizable integrands in terms of the achievable integration precision for a given number of target function evaluations. Large scale Monte Carlo generation of complex collider physics processes with improved efficiency can be achieved by implementing these algorithms into commonly used matrix element Monte Carlo generators once their robustness is demonstrated and performance validated for the relevant classes of matrix elements.", "For many AI projects, deep learning techniques are increasingly being used as the building blocks for innovative solutions ranging from image classification to object detection, image segmentation, image similarity, and text analytics (e.g., sentiment analysis, key phrase extraction). GANs, first introduced by (2014), are emerging as a powerful new approach toward teaching computers how to do complex tasks through a generative process. As noted by Yann LeCun (at http: bit.ly LeCunGANs ), GANs are truly the “coolest idea in machine learning in the last 20 years.”" ] }
1901.05282
2909775062
Using generative adversarial networks (GANs), we investigate the possibility of creating large amounts of analysis-specific simulated LHC events at limited computing cost. This kind of generative model is analysis specific in the sense that it directly generates the high-level features used in the last stage of a given physics analysis, learning the N-dimensional distribution of relevant features in the context of a specific analysis selection. We apply this idea to the generation of muon four-momenta in @math events at the LHC. We highlight how use-case specific issues emerge when the distributions of the considered quantities exhibit particular features. We show how substantial performance improvements and convergence speed-up can be obtained by including regression terms in the loss function of the generator. We develop an objective criterion to assess the generator performance in a quantitative way. With further development, a generalization of this approach could substantially reduce the needed amount of centrally produced fully simulated events in large particle physics experiments.
The adversarial training (AT) technique is used in HEP for tasks other than event generation: reference @cite_35 discusses how to account for uncertainties associated with a given nuisance parameter using AT. Reference @cite_34 uses AT to preserve the independence of a given network score (a jet tagger) from a specific physics quantity (the jet mass). This technique was also used to train autoencoders for jet tagging @cite_33 . Reference @cite_38 discusses how to use a GAN setup to unfold detector effects.
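A hedged sketch of the adversarial decorrelation idea behind @cite_35 and @cite_34 follows: an adversary tries to recover the protected quantity (e.g., the jet mass) from the classifier output, and the classifier is penalized whenever the adversary succeeds. The network sizes, the trade-off weight lam, and the data are illustrative assumptions, not the setups of the cited papers.

import torch
import torch.nn as nn

clf = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
adv = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_c = torch.optim.Adam(clf.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adv.parameters(), lr=1e-3)
bce, mse, lam = nn.BCEWithLogitsLoss(), nn.MSELoss(), 10.0

x = torch.randn(512, 16)                   # hypothetical jet features
y = torch.randint(0, 2, (512, 1)).float()  # signal/background labels
mass = torch.randn(512, 1)                 # protected quantity (standardized)

logits = clf(x)
score = torch.sigmoid(logits)

# Adversary step: learn to predict the protected quantity from the tagger score.
a_loss = mse(adv(score.detach()), mass)
opt_a.zero_grad(); a_loss.backward(); opt_a.step()

# Classifier step: classify well while making the protected quantity
# unpredictable from the score (the minus sign penalizes adversary success).
c_loss = bce(logits, y) - lam * mse(adv(score), mass)
opt_c.zero_grad(); c_loss.backward(); opt_c.step()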
{ "cite_N": [ "@cite_35", "@cite_34", "@cite_33", "@cite_38" ], "mid": [ "2951993056", "2594983911", "", "2805508261" ], "abstract": [ "Several techniques for domain adaptation have been proposed to account for differences in the distribution of the data used for training and testing. The majority of this work focuses on a binary domain label. Similar problems occur in a scientific context where there may be a continuous family of plausible data generation processes associated to the presence of systematic uncertainties. Robust inference is possible if it is based on a pivot -- a quantity whose distribution does not depend on the unknown values of the nuisance parameters that parametrize this family of data generation processes. In this work, we introduce and derive theoretical results for a training procedure based on adversarial networks for enforcing the pivotal property (or, equivalently, fairness with respect to continuous attributes) on a predictive model. The method includes a hyperparameter to control the trade-off between accuracy and robustness. We demonstrate the effectiveness of this approach with a toy example and examples from particle physics.", "We describe a strategy for constructing a neural network jet substructure tagger which powerfully discriminates boosted decay signals while remaining largely uncorrelated with the jet mass. This reduces the impact of systematic uncertainties in background modeling while enhancing signal purity, resulting in improved discovery significance relative to existing taggers. The network is trained using an adversarial strategy, resulting in a tagger that learns to balance classification accuracy with decorrelation. As a benchmark scenario, we consider the case where large-radius jets originating from a boosted resonance decay are discriminated from a background of nonresonant quark and gluon jets. We show that in the presence of systematic uncertainties on the background rate, our adversarially-trained, decorrelated tagger considerably outperforms a conventionally trained neural network, despite having a slightly worse signal-background separation power. We generalize the adversarial training technique to include a parametric dependence on the signal hypothesis, training a single network that provides optimized, interpolatable decorrelated jet tagging across a continuous range of hypothetical resonance masses, after training on discrete choices of the signal mass.", "", "Correcting measured detector-level distributions to particle-level is essential to make data usable outside the experimental collaborations. The term unfolding is used to describe this procedure. A new method of unfolding the data using a modified Generative Adversarial Network (MSGAN) is presented here. Applied to various distributions, it is demonstrated to perform at par with, or better than, currently used methods." ] }
1901.05389
2910117392
The socioeconomic status of people depends on a combination of individual characteristics and environmental variables, thus its inference from online behavioral data is a difficult task. Attributes like user semantics in communication, habitat, occupation, or social network are all known to be determinant predictors of this feature. In this paper we propose three different data collection and combination methods to first estimate and, in turn, infer the socioeconomic status of French Twitter users from their online semantics. Our methods are based on open census data, crawled professional profiles, and remotely sensed, expert annotated information on living environment. Our inference models reach similar performance of earlier results with the advantage of relying on broadly available datasets and of providing a generalizable framework to estimate socioeconomic status of large numbers of Twitter users. These results may contribute to the scientific discussion on social stratification and inequalities, and may fuel several applications.
There is a growing effort in the field to combine online behavioral data with census records, and expert annotated information to infer social attributes of users of online services. The predicted attributes range from easily assessable individual characteristics such as age @cite_40 , or occupation @cite_38 @cite_18 @cite_11 @cite_30 to more complex psychological and sociological traits like political affiliation @cite_0 , personality @cite_33 , or SES @cite_14 @cite_38 .
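As a toy illustration of this kind of attribute inference, the following sketch trains a linear model on bag-of-words features of user text; the texts and SES labels are hypothetical placeholders, and this is not the pipeline of any specific cited study.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: one concatenated document per user, and a binary
# SES label (e.g., above/below a median income threshold).
user_texts = ["concatenated tweets of user 1 ...",
              "concatenated tweets of user 2 ..."]
ses_labels = [1, 0]

model = make_pipeline(TfidfVectorizer(min_df=1, ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(user_texts, ses_labels)
print(model.predict(["tweets of a new, unseen user ..."]))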
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_18", "@cite_14", "@cite_33", "@cite_0", "@cite_40", "@cite_11" ], "mid": [ "", "1948823840", "", "", "2119595472", "2250747954", "2285004539", "2166434810" ], "abstract": [ "", "Automatically inferring user demographics from social media posts is useful for both social science research and a range of downstream applications in marketing and politics. We present the first extensive study where user behaviour on Twitter is used to build a predictive model of income. We apply non-linear methods for regression, i.e. Gaussian Processes, achieving strong correlation between predicted and actual user income. This allows us to shed light on the factors that characterise income on Twitter and analyse their interplay with user emotions and sentiment, perceived psycho-demographics and language use expressed through the topics of their posts. Our analysis uncovers correlations between different feature categories and income, some of which reflect common belief e.g. higher perceived education and intelligence indicates higher earnings, known differences e.g. gender and age differences, however, others show novel findings e.g. higher income users express more fear and anger, whereas lower income users express more of the time emotion and opinions.", "", "", "We analyzed 700 million words, phrases, and topic instances collected from the Facebook messages of 75,000 volunteers, who also took standard personality tests, and found striking variations in language with personality, gender, and age. In our open-vocabulary technique, the data itself drives a comprehensive exploration of language that distinguishes people, finding connections that are not captured with traditional closed-vocabulary word-category analyses. Our analyses shed new light on psychosocial processes yielding results that are face valid (e.g., subjects living in high elevations talk about the mountains), tie in with other research (e.g., neurotic people disproportionately use the phrase ‘sick of’ and the word ‘depressed’), suggest new hypotheses (e.g., an active life implies emotional stability), and give detailed insights (males use the possessive ‘my’ when mentioning their ‘wife’ or ‘girlfriend’ more often than females use ‘my’ with ‘husband’ or 'boyfriend’). To date, this represents the largest study, by an order of magnitude, of language and personality.", "Existing models for social media personal analytics assume access to thousands of messages per user, even though most users author content only sporadically over time. Given this sparsity, we: (i) leverage content from the local neighborhood of a user; (ii) evaluate batch models as a function of size and the amount of messages in various types of neighborhoods; and (iii) estimate the amount of time and tweets required for a dynamic model to predict user preferences. We show that even when limited or no selfauthored data is available, language from friend, retweet and user mention communications provide sufficient evidence for prediction. When updating models over time based on Twitter, we find that political preference can be often be predicted using roughly 100 tweets, depending on the context of user selection, where this could mean hours, or weeks, based on the author’s tweeting frequency.", "Twitter provides an extremely rich and open source of data for studying human behaviour at scale. It has been used to advance our understanding of social network structure, the viral flow of information and how new ideas develop. 
Enriching Twitter with demographic information would permit more precise science and better generalisation to the real world. The only demographic indicators associated with a Twitter account are the free text name, location and description fields. We show how the age of most Twitter accounts can be inferred with high accuracy using the structure of the social graph. Besides classical social science applications, there are obvious privacy and child protection implications to this discovery. Previous work on Twitter age detection has focussed on either user-name or linguistic features of tweets. A shortcoming of the user-name approach is that it requires real names (Twitter names are often false) and census data from each user's (unknown) birth country. Problems with linguistic approaches are that most Twitter users do not tweet (the median number of Tweets is 4) and a different model must be learnt for each language. To address these issues, we devise a language-independent methodology for determining the age of Twitter users from data that is native to the Twitter ecosystem. Roughly 150,000 Twitter users specify an age in their free text description field. We generalize this to the entire Twitter network by showing that age can be predicted based on what or whom they follow. We adopt a Bayesian classification paradigm, which offers a consistent framework for handling uncertainty in our data, e.g., inaccurate age descriptions or spurious edges in the graph. Working within this paradigm we have successfully applied age detection to 700 million Twitter accounts with an F1 Score of 0.86.", "Social media content can be used as a complementary source to the traditional methods for extracting and studying collective social attributes. This study focuses on the prediction of the occupational class for a public user profile. Our analysis is conducted on a new annotated corpus of Twitter users, their respective job titles, posted textual content and platform-related attributes. We frame our task as classification using latent feature representations such as word clusters and embeddings. The employed linear and, especially, non-linear methods can predict a user’s occupational class with strong accuracy for the coarsest level of a standard occupation taxonomy which includes nine classes. Combined with a qualitative assessment, the derived results confirm the feasibility of our approach in inferring a new user attribute that can be embedded in a multitude of downstream applications." ] }
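A minimal, illustrative sketch of the text-based attribute inference described in the paragraph above: user posts are vectorized and a simple classifier predicts a coarse social attribute. The toy posts, the occupation labels, and the pipeline choices are hypothetical assumptions for illustration, not the setup of any cited study.

```python
# Hedged sketch: predict a coarse occupational class from post text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy corpus: concatenated posts per user, with a coarse
# occupational-class label (cf. the Twitter occupation studies).
posts = [
    "deployed the new build, reviewing pull requests all night",
    "double shift again, the ward was packed today",
    "refactoring the parser, unit tests finally green",
    "patients back to back, barely had time for lunch",
]
labels = ["tech", "healthcare", "tech", "healthcare"]

# TF-IDF features feeding a linear classifier: a common, simple baseline
# for the attribute-inference task sketched in the paragraph above.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)
print(model.predict(["pushed a hotfix before the standup"]))  # -> ['tech']
```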
1901.05350
2908701480
TensorFlow.js is a library for building and executing machine learning algorithms in JavaScript. TensorFlow.js models run in a web browser and in the Node.js environment. The library is part of the TensorFlow ecosystem, providing a set of APIs that are compatible with those in Python, allowing models to be ported between the Python and JavaScript ecosystems. TensorFlow.js has empowered a new set of developers from the extensive JavaScript community to build and deploy machine learning models and enabled new classes of on-device computation. This paper describes the design, API, and implementation of TensorFlow.js, and highlights some of the impactful use cases.
WebDNN @cite_59 is another deep learning library in JS that can execute pretrained models developed in TensorFlow, Keras, PyTorch, Chainer, and Caffe. To accelerate computation, WebDNN uses WebGPU @cite_9 , a technology initially proposed by Apple. WebGPU is in an early exploratory stage and is currently supported only in Safari Technology Preview, an experimental version of the Safari browser. As a fallback for other browsers, WebDNN uses WebAssembly @cite_48 , which enables execution of compiled C and C++ code directly in the browser. While WebAssembly is supported across all major browsers, it lacks SIMD instructions, a crucial component needed to make it as performant as WebGL and WebGPU. A minimal sketch of the Python-to-JavaScript model portability described in this record follows below.
{ "cite_N": [ "@cite_48", "@cite_9", "@cite_59" ], "mid": [ "2625141509", "", "2766260608" ], "abstract": [ "The maturation of the Web platform has given rise to sophisticated and demanding Web applications such as interactive 3D visualization, audio and video software, and games. With that, efficiency and security of code on the Web has become more important than ever. Yet JavaScript as the only built-in language of the Web is not well-equipped to meet these requirements, especially as a compilation target. Engineers from the four major browser vendors have risen to the challenge and collaboratively designed a portable low-level bytecode called WebAssembly. It offers compact representation, efficient validation and compilation, and safe low to no-overhead execution. Rather than committing to a specific programming model, WebAssembly is an abstraction over modern hardware, making it language-, hardware-, and platform-independent, with use cases beyond just the Web. WebAssembly has been designed with a formal semantics from the start. We describe the motivation, design and formal semantics of WebAssembly and provide some preliminary experience with implementations.", "", "Recently, deep neural network (DNN) is drawing a lot of attention because of its applications. However, it requires a lot of computational resources and tremendous processes in order to setup an execution environment based on hardware acceleration such as GPGPU. Therefore, providing DNN applications to end-users is very hard. To solve this problem, we have developed an installation-free web browser-based DNN execution framework, WebDNN. WebDNN optimizes the trained DNN model to compress model data and accelerate the execution. It executes the DNN model with novel JavaScript API to achieve zero-overhead execution. Empirical evaluations show that it achieves more than two-hundred times the unusual acceleration. WebDNN is an open source framework and you can download it from https: github.com mil-tokyo webdnn." ] }
1901.05362
2909928957
Current benchmarks for optical flow algorithms evaluate the estimation quality by comparing their predicted flow field with the ground truth, and additionally may compare interpolated frames, based on these predictions, with the correct frames from the actual image sequences. For the latter comparisons, objective measures such as mean square errors are applied. However, for applications like image interpolation, the expected user's quality of experience cannot be fully deduced from such simple quality measures. Therefore, we conducted a subjective quality assessment study by crowdsourcing for the interpolated images provided in one of the optical flow benchmarks, the Middlebury benchmark. We used paired comparisons with forced choice and reconstructed absolute quality scale values according to Thurstone's model using the classical least squares method. The results give rise to a re-ranking of 141 participating algorithms w.r.t. visual quality of interpolated frames mostly based on optical flow estimation. Our re-ranking result shows the necessity of visual quality assessment as another evaluation metric for optical flow and frame interpolation benchmarks.
So far, there is only one benchmark used for evaluating the performance of frame interpolation, namely the Middlebury benchmark. It was originally designed for the evaluation of optical flow algorithms. Since it provides the ground-truth in-between images needed to evaluate interpolation performance, several interpolation algorithms have also made use of this benchmark for evaluation @cite_20 , @cite_10 , @cite_15 . A minimal sketch of the Thurstone least-squares scale reconstruction mentioned in the abstract follows below.
{ "cite_N": [ "@cite_15", "@cite_10", "@cite_20" ], "mid": [ "2963268050", "", "183547530" ], "abstract": [ "Video frame interpolation algorithms typically estimate optical flow or its variations and then use it to guide the synthesis of an intermediate frame between two consecutive original frames. To handle challenges like occlusion, bidirectional flow between the two input frames is often estimated and used to warp and blend the input frames. However, how to effectively blend the two warped frames still remains a challenging problem. This paper presents a context-aware synthesis approach that warps not only the input frames but also their pixel-wise contextual information and uses them to interpolate a high-quality intermediate frame. Specifically, we first use a pre-trained neural network to extract per-pixel contextual information for input frames. We then employ a state-of-the-art optical flow algorithm to estimate bidirectional flow between them and pre-warp both input frames and their context maps. Finally, unlike common approaches that blend the pre-warped frames, our method feeds them and their context maps to a video frame synthesis neural network to produce the interpolated frame in a context-aware fashion. Our neural network is fully convolutional and is trained end to end. Our experiments show that our method can handle challenging scenarios such as occlusion and large motion and outperforms representative state-of-the-art approaches.", "", "We consider the problem of interpolating frames in an image sequence. For this purpose accurate motion estimation can be very helpful. We propose to move the motion estimation from the surrounding frames directly to the unknown frame by parametrizing the optical flow objective function such that the interpolation assumption is directly modeled. This reparametrization is a powerful trick that results in a number of appealing properties, in particular the motion estimation becomes more robust to noise and large displacements, and the computational workload is more than halved compared to usual bidirectional methods. The proposed reparametrization is generic and can be applied to almost every existing algorithm. In this paper we illustrate its advantages by considering the classic TV-L 1 optical flow algorithm as a prototype. We demonstrate that this widely used method can produce results that are competitive with current state-of-the-art methods. Finally we show that the scheme can be implemented on graphics hardware such that it becomes possible to double the frame rate of 640×480 video footage at 30 fps, i.e. to perform frame doubling in realtime." ] }
1901.05362
2909928957
Current benchmarks for optical flow algorithms evaluate the estimation quality by comparing their predicted flow field with the ground truth, and additionally may compare interpolated frames, based on these predictions, with the correct frames from the actual image sequences. For the latter comparisons, objective measures such as mean square errors are applied. However, for applications like image interpolation, the expected user's quality of experience cannot be fully deduced from such simple quality measures. Therefore, we conducted a subjective quality assessment study by crowdsourcing for the interpolated images provided in one of the optical flow benchmarks, the Middlebury benchmark. We used paired comparisons with forced choice and reconstructed absolute quality scale values according to Thurstone's model using the classical least squares method. The results give rise to a re-ranking of 141 participating algorithms w.r.t. visual quality of interpolated frames mostly based on optical flow estimation. Our re-ranking result shows the necessity of visual quality assessment as another evaluation metric for optical flow and frame interpolation benchmarks.
Some interpolation algorithms, such as @cite_6 , @cite_27 , used the UCF101 dataset @cite_23 for training and testing. Others, such as @cite_20 , @cite_22 , @cite_18 , used the videos from @cite_8 , @cite_16 . For evaluation, they generally computed one of MSE, PSNR, or SSIM between their interpolated images and the ground-truth in-between images; a minimal PSNR/SSIM computation is sketched below.
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_8", "@cite_6", "@cite_27", "@cite_23", "@cite_16", "@cite_20" ], "mid": [ "", "2161359508", "", "2586480386", "1905052409", "24089286", "", "183547530" ], "abstract": [ "", "In low bit-rate video communication, temporal subsampling is usually used due to limited available bandwidth. Motion compensated frame interpolation (MCFI) techniques are often employed in the decoder to restore the original frame rate and enhance the temporal quality. In this paper, we propose a low-complexity and high efficiency MCFI method. It first examines the motion vectors embedded in the bit-stream, then carries out overlapped block bi-directional motion estimation on those blocks whose embedded motion vectors are regarded as not accurate enough. Finally, it utilizes motion vector post-processing and overlapped block motion compensation to generate interpolated frames and further reduce blocking artifacts. Experimental results show that the proposed algorithm outperforms other methods in both PSNR and visual performance, while its complexity is also lower than other methods.", "", "We address the problem of synthesizing new video frames in an existing video, either in-between existing frames (interpolation), or subsequent to them (extrapolation). This problem is challenging because video appearance and motion can be highly complex. Traditional optical-flow-based solutions often fail where flow estimation is challenging, while newer neural-network-based methods that hallucinate pixel values directly often produce blurry results. We combine the advantages of these two methods by training a deep network that learns to synthesize video frames by flowing pixel values from existing ones, which we call deep voxel flow. Our method requires no human supervision, and any video can be used as training data by dropping, and then learning to predict, existing frames. The technique is efficient, and can be applied at any video resolution. We demonstrate that our method produces results that both quantitatively and qualitatively improve upon the state-of-the-art.", "Standard approaches to computing interpolated (in-between) frames in a video sequence require accurate pixel correspondences between images e.g. using optical flow. We present an efficient alternative by leveraging recent developments in phase-based methods that represent motion in the phase shift of individual pixels. This concept allows in-between images to be generated by simple per-pixel phase modification, without the need for any form of explicit correspondence estimation. Up until now, such methods have been limited in the range of motion that can be interpolated, which fundamentally restricts their usefulness. In order to reduce these limitations, we introduce a novel, bounded phase shift correction method that combines phase information across the levels of a multi-scale pyramid. Additionally, we propose extensions for phase-based image synthesis that yield smoother transitions between the interpolated images. Our approach avoids expensive global optimization typical of optical flow methods, and is both simple to implement and easy to parallelize. This allows us to interpolate frames at a fraction of the computational cost of traditional optical flow-based solutions, while achieving similar quality and in some cases even superior results. Our method fails gracefully in difficult interpolation settings, e.g., significant appearance changes, where flow-based methods often introduce serious visual artifacts. 
Due to its efficiency, our method is especially well suited for frame interpolation and retiming of high resolution, high frame rate video.", "We introduce UCF101 which is currently the largest dataset of human actions. It consists of 101 action classes, over 13k clips and 27 hours of video data. The database consists of realistic user uploaded videos containing camera motion and cluttered background. Additionally, we provide baseline action recognition results on this new dataset using standard bag of words approach with overall performance of 44.5 . To the best of our knowledge, UCF101 is currently the most challenging dataset of actions due to its large number of classes, large number of clips and also unconstrained nature of such clips.", "", "We consider the problem of interpolating frames in an image sequence. For this purpose accurate motion estimation can be very helpful. We propose to move the motion estimation from the surrounding frames directly to the unknown frame by parametrizing the optical flow objective function such that the interpolation assumption is directly modeled. This reparametrization is a powerful trick that results in a number of appealing properties, in particular the motion estimation becomes more robust to noise and large displacements, and the computational workload is more than halved compared to usual bidirectional methods. The proposed reparametrization is generic and can be applied to almost every existing algorithm. In this paper we illustrate its advantages by considering the classic TV-L 1 optical flow algorithm as a prototype. We demonstrate that this widely used method can produce results that are competitive with current state-of-the-art methods. Finally we show that the scheme can be implemented on graphics hardware such that it becomes possible to double the frame rate of 640×480 video footage at 30 fps, i.e. to perform frame doubling in realtime." ] }
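The evaluation protocol described above reduces to computing fidelity metrics between an interpolated frame and the ground-truth in-between frame. A minimal sketch with scikit-image follows (assuming version >= 0.19 for the `channel_axis` argument); the random frames are synthetic stand-ins.

```python
# Sketch: score an interpolated frame against ground truth with PSNR/SSIM.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ground_truth = rng.random((128, 128, 3))
# Stand-in "interpolated" frame: ground truth plus mild noise.
interpolated = np.clip(
    ground_truth + 0.05 * rng.standard_normal((128, 128, 3)), 0.0, 1.0)

print("PSNR:", peak_signal_noise_ratio(ground_truth, interpolated,
                                       data_range=1.0))
print("SSIM:", structural_similarity(ground_truth, interpolated,
                                     channel_axis=-1, data_range=1.0))
```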
1901.05415
2910458567
The majority of conversations a dialogue agent sees over its lifetime occur after it has already been trained and deployed, leaving a vast store of potential training signal untapped. In this work, we propose the self-feeding chatbot, a dialogue agent with the ability to extract new training examples from the conversations it participates in. As our agent engages in conversation, it also estimates user satisfaction in its responses. When the conversation appears to be going well, the user's responses become new training examples to imitate. When the agent believes it has made a mistake, it asks for feedback; learning to predict the feedback that will be given improves the chatbot's dialogue abilities further. On the PersonaChat chit-chat dataset with over 131k training examples, we find that learning from dialogue with a self-feeding chatbot significantly improves performance, regardless of the amount of traditional supervision.
The general concepts of lifelong learning and never-ending (language) learning @cite_8 are related to the topics discussed in this work, as are active learning @cite_7 and predictive modeling @cite_3 . A schematic sketch of the self-feeding loop summarized in the abstract follows below.
{ "cite_N": [ "@cite_3", "@cite_7", "@cite_8" ], "mid": [ "1987150989", "2426031434", "1512387364" ], "abstract": [ "This paper shows how ‘static’ neural approaches to adaptive target detection can be replaced by a more efficient and more sequential alternative. The latter is inspired by the observation that biological systems employ sequential eye movements for pattern recognition. A system is described, which builds an adaptive model of the time-varying inputs of an artificial fovea controlled by an adaptive neural controller. The controller uses the adaptive model for learning the sequential generation of fovea trajectories causing the fovea to move to a target in a visual scene. The system also learns to track moving targets. No teacher provides the desired activations of ‘eye muscles’ at various times. The only goal information is the shape of the target. Since the task is a ‘reward-only-at-goal’ task, it involves a complex temporal credit assignment problem. Some implications for adaptive attentive systems in general are discussed.", "Support vector machines have met with significant success in numerous real-world learning tasks. However, like most machine learning algorithms, they are generally applied using a randomly selected training set classified in advance. In many settings, we also have the option of using pool-based active learning. Instead of using a randomly selected training set, the learner has access to a pool of unlabeled instances and can request the labels for some number of them. We introduce a new algorithm for performing active learning with support vector machines, i.e., an algorithm for choosing which instances to request next. We provide a theoretical motivation for the algorithm using the notion of a version space. We present experimental results showing that employing our active learning method can significantly reduce the need for labeled training instances in both the standard inductive and transductive settings.", "We consider here the problem of building a never-ending language learner; that is, an intelligent computer agent that runs forever and that each day must (1) extract, or read, information from the web to populate a growing structured knowledge base, and (2) learn to perform this task better than on the previous day. In particular, we propose an approach and a set of design principles for such an agent, describe a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs with an estimated precision of 74 after running for 67 days, and discuss lessons learned from this preliminary attempt to build a never-ending learning agent." ] }
1901.05510
2911027342
This article presents a novel framework for performing visual inspection around 3D infrastructures by establishing a team of fully autonomous Micro Aerial Vehicles (MAVs) with robust localization, planning, and perception capabilities. The proposed aerial inspection system reaches a high level of autonomy at large scale, while pushing the boundaries of real-life deployment of aerial robotics. In the presented approach, the MAVs deployed for the inspection of the structure rely only on their onboard computers and sensory systems. The developed framework envisions a modular system that combines open research challenges in the fields of localization, path planning, and mapping with an overall capability for fast on-site deployment and reduced execution time, and that can repeatably perform the inspection mission according to the operator's needs. The architecture of the established system includes: 1) a geometry-based path planner for coverage of complex structures by multiple MAVs, 2) an accurate yet flexible localization component, which provides pose estimation for the MAVs by utilizing an Ultra-Wideband-fused inertial estimation scheme, and 3) a visual data post-processing scheme for 3D model building. The performance of the proposed framework has been experimentally demonstrated in multiple realistic outdoor field trials, all focusing on the challenging structure of a wind turbine as the main test case. The successful experimental results depict the merits of the proposed autonomous navigation system as the enabling technology for aerial robotic inspectors.
Nowadays, Micro Aerial Vehicles (MAVs) are gaining more and more attention from the scientific community, constituting a fast-paced emerging technology that constantly pushes its limits in accomplishing complex tasks @cite_5 . These platforms are characterized by their mechanical simplicity, agility, stability, and outstanding autonomy in reaching remote and distant places. Endowing MAVs with proper sensor suites, while navigating in indoor or outdoor, cluttered, and complex environments, could establish them as a powerful aerial tool for a wide span of applications. Some characteristic examples of application scenarios for such a novel deployment of aerial technology include infrastructure inspection @cite_13 @cite_3 , public safety and surveillance @cite_16 , and search and rescue missions @cite_1 .
{ "cite_N": [ "@cite_1", "@cite_3", "@cite_5", "@cite_16", "@cite_13" ], "mid": [ "2060794637", "", "2582222835", "1990101796", "2789773866" ], "abstract": [ "Urban search and rescue missions raise special requirements on robotic systems. Small aerial systems provide essential support to human task forces in situation assessment and surveillance. As external infrastructure for navigation and communication is usually not available, robotic systems must be able to operate autonomously. A limited payload of small aerial systems poses a great challenge to the system design. The optimal tradeoff between flight performance, sensors, and computing resources has to be found. Communication to external computers cannot be guaranteed; therefore, all processing and decision making has to be done on board. In this article, we present an unmanned aircraft system design fulfilling these requirements. The components of our system are structured into groups to encapsulate their functionality and interfaces. We use both laser and stereo vision odometry to enable seamless indoor and outdoor navigation. The odometry is fused with an inertial measurement unit in an extended Kalman filter. Navigation is supported by a module that recognizes known objects in the environment. A distributed computation approach is adopted to address the computational requirements of the used algorithms. The capabilities of the system are validated in flight experiments, using a quadrotor.", "", "During last decade the scientific research on Unmanned Aerial Vehicless (UAVs) increased spectacularly and led to the design of multiple types of aerial platforms. The major challenge today is the development of autonomously operating aerial agents capable of completing missions independently of human interaction. To this extent, visual sensing techniques have been integrated in the control pipeline of the UAVs in order to enhance their navigation and guidance skills. The aim of this article is to present a comprehensive literature review on vision based applications for UAVs focusing mainly on current developments and trends. These applications are sorted in different categories according to the research topics among various research groups. More specifically vision based position-attitude control, pose estimation and mapping, obstacle detection as well as target tracking are the identified components towards autonomous agents. Aerial platforms could reach greater level of autonomy by integrating all these technologies onboard. Additionally, throughout this article the concept of fusion multiple sensors is highlighted, while an overview on the challenges addressed and future trends in autonomous agent development will be also provided.", "We report recent results from field experiments conducted with a team of ground and aerial robots engaged in the collaborative mapping of an earthquake-damaged building. The goal of the experimental exercise is the generation of three-dimensional maps that capture the layout of a multifloor environment. The experiments took place in the top three floors of a structurally compromised building at Tohoku University in Sendai, Japan that was damaged during the 2011 Tohoku earthquake. We provide details of the approach to the collaborative mapping and report results from the experiments in the form of maps generated by the individual robots and as a team. We conclude by discussing observations from the experiments and future research topics. © 2012 Wiley Periodicals, Inc. 
(This work builds upon the conference paper (, 2012).)", "Abstract This article addresses the inspection problem of a complex 3D infrastructure using multiple Unmanned Aerial Vehicles (UAVs). The main novelty of the proposed scheme stems from the establishment of a theoretical framework capable of providing a path for accomplishing a full coverage of the infrastructure, without any further simplifications (number of considered representation points), by slicing it by horizontal planes to identify branches and assign specific areas to each agent as a solution to an overall optimization problem. Furthermore, the image streams collected during the coverage task are post-processed using Structure from Motion, stereo SLAM and mesh reconstruction algorithms, while the resulting 3D mesh can be used for further visual inspection purposes. The performance of the proposed Collaborative-Coverage Path Planning (C-CPP) has been experimentally evaluated in multiple indoor and realistic outdoor infrastructure inspection experiments and as such it is also contributing significantly towards real life applications for UAVs." ] }
1901.05510
2911027342
This article presents a novel framework for performing visual inspection around 3D infrastructures by establishing a team of fully autonomous Micro Aerial Vehicles (MAVs) with robust localization, planning, and perception capabilities. The proposed aerial inspection system reaches a high level of autonomy at large scale, while pushing the boundaries of real-life deployment of aerial robotics. In the presented approach, the MAVs deployed for the inspection of the structure rely only on their onboard computers and sensory systems. The developed framework envisions a modular system that combines open research challenges in the fields of localization, path planning, and mapping with an overall capability for fast on-site deployment and reduced execution time, and that can repeatably perform the inspection mission according to the operator's needs. The architecture of the established system includes: 1) a geometry-based path planner for coverage of complex structures by multiple MAVs, 2) an accurate yet flexible localization component, which provides pose estimation for the MAVs by utilizing an Ultra-Wideband-fused inertial estimation scheme, and 3) a visual data post-processing scheme for 3D model building. The performance of the proposed framework has been experimentally demonstrated in multiple realistic outdoor field trials, all focusing on the challenging structure of a wind turbine as the main test case. The successful experimental results depict the merits of the proposed autonomous navigation system as the enabling technology for aerial robotic inspectors.
One of the most common application areas in which MAVs are employed is the filming industry, but there are also efforts from other industries, such as mining, oil, and energy providers, to invest in the commercialization of MAVs for remote inspection applications. Towards this vision, MAVs are powerful tools with the profound potential to decrease risks to human life, reduce execution time, and increase the efficiency of the overall inspection task, especially when compared to conventional methods @cite_4 . Despite the fact that research in aerial robotics has reached significant milestones regarding localization @cite_0 , planning @cite_14 and perception @cite_9 @cite_6 , successful real-life demonstrations of autonomous inspection systems have rarely been reported in the literature, with the majority of applications focusing on impressive laboratory trials in fully controlled environments, in most cases utilizing expensive motion-capture systems @cite_2 , or on small-scale and well-defined outdoor environments @cite_10 @cite_15 .
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_9", "@cite_6", "@cite_0", "@cite_2", "@cite_15", "@cite_10" ], "mid": [ "2136265843", "2592064662", "2000146415", "", "2764251455", "1987526470", "1602540331", "2592746730" ], "abstract": [ "Localization and state estimation are reaching a certain maturity in mobile robotics, often providing both a precise robot pose estimate at a point in time and the corresponding uncertainty. In the bid to increase the robots' autonomy, the community now turns to more advanced tasks, such as navigation and path planning. For a realistic path to be computed, neither the uncertainty of the robot's perception nor the vehicle's dynamics can be ignored. In this work, we propose to specifically exploit the information on uncertainty, while also accounting for the physical laws governing the motion of the vehicle. Making use of rapidly exploring random belief trees, here we evaluate offline multiple path hypotheses in a known map to select a path exhibiting the motion required to estimate the robot's state accurately and, inherently, to avoid motion in modes, where otherwise observable states are not excited. We demonstrate the proposed approach on a micro aerial vehicle performing visual-inertial navigation. Such a system is known to require sufficient excitation to reach full observability. As a result, the proposed methodology plans safe avoidance not only of obstacles, but also areas where localization might fail during real flights compensating for the limitations of the localization methodology available. We show that our planner actively improves the precision of the state estimation by selecting paths that minimize the uncertainty in the estimated states. Furthermore, our experiments illustrate by comparison that a naive planner would fail to reach the goal within bounded uncertainty in most cases.", "Au sommaire : snapshot of the evolving 'drone' landscape - how the market will unfold (a view to 2050) - unlocking potential value of drones safety (the role of european level support)", "Autonomous microhelicopters will soon play a major role in tasks like search and rescue, environment monitoring, security surveillance, and inspection. If they are further realized in small scale, they can also be used in narrow outdoor and indoor environments and represent only a limited risk for people. However, for such operations, navigating based only on global positioning system (GPS) information is not sufficient. Fully autonomous operation in cities or other dense environments requires microhelicopters to fly at low altitudes, where GPS signals are often shadowed, or indoors and to actively explore unknown environments while avoiding collisions and creating maps. This involves a number of challenges on all levels of helicopter design, perception, actuation, control, and navigation, which still have to be solved. The Swarm of Micro Flying Robots (SFLY) project was a European Union-funded project with the goal of creating a swarm of vision-controlled microaerial vehicles (MAVs) capable of autonomous navigation, three-dimensional (3-D) mapping, and optimal surveillance coverage in GPS-denied environments. The SFLY MAVs do not rely on remote control, radio beacons, or motion-capture systems but can fly all by themselves using only a single onboard camera and an inertial measurement unit (IMU). 
This article describes the technical challenges that have been faced and the results achieved from hardware design and embedded programming to vision-based navigation and mapping, with an overview of how all the modules work and how they have been integrated into the final system. Code, data sets, and videos are publicly available to the robotics community. Experimental results demonstrating three MAVs navigating autonomously in an unknown GPS-denied environment and performing 3-D mapping and optimal surveillance coverage are presented.", "", "This article presents a software architecture for safe and reliable autonomous navigation of aerial robots in GPS-denied areas. The techniques employed within key modules from this architecture are explained in detail, such as a six-dimensional localization approach based on visual odometry and Monte Carlo localization, or a variant of the Lazy Theta* algorithm for motion planning. The aerial robot used to demonstrate this approach has been extensively tested over the past 2 years for localization and state estimation without any external positioning systems, autonomous local obstacle avoidance, and local path planning among other tasks. This article describes the architecture and main algorithms used to achieve these goals to build a robust autonomous system.", "The Flying Machine Arena is a platform for experiments and demonstrations with fleets of small flying vehicles. It utilizes a distributed, modular architecture linked by robust communication layers. An estimation and control framework along with built-in system protection components enable prototyping of new control systems concepts and implementation of novel demonstrations. More recently, a mobile version has been featured at several eminent public events. We describe the architecture of the Arena from the viewpoint of system robustness and its capability as a dual-purpose research and demonstration platform.", "In this paper, we propose a resource-efficient system for real-time 3D terrain reconstruction and landing-spot detection for micro aerial vehicles. The system runs on an on-board smartphone processor and requires only the input of a single downlooking camera and an inertial measurement unit. We generate a two-dimensional elevation map that is probabilistic, of fixed size, and robot-centric, thus, always covering the area immediately underneath the robot. The elevation map is continuously updated at a rate of 1 Hz with depth maps that are triangulated from multiple views using recursive Bayesian estimation. To highlight the usefulness of the proposed mapping framework for autonomous navigation of micro aerial vehicles, we successfully demonstrate fully autonomous landing including landing-spot detection in real-world experiments.", "On the quest of automating the navigation of challenging and promising Robotics platforms such as small Unmanned Aerial Vehicles (UAVs), the community has been increasingly active in developing perception capabilities able to run onboard such platforms in real-time. Despite that vision-based techniques have been at the heart of recent advancements, the realistic employment onboard UAVs is still in its infancy. Inspired by some of the most recent breakthroughs in online dense scene estimation and borrowing fundamental concepts from Computer Vision, in this work we propose a new pipeline for real-time, local scene reconstruction using a single camera for aerial navigation. 
Aiming for denser scene estimation than traditional feature-based maps with the ability to run onboard a small UAV in real-time, the proposed approach is demonstrated to achieve unprecedented performance producing rich maps of the camera's workspace, timely enough to serve in obstacle avoidance and real-time interaction of a robot with its direct surroundings. Evaluation on benchmarking datasets and on challenging aerial footage captured with a UAV featuring a conventional camera, reveals dramatic speed-ups, as well as denser and more accurate local reconstructions with respect to the state of the art." ] }
1907.06484
2954902138
A recent trend in IR has been the usage of neural networks to learn retrieval models for text-based adhoc search. While various approaches and architectures have yielded significantly better performance than traditional retrieval models such as BM25, it is still difficult to understand exactly why a document is relevant to a query. In the ML community several approaches for explaining decisions made by deep neural networks have been proposed -- including DeepSHAP, which modifies the DeepLIFT algorithm to estimate the relative importance (Shapley values) of input features for a given decision by comparing the activations in the network for a given image against the activations caused by a reference input. In image classification, the reference input tends to be a plain black image. While DeepSHAP has been well studied for image classification tasks, it remains to be seen how we can adapt it to explain the output of Neural Retrieval Models (NRMs). In particular, what is a good "black" image in the context of IR? In this paper we explored various reference input document construction techniques. Additionally, we compared the explanations generated by DeepSHAP to LIME (a model agnostic approach) and found that the explanations differ considerably. Our study raises concerns regarding the robustness and accuracy of explanations produced for NRMs. With this paper we aim to shed light on interesting problems surrounding interpretability in NRMs and highlight areas of future work.
There are two main approaches to interpretability in machine learning models: model-agnostic and model-introspective approaches. Model-agnostic approaches @cite_15 @cite_9 generate post-hoc explanations by treating the original model as a black box, learning an interpretable model on its output, perturbing its inputs, or both. Model-introspective approaches include, on the one hand, "interpretable" models such as decision trees @cite_13 , attention-based networks @cite_22 , and sparse linear models @cite_11 , where individual model components (a path in a decision tree, feature weights in a linear model) can be inspected to generate useful explanations. On the other hand, there are gradient-based methods like @cite_14 that generate attributions by considering the partial derivative of the output with respect to the input features (a minimal saliency sketch follows below). Following this, many works @cite_6 @cite_5 @cite_16 @cite_7 generate attributions by inspecting the neural network architecture.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_7", "@cite_9", "@cite_6", "@cite_5", "@cite_15", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "2962851944", "2950178297", "", "", "2962862931", "", "2516809705", "", "2962861173", "2164878629" ], "abstract": [ "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "", "", "Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable over another. To address this problem, we present a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. Based on insights from this unification, we present new methods that show improved computational performance and or better consistency with human intuition than previous approaches.", "", "", "", "We aim to produce predictive models that are not only accurate, but are also interpretable to human experts. Our models are decision lists, which consist of a series of if...then... statements (for example, if high blood pressure, then stroke) that discretize a high-dimensional, multivariate feature space into a series of simple, readily interpretable decision statements. We introduce a generative model called Bayesian Rule Lists that yields a posterior distribution over possible decision lists. It employs a novel prior structure to encourage sparsity. 
Our experiments show that Bayesian Rule Lists has predictive accuracy on par with the current top algorithms for prediction in machine learning. Our method is motivated by recent developments in personalized medicine, and can be used to produce highly accurate and interpretable medical scoring systems. We demonstrate this by producing an alternative to the CHADS2 score, actively used in clinical practice for estimating the risk of stroke in patients that have atrial fibrillation. Our model is as interpretable as CHADS2, but more accurate.", "Scoring systems are linear classification models that only require users to add, subtract and multiply a few small numbers in order to make a prediction. These models are in widespread use by the medical community, but are difficult to learn from data because they need to be accurate and sparse, have coprime integer coefficients, and satisfy multiple operational constraints. We present a new method for creating data-driven scoring systems called a Supersparse Linear Integer Model (SLIM). SLIM scoring systems are built by using an integer programming problem that directly encodes measures of accuracy (the 0---1 loss) and sparsity (the @math l0-seminorm) while restricting coefficients to coprime integers. SLIM can seamlessly incorporate a wide range of operational constraints related to accuracy and sparsity, and can produce acceptable models without parameter tuning because of the direct control provided over these quantities. We provide bounds on the testing and training accuracy of SLIM scoring systems, and present a new data reduction technique that can improve scalability by eliminating a portion of the training data beforehand. Our paper includes results from a collaboration with the Massachusetts General Hospital Sleep Laboratory, where SLIM is being used to create a highly tailored scoring system for sleep apnea screening." ] }
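A minimal sketch of the gradient-based attribution described in the paragraph above: each input feature is scored by the magnitude of the partial derivative of the model output with respect to it, i.e. a saliency map. The toy model and input are assumptions for illustration.

```python
# Sketch: gradient saliency attribution for a toy PyTorch model.
import torch

model = torch.nn.Sequential(torch.nn.Linear(10, 8), torch.nn.ReLU(),
                            torch.nn.Linear(8, 1))

x = torch.randn(1, 10, requires_grad=True)  # track gradients w.r.t. input
score = model(x).sum()
score.backward()                            # d score / d x lands in x.grad

attributions = x.grad.abs().squeeze()       # |d score / d x_i| per feature
print(attributions.argsort(descending=True)[:3])  # top-3 input features
```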
1907.06600
2959857067
Risk adjustment has become an increasingly important tool in healthcare. It has been extensively applied to payment adjustment for health plans to reflect the expected cost of providing coverage for members. Risk adjustment models are typically estimated using linear regression, which does not fully exploit the information in claims data. Moreover, the development of such linear regression models requires substantial domain expert knowledge and computational effort for data preprocessing. In this paper, we propose a novel approach for risk adjustment that uses semantic embeddings to represent patient medical histories. Embeddings efficiently represent medical concepts learned from diagnostic, procedure, and prescription codes in patients' medical histories. This approach substantially reduces the need for feature engineering. Our results show that models using embeddings had better performance than a commercial risk adjustment model on the task of prospective risk score prediction.
Le and Mikolov @cite_3 extended the models to groups of words, including sentences, paragraphs, and entire documents. In their Distributed Memory Model of Paragraph Vectors (PV-DM), which is analogous to the CBOW model, a paragraph (or other chosen word group) vector is added as a predictor to the context words' vectors. The paragraph vector "remembers" information about the paragraph beyond the selected context words and thus helps to predict the target word. The Distributed Bag of Words version of Paragraph Vectors (PV-DBOW) is analogous to the skip-gram model and uses only the paragraph vector to predict context words from the same paragraph. PV-DM provides the advantage of accounting for the sequence of the words in the paragraph, while PV-DBOW is less computationally intensive. Both, which are collectively known as doc2vec, allow efficient learning of paragraph vectors, even though different paragraphs may vary in length. A minimal doc2vec sketch follows below.
{ "cite_N": [ "@cite_3" ], "mid": [ "2949547296" ], "abstract": [ "Many machine learning algorithms require the input to be represented as a fixed-length feature vector. When it comes to texts, one of the most common fixed-length features is bag-of-words. Despite their popularity, bag-of-words features have two major weaknesses: they lose the ordering of the words and they also ignore semantics of the words. For example, \"powerful,\" \"strong\" and \"Paris\" are equally distant. In this paper, we propose Paragraph Vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents. Our algorithm represents each document by a dense vector which is trained to predict words in the document. Its construction gives our algorithm the potential to overcome the weaknesses of bag-of-words models. Empirical results show that Paragraph Vectors outperform bag-of-words models as well as other techniques for text representations. Finally, we achieve new state-of-the-art results on several text classification and sentiment analysis tasks." ] }
1907.06458
2957482284
Keyword extraction is used for summarizing the content of a document and supports efficient document retrieval, and is, as such, an indispensable part of modern text-based systems. We explore how load centrality, a graph-theoretic measure applied to graphs derived from a given text, can be used to efficiently identify and rank keywords. Introducing meta vertices (aggregates of existing vertices) and systematic redundancy filters, the proposed method performs on par with the state of the art for the keyword extraction task on 14 diverse datasets. The proposed method is unsupervised, interpretable and can also be used for document visualization.
The method that we propose in this paper, RaKUn, is a graph-based keyword extraction method. We exploit some ideas from the area of graph aggregation-based learning, where, for example, graph convolutional neural networks and similar approaches were shown to yield high-quality vertex representations by aggregating their neighborhoods' feature space @cite_16 . This work implements related ideas (albeit not in a neural network setting), aggregating redundant information into meta vertices. Similar efforts have proven useful for hierarchical subnetwork aggregation in sensor networks @cite_0 and in biological use cases for the simulation of large proteins @cite_10 .
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_10" ], "mid": [ "2154625773", "2963224980", "2106988671" ], "abstract": [ "In-network aggregation is an essential primitive for performing queries on sensor network data. However, most aggregation algorithms assume that all intermediate nodes are trusted. In contrast, the standard threat model in sensor network security assumes that an attacker may control a fraction of the nodes, which may misbehave in an arbitrary (Byzantine) manner.We present the first algorithm for provably secure hierarchical in-network data aggregation. Our algorithm is guaranteed to detect any manipulation of the aggregate by the adversary beyond what is achievable through direct injection of data values at compromised nodes. In other words, the adversary can never gain any advantage from misrepresenting intermediate aggregation computations. Our algorithm incurs only O(Δ log2 n) node congestion, supports arbitrary tree-based aggregator topologies and retains its resistance against aggregation manipulation in the presence of arbitrary numbers of malicious nodes. The main algorithm is based on performing the sum aggregation securely by first forcing the adversary to commit to its choice of intermediate aggregation results, and then having the sensor nodes independently verify that their contributions to the aggregate are correctly incorporated. We show how to reduce secure median , count , and average to this primitive.", "Graph is an important data representation which appears in a wide diversity of real-world scenarios. Effective graph analytics provides users a deeper understanding of what is behind the data, and thus can benefit a lot of useful applications such as node classification, node recommendation, link prediction, etc. However, most graph analytics methods suffer the high computation and space cost. Graph embedding is an effective yet efficient way to solve the graph analytics problem. It converts the graph data into a low dimensional space in which the graph structural information and graph properties are maximumly preserved. In this survey, we conduct a comprehensive review of the literature in graph embedding. We first introduce the formal definition of graph embedding as well as the related concepts. After that, we propose two taxonomies of graph embedding which correspond to what challenges exist in different graph embedding problem settings and how the existing work addresses these challenges in their solutions. Finally, we summarize the applications that graph embedding enables and suggest four promising future research directions in terms of computation efficiency, problem settings, techniques, and application scenarios.", "Elastic network models have been successful in elucidating the largest scale collective motions of proteins. These models are based on a set of highly coupled springs, where only the close neighboring amino acids interact, without any residue specificity. Our objective here is to determine whether the equivalent cooperative motions can be obtained upon further coarse-graining of the protein structure along the backbone. The influenza virus hemagglutinin A (HA), composed of N = 1509 residues, is utilized for this analysis. Elastic network model calculations are performed for coarse-grained HA structures containing only N 2, N 10, N 20, and N 40 residues along the backbone. 
High correlations (>0.95) between residue fluctuations are obtained for the first dominant (slowest) mode of motion between the original model and the coarse-grained models. In the case of coarse-graining by a factor of 1 40, the slowest mode shape for HA is reconstructed for all residues by successively selecting different subsets of residues, shifting one residue at a time. The correlation for this reconstructed first mode shape with the original all-residue case is 0.73, while the computational time is reduced by about three orders of magnitude. The reduction in computational time will be much more significant for larger targeted structures. Thus, the dominant motions of protein structures are robust enough to be captured at extremely high levels of coarse-graining. And more importantly, the dynamics of extremely large complexes are now accessible with this new methodology." ] }
1907.06458
2957482284
Keyword extraction is used for summarizing the content of a document and supports efficient document retrieval, and is, as such, an indispensable part of modern text-based systems. We explore how load centrality, a graph-theoretic measure applied to graphs derived from a given text, can be used to efficiently identify and rank keywords. Introducing meta vertices (aggregates of existing vertices) and systematic redundancy filters, the proposed method performs on par with the state of the art for the keyword extraction task on 14 diverse datasets. The proposed method is unsupervised, interpretable and can also be used for document visualization.
The main contributions of this paper are as follows. The notion of load centrality has, to our knowledge, not yet been sufficiently exploited for keyword extraction. We show that this fast measure offers performance competitive with other widely used centralities, such as the PageRank centrality (used in @cite_7 ). To our knowledge, this work is the first to introduce the notion of meta vertices with the aim of aggregating similar vertices, following ideas similar to the statistical method YAKE @cite_25 , which is considered the state of the art for keyword extraction. Next, as part of the proposed RaKUn algorithm, we extend the extraction from unigrams to bigram and trigram keywords based on load centrality scores computed for the considered tokens. Last but not least, we demonstrate how arbitrary textual corpora can be transformed into weighted graphs whilst maintaining sequential information, offering the opportunity to exploit context not naturally present in statistical methods (a minimal load-centrality keyword sketch follows below).
{ "cite_N": [ "@cite_25", "@cite_7" ], "mid": [ "2790109590", "1525595230" ], "abstract": [ "In this paper, we present YAKE!, a novel feature-based system for multi-lingual keyword extraction from single documents, which supports texts of different sizes, domains or languages. Unlike most systems, YAKE! does not rely on dictionaries or thesauri, neither it is trained against any corpora. Instead, we follow an unsupervised approach which builds upon features extracted from the text, making it thus applicable to documents written in many different languages without the need for external knowledge. This can be beneficial for a large number of tasks and a plethora of situations where the access to training corpora is either limited or restricted. In this demo, we offer an easy to use, interactive session, where users from both academia and industry can try our system, either by using a sample document or by introducing their own text. As an add-on, we compare our extracted keywords against the output produced by the IBM Natural Language Understanding (IBM NLU) and Rake system. YAKE! demo is available at http: bit.ly YakeDemoECIR2018. A python implementation of YAKE! is also available at PyPi repository (https: pypi.python.org pypi yake ).", "In this paper, the authors introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications." ] }
1907.06490
2957558419
Recently, convolutional neural networks (CNN) have been successfully applied to many remote sensing problems. However, deep learning techniques for multi-image super-resolution from multitemporal unregistered imagery have received little attention so far. This work proposes a novel CNN-based technique that exploits both spatial and temporal correlations to combine multiple images. This novel framework integrates the spatial registration task directly inside the CNN, and exploits the representation learning capabilities of the network to enhance registration accuracy. The entire super-resolution process relies on a single CNN with three main stages: shared 2D convolutions to extract high-dimensional features from the input images; a subnetwork proposing registration filters derived from the high-dimensional feature representations; and 3D convolutions for slow fusion of the features from multiple images. The whole network can be trained end-to-end to recover a single high-resolution image from multiple unregistered low-resolution images. The method presented in this paper is the winner of the PROBA-V super-resolution challenge issued by the European Space Agency.
The literature on SR is extensive, covering both SISR and MISR techniques. SISR approaches can be classified into three main classes: interpolation-based methods (e.g., Lanczos kernels), optimization-based methods and learning-based methods. Optimization-based methods explicitly model prior knowledge about natural images to regularize this ill-posed inverse problem, and include low total-variation priors @cite_44 , gradient-profile priors @cite_58 @cite_12 and non-local similarity @cite_42 @cite_37 @cite_50 . Adding prior knowledge restricts the possible solution space, generating higher-quality solutions. However, the performance of many optimization-based methods degrades rapidly when the upscaling factor increases, and these methods are usually computationally expensive.
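As an illustration of the optimization-based family, the following hedged Python sketch minimizes a data-fidelity term plus a smoothed total-variation prior by gradient descent; the 2x average-pooling degradation operator, step size and regularization weight are illustrative assumptions, not choices taken from the cited works.

```python
# Toy optimization-based SISR with a total-variation prior:
# minimize ||D(x) - y||^2 + lam * TV(x) by gradient descent.
import numpy as np

def down(x):
    # Assumed degradation operator D: 2x average pooling.
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def down_adjoint(r):
    # Adjoint of D: spread each LR residual back over its 2x2 HR block.
    return 0.25 * np.kron(r, np.ones((2, 2)))

def tv_grad(x, eps=1e-3):
    # Gradient of a smoothed total-variation term, -div(grad x / |grad x|).
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
    div_x = np.diff(dx / mag, axis=1, prepend=(dx / mag)[:, :1])
    div_y = np.diff(dy / mag, axis=0, prepend=(dy / mag)[:1, :])
    return -(div_x + div_y)

def tv_sr(y, lam=0.05, steps=300, lr=0.1):
    x = np.kron(y, np.ones((2, 2)))  # nearest-neighbor initial guess
    for _ in range(steps):
        x -= lr * (2 * down_adjoint(down(x) - y) + lam * tv_grad(x))
    return x
```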
{ "cite_N": [ "@cite_37", "@cite_42", "@cite_44", "@cite_50", "@cite_58", "@cite_12" ], "mid": [ "1992408872", "2137290314", "2125325064", "2123613719", "2111454493", "1995228944" ], "abstract": [ "Image super-resolution (SR) reconstruction is essentially an ill-posed problem, so it is important to design an effective prior. For this purpose, we propose a novel image SR method by learning both non-local and local regularization priors from a given low-resolution image. The non-local prior takes advantage of the redundancy of similar patches in natural images, while the local prior assumes that a target pixel can be estimated by a weighted average of its neighbors. Based on the above considerations, we utilize the non-local means filter to learn a non-local prior and the steering kernel regression to learn a local prior. By assembling the two complementary regularization terms, we propose a maximum a posteriori probability framework for SR recovery. Thorough experimental results suggest that the proposed SR method can reconstruct higher quality results both quantitatively and perceptually.", "Super-resolution reconstruction proposes a fusion of several low-quality images into one higher quality result with better optical resolution. Classic super-resolution techniques strongly rely on the availability of accurate motion estimation for this fusion task. When the motion is estimated inaccurately, as often happens for nonglobal motion fields, annoying artifacts appear in the super-resolved outcome. Encouraged by recent developments on the video denoising problem, where state-of-the-art algorithms are formed with no explicit motion estimation, we seek a super-resolution algorithm of similar nature that will allow processing sequences with general motion patterns. In this paper, we base our solution on the Nonlocal-Means (NLM) algorithm. We show how this denoising method is generalized to become a relatively simple super-resolution algorithm with no explicit motion estimation. Results on several test movies show that the proposed method is very successful in providing super-resolution on general sequences.", "Super-resolution (SR) reconstruction technique is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, the total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.", "Example learning-based image super-resolution (SR) is recognized as an effective way to produce a high-resolution (HR) image with the help of an external training set. The effectiveness of learning-based SR methods, however, depends highly upon the consistency between the supporting training set and low-resolution (LR) images to be handled. 
To reduce the adverse effect brought by incompatible high-frequency details in the training set, we propose a single image SR approach by learning multiscale self-similarities from an LR image itself. The proposed SR approach is based upon an observation that small patches in natural images tend to redundantly repeat themselves many times both within the same scale and across different scales. To synthesize the missing details, we establish the HR-LR patch pairs using the initial LR input and its down-sampled version to capture the similarities across different scales and utilize the neighbor embedding algorithm to estimate the relationship between the LR and HR image pairs. To fully exploit the similarities across various scales inside the input LR image, we accumulate the previous resultant images as training examples for the subsequent reconstruction processes and adopt a gradual magnification scheme to upscale the LR input to the desired size step by step. In addition, to preserve sharper edges and suppress aliasing artifacts, we further apply the nonlocal means method to learn the similarity within the same scale and formulate a nonlocal prior regularization term to well pose SR estimation under a reconstruction-based SR framework. Experimental results demonstrate that the proposed method can produce compelling SR recovery both quantitatively and perceptually in comparison with other state-of-the-art baselines.", "In this paper, we propose a novel generic image prior-gradient profile prior, which implies the prior knowledge of natural image gradients. In this prior, the image gradients are represented by gradient profiles, which are 1-D profiles of gradient magnitudes perpendicular to image structures. We model the gradient profiles by a parametric gradient profile model. Using this model, the prior knowledge of the gradient profiles are learned from a large collection of natural images, which are called gradient profile prior. Based on this prior, we propose a gradient field transformation to constrain the gradient fields of the high resolution image and the enhanced image when performing single image super-resolution and sharpness enhancement. With this simple but very effective approach, we are able to produce state-of-the-art results. The reconstructed high resolution images or the enhanced images are sharp while have rare ringing or jaggy artifacts.", "Super-resolution from a single image plays an important role in many computer vision systems. However, it is still a challenging task, especially in preserving local edge structures. To construct high-resolution images while preserving the sharp edges, an effective edge-directed super-resolution method is presented in this paper. An adaptive self-interpolation algorithm is first proposed to estimate a sharp high-resolution gradient field directly from the input low-resolution image. The obtained high-resolution gradient is then regarded as a gradient constraint or an edge-preserving constraint to reconstruct the high-resolution image. Extensive results have shown both qualitatively and quantitatively that the proposed method can produce convincing super-resolution images containing complex and sharp features, as compared with the other state-of-the-art super-resolution algorithms." ] }
1907.06490
2957558419
Recently, convolutional neural networks (CNN) have been successfully applied to many remote sensing problems. However, deep learning techniques for multi-image super-resolution from multitemporal unregistered imagery have received little attention so far. This work proposes a novel CNN-based technique that exploits both spatial and temporal correlations to combine multiple images. This novel framework integrates the spatial registration task directly inside the CNN, and exploits the representation learning capabilities of the network to enhance registration accuracy. The entire super-resolution process relies on a single CNN with three main stages: shared 2D convolutions to extract high-dimensional features from the input images; a subnetwork proposing registration filters derived from the high-dimensional feature representations; and 3D convolutions for slow fusion of the features from multiple images. The whole network can be trained end-to-end to recover a single high-resolution image from multiple unregistered low-resolution images. The method presented in this paper is the winner of the PROBA-V super-resolution challenge issued by the European Space Agency.
Learning-based methods can be pixel-based or example-based. The latter are the most popular; they model the correspondence between LR and HR patches for HR patch prediction. After the early work by @cite_15 , based on searching the @math nearest LR-HR patch pairs of the input LR patch to estimate the HR patch, neighbor embedding @cite_34 @cite_7 @cite_52 , sparse-coding @cite_66 @cite_24 @cite_19 @cite_54 @cite_9 , anchored neighborhood regression @cite_31 , and random forest @cite_45 methods were proposed. More recently, deep convolutional neural networks (CNNs) @cite_59 @cite_22 @cite_25 @cite_0 @cite_60 @cite_46 @cite_40 @cite_10 have achieved state-of-the-art results for the SISR task. The deep learning paradigm gained attention due to its natural capability of extracting high-level features from images. This is particularly important in remote sensing scenarios, where images are highly detailed and their statistics can be very complex.
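For reference, a minimal SRCNN-style network in the spirit of this CNN-based SISR line of work can be sketched in a few lines of PyTorch; the 9-5-5 layer configuration and channel counts below are common illustrative choices, not an exact reproduction of any published model.

```python
# Minimal SRCNN-style SISR sketch (illustrative, not an exact reproduction).
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),        # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        # x is assumed to be the LR image already upscaled (e.g. bicubically)
        # to the target HR size, as in the early CNN-based SISR approaches.
        return self.body(x)

model = SRCNN()
hr_estimate = model(torch.randn(1, 1, 64, 64))  # dummy single-channel patch
```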
{ "cite_N": [ "@cite_22", "@cite_54", "@cite_15", "@cite_10", "@cite_66", "@cite_60", "@cite_52", "@cite_46", "@cite_7", "@cite_19", "@cite_40", "@cite_34", "@cite_25", "@cite_9", "@cite_24", "@cite_0", "@cite_45", "@cite_59", "@cite_31" ], "mid": [ "", "1978749115", "", "2964101377", "2121058967", "", "", "2476548250", "2103844245", "2149669120", "2963372104", "2118963448", "54257720", "1791560514", "2088254198", "2242218935", "1950594372", "2508457857", "2150081556" ], "abstract": [ "", "Sparse representation models code an image patch as a linear combination of a few atoms chosen out from an over-complete dictionary, and they have shown promising results in various image restoration applications. However, due to the degradation of the observed image (e.g., noisy, blurred, and or down-sampled), the sparse representations by conventional models may not be accurate enough for a faithful reconstruction of the original image. To improve the performance of sparse representation-based image restoration, in this paper the concept of sparse coding noise is introduced, and the goal of image restoration turns to how to suppress the sparse coding noise. To this end, we exploit the image nonlocal self-similarity to obtain good estimates of the sparse coding coefficients of the original image, and then centralize the sparse coding coefficients of the observed image to those estimates. The so-called nonlocally centralized sparse representation (NCSR) model is as simple as the standard sparse representation model, while our extensive experiments on various types of image restoration problems, including denoising, deblurring and super-resolution, validate the generality and state-of-the-art performance of the proposed NCSR algorithm.", "", "A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively-low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose residual dense block (RDB) to extract abundant local features via dense connected convolutional layers. RDB further allows direct connections from the state of preceding RDB to all the layers of current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is then used to adaptively learn more effective features from preceding and current local features and stabilizes the training of wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods.", "This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. 
Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs , reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework.", "", "", "Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.", "Until now, neighbor-embedding-based (NE) algorithms for super-resolution (SR) have carried out two independent processes to synthesize high-resolution (HR) image patches. In the first process, neighbor search is performed using the Euclidean distance metric, and in the second process, the optimal weights are determined by solving a constrained least squares problem. However, the separate processes are not optimal. In this paper, we propose a sparse neighbor selection scheme for SR reconstruction. We first predetermine a larger number of neighbors as potential candidates and develop an extended Robust-SL0 algorithm to simultaneously find the neighbors and to solve the reconstruction weights. 
Recognizing that the k-nearest neighbor (k-NN) for reconstruction should have similar local geometric structures based on clustering, we employ a local statistical feature, namely histograms of oriented gradients (HoG) of low-resolution (LR) image patches, to perform such clustering. By conveying local structural information of HoG in the synthesis stage, the k-NN of each LR input patch is adaptively chosen from their associated subset, which significantly improves the speed of synthesizing the HR image while preserving the quality of reconstruction. Experimental results suggest that the proposed method can achieve competitive SR quality compared with other state-of-the-art baselines.", "This paper proposes a framework for single-image super-resolution. The underlying idea is to learn a map from input low-resolution images to target high-resolution images based on example pairs of input and output images. Kernel ridge regression (KRR) is adopted for this purpose. To reduce the time complexity of training and testing for KRR, a sparse solution is found by combining the ideas of kernel matching pursuit and gradient descent. As a regularized solution, KRR leads to a better generalization than simply storing the examples as has been done in existing example-based algorithms and results in much less noisy images. However, this may introduce blurring and ringing artifacts around major edges as sharp changes are penalized severely. A prior model of a generic image class which takes into account the discontinuity property of images is adopted to resolve this problem. Comparison with existing algorithms shows the effectiveness of the proposed method.", "Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge[26].", "In this paper, we propose a novel method for solving single-image super-resolution problems. Given a low-resolution image as input, we recover its high-resolution counterpart using a set of training examples. While this formulation resembles other learning-based methods for super-resolution, our method has been inspired by recent manifold teaming methods, particularly locally linear embedding (LLE). Specifically, small image patches in the lowand high-resolution images form manifolds with similar local geometry in two distinct feature spaces. As in LLE, local geometry is characterized by how a feature vector corresponding to a patch can be reconstructed by its neighbors in the feature space. 
Besides using the training image pairs to estimate the high-resolution embedding, we also enforce local compatibility and smoothness constraints between patches in the target high-resolution image through overlapping. Experiments show that our method is very flexible and gives good empirical results.", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.", "This paper deals with the single image scale-up problem using sparse-representation modeling. The goal is to recover an original image from its blurred and down-scaled noisy version. Since this problem is highly ill-posed, a prior is needed in order to regularize it. The literature offers various ways to address this problem, ranging from simple linear space-invariant interpolation schemes (e.g., bicubic interpolation), to spatially-adaptive and non-linear filters of various sorts. We embark from a recently-proposed successful algorithm by Yang et. al. [1,2], and similarly assume a local Sparse-Land model on image patches, serving as regularization. Several important modifications to the above-mentioned solution are introduced, and are shown to lead to improved results. These modifications include a major simplification of the overall process both in terms of the computational complexity and the algorithm architecture, using a different training approach for the dictionary-pair, and introducing the ability to operate without a training-set by boot-strapping the scale-up task from the given low-resolution image. We demonstrate the results on true images, showing both visual and PSNR improvements.", "In this paper, we propose a novel coupled dictionary training method for single-image super-resolution (SR) based on patchwise sparse recovery, where the learned couple dictionaries relate the low- and high-resolution (HR) image patch spaces via sparse representation. The learning process enforces that the sparse representation of a low-resolution (LR) image patch in terms of the LR dictionary can well reconstruct its underlying HR image patch with the dictionary in the high-resolution image patch space. We model the learning problem as a bilevel optimization problem, where the optimization includes an l1-norm minimization problem in its constraints. Implicit differentiation is employed to calculate the desired gradient for stochastic gradient descent. We demonstrate that our coupled dictionary learning method can outperform the existing joint dictionary training method both quantitatively and qualitatively. Furthermore, for real applications, we speed up the algorithm approximately 10 times by learning a neural network model for fast sparse inference and selectively processing only those visually salient regions. 
Extensive experimental comparisons with state-of-the-art SR algorithms validate the effectiveness of our proposed approach.", "We present a highly accurate single-image superresolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [19]. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (104 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable.", "The aim of single image super-resolution is to reconstruct a high-resolution image from a single low-resolution input. Although the task is ill-posed it can be seen as finding a non-linear mapping from a low to high-dimensional space. Recent methods that rely on both neighborhood embedding and sparse-coding have led to tremendous quality improvements. Yet, many of the previous approaches are hard to apply in practice because they are either too slow or demand tedious parameter tweaks. In this paper, we propose to directly map from low to high-resolution patches using random forests. We show the close relation of previous work on single image super-resolution to locally linear regression and demonstrate how random forests nicely fit into this framework. During training the trees, we optimize a novel and effective regularized objective that not only operates on the output space but also on the input space, which especially suits the regression task. During inference, our method comprises the same well-known computational efficiency that has made random forests popular for many computer vision problems. In the experimental part, we demonstrate on standard benchmarks for single image super-resolution that our approach yields highly accurate state-of-the-art results, while being fast in both training and evaluation.", "The discriminative model learning for image denoising has been recently attracting considerable attentions due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle with several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. 
Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.", "Recently there have been significant advances in image up scaling or image super-resolution based on a dictionary of low and high resolution exemplars. The running time of the methods is often ignored despite the fact that it is a critical factor for real applications. This paper proposes fast super-resolution methods while making no compromise on quality. First, we support the use of sparse learned dictionaries in combination with neighbor embedding methods. In this case, the nearest neighbors are computed using the correlation with the dictionary atoms rather than the Euclidean distance. Moreover, we show that most of the current approaches reach top performance for the right parameters. Second, we show that using global collaborative coding has considerable speed advantages, reducing the super-resolution mapping to a precomputed projective matrix. Third, we propose the anchored neighborhood regression. That is to anchor the neighborhood embedding of a low resolution patch to the nearest atom in the dictionary and to precompute the corresponding embedding matrix. These proposals are contrasted with current state-of-the-art methods on standard images. We obtain similar or improved quality and one or two orders of magnitude speed improvements." ] }
1907.06490
2957558419
Recently, convolutional neural networks (CNN) have been successfully applied to many remote sensing problems. However, deep learning techniques for multi-image super-resolution from multitemporal unregistered imagery have received little attention so far. This work proposes a novel CNN-based technique that exploits both spatial and temporal correlations to combine multiple images. This novel framework integrates the spatial registration task directly inside the CNN, and exploits the representation learning capabilities of the network to enhance registration accuracy. The entire super-resolution process relies on a single CNN with three main stages: shared 2D convolutions to extract high-dimensional features from the input images; a subnetwork proposing registration filters derived from the high-dimensional feature representations; and 3D convolutions for slow fusion of the features from multiple images. The whole network can be trained end-to-end to recover a single high-resolution image from multiple unregistered low-resolution images. The method presented in this paper is the winner of the PROBA-V super-resolution challenge issued by the European Space Agency.
While most deep learning SISR works target traditional natural images, CNNs have lately been exploited for remote sensing imagery as well. @cite_2 applied a deep learning based method to remote sensing images in the frequency domain. Their CNN takes discrete wavelet transformed images as input and adopts recursive blocks and residual learning, in both global and local manners, to reconstruct HR wavelet coefficients. @cite_64 proposed a generative adversarial network-based edge-enhancement network for robust satellite image SR reconstruction.
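The wavelet-domain idea can be illustrated with a short, hedged Python sketch using PyWavelets: the 2D DWT splits an image into subbands that could be stacked as CNN input channels, and the inverse transform reassembles the (predicted) HR coefficients. The 'haar' wavelet and the random stand-in image are assumptions made purely for illustration.

```python
# Wavelet-domain preprocessing sketch for CNN-based SR (illustrative only).
import numpy as np
import pywt

img = np.random.rand(128, 128).astype(np.float32)  # stand-in LR image
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")          # approximation + detail bands
subbands = np.stack([cA, cH, cV, cD])              # (4, 64, 64): CNN input channels

# ... a CNN would refine the high-frequency coefficients here ...

reconstructed = pywt.idwt2((cA, (cH, cV, cD)), "haar")
assert np.allclose(reconstructed, img, atol=1e-5)  # perfect reconstruction check
```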
{ "cite_N": [ "@cite_64", "@cite_2" ], "mid": [ "2927933146", "2907551576" ], "abstract": [ "The current superresolution (SR) methods based on deep learning have shown remarkable comparative advantages but remain unsatisfactory in recovering the high-frequency edge details of the images in noise-contaminated imaging conditions, e.g., remote sensing satellite imaging. In this paper, we propose a generative adversarial network (GAN)-based edge-enhancement network (EEGAN) for robust satellite image SR reconstruction along with the adversarial learning strategy that is insensitive to noise. In particular, EEGAN consists of two main subnetworks: an ultradense subnetwork (UDSN) and an edge-enhancement subnetwork (EESN). In UDSN, a group of 2-D dense blocks is assembled for feature extraction and to obtain an intermediate high-resolution result that looks sharp but is eroded with artifacts and noises as previous GAN-based methods do. Then, EESN is constructed to extract and enhance the image contours by purifying the noise-contaminated components with mask processing. The recovered intermediate image and enhanced edges can be combined to generate the result that enjoys high credibility and clear contents. Extensive experiments on Kaggle Open Source Data set , Jilin-1 video satellite images, and Digitalglobe show superior reconstruction performance compared to the state-of-the-art SR approaches.", "Deep learning (DL) has been successfully applied to single image super-resolution (SISR), which aims at reconstructing a high-resolution (HR) image from its low-resolution (LR) counterpart. Different from most current DL-based methods, which perform reconstruction in the spatial domain, we use a scheme based in the frequency domain to reconstruct the HR image at various frequency bands. Further, we propose a method that incorporates the wavelet transform (WT) and the recursive Res-Net. The WT is applied to the LR image to divide it into various frequency components. Then, an elaborately designed network with recursive residual blocks is used to predict high-frequency components. Finally, the reconstructed image is obtained via the inverse WT. This paper has three main contributions: 1) an SISR scheme based on the frequency domain is proposed under a DL framework to fully exploit the potential to depict images at different frequency bands; 2) recursive block and residual learning in global and local manners are adopted to ease the training of the deep network, and the batch normalization layer is removed to increase the flexibility of the network, save memory, and promote speed; and 3) the low-frequency wavelet component is replaced by an LR image with more details to further improve performance. To validate the effectiveness of the proposed method, extensive experiments are performed using the NWPU-RESISC45 data set, and the results demonstrate that the proposed method outperforms several state-of-the-art methods in terms of both objective evaluation and subjective perspective." ] }
1907.06490
2957558419
Recently, convolutional neural networks (CNN) have been successfully applied to many remote sensing problems. However, deep learning techniques for multi-image super-resolution from multitemporal unregistered imagery have received little attention so far. This work proposes a novel CNN-based technique that exploits both spatial and temporal correlations to combine multiple images. This novel framework integrates the spatial registration task directly inside the CNN, and exploits the representation learning capabilities of the network to enhance registration accuracy. The entire super-resolution process relies on a single CNN with three main stages: shared 2D convolutions to extract high-dimensional features from the input images; a subnetwork proposing registration filters derived from the high-dimensional feature representations; and 3D convolutions for slow fusion of the features from multiple images. The whole network can be trained end-to-end to recover a single high-resolution image from multiple unregistered low-resolution images. The method presented in this paper is the winner of the PROBA-V super-resolution challenge issued by the European Space Agency.
Concerning MISR, the first work was proposed by Tsai and Huang @cite_26 , who used a frequency-domain technique to combine multiple under-sampled images with sub-pixel displacements to improve the spatial resolution of Landsat TM acquisitions. Due to the drawbacks of frequency-domain algorithms, such as the difficulty of incorporating prior information about HR images, many spatial-domain MISR techniques have been proposed over the years @cite_38 . Typical spatial-domain methods include non-uniform interpolation @cite_29 , iterative back-projection (IBP) @cite_1 , projection onto convex sets (POCS) @cite_41 @cite_61 , regularized methods @cite_4 @cite_67 @cite_21 , and sparse coding @cite_27 @cite_17 .
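To make the spatial-domain idea tangible, the following toy Python sketch implements shift-and-add fusion, a simple member of the non-uniform interpolation family: each LR frame is placed on the HR grid according to its shift and overlapping samples are averaged. Known integer sub-pixel shifts and the absence of blur are simplifying assumptions; real methods must estimate sub-pixel motion and deblur.

```python
# Toy shift-and-add MISR fusion (illustrative; assumes known integer shifts).
import numpy as np

def shift_and_add(lr_frames, shifts, scale=2):
    h, w = lr_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        # Place the frame's samples on the HR grid at its sub-pixel offset.
        acc[dy::scale, dx::scale] += frame
        cnt[dy::scale, dx::scale] += 1
    cnt[cnt == 0] = 1  # unobserved HR pixels stay zero instead of dividing by 0
    return acc / cnt
```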
{ "cite_N": [ "@cite_61", "@cite_38", "@cite_67", "@cite_26", "@cite_4", "@cite_41", "@cite_29", "@cite_21", "@cite_1", "@cite_27", "@cite_17" ], "mid": [ "2113397463", "2124875329", "2006262236", "", "2165939075", "", "2135063818", "2102621306", "2087380704", "2590412282", "" ], "abstract": [ "We address the problem of reconstruction of a high-resolution image from a sequence of low-resolution images containing arbitrary relative motion, excluding occlusion effects. We develop a formulation that simultaneously takes into account blurring due to relative sensor-object motion, sensor integration, and additive noise. We propose a POCS-based algorithm for performing the high-resolution reconstruction, and provide experimental results. >", "This paper addresses the problem of recovering a super-resolved image from a set of warped blurred and decimated versions thereof. Several algorithms have already been proposed for the solution of this general problem. In this paper, we concentrate on a special case where the warps are pure translations, the blur is space invariant and the same for all the images, and the noise is white. We exploit previous results to develop a new highly efficient super-resolution reconstruction algorithm for this case, which separates the treatment into de-blurring and measurements fusion. The fusion part is shown to be a very simple non-iterative algorithm, preserving the optimality of the entire reconstruction process, in the maximum-likelihood sense. Simulations demonstrate the capabilities of the proposed algorithm.", "In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples", "", "Super-resolution reconstruction produces one or a set of high-resolution images from a set of low-resolution images. In the last two decades, a variety of super-resolution methods have been proposed. These methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. We propose an alternate approach using L sub 1 norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. This computationally inexpensive method is robust to errors in motion and blur estimation and results in images with sharp edges. Simulation results confirm the effectiveness of our method and demonstrate its superiority to other super-resolution methods.", "", "An algorithm based on spatial tessellation and approximation of each triangle patch in the Delaunay (1934) triangulation (with smoothness constraints) by a bivariate polynomial is advanced to construct a high resolution (HR) high quality image from a set of low resolution (LR) frames. The high resolution algorithm is accompanied by a site-insertion algorithm for update of the initial HR image with the availability of more LR frames till the desired image quality is attained. 
This algorithm, followed by post filtering, is suitable for real-time image sequence processing because of the fast expected (average) time construction of Delaunay triangulation and the local update feature.", "In this paper, we propose a super-resolution image reconstruction algorithm to moderate-resolution imaging spectroradiometer (MODIS) remote sensing images. This algorithm consists of two parts: registration and reconstruction. In the registration part, a truncated quadratic cost function is used to exclude the outlier pixels, which strongly deviate from the registration model. Accurate photometric and geometric registration parameters can be obtained simultaneously. In the reconstruction part, the L1 norm data fidelity term is chosen to reduce the effects of inevitable registration error, and a Huber prior is used as regularization to preserve sharp edges in the reconstructed image. In this process, the outliers are excluded again to enhance the robustness of the algorithm. The proposed algorithm has been tested using real MODIS band-4 images, which were captured in different dates. The experimental results and comparative analyses verify the effectiveness of this algorithm.", "Abstract Image resolution can be improved when the relative displacements in image sequences are known accurately, and some knowledge of the imaging process is available. The proposed approach is similar to back-projection used in tomography. Examples of improved image resolution are given for gray-level and color images, when the unknown image displacements are computed from the image sequence.", "A number of image super resolution algorithms based on the sparse coding have successfully implemented multi-frame super resolution in recent years. In order to utilize multiple low-resolution observations, both accurate image registration and sparse coding are required. Previous study on multi-frame super resolution based on sparse coding firstly apply block matching for image registration, followed by sparse coding to enhance the image resolution. In this paper, these two problems are solved by optimizing a single objective function. The proposed formulation not only has a mathematically interesting structure, called the double sparsity, but also yields comparable or improved numerical performance to conventional methods.", "" ] }
1907.06490
2957558419
Recently, convolutional neural networks (CNN) have been successfully applied to many remote sensing problems. However, deep learning techniques for multi-image super-resolution from multitemporal unregistered imagery have received little attention so far. This work proposes a novel CNN-based technique that exploits both spatial and temporal correlations to combine multiple images. This novel framework integrates the spatial registration task directly inside the CNN, and exploits the representation learning capabilities of the network to enhance registration accuracy. The entire super-resolution process relies on a single CNN with three main stages: shared 2D convolutions to extract high-dimensional features from the input images; a subnetwork proposing registration filters derived from the high-dimensional feature representations; and 3D convolutions for slow fusion of the features from multiple images. The whole network can be trained end-to-end to recover a single high-resolution image from multiple unregistered low-resolution images. The method presented in this paper is the winner of the PROBA-V super-resolution challenge issued by the European Space Agency.
The iterative back-projection (IBP) approach was introduced by Irani and Peleg @cite_1 . IBP improves an initial guess of the super-resolved image by back-projecting the difference between simulated LR images and the actual LR images onto the SR image. The updates are performed iteratively, attempting to invert the forward imaging process. Its drawbacks are the inability to deal with image degradation processes that are unknown or very difficult to model, as well as the difficulty of including image priors.
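A hedged Python sketch of the IBP scheme just described is given below; the degradation model (2x average pooling with known integer shifts), the step size and the iteration count are illustrative assumptions, whereas real IBP must also handle blur and estimated sub-pixel motion.

```python
# Toy iterative back-projection (IBP) sketch, following the scheme above.
import numpy as np

def simulate_lr(hr, dy, dx, scale=2):
    # Assumed forward model: integer shift followed by 2x average pooling.
    shifted = np.roll(np.roll(hr, -dy, axis=0), -dx, axis=1)
    h, w = shifted.shape
    return shifted.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

def ibp(lr_frames, shifts, scale=2, iters=30, step=1.0):
    sr = np.kron(lr_frames[0], np.ones((scale, scale)))  # initial HR guess
    for _ in range(iters):
        for lr, (dy, dx) in zip(lr_frames, shifts):
            # Back-project the residual between observed and simulated LR frames.
            residual = lr - simulate_lr(sr, dy, dx, scale)
            up = np.kron(residual, np.ones((scale, scale)))
            sr += (step / len(lr_frames)) * np.roll(np.roll(up, dy, axis=0), dx, axis=1)
    return sr
```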
{ "cite_N": [ "@cite_1" ], "mid": [ "2087380704" ], "abstract": [ "Abstract Image resolution can be improved when the relative displacements in image sequences are known accurately, and some knowledge of the imaging process is available. The proposed approach is similar to back-projection used in tomography. Examples of improved image resolution are given for gray-level and color images, when the unknown image displacements are computed from the image sequence." ] }
1907.06632
2956348853
In this paper, we present the Metamorphic Testing of an in-use deep learning based forecasting application. The application looks at the past data of system characteristics (e.g. 'memory allocation') to predict outages in the future. We focus on two statistical machine learning based components - a) detection of correlation between system characteristics and b) estimating the future value of a system characteristic using an LSTM (a deep learning architecture). In total, 19 Metamorphic Relations have been developed and we provide proofs & algorithms where applicable. We evaluated our method through two settings. In the first, we executed the relations on the actual application and uncovered 8 issues not known before. Second, we generated hypothetical bugs, through Mutation Testing, on a reference implementation of the LSTM based forecaster and found that 65.9% of the bugs were caught through the relations.
However, this standard process of 'training' and 'validation' has various deficiencies. Subtle implementation mistakes sometimes produce no obvious signals during training and validation and may go undetected @cite_14 @cite_4 @cite_24 . It is also often the case that validation data does not represent the true data that would occur in production @cite_3 . It is further speculated that, due to common properties shared by the training and validation data, the ML application ends up learning statistical regularities in the data rather than the underlying semantic concepts @cite_18 . The training data may have problems, such as certain scenarios being underrepresented, or the training algorithm may have issues, such as amplifying certain biases present in the data @cite_5 . Finally, the ML algorithm may learn to produce the correct output, but for the wrong reasons @cite_10 .
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_4", "@cite_3", "@cite_24", "@cite_5", "@cite_10" ], "mid": [ "2774616426", "2859484040", "", "2767414122", "2963808954", "2950018712", "2282821441" ], "abstract": [ "Deep CNNs are known to exhibit the following peculiarity: on the one hand they generalize extremely well to a test set, while on the other hand they are extremely sensitive to so-called adversarial perturbations. The extreme sensitivity of high performance CNNs to adversarial examples casts serious doubt that these networks are learning high level abstractions in the dataset. We are concerned with the following question: How can a deep CNN that does not learn any high level semantics of the dataset manage to generalize so well? The goal of this article is to measure the tendency of CNNs to learn surface statistical regularities of the dataset. To this end, we use Fourier filtering to construct datasets which share the exact same high level abstractions but exhibit qualitatively different surface statistical regularities. For the SVHN and CIFAR-10 datasets, we present two Fourier filtered variants: a low frequency variant and a randomly filtered variant. Each of the Fourier filtering schemes is tuned to preserve the recognizability of the objects. Our main finding is that CNNs exhibit a tendency to latch onto the Fourier image statistics of the training dataset, sometimes exhibiting up to a 28 generalization gap across the various test sets. Moreover, we observe that significantly increasing the depth of a network has a very marginal impact on closing the aforementioned generalization gap. Thus we provide quantitative evidence supporting the hypothesis that deep CNNs tend to learn surface statistical regularities in the dataset rather than higher-level abstract concepts.", "We have recently witnessed tremendous success of Machine Learning (ML) in practical applications. Computer vision, speech recognition and language translation have all seen a near human level performance. We expect, in the near future, most business applications will have some form of ML. However, testing such applications is extremely challenging and would be very expensive if we follow today's methodologies. In this work, we present an articulation of the challenges in testing ML based applications. We then present our solution approach, based on the concept of Metamorphic Testing, which aims to identify implementation bugs in ML based image classifiers. We have developed metamorphic relations for an application based on Support Vector Machine and a Deep Learning based application. Empirical validation showed that our approach was able to catch 71 of the implementation bugs in the ML applications.", "", "We consider the problem of detecting out-of-distribution images in neural networks. We propose ODIN, a simple and effective method that does not require any change to a pre-trained neural network. Our method is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions between in- and out-of-distribution images, allowing for more effective detection. We show in a series of experiments that ODIN is compatible with diverse network architectures and datasets. It consistently outperforms the baseline approach by a large margin, establishing a new state-of-the-art performance on this task. 
For example, ODIN reduces the false positive rate from the baseline 34.7 to 4.3 on the DenseNet (applied to CIFAR-10) when the true positive rate is 95 .", "", "The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to \"debias\" the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving the its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.", "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted." ] }
1907.06632
2956348853
In this paper, we present the Metamorphic Testing of an in-use deep learning based forecasting application. The application looks at the past data of system characteristics (e.g. 'memory allocation') to predict outages in the future. We focus on two statistical machine learning based components - a) detection of correlation between system characteristics and b) estimating the future value of a system characteristic using an LSTM (a deep learning architecture). In total, 19 Metamorphic Relations have been developed and we provide proofs & algorithms where applicable. We evaluated our method through two settings. In the first, we executed the relations on the actual application and uncovered 8 issues not known before. Second, we generated hypothetical bugs, through Mutation Testing, on a reference implementation of the LSTM based forecaster and found that 65.9% of the bugs were caught through the relations.
Thus, there has been growing interest in the effective testing of ML based applications. Some of the recent work includes measuring the invariance of image classifiers to rotations and translations @cite_17 , to changes in image characteristics such as contrast @cite_11 @cite_0 @cite_1 , and to the introduction of spurious objects into an image @cite_2 . There has also been investigation into the generation of adversarial inputs for an ML application, where the inputs are specifically crafted to cause the application to give a wrong output @cite_6 @cite_16 . Efforts have been made to detect and mitigate instances of bias in training data @cite_7 and in ML algorithms @cite_5 . Finally, interpreting the decisions of an ML algorithm has been studied as well @cite_10 @cite_8 .
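As a concrete example of such a robustness check phrased as a metamorphic test, the sketch below verifies that a classifier's prediction is unchanged under small rotations; `classifier` is a hypothetical callable returning a class label, and the angle range and trial count are illustrative assumptions.

```python
# Metamorphic test sketch: prediction(rotate(x)) should equal prediction(x).
import numpy as np
from scipy.ndimage import rotate

def test_rotation_invariance(classifier, images, max_deg=10, trials=5, seed=0):
    rng = np.random.default_rng(seed)
    violations = []
    for i, img in enumerate(images):
        base = classifier(img)  # source test case output
        for _ in range(trials):
            angle = rng.uniform(-max_deg, max_deg)
            follow_up = rotate(img, angle, reshape=False, mode="nearest")
            if classifier(follow_up) != base:  # metamorphic relation violated
                violations.append((i, angle))
    return violations
```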
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_1", "@cite_17", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "2962787423", "2563486500", "2616028256", "2773726006", "1673923490", "2753704268", "2885106262", "2950018712", "2274565976", "2282821441", "2662969263" ], "abstract": [ "", "Predictive models are increasingly deployed for the purpose of determining access to services such as credit, insurance, and employment. Despite potential gains in productivity and efficiency, several potential problems have yet to be addressed, particularly the potential for unintentional discrimination. We present an iterative procedure, based on orthogonal projection of input attributes, for enabling interpretability of black-box predictive models. Through our iterative procedure, one can quantify the relative dependence of a black-box model on its input attributes.The relative significance of the inputs to a predictive model can then be used to assess the fairness (or discriminatory extent) of such a model.", "Deep learning (DL) systems are increasingly deployed in safety- and security-critical domains including self-driving cars and malware detection, where the correctness and predictability of a system's behavior for corner case inputs are of great importance. Existing DL testing depends heavily on manually labeled data and therefore often fails to expose erroneous behaviors for rare inputs. We design, implement, and evaluate DeepXplore, the first whitebox framework for systematically testing real-world DL systems. First, we introduce neuron coverage for systematically measuring the parts of a DL system exercised by test inputs. Next, we leverage multiple DL systems with similar functionality as cross-referencing oracles to avoid manual checking. Finally, we demonstrate how finding inputs for DL systems that both trigger many differential behaviors and achieve high neuron coverage can be represented as a joint optimization problem and solved efficiently using gradient-based search techniques. DeepXplore efficiently finds thousands of incorrect corner case behaviors (e.g., self-driving cars crashing into guard rails and malware masquerading as benign software) in state-of-the-art DL models with thousands of neurons trained on five popular datasets including ImageNet and Udacity self-driving challenge data. For all tested DL models, on average, DeepXplore generated one test input demonstrating incorrect behavior within one second while running only on a commodity laptop. We further show that the test inputs generated by DeepXplore can also be used to retrain the corresponding DL model to improve the model's accuracy by up to 3 .", "Recent work has shown that neural network-based vision classifiers exhibit a significant vulnerability to misclassifications caused by imperceptible but adversarial perturbations of their inputs. These perturbations, however, are purely pixel-wise and built out of loss function gradients of either the attacked model or its surrogate. As a result, they tend to be contrived and look pretty artificial. This might suggest that such vulnerability to slight input perturbations can only arise in a truly adversarial setting and thus is unlikely to be an issue in more \"natural\" contexts. In this paper, we provide evidence that such belief might be incorrect. 
We demonstrate that significantly simpler, and more likely to occur naturally, transformations of the input - namely, rotations and translations alone, suffice to significantly degrade the classification performance of neural network-based vision models across a spectrum of datasets. This remains to be the case even when these models are trained using appropriate data augmentation. Finding such \"fooling\" transformations does not require having any special access to the model - just trying out a small number of random rotation and translation combinations already has a significant effect. These findings suggest that our current neural network-based vision models might not be as reliable as we tend to assume. Finally, we consider a new class of perturbations that combines rotations and translations with the standard pixel-wise attacks. We observe that these two types of input transformations are, in a sense, orthogonal to each other. Their effect on the performance of the model seems to be additive, while robustness to one type does not seem to affect the robustness to the other type. This suggests that this combined class of transformations is a more complete notion of similarity in the context of adversarial robustness of vision models.", "Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.", "Recent advances in Deep Neural Networks (DNNs) have led to the development of DNN-driven autonomous cars that, using sensors like camera, LiDAR, etc., can drive without any human intervention. Most major manufacturers including Tesla, GM, Ford, BMW, and Waymo/Google are working on building and testing different types of autonomous vehicles. The lawmakers of several US states including California, Texas, and New York have passed new legislation to fast-track the process of testing and deployment of autonomous vehicles on their roads. However, despite their spectacular progress, DNNs, just like traditional software, often demonstrate incorrect or unexpected corner case behaviors that can lead to potentially fatal collisions. Several such real-world accidents involving autonomous cars have already happened including one which resulted in a fatality. Most existing testing techniques for DNN-driven vehicles are heavily dependent on the manual collection of test data under different driving conditions which become prohibitively expensive as the number of test conditions increases. 
In this paper, we design, implement and evaluate DeepTest, a systematic testing tool for automatically detecting erroneous behaviors of DNN-driven vehicles that can potentially lead to fatal crashes. First, our tool is designed to automatically generate test cases leveraging real-world changes in driving conditions like rain, fog, lighting conditions, etc. DeepTest systematically explores different parts of the DNN logic by generating test inputs that maximize the numbers of activated neurons. DeepTest found thousands of erroneous behaviors under different realistic driving conditions (e.g., blurring, rain, fog, etc.) many of which lead to potentially fatal crashes in three top performing DNNs in the Udacity self-driving car challenge.", "We showcase a family of common failures of state-of-the-art object detectors. These are obtained by replacing image sub-regions by another sub-image that contains a trained object. We call this \"object transplanting\". Modifying an image in this manner is shown to have a non-local impact on object detection. Slight changes in object position can affect its identity according to an object detector as well as that of other objects in the image. We provide some analysis and suggest possible reasons for the reported phenomena.", "The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to \"debias\" the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.", "Advances in deep learning have led to the broad adoption of Deep Neural Networks (DNNs) to a range of important machine learning problems, e.g., guiding autonomous vehicles, speech recognition, malware detection. Yet, machine learning models, including DNNs, were shown to be vulnerable to adversarial samples - subtly (and often humanly indistinguishably) modified malicious inputs crafted to compromise the integrity of their outputs. Adversarial examples thus enable adversaries to manipulate system behaviors. Potential attacks include attempts to control the behavior of vehicles, have spam content identified as legitimate content, or have malware identified as legitimate software. 
Adversarial examples are known to transfer from one model to another, even if the second model has a different architecture or was trained on a different set. We introduce the first practical demonstration that this cross-model transfer phenomenon enables attackers to control a remotely hosted DNN with no access to the model, its parameters, or its training data. In our demonstration, we only assume that the adversary can observe outputs from the target DNN given inputs chosen by the adversary. We introduce the attack strategy of fitting a substitute model to the input-output pairs in this manner, then crafting adversarial examples based on this auxiliary model. We evaluate the approach on existing DNN datasets and real-world settings. In one experiment, we force a DNN supported by MetaMind (one of the online APIs for DNN classifiers) to mis-classify inputs at a rate of 84.24%. We conclude with experiments exploring why adversarial samples transfer between DNNs, and a discussion on the applicability of our attack when targeting machine learning algorithms distinct from DNNs.", "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.", "Human visual object recognition is typically rapid and seemingly effortless, as well as largely independent of viewpoint and object orientation. Until very recently, animate visual systems were the only ones capable of this remarkable computational feat. This has changed with the rise of a class of computer vision algorithms called deep neural networks (DNNs) that achieve human-level classification performance on object recognition tasks. Furthermore, a growing number of studies report similarities in the way DNNs and the human visual system process objects, suggesting that current DNNs may be good models of human visual object recognition. Yet there clearly exist important architectural and processing differences between state-of-the-art DNNs and the primate visual system. The potential behavioural consequences of these differences are not well understood. We aim to address this issue by comparing human and DNN generalisation abilities towards image degradations. We find the human visual system to be more robust to image manipulations like contrast reduction, additive noise or novel eidolon-distortions. 
In addition, we find progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker, indicating that there may still be marked differences in the way humans and current DNNs perform visual object recognition. We envision that our findings as well as our carefully measured and freely available behavioural datasets provide a new useful benchmark for the computer vision community to improve the robustness of DNNs and a motivation for neuroscientists to search for mechanisms in the brain that could facilitate this robustness." ] }
1907.06632
2956348853
In this paper, we present the Metamorphic Testing of an in-use deep learning based forecasting application. The application looks at the past data of system characteristics (e.g., memory allocation) to predict outages in the future. We focus on two statistical machine learning based components: a) detection of correlation between system characteristics and b) estimation of the future value of a system characteristic using an LSTM (a deep learning architecture). In total, 19 Metamorphic Relations have been developed and we provide proofs and algorithms where applicable. We evaluated our method through two settings. In the first, we executed the relations on the actual application and uncovered 8 issues not known before. Second, we generated hypothetical bugs, through Mutation Testing, on a reference implementation of the LSTM based forecaster and found that 65.9% of the bugs were caught through the relations.
In this paper, we build on @cite_14 to explore the testing of an ML application with a focus on identifying implementation bugs, approaching the problem through Metamorphic Testing. Existing work on the Metamorphic Testing of ML and statistical applications includes the testing of the Naive-Bayes classifier @cite_9 @cite_19 , the Support Vector Machine with a linear kernel @cite_20 , and the k-nearest neighbor algorithm @cite_9 @cite_19 . Here we work with a statistical algorithm (the correlation coefficient) and a Deep Learning based LSTM network, neither of which has been studied earlier. Further, we report results of the approach on an in-use application and its efficacy in catching implementation bugs.
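One of the simplest relations for a correlation component can be stated and checked directly: the Pearson coefficient is invariant under positive affine transformations of either series. The sketch below illustrates this; it is a minimal example of the technique, not one of the paper's 19 relations, and corr_under_test is a placeholder for the implementation being tested.

# Metamorphic check: pearson(a*x + b, y) == pearson(x, y) for any a > 0.
# A violation points to an implementation bug in `corr_under_test`.
import numpy as np

def check_affine_invariance(corr_under_test, x, y, a=2.5, b=-7.0, tol=1e-8):
    source = corr_under_test(x, y)              # source test case
    follow_up = corr_under_test(a * x + b, y)   # metamorphic follow-up case
    return abs(source - follow_up) <= tol

rng = np.random.default_rng(0)
x, y = rng.normal(size=500), rng.normal(size=500)
assert check_affine_invariance(lambda u, v: np.corrcoef(u, v)[0, 1], x, y)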
{ "cite_N": [ "@cite_19", "@cite_9", "@cite_14", "@cite_20" ], "mid": [ "2041650849", "2112265708", "2859484040", "197000801" ], "abstract": [ "Abstract: Machine learning algorithms have provided core functionality to many application domains - such as bioinformatics, computational linguistics, etc. However, it is difficult to detect faults in such applications because often there is no ''test oracle'' to verify the correctness of the computed outputs. To help address the software quality, in this paper we present a technique for testing the implementations of machine learning classification algorithms which support such applications. Our approach is based on the technique ''metamorphic testing'', which has been shown to be effective to alleviate the oracle problem. Also presented include a case study on a real-world machine learning application framework, and a discussion of how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also conduct mutation analysis and cross-validation, which reveal that our method has high effectiveness in killing mutants, and that observing expected cross-validation result alone is not sufficiently effective to detect faults in a supervised classification program. The effectiveness of metamorphic testing is further confirmed by the detection of real faults in a popular open-source classification program.", "Many applications in the field of scientific computing - such as computational biology, computational linguistics, and others - depend on Machine Learning algorithms to provide important core functionality to support solutions in the particular problem domains. However, it is difficult to test such applications because often there is no \"test oracle\" to indicate what the correct output should be for arbitrary input. To help address the quality of such software, in this paper we present a technique for testing the implementations of supervised machine learning classification algorithms on which such scientific computing software depends. Our technique is based on an approach called \"metamorphic testing\", which has been shown to be effective in such cases. More importantly, we demonstrate that our technique not only serves the purpose of verification, but also can be applied in validation. In addition to presenting our technique, we describe a case study we performed on a real-world machine learning application framework, and discuss how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also discuss how our findings can be of use to other areas outside scientific computing, as well.", "We have recently witnessed tremendous success of Machine Learning (ML) in practical applications. Computer vision, speech recognition and language translation have all seen a near human level performance. We expect, in the near future, most business applications will have some form of ML. However, testing such applications is extremely challenging and would be very expensive if we follow today's methodologies. In this work, we present an articulation of the challenges in testing ML based applications. We then present our solution approach, based on the concept of Metamorphic Testing, which aims to identify implementation bugs in ML based image classifiers. We have developed metamorphic relations for an application based on Support Vector Machine and a Deep Learning based application. 
Empirical validation showed that our approach was able to catch 71% of the implementation bugs in the ML applications.", "Software testing of applications in fields like scientific computing, simulation, machine learning, etc. is particularly challenging because many applications in these domains have no reliable “test oracle” to indicate whether the program’s output is correct when given arbitrary input. A common approach to testing such applications has been to use a “pseudo-oracle”, in which multiple independently-developed implementations of an algorithm process an input and the results are compared: if the results are not the same, then at least one of the implementations contains a defect. Other approaches include the use of program invariants, formal specification languages, trace and log file analysis, and metamorphic testing. In this paper, we present the results of two empirical studies in which we compare the effectiveness of some of these approaches, including metamorphic testing and runtime assertion checking. These results demonstrate that metamorphic testing is generally more effective at revealing defects in applications without test oracles in various application domains, including non-deterministic programs. We also analyze the results in terms of the software development process, and discuss suggestions for both practitioners and researchers who need to test software without the help of a test oracle." ] }
1901.05195
2910360141
Current state-of-the-art solutions in the control of an autonomous vehicle mainly use supervised end-to-end learning, or decoupled perception, planning and action pipelines. Another possible solution is deep reinforcement learning, but such a method requires that the agent interacts with its surroundings in a simulated environment. In this paper we introduce GridSim, which is an autonomous driving simulator engine running a car-like robot architecture to generate occupancy grids from simulated sensors. We use GridSim to study the performance of two deep learning approaches: deep reinforcement learning and driving behavioral learning through genetic algorithms. The deep network encodes the desired behavior in a two-element fitness function describing a maximum travel distance and a maximum forward speed, bounded to a specific interval. The algorithms are evaluated on simulated highways, curved roads and inner-city scenarios, all including different driving limitations.
The ability of an autonomous car to navigate without human input has become a mainstream research topic in the quest for autonomous driving. In this paper we propose a simulated environment engine for learning autonomous driving behaviors, entitled GridSim (see Fig. ). The simulator uses an Occupancy Grid (OG) sensor model for interacting with the simulated environment. As shown in Fig. , we use GridSim's synthetic information to train two types of learning algorithms commonly used in Autonomous Driving: a Deep Q-Network (DQN) agent @cite_1 and a Deep Neuroevolutionary agent trained via an evolutionary method @cite_5 .
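The excerpt does not spell out the fitness function; the sketch below is one plausible reading of a two-element fitness over travel distance and forward speed bounded to an interval, with all names, weights and bounds being illustrative assumptions rather than the paper's published values.

# Hedged sketch of a two-element driving fitness: reward distance travelled
# plus forward speed clipped to an allowed interval [v_min, v_max].
def driving_fitness(travel_distance, forward_speed,
                    v_min=5.0, v_max=20.0, w_dist=1.0, w_speed=0.5):
    bounded_speed = min(max(forward_speed, v_min), v_max)
    return w_dist * travel_distance + w_speed * bounded_speed

Each candidate network in the evolved population would then be scored by rolling out an episode in the simulator and applying such a fitness to the outcome.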
{ "cite_N": [ "@cite_5", "@cite_1" ], "mid": [ "2778749116", "2583993537" ], "abstract": [ "Deep artificial neural networks (DNNs) are typically trained via gradient-based learning algorithms, namely backpropagation. Evolution strategies (ES) can rival backprop-based algorithms such as Q-learning and policy gradients on challenging deep reinforcement learning (RL) problems. However, ES can be considered a gradient-based algorithm because it performs stochastic gradient descent via an operation similar to a finite-difference approximation of the gradient. That raises the question of whether non-gradient-based evolutionary algorithms can work at DNN scales. Here we demonstrate they can: we evolve the weights of a DNN with a simple, gradient-free, population-based genetic algorithm (GA) and it performs well on hard deep RL problems, including Atari and humanoid locomotion. The Deep GA successfully evolves networks with over four million free parameters, the largest neural networks ever evolved with a traditional evolutionary algorithm. These results (1) expand our sense of the scale at which GAs can operate, (2) suggest intriguingly that in some cases following the gradient is not the best choice for optimizing performance, and (3) make immediately available the multitude of techniques that have been developed in the neuroevolution community to improve performance on RL problems. To demonstrate the latter, we show that combining DNNs with novelty search, which was designed to encourage exploration on tasks with deceptive or sparse reward functions, can solve a high-dimensional problem on which reward-maximizing algorithms (e.g. DQN, A3C, ES, and the GA) fail. Additionally, the Deep GA parallelizes better than ES, A3C, and DQN, and enables a state-of-the-art compact encoding technique that can represent million-parameter DNNs in thousands of bytes.", "Reinforcement learning is considered to be a strong AI paradigm which can be used to teach machines through interaction with the environment and learning from their mistakes. Despite its perceived utility, it has not yet been successfully applied in automotive applications. Motivated by the successful demonstrations of learning of Atari games and Go by Google DeepMind, we propose a framework for autonomous driving using deep reinforcement learning. This is of particular relevance as it is difficult to pose autonomous driving as a supervised learning problem due to strong interactions with the environment including other vehicles, pedestrians and roadworks. As it is a relatively new area of research for autonomous driving, we provide a short overview of deep reinforcement learning and then describe our proposed framework. It incorporates Recurrent Neural Networks for information integration, enabling the car to handle partially observable scenarios. It also integrates the recent work on attention models to focus on relevant information, thereby reducing the computational complexity for deployment on embedded hardware. The framework was tested in an open source 3D car racing simulator called TORCS. Our simulation results demonstrate learning of autonomous maneuvering in a scenario of complex road curvatures and simple interaction of other vehicles." ] }
1901.05195
2910360141
Current state-of-the-art solutions in the control of an autonomous vehicle mainly use supervised end-to-end learning, or decoupled perception, planning and action pipelines. Another possible solution is deep reinforcement learning, but such a method requires that the agent interacts with its surroundings in a simulated environment. In this paper we introduce GridSim, which is an autonomous driving simulator engine running a car-like robot architecture to generate occupancy grids from simulated sensors. We use GridSim to study the performance of two deep learning approaches: deep reinforcement learning and driving behavioral learning through genetic algorithms. The deep network encodes the desired behavior in a two-element fitness function describing a maximum travel distance and a maximum forward speed, bounded to a specific interval. The algorithms are evaluated on simulated highways, curved roads and inner-city scenarios, all including different driving limitations.
An AV must be able to sense its own surroundings and form an environment model consisting of moving and stationary objects @cite_3 , and to further use this information to learn long-term driving strategies. These driving policies govern the vehicle's motion @cite_2 and automatically output control signals for the steering wheel, throttle, and brake. Reinforcement learning has been applied to a wide variety of robotics-related tasks, such as robot locomotion and autonomous driving @cite_4 . However, DRL requires the agent to interact with its environment. The reward is used as a pseudo-label for training a DNN, which is then used to estimate an action-value function, also known as a Q-value function, that approximates the next best driving actions. This is in contrast to end2end learning, where labeled training data has to be provided. The DeepTraffic competition from MIT @cite_0 is a good example of RL in simulated environments: the control system of the ego-car is handled by a DQN agent, which uses a discrete occupancy grid as a simplified representation of the environment.
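As a minimal sketch of the Q-value estimation just described (the standard DQN Bellman target with the reward acting as a pseudo-label; the network interfaces below are assumptions, not GridSim's actual API):

# `q_net` and `target_net` are assumed to map a batch of occupancy-grid states
# to per-action Q-values; the reward enters the loss as a pseudo-label via the
# Bellman backup.
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, states, actions, rewards, next_states, dones, gamma=0.99):
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():                              # targets are held fixed
        next_q = target_net(next_states).max(dim=1).values
        targets = rewards + gamma * (1.0 - dones) * next_q
    return F.mse_loss(q_values, targets)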
{ "cite_N": [ "@cite_0", "@cite_4", "@cite_3", "@cite_2" ], "mid": [ "2784064751", "2810785043", "2609532991", "2343568200" ], "abstract": [ "We present a micro-traffic simulation (named \"DeepTraffic\") where the perception, control, and planning systems for one of the cars are all handled by a single neural network as part of a model-free, off-policy reinforcement learning process. The primary goal of DeepTraffic is to make the hands-on study of deep reinforcement learning accessible to thousands of students, educators, and researchers in order to inspire and fuel the exploration and evaluation of DQN variants and hyperparameter configurations through large-scale, open competition. This paper investigates the crowd-sourced hyperparameter tuning of the policy network that resulted from the first iteration of the DeepTraffic competition where thousands of participants actively searched through the hyperparameter space with the objective of their neural network submission to make it onto the top-10 leaderboard.", "In this paper, we study the problem of learning vision-based dynamic manipulation skills using a scalable reinforcement learning approach. We study this problem in the context of grasping, a longstanding challenge in robotic manipulation. In contrast to static learning behaviors that choose a grasp point and then execute the desired grasp, our method enables closed-loop vision-based control, whereby the robot continuously updates its grasp strategy based on the most recent observations to optimize long-horizon grasp success. To that end, we introduce QT-Opt, a scalable self-supervised vision-based reinforcement learning framework that can leverage over 580k real-world grasp attempts to train a deep neural network Q-function with over 1.2M parameters to perform closed-loop, real-world grasping that generalizes to 96 grasp success on unseen objects. Aside from attaining a very high success rate, our method exhibits behaviors that are quite distinct from more standard grasping systems: using only RGB vision-based perception from an over-the-shoulder camera, our method automatically learns regrasping strategies, probes objects to find the most effective grasps, learns to reposition objects and perform other non-prehensile pre-grasp manipulations, and responds dynamically to disturbances and perturbations.", "Recent years have witnessed amazing progress in AI related fields such as computer vision, machine learning and autonomous vehicles. As with any rapidly growing field, however, it becomes increasingly difficult to stay up-to-date or enter the field as a beginner. While several topic specific survey papers have been written, to date no general survey on problems, datasets and methods in computer vision for autonomous vehicles exists. This paper attempts to narrow this gap by providing a state-of-the-art survey on this topic. Our survey includes both the historically most relevant literature as well as the current state-of-the-art on several specific topics, including recognition, reconstruction, motion estimation, tracking, scene understanding and end-to-end learning. Towards this goal, we first provide a taxonomy to classify each approach and then analyze the performance of the state-of-the-art on several challenging benchmarking datasets including KITTI, ISPRS, MOT and Cityscapes. Besides, we discuss open problems and current research challenges. 
To ease accessibility and accommodate missing references, we will also provide an interactive platform which allows to navigate topics and methods, and provides additional information and project links for each paper.", "Self-driving vehicles are a maturing technology with the potential to reshape mobility by enhancing the safety, accessibility, efficiency, and convenience of automotive transportation. Safety-critical tasks that must be executed by a self-driving vehicle include planning of motions through a dynamic environment shared with other vehicles and pedestrians, and their robust executions via feedback control. The objective of this paper is to survey the current state of the art on planning and control algorithms with particular regard to the urban setting. A selection of proposed techniques is reviewed along with a discussion of their effectiveness. The surveyed approaches differ in the vehicle mobility model used, in assumptions on the structure of the environment, and in computational requirements. The side by side comparison presented in this survey helps to gain insight into the strengths and limitations of the reviewed approaches and assists with system level design choices." ] }
1901.04980
2909141925
Dobrushin (1972) showed that the interface of a 3D Ising model with minus boundary conditions above the @math -plane and plus below is rigid (has @math -fluctuations) at every sufficiently low temperature. Since then, basic features of this interface -- such as the asymptotics of its maximum -- were only identified in more tractable random surface models that approximate the Ising interface at low temperatures, e.g., for the (2+1)D Solid-On-Solid model. Here we study the large deviations of the interface of the 3D Ising model in a cube of side-length @math with Dobrushin's boundary conditions, and in particular obtain a law of large numbers for @math , its maximum: if the inverse-temperature @math is large enough, then @math as @math , in probability, where @math is given by a large deviation rate in infinite volume. We further show that, on the large deviation event that the interface connects the origin to height @math , it consists of a 1D spine that behaves like a random walk, in that it decomposes into a linear (in @math ) number of asymptotically-stationary weakly-dependent increments that have exponential tails. As the number @math of increments diverges, properties of the interface such as its surface area, volume, and the location of its tip, all obey CLTs with variances linear in @math . These results generalize to every dimension @math .
Subsequently, cluster expansion was instrumental in analyzing the analogous interface in two dimensions. This line of work culminated in the seminal monograph @cite_32 , showing that the shape of a macroscopic minus droplet in the plus phase takes after the Wulff shape, the convex body minimizing the surface energy to volume ratio (where the former is expressed in terms of an explicit, analytic surface tension @math ). Microscopic properties of an interface of angle @math in an @math box are by now also very well understood, with fluctuations on @math scales, and a scaling limit to a Brownian bridge @cite_12 @cite_32 @cite_21 ; these hold up to the critical @math @cite_1 @cite_39 . Delicate aspects such as wetting and entropic repulsion have also been extensively studied via these tools.
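For reference, the variational problem behind the Wulff shape can be stated as follows (a standard formulation in our own notation, not a quotation from @cite_32 ): among bodies of fixed volume in dimension d (here d = 2), one minimizes the surface energy

\[ \min\Big\{ \int_{\partial V} \tau(n_x)\, d\mathcal{H}^{d-1}(x) \;:\; |V| = v \Big\}, \]

and the minimizer is a dilate of the convex body produced by the classical Wulff construction from the surface tension,

\[ \mathcal{W}_\tau \;=\; \bigcap_{n \in \mathbb{S}^{d-1}} \big\{ x : x \cdot n \le \tau(n) \big\}. \]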
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_32", "@cite_39", "@cite_12" ], "mid": [ "1986960238", "1968753606", "", "2082053835", "2226065225" ], "abstract": [ "The aim of this note is to discuss some statistical properties of the phase separation line in the 2D low-temperature Ising model. We prove the functional central limit theorem for the probability distributions describing fluctuations of the phase boundary in the direction orthogonal to its orientation. The limiting Gaussian measure corresponds to a scaled Brownian bridge with direction dependent parameters. Up to the temperature factor, the variances of local increments of this limiting process are inversely proportional to the stiffness.", "We show that a lower large-deviation bound for the block-spin magnetization in the 2D Ising model can be pushed all the way forward toward its correct “Wulff” value for all β>βc.", "", "We prove an upper large deviation bound for the block spin magnetization in the 2D Ising model in the phase coexistence region. The precise rate (given by the Wulff construction) is shown to hold true for all β > βc. Combined with the lower bounds derived in [I] those results yield an exact second order large deviation theory up to the critical temperature.", "We discuss some statistical properties of the phase boundary in the 2D low-temperature Ising ferromagnet in a box with the two-component boundary conditions. We prove the weak convergence in C[0,1] of measures describing the fluctuations of phase boundaries in the canonical ensemble of interfaces with fixed endpoints and area enclosed below them. The limiting Gaussian measure coincides with the conditional distribution of certain Gaussian process obtained by the integral transformation of the white noise." ] }
1901.04980
2909141925
Dobrushin (1972) showed that the interface of a 3D Ising model with minus boundary conditions above the @math -plane and plus below is rigid (has @math -fluctuations) at every sufficiently low temperature. Since then, basic features of this interface -- such as the asymptotics of its maximum -- were only identified in more tractable random surface models that approximate the Ising interface at low temperatures, e.g., for the (2+1)D Solid-On-Solid model. Here we study the large deviations of the interface of the 3D Ising model in a cube of side-length @math with Dobrushin's boundary conditions, and in particular obtain a law of large numbers for @math , its maximum: if the inverse-temperature @math is large enough, then @math as @math , in probability, where @math is given by a large deviation rate in infinite volume. We further show that, on the large deviation event that the interface connects the origin to height @math , it consists of a 1D spine that behaves like a random walk, in that it decomposes into a linear (in @math ) number of asymptotically-stationary weakly-dependent increments that have exponential tails. As the number @math of increments diverges, properties of the interface such as its surface area, volume, and the location of its tip, all obey CLTs with variances linear in @math . These results generalize to every dimension @math .
While cluster expansion only converges at sufficiently large @math , it is natural to ask if the rigidity of the interface, and our new results, hold for all @math . This is not believed to be the case, as the Ising model is widely believed to undergo a roughening transition for @math (and no other dimension): much like the SOS and DG approximations, which exhibit phase transitions in @math ---whereby they roughen and resemble the discrete Gaussian free field @cite_7 @cite_33 for small @math ---it is conjectured that for the 3D Ising model there exists a point @math such that, for @math , the model has long-range order, yet the typical fluctuations of its horizontal interface diverge with @math ; proving this transition is a longstanding open problem (see, e.g., @cite_38 @cite_30 ).
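Schematically, the conjectured dichotomy for the variance of the interface height above a point in the bulk of an n x n box reads (a heuristic summary of the conjecture, not a theorem from the cited works):

\[ \operatorname{Var}(h_x) \;=\; \begin{cases} O(1) & \text{for } \beta > \beta_{\mathrm{R}} \quad \text{(rigid phase)},\\ \Theta(\log n) & \text{for } \beta_c < \beta < \beta_{\mathrm{R}} \quad \text{(rough, GFF-like phase)}, \end{cases} \]

where the logarithmic growth in the rough phase mirrors the small-beta behavior of the SOS and DG models and of the 2D discrete Gaussian free field.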
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_33", "@cite_7" ], "mid": [ "2048984514", "1979774889", "2048117538", "2095138947" ], "abstract": [ "We describe inequalities relating to the interface between coexisting phases of Ising ferromagnets. Some implications for the nature of the roughening transition are discussed.", "", "We rigorously establish the existence of a Kosterlitz-Thouless transition in the rotator, the Villain, the solid-on-solid, and the ℤ n models, forn large enough, and in the Coulomb lattice gas, in two dimensions. Our proof is based on an inductive expansion of the Coulomb gas in the sine-Gordon representation, extending over all possible distance scales, which expresses that gas as a convex superposition of dilute gases of neutral molecules whose activities are small if β is sufficiently large. Such gases are known not to exhibit screening. Abelian spin systems are related to a Coulomb gas by means of a duality transformation.", "A convergent low-temperature expansion for a variety of models of twodimensional surfaces is presented. It yields existence of the thermodynamic limit for the pressure and correlation functions as well as analyticity inz =e−β In addition, the estimates give exponential decay of truncated correlations, which proves the existence of a gap in the spectrum of the transfer matrix below the ground state eigenvalue. Two particular examples included in the general framework are the solid-on-solid and discrete Gaussian models." ] }
1901.04980
2909141925
Dobrushin (1972) showed that the interface of a 3D Ising model with minus boundary conditions above the @math -plane and plus below is rigid (has @math -fluctuations) at every sufficiently low temperature. Since then, basic features of this interface -- such as the asymptotics of its maximum -- were only identified in more tractable random surface models that approximate the Ising interface at low temperatures, e.g., for the (2+1)D Solid-On-Solid model. Here we study the large deviations of the interface of the 3D Ising model in a cube of side-length @math with Dobrushin's boundary conditions, and in particular obtain a law of large numbers for @math , its maximum: if the inverse-temperature @math is large enough, then @math as @math , in probability, where @math is given by a large deviation rate in infinite volume. We further show that, on the large deviation event that the interface connects the origin to height @math , it consists of a 1D spine that behaves like a random walk, in that it decomposes into a linear (in @math ) number of asymptotically-stationary weakly-dependent increments that have exponential tails. As the number @math of increments diverges, properties of the interface such as its surface area, volume, and the location of its tip, all obey CLTs with variances linear in @math . These results generalize to every dimension @math .
Much progress has been made in recent years on understanding the distribution of the maximum of the 2D discrete Gaussian free field and its local geometry. It is known, for instance ( @cite_14 @cite_20 @cite_18 ; see also, e.g., @cite_23 ), that this maximum is tight around an expected maximum that is asymptotically @math , and that the centered maximum converges in law to a randomly shifted Gumbel random variable.
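In the standard normalization, with Green-function constant g = 2/π for the field on an N x N box, this expansion takes the form

\[ m_N \;=\; 2\sqrt{g}\,\log N \;-\; \tfrac{3}{4}\sqrt{g}\,\log\log N \;+\; O(1), \]

and the centered maximum M_N − m_N converges in distribution to a Gumbel law with a random shift; this is a standard statement of the result, and the precise constants should be taken from @cite_18 .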
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_23", "@cite_20" ], "mid": [ "2097649415", "1572521168", "66851642", "2964051416" ], "abstract": [ "We consider the discrete two-dimensional Gaussian free field on a box of side length @math , with Dirichlet boundary data, and prove the convergence of the law of the centered maximum of the field.© 2015 Wiley Periodicals, Inc.", "We consider the lattice version of the free eld in two dimensions (also called harmonic crystal). The main aim of the paper is to discuss quantitatively the entropic repulsion of the random surface in the presence of a hard wall. The basic ingredient of the proof is the analysis of the maximum of the eld which requires a multiscale analysis reducing the problem essentially to a problem on a eld with a tree structure. 2000 MSC: 60K35, 60G15, 82B41", "", "We consider the maximum of the discrete two-dimensional Gaussian free field (GFF) in a box and prove that its maximum, centered at its mean, is tight, settling a longstanding conjecture. The proof combines a recent observation by Bolthausen, Deuschel, and Zeitouni with elements from Bramson's results on branching Brownian motion and comparison theorems for Gaussian fields. An essential part of the argument is the precise evaluation, up to an error of order 1, of the expected value of the maximum of the GFF in a box. Related Gaussian fields, such as the GFF on a two-dimensional torus, are also discussed. © 2011 Wiley Periodicals, Inc." ] }
1901.04980
2909141925
Dobrushin (1972) showed that the interface of a 3D Ising model with minus boundary conditions above the @math -plane and plus below is rigid (has @math -fluctuations) at every sufficiently low temperature. Since then, basic features of this interface -- such as the asymptotics of its maximum -- were only identified in more tractable random surface models that approximate the Ising interface at low temperatures, e.g., for the (2+1)D Solid-On-Solid model. Here we study the large deviations of the interface of the 3D Ising model in a cube of side-length @math with Dobrushin's boundary conditions, and in particular obtain a law of large numbers for @math , its maximum: if the inverse-temperature @math is large enough, then @math as @math , in probability, where @math is given by a large deviation rate in infinite volume. We further show that, on the large deviation event that the interface connects the origin to height @math , it consists of a 1D spine that behaves like a random walk, in that it decomposes into a linear (in @math ) number of asymptotically-stationary weakly-dependent increments that have exponential tails. As the number @math of increments diverges, properties of the interface such as its surface area, volume, and the location of its tip, all obey CLTs with variances linear in @math . These results generalize to every dimension @math .
We end this section with other perspectives on the 3D Ising model at low temperatures, which were also the focus of much attention. While the interface-based approach of Dobrushin @cite_2 to understanding the low-temperature 3D Ising model proved to be extremely fruitful in 2D (where the results hold for interfaces at any angle), in dimension @math the combinatorics of that argument break down as soon as the ground state is not flat. It remains a well-known open problem to show that there do not exist non-translation-invariant Gibbs measures corresponding to interfaces other than those parallel to the coordinate axes. The progress to date on roughness and fluctuations of "tilted interfaces" has been limited either to 1-step perturbations of a flat interface @cite_11 , or to results at zero temperature using rich connections to exactly solvable models @cite_34 .
{ "cite_N": [ "@cite_11", "@cite_34", "@cite_2" ], "mid": [ "2093600781", "2044427968", "" ], "abstract": [ "Some aspects of the microscopic theory of interfaces in classical lattice systems are developed. The problem of the appearance of facets in the (Wulff) equilibrium crystal shape is discussed, together with its relation to the discontinuities of the derivatives of the surface tension τ(n) (with respect to the components of the surface normaln) and the role of the step free energy τstep(m) (associated with a step orthogonal tom on a rigid interface). Among the results are, in the case of the Ising model at low enough temperatures, the existence of τstep(m) in the thermodynamic limit, the expression of this quantity by means of a convergent cluster expansion, and the fact that 2τstep(m) is equal to the value of the jump of the derivative ∂τ ∂δ (when δ varies) at the point δ=0 [withn=(m1 sin δ,m2 sin δ, cos δ)]. Finally, using this fact, it is shown that the facet shape is determined by the function τstep(m).", "We compute the expansion of the surface tension of the 3D random cluster model for q≥ 1 in the limit where p goes to 1. We also compute the asymptotic shape of a plane partition of n as n goes to ∞. This same shape determines the Wulff crystal to order o(ɛ) in the 3D Ising model (and more generally in the 3D random cluster model for q≥ 1) at temperature ɛ.", "" ] }
1901.04980
2909141925
Dobrushin (1972) showed that the interface of a 3D Ising model with minus boundary conditions above the @math -plane and plus below is rigid (has @math -fluctuations) at every sufficiently low temperature. Since then, basic features of this interface -- such as the asymptotics of its maximum -- were only identified in more tractable random surface models that approximate the Ising interface at low temperatures, e.g., for the (2+1)D Solid-On-Solid model. Here we study the large deviations of the interface of the 3D Ising model in a cube of side-length @math with Dobrushin's boundary conditions, and in particular obtain a law of large numbers for @math , its maximum: if the inverse-temperature @math is large enough, then @math as @math , in probability, where @math is given by a large deviation rate in infinite volume. We further show that, on the large deviation event that the interface connects the origin to height @math , it consists of a 1D spine that behaves like a random walk, in that it decomposes into a linear (in @math ) number of asymptotically-stationary weakly-dependent increments that have exponential tails. As the number @math of increments diverges, properties of the interface such as its surface area, volume, and the location of its tip, all obey CLTs with variances linear in @math . These results generalize to every dimension @math .
In lieu of this approach, a coarse-graining technique of Pisztora @cite_24 enabled the establishment of the surface tension and the Wulff shape scaling limit for the 3D Ising model at low temperature: Cerf and Pisztora @cite_22 considered an Ising model on an @math box with all-plus boundary conditions, and showed that, conditional on having @math minus spins (atypically many), the largest minus cluster macroscopically takes on the corresponding Wulff shape. Results of this sort are focused on the macroscopic behavior of the model (as opposed to the interface-based approach) and do not describe the fluctuations around the limiting shape. In particular, the convergence to the Wulff shape holds all the way up to @math (when combined with @cite_0 @cite_26 ), even though near @math (above the roughening transition) it is expected that the interface is not only delocalized, but that the minus cluster actually percolates all the way to the boundary of the box @cite_30 .
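Schematically, the surface-order large deviation estimate underlying these results states that, for the plus-phase measure on a box of side n in dimension d, a macroscopic excess of minus spins has probability

\[ -\log \mu^{+}_{n}\Big( \overline{\sigma}_n \le (1-\epsilon)\, m^{*} \Big) \;\asymp\; n^{\,d-1}, \]

with the sharp constant given by the Wulff isoperimetric problem (a heuristic summary in our own notation, not a quotation from the cited works); conditionally on this event, the largest minus cluster solves that isoperimetric problem at the macroscopic scale.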
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_22", "@cite_24", "@cite_0" ], "mid": [ "2048984514", "2054432045", "1993711495", "2073921960", "2010196640" ], "abstract": [ "We describe inequalities relating to the interface between coexisting phases of Ising ferromagnets. Some implications for the nature of the roughening transition are discussed.", "For the FK representation of the Ising model, we prove that the slab percolation threshold coincides with the critical temperature in any dimension d≥3.", "We study the phase separation phenomenon in the Ising model in dimensions d ≥ 3. To this end we work in a large box with plus boundary conditions and we condition the system to have an excess amount of negative spins so that the empirical magnetization is smaller than the spontaneous magnetization m*. We confirm the prediction of the phenomenological theory by proving that with high probability a single droplet of the minus phase emerges surrounded by the plus phase. Moreover, the rescaled droplet is asymptotically close to a definite deterministic shape, the Wulff crystal, which minimizes the surface free energy. In the course of the proof we establish a surface order large deviation principle for the magnetization. Our results are valid for temperatures T below a limit of slab-thresholds T c conjectured to agree with the critical point T c . Moreover, T should be such that there exist only two extremal translation invariant Gibbs states at that temperature, a property which can fail for at most countably many values and which is conjectured to be true for every T. The proofs are based on the Fortuin-Kasteleyn representation of the Ising model along with coarse-graining techniques. To handle the emerging macroscopic objects we employ tools from geometric measure theory which provide an adequate framework for the large deviation analysis. Finally, we propose a heuristic picture that for subcritical temperatures close enough to T c , the dominant minus spin cluster of the Wulff droplet permeates the entire box and has a strictly positive local density everywhere.", "We derive uniform surface order large deviation estimates for the block magnetization in finite volume Ising (or Potts) models with plus or free (or a combination of both) boundary conditions in the phase coexistence regime ford≧3. The results are valid up to a limit of slab-thresholds, conjectured to agree with the critical temperature. Our arguments are based on the renormalization of the random cluster model withq≧1 andd≧3, and on corresponding large deviation estimates for the occurrence in a box of a largest cluster with density close to the percolation probability. The results are new even for the case of independent percolation (q=1). As a byproduct of our methods, we obtain further results in the FK model concerning semicontinuity (inp andq) of the percolation probability, the second largest cluster in a box and the tail of the finite cluster size distribution.", "In this paper we prove the Wulff construction in three and more dimensions for an Ising model with nearest neighbor interaction." ] }
1901.05127
2910836674
Neural style transfer has drawn considerable attention from both academic and industrial field. Although visual effect and efficiency have been significantly improved, existing methods are unable to coordinate spatial distribution of visual attention between the content image and stylized image, or render diverse level of detail via different brush strokes. In this paper, we tackle these limitations by developing an attention-aware multi-stroke style transfer model. We first propose to assemble self-attention mechanism into a style-agnostic reconstruction autoencoder framework, from which the attention map of a content image can be derived. By performing multi-scale style swap on content features and style features, we produce multiple feature maps reflecting different stroke patterns. A flexible fusion strategy is further presented to incorporate the salient characteristics from the attention map, which allows integrating multiple stroke patterns into different spatial regions of the output image harmoniously. We demonstrate the effectiveness of our method, as well as generate comparable stylized images with multiple stroke patterns against the state-of-the-art methods.
Attention Models. One of the most promising trends in recent research is the incorporation of attention mechanisms into deep learning frameworks @cite_20 @cite_8 . Rather than compressing an entire image or sequence into a static representation, attention allows the model to focus on the most relevant parts of images or features as needed. Such mechanisms have proved very effective in many vision tasks, including image classification @cite_4 @cite_18 , image captioning @cite_27 @cite_28 and visual question answering @cite_9 @cite_1 . In particular, self-attention @cite_30 @cite_12 has been proposed to calculate the response at a position in a sequence by attending to all positions within the same sequence. Shaw et al. @cite_2 propose to incorporate relative position information for sequences into the self-attention mechanism of the Transformer model, which improves translation quality on machine translation tasks. Zhang et al. @cite_17 demonstrate that the self-attention model can capture multi-level dependencies across image regions and draw fine details in the context of the GAN framework. Compared with @cite_17 , we adapt self-attention to introduce a residual feature map that captures salient characteristics within content images.
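As a minimal sketch of the self-attention block this line of work builds on (a SAGAN-style formulation; the channel reduction and the learned residual scale follow common practice rather than the exact configuration of @cite_17 or of our model):

# SAGAN-style self-attention: the response at each spatial position is a
# weighted sum over all positions, and a learned scale gamma blends the
# attended features back into the input as a residual feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # starts as the identity map

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)       # (b, hw, c//r)
        k = self.key(x).flatten(2)                         # (b, c//r, hw)
        attn = F.softmax(q @ k, dim=-1)                    # (b, hw, hw) attention map
        v = self.value(x).flatten(2)                       # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)  # attend over all positions
        return self.gamma * out + x                        # residual connection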
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_4", "@cite_8", "@cite_28", "@cite_9", "@cite_1", "@cite_27", "@cite_2", "@cite_12", "@cite_20", "@cite_17" ], "mid": [ "2963386218", "2295107390", "1928906481", "2951527505", "2302086703", "2255577267", "2963954913", "2950178297", "2789541106", "2963403868", "2141399712", "2950893734" ], "abstract": [ "This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending on a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing what specific parts of the sentence are encoded into the embedding. We evaluate our model on 3 different tasks: author profiling, sentiment classification and textual entailment. Results show that our model yields a significant performance gain compared to other sentence embedding methods in all of the 3 tasks.", "In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability despite being trained on imagelevel labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that exposes the implicit attention of CNNs on an image. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014 without training on any bounding box annotation. We demonstrate in a variety of experiments that our network is able to localize the discriminative image regions despite just being trained for solving classification task1.", "Fine-grained classification is challenging because categories can only be discriminated by subtle and local differences. Variances in the pose, scale or rotation usually make the problem more difficult. Most fine-grained classification systems follow the pipeline of finding foreground object or object parts (where) to extract discriminative features (what).", "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. 
We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.", "Automatically generating a natural language description of an image has attracted interests recently both because of its importance in practical applications and because it connects two major artificial intelligence fields: computer vision and natural language processing. Existing approaches are either top-down, which start from a gist of an image and convert it into words, or bottom-up, which come up with words describing various aspects of an image and then combine them. In this paper, we propose a new algorithm that combines both approaches through a model of semantic attention. Our algorithm learns to selectively attend to semantic concept proposals and fuse them into hidden states and outputs of recurrent neural networks. The selection and fusion form a feedback connecting the top-down and bottom-up computation. We evaluate our algorithm on two public benchmarks: Microsoft COCO and Flickr30K. Experimental results show that our algorithm significantly outperforms the state-of-the-art approaches consistently across different evaluation metrics.", "We address the problem of Visual Question Answering (VQA), which requires joint image and language understanding to answer a question about a given photograph. Recent approaches have applied deep image captioning methods based on convolutional-recurrent networks to this problem, but have failed to model spatial inference. To remedy this, we propose a model we call the Spatial Memory Network and apply it to the VQA task. Memory networks are recurrent neural networks with an explicit attention mechanism that selects certain parts of the information stored in memory. Our Spatial Memory Network stores neuron activations from different spatial regions of the image in its memory, and uses the question to choose relevant regions for computing the answer, a process of which constitutes a single \"hop\" in the network. We propose a novel spatial attention architecture that aligns words with image patches in the first hop, and obtain improved results by adding a second attention hop which considers the whole question to choose visual evidence based on the results of the first hop. To better understand the inference process learned by the network, we design synthetic questions that specifically require spatial inference and visualize the attention weights. We evaluate our model on two published visual question answering datasets, DAQUAR [1] and VQA [2], and obtain improved results compared to a strong deep baseline model (iBOWIMG) which concatenates image and question features to predict the answer [3].", "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. 
The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "Relying entirely on an attention mechanism, the Transformer introduced by (2017) achieves state-of-the-art results for machine translation. In contrast to recurrent and convolutional neural networks, it does not explicitly model relative or absolute position information in its structure. Instead, it requires adding representations of absolute positions to its inputs. In this work we present an alternative approach, extending the self-attention mechanism to efficiently consider representations of the relative positions, or distances between sequence elements. On the WMT 2014 English-to-German and English-to-French translation tasks, this approach yields improvements of 1.3 BLEU and 0.3 BLEU over absolute position representations, respectively. Notably, we observe that combining relative and absolute position representations yields no further improvement in translation quality. We describe an efficient implementation of our method and cast it as an instance of relation-aware self-attention mechanisms that can generalize to arbitrary graph-labeled inputs.", "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.", "We describe a model based on a Boltzmann machine with third-order connections that can learn how to accumulate information about a shape over several fixations. The model uses a retina that only has enough high resolution pixels to cover a small area of the image, so it must decide on a sequence of fixations and it must combine the \"glimpse\" at each fixation with the location of the fixation before integrating the information with information from other glimpses of the same object. 
We evaluate this model on a synthetic dataset and two image classification datasets, showing that it can perform at least as well as a model trained on whole images.", "In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape." ] }
1901.05138
2909251876
Dynamic programming languages are quite popular because they increase the programmer's productivity. However, the absence of types in the source code makes programs written in these languages difficult to understand, and virtual machines that execute these programs cannot produce optimized code. To overcome this challenge, we develop a technique to predict the types of all identifiers, including variables and function return types. We propose the first implementation of @math order Inside-Outside Recursive Neural Networks with two variants, (i) Child-Sum Tree-LSTMs and (ii) N-ary RNNs, that can handle a large amount of tree branching. We predict the types of all identifiers given the Abstract Syntax Tree by performing just two passes over the tree, bottom-up and top-down, keeping both a content and a context representation for every node of the tree. This allows these representations to interact by combining different paths from the parent, siblings, and children, which is crucial for predicting types. Our best model achieves 44.33% across 21 classes and a top-3 accuracy of 71.5% on our Python data set gathered from popular Python benchmarks.
Convolutional neural networks for sequence modeling were introduced by Yoon Kim @cite_10 for sentiment classification on text sequences. The idea of convolutions over tree-based structures was developed by @cite_7 , who slide a convolutional kernel over the AST, combined with dynamic pooling @cite_2 , to extract structural information from a program. @cite_7 developed tree-based convolutional neural networks (TBCNN) for classifying programs according to behavior, functionality, complexity, etc., and for detecting code snippets of certain patterns, such as unhealthy code patterns. However, just as in image classification, TBCNN loses locality information and thus cannot be used to classify individual nodes in a trivial way.
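To make the sliding tree kernel concrete, the following is a minimal NumPy sketch of one TBCNN-style layer (an illustration of the idea in @cite_7 , not the authors' implementation; all names and the toy tree are ours). It also shows why the final max-pooling step discards the per-node locality mentioned above:

```python
import numpy as np

def tbcnn_layer(embeddings, children, W_parent, W_left, W_right, bias):
    """One tree-based convolution layer followed by dynamic (max) pooling.

    embeddings : (n, d) array with one vector per AST node
    children   : children[i] lists the child indices of node i
    W_parent, W_left, W_right : (d, k) kernel weight matrices
    bias       : (k,) bias vector
    """
    features = []
    for i in range(len(embeddings)):
        # The kernel window covers node i together with its direct children.
        y = embeddings[i] @ W_parent + bias
        m = len(children[i])
        for pos, c in enumerate(children[i]):
            # "Continuous binary tree": a child's weight matrix is an
            # interpolation between W_left and W_right by its position.
            r = pos / (m - 1) if m > 1 else 0.5
            y = y + embeddings[c] @ ((1.0 - r) * W_left + r * W_right)
        features.append(np.tanh(y))
    # Dynamic (max) pooling yields a fixed-size, tree-level feature vector --
    # which is exactly where the per-node locality is lost.
    return np.max(np.stack(features), axis=0)

# Toy AST with 3 nodes: node 0 is the root with children 1 and 2.
rng = np.random.default_rng(0)
d, k = 4, 8
emb = rng.normal(size=(3, d))
Wp, Wl, Wr = (rng.normal(size=(d, k)) for _ in range(3))
vec = tbcnn_layer(emb, [[1, 2], [], []], Wp, Wl, Wr, np.zeros(k))
```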
{ "cite_N": [ "@cite_10", "@cite_7", "@cite_2" ], "mid": [ "1832693441", "2963371736", "2103305545" ], "abstract": [ "We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.", "Programming language processing (similar to natural language processing) is a hot research topic in the field of software engineering; it has also aroused growing interest in the artificial intelligence community. However, different from a natural language sentence, a program contains rich, explicit, and complicated structural information. Hence, traditional NLP models may be inappropriate for programs. In this paper, we propose a novel tree-based convolutional neural network (TBCNN) for programming language processing, in which a convolution kernel is designed over programs' abstract syntax trees to capture structural information. TBCNN is a generic architecture for programming language processing; our experiments show its effectiveness in two different program analysis tasks: classifying programs according to functionality, and detecting code snippets of certain patterns. TBCNN outperforms baseline methods, including several neural models for NLP.", "Paraphrase detection is the task of examining two sentences and determining whether they have the same meaning. In order to obtain high accuracy on this task, thorough syntactic and semantic analysis of the two statements is needed. We introduce a method for paraphrase detection based on recursive autoencoders (RAE). Our unsupervised RAEs are based on a novel unfolding objective and learn feature vectors for phrases in syntactic trees. These features are used to measure the word- and phrase-wise similarity between two sentences. Since sentences may be of arbitrary length, the resulting matrix of similarity measures is of variable size. We introduce a novel dynamic pooling layer which computes a fixed-sized representation from the variable-sized matrices. The pooled representation is then used as input to a classifier. Our method outperforms other state-of-the-art approaches on the challenging MSRP paraphrase corpus." ] }
1901.05138
2909251876
Dynamic programming languages are quite popular because they increase the programmer's productivity. However, the absence of types in the source code makes programs written in these languages difficult to understand, and virtual machines that execute these programs cannot produce optimized code. To overcome this challenge, we develop a technique to predict the types of all identifiers, including variables and function return types. We propose the first implementation of @math order Inside-Outside Recursive Neural Networks with two variants, (i) Child-Sum Tree-LSTMs and (ii) N-ary RNNs, that can handle a large amount of tree branching. We predict the types of all identifiers given the Abstract Syntax Tree by performing just two passes over the tree, bottom-up and top-down, keeping both a content and a context representation for every node of the tree. This allows these representations to interact by combining different paths from the parent, siblings, and children, which is crucial for predicting types. Our best model achieves 44.33% across 21 classes and a top-3 accuracy of 71.5% on our Python data set gathered from popular Python benchmarks.
In natural language processing, sentiment classification is a long-standing task. Initial approaches used bag-of-words representations, which evolved into sequence modeling with linear-chain LSTMs that implicitly assume a right-branching sentence structure. @cite_8 introduced the Recursive Neural Network for classifying the sentiment of every node in the parse tree of a sentence. The sentence is represented using word vectors, and the vector representations of all internal nodes are computed over the parse tree using a single shared composition function; both the composition parameters and the word vectors are learned. The internal representations are computed in a bottom-up manner, so each node's representation incorporates the information of the subtree rooted at that node for the purpose of classification. In our case, however, some leaf nodes have to be classified based on a combination of paths that need not be unidirectional.
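A minimal sketch of this bottom-up composition (illustrative only, not the exact model of @cite_8 ; the toy tree and parameters are ours):

```python
import numpy as np

def encode(tree, word_vecs, W, b):
    """Post-order (bottom-up) encoding of a binary parse tree.

    tree is either a word (leaf) or a (left, right) pair of subtrees;
    a single shared composition function is applied at every internal
    node, so each node vector summarizes the subtree below it.
    """
    if isinstance(tree, str):                 # leaf: word vector lookup
        return word_vecs[tree]
    left, right = (encode(t, word_vecs, W, b) for t in tree)
    return np.tanh(W @ np.concatenate([left, right]) + b)

d = 2
rng = np.random.default_rng(0)
W, b = rng.normal(size=(d, 2 * d)), np.zeros(d)
vecs = {w: rng.normal(size=d) for w in ("not", "very", "good")}
root = encode(("not", ("very", "good")), vecs, W, b)
# A classifier head applied to each node vector gives per-node sentiment.
# Note that information flows only upward here, which is why a purely
# bottom-up model cannot classify a leaf using context from the rest of
# the tree.
```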
{ "cite_N": [ "@cite_8" ], "mid": [ "2251939518" ], "abstract": [ "Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive negative classification from 80 up to 85.4 . The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7 , an improvement of 9.7 over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases." ] }
1901.05138
2909251876
Dynamic programming languages are quite popular because they increase the programmer's productivity. However, the absence of types in the source code makes programs written in these languages difficult to understand, and virtual machines that execute these programs cannot produce optimized code. To overcome this challenge, we develop a technique to predict the types of all identifiers, including variables and function return types. We propose the first implementation of @math order Inside-Outside Recursive Neural Networks with two variants, (i) Child-Sum Tree-LSTMs and (ii) N-ary RNNs, that can handle a large amount of tree branching. We predict the types of all identifiers given the Abstract Syntax Tree by performing just two passes over the tree, bottom-up and top-down, keeping both a content and a context representation for every node of the tree. This allows these representations to interact by combining different paths from the parent, siblings, and children, which is crucial for predicting types. Our best model achieves 44.33% across 21 classes and a top-3 accuracy of 71.5% on our Python data set gathered from popular Python benchmarks.
The Tree-Structured LSTM introduced in @cite_5 made two main contributions: (i) a generalization of LSTMs to tree-structured network topologies, and (ii) Child-Sum Tree-LSTMs to handle an arbitrary number of children per node. It also replaced the simple recurrent units of recursive networks with LSTM cells to overcome vanishing and exploding gradients, making it easier to learn long-distance correlations. The Child-Sum Tree-LSTM transition equations are reproduced below; there, @math represents the children of node @math , and @math and @math are the hidden and cell states of node @math . We combine this extension with the Inside-Outside algorithm.
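For reference, the standard Child-Sum Tree-LSTM transition equations of @cite_5 , in the usual notation (with C(j) the children of node j, x_j the node input, and h_j, c_j its hidden and cell states), are:

```latex
\begin{align}
\tilde{h}_j &= \textstyle\sum_{k \in C(j)} h_k \\
i_j &= \sigma\bigl(W^{(i)} x_j + U^{(i)} \tilde{h}_j + b^{(i)}\bigr) \\
f_{jk} &= \sigma\bigl(W^{(f)} x_j + U^{(f)} h_k + b^{(f)}\bigr) \\
o_j &= \sigma\bigl(W^{(o)} x_j + U^{(o)} \tilde{h}_j + b^{(o)}\bigr) \\
u_j &= \tanh\bigl(W^{(u)} x_j + U^{(u)} \tilde{h}_j + b^{(u)}\bigr) \\
c_j &= i_j \odot u_j + \textstyle\sum_{k \in C(j)} f_{jk} \odot c_k \\
h_j &= o_j \odot \tanh(c_j)
\end{align}
```

The per-child forget gate f_jk is what lets the cell selectively retain or discard the memory of each subtree, independently of the other children.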
{ "cite_N": [ "@cite_5" ], "mid": [ "2104246439" ], "abstract": [ "Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank)." ] }
1901.05138
2909251876
Dynamic programming languages are quite popular because they increase the programmer's productivity. However, the absence of types in the source code makes programs written in these languages difficult to understand, and virtual machines that execute these programs cannot produce optimized code. To overcome this challenge, we develop a technique to predict the types of all identifiers, including variables and function return types. We propose the first implementation of @math order Inside-Outside Recursive Neural Networks with two variants, (i) Child-Sum Tree-LSTMs and (ii) N-ary RNNs, that can handle a large amount of tree branching. We predict the types of all identifiers given the Abstract Syntax Tree by performing just two passes over the tree, bottom-up and top-down, keeping both a content and a context representation for every node of the tree. This allows these representations to interact by combining different paths from the parent, siblings, and children, which is crucial for predicting types. Our best model achieves 44.33% across 21 classes and a top-3 accuracy of 71.5% on our Python data set gathered from popular Python benchmarks.
The architecture and algorithm introduced by Le and Zuidema @cite_9 allow information to flow not only bottom-up, as in a traditional recursive neural network, but top-down as well. Every node in the hierarchical structure is associated with two vectors: (i) an inside representation capturing the content under the node, and (ii) an outside representation capturing its context (see Figure ). The inside vectors are computed bottom-up (post-order traversal) by combining the inside representations of the children, thus capturing the content of the subtree. The outside vectors are computed top-down (pre-order traversal) by combining the parent's outside (context) representation with the siblings' inside (content) representations, thus capturing the rest of the tree (see Figure ). This is critical when we want to classify leaf nodes, where the relevant information may come from any combination of paths in the tree.
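A minimal sketch of the two passes (illustrative; the composition functions below are toy stand-ins for the learned functions of @cite_9 ):

```python
import numpy as np

class Node:
    def __init__(self, embedding, children=()):
        self.embedding = np.asarray(embedding, dtype=float)
        self.children = list(children)

def f_in(x, child_insides):
    # Toy composition; the real model learns these functions.
    return np.mean([x, *child_insides], axis=0)

def f_out(parent_outside, sibling_insides):
    return np.mean([parent_outside, *sibling_insides], axis=0)

def inside(node):
    # Bottom-up (post-order): a node's inside vector summarizes its subtree.
    node.inside = f_in(node.embedding, [inside(c) for c in node.children])
    return node.inside

def outside(node, context):
    # Top-down (pre-order): a node's outside vector summarizes the rest of
    # the tree, built from the parent's outside and the siblings' insides.
    node.outside = context
    for child in node.children:
        sibs = [s.inside for s in node.children if s is not child]
        outside(child, f_out(node.outside, sibs))

leaf = Node([1.0, 0.0])
root = Node([0.0, 1.0], [leaf, Node([0.5, 0.5])])
inside(root)
outside(root, np.zeros(2))        # the root's context is a prior vector
# (leaf.inside, leaf.outside) now give content plus context for the leaf,
# which is what makes classifying leaf identifiers possible.
```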
{ "cite_N": [ "@cite_9" ], "mid": [ "2251628756" ], "abstract": [ "We propose the first implementation of an infinite-order generative dependency model. The model is based on a new recursive neural network architecture, the Inside-Outside Recursive Neural Network. This architecture allows information to flow not only bottom-up, as in traditional recursive neural networks, but also topdown. This is achieved by computing content as well as context representations for any constituent, and letting these representations interact. Experimental results on the English section of the Universal Dependency Treebank show that the infinite-order model achieves a perplexity seven times lower than the traditional third-order model using counting, and tends to choose more accurate parses in k-best lists. In addition, reranking with this model achieves state-of-the-art unlabelled attachment scores and unlabelled exact match scores." ] }
1901.04982
2909529610
Modern Intel CPUs reduce their frequency when executing wide vector operations (AVX2 and AVX-512 instructions), as these instructions increase power consumption. The frequency is only increased again two milliseconds after the last code section containing such instructions has been executed in order to prevent excessive numbers of frequency changes. Due to this delay, intermittent use of wide vector operations can slow down the rest of the system significantly. For example, previous work has shown the performance of web servers to be reduced by up to 10% if the SSL library uses AVX-512 vector instructions. These performance variations are hard to predict during software development as the performance impact of vectorization depends on the specific workload. We describe a mechanism to reduce the slowdown caused by wide vector instructions without requiring extensive changes to existing software. Our design allows the developer to mark problematic AVX code regions. The scheduler then restricts execution of this code to a subset of the cores so that only these cores' frequency is affected. Threads are automatically migrated to a suitable core whenever necessary. We identify a suitable load balancing policy to ensure good utilization of all available cores. Our approach is able to reduce the performance variability caused by AVX2 and AVX-512 instructions by over 70%.
Frequency variations depending on the executed instruction mix were first described for Haswell-EP processors, which have different maximum frequencies depending on whether AVX instructions are executed @cite_7 . Whereas previous CPUs operated at a constant frequency while their power consumption varied with the instruction mix, these CPUs keep their power consumption fairly constant, but frequency, and therefore performance, varies depending on the executed instructions. In a cluster, this performance imbalance causes performance issues for tightly coupled code, because significant amounts of time are spent in synchronization primitives such as barrier synchronization @cite_6 .
{ "cite_N": [ "@cite_6", "@cite_7" ], "mid": [ "2515937114", "1628605343" ], "abstract": [ "The Intel Haswell-EP processor generation introduces several major advancements of power control and energy-efficiency features. For computationally intense applications using advanced vector extension instructions, the processor cannot continuously operate at full speed but instead reduces its frequency below the nominal frequency to maintain operations within thermal design power (TDP) limitations. Moreover, the running average power limitation (RAPL) mechanism to enforce the TDP limitation has changed from a modeling to a measurement approach. The combination of these two novelties have significant implications. Through measurements on an Intel Sandy Bridge-EP cluster, we show that previous generations have sustained homogeneous performance across multiple CPUs and compensated for hardware manufacturing variability through varying power consumption. In contrast, our measurements on a Petaflop Haswell system show that this generation exhibits rather homogeneous power consumption limited by the TDP and capped by the improved RAPL while providing inhomogeneous performance under full load. Since all of these controls are transparent to the user, this behavior is likely to complicate performance analysis tasks and impact tightly coupled parallel applications.", "The recently introduced Intel Xeon E5-1600 v3 and E5-2600 v3 series processors -- codenamed Haswell-EP -- implement major changes compared to their predecessors. Among these changes are integrated voltage regulators that enable individual voltages and frequencies for every core. In this paper we analyze a number of consequences of this development that are of utmost importance for energy efficiency optimization strategies such as dynamic voltage and frequency scaling (DVFS) and dynamic concurrency throttling (DCT). This includes the enhanced RAPL implementation and its improved accuracy as it moves from modeling to actual measurement. Another fundamental change is that every clock speed above AVX frequency -- including nominal frequency -- is opportunistic and unreliable, which vastly decreases performance predictability with potential effects on scalability. Moreover, we characterize significantly changed p-state transition behavior, and determine crucial memory performance data." ] }
1901.04982
2909529610
Modern Intel CPUs reduce their frequency when executing wide vector operations (AVX2 and AVX-512 instructions), as these instructions increase power consumption. The frequency is only increased again two milliseconds after the last code section containing such instructions has been executed in order to prevent excessive numbers of frequency changes. Due to this delay, intermittent use of wide vector operations can slow down the rest of the system significantly. For example, previous work has shown the performance of web servers to be reduced by up to 10% if the SSL library uses AVX-512 vector instructions. These performance variations are hard to predict during software development as the performance impact of vectorization depends on the specific workload. We describe a mechanism to reduce the slowdown caused by wide vector instructions without requiring extensive changes to existing software. Our design allows the developer to mark problematic AVX code regions. The scheduler then restricts execution of this code to a subset of the cores so that only these cores' frequency is affected. Threads are automatically migrated to a suitable core whenever necessary. We identify a suitable load balancing policy to ensure good utilization of all available cores. Our approach is able to reduce the performance variability caused by AVX2 and AVX-512 instructions by over 70%.
We use core specialization as a technique to limit the effect of AVX-induced frequency reduction to selected cores. Core specialization has been suggested before as a mechanism to increase performance, although based on different effects. As the fastest cache levels are usually private to the individual cores, core specialization can be used to place different parts of the system's working set in the private caches of different cores, thereby increasing cache utilization by reducing the number of entries duplicated across the cores' private caches. For example, FlexSC @cite_13 places the operating system on a separate set of cores, whereas SchedTask @cite_4 analyzes the instruction footprints of code sections and uses footprint similarity in scheduling decisions in order to reduce instruction cache misses.
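Our own mechanism applies the same principle to the AVX frequency problem. As a user-space illustration (a simplification of the kernel-level scheduler described in this paper; the core set and the avx_region helper are hypothetical, Linux-only), a marked AVX region can be pinned to a dedicated subset of cores so that only their frequency is reduced:

```python
import os
from contextlib import contextmanager

AVX_CORES = {6, 7}                       # cores designated for AVX code
ALL_CORES = set(range(os.cpu_count()))   # (assumed 8-core layout)

@contextmanager
def avx_region():
    """Marks a problematic AVX code region.

    The calling thread is migrated onto the AVX core set on entry and
    released to all cores on exit; a kernel-level scheduler can do the
    same with far lower migration overhead.
    """
    os.sched_setaffinity(0, AVX_CORES)
    try:
        yield
    finally:
        os.sched_setaffinity(0, ALL_CORES)

def heavy_vector_kernel():
    pass   # stands in for a routine compiled with AVX-512 instructions

with avx_region():
    heavy_vector_kernel()   # runs on cores 6-7; others keep full frequency
```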
{ "cite_N": [ "@cite_13", "@cite_4" ], "mid": [ "2143677609", "2765448242" ], "abstract": [ "For the past 30+ years, system calls have been the de facto interface used by applications to request services from the operating system kernel. System calls have almost universally been implemented as a synchronous mechanism, where a special processor instruction is used to yield userspace execution to the kernel. In the first part of this paper, we evaluate the performance impact of traditional synchronous system calls on system intensive workloads. We show that synchronous system calls negatively affect performance in a significant way, primarily because of pipeline flushing and pollution of key processor structures (e.g., TLB, data and instruction caches, etc.). We propose a new mechanism for applications to request services from the operating system kernel: exception-less system calls. They improve processor efficiency by enabling flexibility in the scheduling of operating system work, which in turn can lead to significantly increased temporal and spacial locality of execution in both user and kernel space, thus reducing pollution effects on processor structures. Exception-less system calls are particularly effective on multicore processors. They primarily target highly threaded server applications, such as Web servers and database servers. We present FlexSC, an implementation of exceptionless system calls in the Linux kernel, and an accompanying user-mode thread package (FlexSC-Threads), binary compatible with POSIX threads, that translates legacy synchronous system calls into exception-less ones transparently to applications. We show how FlexSC improves performance of Apache by up to 116 , MySQL by up to 40 , and BIND by up to 105 while requiring no modifications to the applications.", "The execution of workloads such as web servers and database servers typically switches back and forth between different tasks such as user applications, system call handlers, and interrupt handlers. The combined size of the instruction footprints of such tasks typically exceeds that of the i-cache (16-32 KB). This causes a lot of i-cache misses and thereby reduces the application’s performance. Hence, we propose SchedTask, a hardware-assisted task scheduler that improves the performance of such workloads by executing tasks with similar instruction footprints on the same core. We start by decomposing the combined execution of the OS and the applications into sequences of instructions calledSuperFunctions. We propose a scheme to determine the amount of overlap between the instruction footprints of different SuperFunctions by using Bloom filters. We then use a hierarchical scheduler to execute SuperFunctions with similar instruction footprints on the same core. For a suite of 8 popular OS-intensive workloads, we report an increase in the application’s performance of up to 29 percentage points (mean: 11.4 percentage points) over state of the art scheduling techniques. CCS CONCEPTS • Software and its engineering @math Scheduling; Virtual memory; • Computer systems organization @math Multicore architectures; Cloud computing;" ] }
1901.04982
2909529610
Modern Intel CPUs reduce their frequency when executing wide vector operations (AVX2 and AVX-512 instructions), as these instructions increase power consumption. The frequency is only increased again two milliseconds after the last code section containing such instructions has been executed in order to prevent excessive numbers of frequency changes. Due to this delay, intermittent use of wide vector operations can slow down the rest of the system significantly. For example, previous work has shown the performance of web servers to be reduced by up to 10% if the SSL library uses AVX-512 vector instructions. These performance variations are hard to predict during software development as the performance impact of vectorization depends on the specific workload. We describe a mechanism to reduce the slowdown caused by wide vector instructions without requiring extensive changes to existing software. Our design allows the developer to mark problematic AVX code regions. The scheduler then restricts execution of this code to a subset of the cores so that only these cores' frequency is affected. Threads are automatically migrated to a suitable core whenever necessary. We identify a suitable load balancing policy to ensure good utilization of all available cores. Our approach is able to reduce the performance variability caused by AVX2 and AVX-512 instructions by over 70%.
The approaches described above implement core specialization in software, but operate on multiprocessor systems whose cores are identical in hardware. However, different applications (or parts of applications) place different requirements on the underlying microarchitecture. For example, a memory-intensive application might not be able to fully utilize a wide out-of-order architecture and might execute more efficiently on a simpler in-order core @cite_21 . As a result, single-ISA heterogeneous multi-core systems have been suggested, which consist of cores with identical instruction sets but differing microarchitectures and operating frequencies, and which provide an energy-efficient core for a wide range of applications @cite_21 . Especially heterogeneous applications with execution phases that behave significantly differently can profit if each phase is executed on its ideal core type.
{ "cite_N": [ "@cite_21" ], "mid": [ "2112085716" ], "abstract": [ "This paper proposes and evaluates single-ISA heterogeneous multi-core architectures as a mechanism to reduce processor power dissipation. Our design incorporates heterogeneous cores representing different points in the power performance design space; during an application's execution, system software dynamically chooses the most appropriate core to meet specific performance and power requirements. Our evaluation of this architecture shows significant energy benefits. For an objective function that optimizes for energy efficiency with a tight performance threshold, for 14 SPEC benchmarks, our results indicate a 39 average energy reduction while only sacrificing 3 in performance. An objective function that optimizes for energy-delay with looser performance bounds achieves, on average, nearly a factor of three improvements in energy-delay product while sacrificing only 22 in performance. Energy savings are substantially more than chip-wide voltage frequency scaling." ] }
1901.04982
2909529610
Modern Intel CPUs reduce their frequency when executing wide vector operations (AVX2 and AVX-512 instructions), as these instructions increase power consumption. The frequency is only increased again two milliseconds after the last code section containing such instructions has been executed in order to prevent excessive numbers of frequency changes. Due to this delay, intermittent use of wide vector operations can slow down the rest of the system significantly. For example, previous work has shown the performance of web servers to be reduced by up to 10% if the SSL library uses AVX-512 vector instructions. These performance variations are hard to predict during software development as the performance impact of vectorization depends on the specific workload. We describe a mechanism to reduce the slowdown caused by wide vector instructions without requiring extensive changes to existing software. Our design allows the developer to mark problematic AVX code regions. The scheduler then restricts execution of this code to a subset of the cores so that only these cores' frequency is affected. Threads are automatically migrated to a suitable core whenever necessary. We identify a suitable load balancing policy to ensure good utilization of all available cores. Our approach is able to reduce the performance variability caused by AVX2 and AVX-512 instructions by over 70%.
Similarly, a heterogeneous multi-core system can provide cores with different ISAs @cite_12 . For example, the ARM Thumb instruction set provides higher code density, but offers fewer and smaller general-purpose registers, and is therefore efficient for executing code sections with low register pressure, whereas code with higher register pressure executes more efficiently on architectures with larger register sets, such as Alpha. Also, the Thumb instruction set does not provide floating-point and SIMD support, which significantly reduces peak power consumption and core area but requires costly emulation of floating-point instructions.
{ "cite_N": [ "@cite_12" ], "mid": [ "2110653637" ], "abstract": [ "Heterogeneous multicore architectures have the potential for high performance and energy efficiency. These architectures may be composed of small power-efficient cores, large high-performance cores, and or specialized cores that accelerate the performance of a particular class of computation. Architects have explored multiple dimensions of heterogeneity, both in terms of micro-architecture and specialization. While early work constrained the cores to share a single ISA, this work shows that allowing heterogeneous ISAs further extends the effectiveness of such architectures This work exploits the diversity offered by three modern ISAs: Thumb, x86-64, and Alpha. This architecture has the potential to outperform the best single-ISA heterogeneous architecture by as much as 21 , with 23 energy savings and a reduction of 32 in Energy Delay Product." ] }
1901.04982
2909529610
Modern Intel CPUs reduce their frequency when executing wide vector operations (AVX2 and AVX-512 instructions), as these instructions increase power consumption. The frequency is only increased again two milliseconds after the last code section containing such instructions has been executed in order to prevent excessive numbers of frequency changes. Due to this delay, intermittent use of wide vector operations can slow down the rest of the system significantly. For example, previous work has shown the performance of web servers to be reduced by up to 10% if the SSL library uses AVX-512 vector instructions. These performance variations are hard to predict during software development as the performance impact of vectorization depends on the specific workload. We describe a mechanism to reduce the slowdown caused by wide vector instructions without requiring extensive changes to existing software. Our design allows the developer to mark problematic AVX code regions. The scheduler then restricts execution of this code to a subset of the cores so that only these cores' frequency is affected. Threads are automatically migrated to a suitable core whenever necessary. We identify a suitable load balancing policy to ensure good utilization of all available cores. Our approach is able to reduce the performance variability caused by AVX2 and AVX-512 instructions by over 70%.
On such a system, threads need to be migrated to a suitable core whenever they execute significant amounts of wide SIMD instructions. Fault-and-migrate is an operating system mechanism to automatically move threads to a suitable core @cite_0 @cite_15 . Whenever a thread executes an instruction not supported on its current core, the core triggers an undefined instruction exception. Following the exception, the operating system selects a core with support for the instruction and migrates the thread.
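The decision logic of fault-and-migrate can be illustrated with a small, self-contained simulation (purely illustrative; the per-core feature sets and the affinity table are hypothetical stand-ins for the kernel's trap handler state):

```python
# Hypothetical asymmetric CPU: ISA extensions implemented by each core.
CORE_FEATURES = {
    0: {"sse2", "avx2"},
    1: {"sse2", "avx2"},
    2: {"sse2", "avx2", "avx512"},
    3: {"sse2", "avx2", "avx512"},
}

thread_affinity = {}   # thread id -> set of cores the thread may run on

def on_undefined_instruction(tid, missing_feature):
    """Called from the undefined-instruction trap.

    Bind the thread to the cores implementing the missing extension and
    return, so that the faulting instruction is retried on a capable core.
    """
    capable = {c for c, feats in CORE_FEATURES.items()
               if missing_feature in feats}
    if not capable:
        raise RuntimeError(f"no core implements {missing_feature!r}")
    thread_affinity[tid] = capable

# A thread that first executes an AVX-512 instruction on core 0 traps,
# after which it may only run on the AVX-512-capable cores 2 and 3:
on_undefined_instruction(tid=42, missing_feature="avx512")
assert thread_affinity[42] == {2, 3}
```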
{ "cite_N": [ "@cite_0", "@cite_15" ], "mid": [ "1974809793", "1991867972" ], "abstract": [ "A heterogeneous processor consists of cores that are asymmetric in performance and functionality. Such a design provides a cost-effective solution for processor manufacturers to continuously improve both single-thread performance and multi-thread throughput. This design, however, faces significant challenges in the operating system, which traditionally assumes only homogeneous hardware. This paper presents a comprehensive study of OS support for heterogeneous architectures in which cores have asymmetric performance and overlapping, but non-identical instruction sets. Our algorithms allow applications to transparently execute and fairly share different types of cores. We have implemented these algorithms in the Linux 2.6.24 kernel and evaluated them on an actual heterogeneous platform. Evaluation results demonstrate that our designs efficiently manage heterogeneous hardware and enable significant performance improvements for a range of applications.", "On-chip heterogeneity has become key to balancing performance and power constraints, resulting in disparate (functionally overlapping but not equivalent) cores on a single die. Requiring developers to deal with such heterogeneity can impede adoption through increased programming effort and result in cross-platform incompatibility. We propose that systems software must evolve to dynamically accommodate heterogeneity and to automatically choose task-to-resource mappings to best use these features. We describe the kinship approach for mapping workloads to heterogeneous cores. A hypervisor-level realization of the approach on a variety of experimental heterogeneous platforms demonstrates the general applicability and utility of kinship-based scheduling, matching dynamic workloads to available resources as well as scaling with the number of processes and with different types configurations of compute resources. Performance advantages of kinship based scheduling are evident for runs across multiple generations of heterogeneous platforms." ] }
1901.04982
2909529610
Modern Intel CPUs reduce their frequency when executing wide vector operations (AVX2 and AVX-512 instructions), as these instructions increase power consumption. The frequency is only increased again two milliseconds after the last code section containing such instructions has been executed in order to prevent excessive numbers of frequency changes. Due to this delay, intermittent use of wide vector operations can slow down the rest of the system significantly. For example, previous work has shown the performance of web servers to be reduced by up to 10% if the SSL library uses AVX-512 vector instructions. These performance variations are hard to predict during software development as the performance impact of vectorization depends on the specific workload. We describe a mechanism to reduce the slowdown caused by wide vector instructions without requiring extensive changes to existing software. Our design allows the developer to mark problematic AVX code regions. The scheduler then restricts execution of this code to a subset of the cores so that only these cores' frequency is affected. Threads are automatically migrated to a suitable core whenever necessary. We identify a suitable load balancing policy to ensure good utilization of all available cores. Our approach is able to reduce the performance variability caused by AVX2 and AVX-512 instructions by over 70%.
Previous work assumes a heterogeneous multiprocessor whose cores differ in hardware @cite_0 @cite_15 . However, the concept of fault-and-migrate is applicable to software-based heterogeneity similar to the one used in our design. Whereas our prototype currently requires the developer to manually annotate code sections which make use of wide SIMD instructions, we intend to extend the prototype to use fault-and-migrate in order to detect problematic code regions automatically. The authors of @cite_0 describe how disabling the floating point unit allows emulating instruction set asymmetry on current systems. Similarly, we intend to restrict the size of the memory region used for the FXSTOR instruction during context switches [Sec. 2.6.11 of the Intel architecture manual, Vol. 2] so as to selectively let AVX-512 instructions trap into the operating system.
{ "cite_N": [ "@cite_0", "@cite_15" ], "mid": [ "1974809793", "1991867972" ], "abstract": [ "A heterogeneous processor consists of cores that are asymmetric in performance and functionality. Such a design provides a cost-effective solution for processor manufacturers to continuously improve both single-thread performance and multi-thread throughput. This design, however, faces significant challenges in the operating system, which traditionally assumes only homogeneous hardware. This paper presents a comprehensive study of OS support for heterogeneous architectures in which cores have asymmetric performance and overlapping, but non-identical instruction sets. Our algorithms allow applications to transparently execute and fairly share different types of cores. We have implemented these algorithms in the Linux 2.6.24 kernel and evaluated them on an actual heterogeneous platform. Evaluation results demonstrate that our designs efficiently manage heterogeneous hardware and enable significant performance improvements for a range of applications.", "On-chip heterogeneity has become key to balancing performance and power constraints, resulting in disparate (functionally overlapping but not equivalent) cores on a single die. Requiring developers to deal with such heterogeneity can impede adoption through increased programming effort and result in cross-platform incompatibility. We propose that systems software must evolve to dynamically accommodate heterogeneity and to automatically choose task-to-resource mappings to best use these features. We describe the kinship approach for mapping workloads to heterogeneous cores. A hypervisor-level realization of the approach on a variety of experimental heterogeneous platforms demonstrates the general applicability and utility of kinship-based scheduling, matching dynamic workloads to available resources as well as scaling with the number of processes and with different types configurations of compute resources. Performance advantages of kinship based scheduling are evident for runs across multiple generations of heterogeneous platforms." ] }