Dataset columns: aid (stringlengths 9-15), mid (stringlengths 7-10), abstract (stringlengths 78-2.56k), related_work (stringlengths 92-1.77k), ref_abstract (dict).
1901.01535
2951394060
In this paper, we consider the problem of reconstructing a dense 3D model using images captured from different views. Recent methods based on convolutional neural networks (CNN) allow learning the entire task from data. However, they do not incorporate the physics of image formation such as perspective geometry and occlusion. Instead, classical approaches based on Markov Random Fields (MRF) with ray-potentials explicitly model these physical processes, but they cannot cope with large surface appearance variations across different viewpoints. In this paper, we propose RayNet, which combines the strengths of both frameworks. RayNet integrates a CNN that learns view-invariant feature representations with an MRF that explicitly encodes the physics of perspective projection and occlusion. We train RayNet end-to-end using empirical risk minimization. We thoroughly evaluate our approach on challenging real-world datasets and demonstrate its benefits over a piece-wise trained baseline, hand-crafted models as well as other learning-based approaches.
A major limitation of all aforementioned approaches is that they require full 3D supervision for training, which is quite restrictive. Tulsiani et al. @cite_8 relax these assumptions by formulating a differentiable view consistency loss that measures the inconsistency between the predicted 3D shape and its observation. Similarly, Rezende et al. @cite_17 propose a neural projection layer and a black box renderer for supervising the learning process. Yan et al. @cite_18 and Gwak et al. @cite_3 use 2D silhouettes as supervision for 3D reconstruction from a single image (a minimal sketch of such a projection loss follows this record). While all these methods exploit ray constraints inside the loss function, our goal is to directly integrate the physical properties of the image formation process into the model via unrolled MRF inference with ray potentials. Thus, we are able to significantly reduce the number of parameters in the network, and our network does not need to acquire these first principles from data.
{ "cite_N": [ "@cite_18", "@cite_17", "@cite_3", "@cite_8" ], "mid": [ "2551540143", "2963730200", "2619556892", "2609026071" ], "abstract": [ "Understanding the 3D world is a fundamental problem in computer vision. However, learning a good representation of 3D objects is still an open problem due to the high dimensionality of the data and many factors of variation involved. In this work, we investigate the task of single-view 3D object reconstruction from a learning agent's perspective. We formulate the learning process as an interaction between 3D and 2D representations and propose an encoder-decoder network with a novel projection loss defined by the projective transformation. More importantly, the projection loss enables the unsupervised learning using 2D observation without explicit 3D supervision. We demonstrate the ability of the model in generating 3D volume from a single 2D image with three sets of experiments: (1) learning from single-class objects; (2) learning from multi-class objects and (3) testing on novel object classes. Results show superior performance and better generalization ability for 3D object reconstruction when the projection loss is involved.", "A key goal of computer vision is to recover the underlying 3D structure that gives rise to 2D observations of the world. If endowed with 3D understanding, agents can abstract away from the complexity of the rendering process to form stable, disentangled representations of scene elements. In this paper we learn strong deep generative models of 3D structures, and recover these structures from 2D images via probabilistic inference. We demonstrate high-quality samples and report log-likelihoods on several datasets, including ShapeNet, and establish the first benchmarks in the literature. We also show how these models and their inference networks can be trained jointly, end-to-end, and directly from 2D images without any use of ground-truth 3D labels. This demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.", "", "We study the notion of consistency between a 3D shape and a 2D observation and propose a differentiable formulation which allows computing gradients of the 3D shape given an observation from an arbitrary view. We do so by reformulating view consistency using a differentiable ray consistency (DRC) term. We show that this formulation can be incorporated in a learning framework to leverage different types of multi-view observations e.g. foreground masks, depth, color images, semantics etc. as supervision for learning single-view 3D prediction. We present empirical analysis of our technique in a controlled setting. We also show that this approach allows us to improve over existing techniques for single-view reconstruction of objects from the PASCAL VOC dataset." ] }
1907.04449
2956978206
Although deep neural networks have been widely applied in many application domains, they are found to be vulnerable to adversarial attacks. A recent promising set of attacking techniques have been proposed, which mainly focus on generating adversarial examples under digital-world settings. Such strategies are unfortunately not implementable for any physical-world scenarios such as autonomous driving. In this paper, we present FragGAN, a new GAN-based framework which is capable of generating an adversarial image which differs from the original input image only through replacing a targeted fragment within the image using a corresponding visually indistinguishable adversarial fragment. FragGAN ensures that the resulting entire image is effective in attacking. For any physical-world implementation, an attacker could physically print out the adversarial fragment and then paste it onto the original fragment (e.g., a roadside sign for autonomous driving scenarios). FragGAN also enables clean-label attacks against image classification, as the resulting attacks may succeed even without modifying any essential content of an image. Extensive experiments including physical-world case studies on state-of-the-art autonomous steering and image classification models demonstrate that FragGAN is highly effective and superior to simple extensions of existing approaches. To the best of our knowledge, FragGAN is the first approach that can implement effective and clean-label physical-world attacks.
A very recent set of works took the first step in studying physical-world attacks on static physical objects @cite_2 @cite_27, humans @cite_41 @cite_33, stop signs @cite_18 @cite_47, and roadside signs @cite_12. Although these works prove to be effective under the targeted scenarios and certain assumptions, they mostly focus on a static physical-world scene (e.g., a single snapshot of a stop sign @cite_16 @cite_18), and their generated adversarial samples are visually unrealistic (e.g., a billboard painted in various bright colors that are too obvious for attack purposes @cite_12). Moreover, as discussed earlier, a key technical limitation is that the perturbation-generation process does not consider the background imagery commonly associated with the targeted object (e.g., a stop sign in @cite_16 and a billboard in @cite_12) in the physical world (e.g., the sky or the road). This prevents these techniques from being deployed in realistic scenarios such as autonomous driving, since the attack efficacy may decrease dramatically (or vanish altogether) once the image captured by a car dash camera contains such background imagery besides the stop sign. (A sketch of the expectation-over-transformation idea underlying robust physical attacks follows this record.)
{ "cite_N": [ "@cite_18", "@cite_33", "@cite_41", "@cite_27", "@cite_2", "@cite_47", "@cite_16", "@cite_12" ], "mid": [ "2125085157", "2804342109", "2535873859", "2963118571", "2736899637", "2126628495", "2798302089", "2906946247" ], "abstract": [ "We apply Convolutional Networks (ConvNets) to the task of traffic sign classification as part of the GTSRB competition. ConvNets are biologically-inspired multi-stage architectures that automatically learn hierarchies of invariant features. While many popular vision approaches use hand-crafted features such as HOG or SIFT, ConvNets learn features at every level from data that are tuned to the task at hand. The traditional ConvNet architecture was modified by feeding 1st stage features in addition to 2nd stage features to the classifier. The system yielded the 2nd-best accuracy of 98.97 during phase I of the competition (the best entry obtained 98.98 ), above the human performance of 98.81 , using 32×32 color input images. Experiments conducted after phase 1 produced a new record of 99.17 by increasing the network capacity, and by using greyscale images instead of color. Interestingly, random features still yielded competitive results (97.33 ).", "Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes. Here, we address this question by leveraging recent techniques that transfer adversarial examples from computer vision models with known parameters and architecture to other models with unknown parameters and architecture, and by matching the initial processing of the human visual system. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers.", "Machine learning is enabling a myriad innovations, including new algorithms for cancer diagnosis and self-driving cars. The broad use of machine learning makes it important to understand the extent to which machine-learning algorithms are subject to attack, particularly when used in applications where physical security or safety is at risk. In this paper, we focus on facial biometric systems, which are widely used in surveillance and access control. We define and investigate a novel class of attacks: attacks that are physically realizable and inconspicuous, and allow an attacker to evade recognition or impersonate another individual. We develop a systematic method to automatically generate such attacks, which are realized through printing a pair of eyeglass frames. When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognized or to impersonate another individual. Our investigation focuses on white-box face-recognition systems, but we also demonstrate how similar techniques can be used in black-box scenarios, as well as to avoid face detection.", "While deep learning is remarkably successful on perceptual tasks, it was also shown to be vulnerable to adversarial perturbations of the input. These perturbations denote noise added to the input that was generated specifically to fool the system while being quasi-imperceptible for humans. More severely, there even exist universal perturbations that are input-agnostic but fool the network on the majority of inputs. 
While recent work has focused on image classification, this work proposes attacks against semantic image segmentation: we present an approach for generating (universal) adversarial perturbations that make the network yield a desired target segmentation as output. We show empirically that there exist barely perceptible universal noise patterns which result in nearly the same predicted segmentation for arbitrary inputs. Furthermore, we also show the existence of universal noise which removes a target class (e.g., all pedestrians) from the segmentation while leaving the segmentation mostly unchanged otherwise.", "Standard methods for generating adversarial examples for neural networks do not consistently fool neural network classifiers in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations, limiting their relevance to real-world systems. We demonstrate the existence of robust 3D adversarial objects, and we present the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations. We synthesize two-dimensional adversarial images that are robust to noise, distortion, and affine transformation. We apply our algorithm to complex three-dimensional objects, using 3D-printing to manufacture the first physical adversarial objects. Our results demonstrate the existence of 3D adversarial objects in the physical world.", "In this paper, we provide a survey of the traffic sign detection literature, detailing detection systems for traffic sign recognition (TSR) for driver assistance. We separately describe the contributions of recent works to the various stages inherent in traffic sign detection: segmentation, feature extraction, and final sign detection. While TSR is a well-established research area, we highlight open research issues in the literature, including a dearth of use of publicly available image databases and the over-representation of European traffic signs. Furthermore, we discuss future directions of TSR research, including the integration of context and localization. We also introduce a new public database containing U.S. traffic signs.", "Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. 
With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100% of the images obtained in lab settings, and in 84.8% of the captured video frames obtained on a moving vehicle (field test) for the target classifier.", "Deep Neural Networks (DNNs) have been widely applied in many autonomous systems such as autonomous driving. Recently, DNN testing has been intensively studied to automatically generate adversarial examples, which inject small-magnitude perturbations into inputs to test DNNs under extreme situations. While existing testing techniques prove to be effective, they mostly focus on generating digital adversarial perturbations (particularly for autonomous driving), e.g., changing image pixels, which may never happen in physical world. There is a critical missing piece in the literature on autonomous driving testing: understanding and exploiting both digital and physical adversarial perturbation generation for impacting steering decisions. In this paper, we present DeepBillboard, a systematic physical-world testing approach targeting at a common and practical driving scenario: drive-by billboards. DeepBillboard is capable of generating a robust and resilient printable adversarial billboard, which works under dynamic changing driving conditions including viewing angle, distance, and lighting. The objective is to maximize the possibility, degree, and duration of the steering-angle errors of an autonomous vehicle driving by the generated adversarial billboard. We have extensively evaluated the efficacy and robustness of DeepBillboard through conducting both digital and physical-world experiments. Results show that DeepBillboard is effective for various steering models and scenes. Furthermore, DeepBillboard is sufficiently robust and resilient for generating physical-world adversarial billboard tests for real-world driving under various weather conditions. To the best of our knowledge, this is the first study demonstrating the possibility of generating realistic and continuous physical-world tests for practical autonomous driving systems." ] }
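The robustness-to-physical-conditions issue raised above is usually handled by optimising the perturbation over a distribution of transformations, as in @cite_2. The sketch below is a hedged illustration of that idea, not the cited implementation; `model`, the transform set, and all hyperparameters are assumptions.

```python
import torch

def eot_perturb(model, x, target, steps=200, eps=0.05, lr=0.01):
    """Expectation-over-Transformation style attack: optimise a bounded
    perturbation so it survives random physical-world-like transforms."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Sample a random transform per step (brightness jitter and additive
        # sensor noise as stand-ins for viewpoint and lighting changes).
        b = 1.0 + 0.2 * (torch.rand(1) - 0.5)
        noise = 0.02 * torch.randn_like(x)
        x_t = (b * (x + delta) + noise).clamp(0, 1)
        loss = torch.nn.functional.cross_entropy(model(x_t), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)   # keep the patch quasi-imperceptible
    return delta.detach()
```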
1907.04449
2956978206
Although deep neural networks have been widely applied in many application domains, they are found to be vulnerable to adversarial attacks. A recent promising set of attacking techniques have been proposed, which mainly focus on generating adversarial examples under digital-world settings. Such strategies are unfortunately not implementable for any physical-world scenarios such as autonomous driving. In this paper, we present FragGAN, a new GAN-based framework which is capable of generating an adversarial image which differs from the original input image only through replacing a targeted fragment within the image using a corresponding visually indistinguishable adversarial fragment. FragGAN ensures that the resulting entire image is effective in attacking. For any physical-world implementation, an attacker could physically print out the adversarial fragment and then paste it onto the original fragment (e.g., a roadside sign for autonomous driving scenarios). FragGAN also enables clean-label attacks against image classification, as the resulting attacks may succeed even without modifying any essential content of an image. Extensive experiments including physical-world case studies on state-of-the-art autonomous steering and image classification models demonstrate that FragGAN is highly effective and superior to simple extensions of existing approaches. To the best of our knowledge, FragGAN is the first approach that can implement effective and clean-label physical-world attacks.
GANs were first introduced in @cite_20, implemented as a system of two neural networks contesting with each other in a zero-sum game framework. GANs have been shown to achieve visually appealing results in both face generation @cite_4 and manipulation @cite_37. To further improve the quality of synthesized images, image-to-image GANs have been proposed, such as the conditional GAN @cite_28 and the CycleGAN @cite_40, which learn a loss function to train the mapping from the input image to the output image. @cite_6 presents AdvGAN, which leverages GANs to produce adversarial samples with a high success rate. Different from these GAN-based approaches, FragGAN focuses on generating a physically implementable adversarial image in which only arbitrarily selected fragments of the original image are replaced by generated adversarial fragments. (A minimal sketch of the underlying minimax training loop follows this record.)
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_28", "@cite_6", "@cite_40", "@cite_20" ], "mid": [ "2519536754", "2618574778", "2963073614", "2783555701", "2962793481", "2099471712" ], "abstract": [ "Realistic image manipulation is challenging because it requires modifying the image appearance in a user-controlled way, while preserving the realism of the result. Unless the user has considerable artistic skill, it is easy to “fall off” the manifold of natural images while editing. In this paper, we propose to learn the natural image manifold directly from data using a generative adversarial neural network. We then define a class of image editing operations, and constrain their output to lie on that learned manifold at all times. The model automatically adjusts the output keeping all edits as realistic as possible. All our manipulations are expressed in terms of constrained optimization and are applied in near-real time. We evaluate our algorithm on the task of realistic photo manipulation of shape and color. The presented method can further be used for changing one image to look like the other, as well as generating novel imagery from scratch based on user’s scribbles.", "", "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either.", "Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires more research efforts. In this paper, we propose AdvGAN to generate adversarial examples with generative adversarial networks (GANs), which can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate adversarial perturbations efficiently for any instance, so as to potentially accelerate adversarial training as defenses. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have high attack success rate under state-of-the-art defenses compared to other attacks. 
Our attack placed first with 92.76% accuracy on a public MNIST black-box attack challenge.", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
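For reference, the zero-sum game described above reduces to alternating gradient steps on a discriminator and a generator. The following toy loop is a minimal sketch of the minimax training procedure from @cite_20 (with the common non-saturating generator loss); the architectures and data are placeholders.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # generator
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(128, 2) * 0.5 + 1.0      # stand-in data distribution

for step in range(1000):
    z = torch.randn(128, 16)
    fake = G(z)
    # Discriminator step: push real samples toward 1 and fakes toward 0.
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: make D label fakes as real (non-saturating loss).
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```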
1907.04360
2961140575
Behaviour cloning is a commonly used strategy for imitation learning and can be extremely effective in constrained domains. However, in cases where the dynamics of an environment may be state dependent and varying, behaviour cloning places a burden on model capacity and the number of demonstrations required. This paper introduces switching density networks, which rely on a categorical reparametrisation for hybrid system identification. This results in a network comprising a classification layer that is followed by a regression layer. We use switching density networks to predict the parameters of hybrid control laws, which are toggled by a switching layer to produce different controller outputs, when conditioned on an input state. This work shows how switching density networks can be used for hybrid system identification in a variety of tasks, successfully identifying the key joint angle goals that make up manipulation tasks, while simultaneously learning image-based goal classifiers and regression networks that predict joint angles from images. We also show that they can cluster the phase space of an inverted pendulum, identifying the balance, spin and pump controllers required to solve this task. Switching density networks can be difficult to train, but we introduce a cross entropy regularisation loss that stabilises training.
Numerous models and approaches @cite_18 have been developed to address the learning problem formulated above. Gaussian mixture models fit using expectation maximisation @cite_31 are widely used for clustering (see the EM sketch after this record), while their switching state space analogue, Gaussian emission hidden Markov models, have a long history of application in sequence learning. Although typically fit using the Baum-Welch algorithm @cite_32 (a form of expectation maximisation), variational approaches have also been proposed for a broader class of switching state space models @cite_22.
{ "cite_N": [ "@cite_18", "@cite_31", "@cite_32", "@cite_22" ], "mid": [ "1530730921", "2049633694", "2105594594", "2102716594" ], "abstract": [ "1. Basic Principles: The Operating Regime Approach 2. Modelling: Fuzzy Set Methods for Local Modelling Identification 3. Modelling of Electrically Stimulated Muscle 4. Process Modelling Using a Functional State Approach 5. Markov Mixtures of Experts 6. Active Learning With Mixture Models 7. Local Learning in Local Model Networks 8. Side Effects of Normalising Basic Functions 9. Control: Heterogeneous Control Laws 10. Local Laguerre Models 11. Multiple Model Adaptive Control 12. H Control Using Multiple Linear Models 13. Synthesis of Fuzzy Control Systems Based on Linear Takagi-Sugeno Fuzzy Models", "", "The basic theory of Markov chains has been known to mathematicians and engineers for close to 80 years, but it is only in the past decade that it has been applied explicitly to problems in speech processing. One of the major reasons why speech models, based on Markov chains, have not been developed until recently was the lack of a method for optimizing the parameters of the Markov model to match observed signal patterns. Such a method was proposed in the late 1960's and was immediately applied to speech processing in several research institutions. Continued refinements in the theory and implementation of Markov modelling techniques have greatly enhanced the method, leading to a wide range of applications of these models. It is the purpose of this tutorial paper to give an introduction to the theory of Markov models, and to illustrate how they have been applied to problems in speech recognition.", "We introduce a new statistical model for time series that iteratively segments data into regimes with approximately linear dynamics and learns the parameters of each of these linear regimes. This model combines and generalizes two of the most widely used stochastic time-series models— hidden Markov models and linear dynamical systems—and is closely related to models that are widely used in the control and econometrics literatures. It can also be derived by extending the mixture of experts neural network (Jacobs, Jordan, Nowlan, & Hinton, 1991) to its fully dynamical version, in which both expert and gating networks are recurrent. Inferring the posterior probabilities of the hidden states of this model is computationally intractable, and therefore the exact expectation maximization (EM) algorithm cannot be applied. However, we present a variational approximation that maximizes a lower bound on the log-likelihood and makes use of both the forward and backward recursions for hidden Markov models and the Kalman filter recursions for linear dynamical systems. We tested the algorithm on artificial data sets and a natural data set of respiration force from a patient with sleep apnea. The results suggest that variational approximations are a viable method for inference and learning in switching state-space models." ] }
1907.04360
2961140575
Behaviour cloning is a commonly used strategy for imitation learning and can be extremely effective in constrained domains. However, in cases where the dynamics of an environment may be state dependent and varying, behaviour cloning places a burden on model capacity and the number of demonstrations required. This paper introduces switching density networks, which rely on a categorical reparametrisation for hybrid system identification. This results in a network comprising a classification layer that is followed by a regression layer. We use switching density networks to predict the parameters of hybrid control laws, which are toggled by a switching layer to produce different controller outputs, when conditioned on an input state. This work shows how switching density networks can be used for hybrid system identification in a variety of tasks, successfully identifying the key joint angle goals that make up manipulation tasks, while simultaneously learning image-based goal classifiers and regression networks that predict joint angles from images. We also show that they can cluster the phase space of an inverted pendulum, identifying the balance, spin and pump controllers required to solve this task. Switching density networks can be difficult to train, but we introduce a cross entropy regularisation loss that stabilises training.
Learning for switching state space models can also be considered from a changepoint detection perspective, and a range of numerical inference techniques have been used to detect changepoints in sequential data @cite_13. More recently, variational and gradient-based inference strategies for Bayesian learning have proved useful in hierarchical modelling @cite_29 @cite_27 and variational auto-encoding @cite_26 (a Gumbel-Softmax sketch follows this record).
{ "cite_N": [ "@cite_27", "@cite_29", "@cite_26", "@cite_13" ], "mid": [ "2547875792", "2108424780", "", "2127074553" ], "abstract": [ "Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.", "Hierarchical Bayesian networks and neural networks with stochastic hidden units are commonly perceived as two separate types of models. We show that either of these types of models can often be transformed into an instance of the other, by switching between centered and differentiable non-centered parameterizations of the latent variables. The choice of parameterization greatly influences the efficiency of gradient-based posterior inference; we show that they are often complementary to eachother, we clarify when each parameterization is preferred and show how inference can be made robust. In the noncentered form, a simple Monte Carlo estimator of the marginal likelihood can be used for learning the parameters. Theoretical results are supported by experiments.", "", "This paper presents a Bayesian approach to the location of a discontinuity in linearly modelled data. A matrix formulation is introduced which allows the modelling of changepoints in general linear models. Linear models investigated include abrupt changes in the mean of a Gaussian random variable, and piecewise polynomials such as splines, as well as autoregressive models. The approach facilitates the removal of nuisance parameters by integration. A general recursive technique for updating Bayesian posterior densities, which can result in large savings in computation, is also described. >" ] }
1907.04360
2961140575
Behaviour cloning is a commonly used strategy for imitation learning and can be extremely effective in constrained domains. However, in cases where the dynamics of an environment may be state dependent and varying, behaviour cloning places a burden on model capacity and the number of demonstrations required. This paper introduces switching density networks, which rely on a categorical reparametrisation for hybrid system identification. This results in a network comprising a classification layer that is followed by a regression layer. We use switching density networks to predict the parameters of hybrid control laws, which are toggled by a switching layer to produce different controller outputs, when conditioned on an input state. This work shows how switching density networks can be used for hybrid system identification in a variety of tasks, successfully identifying the key joint angle goals that make up manipulation tasks, while simultaneously learning image-based goal classifiers and regression networks that predict joint angles from images. We also show that they can cluster the phase space of an inverted pendulum, identifying the balance, spin and pump controllers required to solve this task. Switching density networks can be difficult to train, but we introduce a cross entropy regularisation loss that stabilises training.
Hierarchical modelling is an effective means of incorporating structure into a learning problem, avoiding sample-inefficient learning and improving generalisation through abstraction. Work on options learning @cite_16 @cite_11 and skill identification @cite_6 @cite_0 has paid significant attention to hierarchical learning, although this remains a particular challenge for visuomotor control (an SMDP-style update is sketched after this record).
{ "cite_N": [ "@cite_0", "@cite_16", "@cite_6", "@cite_11" ], "mid": [ "2211996086", "2109910161", "2217025414", "" ], "abstract": [ "We present a method for segmenting a set of unstructured demonstration trajectories to discover reusable skills using inverse reinforcement learning (IRL). Each skill is characterised by a latent reward function which the demonstrator is assumed to be optimizing. The skill boundaries and the number of skills making up each demonstration are unknown. We use a Bayesian nonparametric approach to propose skill segmentations and maximum entropy inverse reinforcement learning to infer reward functions from the segments. This method produces a set of Markov Decision Processes (MDPs) that best describe the input trajectories. We evaluate this approach in a car driving domain and a simulated quadcopter obstacle course, showing that it is able to recover demonstrated skills more effectively than existing methods.", "Learning, planning, and representing knowledge at multiple levels of temporal ab- straction are key, longstanding challenges for AI. In this paper we consider how these challenges can be addressed within the mathematical framework of reinforce- ment learning and Markov decision processes (MDPs). We extend the usual notion of action in this framework to include options—closed-loop policies for taking ac- tion over a period of time. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as mus- cle twitches and joint torques. Overall, we show that options enable temporally abstract knowledge and action to be included in the reinforcement learning frame- work in a natural and general way. In particular, we show that options may be used interchangeably with primitive actions in planning methods such as dynamic pro- gramming and in learning methods such as Q-learning. Formally, a set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), and the theory of SMDPs provides the foundation for the theory of options. However, the most interesting issues concern the interplay between the underlying MDP and the SMDP and are thus beyond SMDP theory. We present results for three such cases: 1) we show that the results of planning with options can be used during execution to interrupt options and thereby perform even better than planned, 2) we introduce new intra-option methods that are able to learn about an option from fragments of its execution, and 3) we propose a notion of subgoal that can be used to improve the options themselves. All of these results have precursors in the existing literature; the contribution of this paper is to establish them in a simpler and more general setting with fewer changes to the existing reinforcement learning framework. In particular, we show that these results can be obtained without committing to (or ruling out) any particular approach to state abstraction, hierarchy, function approximation, or the macro-utility problem.", "Skill discovery algorithms in reinforcement learning typically identify single states or regions in state space that correspond to potential task-specific subgoals. However, such methods do not directly address the question of how many distinct skills are appropriate for solving the tasks that the agent faces. This can be highly inefficient when many identified subgoals correspond to the same underlying skill, but are all used individually as skill goals. 
Furthermore, skills created in this manner are often only transferable to tasks that share identical state spaces, since corresponding subgoals across tasks are not merged into a single skill goal. We show that these problems can be overcome by clustering subgoal data defined in an agent-space and using the resulting clusters as templates for skill termination conditions. Clustering via a Dirichlet process mixture model is used to discover a minimal, sufficient collection of portable skills.", "" ] }
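For concreteness, the options framework @cite_16 treats a set of options defined over an MDP as a semi-Markov decision process, which gives the following SMDP Q-learning update; the toy table sizes are assumptions.

```python
import numpy as np

def smdp_q_update(Q, s, o, reward_seq, s_next, gamma=0.99, alpha=0.1):
    """Q: (num_states, num_options) table; reward_seq: rewards collected
    while option o executed from state s until it terminated in s_next."""
    k = len(reward_seq)
    # Discounted return accumulated over the k steps the option ran.
    R = sum((gamma ** t) * r for t, r in enumerate(reward_seq))
    # Bootstrap with gamma^k, since k time steps elapsed before control returned.
    target = R + (gamma ** k) * Q[s_next].max()
    Q[s, o] += alpha * (target - Q[s, o])
    return Q

Q = np.zeros((10, 3))                 # toy: 10 states, 3 options
Q = smdp_q_update(Q, s=0, o=1, reward_seq=[0.0, 0.0, 1.0], s_next=4)
```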
1907.04360
2961140575
Behaviour cloning is a commonly used strategy for imitation learning and can be extremely effective in constrained domains. However, in cases where the dynamics of an environment may be state dependent and varying, behaviour cloning places a burden on model capacity and the number of demonstrations required. This paper introduces switching density networks, which rely on a categorical reparametrisation for hybrid system identification. This results in a network comprising a classification layer that is followed by a regression layer. We use switching density networks to predict the parameters of hybrid control laws, which are toggled by a switching layer to produce different controller outputs, when conditioned on an input state. This work shows how switching density networks can be used for hybrid system identification in a variety of tasks, successfully identifying the key joint angle goals that make up manipulation tasks, while simultaneously learning image-based goal classifiers and regression networks that predict joint angles from images. We also show that they can cluster the phase space of an inverted pendulum, identifying the balance, spin and pump controllers required to solve this task. Switching density networks can be difficult to train, but we introduce a cross entropy regularisation loss that stabilises training.
Our work is inspired by sequential composition theories in robotics @cite_23, where tasks are solved by moving between sub-controllers lying within the domains of one another (a toy composition of proportional laws is sketched after this record). Here, we seek to identify the sub-controllers required for a given task in an end-to-end fashion, from demonstration sequences. Learning from demonstration (LfD) @cite_1 is widely acknowledged as a particularly useful paradigm for robot programming. Significant progress has been made in LfD, moving beyond the direct replication of motions to produce more robust approaches @cite_7 through the introduction of more general schemes for modelling motion, such as dynamic movement primitives @cite_8, linear dynamical attractor systems @cite_25, sparse online Gaussian processes @cite_15 @cite_19, and conditionally linear Gaussian models @cite_20 @cite_21 that can be used for trajectory optimisation. It is important to note that each of these systems behaves as a hybrid system, decomposing a state space into specific regions and learning appropriate dynamics for each region.
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_21", "@cite_1", "@cite_19", "@cite_23", "@cite_15", "@cite_25", "@cite_20" ], "mid": [ "1540685400", "2161395589", "2121103318", "1986014385", "2105401337", "2132714442", "2148718436", "2056884876", "2165300526" ], "abstract": [ "", "We provide a general approach for learning robotic motor skills from human demonstration. To represent an observed movement, a non-linear differential equation is learned such that it reproduces this movement. Based on this representation, we build a library of movements by labeling each recorded movement according to task and context (e.g., grasping, placing, and releasing). Our differential equation is formulated such that generalization can be achieved simply by adapting a start and a goal parameter in the equation to the desired position values of a movement. For object manipulation, we present how our framework extends to the control of gripper orientation and finger position. The feasibility of our approach is demonstrated in simulation as well as on the Sarcos dextrous robot arm. The robot learned a pick-and-place operation and a water-serving task and could generalize these tasks to novel situations.", "We present a policy search method that uses iteratively refitted local linear models to optimize trajectory distributions for large, continuous problems. These trajectory distributions can be used within the framework of guided policy search to learn policies with an arbitrary parameterization. Our method fits time-varying linear dynamics models to speed up learning, but does not rely on learning a global model, which can be difficult when the dynamics are complex and discontinuous. We show that this hybrid approach requires many fewer samples than model-free methods, and can handle complex, nonsmooth dynamics that can pose a challenge for model-based techniques. We present experiments showing that our method can be used to learn complex neural network policies that successfully execute simulated robotic manipulation tasks in partially observed environments with numerous contact discontinuities and underactuation.", "We present a comprehensive survey of robot Learning from Demonstration (LfD), a technique that develops policies from example state to action mappings. We introduce the LfD design choices in terms of demonstrator, problem space, policy derivation and performance, and contribute the foundations for a structure in which to categorize LfD research. Specifically, we analyze and categorize the multiple ways in which examples are gathered, ranging from teleoperation to imitation, as well as the various techniques for policy derivation, including matching functions, dynamics models and plans. To conclude we discuss LfD limitations and related promising areas for future research.", "We are interested in transferring control policies for arbitrary tasks from a human to a robot. Using interactive demonstration via teleoperation as our transfer scenario, we cast learning as statistical regression over sensor-actuator data pairs. Our desire for interactive learning necessitates algorithms that are incremental and realtime. We examine locally weighted projection regression, a popular robotic learning algorithm, and sparse online Gaussian processes in this domain on one synthetic and several robot-generated data sets. 
We evaluate each algorithm in terms of function approximation, learned task performance, and scalability to large data sets.", "In this thesis we present a technique for the composition of robot control laws in dynamical environments. We propose a challenging robotic task, called Dynamical Pick and Place, in which a robot equipped with merely a soft paddle must capture and contain a ball, safely negotiate it past obstacles, and bring it to rest at a desired location. We develop a composition technique for local controllers that provides a formal guarantee of the stability of the switching behavior required in this task, and provide descriptive statistics of a working implementation. Our robotic system displays unusually dexterous behavior in the face of significant system noise, and recovers gracefully from large unexpected perturbations caused by the experimenters. Our approach to controller composition makes use of the funnel as a metaphor for asymptotic stability, is motivated by the pre-image backchaining techniques developed by Lozano-Perez, Mason and Taylor, and extends their ideas from quasi-static environments to systems with full dynamics. We introduce the concepts of \"dynamical obstacle avoidance\" and \"dynamical safety\" for systems with only intermittent control of their environment, and show that it is important not only that the system avoid obstacles directly, but also that the system will never reach an obstacle before getting another chance to effect control. The Dynamical Pick and Place problem addressed by this thesis is a difficult control problem, but an easy planning problem. The system we develop provides a way to engage more powerful AI planning tools without sacrificing access to the stability arguments of dynamical systems theory.", "Using data collected from human teleoperation, our goal is to learn a control policy that maps perception to actuation. Such policies are potentially multi-valued with regard to perception with a single input mapping to multiple outputs depending on the user's objective at a particular time. We propose a multi-valued function regressor to learn a larger class of robot control policies from human demonstration and extend the Hierarchical Dirichlet Process Hidden Markov Model to discover latent variables representing unknown objectives in the demonstrated data and the transitions between these objectives. Each of these objectives requires only a single-valued policy function, and thus can be learned with a Gaussian process function regressor. The learned transitions between these objectives determine the correct actuation where the complete policy function is multi-valued. We present the results of experiments conducted on the Nao humanoid robot platform.", "In this paper we present a novel approach for representing trajectories using sequenced linear dynamical systems. This method uses a closed-form least-squares procedure to fit a single linear dynamical system (LDS) to a simple trajectory. These LDS estimates form the elemental building blocks used to describe complicated trajectories through an automatic segmentation procedure that can represent complicated trajectories with high accuracy. Each estimated LDS induces a control law, mapping current state to desired state, that encodes the target trajectory in a generative manner. 
We provide a proof of stability of the control law and show how multiple trajectories can be incorporated to improve the generalization ability of the system.", "Many time-series such as human movement data consist of a sequence of basic actions, e.g., forehands and backhands in tennis. Automatically extracting and characterizing such actions is an important problem for a variety of different applications. In this paper, we present a probabilistic segmentation approach in which an observed time-series is modeled as a concatenation of segments corresponding to different basic actions. Each segment is generated through a noisy transformation of one of a few hidden trajectories representing different types of movement, with possible time re-scaling. We analyze three different approximation methods for dealing with model intractability, and demonstrate how the proposed approach can successfully segment table tennis movements recorded using a robot arm as haptic input device." ] }
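The sequential-composition idea from @cite_23 can be illustrated with a toy hybrid controller: each proportional law funnels the state toward a sub-goal lying inside the domain of the next law. Goals, gains, and thresholds below are made-up values for illustration only.

```python
import numpy as np

controllers = [                       # (goal, gain) pairs; illustrative values
    (np.array([0.5, 0.0]), 2.0),
    (np.array([1.0, 0.5]), 2.0),
    (np.array([1.0, 1.0]), 2.0),
]

def active_controller(x):
    # Hand control to the first controller whose goal is not yet reached,
    # so sub-goals are visited in sequence ("funnels" feeding one another).
    for goal, gain in controllers:
        if np.linalg.norm(x - goal) > 0.05:
            return goal, gain
    return controllers[-1]

x = np.zeros(2)
for _ in range(500):                  # simple Euler rollout of u = K (g - x)
    goal, gain = active_controller(x)
    x = x + 0.01 * gain * (goal - x)
```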
1907.04360
2961140575
Behaviour cloning is a commonly used strategy for imitation learning and can be extremely effective in constrained domains. However, in cases where the dynamics of an environment may be state dependent and varying, behaviour cloning places a burden on model capacity and the number of demonstrations required. This paper introduces switching density networks, which rely on a categorical reparametrisation for hybrid system identification. This results in a network comprising a classification layer that is followed by a regression layer. We use switching density networks to predict the parameters of hybrid control laws, which are toggled by a switching layer to produce different controller outputs, when conditioned on an input state. This work shows how switching density networks can be used for hybrid system identification in a variety of tasks, successfully identifying the key joint angle goals that make up manipulation tasks, while simultaneously learning image-based goal classifiers and regression networks that predict joint angles from images. We also show that they can cluster the phase space of an inverted pendulum, identifying the balance, spin and pump controllers required to solve this task. Switching density networks can be difficult to train, but we introduce a cross entropy regularisation loss that stabilises training.
More recently, trajectory optimisation approaches have been extended to incorporate end-to-end learning, demonstrating robust task-level visuomotor control @cite_30 through guided policy search. End-to-end learning has allowed for the use of domain transfer to facilitate one-shot learning @cite_14 from human video demonstrations, and for the use of reinforcement learning to learn optimised control policies @cite_12 @cite_33. Unfortunately, end-to-end learning approaches typically lack interpretability and are difficult to verify without policy distillation @cite_5. Prior work fits a sequence of proportional control laws to end-to-end model demonstrations using particle filters, in an attempt to obtain a more interpretable control system, but this approach is vulnerable to performance loss if important properties of the network fail to be inferred. In contrast, this paper shows that it is possible to learn switching proportional control laws in an end-to-end fashion, by embedding this structure into the learning process (a sketch follows this record).
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_33", "@cite_5", "@cite_12" ], "mid": [ "2964161785", "2963703448", "2963713397", "2964231903", "2757631751" ], "abstract": [ "Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.", "", "We propose a general deep reinforcement learning method and apply it to robot manipulation tasks. Our approach leverages demonstration data to assist a reinforcement learning agent in learning to solve a wide range of tasks, mainly previously unsolved. We train visuomotor policies end-to-end to learn a direct mapping from RGB camera inputs to joint velocities. Our experiments indicate that our reinforcement and imitation approach can solve contact-rich robot manipulation tasks that neither the state-of-the-art reinforcement nor imitation learning method can solve alone. We also illustrate that these policies achieved zero-shot sim2real transfer by training with large visual and dynamics variations.", "While deep reinforcement learning has successfully solved many challenging control tasks, its real-world applicability has been limited by the inability to ensure the safety of learned policies. We propose an approach to verifiable reinforcement learning by training decision tree policies, which can represent complex policies (since they are nonparametric), yet can be efficiently verified using existing techniques (since they are highly structured). The challenge is that decision tree policies are difficult to train. We propose VIPER, an algorithm that combines ideas from model compression and imitation learning to learn decision tree policies guided by a DNN policy (called the oracle) and its Q-function, and show that it substantially outperforms two baselines. We use VIPER to (i) learn a provably robust decision tree policy for a variant of Atari Pong with a symbolic state space, (ii) learn a decision tree policy for a toy game based on Pong that provably never loses, and (iii) learn a provably stable decision tree policy for cart-pole. In each case, the decision tree policy achieves performance equal to that of the original DNN policy.", "Dexterous multi-fingered hands are extremely versatile and provide a generic way to perform multiple tasks in human-centric environments. However, effectively controlling them remains challenging due to their high dimensionality and large number of potential contacts. 
Deep reinforcement learning (DRL) provides a model-agnostic approach to control complex dynamical systems, but has not been shown to scale to high-dimensional dexterous manipulation. Furthermore, deployment of DRL on physical systems remains challenging due to sample inefficiency. Thus, the success of DRL in robotics has thus far been limited to simpler manipulators and tasks. In this work, we show that model-free DRL with natural policy gradients can effectively scale up to complex manipulation tasks with a high-dimensional 24-DoF hand, and solve them from scratch in simulated experiments. Furthermore, with the use of a small number of human demonstrations, the sample complexity can be significantly reduced, and enable learning within the equivalent of a few hours of robot experience. We demonstrate successful policies for multiple complex tasks: object relocation, in-hand manipulation, tool use, and door opening." ] }
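Based on the abstract's description (a classification layer followed by a regression layer, with a categorical reparametrisation), a switching density network might be sketched as below. This is our reading, not the authors' code: layer sizes, the Gumbel-Softmax relaxation, and the proportional-law parameterisation u = K(g − x) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchingDensityNet(nn.Module):
    def __init__(self, state_dim, n_regimes):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.switch = nn.Linear(64, n_regimes)            # classification layer
        self.goals = nn.Parameter(torch.randn(n_regimes, state_dim))
        self.log_gain = nn.Parameter(torch.zeros(n_regimes))

    def forward(self, x, tau=0.5):
        h = self.trunk(x)
        z = F.gumbel_softmax(self.switch(h), tau=tau)     # soft one-hot regime
        goal = z @ self.goals                              # selected sub-goal
        gain = (z @ self.log_gain.exp().unsqueeze(1)).squeeze(-1)
        return gain.unsqueeze(-1) * (goal - x), z          # u = K (g - x)

net = SwitchingDensityNet(state_dim=4, n_regimes=3)
u, z = net(torch.randn(8, 4))   # batched control outputs and regime weights
```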
1907.04360
2961140575
Behaviour cloning is a commonly used strategy for imitation learning and can be extremely effective in constrained domains. However, in cases where the dynamics of an environment may be state dependent and varying, behaviour cloning places a burden on model capacity and the number of demonstrations required. This paper introduces switching density networks, which rely on a categorical reparametrisation for hybrid system identification. This results in a network comprising a classification layer that is followed by a regression layer. We use switching density networks to predict the parameters of hybrid control laws, which are toggled by a switching layer to produce different controller outputs, when conditioned on an input state. This work shows how switching density networks can be used for hybrid system identification in a variety of tasks, successfully identifying the key joint angle goals that make up manipulation tasks, while simultaneously learning image-based goal classifiers and regression networks that predict joint angles from images. We also show that they can cluster the phase space of an inverted pendulum, identifying the balance, spin and pump controllers required to solve this task. Switching density networks can be difficult to train, but we introduce a cross entropy regularisation loss that stabilises training.
In computer vision, spatial transformers @cite_9 and capsule networks @cite_17 embed learnable structured transformations in an attempt to better capture the relational properties of image attributes in convolutional neural networks. Without this structure, convolutional neural networks can learn jumbled image representations @cite_4 . This work shows that mixture density networks suffer from a similar problem, which switching density networks address. Switching density networks are conceptually similar to a previously proposed stochastic neural network architecture, which uses a switching structure to learn reusable skills in a reinforcement learning setting. Our work differs by considering the use of switching structures for parameter prediction in state space models, thereby incorporating known controller structure into the learning process in a lightly supervised manner.
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_17" ], "mid": [ "603908379", "", "2963703618" ], "abstract": [ "Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.", "", "A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule." ] }
1907.04360
2961140575
Behaviour cloning is a commonly used strategy for imitation learning and can be extremely effective in constrained domains. However, in cases where the dynamics of an environment may be state dependent and varying, behaviour cloning places a burden on model capacity and the number of demonstrations required. This paper introduces switching density networks, which rely on a categorical reparametrisation for hybrid system identification. This results in a network comprising a classification layer that is followed by a regression layer. We use switching density networks to predict the parameters of hybrid control laws, which are toggled by a switching layer to produce different controller outputs, when conditioned on an input state. This work shows how switching density networks can be used for hybrid system identification in a variety of tasks, successfully identifying the key joint angle goals that make up manipulation tasks, while simultaneously learning image-based goal classifiers and regression networks that predict joint angles from images. We also show that they can cluster the phase space of an inverted pendulum, identifying the balance, spin and pump controllers required to solve this task. Switching density networks can be difficult to train, but we introduce a cross entropy regularisation loss that stabilises training.
Switching density networks are closely related to mixture density networks @cite_10 , a family of neural networks constructed using @math output distributions. In the Gaussian mixture case, MDNs fit a weighted combination of Gaussian distributions, using mean @math , variance @math and normalised weight parameters @math , which are predicted using a neural network. Unfortunately, there is no direct link between the weight components and the mean or variance components, so mixture density networks often learn seemingly arbitrary associations between them. We illustrate this experimentally in Section , showing that an MDN trained to predict manipulator joint angles will use only a single mixture component for completely different joint angle predictions, somewhat unintuitively learning to change the mean and variance parameters instead of toggling between mixture components. This occurs because no structure in the network enforces mixture consistency.
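To make the mixture parameterisation above concrete, the following is a minimal MDN output head (a sketch in PyTorch; the class and variable names are our own and this is not the implementation evaluated in the cited work):

```python
# Minimal mixture density network (MDN) head: a sketch of the model family
# described above, not the authors' implementation. Assumes PyTorch.
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    def __init__(self, in_dim, n_components, out_dim):
        super().__init__()
        self.n_components = n_components
        self.out_dim = out_dim
        # One linear map per parameter group: weights pi, means mu, variances sigma^2.
        self.pi = nn.Linear(in_dim, n_components)
        self.mu = nn.Linear(in_dim, n_components * out_dim)
        self.log_sigma = nn.Linear(in_dim, n_components * out_dim)

    def forward(self, h):
        pi = torch.softmax(self.pi(h), dim=-1)  # normalised mixture weights
        mu = self.mu(h).view(-1, self.n_components, self.out_dim)
        sigma = torch.exp(self.log_sigma(h)).view(-1, self.n_components, self.out_dim)
        return pi, mu, sigma

# Note: nothing ties a given weight component to a consistent mode of the
# target distribution, which is exactly the consistency problem noted above.
```

Because the weight, mean and variance heads are independent linear maps of the same features, no structure binds a particular weight component to a particular mode, which is the problem switching density networks are designed to remove.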
{ "cite_N": [ "@cite_10" ], "mid": [ "1579853615" ], "abstract": [ "Minimization of a sum-of-squares or cross-entropy error function leads to network outputs which approximate the conditional averages of the target data, conditioned on the input vector. For classifications problems, with a suitably chosen target coding scheme, these averages represent the posterior probabilities of class membership, and so can be regarded as optimal. For problems involving the prediction of continuous variables, however, the conditional averages provide only a very limited description of the properties of the target variables. This is particularly true for problems in which the mapping to be learned is multi-valued, as often arises in the solution of inverse problems, since the average of several correct target values is not necessarily itself a correct value. In order to obtain a complete description of the data, for the purposes of predicting the outputs corresponding to new input vectors, we must model the conditional probability distribution of the target data, again conditioned on the input vector. In this paper we introduce a new class of network models obtained by combining a conventional neural network with a mixture density model. The complete system is called a Mixture Density Network, and can in principle represent arbitrary conditional probability distributions in the same way that a conventional neural network can represent arbitrary functions. We demonstrate the effectiveness of Mixture Density Networks using both a toy problem and a problem involving robot inverse kinematics." ] }
1907.04404
2957889783
In order to facilitate further research in stereo reconstruction with multi-date satellite images, the goal of this paper is to provide a set of stereo-rectified images and the associated groundtruthed disparities for 10 AOIs (Area of Interest) drawn from two sources: 8 AOIs from IARPA's MVS Challenge dataset and 2 AOIs from the CORE3D-Public dataset. The disparities were groundtruthed by first constructing a fused DSM from the stereo pairs and by aligning 30 cm LiDAR with the fused DSM. Unlike the existing benchmarking datasets, we have also carried out a quantitative evaluation of our groundtruthed disparities using human annotated points in two of the AOIs. Additionally, the rectification accuracy in our dataset is comparable to the same in the existing state-of-the-art stereo datasets. In general, we have used the WorldView-3 (WV3) images for the dataset, the exception being the UCSD area for which we have used both WV3 and WorldView-2 (WV2) images. All of the dataset images are now in the public domain. Since multi-date satellite images frequently include images acquired in different seasons (which creates challenges in finding corresponding pairs of pixels for stereo), our dataset also includes for each image a building mask over which the disparities estimated by stereo should prove reliable. Additional metadata included in the dataset includes information about each image's acquisition date and time, the azimuth and elevation angles of the camera, and the intersection angles for the two views in a stereo pair. Also included in the dataset are both quantitative and qualitative analyses of the accuracy of the groundtruthed disparity maps. Our dataset is available for download at this https URL
3D reconstruction is a popular area of research in the computer vision community, and a number of groundtruthed datasets exist for benchmarking stereo matching algorithms. Although synthetic datasets created from rendered scenes, such as the MPI Sintel stereo dataset @cite_15 , might prove useful for certain tasks, they do not necessarily capture the diversity and complexity of real-world images. Since our dataset has been created to serve as a benchmark for binocular stereo, it is sufficient to restrict our discussion of related work to datasets that focus on binocular stereo. The well-known Tsukuba image pair @cite_8 was one of the first stereo datasets and contains disparity maps created using manual annotation. Since then, multiple attempts have been made to create more accurate datasets; some of the most popular are the Middlebury, KITTI and ETH3D datasets.
{ "cite_N": [ "@cite_15", "@cite_8" ], "mid": [ "1513100184", "2108688738" ], "abstract": [ "Ground truth optical flow is difficult to measure in real scenes with natural motion. As a result, optical flow data sets are restricted in terms of size, complexity, and diversity, making optical flow algorithms difficult to train and test on realistic data. We introduce a new optical flow data set derived from the open source 3D animated short film Sintel. This data set has important features not present in the popular Middlebury flow evaluation: long sequences, large motions, specular reflections, motion blur, defocus blur, and atmospheric effects. Because the graphics data that generated the movie is open source, we are able to render scenes under conditions of varying complexity to evaluate where existing flow algorithms fail. We evaluate several recent optical flow algorithms and find that current highly-ranked methods on the Middlebury evaluation have difficulty with this more complex data set suggesting further research on optical flow estimation is needed. To validate the use of synthetic data, we compare the image- and flow-statistics of Sintel to those of real films and videos and show that they are similar. The data set, metrics, and evaluation website are publicly available.", "In stereo algorithms with more than two cameras, the improvement of accuracy is often reported since they are robust against noise. However, another important aspect of the polynocular stereo, that is the ability of occlusion detection, has been paid less attention. We intensively analyzed the occlusion in the camera matrix stereo (SEA) and developed a simple but effective method to detect the presence of occlusion and to eliminate its effect in the correspondence search. By considering several statistics on the occlusion and the accuracy in the SEA, we derived a few base masks which represent occlusion patterns and are effective for the detection of occlusion. Several experiments using typical indoor scenes showed quite good performance to obtain dense and accurate depth maps even at the occluding boundaries of objects." ] }
1907.04404
2957889783
In order to facilitate further research in stereo reconstruction with multi-date satellite images, the goal of this paper is to provide a set of stereo-rectified images and the associated groundtruthed disparities for 10 AOIs (Area of Interest) drawn from two sources: 8 AOIs from IARPA's MVS Challenge dataset and 2 AOIs from the CORE3D-Public dataset. The disparities were groundtruthed by first constructing a fused DSM from the stereo pairs and by aligning 30 cm LiDAR with the fused DSM. Unlike the existing benchmarking datasets, we have also carried out a quantitative evaluation of our groundtruthed disparities using human annotated points in two of the AOIs. Additionally, the rectification accuracy in our dataset is comparable to the same in the existing state-of-the-art stereo datasets. In general, we have used the WorldView-3 (WV3) images for the dataset, the exception being the UCSD area for which we have used both WV3 and WorldView-2 (WV2) images. All of the dataset images are now in the public domain. Since multi-date satellite images frequently include images acquired in different seasons (which creates challenges in finding corresponding pairs of pixels for stereo), our dataset also includes for each image a building mask over which the disparities estimated by stereo should prove reliable. Additional metadata included in the dataset includes information about each image's acquisition date and time, the azimuth and elevation angles of the camera, and the intersection angles for the two views in a stereo pair. Also included in the dataset are both quantitative and qualitative analyses of the accuracy of the groundtruthed disparity maps. Our dataset is available for download at this https URL
The Middlebury datasets include the Middlebury2001 @cite_14 , Middlebury2003 @cite_22 , Middlebury2005 and Middlebury2006 datasets @cite_17 , and more recently the high-resolution Middlebury2014 dataset @cite_2 . The last of these was created using a stereo rig with cameras and structured light projectors, and claims subpixel-accurate groundtruth. Images are of high resolution (5-6 MP) and mostly contain indoor scenes. Pairs are grouped under different categories such as similar and varying ambient illumination, and perfect and imperfect rectification. Note that less than 50 @cite_2 .
{ "cite_N": [ "@cite_14", "@cite_2", "@cite_22", "@cite_17" ], "mid": [ "2104974755", "63091017", "2155479981", "2133255058" ], "abstract": [ "Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web.", "We present a structured lighting system for creating high-resolution stereo datasets of static indoor scenes with highly accurate ground-truth disparities. The system includes novel techniques for efficient 2D subpixel correspondence search and self-calibration of cameras and projectors with modeling of lens distortion. Combining disparity estimates from multiple projector positions we are able to achieve a disparity accuracy of 0.2 pixels on most observed surfaces, including in half-occluded regions. We contribute 33 new 6-megapixel datasets obtained with our system and demonstrate that they present new challenges for the next generation of stereo algorithms.", "Progress in stereo algorithm performance is quickly outpacing the ability of existing stereo data sets to discriminate among the best-performing algorithms, motivating the need for more challenging scenes with accurate ground truth information. This paper describes a method for acquiring high-complexity stereo image pairs with pixel-accurate correspondence information using structured light. Unlike traditional range-sensing approaches, our method does not require the calibration of the light sources and yields registered disparity maps between all pairs of cameras and illumination projectors. We present new stereo data sets acquired with our method and demonstrate their suitability for stereo algorithm evaluation. Our results are available at http: www.middlebury.edu stereo .", "Stereo correspondence methods rely on matching costs for computing the similarity of image locations. In this paper we evaluate the insensitivity of different matching costs with respect to radiometric variations of the input images. We consider both pixel-based and window-based variants and measure their performance in the presence of global intensity changes (e.g., due to gain and exposure differences), local intensity changes (e.g., due to vignetting, non-Lambertian surfaces, and varying lighting), and noise. Using existing stereo datasets with ground-truth disparities as well as six new datasets taken under controlled changes of exposure and lighting, we evaluate the different costs with a local, a semi-global, and a global stereo method." ] }
1907.04404
2957889783
In order to facilitate further research in stereo reconstruction with multi-date satellite images, the goal of this paper is to provide a set of stereo-rectified images and the associated groundtruthed disparities for 10 AOIs (Area of Interest) drawn from two sources: 8 AOIs from IARPA's MVS Challenge dataset and 2 AOIs from the CORE3D-Public dataset. The disparities were groundtruthed by first constructing a fused DSM from the stereo pairs and by aligning 30 cm LiDAR with the fused DSM. Unlike the existing benchmarking datasets, we have also carried out a quantitative evaluation of our groundtruthed disparities using human annotated points in two of the AOIs. Additionally, the rectification accuracy in our dataset is comparable to the same in the existing state-of-the-art stereo datasets. In general, we have used the WorldView-3 (WV3) images for the dataset, the exception being the UCSD area for which we have used both WV3 and WorldView-2 (WV2) images. All of the dataset images are now in the public domain. Since multi-date satellite images frequently include images acquired in different seasons (which creates challenges in finding corresponding pairs of pixels for stereo), our dataset also includes for each image a building mask over which the disparities estimated by stereo should prove reliable. Additional metadata included in the dataset includes information about each image's acquisition date and time, the azimuth and elevation angles of the camera, and the intersection angles for the two views in a stereo pair. Also included in the dataset are both quantitative and qualitative analyses of the accuracy of the groundtruthed disparity maps. Our dataset is available for download at this https URL
With a focus on autonomous driving, the KITTI2012 @cite_12 and KITTI2015 @cite_6 datasets were created to capture outdoor scenes. While the former focuses on static environments, the latter is concerned with moving objects captured by a stereo camera. To generate groundtruth, scans were captured using a laser scanner mounted on a car, and scenes were annotated using 3D CAD models for moving vehicles. The disparity maps in these datasets are semi-dense compared to the Middlebury2014 dataset.
{ "cite_N": [ "@cite_6", "@cite_12" ], "mid": [ "1921093919", "2150066425" ], "abstract": [ "This paper proposes a novel model and dataset for 3D scene flow estimation with an application to autonomous driving. Taking advantage of the fact that outdoor scenes often decompose into a small number of independently moving objects, we represent each element in the scene by its rigid motion parameters and each superpixel by a 3D plane as well as an index to the corresponding object. This minimal representation increases robustness and leads to a discrete-continuous CRF where the data term decomposes into pairwise potentials between superpixels and objects. Moreover, our model intrinsically segments the scene into its constituting dynamic components. We demonstrate the performance of our model on existing benchmarks as well as a novel realistic dataset with scene flow ground truth. We obtain this dataset by annotating 400 dynamic scenes from the KITTI raw data collection using detailed 3D CAD models for all vehicles in motion. Our experiments also reveal novel challenges which cannot be handled by existing methods.", "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net datasets kitti" ] }
1907.04404
2957889783
In order to facilitate further research in stereo reconstruction with multi-date satellite images, the goal of this paper is to provide a set of stereo-rectified images and the associated groundtruthed disparities for 10 AOIs (Area of Interest) drawn from two sources: 8 AOIs from IARPA's MVS Challenge dataset and 2 AOIs from the CORE3D-Public dataset. The disparities were groundtruthed by first constructing a fused DSM from the stereo pairs and by aligning 30 cm LiDAR with the fused DSM. Unlike the existing benchmarking datasets, we have also carried out a quantitative evaluation of our groundtruthed disparities using human annotated points in two of the AOIs. Additionally, the rectification accuracy in our dataset is comparable to the same in the existing state-of-the-art stereo datasets. In general, we have used the WorldView-3 (WV3) images for the dataset, the exception being the UCSD area for which we have used both WV3 and WorldView-2 (WV2) images. All of the dataset images are now in the public domain. Since multi-date satellite images frequently include images acquired in different seasons (which creates challenges in finding corresponding pairs of pixels for stereo), our dataset also includes for each image a building mask over which the disparities estimated by stereo should prove reliable. Additional metadata included in the dataset includes information about each image's acquisition date and time, the azimuth and elevation angles of the camera, and the intersection angles for the two views in a stereo pair. Also included in the dataset are both quantitative and qualitative analyses of the accuracy of the groundtruthed disparity maps. Our dataset is available for download at this https URL
The datasets described thus far consist of images taken with projective cameras that are either handheld or mounted on stereo rigs. Recently, a stereo dataset for satellite images @cite_26 that also provides groundtruthed disparities was announced. That dataset, however, does not provide estimates of the errors in the groundtruthed disparities using human annotated points, nor does it present any information on the rectification errors involved. Note also that the framework we have used for creating our dataset differs significantly from the one used in @cite_26 . We believe that the research community can only benefit from experimenting with datasets produced by two different approaches.
{ "cite_N": [ "@cite_26" ], "mid": [ "2962880841" ], "abstract": [ "The increasingly common use of incidental satellite images for stereo reconstruction versus rigidly tasked binocular or trinocular coincident collection is helping to enable timely global-scale 3D mapping; however, reliable stereo correspondence from multi-date image pairs remains very challenging due to seasonal appearance differences and scene change. Promising recent work suggests that semantic scene segmentation can provide a robust regularizing prior for resolving ambiguities in stereo correspondence and reconstruction problems. To enable research for pairwise semantic stereo and multi-view semantic 3D reconstruction with incidental satellite images, we have established a large-scale public dataset including multi-view, multi-band satellite images and ground truth geometric and semantic labels for two large cities. To demonstrate the complementary nature of the stereo and segmentation tasks, we present lightweight public baselines adapted from recent state of the art convolutional neural network models and assess their performance." ] }
1907.04423
2961688412
The spectrum scarcity at sub-6 GHz spectrum has made millimeter-wave (mmWave) frequency band a key component of the next-generation wireless networks. While mmWave spectrum offers extremely large transmission bandwidths to accommodate ever-increasing data rates, unique characteristics of this new spectrum need special consideration to achieve the promised network throughput. In this work, we consider the off-grid problem for mmWave communications, which has a significant impact on basic network functionalities involving beam steering and tracking. The off-grid effect naturally appears in compressed sensing (CS) techniques adopting a discretization approach for representing the angular domain. This approach yields a finite set of discrete angle points, which are an approximation to the continuous angular space, and hence degrade the accuracy of related parameter estimation. In order to cope with the off-grid effect, we present a novel parameter-perturbation framework to efficiently estimate the channel and the covariance for mmWave networks. The proposed algorithms employ a smart perturbation mechanism in conjunction with a low-complexity greedy framework of simultaneous orthogonal matching pursuit (SOMP), and jointly solve for the off-grid parameters and weights. Numerical results show a significant performance improvement through our novel framework as a result of handling the off-grid effects, which is totally ignored in the conventional sparse mmWave channel or covariance estimation algorithms.
The spatial covariance exploits the relatively stationary long-term statistics of the propagation channel, and it can be leveraged for precoder design in mmWave networks @cite_37 @cite_17 @cite_2 @cite_25 @cite_33 . The rationale behind the use of the spatial covariance matrix is two-fold. Firstly, in many cases, the angular coherence time (several seconds or more) is much longer than the channel coherence time (several milliseconds) @cite_18 @cite_37 . As a result, the angular and average power features of the channel can be assumed to be time-invariant, so the spatial covariance matrix remains constant across many channel coherence intervals. Secondly, the spatial covariance matrix is frequency invariant, owing to the significant angular congruence across frequency bands @cite_36 @cite_27 , which is important for a wideband system in which a common analog precoder is shared across different sub-carriers. These properties make spatial covariance based precoding particularly attractive: once the RF beamformer is designed based on the channel covariance, it need not be updated at every time instant. We refer the reader to @cite_25 @cite_17 @cite_37 @cite_33 for a comprehensive discussion of spatial covariance estimation for mmWave hybrid analog-digital beamforming (HADB) MIMO architectures.
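To illustrate why this is attractive, the sketch below (our illustration in NumPy; the array sizes, the toy channel model and the eigenbeam design are all assumptions, not a method from the cited works) estimates a sample spatial covariance from many channel snapshots that share fixed angles of arrival, and designs an analog beamformer from its dominant eigenvectors:

```python
# Illustrative sketch of covariance-based RF beamformer design as motivated
# above; sizes and the eigenbeam choice are assumptions. Assumes NumPy.
import numpy as np

rng = np.random.default_rng(0)
N, T, n_rf = 64, 200, 4          # antennas, snapshots, RF chains

# Toy channel snapshots sharing fixed AoAs (long angular coherence time):
angles = np.deg2rad([10.0, 35.0, -20.0])
A = np.exp(1j * np.pi * np.outer(np.arange(N), np.sin(angles)))  # ULA steering
G = (rng.standard_normal((len(angles), T))
     + 1j * rng.standard_normal((len(angles), T))) / np.sqrt(2)  # fast fading
H = A @ G                                                        # N x T snapshots

# Sample spatial covariance, averaged over many channel coherence intervals:
R = (H @ H.conj().T) / T

# RF beamformer from the dominant eigenvectors of R; it stays valid as long
# as the angular statistics (hence R) stay constant.
eigvals, eigvecs = np.linalg.eigh(R)
F_rf = eigvecs[:, -n_rf:]        # N x n_rf analog precoder (phase constraints ignored)
```

Because the angular statistics, and hence R, change on a much slower time scale than the fading coefficients, a beamformer computed this way can be reused over many channel coherence intervals.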
{ "cite_N": [ "@cite_37", "@cite_18", "@cite_33", "@cite_36", "@cite_27", "@cite_2", "@cite_25", "@cite_17" ], "mid": [ "", "2962819920", "2611749634", "2962952148", "2968634732", "", "2592085864", "2241455052" ], "abstract": [ "", "Millimeter wave (mmWave) has great potential in realizing high data rates, thanks to the large spectral channels. It is considered as a key technology for fifth-generation (5G) wireless networks and is already used in wireless LAN (e.g., IEEE 802.11ad). Using mmWave for vehicular communications, however, is often viewed with some skepticism due to a misconception that the Doppler spread would become too large at these high frequencies. This is not necessarily true when directional beams are employed. In this paper, closed-form expressions relating the channel coherence time and beamwidth are derived. Unlike prior work that assumed perfect beam pointing, the pointing error due to the receiver motion is incorporated to show that there exists a nonzero optimal beamwidth that maximizes the coherence time. We define a novel concept of beam coherence time, which is an effective measure of beam alignment frequency. Using the derived correlation function, the channel coherence time, and the beam coherence time, an overall performance metric considering both the channel time variation and the beam alignment overhead is derived. Using this metric, it is shown that beam realignment in every beam coherence time performs better than beam realignment in every channel coherence time.", "We propose a new hybrid precoding technique for massive multi-input multi-output (MIMO) systems using spatial channel covariance matrices in the analog precoder design. Applying a regularized zero-forcing precoder for the baseband precoding matrix, we find an unconstrained analog precoder that maximizes signal-to-leakage-plus-noise ratio (SLNR) while ignoring analog phase shifter constraints. Subsequently, we develop a technique to design a constrained analog precoder that mimics the obtained unconstrained analog precoder under phase shifter constraints. The main idea is to adopt an additional baseband precoding matrix, which we call a compensation matrix. We analyze the SLNR loss due to the proposed hybrid precoding compared to fully digital precoding, and determine which factors have a significant impact on this loss. In the simulations, we show that if the channel is spatially correlated and the number of users is smaller than the number of RF chains, the SLNR loss becomes negligible compared to fully digital precoding. The main benefit of our method stems from the use of spatial channel matrices in such a way that not only is each user's desired signal considered, but also the inter-user interference is incorporated in the analog precoder design.", "5G millimeter wave (mmWave) technology is envisioned to be an integral part of next- generation vehicle-to-everything (V2X) networks and autonomous vehicles due to its broad bandwidth, wide field of view sensing, and precise localization capabilities. The reliability of mmWave links may be compromised due to difficulties in beam alignment for mobile channels and due to blocking effects between a mmWave transmitter and a receiver. To address such challenges, out-of-band information from sub-6 GHz channels can be utilized for predicting the temporal and angular channel characteristics in mmWave bands, which necessitates a good understanding of how propagation characteristics are coupled across different bands. 
In this paper, we use ray tracing simulations to characterize the angular and temporal correlation across a wide range of propagation frequencies for V2X channels ranging from 900 MHz up to 73 GHz, for a vehicle maintaining line-of-sight (LOS) and non-LOS (NLOS) beams with a transmitter in an urban environment. Our results shed light on increasing sparsity behavior of propagation channels with increasing frequency and highlight the strong temporal angular correlation among 5.9 GHz and 28 GHz bands especially for LOS channels.", "In high mobility applications of millimeter wave (mmWave) communications, e.g., vehicle-to-everything communication and next-generation cellular communication, frequent link configuration can be a source of significant overhead. We use the sub-6 GHz channel covariance as an out-of-band side information for mmWave link configuration. Assuming: (i) a fully digital architecture at sub-6GHz; and (ii) a hybrid analog-digital architecture at mmWave, we propose an out-of-band covariance translation approach and an out-of-band aided compressed covariance estimation approach. For covariance translation, we estimate the parameters of sub-6 GHz covariance and use them in theoretical expressions of covariance matrices to predict the mmWave covariance. For out-of-band aided covariance estimation, we use weighted sparse signal recovery to incorporate out-ofband information in compressed covariance estimation. The outof-band covariance translation eliminates the in-band training completely, whereas out-of-band aided covariance estimation relies on in-band as well as out-of-band training. We also analyze the loss in the signal-to-noise ratio due to an imperfect estimate of the covariance. The simulation results show that the proposed covariance estimation strategies can reduce the training overhead compared to the in-band only covariance estimation.", "", "Spatial channel covariance information can replace full channel state information for designing analog precoders in millimeter wave (mmWave) hybrid MIMO systems. Hybrid MIMO architectures, however, make it challenging to estimate the spatial channel covariance matrix because the estimator in baseband can only see the low-dimensional projections of the original channel. In this paper, we propose two key ideas for developing the covariance estimation techniques based on compressive sensing techniques. One is to use the Hermitian property of the covariance matrix, and the other is to use a time-varying analog combining matrix to effectively extend the measurement size.", "" ] }
1907.04423
2961688412
The spectrum scarcity at sub-6 GHz spectrum has made millimeter-wave (mmWave) frequency band a key component of the next-generation wireless networks. While mmWave spectrum offers extremely large transmission bandwidths to accommodate ever-increasing data rates, unique characteristics of this new spectrum need special consideration to achieve the promised network throughput. In this work, we consider the off-grid problem for mmWave communications, which has a significant impact on basic network functionalities involving beam steering and tracking. The off-grid effect naturally appears in compressed sensing (CS) techniques adopting a discretization approach for representing the angular domain. This approach yields a finite set of discrete angle points, which are an approximation to the continuous angular space, and hence degrade the accuracy of related parameter estimation. In order to cope with the off-grid effect, we present a novel parameter-perturbation framework to efficiently estimate the channel and the covariance for mmWave networks. The proposed algorithms employ a smart perturbation mechanism in conjunction with a low-complexity greedy framework of simultaneous orthogonal matching pursuit (SOMP), and jointly solve for the off-grid parameters and weights. Numerical results show a significant performance improvement through our novel framework as a result of handling the off-grid effects, which is totally ignored in the conventional sparse mmWave channel or covariance estimation algorithms.
Estimating the covariance is complicated by the fact that only the signals pre-combined by the analog precombiner are available at baseband. Based on the way the covariance matrix is estimated, approaches can be broadly categorized into two families: 1) what we will refer to as the indirect method, and 2) what we will refer to as the direct method hereafter. The central idea of the indirect approach is to solve for the channel estimate in every snapshot and use these estimates to calculate the covariance matrix; once the per-snapshot channel estimates are available, the covariance calculation is relatively straightforward. However, when the channel estimates themselves are not required, one can instead operate directly on the covariance of the measurements to estimate the spatial covariance matrix, which is the central idea of the direct approach. Both estimation problems can be posed as compressed sensing (CS) problems by leveraging the sparse nature of mmWave channels @cite_4 @cite_9 @cite_35 @cite_6 @cite_27 .
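A minimal sketch of the two routes is given below (our illustration in NumPy; `estimate_channel` is a hypothetical stand-in for any per-snapshot sparse channel estimator, and noise is omitted):

```python
# Sketch contrasting the two covariance estimation routes described above;
# Y holds the m x T pre-combined measurements, W the N x m combining matrix.
import numpy as np

def indirect_covariance(Y, W, estimate_channel):
    """Indirect route: estimate h_t from each snapshot y_t = W^H h_t,
    then average the outer products of the channel estimates."""
    H_hat = np.stack([estimate_channel(y, W) for y in Y.T], axis=1)  # N x T
    return (H_hat @ H_hat.conj().T) / Y.shape[1]                     # N x N

def direct_measurement_covariance(Y):
    """Direct route: form the covariance of the pre-combined measurements;
    a CS step would then recover the channel covariance from this matrix."""
    return (Y @ Y.conj().T) / Y.shape[1]                             # m x m
```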
{ "cite_N": [ "@cite_35", "@cite_4", "@cite_9", "@cite_6", "@cite_27" ], "mid": [ "2848225903", "2597836615", "2921341714", "2963304087", "2968634732" ], "abstract": [ "The millimeter-wave (mmWave) communications is a promising technology for next-generation wireless networks with its available broad spectrum. Along with massive number of antennas employed at both end of the transceiver, the number of unknown channel coefficients become extremely large. Thanks to sparse nature of mmWave links, this paper proposes a parameter perturbation based sparse recovery technique for mmWave channel estimation. Recently, classical compressive sensing (CS) based sparse recovery techniques have been applied in this area. However, CS based reconstructions are highly effected by basis mismatch problems such as off-the-grid targets, or, equivalently, scattering points. The proposed iterative algorithm called parameter perturbed orthogonal matching pursuit (PPOMP) jointly solves for both the sparse signal, which is the unknown mmWave channel itself, and the basis mismatch due to off-the-grid problem. We verify through extensive numerical results that the proposed PPOMP algorithm achieves significantly better channel estimation performance compared to the state of the art sparse reconstruction techniques.", "Multiple-input multiple-output (MIMO) systems are well suited for millimeter-wave (mmWave) wireless communications where large antenna arrays can be integrated in small form factors due to tiny wavelengths, thereby providing high array gains while supporting spatial multiplexing, beamforming, or antenna diversity. It has been shown that mmWave channels exhibit sparsity due to the limited number of dominant propagation paths, thus compressed sensing techniques can be leveraged to conduct channel estimation at mmWave frequencies. This paper presents a novel approach of constructing beamforming dictionary matrices for sparse channel estimation using the continuous basis pursuit (CBP) concept, and proposes two novel low-complexity algorithms to exploit channel sparsity for adaptively estimating multipath channel parameters in mmWave channels. We verify the performance of the proposed CBP-based beamforming dictionary and the two algorithms using a simulator built upon a three-dimensional mmWave statistical spatial channel model, NYUSIM, that is based on real-world propagation measurements. Simulation results show that the CBP-based dictionary offers substantially higher estimation accuracy and greater spectral efficiency than the grid-based counterpart introduced by previous researchers, and the algorithms proposed here render better performance but require less computational effort compared with existing algorithms.", "Correlation-based techniques used for frame synchronization can suffer significant performance degradation over multi-path frequency-selective channels. In this paper, we propose a joint frame synchronization and channel estimation (JFSCE) framework as a remedy to this problem. This framework, however, increases the size of the resulting combined channel vector which should capture both the channel impulse response vector and the frame boundary offset and, therefore, its estimation becomes more challenging. On the other hand, because the combined channel vector is sparse, sparse channel estimation methods can be applied. We propose several JFSCE methods using popular sparse signal recovery algorithms which exploit the sparsity of the combined channel vector. 
Subsequently, the sparse channel vector estimate is used to design a sparse equalizer. Our simulation results and experimental measurements using software defined radios show that in some scenarios our proposed method improves the overall system performance significantly, in terms of the mean square error between the transmitted and the equalized symbols compared to the conventional method.", "Spatial channel covariance information can replace full knowledge of the entire channel matrix for designing analog precoders in hybrid multiple-input-multiple-output (MIMO) architecture. Spatial channel covariance estimation, however, is challenging for the hybrid MIMO architecture because the estimator operating at baseband can only obtain a lower dimensional pre-combined signal through fewer radio frequency chains than antennas. In this paper, we propose two approaches to covariance estimation based on compressive sensing techniques. One is to apply a time-varying sensing matrix, and the other is to exploit the prior knowledge that the covariance matrix is Hermitian. We present the rationale behind the two ideas and validate the superiority of the proposed methods by theoretical analysis and numerical simulations. We conclude the paper by extending the proposed algorithms from narrowband MIMO systems with a single receive antenna to wideband systems with multiple receive antennas.", "In high mobility applications of millimeter wave (mmWave) communications, e.g., vehicle-to-everything communication and next-generation cellular communication, frequent link configuration can be a source of significant overhead. We use the sub-6 GHz channel covariance as an out-of-band side information for mmWave link configuration. Assuming: (i) a fully digital architecture at sub-6GHz; and (ii) a hybrid analog-digital architecture at mmWave, we propose an out-of-band covariance translation approach and an out-of-band aided compressed covariance estimation approach. For covariance translation, we estimate the parameters of sub-6 GHz covariance and use them in theoretical expressions of covariance matrices to predict the mmWave covariance. For out-of-band aided covariance estimation, we use weighted sparse signal recovery to incorporate out-ofband information in compressed covariance estimation. The outof-band covariance translation eliminates the in-band training completely, whereas out-of-band aided covariance estimation relies on in-band as well as out-of-band training. We also analyze the loss in the signal-to-noise ratio due to an imperfect estimate of the covariance. The simulation results show that the proposed covariance estimation strategies can reduce the training overhead compared to the in-band only covariance estimation." ] }
1907.04423
2961688412
The spectrum scarcity at sub-6 GHz spectrum has made millimeter-wave (mmWave) frequency band a key component of the next-generation wireless networks. While mmWave spectrum offers extremely large transmission bandwidths to accommodate ever-increasing data rates, unique characteristics of this new spectrum need special consideration to achieve the promised network throughput. In this work, we consider the off-grid problem for mmWave communications, which has a significant impact on basic network functionalities involving beam steering and tracking. The off-grid effect naturally appears in compressed sensing (CS) techniques adopting a discretization approach for representing the angular domain. This approach yields a finite set of discrete angle points, which are an approximation to the continuous angular space, and hence degrade the accuracy of related parameter estimation. In order to cope with the off-grid effect, we present a novel parameter-perturbation framework to efficiently estimate the channel and the covariance for mmWave networks. The proposed algorithms employ a smart perturbation mechanism in conjunction with a low-complexity greedy framework of simultaneous orthogonal matching pursuit (SOMP), and jointly solve for the off-grid parameters and weights. Numerical results show a significant performance improvement through our novel framework as a result of handling the off-grid effects, which is totally ignored in the conventional sparse mmWave channel or covariance estimation algorithms.
In the literature, several CS approaches have been utilized to estimate the channel and the spatial covariance. For the indirect approach, the channel estimates can be obtained using single measurement vector (SMV) CS techniques such as @cite_9 @cite_35 . However, these SMV techniques fail to exploit the common support of the channel estimates across different snapshots. This common support arises because the angular-domain features remain invariant across snapshots, which is also what motivates the use of the spatial covariance matrix. Multiple measurement vector (MMV) techniques can exploit this common support structure; however, most MMV techniques are designed for a sensing matrix that is fixed over all snapshots, making them inefficient for time-varying sensing matrices. The statistical problem of covariance estimation can also be approached by explicitly estimating the covariance in the measurement covariance space. Strategies such as the MUSIC @cite_43 and ESPRIT @cite_8 algorithms can be adopted, but these methods fail to leverage channel sparsity. Recently, a CS MMV based covariance estimation method for time-varying sensing matrices has been proposed in @cite_6 , and a tensor-based decomposition approach has been proposed in @cite_21 . Further CS algorithms for the direct approach to spatial covariance estimation can be found in @cite_33 @cite_6 @cite_27 @cite_12 @cite_21 .
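For reference, the sketch below shows plain simultaneous orthogonal matching pursuit (SOMP) for the fixed-sensing-matrix MMV setting (our illustration in NumPy; it ranks dictionary atoms by their aggregate correlation across all snapshots, which is how the common support is exploited, and it does not include any off-grid perturbation mechanism):

```python
# Minimal SOMP sketch for the MMV setting discussed above, assuming a
# dictionary A shared by all snapshots (the fixed sensing matrix case).
import numpy as np

def somp(A, Y, k):
    """A: m x n dictionary, Y: m x T measurements with common support, k: sparsity."""
    residual = Y.copy()
    support = []
    for _ in range(k):
        # Rank atoms by total correlation with the residual over all T snapshots.
        scores = np.linalg.norm(A.conj().T @ residual, axis=1)
        scores[support] = -np.inf          # do not reselect chosen atoms
        support.append(int(np.argmax(scores)))
        # Joint least-squares re-estimate of the coefficients on the support.
        X_s, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        residual = Y - A[:, support] @ X_s
    return support, X_s
```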
{ "cite_N": [ "@cite_35", "@cite_33", "@cite_8", "@cite_9", "@cite_21", "@cite_6", "@cite_43", "@cite_27", "@cite_12" ], "mid": [ "2848225903", "2611749634", "2109958193", "2921341714", "2919210464", "2963304087", "1575224478", "2968634732", "2962739446" ], "abstract": [ "The millimeter-wave (mmWave) communications is a promising technology for next-generation wireless networks with its available broad spectrum. Along with massive number of antennas employed at both end of the transceiver, the number of unknown channel coefficients become extremely large. Thanks to sparse nature of mmWave links, this paper proposes a parameter perturbation based sparse recovery technique for mmWave channel estimation. Recently, classical compressive sensing (CS) based sparse recovery techniques have been applied in this area. However, CS based reconstructions are highly effected by basis mismatch problems such as off-the-grid targets, or, equivalently, scattering points. The proposed iterative algorithm called parameter perturbed orthogonal matching pursuit (PPOMP) jointly solves for both the sparse signal, which is the unknown mmWave channel itself, and the basis mismatch due to off-the-grid problem. We verify through extensive numerical results that the proposed PPOMP algorithm achieves significantly better channel estimation performance compared to the state of the art sparse reconstruction techniques.", "We propose a new hybrid precoding technique for massive multi-input multi-output (MIMO) systems using spatial channel covariance matrices in the analog precoder design. Applying a regularized zero-forcing precoder for the baseband precoding matrix, we find an unconstrained analog precoder that maximizes signal-to-leakage-plus-noise ratio (SLNR) while ignoring analog phase shifter constraints. Subsequently, we develop a technique to design a constrained analog precoder that mimics the obtained unconstrained analog precoder under phase shifter constraints. The main idea is to adopt an additional baseband precoding matrix, which we call a compensation matrix. We analyze the SLNR loss due to the proposed hybrid precoding compared to fully digital precoding, and determine which factors have a significant impact on this loss. In the simulations, we show that if the channel is spatially correlated and the number of users is smaller than the number of RF chains, the SLNR loss becomes negligible compared to fully digital precoding. The main benefit of our method stems from the use of spatial channel matrices in such a way that not only is each user's desired signal considered, but also the inter-user interference is incorporated in the analog precoder design.", "A new spectral search-based direction-of-arrival (DOA) estimation method is proposed that extends the idea of the conventional ESPRIT DOA estimator to a much more general class of array geometries than assumed by the conventional ESPRIT technique. A computationally efficient polynomial rooting-based search-free implementation of the proposed algorithm is also developed.", "Correlation-based techniques used for frame synchronization can suffer significant performance degradation over multi-path frequency-selective channels. In this paper, we propose a joint frame synchronization and channel estimation (JFSCE) framework as a remedy to this problem. 
This framework, however, increases the size of the resulting combined channel vector which should capture both the channel impulse response vector and the frame boundary offset and, therefore, its estimation becomes more challenging. On the other hand, because the combined channel vector is sparse, sparse channel estimation methods can be applied. We propose several JFSCE methods using popular sparse signal recovery algorithms which exploit the sparsity of the combined channel vector. Subsequently, the sparse channel vector estimate is used to design a sparse equalizer. Our simulation results and experimental measurements using software defined radios show that in some scenarios our proposed method improves the overall system performance significantly, in terms of the mean square error between the transmitted and the equalized symbols compared to the conventional method.", "Spatial channel covariance information can replace instantaneous full channel state information for designing hybrid analog digital precoders. Estimating the spatial channel covariance is challenging due to the inherent limitation of the hybrid architecture, i.e., much fewer radio frequency (RF) chains than antennas. In this paper, we propose a spatial channel covariance estimation method for spatially sparse time-varying frequency-selective channels. The proposed method leverages the fact that the channel can be represented as a low-rank higher-order tensor. Numerical results demonstrate that the proposed approach achieves higher estimation accuracy in comparison with existing covariance estimation methods.", "Spatial channel covariance information can replace full knowledge of the entire channel matrix for designing analog precoders in hybrid multiple-input-multiple-output (MIMO) architecture. Spatial channel covariance estimation, however, is challenging for the hybrid MIMO architecture because the estimator operating at baseband can only obtain a lower dimensional pre-combined signal through fewer radio frequency chains than antennas. In this paper, we propose two approaches to covariance estimation based on compressive sensing techniques. One is to apply a time-varying sensing matrix, and the other is to exploit the prior knowledge that the covariance matrix is Hermitian. We present the rationale behind the two ideas and validate the superiority of the proposed methods by theoretical analysis and numerical simulations. We conclude the paper by extending the proposed algorithms from narrowband MIMO systems with a single receive antenna to wideband systems with multiple receive antennas.", "In this paper, we propose a direction-of-arrival estimation method by covariance matrix sparse reconstruction of coprime array. Specifically, source locations are estimated by solving a newly formulated convex optimization problem, where the difference between the spatially smoothed covariance matrix and the sparsely reconstructed one is minimized. Then, a sliding window scheme is designed for source enumeration. Finally, the power of each source is re-estimated as a least squares problem. Compared with existing methods, the proposed method achieves more accurate source localization and power estimation performance with full utilization of increased degrees of freedom provided by coprime array.", "In high mobility applications of millimeter wave (mmWave) communications, e.g., vehicle-to-everything communication and next-generation cellular communication, frequent link configuration can be a source of significant overhead. 
We use the sub-6 GHz channel covariance as an out-of-band side information for mmWave link configuration. Assuming: (i) a fully digital architecture at sub-6GHz; and (ii) a hybrid analog-digital architecture at mmWave, we propose an out-of-band covariance translation approach and an out-of-band aided compressed covariance estimation approach. For covariance translation, we estimate the parameters of sub-6 GHz covariance and use them in theoretical expressions of covariance matrices to predict the mmWave covariance. For out-of-band aided covariance estimation, we use weighted sparse signal recovery to incorporate out-ofband information in compressed covariance estimation. The outof-band covariance translation eliminates the in-band training completely, whereas out-of-band aided covariance estimation relies on in-band as well as out-of-band training. We also analyze the loss in the signal-to-noise ratio due to an imperfect estimate of the covariance. The simulation results show that the proposed covariance estimation strategies can reduce the training overhead compared to the in-band only covariance estimation.", "Massive MIMO is a variant of multiuser MIMO where the number of base-station antennas M is very large (typically ≈ 100), and generally much larger than the number of spatially multiplexed data streams (typically ≈ 10). The benefits of such approach have been intensively investigated in the past few years, and all-digital experimental implementations have also been demonstrated. Unfortunately, the front-end A D conversion necessary to drive hundreds of antennas, with a signal bandwidth of the order of 10 to 100 MHz, requires very large sampling bitrate and power consumption. In order to reduce such implementation requirements, Hybrid Digital-Analog architectures have been proposed. In particular, our work in this paper is motivated by one of such schemes named Joint Spatial Division and Multiplexing (JSDM), where the downlink precoder (resp., uplink linear receiver) is split into the product of a baseband linear projection (digital) and an RF reconfigurable beamforming network (analog), such that only a reduced number m M of A D converters and RF modulation demodulation chains is needed. In JSDM, users are grouped according to similarity of their channel dominant subspaces, and these groups are separated by the analog beamforming stage, where multiplexing gain in each group is achieved using the digital precoder. Therefore, it is apparent that extracting the channel subspace information of the M -dim channel vectors from snapshots of m-dim projections, with m M , plays a fundamental role in JSDM implementation. In this paper, we develop novel efficient algorithms that require sampling only m = O(2 √ M) specific array elements according to a coprime sampling scheme, and for a given p M , return a p-dim beamformer that has a performance comparable with the best p-dim beamformer that can be designed from the full knowledge of the exact channel covariance matrix. We assess the performance of our proposed estimators both analytically and empirically via numerical simulations. We also demonstrate by simulation that the proposed subspace estimation methods provide near-ideal performance for a massive MIMO JSDM system, by comparing with the case where the user channel covariances are perfectly known." ] }
1907.04423
2961688412
The spectrum scarcity at sub-6 GHz spectrum has made millimeter-wave (mmWave) frequency band a key component of the next-generation wireless networks. While mmWave spectrum offers extremely large transmission bandwidths to accommodate ever-increasing data rates, unique characteristics of this new spectrum need special consideration to achieve the promised network throughput. In this work, we consider the off-grid problem for mmWave communications, which has a significant impact on basic network functionalities involving beam steering and tracking. The off-grid effect naturally appears in compressed sensing (CS) techniques adopting a discretization approach for representing the angular domain. This approach yields a finite set of discrete angle points, which are an approximation to the continuous angular space, and hence degrade the accuracy of related parameter estimation. In order to cope with the off-grid effect, we present a novel parameter-perturbation framework to efficiently estimate the channel and the covariance for mmWave networks. The proposed algorithms employ a smart perturbation mechanism in conjunction with a low-complexity greedy framework of simultaneous orthogonal matching pursuit (SOMP), and jointly solve for the off-grid parameters and weights. Numerical results show a significant performance improvement through our novel framework as a result of handling the off-grid effects, which is totally ignored in the conventional sparse mmWave channel or covariance estimation algorithms.
The CS-based methods discussed above are based on the virtual channel model @cite_32, which provides a virtual angular representation of MIMO channels through a discretization procedure. The discretization yields an exact sparse representation of the virtual channel model only when the true AoA and AoD lie on the pre-defined set of spatial angles employed during the discretization. However, the true AoA-AoD pair lies in a continuous space and may not fall exactly onto one of the finitely many pre-defined spatial angles. In fact, for the discrete Fourier transform (DFT) basis defined by the virtual channel model, a continuous AoA-AoD parameter lying between two successive DFT grid cells affects not only the two closest cells but the whole grid, with amplitude decaying with @math due to the Dirichlet kernel @cite_14 @cite_11, where @math and @math are the number of grid points in the AoA and AoD grids, respectively. This phenomenon violates the sparsity assumption and degrades reconstruction performance. As a result, the estimation accuracy of CS-based methods is limited by the number of grid points @cite_41 @cite_0 @cite_14 @cite_11.
{ "cite_N": [ "@cite_14", "@cite_41", "@cite_32", "@cite_0", "@cite_11" ], "mid": [ "2067878805", "2565293665", "2128865660", "2123457453", "2957141087" ], "abstract": [ "Compressive Sensing theory details how a sparsely represented signal in a known basis can be reconstructed with an underdetermined linear measurement model. However, in reality there is a mismatch between the assumed and the actual bases due to factors such as discretization of the parameter space defining basis components, sampling jitter in A D conversion, and model errors. Due to this mismatch, a signal may not be sparse in the assumed basis, which causes significant performance degradation in sparse reconstruction algorithms. To eliminate the mismatch problem, this paper presents a novel perturbed orthogonal matching pursuit (POMP) algorithm that performs controlled perturbation of selected support vectors to decrease the orthogonal residual at each iteration. Based on detailed mathematical analysis, conditions for successful reconstruction are derived. Simulations show that robust results with much smaller reconstruction errors in the case of perturbed bases can be obtained as compared to standard sparse reconstruction techniques.", "This paper investigates the problem of estimating the frequency components of a mixture of s complex sinusoids from a random subset of n regularly spaced samples. Unlike previous work in compressed sensing, the frequencies are not assumed to lie on a grid, but can assume any values in the normalized frequency domain [0, 1]. An atomic norm minimization approach is proposed to exactly recover the unobserved samples and identify the unknown frequencies, which is then reformulated as an exact semidefinite program. Even with this continuous dictionary, it is shown that O(slog s log n) random samples are sufficient to guarantee exact frequency localization with high probability, provided the frequencies are well separated. Extensive numerical experiments are performed to illustrate the effectiveness of the proposed method.", "Accurate and tractable channel modeling is critical to realizing the full potential of antenna arrays in wireless communications. Current approaches represent two extremes: idealized statistical models representing a rich scattering environment and parameterized physical models that describe realistic scattering environments via the angles and gains associated with different propagation paths. However, simple rules that capture the effects of scattering characteristics on channel capacity and diversity are difficult to infer from existing models. We propose an intermediate virtual channel representation that captures the essence of physical modeling and provides a simple geometric interpretation of the scattering environment. The virtual representation corresponds to a fixed coordinate transformation via spatial basis functions defined by fixed virtual angles. We show that in an uncorrelated scattering environment, the elements of the channel matrix form a segment of a stationary process and that the virtual channel coefficients are approximately uncorrelated samples of the underlying spectral representation. For any scattering environment, the virtual channel matrix clearly reveals the two key factors affecting capacity: the number of parallel channels and the level of diversity. The concepts of spatial zooming and aliasing are introduced to provide a transparent interpretation of the effect of antenna spacing on channel statistics and capacity. 
Numerical results are presented to illustrate various aspects of the virtual framework.", "The theory of compressed sensing suggests that successful inversion of an image of the physical world (broadly defined to include speech signals, radar sonar returns, vibration records, sensor array snapshot vectors, 2-D images, and so on) for its source modes and amplitudes can be achieved at measurement dimensions far lower than what might be expected from the classical theories of spectrum or modal analysis, provided that the image is sparse in an apriori known basis. For imaging problems in spectrum analysis, and passive and active radar sonar, this basis is usually taken to be a DFT basis. However, in reality no physical field is sparse in the DFT basis or in any apriori known basis. No matter how finely we grid the parameter space the sources may not lie in the center of the grid cells and consequently there is mismatch between the assumed and the actual bases for sparsity. In this paper, we study the sensitivity of compressed sensing to mismatch between the assumed and the actual sparsity bases. We start by analyzing the effect of basis mismatch on the best k-term approximation error, which is central to providing exact sparse recovery guarantees. We establish achievable bounds for the l1 error of the best k -term approximation and show that these bounds grow linearly with the image (or grid) dimension and the mismatch level between the assumed and actual bases for sparsity. We then derive bounds, with similar growth behavior, for the basis pursuit l1 recovery error, indicating that the sparse recovery may suffer large errors in the presence of basis mismatch. Although, we present our results in the context of basis pursuit, our analysis applies to any sparse recovery principle that relies on the accuracy of best k-term approximations for its performance guarantees. We particularly highlight the problematic nature of basis mismatch in Fourier imaging, where spillage from off-grid DFT components turns a sparse representation into an incompressible one. We substantiate our mathematical analysis by numerical examples that demonstrate a considerable performance degradation for image inversion from compressed sensing measurements in the presence of basis mismatch, for problem sizes common to radar and sonar.", "In this paper, we tackle channel estimation in millimeter-wave hybrid multiple-input multiple-output systems by considering off-grid effects. In particular, we assume that spatial parameters can take any value in the angular domain, and need not fall on predefined discretized angles. Instead of increasing the number of discretized points to combat off-grid effects, we use implicit Dirichlet kernel structure in the Fourier domain, which conventional compressed sensing methods do not use. We propose greedy low-complexity algorithms based on orthogonal matching pursuit (OMP); our core idea is to traverse the Dirichlet kernel peak using estimates of the discrete Fourier transform. We demonstrate the efficacy of our proposed algorithms compared to standard OMP reconstruction. Numerical results show that our proposed algorithms obtain smaller reconstruction errors when off-grid effects are accounted for." ] }
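A minimal numpy sketch of the leakage effect described in the paragraph above: an on-grid spatial frequency activates a single bin of the DFT dictionary, while one lying halfway between two grid cells spills energy into every bin. The array size N and the 1e-3 threshold are arbitrary illustrative choices, not values from the cited papers.

```python
import numpy as np

N = 64                                    # number of antennas / grid points (assumed)
n = np.arange(N)
F = np.fft.fft(np.eye(N)) / np.sqrt(N)    # unitary DFT dictionary of on-grid angles

on_grid = F[:, 5]                                                # exactly on bin 5
off_grid = np.exp(1j * 2 * np.pi * n * (5.5 / N)) / np.sqrt(N)   # halfway between bins

for name, a in [("on-grid ", on_grid), ("off-grid", off_grid)]:
    coeff = np.abs(F.conj().T @ a)        # angular-domain representation
    print(name, ":", int(np.sum(coeff > 1e-3)), "active bins out of", N)
# The on-grid angle occupies one bin; the off-grid one leaks into every bin,
# with sidelobes decaying like the Dirichlet kernel, so sparsity is lost.
```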
1907.04423
2961688412
The spectrum scarcity at sub-6 GHz spectrum has made millimeter-wave (mmWave) frequency band a key component of the next-generation wireless networks. While mmWave spectrum offers extremely large transmission bandwidths to accommodate ever-increasing data rates, unique characteristics of this new spectrum need special consideration to achieve the promised network throughput. In this work, we consider the off-grid problem for mmWave communications, which has a significant impact on basic network functionalities involving beam steering and tracking. The off-grid effect naturally appears in compressed sensing (CS) techniques adopting a discretization approach for representing the angular domain. This approach yields a finite set of discrete angle points, which are an approximation to the continuous angular space, and hence degrade the accuracy of related parameter estimation. In order to cope with the off-grid effect, we present a novel parameter-perturbation framework to efficiently estimate the channel and the covariance for mmWave networks. The proposed algorithms employ a smart perturbation mechanism in conjunction with a low-complexity greedy framework of simultaneous orthogonal matching pursuit (SOMP), and jointly solve for the off-grid parameters and weights. Numerical results show a significant performance improvement through our novel framework as a result of handling the off-grid effects, which is totally ignored in the conventional sparse mmWave channel or covariance estimation algorithms.
A natural approach to the problem of off-grid basis mismatch is to increase the number of grid points, i.e., to decrease the grid spacing. However, this is an inefficient approach due to two main problems. First, it increases the mutual coherence of the dictionary, violating the restricted isometry property @cite_5, which makes reconstruction harder to guarantee with standard compressed sensing analyses. Second, it increases the dimension of the dictionary and of the sparse vector to be recovered, resulting in higher memory and computational complexity in reconstruction. More details on basis mismatch and off-grid effects can be found in the seminal paper @cite_41 and in further discussions in @cite_0 @cite_14 @cite_30, with a focus on applications such as beamforming, radar, and image reconstruction.
{ "cite_N": [ "@cite_30", "@cite_14", "@cite_41", "@cite_0", "@cite_5" ], "mid": [ "2114221796", "2067878805", "2565293665", "2123457453", "2015418199" ], "abstract": [ "Pulse-Doppler radar has been successfully applied to surveillance and tracking of both moving and stationary targets. For efficient processing of radar returns, delay-Doppler plane is discretized and FFT techniques are employed to compute matched filter output on this discrete grid. However, for targets whose delay-Doppler values do not coincide with the computation grid, the detection performance degrades considerably. Especially for detecting strong and closely spaced targets this causes miss detections and false alarms. This phenomena is known as the off-grid problem. Although compressive sensing based techniques provide sparse and high resolution results at sub-Nyquist sampling rates, straightforward application of these techniques is significantly more sensitive to the off-grid problem. Here a novel parameter perturbation based sparse reconstruction technique is proposed for robust delay-Doppler radar processing even under the off-grid case. Although the perturbation idea is general and can be implemented in association with other greedy techniques, presently it is used within an orthogonal matching pursuit (OMP) framework. In the proposed technique, the selected dictionary parameters are perturbed towards directions to decrease the orthogonal residual norm. The obtained results show that accurate and sparse reconstructions can be obtained for off-grid multi target cases. A new performance metric based on Kullback-Leibler Divergence (KLD) is proposed to better characterize the error between actual and reconstructed parameter spaces. Increased performance with lower reconstruction errors are obtained for all the tested performance criteria for the proposed technique compared to conventional OMP and @?\"1 minimization techniques.", "Compressive Sensing theory details how a sparsely represented signal in a known basis can be reconstructed with an underdetermined linear measurement model. However, in reality there is a mismatch between the assumed and the actual bases due to factors such as discretization of the parameter space defining basis components, sampling jitter in A D conversion, and model errors. Due to this mismatch, a signal may not be sparse in the assumed basis, which causes significant performance degradation in sparse reconstruction algorithms. To eliminate the mismatch problem, this paper presents a novel perturbed orthogonal matching pursuit (POMP) algorithm that performs controlled perturbation of selected support vectors to decrease the orthogonal residual at each iteration. Based on detailed mathematical analysis, conditions for successful reconstruction are derived. Simulations show that robust results with much smaller reconstruction errors in the case of perturbed bases can be obtained as compared to standard sparse reconstruction techniques.", "This paper investigates the problem of estimating the frequency components of a mixture of s complex sinusoids from a random subset of n regularly spaced samples. Unlike previous work in compressed sensing, the frequencies are not assumed to lie on a grid, but can assume any values in the normalized frequency domain [0, 1]. An atomic norm minimization approach is proposed to exactly recover the unobserved samples and identify the unknown frequencies, which is then reformulated as an exact semidefinite program. 
Even with this continuous dictionary, it is shown that O(slog s log n) random samples are sufficient to guarantee exact frequency localization with high probability, provided the frequencies are well separated. Extensive numerical experiments are performed to illustrate the effectiveness of the proposed method.", "The theory of compressed sensing suggests that successful inversion of an image of the physical world (broadly defined to include speech signals, radar sonar returns, vibration records, sensor array snapshot vectors, 2-D images, and so on) for its source modes and amplitudes can be achieved at measurement dimensions far lower than what might be expected from the classical theories of spectrum or modal analysis, provided that the image is sparse in an apriori known basis. For imaging problems in spectrum analysis, and passive and active radar sonar, this basis is usually taken to be a DFT basis. However, in reality no physical field is sparse in the DFT basis or in any apriori known basis. No matter how finely we grid the parameter space the sources may not lie in the center of the grid cells and consequently there is mismatch between the assumed and the actual bases for sparsity. In this paper, we study the sensitivity of compressed sensing to mismatch between the assumed and the actual sparsity bases. We start by analyzing the effect of basis mismatch on the best k-term approximation error, which is central to providing exact sparse recovery guarantees. We establish achievable bounds for the l1 error of the best k -term approximation and show that these bounds grow linearly with the image (or grid) dimension and the mismatch level between the assumed and actual bases for sparsity. We then derive bounds, with similar growth behavior, for the basis pursuit l1 recovery error, indicating that the sparse recovery may suffer large errors in the presence of basis mismatch. Although, we present our results in the context of basis pursuit, our analysis applies to any sparse recovery principle that relies on the accuracy of best k-term approximations for its performance guarantees. We particularly highlight the problematic nature of basis mismatch in Fourier imaging, where spillage from off-grid DFT components turns a sparse representation into an incompressible one. We substantiate our mathematical analysis by numerical examples that demonstrate a considerable performance degradation for image inversion from compressed sensing measurements in the presence of basis mismatch, for problem sizes common to radar and sonar.", "Abstract It is now well-known that one can reconstruct sparse or compressible signals accurately from a very limited number of measurements, possibly contaminated with noise. This technique known as “compressed sensing” or “compressive sampling” relies on properties of the sensing matrix such as the restricted isometry property . In this Note, we establish new results about the accuracy of the reconstruction from undersampled measurements which improve on earlier estimates, and have the advantage of being more elegant. To cite this article: E.J. Candes, C. R. Acad. Sci. Paris, Ser. I 346 (2008)." ] }
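The coherence argument above can be checked numerically. The sketch below (with assumed parameters, not taken from any of the cited papers) computes the mutual coherence of an oversampled DFT dictionary and shows it approaching 1 as the grid is refined.

```python
import numpy as np

def mutual_coherence(D):
    # largest absolute inner product between distinct unit-norm columns
    D = D / np.linalg.norm(D, axis=0)
    G = np.abs(D.conj().T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

N = 32                                    # number of antennas (assumed)
for oversampling in [1, 2, 4, 8]:
    K = oversampling * N                  # number of angular grid points
    grid = np.arange(K) / K               # normalized spatial frequencies
    D = np.exp(1j * 2 * np.pi * np.outer(np.arange(N), grid))
    print(f"grid size {K:4d}: coherence = {mutual_coherence(D):.3f}")
# Coherence climbs toward 1 as the grid is refined, weakening standard
# CS guarantees, while the dictionary (and the memory cost) keeps growing.
```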
1907.04423
2961688412
The spectrum scarcity at sub-6 GHz spectrum has made millimeter-wave (mmWave) frequency band a key component of the next-generation wireless networks. While mmWave spectrum offers extremely large transmission bandwidths to accommodate ever-increasing data rates, unique characteristics of this new spectrum need special consideration to achieve the promised network throughput. In this work, we consider the off-grid problem for mmWave communications, which has a significant impact on basic network functionalities involving beam steering and tracking. The off-grid effect naturally appears in compressed sensing (CS) techniques adopting a discretization approach for representing the angular domain. This approach yields a finite set of discrete angle points, which are an approximation to the continuous angular space, and hence degrade the accuracy of related parameter estimation. In order to cope with the off-grid effect, we present a novel parameter-perturbation framework to efficiently estimate the channel and the covariance for mmWave networks. The proposed algorithms employ a smart perturbation mechanism in conjunction with a low-complexity greedy framework of simultaneous orthogonal matching pursuit (SOMP), and jointly solve for the off-grid parameters and weights. Numerical results show a significant performance improvement through our novel framework as a result of handling the off-grid effects, which is totally ignored in the conventional sparse mmWave channel or covariance estimation algorithms.
An alternative is to tackle the off-grid effects upfront without increasing the grid size. For example, in the context of channel estimation, @cite_22 provide an improved off-grid sparse Bayesian algorithm for the channel estimation framework. A grid-less CS technique is developed via atomic norm minimization, in the form of semi-definite programming, by @cite_1 . Although these approaches tackle the off-grid issues, their computational complexity is significantly high. Previous work provides a controlled perturbation mechanism for spatial angular parameters based on orthogonal matching pursuit (OMP) @cite_35 , but it is tailored only to the SMV setup, and its immediate application to the MMV setup is not straightforward. Likewise, the application of these off-grid methods to the covariance estimation problem is not straightforward. More importantly, to the best of our knowledge, there is no work that investigates off-grid effects or provides an off-grid solution explicitly for the covariance estimation problem. This motivates the development and analysis of robust low-complexity channel and covariance estimation techniques for the MMV setup with an emphasis on basis mismatch effects.
{ "cite_N": [ "@cite_35", "@cite_1", "@cite_22" ], "mid": [ "2848225903", "2739833327", "2810297269" ], "abstract": [ "The millimeter-wave (mmWave) communications is a promising technology for next-generation wireless networks with its available broad spectrum. Along with massive number of antennas employed at both end of the transceiver, the number of unknown channel coefficients become extremely large. Thanks to sparse nature of mmWave links, this paper proposes a parameter perturbation based sparse recovery technique for mmWave channel estimation. Recently, classical compressive sensing (CS) based sparse recovery techniques have been applied in this area. However, CS based reconstructions are highly effected by basis mismatch problems such as off-the-grid targets, or, equivalently, scattering points. The proposed iterative algorithm called parameter perturbed orthogonal matching pursuit (PPOMP) jointly solves for both the sparse signal, which is the unknown mmWave channel itself, and the basis mismatch due to off-the-grid problem. We verify through extensive numerical results that the proposed PPOMP algorithm achieves significantly better channel estimation performance compared to the state of the art sparse reconstruction techniques.", "In millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems, channel estimation in the presence of sparse multipath fading boils down to two-dimensional (2D) direction-of-arrival (DOA) estimation followed by path gain estimation. To achieve super-resolution angle estimation at affordable complexity, this paper develops an efficient channel estimation approach by applying a truncated atomic norm minimization (T-ANM) technique, which is implemented via partial antenna activation during training-based channel estimation. This technique makes use of a key observation that the sparse scattering characteristics of mmWave MIMO channel gives rise to a low-rank two-level Toeplitz structure in the angular domain. Because of the low-rank property, only a subset of the transceiver antennas needs to be activated to save training resources. Meanwhile, the Toeplitz structure enables ANM-based gridless 2D DOA estimation via reduced-size semidefinite programming. Simulation results show that the proposed reduced-size method can achieve comparable spectral efficiency as the full-size benchmark method at much lower computational complexity and shorter sensing time.", "In this letter, an angle domain off-grid channel estimation algorithm for the uplink millimeter wave (mmWave) massive multiple-input and multiple-output systems is proposed. By exploiting spatial sparse structure in mmWave channels, the proposed method is capable of identifying the angles and gains of the scatterer paths. Comparing the conventional channel estimation methods for mmWave systems, the proposed method achieves better performance in terms of mean square error. Numerical simulation results are provided to verify the superiority of the proposed algorithm." ] }
1907.04428
2959421486
The proliferation of ubiquitous computing requires energy-efficient as well as secure operation of modern processors. Side channel attacks are becoming a critical threat to security and privacy of devices embedded in modern computing infrastructures. Unintended information leakage via physical signatures such as power consumption, electromagnetic emission (EM) and execution time have emerged as a key security consideration for SoCs. Also, information published on purpose at user privilege level accessible through software interfaces results in software only attacks. In this paper, we used a supervised learning based approach for inferring applications executing on android platform based on features extracted from EM side-channel emissions and software exposed dynamic voltage frequency scaling(DVFS) states. We highlight the importance of machine learning based approach in utilizing these multi-dimensional features on a complex SoC, against profiling-based approaches. We also show that learning the instantaneous frequency states polled from onboard frequency driver (cpufreq) is adequate to identify a known application and flag potentially malicious unknown application. The experimental results on benchmarking applications running on ARMv8 processor in Snapdragon 820 board demonstrates early detection of these apps, and atleast 85 accuracy in detecting unknown applications. Overall, the highlight is to utilize a low-complexity path to application inference attacks through learning instantaneous frequency states pattern of CPU core.
Many of the past efforts have focused on EM-based side-channel analysis. Nazari et al. and Callan et al. presented EM-based acquisition and analysis flows to detect code changes injected by an adversary @cite_7 @cite_4 . ML models were trained on features extracted from power side-channel emissions to detect malware in medical devices @cite_10 . Similarly, a security monitor for the control-flow integrity of programs executing on an industrial PLC has been demonstrated using an LSTM network based on features derived from EM emissions @cite_11 .
{ "cite_N": [ "@cite_10", "@cite_4", "@cite_7", "@cite_11" ], "mid": [ "2124631616", "2625110865", "2117552728", "2751592946" ], "abstract": [ "Medical devices based on embedded systems are ubiquitous in clinical settings. Increasingly, they connect to networks and run off-the-shelf operating systems vulnerable to malware. But strict validation requirements make it prohibitively difficult or costly to use anti-virus software or automated operating system updates on these systems. Our add-on monitoring system, WattsUpDoc, uses a traditionally undesirable side channel of power consumption to enable run-time malware detection. In our experiments, WattsUpDoc detected previously known malware with at least 94 accuracy and previously unknown malware with at least 85 accuracy on several embedded devices--detection rates similar to those of conventional malware-detection systems on PCs. WattsUpDoc detects malware without requiring hardware or software modifications or network communication.", "This paper describes EM-Based Detection of Deviations in Program Execution (EDDIE), a new method for detecting anomalies in program execution, such as malware and other code injections, without introducing any overheads, adding any hardware support, changing any software, or using any resources on the monitored system itself. Monitoring with EDDIE involves receiving electromagnetic (EM) emanations that are emitted as a side effect of execution on the monitored system, and it relies on spikes in the EM spectrum that are produced as a result of periodic (e.g. loop) activity in the monitored execution. During training, EDDIE characterizes normal execution behavior in terms of peaks in the EM spectrum that are observed at various points in the program execution, but it does not need any characterization of the malware or other code that might later be injected. During monitoring, EDDIE identifies peaks in the observed EM spectrum, and compares these peaks to those learned during training. Since EDDIE requires no resources on the monitored machine and no changes to the monitored software, it is especially well suited for security monitoring of embedded and IoT devices. We evaluate EDDIE on a real IoT system and in a cycle-accurate simulator, and find that even relatively brief injected bursts of activity (a few milliseconds) are detected by EDDIE with high accuracy, and that it also accurately detects when even a few instructions are injected into an existing loop within the application.", "This paper presents a new metric, which we call Signal Available to Attacker (SAVAT), that measures the side channel signal created by a specific single-instruction difference in program execution, i.e. The amount of signal made available to a potential attacker who wishes to decide whether the program has executed instruction event A or instruction event B. We also devise a practical methodology for measuring SAVAT in real systems using only user-level access permissions and common measurement equipment. Finally, we perform a case study where we measure electromagnetic (EM) emanations SAVAT among 11 different instructions for three different laptop systems. Our findings from these experiments confirm key intuitive expectations, e.g. That SAVAT between on-chip instructions and off-chip memory accesses tends to be higher than between two on-chip instructions. 
However, we find that particular instructions, such as integer divide, have much higher SAVAT than other instructions in the same general category (integer arithmetic), and that last-level-cache hits and misses have similar (high) SAVAT. Overall, we confirm that our new metric and methodology can help discover the most vulnerable aspects of a processor architecture or a program, and thus inform decision-making about how to best manage the overall side channel vulnerability of a processor, a program, or a system.", "Trustworthy operation of industrial control systems depends on secure and real-time code execution on the embedded programmable logic controllers (PLCs). The controllers monitor and control the critical infrastructures, such as electric power grids and healthcare platforms, and continuously report back the system status to human operators. We present Zeus, a contactless embedded controller security monitor to ensure its execution control flow integrity. Zeus leverages the electromagnetic emission by the PLC circuitry during the execution of the controller programs. Zeus's contactless execution tracking enables non-intrusive monitoring of security-critical controllers with tight real-time constraints. Those devices often cannot tolerate the cost and performance overhead that comes with additional traditional hardware or software monitoring modules. Furthermore, Zeus provides an air-gap between the monitor (trusted computing base) and the target (potentially compromised) PLC. This eliminates the possibility of the monitor infection by the same attack vectors. Zeus monitors for control flow integrity of the PLC program execution. Zeus monitors the communications between the human machine interface and the PLC, and captures the control logic binary uploads to the PLC. Zeus exercises its feasible execution paths, and fingerprints their emissions using an external electromagnetic sensor. Zeus trains a neural network for legitimate PLC executions, and uses it at runtime to identify the control flow based on PLC's electromagnetic emissions. We implemented Zeus on a commercial Allen Bradley PLC, which is widely used in industry, and evaluated it on real-world control program executions. Zeus was able to distinguish between different legitimate and malicious executions with 98.9 accuracy and with zero overhead on PLC execution by design." ] }
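The ML-on-EM-features pipeline surveyed above can be sketched end to end on synthetic data. Everything below is a stand-in: the "EM traces" are simulated sinusoids plus noise, and the magnitude-spectrum features and random-forest classifier are one plausible choice, not the exact setup of the cited works.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def em_trace(app_id, n=1024):
    # synthetic stand-in for a measured EM trace: each "app" is a loop
    # with a characteristic fundamental frequency plus noise
    t = np.arange(n)
    f0 = 0.05 + 0.02 * app_id
    return np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(n)

def features(trace):
    # magnitude spectrum, as in spectrum-peak-based EM monitors
    return np.abs(np.fft.rfft(trace))[:128]

X = np.array([features(em_trace(app)) for app in range(3) for _ in range(100)])
y = np.repeat(np.arange(3), 100)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```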
1907.04428
2959421486
The proliferation of ubiquitous computing requires energy-efficient as well as secure operation of modern processors. Side channel attacks are becoming a critical threat to security and privacy of devices embedded in modern computing infrastructures. Unintended information leakage via physical signatures such as power consumption, electromagnetic emission (EM) and execution time have emerged as a key security consideration for SoCs. Also, information published on purpose at user privilege level accessible through software interfaces results in software only attacks. In this paper, we used a supervised learning based approach for inferring applications executing on android platform based on features extracted from EM side-channel emissions and software exposed dynamic voltage frequency scaling(DVFS) states. We highlight the importance of machine learning based approach in utilizing these multi-dimensional features on a complex SoC, against profiling-based approaches. We also show that learning the instantaneous frequency states polled from onboard frequency driver (cpufreq) is adequate to identify a known application and flag potentially malicious unknown application. The experimental results on benchmarking applications running on ARMv8 processor in Snapdragon 820 board demonstrates early detection of these apps, and atleast 85 accuracy in detecting unknown applications. Overall, the highlight is to utilize a low-complexity path to application inference attacks through learning instantaneous frequency states pattern of CPU core.
DVFS-based power management has been explored extensively across all platforms, but only recently have researchers started exploring the interactions of DVFS and security @cite_1 @cite_8 @cite_5 @cite_13 . Yang et al. demonstrated the use of DVFS as a countermeasure to power side-channel attacks on encryption engines @cite_1 . A. Singh et al. demonstrated the use of fast DVFS, enabled by an on-chip regulator and adaptive clocking, to deter extraction of the encryption key in hardware accelerators @cite_8 @cite_5 . More recently, Tang et al. presented the CLKSCREW methodology, which exploits flaws in the power management techniques of an ARMv7 processor @cite_13 . By performing unconstrained overclocking and under-volting, the authors could inject faults during encryption and successfully recover the secret key.
{ "cite_N": [ "@cite_13", "@cite_5", "@cite_1", "@cite_8" ], "mid": [ "2750990141", "2900861686", "2096490242", "2798745612" ], "abstract": [ "", "This paper demonstrates the improved power and electromagnetic (EM) side-channel attack (SCA) resistance of 128-bit Advanced Encryption Standard (AES) engines in 130-nm CMOS using random fast voltage dithering (RFVD) enabled by integrated voltage regulator (IVR) with the bond-wire inductors and an on-chip all-digital clock modulation (ADCM) circuit. RFVD scheme transforms the current signatures with random variations in AES input supply while adding random shifts in the clock edges in the presence of global and local supply noises. The measured power signatures at the supply node of the AES engines show upto 37 @math reduction in peak for higher order test vector leakage assessment (TVLA) metric and upto 692 @math increase in minimum traces required to disclose (MTD) the secret encryption key with correlation power analysis (CPA). Similarly, SCA on the measured EM signatures from the chip demonstrates a reduction of upto 11.3 @math in TVLA peak and upto 37 @math increase in correlation EM analysis (CEMA) MTD.", "A novel power attack resistant cryptosystem is presented in this paper. Security in digital computing and communication is becoming increasingly important. Design techniques that can protect cryptosystems from leaking information have been studied by several groups. Power attacks, which infer program behavior from observing power supply current into a processor core, are important forms of attacks. Various methods have been proposed to countermeasure the popular and efficient power attacks. However, these methods do not adequately protect against power attacks and may introduce new vulnerabilities. In this work, we addressed a novel approach against the power attacks, i.e., Dynamic Voltage and Frequency Switching (DVFS). Three designs, naive, improved and advanced implementations, have been studied to test the efficiency of DVFS against power attacks. A final advanced realization of our novel cryptosystem was given out, which achieved enough high power trace entropy and time trace entropy to block all kinds of power attacks, with 27 energy reduction and 16 time overhead for DES encryption and decryption algorithms.", "The high-performance and energy-efficient encryption engines have emerged as a key component for modern System-On-Chip (SoC) in various platforms including servers, desktops, mobile, and IoT edge devices. A key bottleneck to secure operation of encryption engines is leakage of information through various side-channels. For example, an adversary can extract the secret key by performing statistical analysis on measured power and electromagnetic (EM) emission signatures generated by the hardware during encryption. Countermeasures to such side-channel attacks often come at high power, area, or performance overheads. Therefore, design of side-channel secure encryption engines is a critical challenge for high-performance and or power- energy efficient operations. This paper reviews that although low-power requirement imposes critical challenge for side-channel security, but circuit techniques traditionally developed for power management also present new opportunities for side-channel resistance. As a case study, we review the feasibility of using integrated voltage regulator and dynamic voltage frequency scaling normally used for efficient power management, for increasing power-side-channel resistance of AES engines. 
The hardware measurement results from test-chip fabricated in 130nm process are presented to demonstrate the impact of power management circuits on side-channel security." ] }
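Since the surveyed paper's software-only channel is the cpufreq driver state, a minimal sampling loop might look like the sketch below. It assumes a Linux or Android system exposing the standard cpufreq sysfs node shown (readability, units, and the CPU index vary by kernel and device); the sampling period is an arbitrary choice.

```python
import time

# standard Linux cpufreq sysfs node (CPU index and readability vary by device)
FREQ_NODE = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"

def sample_freq_trace(duration_s=2.0, period_s=0.01):
    """Poll the instantaneous CPU frequency (kHz) at a fixed period."""
    trace, t_end = [], time.time() + duration_s
    while time.time() < t_end:
        with open(FREQ_NODE) as f:
            trace.append(int(f.read().strip()))
        time.sleep(period_s)
    return trace

if __name__ == "__main__":
    trace = sample_freq_trace()
    print(len(trace), "samples,", len(set(trace)), "distinct DVFS states")
```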
1907.04428
2959421486
The proliferation of ubiquitous computing requires energy-efficient as well as secure operation of modern processors. Side channel attacks are becoming a critical threat to security and privacy of devices embedded in modern computing infrastructures. Unintended information leakage via physical signatures such as power consumption, electromagnetic emission (EM) and execution time have emerged as a key security consideration for SoCs. Also, information published on purpose at user privilege level accessible through software interfaces results in software only attacks. In this paper, we used a supervised learning based approach for inferring applications executing on android platform based on features extracted from EM side-channel emissions and software exposed dynamic voltage frequency scaling(DVFS) states. We highlight the importance of machine learning based approach in utilizing these multi-dimensional features on a complex SoC, against profiling-based approaches. We also show that learning the instantaneous frequency states polled from onboard frequency driver (cpufreq) is adequate to identify a known application and flag potentially malicious unknown application. The experimental results on benchmarking applications running on ARMv8 processor in Snapdragon 820 board demonstrates early detection of these apps, and atleast 85 accuracy in detecting unknown applications. Overall, the highlight is to utilize a low-complexity path to application inference attacks through learning instantaneous frequency states pattern of CPU core.
Both profiling-based and ML-based techniques have been utilized for application inference. For instance, to protect devices against malware, authors have demonstrated malware detection based on HPCs by selectively choosing application-specific hardware and software events @cite_15 . A similar approach is shown by R. Spritzer et al., who use selected information from process logs that correlates strongly across runs of the same application to form templates, and who utilize dynamic time warping (DTW) for application inference. They have shown that up to 96% of the exposed information can be used to mount inference attacks. Using proc @math pid @math statm along with the number of context switches, it has been shown that the web pages visited by a user can be inferred @cite_12 . Similarly, the size of the memory footprint of specific applications in proc @math pid @math statm is used to infer the user interface. More recently, activity transitions of an application are inferred using runtime memory statistics @cite_2 . Moreover, identifying which application is running can help in launching specific attacks. For instance, a background app that identifies applications requiring login credentials can execute phishing-based attacks @cite_2 to steal those credentials.
{ "cite_N": [ "@cite_15", "@cite_12", "@cite_2" ], "mid": [ "2319159802", "2144219822", "2163643194" ], "abstract": [ "Hardware Performance Counter-based (HPC) runtime checking is an effective way to identify malicious behaviors of malware and detect malicious modifications to a legitimate program’s control flow. To reduce the overhead in the monitored system which has limited storage and computing resources, we present a “sample-locally-analyze-remotely” technique. The sampled HPC data are sent to a remote server for further analysis. To minimize the I O bandwidth required for transmission, the fine-grained HPC profiles are compressed into much smaller vectors with Compressive Sensing. The experimental results demonstrate an 80p I O bandwidth reduction after applying Compressive Sensing, without compromising the detection and identification capabilities.", "We describe a new side-channel attack. By tracking changes in the application's memory footprint, a concurrent process belonging to a different user can learn its secrets. Using Web browsers as the target, we show how an unprivileged, local attack process - for example, a malicious Android app - can infer which page the user is browsing, as well as finer-grained information: whether she is a paid customer, her interests, etc. This attack is an instance of a broader problem. Many isolation mechanisms in modern systems reveal accounting information about program execution, such as memory usage and CPU scheduling statistics. If temporal changes in this public information are correlated with the program's secrets, they can lead to a privacy breach. To illustrate the pervasiveness of this problem, we show how to exploit scheduling statistics for keystroke sniffing in Linux and Android, and how to combine scheduling statistics with the dynamics of memory usage for more accurate adversarial inference of browsing behavior.", "The security of smartphone GUI frameworks remains an important yet under-scrutinized topic. In this paper, we report that on the Android system (and likely other OSes), a weaker form of GUI confidentiality can be breached in the form of UI state (not the pixels) by a background app without requiring any permissions. Our finding leads to a class of attacks which we name UI state inference attack. The underlying problem is that popular GUI frameworks by design can potentially reveal every UI state change through a newly-discovered public side channel -- shared memory. In our evaluation, we show that for 6 out of 7 popular Android apps, the UI state inference accuracies are 80-90 for the first candidate UI states, and over 93 for the top 3 candidates. Even though the UI state does not reveal the exact pixels, we show that it can serve as a powerful building block to enable more serious attacks. To demonstrate this, we design and fully implement several new attacks based on the UI state inference attack, including hijacking the UI state to steal sensitive user input (e.g., login credentials) and obtain sensitive camera images shot by the user (e.g., personal check photos for banking apps). We also discuss non-trivial challenges in eliminating the identified side channel, and suggest more secure alternative system designs." ] }
1907.04269
2962406362
We present a scheme for sequential decision making with a risk-sensitive objective and constraints in a dynamic environment. A neural network is trained as an approximator of the mapping from parameter space to space of risk and policy with risk-sensitive constraints. For a given risk-sensitive problem, in which the objective and constraints are, or can be estimated by, functions of the mean and variance of return, we generate a synthetic dataset as training data. Parameters defining a targeted process might be dynamic, i.e., they might vary over time, so we sample them within specified intervals to deal with these dynamics. We show that: i). Most risk measures can be estimated using return variance; ii). By virtue of the state-augmentation transformation, practical problems modeled by Markov decision processes with stochastic rewards can be solved in a risk-sensitive scenario; and iii). The proposed scheme is validated by a numerical experiment.
SDM problems considering risk, dynamic environments, and constraints are usually studied separately. Besides the works reviewed earlier, Shen @cite_13 generalized risk measures to valuation functions. The author applied a set of valuation functions, derived model-free risk-sensitive reinforcement learning algorithms, and presented a risk-control example in simulated algorithmic trading of stocks. For SDM in dynamic environments, Hadoux @cite_1 proposed a new model named the Hidden Semi-Markov-Mode Markov Decision Process (HS3MDP), which represented non-stationary problems whose dynamics evolved among a finite set of contexts. The author adapted the Partially Observable Monte-Carlo Planning (POMCP) algorithm to HS3MDPs in order to solve those problems efficiently. The POMCP algorithm used a black-box environment simulator and a particle filter to approximate a belief state; the simulator relaxed the model-based requirement, and each filter particle represented a state of the POMDP being solved. For different types of dynamic environments, the author compared a regret-based method with its Markov counterpart @cite_31 . In the regret-based method, the agent was involved in a two-player repeated game, where the two agents (the player and the opponent, which can be the environment) chose an action to play, received feedback, and repeated the game.
{ "cite_N": [ "@cite_1", "@cite_31", "@cite_13" ], "mid": [ "2339343364", "2285087685", "2181918151" ], "abstract": [ "In sequential decision-making problems under uncertainty, an agent makes decisions, one after another, considering the current state of the environment where she evolves. In most work, the environment the agent evolves in is assumed to be stationary, i.e., its dynamics do not change over time. However, the stationarity hypothesis can be invalid if, for instance, exogenous events can occur. In this document, we are interested in sequential decision-making in non-stationary environments. We propose a new model named HS3MDP, allowing us to represent non-stationary problems whose dynamics evolve among a finite set of contexts. In order to efficiently solve those problems, we adapt the POMCP algorithm to HS3MDPs. We also present RLCD with SCD, a new method to learn the dynamics of the environments, without knowing a priori the number of contexts. We then explore the field of argumentation problems, where few works consider sequential decision-making. We address two types of problems: stochastic debates (APS ) and mediation problems with non-stationary agents (DMP). In this work, we present a model formalizing APS and allowing us to transform them into an MOMDP in order to optimize the sequence of arguments of one agent in the debate. We then extend this model to DMPs to allow a mediator to strategically organize speak-turns in a debate.", "", "This thesis investigates risk-sensitive sequential decision-making problems in an uncertain environment. We rst introduce the axiomatic concept of valuation functions that generalize known concepts of risk measures in mathematical nance to cover most of the existing risk related models in various elds, in particular, behavioral economics and cognitive neuroscience. By applying this concept to Markov processes, we construct valuation maps and develop thereby a uni ed framework for incorporating risk into Markov decision processes on general spaces. Within the framework, we study mainly two types of in nite-horizon risk-sensitive criteria, discounted and average valuations, and solve the associated optimization problems by value iteration. For the discounted case, we propose a new discount scheme, which is di erent from the conventional form but consistent with existing literature, while for the average criterion, we state Lyapunov-type stability conditions that generalize known conditions for Markov chains to ensure the existence of solutions to the optimality equation and a geometric convergence rate for the value iteration. Applying a set of valuation functions, called utility-based shortfall, we derive a family of model-free risk-sensitive reinforcement learning algorithms for solving the optimization problems corresponding to risk-sensitive valuations. In addition, we nd that when appropriate utility functions are chosen, agents’ behaviors express key features of human behavior as predicted by prospect theory, for example, di erent risk preferences for gains and losses, as well as the shape of subjective probability curves. As a proof of principle for the applicability of the new algorithms, we apply them to two tasks, 1) to quantify human behavior in a sequential investment task and 2) to perform risk control in simulated algorithmic trading of stocks. 
In the rst task, the risk-sensitive variant provides a signi cantly better t to the behavioral data and it leads to an interpretation of the subject’s responses which is indeed consistent with prospect theory. The analysis of simultaneously measured fMRI signals show a signi cant correlation of the risk-sensitive temporal di erence error with BOLD signal change in the ventral striatum. In the second task, our algorithm outperforms the risk-neutral reinforcement learning algorithm by keeping the trading cost at a substantially low level at the spot when the 2010 Flash Crash happened, and signi cantly reducing the risk over the whole test period." ] }
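The mean-variance machinery underlying the risk-sensitive objectives above can be illustrated with Monte Carlo rollouts. The sketch below is a toy, not the paper's neural-network scheme: a hypothetical two-state MDP with stochastic rewards, a risk-aversion weight `lam`, and a variance cap `cap`, all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(policy, horizon=50):
    """Return of one episode in a toy 2-state MDP with stochastic rewards."""
    s, ret = 0, 0.0
    for _ in range(horizon):
        a = policy[s]
        # hypothetical dynamics: action 1 is riskier but has a higher mean reward
        ret += rng.normal(loc=0.1 + 0.2 * a, scale=0.1 + 0.5 * a)
        s = rng.integers(2)
    return ret

def mean_variance(policy, episodes=5000):
    returns = np.array([rollout(policy) for _ in range(episodes)])
    return returns.mean(), returns.var()

lam, cap = 0.5, 30.0                       # risk-aversion weight and variance cap
for policy in [(0, 0), (1, 1)]:
    mu, var = mean_variance(policy)
    # mean-variance objective with a risk-sensitive constraint on the variance
    print(policy, f"mean={mu:.2f} var={var:.2f} "
                  f"J={mu - lam * var:.2f} feasible={var <= cap}")
```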
1907.04427
2957141087
In this paper, we tackle channel estimation in millimeter-wave hybrid multiple-input multiple-output systems by considering off-grid effects. In particular, we assume that spatial parameters can take any value in the angular domain, and need not fall on predefined discretized angles. Instead of increasing the number of discretized points to combat off-grid effects, we use implicit Dirichlet kernel structure in the Fourier domain, which conventional compressed sensing methods do not use. We propose greedy low-complexity algorithms based on orthogonal matching pursuit (OMP); our core idea is to traverse the Dirichlet kernel peak using estimates of the discrete Fourier transform. We demonstrate the efficacy of our proposed algorithms compared to standard OMP reconstruction. Numerical results show that our proposed algorithms obtain smaller reconstruction errors when off-grid effects are accounted for.
One of the most promising features of next-generation wireless systems is the use of high-frequency, high-bandwidth signals in millimeter-wave (mmWave) frequency bands. These mmWave bands, combined with multiple-input multiple-output (MIMO) technology, have great potential in delivering higher data rates, higher spectral efficiency, and lower latency, exceeding the performance of traditional cellular systems operating at sub-6 GHz bands. Conventional mmWave MIMO architectures use a large number of antennas, which results in high cost and power consumption, making it difficult to assign a radio frequency (RF) chain per antenna. To curtail these issues, a hybrid analog-digital beamforming (HADB) architecture is adopted at mmWave bands @cite_1 @cite_16 .
{ "cite_N": [ "@cite_16", "@cite_1" ], "mid": [ "2195693430", "2195833401" ], "abstract": [ "Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.", "Hybrid analog digital multiple-input multiple-output architectures were recently proposed as an alternative for fully digital-precoding in millimeter wave wireless communication systems. This is motivated by the possible reduction in the number of RF chains and analog-to-digital converters. In these architectures, the analog processing network is usually based on variable phase shifters. In this paper, we propose hybrid architectures based on switching networks to reduce the complexity and the power consumption of the structures based on phase shifters. We define a power consumption model and use it to evaluate the energy efficiency of both structures. To estimate the complete MIMO channel, we propose an open-loop compressive channel estimation technique that is independent of the hardware used in the analog processing stage. We analyze the performance of the new estimation algorithm for hybrid architectures based on phase shifters and switches. Using the estimate, we develop two algorithms for the design of the hybrid combiner based on switches and analyze the achieved spectral efficiency. Finally, we study the tradeoffs between power consumption, hardware complexity, and spectral efficiency for hybrid architectures based on phase shifting networks and switching networks. Numerical results show that architectures based on switches obtain equal or better channel estimation performance to that obtained using phase shifters, while reducing hardware complexity and power consumption. For equal power consumption, all the hybrid architectures provide similar spectral efficiencies." ] }
1907.04427
2957141087
In this paper, we tackle channel estimation in millimeter-wave hybrid multiple-input multiple-output systems by considering off-grid effects. In particular, we assume that spatial parameters can take any value in the angular domain, and need not fall on predefined discretized angles. Instead of increasing the number of discretized points to combat off-grid effects, we use implicit Dirichlet kernel structure in the Fourier domain, which conventional compressed sensing methods do not use. We propose greedy low-complexity algorithms based on orthogonal matching pursuit (OMP); our core idea is to traverse the Dirichlet kernel peak using estimates of the discrete Fourier transform. We demonstrate the efficacy of our proposed algorithms compared to standard OMP reconstruction. Numerical results show that our proposed algorithms obtain smaller reconstruction errors when off-grid effects are accounted for.
The HADB architecture complicates channel estimation because only the low-dimensional signals pre-combined by the analog combiner are available at baseband, which severely degrades the estimation process. The accuracy with which the channel is estimated plays a critical role in physical-layer performance, as it directly affects receiver design, e.g., channel equalization @cite_9 and radio resource management @cite_10 . To overcome these challenges, channel estimation algorithms based on compressed sensing (CS) @cite_12 @cite_16 have been proposed. These CS-based methods build on the virtual channel model @cite_4 , which provides a virtual angular representation of MIMO channels.
{ "cite_N": [ "@cite_4", "@cite_9", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "2128865660", "2921341714", "2195693430", "2510608419", "2339667469" ], "abstract": [ "Accurate and tractable channel modeling is critical to realizing the full potential of antenna arrays in wireless communications. Current approaches represent two extremes: idealized statistical models representing a rich scattering environment and parameterized physical models that describe realistic scattering environments via the angles and gains associated with different propagation paths. However, simple rules that capture the effects of scattering characteristics on channel capacity and diversity are difficult to infer from existing models. We propose an intermediate virtual channel representation that captures the essence of physical modeling and provides a simple geometric interpretation of the scattering environment. The virtual representation corresponds to a fixed coordinate transformation via spatial basis functions defined by fixed virtual angles. We show that in an uncorrelated scattering environment, the elements of the channel matrix form a segment of a stationary process and that the virtual channel coefficients are approximately uncorrelated samples of the underlying spectral representation. For any scattering environment, the virtual channel matrix clearly reveals the two key factors affecting capacity: the number of parallel channels and the level of diversity. The concepts of spatial zooming and aliasing are introduced to provide a transparent interpretation of the effect of antenna spacing on channel statistics and capacity. Numerical results are presented to illustrate various aspects of the virtual framework.", "Correlation-based techniques used for frame synchronization can suffer significant performance degradation over multi-path frequency-selective channels. In this paper, we propose a joint frame synchronization and channel estimation (JFSCE) framework as a remedy to this problem. This framework, however, increases the size of the resulting combined channel vector which should capture both the channel impulse response vector and the frame boundary offset and, therefore, its estimation becomes more challenging. On the other hand, because the combined channel vector is sparse, sparse channel estimation methods can be applied. We propose several JFSCE methods using popular sparse signal recovery algorithms which exploit the sparsity of the combined channel vector. Subsequently, the sparse channel vector estimate is used to design a sparse equalizer. Our simulation results and experimental measurements using software defined radios show that in some scenarios our proposed method improves the overall system performance significantly, in terms of the mean square error between the transmitted and the equalized symbols compared to the conventional method.", "Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. 
Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.", "We consider the joint path, subcarrier and power allocation problem for a multi-user orthogonal frequency division multiple access (OFDMA) relay-enhanced cellular (REC) network. The goal is to maximize a system utility function subject to a total transmit power constraint and the OFDMA resolver allocation constraint. Since the problem is of combinatorial complexity, we propose an iterative re-weighted minimization (IRM) framework based on reformulating the combinatorial OFDMA constraint as an equivalent continuous optimization problem. The result is a fast, globally convergent algorithm for solving the resource allocation problem in the single cell OFDMA REC network. We demonstrate the efficacy of the proposed approach and compare its performance with existing schemes through Monte Carlo simulations.", "We propose an efficient open-loop channel estimator for a millimeter-wave (mm-wave) hybrid multiple-input multiple-output (MIMO) system consisting of radio-frequency (RF) beamformers with large antenna arrays followed by a baseband MIMO processor. A sparse signal recovery problem exploiting the sparse nature of mm-wave channels is formulated for channel estimation based on the parametric channel model with quantized angles of departures arrivals (AoDs AoAs), called the angle grids. The problem is solved by the orthogonal matching pursuit (OMP) algorithm employing a redundant dictionary consisting of array response vectors with finely quantized angle grids. We suggest the use of non-uniformly quantized angle grids and show that such grids reduce the coherence of the redundant dictionary. The lower and upper bounds of the sum-of-squared errors of the proposed OMP-based estimator are derived analytically: the lower bound is derived by considering the oracle estimator that assumes the knowledge of AoDs AoAs, and the upper bound is derived based on the results of the OMP performance guarantees. The design of training vectors (or sensing matrix) is particularly important in hybrid MIMO systems, because the RF beamformer prevents the use of independent and identically distributed random training vectors, which are popular in compressed sensing. We design training vectors so that the total coherence of the equivalent sensing matrix is minimized for a given RF beamforming matrix, which is assumed to be unitary. It is observed that the estimation accuracy can be improved significantly by randomly permuting the columns of the RF beamforming matrix. The simulation results demonstrate the advantage of the proposed OMP with a redundant dictionary over the existing methods such as the least squares method and the OMP based on the virtual channel model." ] }
1907.04427
2957141087
In this paper, we tackle channel estimation in millimeter-wave hybrid multiple-input multiple-output systems by considering off-grid effects. In particular, we assume that spatial parameters can take any value in the angular domain, and need not fall on predefined discretized angles. Instead of increasing the number of discretized points to combat off-grid effects, we use implicit Dirichlet kernel structure in the Fourier domain, which conventional compressed sensing methods do not use. We propose greedy low-complexity algorithms based on orthogonal matching pursuit (OMP); our core idea is to traverse the Dirichlet kernel peak using estimates of the discrete Fourier transform. We demonstrate the efficacy of our proposed algorithms compared to standard OMP reconstruction. Numerical results show that our proposed algorithms obtain smaller reconstruction errors when off-grid effects are accounted for.
The virtual channel model describes the channel with respect to (w.r.t.) fixed basis functions corresponding to spatial angles drawn from a finite discrete dictionary. In other words, the continuous parameter space of spatial angles is discretized into a finite set of pre-defined grid points, which exposes a sparse representation of the MIMO channel. The estimation accuracy of CS methods based on this discretization is limited by the number of points in the dictionary. Although this discretization procedure yields state-of-the-art performance, it has several intrinsic disadvantages @cite_11 , chief among them the off-grid effect: true spatial angles rarely fall exactly on the predefined grid.
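This limitation can be demonstrated in a few lines: with a critically sampled (DFT) angle grid, an on-grid path is exactly 1-sparse in the dictionary, while a path whose angle falls between grid points leaks energy into many atoms. The ULA steering model below is the same illustrative assumption as in the earlier sketch.

```python
import numpy as np

n_ant = 64
grid = np.linspace(-1, 1, n_ant, endpoint=False)          # critical (DFT) angle grid
steer = lambda s: np.exp(1j * np.pi * np.arange(n_ant) * s) / np.sqrt(n_ant)
A = np.stack([steer(g) for g in grid], axis=1)            # orthonormal dictionary

step = grid[1] - grid[0]
cases = {"on-grid": steer(grid[40]), "off-grid": steer(grid[40] + 0.4 * step)}
for name, h in cases.items():
    c = np.sort(np.abs(A.conj().T @ h))[::-1]             # correlation with every atom
    print(f"{name:8s} top-3 atom correlations: {np.round(c[:3], 3)}")
# The on-grid path correlates with exactly one atom (1, 0, 0, ...); the
# off-grid path leaks energy into many neighboring atoms, so it is no
# longer exactly sparse in the dictionary.
```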
{ "cite_N": [ "@cite_11" ], "mid": [ "2565293665" ], "abstract": [ "This paper investigates the problem of estimating the frequency components of a mixture of s complex sinusoids from a random subset of n regularly spaced samples. Unlike previous work in compressed sensing, the frequencies are not assumed to lie on a grid, but can assume any values in the normalized frequency domain [0, 1]. An atomic norm minimization approach is proposed to exactly recover the unobserved samples and identify the unknown frequencies, which is then reformulated as an exact semidefinite program. Even with this continuous dictionary, it is shown that O(slog s log n) random samples are sufficient to guarantee exact frequency localization with high probability, provided the frequencies are well separated. Extensive numerical experiments are performed to illustrate the effectiveness of the proposed method." ] }
1907.04427
2957141087
In this paper, we tackle channel estimation in millimeter-wave hybrid multiple-input multiple-output systems by considering off-grid effects. In particular, we assume that spatial parameters can take any value in the angular domain, and need not fall on predefined discretized angles. Instead of increasing the number of discretized points to combat off-grid effects, we use implicit Dirichlet kernel structure in the Fourier domain, which conventional compressed sensing methods do not use. We propose greedy low-complexity algorithms based on orthogonal matching pursuit (OMP); our core idea is to traverse the Dirichlet kernel peak using estimates of the discrete Fourier transform. We demonstrate the efficacy of our proposed algorithms compared to standard OMP reconstruction. Numerical results show that our proposed algorithms obtain smaller reconstruction errors when off-grid effects are accounted for.
A natural yet inefficient approach to reducing off-grid effects is to increase the number of discretized points, i.e., the grid resolution. This not only increases the mutual coherence of the dictionary matrix, leading to loss of the restricted isometry property, but also increases the problem dimension and hence the computational cost @cite_8 . An alternative is to tackle off-grid effects directly, without increasing the grid size. For example, in the context of channel estimation, the authors of @cite_8 propose a controlled perturbation mechanism for spatial angular parameters based on orthogonal matching pursuit (OMP) @cite_6 . Other related works include an improved off-grid sparse Bayesian algorithm @cite_15 and a grid-less CS technique developed via atomic norm minimization @cite_3 . Although these methods all tackle off-grid issues, they are computationally prohibitive, which motivates us to develop and analyze robust low-complexity channel estimation algorithms that account for off-grid effects.
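For reference, a textbook OMP loop of the kind these perturbation-based methods build on is sketched below; it is the generic greedy algorithm of @cite_6 , not the PPOMP variant of @cite_8 .

```python
import numpy as np

def omp(Phi, y, sparsity, tol=1e-9):
    """Textbook orthogonal matching pursuit: recover a `sparsity`-sparse x
    from y ~= Phi @ x by greedily picking the atom most correlated with the
    residual and re-solving least squares on the chosen support."""
    residual, support = y.copy(), []
    x = np.zeros(Phi.shape[1], dtype=complex)
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(Phi.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coeffs
    return x
```

With an off-grid path, the loop above still returns a sparse vector, but one whose support and coefficients only approximate the true channel, which is exactly the failure mode the works cited above address.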
{ "cite_N": [ "@cite_15", "@cite_3", "@cite_6", "@cite_8" ], "mid": [ "2810297269", "2739833327", "2127271355", "2848225903" ], "abstract": [ "In this letter, an angle domain off-grid channel estimation algorithm for the uplink millimeter wave (mmWave) massive multiple-input and multiple-output systems is proposed. By exploiting spatial sparse structure in mmWave channels, the proposed method is capable of identifying the angles and gains of the scatterer paths. Comparing the conventional channel estimation methods for mmWave systems, the proposed method achieves better performance in terms of mean square error. Numerical simulation results are provided to verify the superiority of the proposed algorithm.", "In millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems, channel estimation in the presence of sparse multipath fading boils down to two-dimensional (2D) direction-of-arrival (DOA) estimation followed by path gain estimation. To achieve super-resolution angle estimation at affordable complexity, this paper develops an efficient channel estimation approach by applying a truncated atomic norm minimization (T-ANM) technique, which is implemented via partial antenna activation during training-based channel estimation. This technique makes use of a key observation that the sparse scattering characteristics of mmWave MIMO channel gives rise to a low-rank two-level Toeplitz structure in the angular domain. Because of the low-rank property, only a subset of the transceiver antennas needs to be activated to save training resources. Meanwhile, the Toeplitz structure enables ANM-based gridless 2D DOA estimation via reduced-size semidefinite programming. Simulation results show that the proposed reduced-size method can achieve comparable spectral efficiency as the full-size benchmark method at much lower computational complexity and shorter sensing time.", "This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.", "The millimeter-wave (mmWave) communications is a promising technology for next-generation wireless networks with its available broad spectrum. Along with massive number of antennas employed at both end of the transceiver, the number of unknown channel coefficients become extremely large. Thanks to sparse nature of mmWave links, this paper proposes a parameter perturbation based sparse recovery technique for mmWave channel estimation. Recently, classical compressive sensing (CS) based sparse recovery techniques have been applied in this area. However, CS based reconstructions are highly effected by basis mismatch problems such as off-the-grid targets, or, equivalently, scattering points. The proposed iterative algorithm called parameter perturbed orthogonal matching pursuit (PPOMP) jointly solves for both the sparse signal, which is the unknown mmWave channel itself, and the basis mismatch due to off-the-grid problem. 
We verify through extensive numerical results that the proposed PPOMP algorithm achieves significantly better channel estimation performance compared to the state of the art sparse reconstruction techniques." ] }
1907.04427
2957141087
In this paper, we tackle channel estimation in millimeter-wave hybrid multiple-input multiple-output systems by considering off-grid effects. In particular, we assume that spatial parameters can take any value in the angular domain, and need not fall on predefined discretized angles. Instead of increasing the number of discretized points to combat off-grid effects, we use implicit Dirichlet kernel structure in the Fourier domain, which conventional compressed sensing methods do not use. We propose greedy low-complexity algorithms based on orthogonal matching pursuit (OMP); our core idea is to traverse the Dirichlet kernel peak using estimates of the discrete Fourier transform. We demonstrate the efficacy of our proposed algorithms compared to standard OMP reconstruction. Numerical results show that our proposed algorithms obtain smaller reconstruction errors when off-grid effects are accounted for.
Interestingly, standard CS methods based on sparsity alone fail to leverage the Dirichlet kernel structure in the Fourier domain. We exploit this structure to improve the channel estimation process. In particular, we propose low-complexity algorithms based on OMP @cite_6 , owing to its computational tractability. Our numerical results show that, by accounting for off-grid effects, our proposed algorithms obtain smaller channel reconstruction errors than standard OMP algorithms.
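The precise peak-traversal rule is a contribution of the paper and is not reproduced here; as a hedged illustration of the underlying idea, the sketch below refines a coarse DFT peak by locally searching the Dirichlet-kernel maximum within one bin. The single-tone signal and the fine-search resolution are assumptions made only for the demo.

```python
import numpy as np

def dirichlet_peak_refine(x, n_fine=50):
    """Coarse DFT peak plus a local fine search of the Dirichlet-kernel
    maximum -- a generic illustration of off-grid refinement, not the
    paper's exact algorithm."""
    N = len(x)
    X = np.fft.fft(x)
    k = int(np.argmax(np.abs(X)))                  # coarse on-grid estimate
    # Traverse candidate frequencies within one bin around the coarse peak
    # and keep the one whose complex exponential best matches the data.
    offsets = np.linspace(-0.5, 0.5, n_fine)
    n = np.arange(N)
    scores = [np.abs(np.exp(-2j * np.pi * (k + d) / N * n) @ x) for d in offsets]
    return (k + offsets[int(np.argmax(scores))]) / N   # refined frequency in [0, 1)

# Example: an off-grid tone at f = (10 + 0.37)/64 is recovered far more
# accurately than the coarse bin estimate 10/64.
N, f_true = 64, (10 + 0.37) / 64
x = np.exp(2j * np.pi * f_true * np.arange(N))
print(dirichlet_peak_refine(x), "vs coarse", 10 / 64)
```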
{ "cite_N": [ "@cite_6" ], "mid": [ "2127271355" ], "abstract": [ "This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems." ] }
1901.01028
2906683466
This paper offers three new, open-source, deep learning-based iris segmentation methods, and a methodology for using irregular segmentation masks in conventional Gabor-wavelet-based iris recognition. To train and validate the methods, we used a wide spectrum of iris images acquired by different teams and different sensors and offered publicly, including data taken from CASIA-Iris-Interval-v4, BioSec, ND-Iris-0405, UBIRIS, Warsaw-BioBase-Post-Mortem-Iris v2.0 (post-mortem iris images), and ND-TWINS-2009-2010 (iris images acquired from identical twins). This varied training data should increase the generalization capabilities of the proposed segmentation techniques. In database-disjoint training and testing, we show that deep learning-based segmentation outperforms the conventional (OSIRIS) segmentation in terms of Intersection over Union calculated between the obtained results and manually annotated ground truth. Interestingly, Gabor-based iris matching is not always better when deep learning-based segmentation is used, and is on par with the method employing Daugman-style segmentation.
The dominant approach to iris segmentation is certainly the one based on circular approximations of the inner and outer iris boundaries @cite_7 , with later extensions to more complex shapes approximated by Fourier series @cite_21 . In this paper, we focus on more recent, deep learning-based solutions.
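As a rough illustration of the circular-approximation approach, one common practical stand-in is OpenCV's Hough circle transform; the radius bounds and Hough parameters below are illustrative guesses, not settings from @cite_7 or any production system.

```python
import cv2
import numpy as np

def detect_iris_circles(gray):
    """Approximate pupil and iris boundaries with circles (Hough transform).
    `gray` is an 8-bit grayscale eye image; all radii bounds are rough guesses."""
    blurred = cv2.medianBlur(gray, 5)
    pupil = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                             param1=100, param2=30, minRadius=15, maxRadius=60)
    iris = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                            param1=100, param2=30, minRadius=60, maxRadius=140)
    # Each result is None or an array of (x, y, r) candidates; take the strongest.
    take = lambda c: None if c is None else np.uint16(np.around(c))[0, 0]
    return take(pupil), take(iris)
```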
{ "cite_N": [ "@cite_21", "@cite_7" ], "mid": [ "2167075312", "2102796633" ], "abstract": [ "This paper presents the following four advances in iris recognition: 1) more disciplined methods for detecting and faithfully modeling the iris inner and outer boundaries with active contours, leading to more flexible embedded coordinate systems; 2) Fourier-based methods for solving problems in iris trigonometry and projective geometry, allowing off-axis gaze to be handled by detecting it and ldquorotatingrdquo the eye into orthographic perspective; 3) statistical inference methods for detecting and excluding eyelashes; and 4) exploration of score normalizations, depending on the amount of iris data that is available in images and the required scale of database search. Statistical results are presented based on 200 billion iris cross-comparisons that were generated from 632 500 irises in the United Arab Emirates database to analyze the normalization issues raised in different regions of receiver operating characteristic curves.", "A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person's face is the detailed texture of each eye's iris. The visible texture of a person's iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most-significant bits comprise a 256-byte \"iris code\". Statistical decision theory generates identification decisions from Exclusive-OR comparisons of complete iris codes at the rate of 4000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical \"cross-over\" error rate of one in 131000 when a decision criterion is adopted that would equalize the false accept and false reject error rates. In the typical recognition case, given the mean observed degree of iris code agreement, the decision confidence levels correspond formally to a conditional false accept probability of one in about 10 sup 31 . >" ] }
1901.01028
2906683466
This paper offers three new, open-source, deep learning-based iris segmentation methods, and a methodology for using irregular segmentation masks in conventional Gabor-wavelet-based iris recognition. To train and validate the methods, we used a wide spectrum of iris images acquired by different teams and different sensors and offered publicly, including data taken from CASIA-Iris-Interval-v4, BioSec, ND-Iris-0405, UBIRIS, Warsaw-BioBase-Post-Mortem-Iris v2.0 (post-mortem iris images), and ND-TWINS-2009-2010 (iris images acquired from identical twins). This varied training data should increase the generalization capabilities of the proposed segmentation techniques. In database-disjoint training and testing, we show that deep learning-based segmentation outperforms the conventional (OSIRIS) segmentation in terms of Intersection over Union calculated between the obtained results and manually annotated ground truth. Interestingly, Gabor-based iris matching is not always better when deep learning-based segmentation is used, and is on par with the method employing Daugman-style segmentation.
Jalilian and Uhl @cite_18 proposed the first deep learning-based iris segmentation method known to us. They used several types of convolutional encoder-decoder networks trained on the ND-Iris-0405, IITD and CASIA-Iris-Ageing-v5 datasets with manually annotated ground-truth segmentation masks. The authors reported better performance for the CNN-based method when compared to conventional algorithms such as OSIRIS @cite_14 , WAHET @cite_10 , CAHT @cite_19 and IFPP @cite_23 . The paper does not mention whether the trained network is available to others.
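Segmentation quality in these comparisons is typically scored with Intersection over Union against a manually annotated mask; a minimal NumPy version of the metric:

```python
import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection over Union of two boolean segmentation masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:            # both masks empty: define IoU as perfect
        return 1.0
    return np.logical_and(pred, gt).sum() / union
```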
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_19", "@cite_23", "@cite_10" ], "mid": [ "2741839910", "1869924930", "2200208292", "121937170", "2118282683" ], "abstract": [ "As a considerable breakthrough in artificial intelligence, deep learning has gained great success in resolving key computer vision challenges. Accurate segmentation of the iris region in the eye image plays a vital role in efficient performance of iris recognition systems, as one of the most reliable systems used for biometric identification. In this chapter, as the first contribution, we consider the application of Fully Convolutional Encoder–Decoder Networks (FCEDNs) for iris segmentation. To this extent, we utilize three types of FCEDN architectures for segmentation of the iris in the images, obtained from five different datasets, acquired under different scenarios. Subsequently, we conduct performance analysis, evaluation, and comparison of these three networks for iris segmentation. Furthermore, and as the second contribution, in order to subsidize the true evaluation of the proposed networks, we apply a selection of conventional (non-CNN) iris segmentation algorithms on the same datasets, and similarly evaluate their performances. The results then get compared against those obtained from the FCEDNs. Based on the results, the proposed networks achieve superior performance over all other algorithms, on all datasets.", "Abstract In this paper, we present the evolution of the open source iris recognition system OSIRIS through its more relevant versions: OSIRISV2, OSIRISV4, and OSIRISV4.1. We developed OSIRIS in the framework of BioSecure Association as an open source software aiming at providing a reference for the scientific community. The software is mainly composed of four key modules, namely segmentation, normalization, feature extraction and template matching, which are described in detail for each version. A novel approach for iris normalization, based on a non geometric parameterization of contours is proposed in the latest version: OSIRISV4.1 and is detailed in particular here. Improvements in performance through the different versions of OSIRIS are reported on two public databases commonly used, ICE2005 and CASIA-IrisV4-Thousand. We note the high verification rates obtained by the last version. For this reason, OSIRISV4.1 can be proposed as a baseline system for comparison to other algorithms, this way supplying a helpful research tool for the iris recognition community.", "Traditional iris processing following Daugman’s approach [116] extracts binary features after mapping the textural area between inner pupillary and outer limbic boundary into a doubly dimensionless representation.", "This paper presents a multi-stage iris segmentation framework for the localization of pupillary and limbic boundaries of human eyes. Instead of applying time-consuming exhaustive search approaches, like traditional circular Hough Transform or Daugman's integrodifferential operator, an iterative approach is used. By decoupling coarse center detection and fine boundary localization, faster processing and modular design can be achieved. This alleviates more sophisticated quality control and feedback during the segmentation process. By avoiding database-specific optimizations, this work aims at supporting different sensors and light spectra, i.e. Visible Wavelength and Near Infrared, without parameter tuning. 
The system is evaluated by using multiple open iris databases and it is compared to existing classical approaches.", "Efficient and robust segmentation of less intrusively or non-cooperatively captured iris images is still a challenging task in iris biometrics. This paper proposes a novel two-stage algorithm for the localization and mapping of iris texture in images of the human eye into Daugman's doubly dimensionless polar coordinates. Motivated by the growing demand for real-time capable solutions, coarse center detection and fine boundary localization usually combined in traditional approaches are decoupled. Therefore, search space at each stage is reduced without having to stick to simpler models. Another motivation of this work is independence of sensors. A comparison of reference software on different datasets highlights the problem of database-specific optimizations in existing solutions. This paper instead proposes the application of Gaussian weighting functions to incorporate model-specific prior knowledge. An adaptive Hough transform is applied at multiple resolutions to estimate the approximate position of the iris center. Subsequent polar transform detects the first elliptic limbic or pupillary boundary, and an ellipsopolar transform finds the second boundary based on the outcome of the first. This way, both iris images with clear limbic (typical for visible-wavelength) and with clear pupillary boundaries (typical for near infrared) can be processed in a uniform manner." ] }
1901.01028
2906683466
This paper offers three new, open-source, deep learning-based iris segmentation methods, and a methodology for using irregular segmentation masks in conventional Gabor-wavelet-based iris recognition. To train and validate the methods, we used a wide spectrum of iris images acquired by different teams and different sensors and offered publicly, including data taken from CASIA-Iris-Interval-v4, BioSec, ND-Iris-0405, UBIRIS, Warsaw-BioBase-Post-Mortem-Iris v2.0 (post-mortem iris images), and ND-TWINS-2009-2010 (iris images acquired from identical twins). This varied training data should increase the generalization capabilities of the proposed segmentation techniques. In database-disjoint training and testing, we show that deep learning-based segmentation outperforms the conventional (OSIRIS) segmentation in terms of Intersection over Union calculated between the obtained results and manually annotated ground truth. Interestingly, Gabor-based iris matching is not always better when deep learning-based segmentation is used, and is on par with the method employing Daugman-style segmentation.
Arsalan et al. @cite_1 adapted the VGG-Face network to the segmentation of visible-light iris images acquired for the NICE-II benchmark and the MICHE dataset. The proposed version of VGG has two output neurons, so segmentation is expressed as a binary classification problem (iris/non-iris) defined for local image patches. This certainly results in significant processing times for each iris image. The effort was later extended to NIR iris images acquired for the CASIA-Iris-Interval-v4 and IITD datasets @cite_28 . The paper does not provide any information about an open-source solution offered to other researchers.
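To see why per-patch classification is slow, consider the schematic inference loop below; `patch_classifier`, the patch size, and the stride are hypothetical placeholders for the adapted VGG of @cite_1 .

```python
import numpy as np

def segment_by_patches(image, patch_classifier, patch=32, stride=4):
    """Label pixels as iris/non-iris by classifying the patch centered on
    each location -- one network forward pass per location, hence the
    'significant processing times' noted above."""
    h, w = image.shape[:2]
    half = patch // 2
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(half, h - half, stride):
        for x in range(half, w - half, stride):
            crop = image[y - half:y + half, x - half:x + half]
            mask[y:y + stride, x:x + stride] = patch_classifier(crop)  # 0 or 1
    return mask

# Toy usage with a trivial stand-in classifier:
demo = segment_by_patches(np.zeros((128, 128), np.uint8),
                          lambda crop: int(crop.mean() > 0), patch=32, stride=8)
```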
{ "cite_N": [ "@cite_28", "@cite_1" ], "mid": [ "2802806477", "2765685479" ], "abstract": [ "The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Similarly, iris recognition is now much needed in unconstraint scenarios with accuracy. These environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effect, and off-angles. The prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) light, iris recognition in visible light environment makes the iris segmentation challenging with the noise of visible light. Deep learning with convolutional neural networks (CNN) has brought a considerable breakthrough in various applications. To address the iris segmentation issues in challenging situations by visible light and near-infrared light camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even with inferior-quality images by using better information gradient flow between the dense blocks. In the experiments conducted, five datasets of visible light and NIR environments were used. For visible light environment, noisy iris challenge evaluation part-II (NICE-II selected from UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used. For NIR environment, the institute of automation, Chinese academy of sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms for all five datasets.", "Existing iris recognition systems are heavily dependent on specific conditions, such as the distance of image acquisition and the stop-and-stare environment, which require significant user cooperation. In environments where user cooperation is not guaranteed, prevailing segmentation schemes of the iris region are confronted with many problems, such as heavy occlusion of eyelashes, invalid off-axis rotations, motion blurs, and non-regular reflections in the eye area. In addition, iris recognition based on visible light environment has been investigated to avoid the use of additional near-infrared (NIR) light camera and NIR illuminator, which increased the difficulty of segmenting the iris region accurately owing to the environmental noise of visible light. To address these issues; this study proposes a two-stage iris segmentation scheme based on convolutional neural network (CNN); which is capable of accurate iris segmentation in severely noisy environments of iris recognition by visible light camera sensor. In the experiment; the noisy iris challenge evaluation part-II (NICE-II) training database (selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE) dataset were used. Experimental results showed that our method outperformed the existing segmentation methods." ] }
1901.01028
2906683466
This paper offers three new, open-source, deep learning-based iris segmentation methods, and a methodology for using irregular segmentation masks in conventional Gabor-wavelet-based iris recognition. To train and validate the methods, we used a wide spectrum of iris images acquired by different teams and different sensors and offered publicly, including data taken from CASIA-Iris-Interval-v4, BioSec, ND-Iris-0405, UBIRIS, Warsaw-BioBase-Post-Mortem-Iris v2.0 (post-mortem iris images), and ND-TWINS-2009-2010 (iris images acquired from identical twins). This varied training data should increase the generalization capabilities of the proposed segmentation techniques. In database-disjoint training and testing, we show that deep learning-based segmentation outperforms the conventional (OSIRIS) segmentation in terms of Intersection over Union calculated between the obtained results and manually annotated ground truth. Interestingly, Gabor-based iris matching is not always better when deep learning-based segmentation is used, and is on par with the method employing Daugman-style segmentation.
Lozej et al. @cite_13 re-trained the U-Net architecture @cite_9 with different hyperparameter settings on the CASIA benchmark and made the resulting model publicly available to the research community.
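For orientation, a heavily scaled-down PyTorch sketch of the U-Net idea @cite_9 — an encoder, a bottleneck, and a decoder joined by a skip connection — is shown below; the depth and channel counts are placeholders, not the hyperparameters tuned in @cite_13 .

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    """One-level U-Net: encoder, bottleneck, decoder with a skip connection."""
    def __init__(self):
        super().__init__()
        self.enc = block(1, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)                  # 16 skip + 16 upsampled channels
        self.head = nn.Conv2d(16, 1, 1)           # per-pixel iris logit

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        return self.head(self.dec(torch.cat([e, u], dim=1)))

logits = TinyUNet()(torch.randn(1, 1, 64, 64))    # -> shape (1, 1, 64, 64)
```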
{ "cite_N": [ "@cite_9", "@cite_13" ], "mid": [ "2952232639", "2891396062" ], "abstract": [ "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL .", "Iris segmentation is an important research topic that received significant attention from the research community over the years. Traditional iris segmentation techniques have typically been focused on hand-crafted procedures that, nonetheless, achieved remarkable segmentation performance even with images captured in difficult settings. With the success of deep-learning models, researchers are increasingly looking towards convolutional neural networks (CNNs) to further improve on the accuracy of existing iris segmentation techniques and several CNN-based techniques have already been presented recently in the literature. In this paper we also consider deep-learning models for iris segmentation and present an iris segmentation approach based on the popular U-Net architecture. Our model is trainable end-to-end and, hence, avoids the need for hand designing the segmentation procedure. We evaluate the model on the CASIA dataset and report encouraging results in comparison to existing techniques used in this area." ] }
1901.01028
2906683466
This paper offers three new, open-source, deep learning-based iris segmentation methods, and a methodology for using irregular segmentation masks in conventional Gabor-wavelet-based iris recognition. To train and validate the methods, we used a wide spectrum of iris images acquired by different teams and different sensors and offered publicly, including data taken from CASIA-Iris-Interval-v4, BioSec, ND-Iris-0405, UBIRIS, Warsaw-BioBase-Post-Mortem-Iris v2.0 (post-mortem iris images), and ND-TWINS-2009-2010 (iris images acquired from identical twins). This varied training data should increase the generalization capabilities of the proposed segmentation techniques. In database-disjoint training and testing, we show that deep learning-based segmentation outperforms the conventional (OSIRIS) segmentation in terms of Intersection over Union calculated between the obtained results and manually annotated ground truth. Interestingly, Gabor-based iris matching is not always better when deep learning-based segmentation is used, and is on par with the method employing Daugman-style segmentation.
Bazrafkan et al. @cite_11 proposed a few newly designed convolutional neural networks working in parallel for end-to-end iris segmentation. Initially trained on NIR images from BATH800 and CASIA-Thousand-v4, these networks were fine-tuned on visible-light images collected for the UBIRIS and MobBio benchmarks. The paper additionally proposed various data augmentation techniques specific to training neural networks for iris segmentation. There is no information in the paper about the availability of the software or network weights.
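A key constraint in segmentation-specific augmentation is that geometric transforms must be applied identically to the image and its mask, while photometric ones touch the image only; the sketch below illustrates this with generic flips, shifts and contrast jitter, not the specific augmentations of @cite_11 .

```python
import numpy as np

def augment(image, mask, rng):
    """Jointly augment an eye image and its binary iris mask."""
    if rng.random() < 0.5:                       # horizontal flip: both
        image, mask = image[:, ::-1], mask[:, ::-1]
    shift = rng.integers(-10, 11)                # small horizontal shift: both
    image, mask = np.roll(image, shift, axis=1), np.roll(mask, shift, axis=1)
    gain = rng.uniform(0.8, 1.2)                 # contrast jitter: image only
    image = np.clip(image.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    return image, mask

img_aug, msk_aug = augment(np.zeros((120, 160), np.uint8),
                           np.zeros((120, 160), np.uint8),
                           np.random.default_rng(0))
```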
{ "cite_N": [ "@cite_11" ], "mid": [ "2774581827" ], "abstract": [ "With the increasing imaging and processing capabilities of today's mobile devices, user authentication using iris biometrics has become feasible. However, as the acquisition conditions become more unconstrained and as image quality is typically lower than dedicated iris acquisition systems, the accurate segmentation of iris regions is crucial for these devices. In this work, an end to end Fully Convolutional Deep Neural Network (FCDNN) design is proposed to perform the iris segmentation task for lower-quality iris images. The network design process is explained in detail, and the resulting network is trained and tuned using several large public iris datasets. A set of methods to generate and augment suitable lower quality iris images from the high-quality public databases are provided. The network is trained on Near InfraRed (NIR) images initially and later tuned on additional datasets derived from visible images. Comprehensive inter-database comparisons are provided together with results from a selection of experiments detailing the effects of different tunings of the network. Finally, the proposed model is compared with SegNet-basic, and a near-optimal tuning of the network is compared to a selection of other state-of-art iris segmentation algorithms. The results show very promising performance from the optimized Deep Neural Networks design when compared with state-of-art techniques applied to the same lower quality datasets." ] }
1901.01028
2906683466
This paper offers three new, open-source, deep learning-based iris segmentation methods, and a methodology for using irregular segmentation masks in conventional Gabor-wavelet-based iris recognition. To train and validate the methods, we used a wide spectrum of iris images acquired by different teams and different sensors and offered publicly, including data taken from CASIA-Iris-Interval-v4, BioSec, ND-Iris-0405, UBIRIS, Warsaw-BioBase-Post-Mortem-Iris v2.0 (post-mortem iris images), and ND-TWINS-2009-2010 (iris images acquired from identical twins). This varied training data should increase the generalization capabilities of the proposed segmentation techniques. In database-disjoint training and testing, we show that deep learning-based segmentation outperforms the conventional (OSIRIS) segmentation in terms of Intersection over Union calculated between the obtained results and manually annotated ground truth. Interestingly, Gabor-based iris matching is not always better when deep learning-based segmentation is used, and is on par with the method employing Daugman-style segmentation.
Bezerra et al. @cite_0 were the first to apply Generative Adversarial Networks @cite_6 to iris segmentation, in addition to the previously used fully convolutional neural networks. These solutions were evaluated on NIR images from the BioSec, CASIA-Iris-Interval-v3, CASIA-Iris-Thousand-v4 and IITD datasets, as well as on visible-light images taken from the NICE.I, CrEye-Iris and MICHE-I benchmarks. The authors offer 2,431 manually labeled images from the CASIA-Thousand, CrEye-Iris and MICHE-I datasets; however, neither the implemented methods nor the network weights are offered with the paper.
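Schematically, GAN-based segmentation trains the segmenter against a discriminator that judges whether an (image, mask) pair looks real; the PyTorch wiring below shows this loss structure with toy networks standing in for the actual architectures of @cite_0 , and the 0.1 adversarial weight is an arbitrary assumption.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the actual segmenter and discriminator architectures.
seg = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 3, padding=1))
disc = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
bce = nn.BCEWithLogitsLoss()

img = torch.randn(4, 1, 64, 64)
gt = torch.randint(0, 2, (4, 1, 64, 64)).float()
logits = seg(img)
pred = torch.sigmoid(logits)

# Discriminator: score (image, mask) pairs as real (ground truth) or fake.
d_real = disc(torch.cat([img, gt], dim=1))
d_fake = disc(torch.cat([img, pred.detach()], dim=1))
d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))

# Segmenter: supervised pixel-wise loss plus an adversarial term that
# rewards masks the discriminator mistakes for real ones.
g_fake = disc(torch.cat([img, pred], dim=1))
g_loss = bce(logits, gt) + 0.1 * bce(g_fake, torch.ones_like(g_fake))
```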
{ "cite_N": [ "@cite_0", "@cite_6" ], "mid": [ "2891787817", "2099471712" ], "abstract": [ "The iris can be considered as one of the most important biometric traits due to its high degree of uniqueness. Iris-based biometrics applications depend mainly on the iris segmentation whose suitability is not robust for different environments such as near-infrared (NIR) and visible (VIS) ones. In this paper, two approaches for robust iris segmentation based on Fully Convolutional Networks (FCNs) and Generative Adversarial Networks (GANs) are described. Similar to a common convolutional network, but without the fully connected layers (i.e., the classification layers), an FCN employs at its end combination of pooling layers from different convolutional layers. Based on the game theory, a GAN is designed as two networks competing with each other to generate the best segmentation. The proposed segmentation networks achieved promising results in all evaluated datasets (i.e., BioSec, CasiaI3, CasiaT4, IITD-1) of NIR images and (NICE.I, CrEye-Iris and MICHE-I) of VIS images in both non-cooperative and cooperative domains, outperforming the baselines techniques which are the best ones found so far in the literature, i.e., a new state of the art for these datasets. Furthermore, we manually labeled 2,431 images from CasiaT4, CrEye-Iris and MICHE-I datasets, making the masks available for research purposes.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
1901.01028
2906683466
This paper offers three new, open-source, deep learning-based iris segmentation methods, and a methodology for using irregular segmentation masks in conventional Gabor-wavelet-based iris recognition. To train and validate the methods, we used a wide spectrum of iris images acquired by different teams and different sensors and offered publicly, including data taken from CASIA-Iris-Interval-v4, BioSec, ND-Iris-0405, UBIRIS, Warsaw-BioBase-Post-Mortem-Iris v2.0 (post-mortem iris images), and ND-TWINS-2009-2010 (iris images acquired from identical twins). This varied training data should increase the generalization capabilities of the proposed segmentation techniques. In database-disjoint training and testing, we show that deep learning-based segmentation outperforms the conventional (OSIRIS) segmentation in terms of Intersection over Union calculated between the obtained results and manually annotated ground truth. Interestingly, Gabor-based iris matching is not always better when deep learning-based segmentation is used, and is on par with the method employing Daugman-style segmentation.
Summing up, the only previous solution that was open-sourced is the one offered by Lozej et al. @cite_13 , and only in the form of network weights. Moreover, no previous work assessed the resulting deep learning-based segmentation from the matching perspective, relying instead only on comparison to manually annotated segmentation.
{ "cite_N": [ "@cite_13" ], "mid": [ "2891396062" ], "abstract": [ "Iris segmentation is an important research topic that received significant attention from the research community over the years. Traditional iris segmentation techniques have typically been focused on hand-crafted procedures that, nonetheless, achieved remarkable segmentation performance even with images captured in difficult settings. With the success of deep-learning models, researchers are increasingly looking towards convolutional neural networks (CNNs) to further improve on the accuracy of existing iris segmentation techniques and several CNN-based techniques have already been presented recently in the literature. In this paper we also consider deep-learning models for iris segmentation and present an iris segmentation approach based on the popular U-Net architecture. Our model is trainable end-to-end and, hence, avoids the need for hand designing the segmentation procedure. We evaluate the model on the CASIA dataset and report encouraging results in comparison to existing techniques used in this area." ] }
1901.01015
2907285302
In this paper we tackle the problem of vehicle re-identification in a camera network utilizing triplet embeddings. Re-identification is the problem of matching appearances of objects across different cameras. With the proliferation of surveillance cameras enabling smart and safer cities, there is an ever-increasing need to re-identify vehicles across cameras. Typical challenges arising in smart city scenarios include variations of viewpoints, illumination and self occlusions. Most successful approaches to re-identification involve (deep) learning an embedding space such that vehicles of the same identity are projected closer to one another than vehicles of different identities. Popular loss functions for learning an embedding (space) include the contrastive and triplet losses. In this paper we provide an extensive evaluation of these losses applied to vehicle re-identification and demonstrate that following the best practices for learning embeddings outperforms most of the previous approaches proposed in the vehicle re-identification literature. Compared to most existing state-of-the-art approaches, our approach is simpler and more straightforward to train, utilizing only identity-level annotations, along with one of the smallest published embedding dimensions for efficient inference. Furthermore, in this work we introduce a formal evaluation of a triplet sampling variant (batch sample) into the re-identification literature.
@cite_36 proposed one of the first approaches to learning visual relationships using a CNN: the Siamese CNN of @cite_36 computes an embedding space such that similar examples have similar embeddings and vice versa. @cite_21 uses a contrastive loss on a Siamese CNN to learn an embedding for face verification. One of the most prominent recent works on learning face embeddings, @cite_30 , uses a triplet loss to train a CNN for face identification. While the triplet loss considers three samples for computing a loss measure, the contrastive loss requires only two. The contrastive loss is thus computationally more efficient; however, several approaches @cite_48 @cite_9 @cite_6 @cite_38 @cite_0 @cite_35 have reported state-of-the-art performance using the triplet loss. This superiority of the triplet loss is attributed to the additional context provided by the third sample. A later section of this paper elaborates on these losses.
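For reference, both losses written out in PyTorch; the margin values are illustrative defaults, not those used in the cited works.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(a, b, same, margin=1.0):
    """Two samples: pull together if `same` is 1, else push beyond the margin."""
    d = F.pairwise_distance(a, b)
    return torch.mean(same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2))

def triplet_loss(anchor, pos, neg, margin=0.2):
    """Three samples: anchor-positive must be closer than anchor-negative by margin."""
    d_ap = F.pairwise_distance(anchor, pos)
    d_an = F.pairwise_distance(anchor, neg)
    return F.relu(d_ap - d_an + margin).mean()

e = lambda: torch.randn(8, 128)                  # toy batch of 128-D embeddings
print(contrastive_loss(e(), e(), torch.ones(8)), triplet_loss(e(), e(), e()))
```

The extra sample in the triplet is what provides the "additional context" mentioned above: the loss compares a positive and a negative distance relative to the same anchor instead of judging each pair in isolation.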
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_35", "@cite_36", "@cite_48", "@cite_9", "@cite_21", "@cite_6", "@cite_0" ], "mid": [ "2096733369", "2789546350", "2963775347", "2171590421", "2689134854", "2788212895", "2062677035", "2794497862", "2598634450" ], "abstract": [ "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.", "The widespread use of surveillance cameras toward smart and safe cities poses the critical but challenging problem of vehicle reidentification (Re-ID). The state-of-the-art research work performed vehicle Re-ID relying on deep metric learning with a triplet network. However, most existing methods basically ignore the impact of intraclass variance-incorporated embedding on the performance of vehicle reidentification, in which robust fine-grained features for large-scale vehicle Re-ID have not been fully studied. In this paper, we propose a deep metric learning method, group-sensitive-triplet embedding (GS-TRE), to recognize and retrieve vehicles, in which intraclass variance is elegantly modeled by incorporating an intermediate representation “group” between samples and each individual vehicle in the triplet network learning. To capture the intraclass variance attributes of each individual vehicle, we utilize an online grouping method to partition samples within each vehicle ID into a few groups, and build up the triplet samples at multiple granularities across different vehicle IDs as well as different groups within the same vehicle ID to learn fine-grained features. In particular, we construct a large-scale vehicle database “PKU-Vehicle,” consisting of 10 million vehicle images captured by different surveillance cameras in several cities, to evaluate the vehicle Re-ID performance in real-world video surveillance applications. Extensive experiments over benchmark datasets VehicleID, VeRI, and CompCar have shown that the proposed GS-TRE significantly outperforms the state-of-the-art approaches for vehicle Re-ID.", "Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by (2014), tailor made for learning a ranking for image information retrieval. Here we demonstrate using various datasets that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss future possible usage as a framework for unsupervised learning.", "This paper describes the development of an algorithm for verification of signatures written on a touch-sensitive pad. The signature verification algorithm is based on an artificial neural network. The novel network presented here, called a “Siamese” time delay neural network, consists of two identical networks joined at their output. 
During training the network learns to measure the similarity between pairs of signatures. When used for verification, only one half of the Siamese network is evaluated. The output of this half network is the feature vector for the input signature. Verification consists of comparing this feature vector with a stored feature vector for the signer. Signatures closer than a chosen threshold to this stored representation are accepted, all other signatures are rejected as forgeries. System performance is illustrated with experiments performed in the laboratory.", "Deep embeddings answer one simple question: How similar are two images? Learning these embeddings is the bedrock of verification, zero-shot learning, and visual search. The most prominent approaches optimize a deep convolutional network with a suitable loss function, such as contrastive loss or triplet loss. While a rich line of work focuses solely on the loss functions, we show in this paper that selecting training examples plays an equally important role. We propose distance weighted sampling, which selects more informative and stable examples than traditional approaches. In addition, we show that a simple margin based loss is sufficient to outperform all other loss functions. We evaluate our approach on the Stanford Online Products, CAR196, and the CUB200-2011 datasets for image retrieval and clustering, and on the LFW dataset for face verification. Our method achieves state-of-the-art performance on all of them.", "Vehicle re-identification (re-ID) is to identify the same vehicle across different cameras. It’s a significant but challenging topic, which has received little attention due to the complex intra-class and inter-class variation of vehicle images and the lack of large-scale vehicle re-ID dataset. Previous methods focus on pulling images from different vehicles apart but neglect the discrimination between vehicles from different vehicle models, which is actually quite important to obtain a correct ranking order for vehicle re-ID. In this paper, we learn a structured feature embedding for vehicle re-ID with a novel coarse-to-fine ranking loss to pull images of the same vehicle as close as possible and achieve discrimination between images from different vehicles as well as vehicles from different vehicle models. In the learnt feature space, both intra-class compactness and inter-class distinction are well guaranteed and the Euclidean distance between features directly reflects the semantic similarity of vehicle images. Furthermore, we build so far the largest vehicle re-ID dataset “Vehicle-1M”1 which involves nearly 1 million images captured in various surveillance scenarios. Experimental results on “Vehicle-1M”and “VehicleID” demonstrate the superiority of our proposed approach.", "This paper proposes a novel image representation which can properly handle both background and illumination variations. It is therefore adapted to the person face reidentification tasks, avoiding the use of any additional pre-processing steps such as foreground-background separation or face and body part segmentation. This novel representation relies on the combination of Biologically Inspired Features (BIF) and covariance descriptors used to compute the similarity of the BIF features at neighboring scales. Hence, we will refer to it as the BiCov representation. 
To show the effectiveness of BiCov, this paper conducts experiments on two person re-identification tasks (VIPeR and ETHZ) and one face verification task (LFW), on which it improves the current state-of-the-art performance.", "Multi-Target Multi-Camera Tracking (MTMCT) tracks many people through video taken from several cameras. Person Re-Identification (Re-ID) retrieves from a gallery images of people similar to a person query image. We learn good features for both MTMCT and Re-ID with a convolutional neural network. Our contributions include an adaptive weighted triplet loss for training and a new technique for hard-identity mining. Our method outperforms the state of the art both on the DukeMTMC benchmarks for tracking, and on the Market-1501 and DukeMTMC-ReID benchmarks for Re-ID. We examine the correlation between good Re-ID and good MTMCT scores, and perform ablation studies to elucidate the contributions of the main components of our system. Code is available.", "In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning. The person re-identification subfield is no exception to this. Unfortunately, a prevailing belief in the community seems to be that the triplet loss is inferior to using surrogate losses (classification, verification) followed by a separate metric learning step. We show that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms most other published methods by a large margin." ] }
1901.01015
2907285302
In this paper we tackle the problem of vehicle re-identification in a camera network utilizing triplet embeddings. Re-identification is the problem of matching appearances of objects across different cameras. With the proliferation of surveillance cameras enabling smart and safer cities, there is an ever-increasing need to re-identify vehicles across cameras. Typical challenges arising in smart city scenarios include variations of viewpoints, illumination and self occlusions. Most successful approaches to re-identification involve (deep) learning an embedding space such that vehicles of the same identity are projected closer to one another than vehicles of different identities. Popular loss functions for learning an embedding (space) include the contrastive and triplet losses. In this paper we provide an extensive evaluation of these losses applied to vehicle re-identification and demonstrate that following the best practices for learning embeddings outperforms most of the previous approaches proposed in the vehicle re-identification literature. Compared to most existing state-of-the-art approaches, our approach is simpler and more straightforward to train, utilizing only identity-level annotations, along with one of the smallest published embedding dimensions for efficient inference. Furthermore, in this work we introduce a formal evaluation of a triplet sampling variant (batch sample) into the re-identification literature.
Another method for obtaining an embedding for an object is to use a traditional softmax layer @cite_40 @cite_13 , wherein a fully-connected (embedding) layer is added prior to the softmax-loss layer. Each identity is considered a separate category, so the number of categories equals the number of identities in the training set. Once the network is trained with a classification loss (cross-entropy), the classification layer is stripped off and an embedding is obtained from the new final layer of the network. @cite_13 proposed a similar approach to learning a vehicle embedding, based on training a network for a vehicle-model classification task. Since the network is not directly trained with an embedding or metric-learning loss, the performance of such a network is usually poor compared to networks incorporating an embedding loss. Cross-entropy loss ensures separability of features, but the features may not be discriminative enough to separate unseen identities. Furthermore, learning becomes computationally prohibitive for datasets of @math identities. Some recent works @cite_22 @cite_41 @cite_3 unify classification loss with metric learning.
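The strip-the-classifier recipe looks as follows in PyTorch; the ResNet-18 backbone and the identity count are illustrative assumptions, not the architectures used in the cited works.

```python
import torch
import torch.nn as nn
from torchvision import models

num_ids = 1000                                    # identities in the training set (assumed)
net = models.resnet18(weights=None)               # older torchvision: pretrained=False
net.fc = nn.Linear(net.fc.in_features, num_ids)   # classification head over identities
# ... train `net` with cross-entropy on identity labels ...

net.fc = nn.Identity()                            # strip the classification layer
net.eval()
with torch.no_grad():
    embedding = net(torch.randn(1, 3, 224, 224))  # 512-D descriptor used for matching
```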
{ "cite_N": [ "@cite_22", "@cite_41", "@cite_3", "@cite_40", "@cite_13" ], "mid": [ "2800513603", "2962887033", "2339172597", "2342611082", "" ], "abstract": [ "Metric learning aims to construct an embedding where two extracted features corresponding to the same identity are likely to be closer than features from different identities. This paper presents a method for learning such a feature space where the cosine similarity is effectively optimized through a simple re-parametrization of the conventional softmax classification regime. At test time, the final classification layer can be stripped from the network to facilitate nearest neighbor queries on unseen individuals using the cosine similarity metric. This approach presents a simple alternative to direct metric learning objectives such as siamese networks that have required sophisticated pair or triplet sampling strategies in the past. The method is evaluated on two large-scale pedestrian re-identification datasets where competitive results are achieved overall. In particular, we achieve better generalization on the test set compared to a network trained with triplet loss.", "Abstract: Distance metric learning (DML) approaches learn a transformation to a representation space where distance is in correspondence with a predefined notion of similarity. While such models offer a number of compelling benefits, it has been difficult for these to compete with modern classification algorithms in performance and even in feature extraction. In this work, we propose a novel approach explicitly designed to address a number of subtle yet important issues which have stymied earlier DML algorithms. It maintains an explicit model of the distributions of the different classes in representation space. It then employs this knowledge to adaptively assess similarity, and achieve local discrimination by penalizing class distribution overlap. We demonstrate the effectiveness of this idea on several tasks. Our approach achieves state-of-the-art classification results on a number of fine-grained visual recognition datasets, surpassing the standard softmax classifier and outperforming triplet loss by a relative margin of 30-40 . In terms of computational performance, it alleviates training inefficiencies in the traditional triplet loss, reaching the same error in 5-30 times fewer iterations. Beyond classification, we further validate the saliency of the learnt representations via their attribute concentration and hierarchy recovery properties, achieving 10-25 relative gains on the softmax classifier and 25-50 on triplet loss in these tasks.", "Learning deeper convolutional neural networks has become a tendency in recent years. However, many empirical evidences suggest that performance improvement cannot be attained by simply stacking more layers. In this paper, we consider the issue from an information theoretical perspective, and propose a novel method Relay Backpropagation, which encourages the propagation of effective information through the network in training stage. By virtue of the method, we achieved the first place in ILSVRC 2015 Scene Classification Challenge. 
Extensive experiments on two large scale challenging datasets demonstrate the effectiveness of our method is not restricted to a specific dataset or network architecture.", "Learning generic and robust feature representations with data from multiple domains for the same problem is of great value, especially for the problems that have multiple datasets but none of them are large enough to provide abundant data variations. In this work, we present a pipeline for learning deep feature representations from multiple domains with Convolutional Neural Networks (CNNs). When training a CNN with data from all the domains, some neurons learn representations shared across several domains, while some others are effective only for a specific one. Based on this important observation, we propose a Domain Guided Dropout algorithm to improve the feature learning procedure. Experiments show the effectiveness of our pipeline and the proposed algorithm. Our methods on the person re-identification problem outperform stateof-the-art methods on multiple datasets by large margins.", "" ] }
1901.01015
2907285302
In this paper we tackle the problem of vehicle re-identification in a camera network utilizing triplet embeddings. Re-identification is the problem of matching appearances of objects across different cameras. With the proliferation of surveillance cameras enabling smart and safer cities, there is an ever-increasing need to re-identify vehicles across cameras. Typical challenges arising in smart city scenarios include variations of viewpoints, illumination and self occlusions. Most successful approaches for re-identification involve (deep) learning an embedding space such that the vehicles of same identities are projected closer to one another, compared to the vehicles representing different identities. Popular loss functions for learning an embedding (space) include contrastive or triplet loss. In this paper we provide an extensive evaluation of these losses applied to vehicle re-identification and demonstrate that using the best practices for learning embeddings outperform most of the previous approaches proposed in the vehicle re-identification literature. Compared to most existing state-of-the-art approaches, our approach is simpler and more straightforward for training utilizing only identity-level annotations, along with one of the smallest published embedding dimensions for efficient inference. Furthermore in this work we introduce a formal evaluation of a triplet sampling variant (batch sample) into the re-identification literature.
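For concreteness, the following is a minimal PyTorch sketch of batch-hard triplet mining, one common within-batch sampling strategy for triplet loss; the margin, dimensions, and exact mining rule are illustrative assumptions, and the batch-sample variant evaluated in the paper may weight positives and negatives differently.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(emb, labels, margin=0.2):
    """For each anchor, mine the hardest (farthest) positive and the
    hardest (closest) negative inside the batch, then apply the hinge."""
    d = torch.cdist(emb, emb)                         # pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos_mask = same.float() - torch.eye(len(labels))  # positives, minus self
    hardest_pos = (d * pos_mask).max(dim=1).values
    # Push same-identity entries (and self) out of the min with a large offset.
    hardest_neg = (d + same.float() * 1e9).min(dim=1).values
    return F.relu(hardest_pos - hardest_neg + margin).mean()

emb = F.normalize(torch.randn(16, 128), dim=1)        # toy embeddings
ids = torch.arange(4).repeat_interleave(4)            # 4 identities x 4 images
print(batch_hard_triplet_loss(emb, ids))
```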
: Fine-grained vehicle classification is a closely related problem to vehicle re-identification. Notable works for vehicle classification are @cite_32 @cite_23 @cite_44 @cite_10 @cite_24 @cite_50 . The general task is to predict the vehicle model, e.g., BMW-i3-2016 or Toyota-Camry-1996. Vehicle re-identification is a finer-grained problem than vehicle-model classification: a re-identification approach must extract the visual differences between two vehicles belonging to the same model category. These differences can include subtle cosmetic and color variations, making the problem more difficult. Furthermore, a re-identification method is expected to work without any knowledge of all possible vehicle models in a city or geographical entity.
{ "cite_N": [ "@cite_32", "@cite_44", "@cite_24", "@cite_23", "@cite_50", "@cite_10" ], "mid": [ "2294126139", "2605117450", "2475242006", "2028563077", "1958236864", "196211074" ], "abstract": [ "", "Fine-grained car recognition aims to recognize the category information of a car, such as car make, car model, or even the year of manufacture. A number of recent studies have shown that a deep convolutional neural network (DCNN) trained on a large-scale data set can achieve impressive results at a range of generic object classification tasks. In this paper, we propose a spatially weighted pooling (SWP) strategy, which considerably improves the robustness and effectiveness of the feature representation of most dominant DCNNs. More specifically, the SWP is a novel pooling layer, which contains a predefined number of spatially weighted masks or pooling channels. The SWP pools the extracted features of DCNNs with the guidance of its learnt masks, which measures the importance of the spatial units in terms of discriminative power. As the existing methods that apply uniform grid pooling on the convolutional feature maps of DCNNs, the proposed method can extract the convolutional features and generate the pooling channels from a single DCNN. Thus minimal modification is needed in terms of implementation. Moreover, the parameters of the SWP layer can be learned in the end-to-end training process of the DCNN. By applying our method to several fine-grained car recognition data sets, we demonstrate that the proposed method can achieve better performance than recent approaches in the literature. We advance the state-of-the-art results by improving the accuracy from 92.6 to 93.1 on the Stanford Cars-196 data set and 91.2 to 97.6 on the recent CompCars data set. We have also tested the proposed method on two additional large-scale data sets with impressive results observed.", "We are dealing with the problem of fine-grained vehicle make&model recognition and verification. Our contribution is showing that extracting additional data from the video stream – besides the vehicle image itself – and feeding it into the deep convolutional neural network boosts the recognition performance considerably. This additional information includes: 3D vehicle bounding box used for \"unpacking\" the vehicle image, its rasterized low-resolution shape, and information about the 3D vehicle orientation. Experiments show that adding such information decreases classification error by 26 (the accuracy is improved from 0.772 to 0.832) and boosts verification average precision by 208 (0.378 to 0.785) compared to baseline pure CNN without any input modifications. Also, the pure baseline CNN outperforms the recent state of the art solution by 0.081. We provide an annotated set \"BoxCars\" of surveillance vehicle images augmented by various automatically extracted auxiliary information. Our approach and the dataset can considerably improve the performance of traffic surveillance systems.", "This paper presents a mirror morphing scheme to deal with the challenging pose variation problem in car model recognition. Conventionally, researchers adopt pose estimation techniques to overcome the pose problem, whereas it is difficult to obtain very accurate pose estimation. Moreover, slight deviation in pose estimation degrades the recognition performance dramatically. The mirror morphing technique utilizes the symmetric property of cars to normalize car images of any orientation into a typical view. 
Therefore, the pose error and center bias can be eliminated and satisfactory recognition performance can be obtained. To support mirror morphing, active shape model (ASM) is used to acquire car shape information. An effective pose and center estimation approach is also proposed to provide a good initialization for ASM. In experiments, our proposed car model recognition system can achieve very high recognition rate (>95 ) with very low probability of false alarm even when it is dealing with the severe pose problem in the cases of cars with similar shape and color.", "This paper aims to highlight vision related tasks centered around “car”, which has been largely neglected by vision community in comparison to other objects. We show that there are still many interesting car-related problems and applications, which are not yet well explored and researched. To facilitate future car-related research, in this paper we present our on-going effort in collecting a large-scale dataset, “CompCars”, that covers not only different car views, but also their different internal and external parts, and rich attributes. Importantly, the dataset is constructed with a cross-modality nature, containing a surveillance-nature set and a web-nature set. We further demonstrate a few important applications exploiting the dataset, namely car model classification, car model verification, and attribute prediction. We also discuss specific challenges of the car-related problems and other potential applications that worth further investigations. The latest dataset can be downloaded at http: mmlab.ie.cuhk.edu.hk datasets comp_cars index.html", "3D object modeling and fine-grained classification are often treated as separate tasks. We propose to optimize 3D model fitting and fine-grained classification jointly. Detailed 3D object representations encode more information (e.g., precise part locations and viewpoint) than traditional 2D-based approaches, and can therefore improve fine-grained classification performance. Meanwhile, the predicted class label can also improve 3D model fitting accuracy, e.g., by providing more detailed class-specific shape models. We evaluate our method on a new fine-grained 3D car dataset (FG3DCar), demonstrating our method outperforms several state-of-the-art approaches. Furthermore, we also conduct a series of analyses to explore the dependence between fine-grained classification performance and 3D models." ] }
: Some notable approaches prior to deep learning are @cite_5 @cite_39 . Popular deep learning approaches for vehicle re-identification are @cite_42 @cite_18 @cite_33 @cite_38 @cite_15 @cite_9 @cite_28 @cite_45 @cite_29 @cite_13 @cite_19 . @cite_15 proposed fusing handcrafted features (color, texture) with high-level attribute features obtained using a CNN. @cite_42 proposed a progressive refinement approach to searching for query vehicles: a list of candidates is first obtained for a query using embeddings from a siamese CNN trained with contrastive loss, and this list is then pruned using a siamese network that matches license plates. To obtain reliable queries for visually similar vehicles, the authors additionally factor in spatio-temporal distance comparisons alongside visual embedding distances.
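As a reference point, the contrastive loss used to train such siamese networks can be sketched as follows; the margin and dimensions are arbitrary choices for illustration.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, same_id, margin=1.0):
    """Classic contrastive loss: pull matched pairs together and push
    mismatched pairs apart until they are at least `margin` away."""
    d = F.pairwise_distance(emb_a, emb_b)
    pos = same_id * d.pow(2)                         # same vehicle
    neg = (1 - same_id) * F.relu(margin - d).pow(2)  # different vehicles
    return 0.5 * (pos + neg).mean()

a, b = torch.randn(8, 128), torch.randn(8, 128)
y = torch.randint(0, 2, (8,)).float()   # 1 = same identity, 0 = different
print(contrastive_loss(a, b, y))
```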
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_33", "@cite_28", "@cite_9", "@cite_42", "@cite_29", "@cite_39", "@cite_19", "@cite_45", "@cite_5", "@cite_15", "@cite_13" ], "mid": [ "2789546350", "2470322391", "2779954854", "2749235995", "2788212895", "2519904008", "2802810579", "2495961871", "2799251491", "2776879428", "2088801782", "", "" ], "abstract": [ "The widespread use of surveillance cameras toward smart and safe cities poses the critical but challenging problem of vehicle reidentification (Re-ID). The state-of-the-art research work performed vehicle Re-ID relying on deep metric learning with a triplet network. However, most existing methods basically ignore the impact of intraclass variance-incorporated embedding on the performance of vehicle reidentification, in which robust fine-grained features for large-scale vehicle Re-ID have not been fully studied. In this paper, we propose a deep metric learning method, group-sensitive-triplet embedding (GS-TRE), to recognize and retrieve vehicles, in which intraclass variance is elegantly modeled by incorporating an intermediate representation “group” between samples and each individual vehicle in the triplet network learning. To capture the intraclass variance attributes of each individual vehicle, we utilize an online grouping method to partition samples within each vehicle ID into a few groups, and build up the triplet samples at multiple granularities across different vehicle IDs as well as different groups within the same vehicle ID to learn fine-grained features. In particular, we construct a large-scale vehicle database “PKU-Vehicle,” consisting of 10 million vehicle images captured by different surveillance cameras in several cities, to evaluate the vehicle Re-ID performance in real-world video surveillance applications. Extensive experiments over benchmark datasets VehicleID, VeRI, and CompCar have shown that the proposed GS-TRE significantly outperforms the state-of-the-art approaches for vehicle Re-ID.", "The growing explosion in the use of surveillance cameras in public security highlights the importance of vehicle search from a large-scale image or video database. However, compared with person re-identification or face recognition, vehicle search problem has long been neglected by researchers in vision community. This paper focuses on an interesting but challenging problem, vehicle re-identification (a.k.a precise vehicle search). We propose a Deep Relative Distance Learning (DRDL) method which exploits a two-branch deep convolutional network to project raw vehicle images into an Euclidean space where distance can be directly used to measure the similarity of arbitrary two vehicles. To further facilitate the future research on this problem, we also present a carefully-organized largescale image database \"VehicleID\", which includes multiple images of the same vehicle captured by different realworld cameras in a city. We evaluate our DRDL method on our VehicleID dataset and another recently-released vehicle model classification dataset \"CompCars\" in three sets of experiments: vehicle re-identification, vehicle model verification and vehicle retrieval. Experimental results show that our method can achieve promising results and outperforms several state-of-the-art approaches.", "Precise search of visually-similar vehicles poses a great challenge in computer vision, which needs to find exactly the same vehicle among a massive vehicles with visually similar appearances for a given query image. 
In this paper, we model the relationship of vehicle images as multiple grains. Following this, we propose two approaches to alleviate the precise vehicle search problem by exploiting multi-grain ranking constraints. One is Generalized Pairwise Ranking, which generalizes the conventional pairwise from considering only binary similar dissimilar relations to multiple relations. The other is Multi-Grain based List Ranking, which introduces permutation probability to score a permutation of a multi-grain list, and further optimizes the ranking by the likelihood loss function. We implement the two approaches with multi-attribute classification in a multi-task deep learning framework. To further facilitate the research on precise vehicle search, we also contribute two high-quality and well-annotated vehicle datasets, named VD1 and VD2, which are collected from two different cities with diverse annotated attributes. As two of the largest publicly available precise vehicle search datasets, they contain 1,097,649 and 807,260 vehicle images respectively. Experimental results show that our approaches achieve the state-of-the-art performance on both datasets.", "Vehicle re-identification is an important problem and has many applications in video surveillance and intelligent transportation. It gains increasing attention because of the recent advances of person re-identification techniques. However, unlike person re-identification, the visual differences between pairs of vehicle images are usually subtle and even challenging for humans to distinguish. Incorporating additional spatio-temporal information is vital for solving the challenging re-identification task. Existing vehicle re-identification methods ignored or used over-simplified models for the spatio-temporal relations between vehicle images. In this paper, we propose a two-stage framework that incorporates complex spatio-temporal information for effectively regularizing the re-identification results. Given a pair of vehicle images with their spatio-temporal information, a candidate visual-spatio-temporal path is first generated by a chain MRF model with a deeply learned potential function, where each visual-spatio-temporal state corresponds to an actual vehicle image with its spatio-temporal information. A Siamese-CNN+Path-LSTM model takes the candidate path as well as the pairwise queries to generate their similarity score. Extensive experiments and analysis show the effectiveness of our proposed method and individual components.", "Vehicle re-identification (re-ID) is to identify the same vehicle across different cameras. It’s a significant but challenging topic, which has received little attention due to the complex intra-class and inter-class variation of vehicle images and the lack of large-scale vehicle re-ID dataset. Previous methods focus on pulling images from different vehicles apart but neglect the discrimination between vehicles from different vehicle models, which is actually quite important to obtain a correct ranking order for vehicle re-ID. In this paper, we learn a structured feature embedding for vehicle re-ID with a novel coarse-to-fine ranking loss to pull images of the same vehicle as close as possible and achieve discrimination between images from different vehicles as well as vehicles from different vehicle models. In the learnt feature space, both intra-class compactness and inter-class distinction are well guaranteed and the Euclidean distance between features directly reflects the semantic similarity of vehicle images. 
Furthermore, we build so far the largest vehicle re-ID dataset “Vehicle-1M”1 which involves nearly 1 million images captured in various surveillance scenarios. Experimental results on “Vehicle-1M”and “VehicleID” demonstrate the superiority of our proposed approach.", "While re-identification (Re-Id) of persons has attracted intensive attention, vehicle, which is a significant object class in urban video surveillance, is often overlooked by vision community. Most existing methods for vehicle Re-Id only achieve limited performance, as they predominantly focus on the generic appearance of vehicle while neglecting some unique identities of vehicle (e.g., license plate). In this paper, we propose a novel deep learning-based approach to PROgressive Vehicle re-ID, called “PROVID”. Our approach treats vehicle Re-Id as two specific progressive search processes: coarse-to-fine search in the feature space, and near-to-distant search in the real world surveillance environment. The first search process employs the appearance attributes of vehicle for a coarse filtering, and then exploits the Siamese Neural Network for license plate verification to accurately identify vehicles. The near-to-distant search process retrieves vehicles in a manner like human beings, by searching from near to faraway cameras and from close to distant time. Moreover, to facilitate progressive vehicle Re-Id research, we collect to-date the largest dataset named VeRi-776 from large-scale urban surveillance videos, which contains not only massive vehicles with diverse attributes and high recurrence rate, but also sufficient license plates and spatiotemporal labels. A comprehensive evaluation on the VeRi-776 shows that our approach outperforms the state-of-the-art methods by 9.28 improvements in term of mAP.", "Vehicle re-identification (re-ID) is an area that has received far less attention in the computer vision community than the prevalent person re-ID. Possible reasons for this slow progress are the lack of appropriate research data and the special 3D structure of a vehicle. Previous works have generally focused on limited views (e.g. front and rear), but these methods are less effective in realistic scenarios where vehicles usually appear in arbitrary views to cameras. In this paper, we focus on the uncertainty of vehicle viewpoint in re-ID, proposing an Adversarial Bi-directional LSTM Network (ABLN). Our model exploits the great advantages of the Long Short-Term Memory (LSTM) to model transformations across continuous view variations of a vehicle and adopts the adversarial architecture to enhance training. Thus, a global vehicle representation containing all views' information can be inferred from only one visible view, and then used for learning to measure the distance between two vehicles with arbitrary views. To verify our model, we evaluate the proposed method on the public VehicleID and VeRi datasets. Experimental results illustrate that our approach achieves consistent improvements over state-of-the-art vehicle re-ID methods.", "This paper proposes an approach to the vehicle reidentification problem in a multiple camera system. We focused on the re-identification itself assuming that the vehicle detection problem is already solved including extraction of a full-fledged 3D bounding box. The re-identification problem is solved by using color histograms and histograms of oriented gradients by a linear regressor. The features are used in separate models in order to get the best results in the shortest CPU computation time. 
The proposed method works with a high accuracy (60 true positives retrieved with 10 false positive rate on a challenging subset of the test data) in 85 milliseconds of the CPU (Core i7) computation time per one vehicle re-identification assuming the fullHD resolution video input. The applications of this work include finding important parameters such as travel time, traffic flow, or traffic information in a distributed traffic surveillance and monitoring system.", "Vehicle re-identification (re-ID) has the huge potential to contribute to the intelligent video surveillance. However, it suffers from challenges that different vehicle identities with a similar appearance have little inter-instance discrepancy while one vehicle usually has large intra-instance differences under viewpoint and illumination variations. Previous methods address vehicle re-ID by simply using visual features from originally captured views and usually exploit the spatial-temporal information of the vehicles to refine the results. In this paper, we propose a Viewpoint-aware Attentive Multi-view Inference (VAMI) model that only requires visual information to solve the multi-view vehicle reID problem. Given vehicle images of arbitrary viewpoints, the VAMI extracts the single-view feature for each input image and aims to transform the features into a global multiview feature representation so that pairwise distance metric learning can be better optimized in such a viewpointinvariant feature space. The VAMI adopts a viewpoint-aware attention model to select core regions at different viewpoints and implement effective multi-view feature inference by an adversarial training architecture. Extensive experiments validate the effectiveness of each proposed component and illustrate that our approach achieves consistent improvements over state-of-the-art vehicle re-ID methods on two public datasets: VeRi and VehicleID.", "In this paper, we tackle the vehicle Re-identification (ReID) problem which is of great importance in urban surveillance and can be used for multiple applications. In our vehicle ReID framework, an orientation invariant feature embedding module and a spatial-temporal regularization module are proposed. With orientation invariant feature embedding, local region features of different orientations can be extracted based on 20 key point locations and can be well aligned and combined. With spatial-temporal regularization, the log-normal distribution is adopted to model the spatial-temporal constraints and the retrieval results can be refined. Experiments are conducted on public vehicle ReID datasets and our proposed method achieves state-of-the-art performance. Investigations of the proposed framework is conducted, including the landmark regressor and comparisons with attention mechanism. Both the orientation invariant feature embedding and the spatio-temporal regularization achieve considerable improvements.", "In current cities, the number of vehicles grows rapidly especially in developing countries, and the traffic surveillance system usually has tens of thousands of cameras connected into a huge network. Hence the volume of data generated by traffic cameras becomes astronomical. So it is a great challenge to process and utilize these big data resources effectively and efficiently. Towards this end, this paper proposes a system which provides a novel service of vehicle trajectory search for urban traffic surveillance. 
In this system, smart cameras extract vehicle IDs with time information when vehicles appear in their views and send these information to a data center with very little bandwidth cost. After that, the center server stores and organizes traffic data using two types of tables, camera tables and inverted tables. We fuse vehicle IDs, spatial-temporal data, and topology of urban roads to build a global graph and propose a PathRank algorithm to support the vehicle trajectory search. Experiment results on data from a real city traffic surveillance network validate and evaluate our system.", "", "" ] }
@cite_9 presents a structured deep learning loss comprising a classification loss term (based on vehicle model) as well as coarse- and fine-grained ranking terms. @cite_18 proposed a modification of triplet loss in which anchor samples are replaced with the corresponding class center, suppressing the effect of poor anchor choices; furthermore, the deep model is trained on both vehicle-model and identity labels in a multi-level process. @cite_33 models the relationships between vehicle images at multiple grains using diverse vehicle attributes, and proposes ranking methods incorporated into multi-grain classification.
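A hedged sketch of the class-center idea of @cite_18 might look like the following, where each identity's mean embedding stands in for the anchor; this is an illustrative reconstruction, not the paper's exact formulation (the function name and margin are hypothetical).

```python
import torch

def center_triplet_loss(emb, labels, margin=0.3):
    """Each identity's mean embedding plays the anchor role: pull the
    farthest member toward the center, push the closest non-member away."""
    loss, uniq = 0.0, labels.unique()
    for c in uniq:
        center = emb[labels == c].mean(dim=0)
        d_pos = (emb[labels == c] - center).norm(dim=1).max()
        d_neg = (emb[labels != c] - center).norm(dim=1).min()
        loss = loss + torch.relu(d_pos - d_neg + margin)
    return loss / len(uniq)

emb = torch.randn(16, 64)
ids = torch.arange(4).repeat_interleave(4)   # 4 identities x 4 images
print(center_triplet_loss(emb, ids))
```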
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_33" ], "mid": [ "2788212895", "2470322391", "2779954854" ], "abstract": [ "Vehicle re-identification (re-ID) is to identify the same vehicle across different cameras. It’s a significant but challenging topic, which has received little attention due to the complex intra-class and inter-class variation of vehicle images and the lack of large-scale vehicle re-ID dataset. Previous methods focus on pulling images from different vehicles apart but neglect the discrimination between vehicles from different vehicle models, which is actually quite important to obtain a correct ranking order for vehicle re-ID. In this paper, we learn a structured feature embedding for vehicle re-ID with a novel coarse-to-fine ranking loss to pull images of the same vehicle as close as possible and achieve discrimination between images from different vehicles as well as vehicles from different vehicle models. In the learnt feature space, both intra-class compactness and inter-class distinction are well guaranteed and the Euclidean distance between features directly reflects the semantic similarity of vehicle images. Furthermore, we build so far the largest vehicle re-ID dataset “Vehicle-1M”1 which involves nearly 1 million images captured in various surveillance scenarios. Experimental results on “Vehicle-1M”and “VehicleID” demonstrate the superiority of our proposed approach.", "The growing explosion in the use of surveillance cameras in public security highlights the importance of vehicle search from a large-scale image or video database. However, compared with person re-identification or face recognition, vehicle search problem has long been neglected by researchers in vision community. This paper focuses on an interesting but challenging problem, vehicle re-identification (a.k.a precise vehicle search). We propose a Deep Relative Distance Learning (DRDL) method which exploits a two-branch deep convolutional network to project raw vehicle images into an Euclidean space where distance can be directly used to measure the similarity of arbitrary two vehicles. To further facilitate the future research on this problem, we also present a carefully-organized largescale image database \"VehicleID\", which includes multiple images of the same vehicle captured by different realworld cameras in a city. We evaluate our DRDL method on our VehicleID dataset and another recently-released vehicle model classification dataset \"CompCars\" in three sets of experiments: vehicle re-identification, vehicle model verification and vehicle retrieval. Experimental results show that our method can achieve promising results and outperforms several state-of-the-art approaches.", "Precise search of visually-similar vehicles poses a great challenge in computer vision, which needs to find exactly the same vehicle among a massive vehicles with visually similar appearances for a given query image. In this paper, we model the relationship of vehicle images as multiple grains. Following this, we propose two approaches to alleviate the precise vehicle search problem by exploiting multi-grain ranking constraints. One is Generalized Pairwise Ranking, which generalizes the conventional pairwise from considering only binary similar dissimilar relations to multiple relations. The other is Multi-Grain based List Ranking, which introduces permutation probability to score a permutation of a multi-grain list, and further optimizes the ranking by the likelihood loss function. 
We implement the two approaches with multi-attribute classification in a multi-task deep learning framework. To further facilitate the research on precise vehicle search, we also contribute two high-quality and well-annotated vehicle datasets, named VD1 and VD2, which are collected from two different cities with diverse annotated attributes. As two of the largest publicly available precise vehicle search datasets, they contain 1,097,649 and 807,260 vehicle images respectively. Experimental results show that our approaches achieve the state-of-the-art performance on both datasets." ] }
In a recent work @cite_38 , the authors propose to include group-based sub-clustering in a triplet loss framework. This helps in explicitly dealing with the intra-class variations of the vehicle identification problem. During training, an online grouping method clusters the samples within each identity into disparate groups. The authors demonstrate state-of-the-art results on several datasets.
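The intra-class grouping step could be approximated offline with k-means, as in the hypothetical sketch below; the cited work uses an online grouping method, so this only conveys the idea of splitting each identity into groups before forming triplets.

```python
import numpy as np
from sklearn.cluster import KMeans

def intra_class_groups(emb, identities, groups_per_id=2):
    """Split each identity's samples into a few groups, so that triplets
    can later be drawn across identities and across groups within one."""
    group_ids = np.zeros(len(identities), dtype=int)
    for vid in np.unique(identities):
        idx = np.where(identities == vid)[0]
        k = min(groups_per_id, len(idx))
        km = KMeans(n_clusters=k, n_init=10).fit(emb[idx])
        group_ids[idx] = km.labels_
    return group_ids

emb = np.random.randn(20, 64).astype(np.float32)
vids = np.repeat(np.arange(5), 4)            # 5 identities, 4 images each
print(intra_class_groups(emb, vids))
```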
{ "cite_N": [ "@cite_38" ], "mid": [ "2789546350" ], "abstract": [ "The widespread use of surveillance cameras toward smart and safe cities poses the critical but challenging problem of vehicle reidentification (Re-ID). The state-of-the-art research work performed vehicle Re-ID relying on deep metric learning with a triplet network. However, most existing methods basically ignore the impact of intraclass variance-incorporated embedding on the performance of vehicle reidentification, in which robust fine-grained features for large-scale vehicle Re-ID have not been fully studied. In this paper, we propose a deep metric learning method, group-sensitive-triplet embedding (GS-TRE), to recognize and retrieve vehicles, in which intraclass variance is elegantly modeled by incorporating an intermediate representation “group” between samples and each individual vehicle in the triplet network learning. To capture the intraclass variance attributes of each individual vehicle, we utilize an online grouping method to partition samples within each vehicle ID into a few groups, and build up the triplet samples at multiple granularities across different vehicle IDs as well as different groups within the same vehicle ID to learn fine-grained features. In particular, we construct a large-scale vehicle database “PKU-Vehicle,” consisting of 10 million vehicle images captured by different surveillance cameras in several cities, to evaluate the vehicle Re-ID performance in real-world video surveillance applications. Extensive experiments over benchmark datasets VehicleID, VeRI, and CompCar have shown that the proposed GS-TRE significantly outperforms the state-of-the-art approaches for vehicle Re-ID." ] }
@cite_29 proposes a viewpoint synthesis approach that predicts embeddings for unknown views given an image from a single true view. These synthetic embeddings are generated using a bi-directional LSTM @cite_26 , and the complete network is trained using a combination of contrastive, reconstruction, and generative adversarial losses @cite_52 . With the similar objective of inferring a global feature vector via view synthesis, the authors of @cite_19 propose a viewpoint-aware framework: utilizing attention @cite_46 and an adversarial loss, they transform a single-view feature into a global multi-view feature representation.
{ "cite_N": [ "@cite_26", "@cite_29", "@cite_52", "@cite_19", "@cite_46" ], "mid": [ "", "2802810579", "1710476689", "2799251491", "2147527908" ], "abstract": [ "", "Vehicle re-identification (re-ID) is an area that has received far less attention in the computer vision community than the prevalent person re-ID. Possible reasons for this slow progress are the lack of appropriate research data and the special 3D structure of a vehicle. Previous works have generally focused on limited views (e.g. front and rear), but these methods are less effective in realistic scenarios where vehicles usually appear in arbitrary views to cameras. In this paper, we focus on the uncertainty of vehicle viewpoint in re-ID, proposing an Adversarial Bi-directional LSTM Network (ABLN). Our model exploits the great advantages of the Long Short-Term Memory (LSTM) to model transformations across continuous view variations of a vehicle and adopts the adversarial architecture to enhance training. Thus, a global vehicle representation containing all views' information can be inferred from only one visible view, and then used for learning to measure the distance between two vehicles with arbitrary views. To verify our model, we evaluate the proposed method on the public VehicleID and VeRi datasets. Experimental results illustrate that our approach achieves consistent improvements over state-of-the-art vehicle re-ID methods.", "For many AI projects, deep learning techniques are increasingly being used as the building blocks for innovative solutions ranging from image classification to object detection, image segmentation, image similarity, and text analytics (e.g., sentiment analysis, key phrase extraction). GANs, first introduced by (2014), are emerging as a powerful new approach toward teaching computers how to do complex tasks through a generative process. As noted by Yann LeCun (at http: bit.ly LeCunGANs ), GANs are truly the “coolest idea in machine learning in the last 20 years.”", "Vehicle re-identification (re-ID) has the huge potential to contribute to the intelligent video surveillance. However, it suffers from challenges that different vehicle identities with a similar appearance have little inter-instance discrepancy while one vehicle usually has large intra-instance differences under viewpoint and illumination variations. Previous methods address vehicle re-ID by simply using visual features from originally captured views and usually exploit the spatial-temporal information of the vehicles to refine the results. In this paper, we propose a Viewpoint-aware Attentive Multi-view Inference (VAMI) model that only requires visual information to solve the multi-view vehicle reID problem. Given vehicle images of arbitrary viewpoints, the VAMI extracts the single-view feature for each input image and aims to transform the features into a global multiview feature representation so that pairwise distance metric learning can be better optimized in such a viewpointinvariant feature space. The VAMI adopts a viewpoint-aware attention model to select core regions at different viewpoints and implement effective multi-view feature inference by an adversarial training architecture. 
Extensive experiments validate the effectiveness of each proposed component and illustrate that our approach achieves consistent improvements over state-of-the-art vehicle re-ID methods on two public datasets: VeRi and VehicleID.", "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so." ] }
@cite_45 develops a framework that utilizes keypoint annotations on vehicles to learn viewpoint-invariant features with a CNN. To further enhance the retrieval of matching vehicles, the authors apply probabilistic spatio-temporal regularization, using random variables that represent camera transition probabilities, and demonstrate superior results when this regularization is added during the retrieval procedure. @cite_28 formulates such probabilities by generating path (trajectory) proposals and employing an LSTM together with a siamese CNN to obtain robust re-identification performance.
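A simple way to combine a visual distance with a log-normal transit-time prior, in the spirit of @cite_45 , is sketched below; the distribution parameters and the additive log-penalty combination are assumptions for illustration and would normally be fit per camera pair from training tracks.

```python
import numpy as np
from scipy.stats import lognorm

def st_regularized_distance(visual_dist, transit_time, s=0.5, scale=120.0):
    """Penalize gallery candidates whose camera-to-camera transit time is
    implausible under a log-normal travel-time model (parameters made up;
    in practice they would be estimated per camera pair)."""
    p = np.maximum(lognorm.pdf(transit_time, s=s, scale=scale), 1e-12)
    return visual_dist - np.log(p)               # smaller is better

vd = np.array([0.8, 0.9, 1.1])                   # visual distances to query
dt = np.array([130.0, 10.0, 125.0])              # observed transit times (s)
print(st_regularized_distance(vd, dt))           # the 10 s candidate is penalized
```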
{ "cite_N": [ "@cite_28", "@cite_45" ], "mid": [ "2749235995", "2776879428" ], "abstract": [ "Vehicle re-identification is an important problem and has many applications in video surveillance and intelligent transportation. It gains increasing attention because of the recent advances of person re-identification techniques. However, unlike person re-identification, the visual differences between pairs of vehicle images are usually subtle and even challenging for humans to distinguish. Incorporating additional spatio-temporal information is vital for solving the challenging re-identification task. Existing vehicle re-identification methods ignored or used over-simplified models for the spatio-temporal relations between vehicle images. In this paper, we propose a two-stage framework that incorporates complex spatio-temporal information for effectively regularizing the re-identification results. Given a pair of vehicle images with their spatio-temporal information, a candidate visual-spatio-temporal path is first generated by a chain MRF model with a deeply learned potential function, where each visual-spatio-temporal state corresponds to an actual vehicle image with its spatio-temporal information. A Siamese-CNN+Path-LSTM model takes the candidate path as well as the pairwise queries to generate their similarity score. Extensive experiments and analysis show the effectiveness of our proposed method and individual components.", "In this paper, we tackle the vehicle Re-identification (ReID) problem which is of great importance in urban surveillance and can be used for multiple applications. In our vehicle ReID framework, an orientation invariant feature embedding module and a spatial-temporal regularization module are proposed. With orientation invariant feature embedding, local region features of different orientations can be extracted based on 20 key point locations and can be well aligned and combined. With spatial-temporal regularization, the log-normal distribution is adopted to model the spatial-temporal constraints and the retrieval results can be refined. Experiments are conducted on public vehicle ReID datasets and our proposed method achieves state-of-the-art performance. Investigations of the proposed framework is conducted, including the landmark regressor and comparisons with attention mechanism. Both the orientation invariant feature embedding and the spatio-temporal regularization achieve considerable improvements." ] }
1901.01229
2951350728
A new mechanism for efficiently solving the Markov decision processes (MDPs) is proposed in this paper. We introduce the notion of reachability landscape where we use the Mean First Passage Time (MFPT) as a means to characterize the reachability of every state in the state space. We show that such reachability characterization very well assesses the importance of states and thus provides a natural basis for effectively prioritizing states and approximating policies. Built on such a novel observation, we design two new algorithms -- Mean First Passage Time based Value Iteration (MFPT-VI) and Mean First Passage Time based Policy Iteration (MFPT-PI) -- that have been modified from the state-of-the-art solution methods. To validate our design, we have performed numerical evaluations in robotic decision-making scenarios, by comparing the proposed new methods with corresponding classic baseline mechanisms. The evaluation results showed that MFPT-VI and MFPT-PI have outperformed the state-of-the-art solutions in terms of both practical runtime and number of iterations. Aside from the advantage of fast convergence, this new solution method is intuitively easy to understand and practically simple to implement.
Decision-making in uncertain environments is a basic problem in the area of artificial intelligence @cite_15 @cite_8 , and Markov decision processes (MDPs) have become very popular for modeling non-deterministic planning problems with full observability @cite_14 @cite_10 .
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_10", "@cite_8" ], "mid": [ "", "2119567691", "1978942630", "2737668828" ], "abstract": [ "", "From the Publisher: The past decade has seen considerable theoretical and applied research on Markov decision processes, as well as the growing use of these models in ecology, economics, communications engineering, and other fields where outcomes are uncertain and sequential decision-making processes are needed. A timely response to this increased activity, Martin L. Puterman's new work provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models. It discusses all major research directions in the field, highlights many significant applications of Markov decision processes models, and explores numerous important topics that have previously been neglected or given cursory coverage in the literature. Markov Decision Processes focuses primarily on infinite horizon discrete time models and models with discrete time spaces while also examining models with arbitrary state spaces, finite horizon models, and continuous-time discrete state models. The book is organized around optimality criteria, using a common framework centered on the optimality (Bellman) equation for presenting results. The results are presented in a \"theorem-proof\" format and elaborated on through both discussion and examples, including results that are not available in any other book. A two-state Markov decision process model, presented in Chapter 3, is analyzed repeatedly throughout the book and demonstrates many results and algorithms. Markov Decision Processes covers recent research advances in such areas as countable state space models with average reward criterion, constrained models, and models with risk sensitive optimality criteria. It also explores several topics that have received little or no attention in other books, including modified policy iteration, multichain models with average reward criterion, and sensitive optimality. In addition, a Bibliographic Remarks section in each chapter comments on relevant historic", "A collection of papers on the application of Markov decision processes is surveyed and classified according to the use of real life data, structural results and special computational schemes. Observations are made about various features of the applications.", "Markov Decision Processes (MDPs) are a mathematical framework for modeling sequential decision problems under uncertainty as well as Reinforcement Learning problems. Written by experts in the field, this book provides a global view of current research using MDPs in Artificial Intelligence. It starts with an introductory presentation of the fundamental aspects of MDPs (planning in MDPs, Reinforcement Learning, Partially Observable MDPs, Markov games and the use of non-classical criteria). Then it presents more advanced research trends in the domain and gives some concrete examples using illustrative applications." ] }
An important related heuristic for efficiently solving MDPs is prioritized sweeping @cite_9 , which has been broadly employed to further speed up the value iteration process. This heuristic evaluates each state and assigns it a score based on the state's contribution to convergence, then sorts all states by their scores (e.g., states with a larger change in value between two consecutive iterations receive higher scores) @cite_7 @cite_3 . In the immediately following dynamic programming iteration, state values are evaluated in this newly prioritized order. The prioritized sweeping heuristic is also leveraged in our MFPT-based value iteration procedure, and comparisons with baseline approaches are conducted in our experimental section.
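A minimal sketch of a prioritized-sweeping-style value iteration is given below; the priority-queue bookkeeping and tolerance are simplified relative to the cited algorithms, and the transition tensor layout is an assumption.

```python
import heapq
import numpy as np

def prioritized_value_iteration(P, R, gamma=0.95, tol=1e-6):
    """P: (A, S, S) transition tensor, R: (S,) state rewards.
    States whose Bellman backup changes their value the most are
    (re)processed first via a max-priority queue."""
    A, S, _ = P.shape
    V = np.zeros(S)

    def backup(s):
        return max(R[s] + gamma * P[a, s] @ V for a in range(A))

    heap = [(-abs(backup(s) - V[s]), s) for s in range(S)]
    heapq.heapify(heap)
    while heap:
        neg_err, s = heapq.heappop(heap)
        if -neg_err < tol:                 # stale or converged entry
            continue
        V[s] = backup(s)
        # Re-score every state that can transition into s (including s).
        for sp in range(S):
            if any(P[a, sp, s] > 0 for a in range(A)):
                err = abs(backup(sp) - V[sp])
                if err > tol:
                    heapq.heappush(heap, (-err, sp))
    return V

# Tiny 2-action, 3-state chain with an absorbing rewarding state.
P = np.array([[[0.9, 0.1, 0.0], [0.0, 0.9, 0.1], [0.0, 0.0, 1.0]],
              [[0.1, 0.9, 0.0], [0.1, 0.0, 0.9], [0.0, 0.0, 1.0]]])
R = np.array([0.0, 0.0, 1.0])
print(prioritized_value_iteration(P, R))
```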
{ "cite_N": [ "@cite_9", "@cite_3", "@cite_7" ], "mid": [ "2048226872", "2121733891", "2159420891" ], "abstract": [ "We present a new algorithm, prioritized sweeping, for efficient prediction and control of stochastic Markov systems. Incremental learning methods such as temporal differencing and Q-learning have real-time performance. Classical methods are slower, but more accurate, because they make full use of the observations. Prioritized sweeping aims for the best of both worlds. It uses all previous experiences both to prioritize important dynamic programming sweeps and to guide the exploration of state-space. We compare prioritized sweeping with other reinforcement learning schemes for a number of different stochastic optimal control problems. It successfully solves large state-space real-time problems with which other methods have difficulty.", "The performance of value and policy iteration can be dramatically improved by eliminating redundant or useless backups, and by backing up states in the right order. We study several methods designed to accelerate these iterative solvers, including prioritization, partitioning, and variable reordering. We generate a family of algorithms by combining several of the methods discussed, and present extensive empirical evidence demonstrating that performance can improve by several orders of magnitude for many problems, while preserving accuracy and convergence guarantees.", "Prioritized sweeping is a model-based reinforcement learning method that attempts to focus an agent's limited computational resources to achieve a good estimate of the value of environment states. To choose effectively where to spend a costly planning step, classic prioritized sweeping uses a simple heuristic to focus computation on the states that are likely to have the largest errors. In this paper, we introduce generalized prioritized sweeping, a principled method for generating such estimates in a representation-specific manner. This allows us to extend prioritized sweeping beyond an explicit, state-based representation to deal with compact representations that are necessary for dealing with large state spaces. We apply this method for generalized model approximators (such as Bayesian networks), and describe preliminary experiments that compare our approach with classical prioritized sweeping." ] }
Important related frameworks for solving MDPs also include compact representations such as linear function representation and approximation @cite_5 @cite_14 used in policy iteration algorithms. The linear-equation based techniques (a detailed formulation is provided in the paper) do not exploit regions of uniformity in the value functions associated with states, but rather rely on a compact form of state features that reflect values only approximately @cite_16 . Our method for computing the MFPT can also be formulated as a linear system. However, the intermediate results generated by the MFPT are more direct: they capture very well -- and also allow us to visualize -- the "importance" of states, and they lead to faster convergence, as demonstrated in our experiments.
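To make the linear-system view of the MFPT concrete, the sketch below solves the standard MFPT equations with NumPy and uses the result to order a value-iteration sweep. The ordering heuristic (increasing MFPT, computed on a uniform-policy chain) is our reading of the MFPT-VI idea, not the authors' exact algorithm, and all names are illustrative.

```python
import numpy as np

def mean_first_passage_times(P, goal):
    # Standard MFPT system: h[goal] = 0 and h[i] = 1 + sum_j P[i, j] * h[j]
    # for i != goal.  Deleting the goal row/column gives (I - Q) h = 1,
    # a single dense linear solve (assumes the goal is reachable everywhere).
    n = P.shape[0]
    idx = [i for i in range(n) if i != goal]
    Q = P[np.ix_(idx, idx)]
    h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    mfpt = np.zeros(n)
    mfpt[idx] = h
    return mfpt

def mfpt_ordered_sweep(P, R, gamma, V, goal):
    # One value-iteration sweep visiting states in increasing MFPT order,
    # so backups propagate outward from the goal.  P: (A, S, S), R: (S, A);
    # the uniform-policy chain P.mean(axis=0) is a stand-in for ranking.
    A = P.shape[0]
    for s in np.argsort(mean_first_passage_times(P.mean(axis=0), goal)):
        V[s] = max(R[s, a] + gamma * P[a, s] @ V for a in range(A))
    return V
```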
{ "cite_N": [ "@cite_5", "@cite_14", "@cite_16" ], "mid": [ "2028145673", "2119567691", "1997477668" ], "abstract": [ "", "From the Publisher: The past decade has seen considerable theoretical and applied research on Markov decision processes, as well as the growing use of these models in ecology, economics, communications engineering, and other fields where outcomes are uncertain and sequential decision-making processes are needed. A timely response to this increased activity, Martin L. Puterman's new work provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models. It discusses all major research directions in the field, highlights many significant applications of Markov decision processes models, and explores numerous important topics that have previously been neglected or given cursory coverage in the literature. Markov Decision Processes focuses primarily on infinite horizon discrete time models and models with discrete time spaces while also examining models with arbitrary state spaces, finite horizon models, and continuous-time discrete state models. The book is organized around optimality criteria, using a common framework centered on the optimality (Bellman) equation for presenting results. The results are presented in a \"theorem-proof\" format and elaborated on through both discussion and examples, including results that are not available in any other book. A two-state Markov decision process model, presented in Chapter 3, is analyzed repeatedly throughout the book and demonstrates many results and algorithms. Markov Decision Processes covers recent research advances in such areas as countable state space models with average reward criterion, constrained models, and models with risk sensitive optimality criteria. It also explores several topics that have received little or no attention in other books, including modified policy iteration, multichain models with average reward criterion, and sensitive optimality. In addition, a Bibliographic Remarks section in each chapter comments on relevant historic", "Abstract Markov decision processes (MDPs) have proven to be popular models for decision-theoretic planning, but standard dynamic programming algorithms for solving MDPs rely on explicit, state-based specifications and computations. To alleviate the combinatorial problems associated with such methods, we propose new representational and computational techniques for MDPs that exploit certain types of problem structure. We use dynamic Bayesian networks (with decision trees representing the local families of conditional probability distributions) to represent stochastic actions in an MDP, together with a decision-tree representation of rewards. Based on this representation, we develop versions of standard dynamic programming algorithms that directly manipulate decision-tree representations of policies and value functions. This generally obviates the need for state-by-state computation, aggregating states at the leaves of these trees and requiring computations only for each aggregate state. The key to these algorithms is a decision-theoretic generalization of classic regression analysis, in which we determine the features relevant to predicting expected value. We demonstrate the method empirically on several planning problems, showing significant savings for certain types of domains. 
We also identify certain classes of problems for which this technique fails to perform well and suggest extensions and related ideas that may prove useful in such circumstances. We also briefly describe an approximation scheme based on this approach." ] }
1901.01229
2951350728
A new mechanism for efficiently solving Markov decision processes (MDPs) is proposed in this paper. We introduce the notion of a reachability landscape, in which we use the Mean First Passage Time (MFPT) to characterize the reachability of every state in the state space. We show that this reachability characterization assesses the importance of states well and thus provides a natural basis for effectively prioritizing states and approximating policies. Building on this observation, we design two new algorithms -- Mean First Passage Time based Value Iteration (MFPT-VI) and Mean First Passage Time based Policy Iteration (MFPT-PI) -- modified from state-of-the-art solution methods. To validate our design, we have performed numerical evaluations in robotic decision-making scenarios, comparing the proposed methods with the corresponding classic baseline mechanisms. The evaluation results show that MFPT-VI and MFPT-PI outperform state-of-the-art solutions in terms of both practical runtime and number of iterations. Aside from the advantage of fast convergence, the new solution method is intuitively easy to understand and practically simple to implement.
Another relevant strategy is real-time dynamic programming (RTDP) @cite_11 , where states are not treated uniformly. Specifically, in each DP iteration only a subset of the most important states is explored, and the selection of this subset is usually based on the agent's exploration experience. A single RTDP iteration usually requires less computation than a classic DP sweep over all states, so RTDP can be run as an online process and integrated into a real-time reinforcement learning framework @cite_13 . Similar strategies include state abstraction @cite_17 @cite_12 , where states with similar characteristics are grouped together hierarchically and/or adaptively, in either an offline static or an online dynamic aggregation style. Although we believe our proposed framework can easily be extended with RTDP-style partial state exploration and adaptive state abstraction, in this work we consider a complete exploration of all states and compare with state-of-the-art methods that evaluate the entire state space.
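For contrast with the full sweeps used in this work, here is a minimal sketch of the non-uniform backups that RTDP-style and prioritized-sweeping methods perform; the Bellman-error priority and the fixed backup budget are illustrative choices, and stale heap entries are tolerated for simplicity.

```python
import heapq
import numpy as np

def prioritized_backups(P, R, gamma, V, budget=50, theta=1e-4):
    # Back up only the states with the largest Bellman error, re-queueing
    # their predecessors, instead of sweeping the whole state space.
    A, S, _ = P.shape
    best = lambda s: max(R[s, a] + gamma * P[a, s] @ V for a in range(A))
    heap = [(-abs(best(s) - V[s]), s) for s in range(S)]
    heapq.heapify(heap)
    for _ in range(budget):
        neg_err, s = heapq.heappop(heap)
        if -neg_err < theta:
            break                       # remaining errors are negligible
        V[s] = best(s)
        for p in range(S):              # re-prioritise predecessors of s
            if p != s and any(P[a, p, s] > 0 for a in range(A)):
                heapq.heappush(heap, (-abs(best(p) - V[p]), p))
    return V
```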
{ "cite_N": [ "@cite_13", "@cite_17", "@cite_12", "@cite_11" ], "mid": [ "108082272", "2089561656", "2397240726", "2009533501" ], "abstract": [ "RTDP is a recent heuristic-search DP algorithm for solving non-deterministic planning problems with full observability. In relation to other dynamic programming methods, RTDP has two benefits: first, it does not have to evaluate the entire state space in order to deliver an optimal policy, and second, it can often deliver good policies pretty fast. On the other hand, RTDP final convergence is slow. In this paper we introduce a labeling scheme into RTDP that speeds up its convergence while retaining its good anytime behavior. The idea is to label a state s as solved when the heuristic values, and thus, the greedy policy defined by them, have converged over s and the states that can be reached from s with the greedy policy. While due to the presence of cycles, these labels cannot be computed in a recursive, bottom-up fashion in general, we show nonetheless that they can be computed quite fast, and that the overhead is compensated by the recomputations avoided. In addition, when the labeling procedure cannot label a state as solved, it improves the heuristic value of a relevant state. This results in the number of Labeled RTDP trials needed for convergence, unlike the number of RTDP trials, to be bounded. From a practical point of view, Labeled RTDP (LRTDP) converges orders of magnitude faster than RTDP, and faster also than another recent heuristic-search DP algorithm, LAO*. Moreover, LRTDP often converges faster than value iteration, even with the heuristic h = 0, thus suggesting that LRTDP has a quite general scope.", "Safe state abstraction in reinforcement learning allows an agent to ignore aspects of its current state that are irrelevant to its current decision, and therefore speeds up dynamic programming and learning. This paper explores safe state abstraction in hierarchical reinforcement learning, where learned behaviors must conform to a given partial, hierarchical program. Unlike previous approaches to this problem, our methods yield significant state abstraction while maintaining hierarchical optimality, i.e., optimality among all policies consistent with the partial program. We show how to achieve this for a partial programming language that is essentially Lisp augmented with nondeterministic constructs. We demonstrate our methods on two variants of Dietterich's taxi domain, showing how state abstraction and hierarchical optimality result in faster learning of better policies and enable the transfer of learned skills from one problem to another.", "State abstraction (or state aggregation) has been extensively studied in the fields of artificial intelligence and operations research. Instead of working in the ground state space, the decision maker usually finds solutions in the abstract state space much faster by treating groups of states as a unit by ignoring irrelevant state information. A number of abstractions have been proposed and studied in the reinforcement-learning and planning literatures, and positive and negative results are known. We provide a unified treatment of state abstraction for Markov decision processes. We study five particular abstraction schemes, some of which have been proposed in the past in different forms, and analyze their usability for planning and learning.", "Learning methods based on dynamic programming (DP) are receiving increasing attention in artificial intelligence. 
Researchers have argued that DP provides the appropriate basis for compiling planning results into reactive strategies for real-time control, as well as for learning such strategies when the system being controlled is incompletely known. We introduce an algorithm based on DP, which we call Real-Time DP (RTDP), by which an embedded system can improve its performance with experience. RTDP generalizes Korf''s Learning-Real-Time-A algorithm to problems involving uncertainty. We invoke results from the theory of asynchronous DP to prove that RTDP achieves optimal behavior in several different classes of problems. We also use the theory of asynchronous DP to illuminate aspects of other DP-based reinforcement learning methods such as Watkins'' Q-Learning algorithm. A secondary aim of this article is to provide a bridge between AI research on real-time planning and learning and relevant concepts and algorithms from control theory. This research was supported by grants to A.G. Barto from the National Science Foundation (ECS-8912623 and ECS-9214866) and the Air Force Office of Scientific Research, Bolling AFB (AFOSR-89-0526)." ] }
1901.01172
2890696839
Two-level indexes have been widely used to handle trajectories of moving objects that are constrained to a network. The top-level of these indexes handles the spatial dimension, whereas the bottom level handles the temporal dimension. The latter turns out to be an instance of the interval-intersection problem, but it has been tackled by non-specialized spatial indexes. In this work, we propose the use of a compact data structure on the bottom level of these indexes. Our experimental evaluation shows that our approach is both faster and smaller than existing solutions.
Like the FNR-tree and MON-tree, we focus on these two-level indexes. To solve the spatial problem, that is, the representation of the network in the two-dimensional plane, the aforementioned structures use a 2D R-tree, storing the segments of the network as lines. With the spatial problem solved, time has to be associated with the segments of the network. More precisely, it is necessary to find all the time intervals (times at which objects pass through a segment) that intersect a given query interval. This problem is known in the literature as the interval intersection problem, an extension of the interval stabbing problem @cite_2 . Classical structures to solve this problem are Interval trees and Priority trees @cite_25 .
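A minimal illustration of the temporal sub-problem: given the time intervals stored for one segment, report those intersecting a query interval. Real implementations use Interval trees or Priority trees (or, as we propose, a compact structure); the sort-plus-bisect scan below only shows the query semantics.

```python
from bisect import bisect_right

class SegmentIntervals:
    def __init__(self, intervals):
        # intervals: (start, end) times at which objects traversed a segment
        self.intervals = sorted(intervals)
        self.starts = [s for s, _ in self.intervals]

    def intersecting(self, qa, qb):
        # [s, e] intersects [qa, qb] iff s <= qb and e >= qa;
        # bisect prunes every interval whose start already exceeds qb.
        hi = bisect_right(self.starts, qb)
        return [(s, e) for s, e in self.intervals[:hi] if e >= qa]

idx = SegmentIntervals([(1, 4), (3, 7), (8, 9)])
print(idx.intersecting(2, 5))   # -> [(1, 4), (3, 7)]
```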
{ "cite_N": [ "@cite_25", "@cite_2" ], "mid": [ "2149906774", "1579902393" ], "abstract": [ "This introduction to computational geometry focuses on algorithms. Motivation is provided from the application areas as all techniques are related to particular applications in robotics, graphics, CAD CAM, and geographic information systems. Modern insights in computational geometry are used to provide solutions that are both efficient and easy to understand and implement.", "Given a set I of n intervals, a stabbing query consists of a point q and asks for all intervals in I that contain q. The Interval Stabbing Problem is to find a data structure that can handle stabbing queries efficiently. We propose a new, simple and optimal approach for different kinds of interval stabbing problems in a static setting where the query points and interval ends are in 1,...,O(n) ." ] }
1901.01172
2890696839
Two-level indexes have been widely used to handle trajectories of moving objects that are constrained to a network. The top-level of these indexes handles the spatial dimension, whereas the bottom level handles the temporal dimension. The latter turns out to be an instance of the interval-intersection problem, but it has been tackled by non-specialized spatial indexes. In this work, we propose the use of a compact data structure on the bottom level of these indexes. Our experimental evaluation shows that our approach is both faster and smaller than existing solutions.
Another difference with previous solutions is that our approach decouples the network from the trajectories. This model, known as Network-Matched, has been used successfully @cite_10 @cite_18 , but without compact data structures in its implementation. Our approach has the advantage that mapping trajectories onto a network facilitates finding similar trajectories and, in consequence, allows a better use of space.
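The decoupling can be pictured as follows: the network is stored once, each trajectory is reduced to per-edge time intervals, and grouping those intervals by edge yields exactly the per-segment lists consumed by the bottom-level temporal index. The triple layout and all names below are hypothetical.

```python
from collections import defaultdict

def build_edge_postings(matched_trajectories):
    # matched_trajectories: traj_id -> [(edge_id, t_enter, t_leave), ...]
    # Returns edge_id -> [(t_enter, t_leave, traj_id), ...], i.e. the raw
    # input for one temporal index per network edge.
    postings = defaultdict(list)
    for traj_id, edges in matched_trajectories.items():
        for edge_id, t_in, t_out in edges:
            postings[edge_id].append((t_in, t_out, traj_id))
    return postings

postings = build_edge_postings({
    "t1": [("e5", 0, 3), ("e8", 3, 6)],
    "t2": [("e5", 2, 4)],
})
# postings["e5"] == [(0, 3, "t1"), (2, 4, "t2")]
```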
{ "cite_N": [ "@cite_18", "@cite_10" ], "mid": [ "2116949673", "2010567219" ], "abstract": [ "In traffic research, management, and planning a number of path-based analyses are heavily used, e.g., for computing turn-times, evaluating green waves, or studying traffic flow. These analyses require retrieving the trajectories that follow the full path being analyzed. Existing path queries cannot sufficiently support such path-based analyses because they retrieve all trajectories that touch any edge in the path. In this paper, we define and formalize the strict path query. This is a novel query type tailored to support path-based analysis, where trajectories must follow all edges in the path. To efficiently support strict path queries, we present a novel NET work-constrained TRAjectory index (NETTRA). This index enables very efficient retrieval of trajectories that follow a specific path, i.e., strict path queries. NETTRA uses a new path encoding scheme that can determine if a trajectory follows a specific path by only retrieving data from the first and last edge in the path. To correctly answer strict path queries existing network-constrained trajectory indexes must retrieve data from all edges in the path. An extensive performance study of NETTRA using a very large real-world trajectory data set, consisting of 1.7 million trajectories (941 million GPS records) and a road network with 1.3 million edges, shows a speed-up of two orders of magnitude compared to state-of-the-art trajectory indexes.", "Tracking and managing the locations of moving objects are essential in modern intelligent transportation systems (ITSs). However, a number of limitations in existing methods make them unsuitable for real-world ITS applications. In particular, Euclidean-based methods are not accurate enough in representing locations and in analyzing traffic, unless the locations are frequently updated. Network-based methods require either digital maps to be installed in moving objects or transmission of prediction policies, which inevitably increase the cost. To solve these problems, we propose a network-matched trajectory-based moving-object database (NMTMOD) mechanism and a traffic flow analysis method using the NMTMOD. In the NMTMOD, the locations of moving objects are tracked through a dense sampling and batch uploading strategy, and a novel edge-centric network-matching method, which is running at the server side, is adopted to efficiently match the densely sampled GPS points to the network. In addition, a deviation-based trajectory optimization method is provided to minimize the trajectory size. Empirical studies with large real trajectory data set offer insight into the design properties of the proposed NMTMOD and suggest that the NMTMOD significantly outperforms other mobile-map free-moving-object database models in terms of precision of both location tracking and network-based traffic flow analysis." ] }
1907.05007
2958457519
With a growing demand for search by image, many works have studied the task of fashion instance-level image retrieval (FIR). Furthermore, recent works introduce the concept of fashion attribute manipulation (FAM), which manipulates a specific attribute (e.g., color) of a fashion item while maintaining the rest of the attributes (e.g., shape and pattern). In this way, users can search not only "the same" items but also "similar" items with the desired attributes. FAM is a challenging task in that the attributes are hard to define and the unique characteristics of a query are hard to preserve. Although both FIR and FAM are important in real-life applications, most previous studies have focused on only one of these problems. In this study, we aim to achieve competitive performance on both FIR and FAM. To do so, we propose a novel method that converts a query into a representation with the desired attributes. We introduce a new idea of attribute manipulation at the feature level, by matching the distribution of manipulated features with that of real features. In this fashion, attribute manipulation can be done independently of learning a representation from the image. With feature-level attribute manipulation, previous FIR methods can perform attribute manipulation without sacrificing their retrieval performance.
Recent instance-level image retrieval methods have shown a dramatic increase in performance by exploiting advances in metric learning @cite_29 @cite_21 @cite_1 @cite_15 @cite_0 . Metric learning focuses on how the loss is computed, by constructing pairs in an effective way. Only the instance ID is required for making a pair; in general, no additional labels are needed. In the fashion domain, several studies have considered the cross-domain problem, which aims to find in-shop images by querying street images taken in uncontrolled conditions @cite_36 @cite_11 @cite_22 @cite_3 . @cite_36 learns a similarity measure between the street and in-shop domains, and @cite_11 uses human clothing part alignment. @cite_22 trained a simple model with triplet loss and max pooling.
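As a reference point for the metric-learning losses cited above, this is the standard triplet objective in its simplest (NumPy) form; only instance IDs are needed to form the (anchor, positive, negative) triples, and the margin value is illustrative.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Pull the anchor towards an embedding of the same instance and push it
    # away from a different instance, up to a margin.
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

a, p, n = (np.random.randn(128) for _ in range(3))
print(triplet_loss(a, p, n))
```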
{ "cite_N": [ "@cite_22", "@cite_36", "@cite_29", "@cite_21", "@cite_1", "@cite_3", "@cite_0", "@cite_15", "@cite_11" ], "mid": [ "2892607309", "2157732827", "2544587078", "2096733369", "2555897561", "2143183660", "2924876209", "2883348239", "2200092826" ], "abstract": [ "Cross domain image retrieval is a challenging task that implies matching images from one domain to their pairs from another domain. In this paper we focus on fashion image retrieval, which involves matching an image of a fashion item taken by users, to the images of the same item taken in controlled condition, usually by professional photographer. When facing this problem, we have different products in train and test time, and we use triplet loss to train the network. We stress the importance of proper training of simple architecture, as well as adapting general models to the specific task.", "In this paper, we address a practical problem of cross-scenario clothing retrieval — given a daily human photo captured in general environment, e.g., on street, finding similar clothing in online shops, where the photos are captured more professionally and with clean background. There are large discrepancies between daily photo scenario and online shopping scenario. We first propose to alleviate the human pose discrepancy by locating 30 human parts detected by a well trained human detector. Then, founded on part features, we propose a two-step calculation to obtain more reliable one-to-many similarities between the query daily photo and online shopping photos: 1) the within-scenario one-to-many similarities between a query daily photo and the auxiliary set are derived by direct sparse reconstruction; and 2) by a cross-scenario many-to-many similarity transfer matrix inferred offline from an extra auxiliary set and the online shopping set, the reliable cross-scenario one-to-many similarities between the query daily photo and all online shopping photos are obtained. We collect a large online shopping dataset and a daily photo dataset, both of which are thoroughly labeled with 15 clothing attributes via Mechanic Turk. The extensive experimental evaluations on the collected datasets well demonstrate the effectiveness of the proposed framework for cross-scenario clothing retrieval.", "While deep learning has become a key ingredient in the top performing methods for many computer vision tasks, it has failed so far to bring similar improvements to instance-level image retrieval. In this article, we argue that reasons for the underwhelming results of deep methods on image retrieval are threefold: (1) noisy training data, (2) inappropriate deep architecture, and (3) suboptimal training procedure. We address all three issues. First, we leverage a large-scale but noisy landmark dataset and develop an automatic cleaning method that produces a suitable training set for deep retrieval. Second, we build on the recent R-MAC descriptor, show that it can be interpreted as a deep and differentiable architecture, and present improvements to enhance it. Last, we train this network with a siamese architecture that combines three streams with a triplet loss. At the end of the training process, the proposed architecture produces a global image representation in a single forward pass that is well suited for image retrieval. Extensive experiments show that our approach significantly outperforms previous retrieval approaches, including state-of-the-art methods based on costly local descriptor indexing and spatial verification. 
On Oxford 5k, Paris 6k and Holidays, we respectively report 94.7, 96.6, and 94.8 mean average precision. Our representations can also be heavily compressed using product quantization with little loss in accuracy.", "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.", "Deep metric learning has gained much popularity in recent years, following the success of deep learning. However, existing frameworks of deep metric learning based on contrastive loss and triplet loss often suffer from slow convergence, partially because they employ only one negative example while not interacting with the other negative classes in each update. In this paper, we propose to address this problem with a new metric learning objective called multi-class N-pair loss. The proposed objective function firstly generalizes triplet loss by allowing joint comparison among more than one negative examples - more specifically, N-1 negative examples - and secondly reduces the computational burden of evaluating deep embedding vectors via an efficient batch construction strategy using only N pairs of examples, instead of (N+1) x N. We demonstrate the superiority of our proposed loss to the triplet loss as well as other competing loss functions for a variety of tasks on several visual recognition benchmark, including fine-grained object recognition and verification, image clustering and retrieval, and face verification and identification.", "We present a scalable approach to automatically suggest relevant clothing products, given a single image without metadata. We formulate the problem as cross-scenario retrieval: the query is a real-world image, while the products from online shopping catalogs are usually presented in a clean environment. We divide our approach into two main stages: a) Starting from articulated pose estimation, we segment the person area and cluster promising image regions in order to detect the clothing classes present in the query image. b) We use image retrieval techniques to retrieve visually similar products from each of the detected classes. We achieve clothing detection performance comparable to the state-of-the-art on a very recent annotated dataset, while being more than 50 times faster. Finally, we present a large scale clothing suggestion scenario, where the product database contains over one million products.", "Recent studies in image retrieval task have shown that ensembling different models and combining multiple global descriptors lead to performance improvement. However, training different models for ensemble is not only difficult but also inefficient with respect to time or memory. In this paper, we propose a novel framework that exploits multiple global descriptors to get an ensemble-like effect while it can be trained in an end-to-end manner. The proposed framework is flexible and expandable by the global descriptor, CNN backbone, loss, and dataset. 
Moreover, we investigate the effectiveness of combining multiple global descriptors with quantitative and qualitative analysis. Our extensive experiments show that the combined descriptor outperforms a single global descriptor, as it can utilize different types of feature properties. In the benchmark evaluation, the proposed framework achieves the state-of-the-art performance on the CARS196, CUB200-2011, In-shop Clothes and Stanford Online Products on image retrieval tasks by a large margin compared to competing approaches. Our model implementations and pretrained models are publicly available.", "Person re-identification (re-ID) is a highly challenging task due to large variations of pose, viewpoint, illumination, and occlusion. Deep metric learning provides a satisfactory solution to person re-ID by training a deep network under supervision of metric loss, e.g., triplet loss. However, the performance of deep metric learning is greatly limited by traditional sampling methods. To solve this problem, we propose a Hard-Aware Point-to-Set (HAP2S) loss with a soft hard-mining scheme. Based on the point-to-set triplet loss framework, the HAP2S loss adaptively assigns greater weights to harder samples. Several advantageous properties are observed when compared with other state-of-the-art loss functions: (1) Accuracy: HAP2S loss consistently achieves higher re-ID accuracies than other alternatives on three large-scale benchmark datasets; (2) Robustness: HAP2S loss is more robust to outliers than other losses; (3) Flexibility: HAP2S loss does not rely on a specific weight function, i.e., different instantiations of HAP2S loss are equally effective. (4) Generality: In addition to person re-ID, we apply the proposed method to generic deep metric learning benchmarks including CUB-200-2011 and Cars196, and also achieve state-of-the-art results.", "In this paper, we define a new task, Exact Street to Shop, where our goal is to match a real-world example of a garment item to the same item in an online shop. This is an extremely challenging task due to visual differences between street photos (pictures of people wearing clothing in everyday uncontrolled settings) and online shop photos (pictures of clothing items on people, mannequins, or in isolation, captured by professionals in more controlled settings). We collect a new dataset for this application containing 404,683 shop photos collected from 25 different online retailers and 20,357 street photos, providing a total of 39,479 clothing item matches between street and shop photos. We develop three different methods for Exact Street to Shop retrieval, including two deep learning baseline methods, and a method to learn a similarity measure between the street and shop domains. Experiments demonstrate that our learned similarity significantly outperforms our baselines that use existing deep learning based representations." ] }
1907.05007
2958457519
With a growing demand for search by image, many works have studied the task of fashion instance-level image retrieval (FIR). Furthermore, recent works introduce the concept of fashion attribute manipulation (FAM), which manipulates a specific attribute (e.g., color) of a fashion item while maintaining the rest of the attributes (e.g., shape and pattern). In this way, users can search not only "the same" items but also "similar" items with the desired attributes. FAM is a challenging task in that the attributes are hard to define and the unique characteristics of a query are hard to preserve. Although both FIR and FAM are important in real-life applications, most previous studies have focused on only one of these problems. In this study, we aim to achieve competitive performance on both FIR and FAM. To do so, we propose a novel method that converts a query into a representation with the desired attributes. We introduce a new idea of attribute manipulation at the feature level, by matching the distribution of manipulated features with that of real features. In this fashion, attribute manipulation can be done independently of learning a representation from the image. With feature-level attribute manipulation, previous FIR methods can perform attribute manipulation without sacrificing their retrieval performance.
Recently, the concept of interactive search in the fashion domain has been introduced @cite_10 @cite_14 @cite_8 . The main idea is that users manipulate the attributes of a query, and the search engine finds a fashion item with the desired attributes (e.g., a green striped dress). Although attribute manipulation is particularly useful in the fashion domain, we found that only a few papers have addressed this issue @cite_5 @cite_12 . @cite_5 proposed to use a memory block to obtain a manipulated feature: the query feature is combined with an attribute-specific representation retrieved from the memory block. Triplets are constructed based on combinations of attribute labels, which requires dense attribute annotations. @cite_12 suggested region-aware attribute manipulation via the integration of attribute activation maps; the features extracted per region are then combined to construct the global representation.
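A loose sketch of the memory-block idea in @cite_5: a memory holds one prototype per attribute value, and the query feature is shifted from the source prototype towards the target one. The additive update and the gating scalar alpha are our assumptions, not the published architecture, and the prototypes would be learned rather than random.

```python
import numpy as np

memory = {("color", "red"): np.random.randn(128),    # learned in practice
          ("color", "blue"): np.random.randn(128)}

def manipulate(query_feat, attribute, source, target, alpha=1.0):
    # Swap the attribute-specific component of the query feature;
    # retrieval then runs on the manipulated feature as if it were real.
    delta = memory[(attribute, target)] - memory[(attribute, source)]
    return query_feat + alpha * delta

blue_query = manipulate(np.random.randn(128), "color", "red", "blue")
```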
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_5", "@cite_10", "@cite_12" ], "mid": [ "2893666197", "2889720216", "2735001949", "2155855695", "2798951647" ], "abstract": [ "", "In this paper, we introduce an attribute-based interactive image search which can leverage human-in-the-loop feedback to iteratively refine image search results. We study active image search where human feedback is solicited exclusively in visual form, without using relative attribute annotations used by prior work which are not typically found in many datasets. In order to optimize the image selection strategy, a deep reinforcement model is trained to learn what images are informative rather than rely on hand-crafted measures typically leveraged in prior work. Additionally, we extend the recently introduced Conditional Similarity Network to incorporate global similarity in training visual embeddings, which results in more natural transitions as the user explores the learned similarity embeddings. Our experiments demonstrate the effectiveness of our approach, producing compelling results on both active image search and image attribute representation tasks.", "We introduce a new fashion search protocol where attribute manipulation is allowed within the interaction between users and search engines, e.g. manipulating the color attribute of the clothing from red to blue. It is particularly useful for image-based search when the query image cannot perfectly match users expectation of the desired product. To build such a search engine, we propose a novel memory-augmented Attribute Manipulation Network (AMNet) which can manipulate image representation at the attribute level. Given a query image and some attributes that need to modify, AMNet can manipulate the intermediate representation encoding the unwanted attributes and change them to the desired ones through following four novel components: (1) a dual-path CNN architecture for discriminative deep attribute representation learning, (2) a memory block with an internal memory and a neural controller for prototype attribute representation learning and hosting, (3) an attribute manipulation network to modify the representation of the query image with the prototype feature retrieved from the memory block, (4) a loss layer which jointly optimizes the attribute classification loss and a triplet ranking loss over triplet images for facilitating precise attribute manipulation and image retrieving. Extensive experiments conducted on two large-scale fashion search datasets, i.e. DARN and DeepFashion, have demonstrated that AMNet is able to achieve remarkably good performance compared with well-designed baselines in terms of effectiveness of attribute manipulation and search accuracy.", "In interactive image search, a user iteratively refines his results by giving feedback on exemplar images. Active selection methods aim to elicit useful feedback, but traditional approaches suffer from expensive selection criteria and cannot predict in formativeness reliably due to the imprecision of relevance feedback. To address these drawbacks, we propose to actively select \"pivot\" exemplars for which feedback in the form of a visual comparison will most reduce the system's uncertainty. 
For example, the system might ask, \"Is your target image more or less crowded than this image?\" Our approach relies on a series of binary search trees in relative attribute space, together with a selection function that predicts the information gain were the user to compare his envisioned target to the next node deeper in a given attribute's tree. It makes interactive search more efficient than existing strategies-both in terms of the system's selection time as well as the user's feedback effort.", "In this paper, we investigate ways of conducting a detailed fashion search using query images and attributes. A credible fashion search platform should be able to (1) find images that share the same attributes as the query image, (2) allow users to manipulate certain attributes, e.g. replace collar attribute from round to v-neck, and (3) handle region-specific attribute manipulations, e.g. replacing the color attribute of the sleeve region without changing the color attribute of other regions. A key challenge to be addressed is that fashion products have multiple attributes and it is important for each of these attributes to have representative features. To address these challenges, we propose the FashionSearchNet which uses a weakly supervised localization method to extract regions of attributes. By doing so, unrelated features can be ignored thus improving the similarity learning. Also, FashionSearchNet incorporates a new procedure that enables region awareness to be able to handle region-specific requests. FashionSearchNet outperforms the most recent fashion search techniques and is shown to be able to carry out different search scenarios using the dynamic queries." ] }
1907.05045
2957442170
Logic programming languages such as Datalog have become popular as Domain Specific Languages (DSLs) for solving large-scale, real-world problems, in particular, static program analysis and network analysis. The logic specifications that model analysis problems process millions of tuples of data and contain hundreds of highly recursive rules. As a result, they are notoriously difficult to debug. While the database community has proposed several data-provenance techniques that address the Declarative Debugging Challenge for Databases, in the case of analysis problems, these state-of-the-art techniques do not scale. In this paper, we introduce a novel bottom-up Datalog evaluation strategy for debugging: our provenance evaluation strategy relies on a new provenance lattice that includes proof annotations and on a new fixed-point semantics for semi-naive evaluation. A debugging query mechanism allows arbitrary provenance queries, constructing partial proof trees of tuples with minimal height. We integrate our technique into Souffle, a Datalog engine that synthesizes C++ code, and achieve high performance by using specialized parallel data structures. Experiments are conducted with DOOP DaCapo, producing proof annotations for tens of millions of output tuples. We show that our method has a runtime overhead of 1.27x on average while being more flexible than existing state-of-the-art techniques.
Debugging for logic programming languages has a long history, with work on algorithmic debugging strategies dating back to the 1980s @cite_10 @cite_8 . These works present a framework for the algorithmic debugging of Prolog programs, where the system asks the user questions about the intended model of the program in order to find buggy rules. However, they are based on Prolog's SLDNF resolution, which is not truly declarative and thus differs from the semantics of Datalog. Our method aligns in practice with the interactive debugging frameworks presented in these works, but it applies to the bottom-up evaluation of Datalog, with sophisticated and efficient techniques to generate the debugging information.
{ "cite_N": [ "@cite_10", "@cite_8" ], "mid": [ "1608059426", "1514468887" ], "abstract": [ "A meta-program, regardless of the nature of the programming language, is a program whose data denotes another (object) program. The importance of meta-programming can be gauged from its large number of applications. These include compilers, interpreters, program analysers, and program transformers. Furthermore, a logic program when used in artificial intelligence often formalises some knowledge; in this case a meta-program is viewed as a meta-reasoner for reasoning about this knowledge.", "The thesis lays a theoretical framework for program debugging, with the goal of partly mechanizing this activity. In particular, we formalize and develop algorithmic solutions to the following two questions: (1) How do we identify a bug in a program that behaves incorrectly? (2) How do we fix a bug, once one is identified? We develop interactive diagnosis algorithms that identify a bug in a program that behaves incorrectly, and implement them in Prolog for the diagnosis of Prolog programs. Their performance suggests that they can be the backbone of debugging aids that go far beyond what is offered by current programming environments. We develop an inductive inference algorithm that synthesizes logic programs from examples of their behavior. The algorithm incorporates the diagnosis algorithms as a component. It is incremental, and progresses by debugging a program with respect to the examples. The Model Inference System is a Prolog implementation of the algorithm. Its range of applications and efficiency is comparable to existing systems for program synthesis from examples and grammatical inference. We develop an algorithm that can fix a bug that has been identified, and integrate it with the diagnosis algorithms to form an interactive debugging system. By restricting the class of bugs we attempt to correct, the system can debug programs that are too complex for the Model Inference System to synthesize." ] }
1907.05045
2957442170
Logic programming languages such as Datalog have become popular as Domain Specific Languages (DSLs) for solving large-scale, real-world problems, in particular, static program analysis and network analysis. The logic specifications that model analysis problems process millions of tuples of data and contain hundreds of highly recursive rules. As a result, they are notoriously difficult to debug. While the database community has proposed several data-provenance techniques that address the Declarative Debugging Challenge for Databases, in the case of analysis problems, these state-of-the-art techniques do not scale. In this paper, we introduce a novel bottom-up Datalog evaluation strategy for debugging: our provenance evaluation strategy relies on a new provenance lattice that includes proof annotations and on a new fixed-point semantics for semi-naive evaluation. A debugging query mechanism allows arbitrary provenance queries, constructing partial proof trees of tuples with minimal height. We integrate our technique into Souffle, a Datalog engine that synthesizes C++ code, and achieve high performance by using specialized parallel data structures. Experiments are conducted with DOOP DaCapo, producing proof annotations for tens of millions of output tuples. We show that our method has a runtime overhead of 1.27x on average while being more flexible than existing state-of-the-art techniques.
Our method also fits into the established frameworks for provenance in Datalog @cite_23 and debugging for Datalog @cite_32 . The proof trees generated by our method are analogous to the computation graphs presented in @cite_32 , and are equally effective for debugging. They can also be seen as an extension of why-provenance @cite_27 . However, our hybrid method for generating proof trees is novel, i.e., it permits multiple debugging queries in a single debugging cycle. Hence, our method is especially useful for debugging large Datalog specifications.
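To illustrate the minimal-height proof annotations, here is a toy bottom-up evaluation of a transitive-closure program: each derived tuple carries 1 + max(annotations of its body tuples), minimised over all derivations at the fixed point. The naive (rather than semi-naive) loop and the example program are ours.

```python
from itertools import product

edge = {("a", "b"), ("b", "c"), ("c", "d")}
path = {e: 1 for e in edge}   # path(x,y) :- edge(x,y); EDB facts have height 0

changed = True
while changed:                # naive fixed point; the real engine is semi-naive
    changed = False
    for (x, y), (y2, z) in product(list(path), repeat=2):
        if y == y2:           # path(x,z) :- path(x,y), path(y,z)
            cand = 1 + max(path[(x, y)], path[(y2, z)])
            if cand < path.get((x, z), float("inf")):
                path[(x, z)] = cand
                changed = True

print(path[("a", "d")])       # 3: a minimal proof tree of height 3 exists
```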
{ "cite_N": [ "@cite_27", "@cite_32", "@cite_23" ], "mid": [ "1552694902", "1600055947", "2167541073" ], "abstract": [ "With the proliferation of database views and curated databases, the issue of data provenance - where a piece of data came from and the process by which it arrived in the database - is becoming increasingly important, especially in scientific databases where understanding provenance is crucial to the accuracy and currency of data. In this paper we describe an approach to computing provenance when the data of interest has been created by a database query. We adopt a syntactic approach and present results for a general data model that applies to relational databases as well as to hierarchical data such as XML. A novel aspect of our work is a distinction between \"why\" provenance (refers to the source data that had some influence on the existence of the data) and \"where\" provenance (refers to the location(s) in the source databases from which the data was extracted).", "The logic programming language Datalog has been extensively researched as a query language for deductive databases. Although similar to Prolog, the Datalog operational mechanisms are more intricate, leading to computations quite hard to debug by traditional approaches. In this paper, we present a theoretical framework for debugging Datalog programs based on the ideas of declarative debugging. In our setting, a debugging session starts when the user detects an unexpected answer for some query, and ends with the debugger pointing to either an erroneous predicate or to a set of mutually recursive predicates as the cause of the unexpected answer. Instead of representing the computations by means of trees, as usual in declarative debugging, we propose graphs as a more convenient structure in the case of Datalog, proving formally the soundness and completeness of the debugging technique. We also present a debugging tool implemented in the publicly available deductive database system DES following this theoretical framework.", "Different notions of provenance for database queries have been proposed and studied in the past few years. In this article, we detail three main notions of database provenance, some of their applications, and compare and contrast amongst them. Specifically, we review why, how, and where provenance, describe the relationships among these notions of provenance, and describe some of their applications in confidence computation, view maintenance and update, debugging, and annotation propagation." ] }
1907.05045
2957442170
Logic programming languages such as Datalog have become popular as Domain Specific Languages (DSLs) for solving large-scale, real-world problems, in particular, static program analysis and network analysis. The logic specifications that model analysis problems process millions of tuples of data and contain hundreds of highly recursive rules. As a result, they are notoriously difficult to debug. While the database community has proposed several data-provenance techniques that address the Declarative Debugging Challenge for Databases, in the case of analysis problems, these state-of-the-art techniques do not scale. In this paper, we introduce a novel bottom-up Datalog evaluation strategy for debugging: our provenance evaluation strategy relies on a new provenance lattice that includes proof annotations and on a new fixed-point semantics for semi-naive evaluation. A debugging query mechanism allows arbitrary provenance queries, constructing partial proof trees of tuples with minimal height. We integrate our technique into Souffle, a Datalog engine that synthesizes C++ code, and achieve high performance by using specialized parallel data structures. Experiments are conducted with DOOP DaCapo, producing proof annotations for tens of millions of output tuples. We show that our method has a runtime overhead of 1.27x on average while being more flexible than existing state-of-the-art techniques.
Debugging Datalog specifications is not the only use case for provenance, with user-guided approaches @cite_40 @cite_22 @cite_3 @cite_5 for program analysis also relying on tracking the origins of data. In @cite_3 @cite_40 @cite_22 , a user may tag certain static analysis alarms, to increase or decrease their importance in the next analysis cycle. In @cite_5 , the analysis system automatically generates an appropriate abstraction, by iteratively trying and refining failing abstractions. All these approaches rely on an annotation framework for Datalog: the user-guided systems require the user to add an annotation representing the importance of an alarm, and the abstraction refinement system requires the system to tag failing analyses with annotations. In any case, our provenance evaluation strategy would fit well into these systems, by providing an annotation framework at the Datalog engine level.
{ "cite_N": [ "@cite_5", "@cite_40", "@cite_22", "@cite_3" ], "mid": [ "2050680750", "2762682773", "2079877139", "2798352717" ], "abstract": [ "A central task for a program analysis concerns how to efficiently find a program abstraction that keeps only information relevant for proving properties of interest. We present a new approach for finding such abstractions for program analyses written in Datalog. Our approach is based on counterexample-guided abstraction refinement: when a Datalog analysis run fails using an abstraction, it seeks to generalize the cause of the failure to other abstractions, and pick a new abstraction that avoids a similar failure. Our solution uses a boolean satisfiability formulation that is general, complete, and optimal: it is independent of the Datalog solver, it generalizes the failure of an abstraction to as many other abstractions as possible, and it identifies the cheapest refined abstraction to try next. We show the performance of our approach on a pointer analysis and a typestate analysis, on eight real-world Java benchmark programs.", "We propose an interactive approach to resolve static analysis alarms. Our approach synergistically combines a sound but imprecise analysis with precise but unsound heuristics, through user interaction. In each iteration, it solves an optimization problem to find a set of questions for the user such that the expected payoff is maximized. We have implemented our approach in a tool, Ursa, that enables interactive alarm resolution for any analysis specified in the declarative logic programming language Datalog. We demonstrate the effectiveness of Ursa on a state-of-the-art static datarace analysis using a suite of 8 Java programs comprising 41-194 KLOC each. Ursa is able to eliminate 74 of the false alarms per benchmark with an average payoff of 12× per question. Moreover, Ursa prioritizes user effort effectively by posing questions that yield high payoffs earlier.", "Program analysis tools often produce undesirable output due to various approximations. We present an approach and a system EUGENE that allows user feedback to guide such approximations towards producing the desired output. We formulate the problem of user-guided program analysis in terms of solving a combination of hard rules and soft rules: hard rules capture soundness while soft rules capture degrees of approximations and preferences of users. Our technique solves the rules using an off-the-shelf solver in a manner that is sound (satisfies all hard rules), optimal (maximally satisfies soft rules), and scales to real-world analyses and programs. We evaluate EUGENE on two different analyses with labeled output on a suite of seven Java programs of size 131–198 KLOC. We also report upon a user study involving nine users who employ EUGENE to guide an information-flow analysis on three Java micro-benchmarks. In our experiments, EUGENE significantly reduces misclassified reports upon providing limited amounts of feedback.", "Program analyses necessarily make approximations that often lead them to report true alarms interspersed with many false alarms. We propose a new approach to leverage user feedback to guide program analyses towards true alarms and away from false alarms. Our approach associates each alarm with a confidence value by performing Bayesian inference on a probabilistic model derived from the analysis rules. 
In each iteration, the user inspects the alarm with the highest confidence and labels its ground truth, and the approach recomputes the confidences of the remaining alarms given this feedback. It thereby maximizes the return on the effort by the user in inspecting each alarm. We have implemented our approach in a tool named Bingo for program analyses expressed in Datalog. Experiments with real users and two sophisticated analyses---a static datarace analysis for Java programs and a static taint analysis for Android apps---show significant improvements on a range of metrics, including false alarm rates and number of bugs found." ] }
1907.05190
2952522529
Not all types of supervision signals are created equal: Different types of feedback have different costs and effects on learning. We show how self-regulation strategies that decide when to ask for which kind of feedback from a teacher (or from oneself) can be cast as a learning-to-learn problem leading to improved cost-aware sequence-to-sequence learning. In experiments on interactive neural machine translation, we find that the self-regulator discovers an @math -greedy strategy for the optimal cost-quality trade-off by mixing different feedback types including corrections, error markups, and self-supervision. Furthermore, we demonstrate its robustness under domain shift and identify it as a promising alternative to active learning.
Further connections can be drawn between our work on learning from multiple feedback types and various extensions of reinforcement learning to multiple tasks @cite_27 , multiple loss functions @cite_28 , and multiple policies @cite_0 .
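As a caricature of the self-regulation policy the paper learns, one can write down a cost-aware epsilon-greedy chooser directly; the feedback types, costs, and gain estimates below are placeholders, not the learned regulator.

```python
import random

COST = {"full_correction": 3.0, "error_markup": 1.0, "self_supervision": 0.1}
gain = {k: 1e-3 for k in COST}   # running estimate of quality gain per type

def choose_feedback(eps=0.1):
    if random.random() < eps:                    # explore a random type
        return random.choice(list(COST))
    return max(COST, key=lambda k: gain[k] / COST[k])   # best gain per cost

def update(kind, observed_gain, lr=0.1):
    gain[kind] += lr * (observed_gain - gain[kind])
```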
{ "cite_N": [ "@cite_28", "@cite_27", "@cite_0" ], "mid": [ "2963780286", "2551887912", "2785940258" ], "abstract": [ "Teaching is critical to human society: it is with teaching that prospective students are educated and human civilization can be inherited and advanced. A good teacher not only provides his her students with qualified teaching materials (e.g., textbooks), but also sets up appropriate learning objectives (e.g., course projects and exams) considering different situations of a student. When it comes to artificial intelligence, treating machine learning models as students, the loss functions that are optimized act as perfect counterparts of the learning objective set by the teacher. In this work, we explore the possibility of imitating human teaching behaviors by dynamically and automatically outputting appropriate loss functions to train machine learning models. Different from typical learning settings in which the loss function of a machine learning model is predefined and fixed, in our framework, the loss function of a machine learning model (we call it student) is defined by another machine learning model (we call it teacher). The ultimate goal of teacher model is cultivating the student to have better performance measured on development dataset. Towards that end, similar to human teaching, the teacher, a parametric model, dynamically outputs different loss functions that will be used and optimized by its student model at different training stages. We develop an efficient learning method for the teacher model that makes gradient based optimization possible, exempt of the ineffective solutions such as policy optimization. We name our method as learning to teach with dynamic loss functions'' (L2T-DLF for short). Extensive experiments on real world tasks including image classification and neural machine translation demonstrate that our method significantly improves the quality of various student models.", "Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880 expert human performance, and a challenging suite of first-person, three-dimensional tasks leading to a mean speedup in learning of 10 @math and averaging 87 expert human performance on Labyrinth.", "In the pursuit of increasingly intelligent learning systems, abstraction plays a vital role in enabling sophisticated decisions to be made in complex environments. The options framework provides formalism for such abstraction over sequences of decisions. However most models require that options be given a priori, presumably specified by hand, which is neither efficient, nor scalable. Indeed, it is preferable to learn options directly from interaction with the environment. 
Despite several efforts, this remains a difficult problem: many approaches require access to a model of the environmental dynamics, and inferred options are often not interpretable, which limits our ability to explain the system behavior for verification or debugging purposes. In this work we develop a novel policy gradient method for the automatic learning of policies with options. This algorithm uses inference methods to simultaneously improve all of the options available to an agent, and thus can be employed in an off-policy manner, without observing option labels. Experimental results show that the options learned can be interpreted. Further, we find that the method presented here is more sample efficient than existing methods, leading to faster and more stable learning of policies with options." ] }
1907.04954
2954915885
This paper presents work on modelling the social psychological aspect of socialization in the case of a computationally creative master-apprentice system. In each master-apprentice pair, the master, a genetic algorithm, is seen as a parent for its apprentice, which is an NMT-based sequence-to-sequence model. The effect of different parenting styles on the creative output of each pair is the focus of this study. This approach brings a novel viewpoint to computational social creativity, which has in the past mainly focused on computationally creative agents of equal social standing, whereas our approach studies the phenomenon in the context of a social hierarchy.
Research on an agent community consisting of self-organizing maps @cite_14 , although outside of the computational creativity paradigm, presents a way of simulating the emergence of language. The agents are capable of meaning negotiation and of converging on a common language to communicate about the edibility of different food items in their shared world.
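To make the mechanism behind such agent communities concrete, the following is a minimal Python sketch of the classic self-organizing map update that each agent could maintain; the grid size, decay schedules, and toy food-item features are illustrative assumptions, not details taken from @cite_14.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Classic self-organizing map training; each agent could hold one such
    map and align it with others through repeated communication."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Best-matching unit: grid cell whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            # Decay learning rate and neighbourhood radius over time.
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-3
            # Pull the BMU and its grid neighbours towards the input.
            grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            influence = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
            weights += lr * influence * (x - weights)
            step += 1
    return weights

# Toy usage: 2-D "perceptual" features of food items, e.g. (sweetness, hardness).
items = np.random.default_rng(1).random((100, 2))
som = train_som(items)
```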
{ "cite_N": [ "@cite_14" ], "mid": [ "2187002499" ], "abstract": [ "In this article, we present a model of a cognitive system, or an agent, with the following properties: it can perceive its environment, it can move in its environment, it can perform some simple actions, and it can send and receive messages. The main components of its internal structure include a working memory, a semantic memory, and a decision making mechanism. In our implemented simulation model, the agent associates linguistic expressions and vis ual perceptions. The main motivation for communication is to exchange information. The linguistic expressions are not symbolic but pattern-like. With the current framework and simulation tool, we wish to provide a useful model for language emergence based on the unsupervised learning paradigm among a community of communicating autonomous agents. In the future, we plan to include other aspects of cognitive modeling including more realistic multimodal information processing, anticipatory decision making, language evolution, and emotional modeling." ] }
1907.04954
2954915885
This paper presents work on modelling the social psychological aspect of socialization in the case of a computationally creative master-apprentice system. In each master-apprentice pair, the master, a genetic algorithm, is seen as a parent for its apprentice, which is an NMT-based sequence-to-sequence model. The effect of different parenting styles on the creative output of each pair is the focus of this study. This approach brings a novel viewpoint to computational social creativity, which has in the past mainly focused on computationally creative agents of equal social standing, whereas our approach studies the phenomenon in the context of a social hierarchy.
The papers discussed in this section, as well as other similar previously conducted work @cite_7 @cite_0 @cite_22 , mostly study the collaboration of agents that have an equal social status, in contrast to our case where the social status is hierarchical. We therefore find that the study presented in this paper is needed to shed some light on asymmetrical social relations in computational creativity.
{ "cite_N": [ "@cite_0", "@cite_22", "@cite_7" ], "mid": [ "2952184798", "", "1841291977" ], "abstract": [ "One particular challenge in AI is the computational modelling and simulation of creativity. Feedback and learning from experience are key aspects of the creative process. Here we investigate how we could implement feedback in creative systems using a social model. From the field of creative writing we borrow the concept of a Writers Workshop as a model for learning through feedback. The Writers Workshop encourages examination, discussion and debates of a piece of creative work using a prescribed format of activities. We propose a computational model of the Writers Workshop as a roadmap for incorporation of feedback in artificial creativity systems. We argue that the Writers Workshop setting describes the anatomy of the creative process. We support our claim with a case study that describes how to implement the Writers Workshop model in a computational creativity system. We present this work using patterns other people can follow to implement similar designs in their own systems. We conclude by discussing the broader relevance of this model to other aspects of AI.", "", "Holland's (1975) genetic algorithm is a minimal computer model of natural selection that made it possible to investigate the effect of manipulating specific parameters on the evolutionary process. If culture is, like biology, a form of evolution, it should be possible to similarly abstract the underlying skeleton of the process and develop a minimal model of it. Meme and Variations, or MAV, is a computational model, inspired by the genetic algorithm, of how ideas evolve in a society of interacting individuals (Gabora 1995). The name is a pun on the classical music form 'theme and variations', because it is based on the premise that novel ideas are variations of old ones; they result from tweaking or combining existing ideas in new ways ( 1981). MAV explores the impact of biological phenomena such as over-dominance and epistasis as well as cognitive and social phenomena such as the ability to learn generalizations or imitate others on the fitness and diversity of cultural transmissible actions." ] }
1907.04569
2959551295
Road markings provide guidance to traffic participants and enforce safe driving behaviour; understanding their semantic meaning is therefore paramount in (automated) driving. However, producing the vast quantities of road marking labels required for training state-of-the-art deep networks is costly, time-consuming, and simply infeasible for every domain and condition. In addition, training data retrieved from virtual worlds often lack the richness and complexity of the real world and consequently cannot be used directly. In this paper, we provide an alternative approach in which new road marking training pairs are automatically generated. To this end, we apply principles of domain randomization to the road layout and synthesize new images from altered semantic labels. We demonstrate that training on these synthetic pairs improves mIoU of the segmentation of rare road marking classes during real-world deployment in complex urban environments by more than 12 percentage points, while performance for other classes is retained. This framework can easily be scaled to all domains and conditions to generate large-scale road marking datasets, while avoiding manual labelling effort.
Road marking segmentation as demonstrated in @cite_31 is closest to the application of this paper. The authors train a network for semantic road marking segmentation and improve their results by predicting the vanishing point simultaneously. In contrast to this paper, they require thousands of hand-labelled images, which is very labour-intensive. Alternatively, the authors of @cite_34 hand-label road markings such as arrows and bicycle signs and train an object detection network to predict bounding boxes instead of pixel segmentations. In previous work @cite_20 (which includes a more extensive review), we introduced a weakly-supervised approach for binary road marking segmentation, which is used here to acquire road marking labels for real-world scenes.
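As an illustration of the multi-task setup described for @cite_31, the sketch below shows one plausible PyTorch head that predicts per-pixel road-marking classes and a vanishing point jointly; the layer shapes, class count, and loss weighting are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Illustrative two-task head: per-pixel marking classes plus a
    vanishing point regressed as normalized (x, y) image coordinates."""
    def __init__(self, in_ch=256, n_classes=17):
        super().__init__()
        self.seg = nn.Conv2d(in_ch, n_classes, kernel_size=1)
        self.vp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, 2), nn.Sigmoid(),
        )

    def forward(self, feats):
        return self.seg(feats), self.vp(feats)

def joint_loss(seg_logits, vp_pred, seg_target, vp_target, vp_weight=0.1):
    # Cross-entropy over marking classes plus an L1 vanishing-point term;
    # the 0.1 weighting is an arbitrary illustrative choice.
    seg_loss = nn.functional.cross_entropy(seg_logits, seg_target)
    vp_loss = nn.functional.l1_loss(vp_pred, vp_target)
    return seg_loss + vp_weight * vp_loss

# Toy shapes: backbone features for a batch of 4 images.
feats = torch.randn(4, 256, 32, 64)
head = MultiTaskHead()
seg_logits, vp_pred = head(feats)
loss = joint_loss(seg_logits, vp_pred,
                  torch.randint(0, 17, (4, 32, 64)), torch.rand(4, 2))
loss.backward()
```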
{ "cite_N": [ "@cite_31", "@cite_34", "@cite_20" ], "mid": [ "2964332990", "2909971279", "2890657615" ], "abstract": [ "In this paper, we propose a unified end-to-end trainable multi-task network that jointly handles lane and road marking detection and recognition that is guided by a vanishing point under adverse weather conditions. We tackle rainy and low illumination conditions, which have not been extensively studied until now due to clear challenges. For example, images taken under rainy days are subject to low illumination, while wet roads cause light reflection and distort the appearance of lane and road markings. At night, color distortion occurs under limited illumination. As a result, no benchmark dataset exists and only a few developed algorithms work under poor weather conditions. To address this shortcoming, we build up a lane and road marking benchmark which consists of about 20,000 images with 17 lane and road marking classes under four different scenarios: no rain, rain, heavy rain, and night. We train and evaluate several versions of the proposed multi-task network and validate the importance of each task. The resulting approach, VPGNet, can detect and classify lanes and road markings, and predict a vanishing point with a single forward pass. Experimental results show that our approach achieves high accuracy and robustness under various conditions in realtime (20 fps). The benchmark and the VPGNet model will be publicly available", "Detection and classification of road markings are a prerequisite for operating autonomous vehicles. Although most studies have focused on the detection of road lane markings, the detection and classification of other road markings, such as arrows and bike markings, have not received much attention. Therefore, we propose a detection and classification method for various types of arrow markings and bike markings on the road in various complex environments using a one-stage deep convolutional neural network (CNN), called RetinaNet. We tested the proposed method in complex road scenarios with three open datasets captured by visible light camera sensors, namely the Malaga urban dataset, the Cambridge dataset, and the Daimler dataset on both a desktop computer and an NVIDIA Jetson TX2 embedded system. Experimental results obtained using the three open databases showed that the proposed RetinaNet-based method outperformed other methods for detection and classification of road markings in terms of both accuracy and processing time.", "This paper presents a weakly-supervised learning system for real-time road marking detection using images of complex urban environments obtained from a monocular camera. We avoid expensive manual labelling by exploiting additional sensor modalities to generate large quantities of annotated images in a weakly-supervised way, which are then used to train a deep semantic segmentation network. At run time, the road markings in the scene are detected in real time in a variety of traffic situations and under different lighting and weather conditions without relying on any preprocessing steps or predefined models. We achieve reliable qualitative performance on the Oxford RobotCar dataset, and demonstrate quantitatively on the CamVid dataset that exploiting these annotations significantly reduces the required labelling effort and improves performance." ] }
1907.04569
2959551295
Road markings provide guidance to traffic participants and enforce safe driving behaviour; understanding their semantic meaning is therefore paramount in (automated) driving. However, producing the vast quantities of road marking labels required for training state-of-the-art deep networks is costly, time-consuming, and simply infeasible for every domain and condition. In addition, training data retrieved from virtual worlds often lack the richness and complexity of the real world and consequently cannot be used directly. In this paper, we provide an alternative approach in which new road marking training pairs are automatically generated. To this end, we apply principles of domain randomization to the road layout and synthesize new images from altered semantic labels. We demonstrate that training on these synthetic pairs improves mIoU of the segmentation of rare road marking classes during real-world deployment in complex urban environments by more than 12 percentage points, while performance for other classes is retained. This framework can easily be scaled to all domains and conditions to generate large-scale road marking datasets, while avoiding manual labelling effort.
However, virtual data lacks the richness and complexity of the real world. A possible alternative is to augment real-world data. For the task of semantic segmentation, this means either generating new, photo-realistic images from semantic labels @cite_26 @cite_5 @cite_7 or enriching semantic labels with virtually-generated information @cite_8 . Both of these principles are applied in this paper. For object detection tasks, the main difficulty is to place the (dynamic) objects coherently into the scene. The simplest solution is random object placement (i.e., domain randomization) @cite_19 . Alternatively, the authors of @cite_6 @cite_2 place photo-realistic, synthetic cars into real-world images by taking into account the geometry of the scene. The most recent approaches @cite_25 @cite_0 @cite_4 @cite_28 learn context-aware object placement from real-world examples. However, placing dynamic objects such as pedestrians seems less complex than placing road markings, because the space of realistic solutions is less restrictive. Therefore, we place road markings randomly onto the road surface in this paper.
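A minimal sketch of the random placement idea in label space: sample a position on the road surface and stamp a marking template into the semantic label map, from which an image-synthesis network can then render a photo-realistic training image. The class ids, template shape, and rejection test below are hypothetical, not the authors' exact procedure.

```python
import numpy as np

ROAD, ARROW = 1, 7  # hypothetical class ids in the semantic label map

def place_random_marking(label_map, marking_mask, rng, max_tries=50):
    """Stamp a binary marking template onto a random road-surface location;
    reject positions where the template would leave the road."""
    mh, mw = marking_mask.shape
    H, W = label_map.shape
    for _ in range(max_tries):
        y = rng.integers(0, H - mh)
        x = rng.integers(0, W - mw)
        patch = label_map[y:y + mh, x:x + mw]
        # Accept only if every marking pixel would land on road surface.
        if np.all(patch[marking_mask] == ROAD):
            patch[marking_mask] = ARROW  # writes into label_map via the view
            return True
    return False  # no valid placement found

rng = np.random.default_rng(0)
labels = np.full((256, 512), ROAD, dtype=np.uint8)            # toy all-road scene
arrow = np.zeros((40, 12), dtype=bool); arrow[:, 4:8] = True  # crude arrow shaft
place_random_marking(labels, arrow, rng)
# A label-to-image synthesis network would then render a photo-realistic
# training image from the altered label map.
```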
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_7", "@cite_8", "@cite_28", "@cite_6", "@cite_0", "@cite_19", "@cite_2", "@cite_5", "@cite_25" ], "mid": [ "2899412645", "", "", "2903103701", "", "2743627947", "", "2963201472", "", "", "2902804655" ], "abstract": [ "Semantic segmentation is one of the basic topics in computer vision, it aims to assign semantic labels to every pixel of an image. Unbalanced semantic label distribution could have a negative influence on segmentation accuracy. In this paper, we investigate using data augmentation approach to balance the semantic label distribution in order to improve segmentation performance. We propose using generative adversarial networks (GANs) to generate realistic images for improving the performance of semantic segmentation networks. Experimental results show that the proposed method can not only improve segmentation performance on those classes with low accuracy, but also obtain 1.3 to 2.1 increase in average segmentation accuracy. It shows that this augmentation method can boost accuracy and be easily applicable to any other segmentation models.", "", "", "In this paper, we make the first attempt to build a framework to simultaneously estimate semantic parts, shape, translation, and orientation of cars from single street view. Our framework contains three major contributions. Firstly, a novel domain adaptation approach based on the class consistency loss is developed to transfer our part segmentation model from the synthesized images to the real images. Secondly, we propose a novel network structure that leverages part-level features from street views and 3D losses for pose and shape estimation. Thirdly, we construct a high quality dataset that contains more than 300 different car models with physical dimensions and part-level annotations based on global and local deformations. We have conducted experiments on both synthesized data and real images. Our results show that the domain adaptation approach can bring 35.5 percentage point performance improvement in terms of mean intersection-over-union score (mIoU) comparing with the baseline network using domain randomization only. Our network for translation and orientation estimation achieves competitive performance on highly complex street views (e.g., 11 cars per image on average). Moreover, our network is able to reconstruct a list of 3D car models with part-level details from street views, which could benefit various applications such as fine-grained car recognition, vehicle re-identification, and traffic simulation.", "", "The success of deep learning in computer vision is based on the availability of large annotated datasets. To lower the need for hand labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. 
In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360 degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both realistic background appearance as well as a large number of complex object arrangements. Through an extensive set of experiments, we conclude the right set of parameters to produce augmented data which can maximally enhance the performance of instance segmentation models. Further, we demonstrate the utility of the proposed approach on training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios. We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that the models trained on augmented imagery generalize better than those trained on fully synthetic data or models trained on limited amounts of annotated real data.", "", "We present a system for training deep neural networks for object detection using synthetic images. To handle the variability in real-world data, the system relies upon the technique of domain randomization, in which the parameters of the simulator-such as lighting, pose, object textures, etc.-are randomized in non-realistic ways to force the neural network to learn the essential features of the object of interest. We explore the importance of these parameters, showing that it is possible to produce a network with compelling performance using only non-artistically-generated synthetic data. With additional fine-tuning on real data, the network yields better performance than using real data alone. This result opens up the possibility of using inexpensive synthetic data for training neural networks while avoiding the need to collect large amounts of hand-annotated real-world data or to generate high-fidelity synthetic worlds-both of which remain bottlenecks for many applications. The approach is evaluated on bounding box detection of cars on the KITTI dataset.", "", "", "We address the issue of learning from synthetic domain randomized data effectively. While previous works have showcased domain randomization as an effective learning approach, it lacks in challenging the learner and wastes valuable compute on generating easy examples. This can be attributed to uniform randomization over the rendering parameter distribution. In this work, firstly we provide a theoretical perspective on characteristics of domain randomization and analyze its limitations. As a solution to these limitations, we propose a novel algorithm which closes the loop between the synthetic generative model and the learner in an adversarial fashion. Our framework easily extends to the scenario when there is unlabelled target data available, thus incorporating domain adaptation. 
We evaluate our method on diverse vision tasks using state-of-the-art simulators for public datasets like CLEVR, Syn2Real, and VIRAT, where we demonstrate that a learner trained using adversarial data generation performs better than using a random data generation strategy." ] }
1907.04569
2959551295
Road markings provide guidance to traffic participants and enforce safe driving behaviour, understanding their semantic meaning is therefore paramount in (automated) driving. However, producing the vast quantities of road marking labels required for training state-of-the-art deep networks is costly, time-consuming, and simply infeasible for every domain and condition. In addition, training data retrieved from virtual worlds often lack the richness and complexity of the real world and consequently cannot be used directly. In this paper, we provide an alternative approach in which new road marking training pairs are automatically generated. To this end, we apply principles of domain randomization to the road layout and synthesize new images from altered semantic labels. We demonstrate that training on these synthetic pairs improves mIoU of the segmentation of rare road marking classes during real-world deployment in complex urban environments by more than 12 percentage points, while performance for other classes is retained. This framework can easily be scaled to all domains and conditions to generate large-scale road marking datasets, while avoiding manual labelling effort.
Recently, several approaches have been introduced for more complex scene manipulation, beyond simple augmentation. Additional sensor modalities are used in @cite_29 to offer the flexibility (e.g. different viewpoints) of a virtual simulator, while generating data with the fidelity and richness of real-world images. The authors of @cite_35 introduce a probabilistic programming language to synthesize complex scenarios from existing domain knowledge. Another system @cite_21 offers similar levels of control, while also modelling the camera sensor accurately. These frameworks potentially offer a way to generate improved training data for our approach.
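The flavour of such scenario-synthesis frameworks can be approximated in plain Python as priors over scene parameters with declaratively stated constraints enforced by rejection sampling; this is a loose analogy, not the actual syntax or semantics of the cited systems, and all parameters and constraints below are invented for illustration.

```python
import random

def sample_scene(max_tries=100):
    """Rejection-sample a toy scene description: draw parameters from
    priors, keep only samples satisfying the declared constraints."""
    for _ in range(max_tries):
        scene = {
            "time_of_day": random.choice(["day", "dusk", "night"]),
            "n_cars": random.randint(0, 8),
            "rain": random.random() < 0.3,
            "ego_speed_kmh": random.uniform(0, 90),
        }
        # Declarative-style constraints, e.g. heavy traffic only at low speed.
        if scene["n_cars"] > 5 and scene["ego_speed_kmh"] > 40:
            continue
        if scene["time_of_day"] == "night" and scene["rain"]:
            continue  # a combination we choose not to model
        return scene
    raise RuntimeError("constraints too tight for the priors")

print(sample_scene())
```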
{ "cite_N": [ "@cite_35", "@cite_29", "@cite_21" ], "mid": [ "2894520414", "2950317706", "2912124074" ], "abstract": [ "Synthetic data has proved increasingly useful in both training and testing machine learning models such as neural networks. The major problem in synthetic data generation is producing meaningful data that is not simply random but reflects properties of real-world data or covers particular cases of interest. In this paper, we show how a probabilistic programming language can be used to guide data synthesis by encoding domain knowledge about what data is useful. Specifically, we focus on data sets arising from \"scenes\", configurations of physical objects; for example, images of cars on a road. We design a domain-specific language, Scenic, for describing \"scenarios\" that are distributions over scenes. The syntax of Scenic makes it easy to specify complex relationships between the positions and orientations of objects. As a probabilistic programming language, Scenic allows assigning distributions to features of the scene, as well as declaratively imposing hard and soft constraints over the scene. A Scenic scenario thereby implicitly defines a distribution over scenes, and we formulate the problem of sampling from this distribution as \"scene improvisation\". We implement an improviser for Scenic scenarios and apply it in a case study generating synthetic data sets for a convolutional neural network designed to detect cars in road images. Our experiments demonstrate the usefulness of our approach by using Scenic to analyze and improve the performance of the network in various scenarios.", "Simulation systems have become an essential component in the development and validation of autonomous driving technologies. The prevailing state-of-the-art approach for simulation is to use game engines or high-fidelity computer graphics (CG) models to create driving scenarios. However, creating CG models and vehicle movements (e.g., the assets for simulation) remains a manual task that can be costly and time-consuming. In addition, the fidelity of CG images still lacks the richness and authenticity of real-world images and using these images for training leads to degraded performance. In this paper we present a novel approach to address these issues: Augmented Autonomous Driving Simulation (AADS). Our formulation augments real-world pictures with a simulated traffic flow to create photo-realistic simulation images and renderings. More specifically, we use LiDAR and cameras to scan street scenes. From the acquired trajectory data, we generate highly plausible traffic flows for cars and pedestrians and compose them into the background. The composite images can be re-synthesized with different viewpoints and sensor models. The resulting images are photo-realistic, fully annotated, and ready for end-to-end training and testing of autonomous driving systems from perception to planning. We explain our system design and validate our algorithms with a number of autonomous driving tasks from detection to segmentation and predictions. Compared to traditional approaches, our method offers unmatched scalability and realism. Scalability is particularly important for AD simulation and we believe the complexity and diversity of the real world cannot be realistically captured in a virtual environment. 
Our augmented approach combines the flexibility in a virtual environment (e.g., vehicle movements) with the richness of the real world to allow effective simulation of anywhere on earth.", "We describe an open-source simulator that creates sensor irradiance and sensor images of typical automotive scenes in urban settings. The purpose of the system is to support camera design and testing for automotive applications. The user can specify scene parameters (e.g., scene type, road type, traffic density, time of day) to assemble a large number of random scenes from graphics assets stored in a database. The sensor irradiance is generated using quantitative computer graphics methods, and the sensor images are created using image systems sensor simulation. The synthetic sensor images have pixel level annotations; hence, they can be used to train and evaluate neural networks for imaging tasks, such as object detection and classification. The end-to-end simulation system supports quantitative assessment, from scene to camera to network accuracy, for automotive applications." ] }
1907.04868
2959020461
We are interested in the task of generating multi-instrumental music scores. The Transformer architecture has recently shown great promise for the task of piano score generation; here we adapt it to the multi-instrumental setting. Transformers are complex, high-dimensional language models which are capable of capturing long-term structure in sequence data, but require large amounts of data to fit. Their success on piano score generation is partially explained by the large volumes of symbolic data readily available for that domain. We leverage the recently-introduced NES-MDB dataset of four-instrument scores from an early video game sound synthesis chip (the NES), which we find to be well-suited to training with the Transformer architecture. To further improve the performance of our model, we propose a pre-training technique to leverage the information in a large collection of heterogeneous music, namely the Lakh MIDI dataset. Despite differences between the two corpora, we find that this transfer learning procedure improves both quantitative and qualitative performance for our primary task.
Music generation has been an active area of research for decades. Most early work involved manually encoding musical rules into generative systems or rearranging fragments of human-composed music; see @cite_13 for an extensive overview. Recent research has favored machine learning systems which automatically extract patterns from corpora of human-composed music.
{ "cite_N": [ "@cite_13" ], "mid": [ "1556624199" ], "abstract": [ "Algorithmic composition composing by means of formalizable methods has a century old tradition not only in occidental music history. This is the first book to provide a detailed overview of prominent procedures of algorithmic composition in a pragmatic way rather than by treating formalizable aspects in single works. In addition to an historic overview, each chapter presents a specific class of algorithm in a compositional context by providing a general introduction to its development and theoretical basis and describes different musical applications. Each chapter outlines the strengths, weaknesses and possible aesthetical implications resulting from the application of the treated approaches. Topics covered are: markov models, generative grammars, transition networks, chaos and self-similarity, genetic algorithms, cellular automata, neural networks and artificial intelligence are covered. The comprehensive bibliography makes this work ideal for the musician and the researcher alike." ] }
1907.04868
2959020461
We are interested in the task of generating multi-instrumental music scores. The Transformer architecture has recently shown great promise for the task of piano score generation; here we adapt it to the multi-instrumental setting. Transformers are complex, high-dimensional language models which are capable of capturing long-term structure in sequence data, but require large amounts of data to fit. Their success on piano score generation is partially explained by the large volumes of symbolic data readily available for that domain. We leverage the recently-introduced NES-MDB dataset of four-instrument scores from an early video game sound synthesis chip (the NES), which we find to be well-suited to training with the Transformer architecture. To further improve the performance of our model, we propose a pre-training technique to leverage the information in a large collection of heterogeneous music, namely the Lakh MIDI dataset. Despite differences between the two corpora, we find that this transfer learning procedure improves both quantitative and qualitative performance for our primary task.
Other research focuses on the multi-instrumental setting and seeks to provide systems which can harmonize with human-composed material @cite_31 @cite_12 @cite_11 @cite_0 . Unlike the system we develop here, these approaches all require complex inference procedures to generate music without human input. Recent work @cite_18 @cite_28 @cite_9 attempts multi-instrumental music generation from scratch, but these methods are limited to generating fixed lengths, unlike our method, which can generate arbitrarily-long sequences. There is also music generation research that operates in the audio domain @cite_1 @cite_2 , though this work is largely unrelated to symbolic-domain methods. The work described in this paper is methodologically similar to MuseNet @cite_25 , which was concurrent with our work.
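To make the symbolic setting concrete, here is a hedged sketch of the kind of event-based tokenization a Transformer language model consumes in the multi-instrumental case; the vocabulary layout is an assumption (channel names follow the NES convention of two pulse waves, triangle, and noise), not the paper's exact encoding.

```python
# Each (instrument, pitch, time-delta) becomes one or two discrete tokens
# that a Transformer can model autoregressively.

INSTRUMENTS = ["P1", "P2", "TR", "NO"]  # NES channels: pulse 1/2, triangle, noise

def encode(notes):
    """notes: list of (instrument, pitch, delta_ticks) tuples -> token strings."""
    tokens = []
    for inst, pitch, delta in notes:
        if delta > 0:
            tokens.append(f"WT_{delta}")         # wait / time-shift event
        tokens.append(f"{inst}_NOTEON_{pitch}")  # note-on event for one channel
    return tokens

song = [("P1", 60, 0), ("TR", 36, 0), ("P1", 64, 120)]
print(encode(song))
# ['P1_NOTEON_60', 'TR_NOTEON_36', 'WT_120', 'P1_NOTEON_64']

# Transfer learning then amounts to: (1) mapping Lakh MIDI scores into this
# same vocabulary, (2) pre-training the Transformer on those sequences, and
# (3) fine-tuning on the NES-MDB corpus with the same training objective.
```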
{ "cite_N": [ "@cite_18", "@cite_28", "@cite_9", "@cite_1", "@cite_0", "@cite_2", "@cite_31", "@cite_25", "@cite_12", "@cite_11" ], "mid": [ "2964289981", "2792210438", "2963681776", "2894295011", "2902184207", "2962942158", "2161850243", "", "2753868141", "2963575853" ], "abstract": [ "", "The Variational Autoencoder (VAE) has proven to be an effective model for producing semantically meaningful latent representations for natural data. However, it has thus far seen limited application to sequential data, and, as we demonstrate, existing recurrent VAE models have difficulty modeling sequences with long-term structure. To address this issue, we propose the use of a hierarchical decoder, which first outputs embeddings for subsequences of the input and then uses these embeddings to generate each subsequence independently. This structure encourages the model to utilize its latent code, thereby avoiding the \"posterior collapse\" problem which remains an issue for recurrent VAEs. We apply this architecture to modeling sequences of musical notes and find that it exhibits dramatically better sampling, interpolation, and reconstruction performance than a \"flat\" baseline model. An implementation of our \"MusicVAE\" is available online at this http URL", "", "", "", "Realistic music generation is a challenging task. When building generative models of music that are learnt from data, typically high-level representations such as scores or MIDI are used that abstract away the idiosyncrasies of a particular performance. But these nuances are very important for our perception of musicality and realism, so in this work we embark on modelling music in the raw audio domain. It has been shown that autoregressive models excel at generating raw audio waveforms of speech, but when applied to music, we find them biased towards capturing local signal structure at the expense of modelling long-range correlations. This is problematic because music exhibits structure at many different timescales. In this work, we explore autoregressive discrete autoencoders (ADAs) as a means to enable autoregressive models to capture long-range correlations in waveforms. We find that they allow us to unconditionally generate piano music directly in the raw audio domain, which shows stylistic consistency across tens of seconds.", "We describe how we used a data set of chorale harmonisations composed by Johann Sebastian Bach to train Hidden Markov Models. Using a probabilistic framework allows us to create a harmonisation system which learns from examples, and which can compose new harmonisations. We make a quantitative comparison of our system's harmonisation performance against simpler models, and provide example harmonisations.", "", "Machine learning models of music typically break down the task of composition into a chronological process, composing a piece of music in a single pass from beginning to end. On the contrary, human composers write music in a nonlinear fashion, scribbling motifs here and there, often revisiting choices previously made. We explore the use of blocked Gibbs sampling as an analogue to the human approach, and introduce Coconet, a convolutional neural network in the NADE family of generative models. Despite ostensibly sampling from the same distribution as the NADE ancestral sampling procedure, we find that a blocked Gibbs approach significantly improves sample quality. We provide evidence that this is due to some conditional distributions being poorly modeled. 
Moreover, we show that even the cheap approximate blocked Gibbs procedure from (2014) yields better samples than ancestral sampling. We demonstrate the versatility of our method on unconditioned polyphonic music generation.", "This paper introduces DeepBach, a graphical model aimed at modeling polyphonic music and specifically hymn-like pieces. We claim that, after being trained on the chorale harmonizations by Johann Sebastian Bach, our model is capable of generating highly convincing chorales in the style of Bach. DeepBach's strength comes from the use of pseudo-Gibbs sampling coupled with an adapted representation of musical data. This is in contrast with many automatic music composition approaches which tend to compose music sequentially. Our model is also steerable in the sense that a user can constrain the generation by imposing positional constraints such as notes, rhythms or cadences in the generated score. We also provide a plugin on top of the MuseScore music editor making the interaction with Deep-Bach easy to use." ] }
1907.04669
2960010413
When predictive models are used to support complex and important decisions, the ability to explain a model's reasoning can increase trust, expose hidden biases, and reduce vulnerability to adversarial attacks. However, attempts at interpreting models are often ad hoc and application-specific, and the concept of interpretability itself is not well-defined. We propose a general optimization framework to create explanations for linear models. Our methodology decomposes a linear model into a sequence of models of increasing complexity using coordinate updates on the coefficients. Computing this decomposition optimally is a difficult optimization problem for which we propose exact algorithms and scalable heuristics. By solving this problem, we can derive a parametrized family of interpretability metrics for linear models that generalizes typical proxies, and study the tradeoff between interpretability and predictive accuracy.
Many interpretable machine learning approaches involve optimizing some characteristics of the model as proxies for interpretability. Examples include sparsity for linear models @cite_14 , number of splits for decision trees @cite_29 , number of subspace features for case-based reasoning @cite_26 , or depth for rule lists @cite_1 @cite_5 . Some approaches optimize these proxies directly, while others fit auxiliary simple models to more complex black-box models @cite_4 @cite_6 @cite_7 @cite_17 @cite_27 @cite_24 .
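One of the strategies listed above, fitting an auxiliary simple model to a black box, can be sketched in a few lines of scikit-learn: train a shallow decision tree to mimic the black box's own predictions and measure how faithfully it does so. The dataset, models, and tree depth below are placeholders, not choices taken from the cited works.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global surrogate: a small tree trained to reproduce the black box's
# predictions rather than the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the interpretable proxy agrees with the black box.
fidelity = np.mean(surrogate.predict(X) == black_box.predict(X))
print(f"surrogate fidelity: {fidelity:.3f}")
```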
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_4", "@cite_7", "@cite_29", "@cite_1", "@cite_6", "@cite_24", "@cite_27", "@cite_5", "@cite_17" ], "mid": [ "1523985187", "2963673242", "2282821441", "2613463286", "1594031697", "2962861173", "", "", "", "2964112969", "2617799811" ], "abstract": [ "Discover New Methods for Dealing with High-Dimensional Data A sparse statistical model has only a small number of nonzero parameters or weights; therefore, it is much easier to estimate and interpret than a dense model. Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data. Top experts in this rapidly evolving field, the authors describe the lasso for linear regression and a simple coordinate descent algorithm for its computation. They discuss the application of 1 penalties to generalized linear models and support vector machines, cover generalized penalties such as the elastic net and group lasso, and review numerical methods for optimization. They also present statistical inference methods for fitted (lasso) models, including the bootstrap, Bayesian methods, and recently developed approaches. In addition, the book examines matrix decomposition, sparse multivariate analysis, graphical models, and compressed sensing. It concludes with a survey of theoretical results for the lasso. In this age of big data, the number of features measured on a person or object can be large and might be larger than the number of observations. This book shows how the sparsity assumption allows us to tackle these problems and extract useful and reproducible patterns from big datasets. Data analysts, computer scientists, and theorists will appreciate this thorough and up-to-date treatment of sparse statistical modeling.", "We present the Bayesian Case Model (BCM), a general framework for Bayesian case-based reasoning (CBR) and prototype classification and clustering. BCM brings the intuitive power of CBR to a Bayesian generative framework. The BCM learns prototypes, the \"quintessential\" observations that best represent clusters in a dataset, by performing joint inference on cluster labels, prototypes and important features. Simultaneously, BCM pursues sparsity by learning subspaces, the sets of features that play important roles in the characterization of the prototypes. The prototype and subspace representation provides quantitative benefits in interpretability while preserving classification accuracy. Human subject experiments verify statistically significant improvements to participants' understanding when using explanations produced by BCM, compared to those given by prior art.", "Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally varound the prediction. 
We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.", "Algorithmic systems that employ machine learning are often opaque—it is difficult to explain why a certain decision was made. We present a formal foundation to improve the transparency of such decision-making systems. Specifically, we introduce a family of Quantitative Input Influence (QII) measures that capture the degree of input influence on system outputs. These measures provide a foundation for the design of transparency reports that accompany system decisions (e.g., explaining a specific credit decision) and for testing tools useful for internal and external oversight (e.g., to detect algorithmic discrimination). Distinctively, our causal QII measures carefully account for correlated inputs while measuring influence. They support a general class of transparency queries and can, in particular, explain decisions about individuals and groups. Finally, since single inputs may not always have high influence, the QII measures also quantify the joint influence of a set of inputs (e.g., age and income) on outcomes (e.g. loan decisions) and the average marginal influence of individual inputs within such a set (e.g., income) using principled aggregation measures, such as the Shapley value, previously applied to measure influence in voting.", "Background. Introduction to Tree Classification. Right Sized Trees and Honest Estimates. Splitting Rules. Strengthening and Interpreting. Medical Diagnosis and Prognosis. Mass Spectra Classification. Regression Trees. Bayes Rules and Partitions. Optimal Pruning. Construction of Trees from a Learning Sample. Consistency. Bibliography. Notation Index. Subject Index.", "We aim to produce predictive models that are not only accurate, but are also interpretable to human experts. Our models are decision lists, which consist of a series of if...then... statements (for example, if high blood pressure, then stroke) that discretize a high-dimensional, multivariate feature space into a series of simple, readily interpretable decision statements. We introduce a generative model called Bayesian Rule Lists that yields a posterior distribution over possible decision lists. It employs a novel prior structure to encourage sparsity. Our experiments show that Bayesian Rule Lists has predictive accuracy on par with the current top algorithms for prediction in machine learning. Our method is motivated by recent developments in personalized medicine, and can be used to produce highly accurate and interpretable medical scoring systems. We demonstrate this by producing an alternative to the CHADS2 score, actively used in clinical practice for estimating the risk of stroke in patients that have atrial fibrillation. Our model is as interpretable as CHADS2, but more accurate.", "", "", "", "We present an algorithm for building probabilistic rule lists that is two orders of magnitude faster than previous work. 
Rule list algorithms are competitors for decision tree algorithms. They are associative classifiers, in that they are built from pre-mined association rules. They have a logical structure that is a sequence of IF-THEN rules, identical to a decision list or one-sided decision tree. Instead of using greedy splitting and pruning like decision tree algorithms, we aim to fully optimize over rule lists, striking a practical balance between accuracy, interpretability, and computational speed. The algorithm presented here uses a mixture of theoretical bounds (tight enough to have practical implications as a screening or bounding procedure), computational reuse, and highly tuned language libraries to achieve computational efficiency. Currently, for many practical problems, this method achieves better accuracy and sparsity than decision trees, with practical running times. The predictions in each leaf are probabilistic.", "Interpretability has become incredibly important as machine learning is increasingly used to inform consequential decisions. We propose to construct global explanations of complex, blackbox models in the form of a decision tree approximating the original model---as long as the decision tree is a good approximation, then it mirrors the computation performed by the blackbox model. We devise a novel algorithm for extracting decision tree explanations that actively samples new training points to avoid overfitting. We evaluate our algorithm on a random forest to predict diabetes risk and a learned controller for cart-pole. Compared to several baselines, our decision trees are both substantially more accurate and equally or more interpretable based on a user study. Finally, we describe several insights provided by our interpretations, including a causal issue validated by a physician." ] }
1907.04669
2960010413
When predictive models are used to support complex and important decisions, the ability to explain a model's reasoning can increase trust, expose hidden biases, and reduce vulnerability to adversarial attacks. However, attempts at interpreting models are often ad hoc and application-specific, and the concept of interpretability itself is not well-defined. We propose a general optimization framework to create explanations for linear models. Our methodology decomposes a linear model into a sequence of models of increasing complexity using coordinate updates on the coefficients. Computing this decomposition optimally is a difficult optimization problem for which we propose exact algorithms and scalable heuristics. By solving this problem, we can derive a parametrized family of interpretability metrics for linear models that generalizes typical proxies, and study the tradeoff between interpretability and predictive accuracy.
In the specific case of linear models, the typical interpretability proxy of sparsity (a small number of nonzero coefficients) has been a topic of extensive study over the past twenty years @cite_14 . Sparse regression models can be trained using heuristics such as LASSO @cite_18 , stagewise regression @cite_10 or least-angle regression @cite_12 , or using scalable mixed-integer approaches @cite_19 . More recently, another factor of interpretability in linear models has involved imposing integrality on the coefficients @cite_23 @cite_15 , which makes it possible to think of the output as tallying up points from each feature into a final score.
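The integer-coefficient idea can be sketched directly from the description of select-regress-and-round in @cite_23: select features with an L1 penalty, refit on the survivors, then rescale and round the coefficients to small integer points. The dataset, regularization strength, and the +/-3 point range below are illustrative choices, not values from the cited work.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=5, random_state=0)

# Select: an L1-penalized fit zeroes out most coefficients.
select = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)
kept = np.flatnonzero(select.coef_[0])

# Regress: refit without the L1 penalty on the surviving features.
refit = LogisticRegression().fit(X[:, kept], y)
beta = refit.coef_[0]

# Round: rescale so the largest weight maps to +/-3 points, then round.
points = np.round(3 * beta / np.abs(beta).max()).astype(int)
print(dict(zip(kept.tolist(), points.tolist())))  # feature index -> points
```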
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_19", "@cite_23", "@cite_15", "@cite_10", "@cite_12" ], "mid": [ "2135046866", "1523985187", "2963351303", "2588168955", "2164878629", "1885924565", "2063978378" ], "abstract": [ "SUMMARY We propose a new method for estimation in linear models. The 'lasso' minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactly 0 and hence gives interpretable models. Our simulation studies suggest that the lasso enjoys some of the favourable properties of both subset selection and ridge regression. It produces interpretable models like subset selection and exhibits the stability of ridge regression. There is also an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described.", "Discover New Methods for Dealing with High-Dimensional Data A sparse statistical model has only a small number of nonzero parameters or weights; therefore, it is much easier to estimate and interpret than a dense model. Statistical Learning with Sparsity: The Lasso and Generalizations presents methods that exploit sparsity to help recover the underlying signal in a set of data. Top experts in this rapidly evolving field, the authors describe the lasso for linear regression and a simple coordinate descent algorithm for its computation. They discuss the application of 1 penalties to generalized linear models and support vector machines, cover generalized penalties such as the elastic net and group lasso, and review numerical methods for optimization. They also present statistical inference methods for fitted (lasso) models, including the bootstrap, Bayesian methods, and recently developed approaches. In addition, the book examines matrix decomposition, sparse multivariate analysis, graphical models, and compressed sensing. It concludes with a survey of theoretical results for the lasso. In this age of big data, the number of features measured on a person or object can be large and might be larger than the number of observations. This book shows how the sparsity assumption allows us to tackle these problems and extract useful and reproducible patterns from big datasets. Data analysts, computer scientists, and theorists will appreciate this thorough and up-to-date treatment of sparse statistical modeling.", "In the period 1991–2015, algorithmic advances in Mixed Integer Optimization (MIO) coupled with hardware improvements have resulted in an astonishing 450 billion factor speedup in solving MIO problems. We present a MIO approach for solving the classical best subset selection problem of choosing k out of p features in linear regression given n observations. We develop a discrete extension of modern first-order continuous optimization methods to find high quality feasible solutions that we use as warm starts to a MIO solver that finds provably optimal solutions. The resulting algorithm (a) provides a solution with a guarantee on its suboptimality even if we terminate the algorithm early, (b) can accommodate side constraints on the coefficients of the linear regression and (c) extends to finding best subset solutions for the least absolute deviation loss function. 
Using a wide variety of synthetic and real datasets, we demonstrate that our approach solves problems with n in the 1000s and p in the 100s in minutes to provable optimality, and finds near optimal solutions for n in the 100s and p in the 1000s in minutes. We also establish via numerical experiments that the MIO approach performs better than Lasso and other popularly used sparse learning procedures, in terms of achieving sparse solutions with good predictive power.", "From doctors diagnosing patients to judges setting bail, experts often base their decisions on experience and intuition rather than on statistical models. While understandable, relying on intuition over models has often been found to result in inferior outcomes. Here we present a new method, select-regress-and-round, for constructing simple rules that perform well for complex decisions. These rules take the form of a weighted checklist, can be applied mentally, and nonetheless rival the performance of modern machine learning algorithms. Our method for creating these rules is itself simple, and can be carried out by practitioners with basic statistics knowledge. We demonstrate this technique with a detailed case study of judicial decisions to release or detain defendants while they await trial. In this application, as in many policy settings, the effects of proposed decision rules cannot be directly observed from historical data: if a rule recommends releasing a defendant that the judge in reality detained, we do not observe what would have happened under the proposed action. We address this key counterfactual estimation problem by drawing on tools from causal inference. We find that simple rules significantly outperform judges and are on par with decisions derived from random forests trained on all available features. Generalizing to 22 varied decision-making domains, we find this basic result replicates. We conclude with an analytical framework that helps explain why these simple decision rules perform as well as they do.", "Scoring systems are linear classification models that only require users to add, subtract and multiply a few small numbers in order to make a prediction. These models are in widespread use by the medical community, but are difficult to learn from data because they need to be accurate and sparse, have coprime integer coefficients, and satisfy multiple operational constraints. We present a new method for creating data-driven scoring systems called a Supersparse Linear Integer Model (SLIM). SLIM scoring systems are built by using an integer programming problem that directly encodes measures of accuracy (the 0---1 loss) and sparsity (the @math l0-seminorm) while restricting coefficients to coprime integers. SLIM can seamlessly incorporate a wide range of operational constraints related to accuracy and sparsity, and can produce acceptable models without parameter tuning because of the direct control provided over these quantities. We provide bounds on the testing and training accuracy of SLIM scoring systems, and present a new data reduction technique that can improve scalability by eliminating a portion of the training data beforehand. 
Our paper includes results from a collaboration with the Massachusetts General Hospital Sleep Laboratory, where SLIM is being used to create a highly tailored scoring system for sleep apnea screening.", "We describe the problem of “selective inference.” This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have “cherry-picked”—searched for the strongest associations—means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis.", "The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to select a parsimonious set for the efficient prediction of a response variable. Least Angle Regression (LARS), a new model selection algorithm, is a useful and less greedy version of traditional forward selection methods. Three main properties are derived: (1) A simple modification of the LARS algorithm implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS modification efficiently implements Forward Stagewise linear regression, another promising new model selection method; this connection explains the similar numerical results previously observed for the Lasso and Stagewise, and helps us understand the properties of both methods, which are seen as constrained versions of the simpler LARS algorithm. (3) A simple approximation for the degrees of freedom of a LARS estimate is available, from which we derive a Cp estimate of prediction error; this allows a principled choice among the range of possible LARS estimates. LARS and its variants are computationally efficient: the paper describes a publicly available algorithm that requires only the same order of magnitude of computational effort as ordinary least squares applied to the full set of covariates." ] }
1907.04669
2960010413
When predictive models are used to support complex and important decisions, the ability to explain a model's reasoning can increase trust, expose hidden biases, and reduce vulnerability to adversarial attacks. However, attempts at interpreting models are often ad hoc and application-specific, and the concept of interpretability itself is not well-defined. We propose a general optimization framework to create explanations for linear models. Our methodology decomposes a linear model into a sequence of models of increasing complexity using coordinate updates on the coefficients. Computing this decomposition optimally is a difficult optimization problem for which we propose exact algorithms and scalable heuristics. By solving this problem, we can derive a parametrized family of interpretability metrics for linear models that generalizes typical proxies, and study the tradeoff between interpretability and predictive accuracy.
Training low-complexity models often reduces predictive accuracy, and the tradeoff between the two can be difficult to quantify @cite_9 . Similarly, the limitations of an ex post explanation relative to the original black-box model can be difficult to convey to users @cite_16 , and it is not clear that practitioners always find models that optimize these proxies more interpretable @cite_8 . Recent landmark works @cite_3 @cite_22 @cite_16 have argued that any study of interpretability must include input from human users. The framework we propose is both human-driven and mathematically rigorous, as users can define their own understanding of interpretability and quantify the resulting tradeoff with accuracy.
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_9", "@cite_3", "@cite_16" ], "mid": [ "2439568532", "2119315254", "2160455305", "2594475271", "2807015674" ], "abstract": [ "Supervised machine learning models boast remarkable predictive capabilities. But can you trust your model? Will it work in deployment? What else can it tell you about the world? We want models to be not only good, but interpretable. And yet the task of interpretation appears underspecified. Papers provide diverse and sometimes non-overlapping motivations for interpretability, and offer myriad notions of what attributes render models interpretable. Despite this ambiguity, many papers proclaim interpretability axiomatically, absent further explanation. In this paper, we seek to refine the discourse on interpretability. First, we examine the motivations underlying interest in interpretability, finding them to be diverse and occasionally discordant. Then, we address model properties and techniques thought to confer interpretability, identifying transparency to humans and post-hoc explanations as competing notions. Throughout, we discuss the feasibility and desirability of different notions, and question the oft-made assertions that linear models are interpretable and that deep neural networks are not.", "Abstract Widespread use of medical information systems and explosive growth of medical databases require traditional manual data analysis to be coupled with methods for efficient computer-assisted analysis. This paper presents selected data mining techniques that can be applied in medicine, and in particular some machine learning techniques including the mechanisms that make them better suited for the analysis of medical databases (derivation of symbolic rules, use of background knowledge, sensitivity and specificity of induced descriptions). The importance of the interpretability of results of data analysis is discussed and illustrated on selected medical applications.", "There are two cultures in the use of statistical modeling to reach conclusions from data. One assumes that the data are generated bya given stochastic data model. The other uses algorithmic models and treats the data mechanism as unknown. The statistical communityhas been committed to the almost exclusive use of data models. This commit- ment has led to irrelevant theory, questionable conclusions, and has kept statisticians from working on a large range of interesting current prob- lems. Algorithmic modeling, both in theoryand practice, has developed rapidlyin fields outside statistics. It can be used both on large complex data sets and as a more accurate and informative alternative to data modeling on smaller data sets. If our goal as a field is to use data to solve problems, then we need to move awayfrom exclusive dependence on data models and adopt a more diverse set of tools.", "As machine learning systems become ubiquitous, there has been a surge of interest in interpretable machine learning: systems that provide explanation for their outputs. These explanations are often used to qualitatively assess other criteria such as safety or non-discrimination. However, despite the interest in interpretability, there is very little consensus on what interpretable machine learning is and how it should be measured. In this position paper, we first define interpretability and describe when interpretability is needed (and when it is not). 
Next, we suggest a taxonomy for rigorous evaluation and expose open questions towards a more rigorous science of interpretable machine learning.", "" ] }
1907.04685
2960705509
Learning robot control policies from physics simulations is of great interest to the robotics community as it may render the learning process faster, cheaper, and safer by alleviating the need for expensive real-world experiments. However, the direct transfer of learned behavior from simulation to reality is a major challenge. Optimizing a policy on a slightly faulty simulator can easily lead to the maximization of the 'Simulation Optimization Bias' (SOB). In this case, the optimizer exploits modeling errors of the simulator such that the resulting behavior can potentially damage the robot. We tackle this challenge by applying domain randomization, i.e., randomizing the parameters of the physics simulations during learning. We propose an algorithm called Simulation-based Policy Optimization with Transferability Assessment (SPOTA) which uses an estimator of the SOB to formulate a stopping criterion for training. The introduced estimator quantifies the over-fitting to the set of domains experienced while training. Our experimental results in two different environments show that the new simulation-based policy search algorithm is able to learn a control policy exclusively from a randomized simulator, which can be applied directly to the real system without any additional training on the latter.
Hobbs & Hepenstal @cite_7 proved for linear programs that optimization is optimistically biased, given that there are errors in estimating the objective function coefficients. Furthermore, they demonstrated the optimistic bias of a nonlinear program, and mentioned the effect of errors on the parameters of linear constraints. The optimization problem introduced in belongs to the class of SP for which the assumption required in @cite_7 are guaranteed to hold. The most common approaches to solve convex SP are sample average approximation methods, including: (i) the MRP and its derivatives @cite_6 @cite_10 which assess a solution's quality by comparing with sampled alternative solutions, and (ii) RA @cite_16 @cite_11 which iteratively improved the solution by lowering the error tolerance. Bastin al @cite_24 extended the existing convergence guarantees from convex to non-convex SP , showing almost sure convergence of the minimizers.
{ "cite_N": [ "@cite_7", "@cite_6", "@cite_24", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "1990277704", "2000953623", "1995878217", "2621667117", "2076635261", "1832379062" ], "abstract": [ "Does optimization systematically lead to solutions that appear better than they actually turn out to be when implemented? The answer can be yes if there are errors in estimating objective function coefficients. Even if such errors are unbiased, the calculated value of the objective function for the optimal solution will, in an expected value sense, overstate that solution's true performance. This presupposes that errors in the constraint set are relatively unimportant. The existence of such a bias is shown by proof; Monte Carlo simulations of two realistic water resources optimization problems show its significance for water planners. The most important implication is that the estimated net benefits of model solutions may be exaggerated compared to existing water systems, whose performance is generally known with more accuracy.", "A stochastic program SP with solution value z^* can be approximately solved by sampling n realizations of the program's stochastic parameters, and by solving the resulting ''approximating problem'' for (x^*\"n,z^*\"n). We show that, in expectation, z^*\"n is a lower bound on z^* and that this bound monotonically improves as n increases. The first result is used to construct confidence intervals on the optimality gap for any candidate solution x@^ to SP, e.g., x@^=x^*\"n. A sampling procedure based on common random numbers ensures nonnegative gap estimates and provides significant variance reduction over naive sampling on four test problems.", "Monte Carlo methods have extensively been used and studied in the area of stochastic programming. Their convergence properties typically consider global minimizers or first-order critical points of the sample average approximation (SAA) problems and minimizers of the true problem, and show that the former converge to the latter for increasing sample size. However, the assumption of global minimization essentially restricts the scope of these results to convex problems. We review and extend these results in two directions: we allow for local SAA minimizers of possibly nonconvex problems and prove, under suitable conditions, almost sure convergence of local second-order solutions of the SAA problem to second-order critical points of the true problem. We also apply this new theory to the estimation of mixed logit models for discrete choice analysis. New useful convergence properties are derived in this context, both for the constrained and unconstrained cases, and associated estimates of the simulation bias and variance are proposed.", "The stochastic root-finding problem (SRFP) is that of solving a nonlinear system of equations using only a simulation that provides estimates of the functions at requested points. Equivalently, SRFPs seek locations where an unknown vector function attains a given target using only a simulation capable of providing estimates of the function. SRFPs find application in a wide variety of physical settings. We develop a family of retrospective-approximation (RA) algorithms called Bounding RA that efficiently solves a certain class of multidimensional SRFPs. During each iteration, Bounding RA generates and solves a sample-path problem by identifying a polytope of stipulated diameter, with an image that bounds the given target to within stipulated tolerance. 
Across iterations, the stipulations become increasingly stringent, resulting in a sequence of shrinking polytopes that approach the correct solution. Efficiency results from: (i) the RA structure, (ii) the idea of using bounding polytopes to exploit problem structure, and (iii) careful step-size and direction choice during algorithm evolution. Bounding RA has good finite-time performance that is robust with respect to the location of the initial solution, and algorithm parameter values. Empirical tests suggest that Bounding RA outperforms Simultaneous Perturbation Stochastic Approximation (SPSA), which is arguably the best-known algorithm for solving SRFPs.", "Determining whether a solution is of high quality (optimal or near optimal) is fundamental in optimization theory and algorithms. In this paper, we develop Monte Carlo sampling-based procedures for assessing solution quality in stochastic programs. Quality is defined via the optimality gap and our procedures' output is a confidence interval on this gap. We review a multiple-replications procedure that requires solution of, say, 30 optimization problems and then, we present a result that justifies a computationally simplified single-replication procedure that only requires solving one optimization problem. Even though the single replication procedure is computationally significantly less demanding, the resulting confidence interval might have low coverage probability for small sample sizes for some problems. We provide variants of this procedure that require two replications instead of one and that perform better empirically. We present computational results for a newsvendor problem and for two-stage stochastic linear programs from the literature. We also discuss when the procedures perform well and when they fail, and we propose using ɛ-optimal solutions to strengthen the performance of our procedures.", "This chapter reviews the principles of sample average approximation (SAA) for solving simulation optimization problems. We provide an accessible overview of the area and survey interesting recent developments. We explain when one might want to use SAA and when one might expect it to provide good-quality solutions. We also review some of the key theoretical properties of the solutions obtained through SAA. We contrast SAA with stochastic approximation (SA) methods in terms of the computational effort required to obtain solutions of a given quality, explaining why SA “wins” asymptotically. However, an extension of SAA known as retrospective optimization can match the asymptotic convergence rate of SA, at least up to a multiplicative constant." ] }
1907.04685
2960705509
Learning robot control policies from physics simulations is of great interest to the robotics community as it may render the learning process faster, cheaper, and safer by alleviating the need for expensive real-world experiments. However, the direct transfer of learned behavior from simulation to reality is a major challenge. Optimizing a policy on a slightly faulty simulator can easily lead to the maximization of the 'Simulation Optimization Bias' (SOB). In this case, the optimizer exploits modeling errors of the simulator such that the resulting behavior can potentially damage the robot. We tackle this challenge by applying domain randomization, i.e., randomizing the parameters of the physics simulations during learning. We propose an algorithm called Simulation-based Policy Optimization with Transferability Assessment (SPOTA) which uses an estimator of the SOB to formulate a stopping criterion for training. The introduced estimator quantifies the over-fitting to the set of domains experienced while training. Our experimental results in two different environments show that the new simulation-based policy search algorithm is able to learn a control policy exclusively from a randomized simulator, which can be applied directly to the real system without any additional training on the latter.
Physics simulations have already been used successfully in robot learning. Traditionally, simulators operate on a single nominal model, which makes the direct transfer of policies from simulation to reality highly vulnerable to model uncertainties and biases. Thus, model-based control in most cases relies on fine-tuned dynamics models. The mismatch between the simulated and the real world has been addressed by robotics researchers from different viewpoints. Prominent examples are: adding noise to the observations and actions in order to mimic real-world sensor and actuator behavior @cite_12 , model generation and selection depending on the short-term state-action history @cite_17 , learning a transferability function which maps solutions to a score that quantifies how well the simulation matches reality @cite_5 , randomizing the physics simulation's parameters, and applying adversarial perturbations to the system, the last two of which are particularly related to this work.
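The first strategy listed above can be sketched as a thin noise-injection wrapper. This is an illustrative sketch only; the plain step-based environment interface and the noise scales are assumed conventions, not the API of any particular simulator:

```python
# Sketch of mimicking real-world sensors and actuators by injecting
# Gaussian noise into observations and actions of a wrapped environment.
import numpy as np

class NoisyEnv:
    def __init__(self, env, obs_std=0.01, act_std=0.05, seed=0):
        self.env, self.obs_std, self.act_std = env, obs_std, act_std
        self.rng = np.random.default_rng(seed)

    def step(self, action):
        # actuator noise: the command executed differs from the one issued
        noisy_action = action + self.rng.normal(0.0, self.act_std, np.shape(action))
        obs, reward, done = self.env.step(noisy_action)  # assumed interface
        # sensor noise: the policy never sees the exact simulator state
        noisy_obs = obs + self.rng.normal(0.0, self.obs_std, np.shape(obs))
        return noisy_obs, reward, done
```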
{ "cite_N": [ "@cite_5", "@cite_12", "@cite_17" ], "mid": [ "1978161072", "1481659984", "2001685400" ], "abstract": [ "The reality gap, which often makes controllers evolved in simulation inefficient once transferred onto the physical robot, remains a critical issue in evolutionary robotics (ER). We hypothesize that this gap highlights a conflict between the efficiency of the solutions in simulation and their transferability from simulation to reality: the most efficient solutions in simulation often exploit badly modeled phenomena to achieve high fitness values with unrealistic behaviors. This hypothesis leads to the transferability approach, a multiobjective formulation of ER in which two main objectives are optimized via a Pareto-based multiobjective evolutionary algorithm: 1) the fitness; and 2) the transferability, estimated by a simulation-to-reality (STR) disparity measure. To evaluate this second objective, a surrogate model of the exact STR disparity is built during the optimization. This transferability approach has been compared to two reality-based optimization methods, a noise-based approach inspired from Jakobi's minimal simulation methodology and a local search approach. It has been validated on two robotic applications: 1) a navigation task with an e-puck robot; and 2) a walking task with a 8-DOF quadrupedal robot. For both experimental setups, our approach successfully finds efficient and well-transferable controllers only with about ten experiments on the physical robot.", "The pitfalls of naive robot simulations have been recognised for areas such as evolutionary robotics. It has been suggested that carefully validated simulations with a proper treatment of noise may overcome these problems. This paper reports the results of experiments intended to test some of these claims. A simulation was constructed of a two-wheeled Khepera robot with IR and ambient light sensors. This included detailed mathematical models of the robot-environment interaction dynamics with empirically determined parameters. Artificial evolution was used to develop recurrent dynamical network controllers for the simulated robot, for obstacle-avoidance and light-seeking tasks, using different levels of noise in the simulation. The evolved controllers were down-loaded onto the real robot and the correspondence between behaviour in simulation and in reality was tested. The level of correspondence varied according to how much noise was used in the simulation, with very good results achieved when realistic quantities were applied. It has been demonstrated that it is possible to develop successful robot controllers in simulation that generate almost identical behaviours in reality, at least for a particular class of robot-environment interaction dynamics.", "Animals sustain the ability to operate after injury by creating qualitatively different compensatory behaviors. Although such robustness would be desirable in engineered systems, most machines fail in the face of unexpected damage. We describe a robot that can recover from such change autonomously, through continuous self-modeling. A four-legged machine uses actuation-sensation relationships to indirectly infer its own structure, and it then uses this self-model to generate forward locomotion. When a leg part is removed, it adapts the self-models, leading to the generation of alternative gaits. This concept may help develop more robust machines and shed light on self-modeling in animals." ] }
1907.04685
2960705509
Learning robot control policies from physics simulations is of great interest to the robotics community as it may render the learning process faster, cheaper, and safer by alleviating the need for expensive real-world experiments. However, the direct transfer of learned behavior from simulation to reality is a major challenge. Optimizing a policy on a slightly faulty simulator can easily lead to the maximization of the 'Simulation Optimization Bias' (SOB). In this case, the optimizer exploits modeling errors of the simulator such that the resulting behavior can potentially damage the robot. We tackle this challenge by applying domain randomization, i.e., randomizing the parameters of the physics simulations during learning. We propose an algorithm called Simulation-based Policy Optimization with Transferability Assessment (SPOTA) which uses an estimator of the SOB to formulate a stopping criterion for training. The introduced estimator quantifies the over-fitting to the set of domains experienced while training. Our experimental results in two different environments show that the new simulation-based policy search algorithm is able to learn a control policy exclusively from a randomized simulator, which can be applied directly to the real system without any additional training on the latter.
As done in @cite_20 @cite_18 @cite_23 @cite_1 @cite_2 , we use the domain parameter distribution as a prior which ensures the physical plausibility of each parameter. Note that specifying this distribution in the current state of the art requires the researcher to make design decisions. Chebotar et al. @cite_14 presented a promising method which adapts the domain parameter distribution using real-world data in the loop. The main advantage is that this approach alleviates the need for hand-tuning the distributions of the domain parameters, which is currently a significant part of the hyper-parameter search. However, the initial distribution still demands design decisions. On the downside, the adaptation requires data from the real robot, which is considered significantly more expensive to obtain. Since we aim to transfer without using any real-world data, the introduced method samples only from static probability distributions.
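A minimal sketch of sampling from such static priors is given below; the parameter names, ranges, and the commented-out make_simulator factory are hypothetical placeholders, not the setup of any cited paper:

```python
# Sketch: domain randomization with static (non-adapted) parameter priors.
import numpy as np

rng = np.random.default_rng(42)

PRIORS = {
    "mass":      lambda: rng.uniform(0.8, 1.2),    # kg, assumed range
    "friction":  lambda: rng.uniform(0.05, 0.3),
    "motor_lag": lambda: rng.normal(0.02, 0.005),  # s, assumed range
}

def sample_domain():
    # one draw per training episode; the distributions stay fixed,
    # i.e., no adaptation from real-world rollouts is required
    return {name: draw() for name, draw in PRIORS.items()}

for episode in range(3):
    params = sample_domain()
    # env = make_simulator(**params)  # hypothetical simulator factory
    print(f"episode {episode}: {params}")
```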
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_1", "@cite_23", "@cite_2", "@cite_20" ], "mid": [ "2529477964", "2897345632", "2767050701", "", "2963614114", "2205975260" ], "abstract": [ "Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are represented using rich function approximators like deep neural networks. Model-based methods where the real-world target domain is approximated using a simulated source domain provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning adaptation.", "We consider the problem of transferring policies to the real world by training on a distribution of simulated scenarios. Rather than manually tuning the randomization of simulations, we adapt the simulation parameter distribution using a few real world roll-outs interleaved with policy training. In doing so, we are able to change the distribution of simulations to improve the policy transfer by matching the policy behavior in simulation and the real world. We show that policies trained with our method are able to reliably transfer to different robots in two real world tasks: swing-peg-in-hole and opening a cabinet drawer. The video of our experiments can be found at this https URL", "Simulations are attractive environments for training agents as they provide an abundant source of data and alleviate certain safety concerns during the training process. But the behaviours developed by agents in simulation are often specific to the characteristics of the simulator. Due to modeling error, strategies that are successful in simulation may not transfer to their real world counterparts. In this paper, we demonstrate a simple method to bridge this \"reality gap.\" By randomizing the dynamics of the simulator during training, we are able to develop policies that are capable of adapting to very different dynamics, including ones that differ significantly from the dynamics on which the policies were trained. This adaptivity enables the policies to generalize to the dynamics of the real world without any training on the physical system. Our approach is demonstrated on an object pushing task using a robotic arm. Despite being trained exclusively in simulation, our policies are able to maintain a similar level of performance when deployed on a real robot, reliably moving an object to a desired location from random initial configurations. We explore the impact of various design decisions and show that the resulting policies are robust to significant calibration error.", "", "", "While a lot of progress has recently been made in dynamic motion planning for humanoid robots, much of this work has remained limited to simulation. 
Here we show that executing the resulting trajectories on a Darwin-OP robot, even with local feedback derived from the optimizer, does not result in stable movements. We then develop a new trajectory optimization method, adapting our earlier CIO algorithm to plan through ensembles of perturbed models. This makes the plan robust to model uncertainty, and leads to successful execution on the robot. We obtain a high rate of task completion without trajectory divergence (falling) in dynamic forward walking, sideways walking, and turning, and a similarly high success rate in getting up from the floor (the robot broke before we could quantify the latter). Even though the planning is still done offline, the present work represents a significant step towards automating the tedious scripting of complex movements." ] }
1907.04685
2960705509
Learning robot control policies from physics simulations is of great interest to the robotics community as it may render the learning process faster, cheaper, and safer by alleviating the need for expensive real-world experiments. However, the direct transfer of learned behavior from simulation to reality is a major challenge. Optimizing a policy on a slightly faulty simulator can easily lead to the maximization of the 'Simulation Optimization Bias' (SOB). In this case, the optimizer exploits modeling errors of the simulator such that the resulting behavior can potentially damage the robot. We tackle this challenge by applying domain randomization, i.e., randomizing the parameters of the physics simulations during learning. We propose an algorithm called Simulation-based Policy Optimization with Transferability Assessment (SPOTA) which uses an estimator of the SOB to formulate a stopping criterion for training. The introduced estimator quantifies the over-fitting to the set of domains experienced while training. Our experimental results in two different environments show that the new simulation-based policy search algorithm is able to learn a control policy exclusively from a randomized simulator, which can be applied directly to the real system without any additional training on the latter.
There is broad consensus that further increasing the simulator's accuracy alone will not bridge the reality gap. Instead, the idea of domain randomization has recently gained momentum. The common characteristic of such approaches is the perturbation of the parameters which determine the physics simulator and the state estimation, including but not limited to the system dynamics. While the idea of randomizing the sensors and actuators dates back to at least 1995 @cite_12 , the systematic analysis of perturbed simulations in robot RL is a relatively new research direction.
{ "cite_N": [ "@cite_12" ], "mid": [ "1481659984" ], "abstract": [ "The pitfalls of naive robot simulations have been recognised for areas such as evolutionary robotics. It has been suggested that carefully validated simulations with a proper treatment of noise may overcome these problems. This paper reports the results of experiments intended to test some of these claims. A simulation was constructed of a two-wheeled Khepera robot with IR and ambient light sensors. This included detailed mathematical models of the robot-environment interaction dynamics with empirically determined parameters. Artificial evolution was used to develop recurrent dynamical network controllers for the simulated robot, for obstacle-avoidance and light-seeking tasks, using different levels of noise in the simulation. The evolved controllers were down-loaded onto the real robot and the correspondence between behaviour in simulation and in reality was tested. The level of correspondence varied according to how much noise was used in the simulation, with very good results achieved when realistic quantities were applied. It has been demonstrated that it is possible to develop successful robot controllers in simulation that generate almost identical behaviours in reality, at least for a particular class of robot-environment interaction dynamics." ] }
1907.04685
2960705509
Learning robot control policies from physics simulations is of great interest to the robotics community as it may render the learning process faster, cheaper, and safer by alleviating the need for expensive real-world experiments. However, the direct transfer of learned behavior from simulation to reality is a major challenge. Optimizing a policy on a slightly faulty simulator can easily lead to the maximization of the 'Simulation Optimization Bias' (SOB). In this case, the optimizer exploits modeling errors of the simulator such that the resulting behavior can potentially damage the robot. We tackle this challenge by applying domain randomization, i.e., randomizing the parameters of the physics simulations during learning. We propose an algorithm called Simulation-based Policy Optimization with Transferability Assessment (SPOTA) which uses an estimator of the SOB to formulate a stopping criterion for training. The introduced estimator quantifies the over-fitting to the set of domains experienced while training. Our experimental results in two different environments show that the new simulation-based policy search algorithm is able to learn a control policy exclusively from a randomized simulator, which can be applied directly to the real system without any additional training on the latter.
Wang et al. @cite_26 proposed sampling initial states, external disturbances, goals, as well as actuator noise from probability distributions and learned walking policies in simulation. Regarding robot RL, recent domain randomization methods focus on perturbing the parameters defining the system dynamics. Approaches cover: (i) trajectory optimization on finite model ensembles @cite_20 , (ii) learning a feedforward NN policy for an under-actuated problem @cite_8 , (iii) using a risk-averse objective function @cite_18 , (iv) employing recurrent NN policies trained with experience replay @cite_1 , and (v) optimizing a policy from samples of a model randomly chosen from a set which is repeatedly fitted to real-world data @cite_30 . Among the listed approaches, @cite_20 @cite_8 @cite_1 were able to cross the reality gap without acquiring samples from the real world.
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_26", "@cite_8", "@cite_1", "@cite_20" ], "mid": [ "2785389871", "2529477964", "1966784014", "2623289472", "2767050701", "2205975260" ], "abstract": [ "Model-free reinforcement learning (RL) methods are succeeding in a growing number of tasks, aided by recent advances in deep learning. However, they tend to suffer from high sample complexity, which hinders their use in real-world domains. Alternatively, model-based reinforcement learning promises to reduce sample complexity, but tends to require careful tuning and to date have succeeded mainly in restrictive domains where simple models are sufficient for learning. In this paper, we analyze the behavior of vanilla model-based reinforcement learning methods when deep neural networks are used to learn both the model and the policy, and show that the learned policy tends to exploit regions where insufficient data is available for the model to be learned, causing instability in training. To overcome this issue, we propose to use an ensemble of models to maintain the model uncertainty and regularize the learning process. We further show that the use of likelihood ratio derivatives yields much more stable learning than backpropagation through time. Altogether, our approach Model-Ensemble Trust-Region Policy Optimization (ME-TRPO) significantly reduces the sample complexity compared to model-free deep RL methods on challenging continuous control benchmark tasks.", "Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are represented using rich function approximators like deep neural networks. Model-based methods where the real-world target domain is approximated using a simulated source domain provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning adaptation.", "We introduce methods for optimizing physics-based walking controllers for robustness to uncertainty. Many unknown factors, such as external forces, control torques, and user control inputs, cannot be known in advance and must be treated as uncertain. These variables are represented with probability distributions, and a return function scores the desirability of a single motion. Controller optimization entails maximizing the expected value of the return, which is computed by Monte Carlo methods. We demonstrate examples with different sources of uncertainty and task constraints. Optimizing control strategies under uncertainty increases robustness and produces natural variations in style.", "Using Reinforcement Learning (RL) in simulation to construct policies useful in real life is challenging. 
This is often attributed to the sequential decision making aspect: inaccuracies in simulation accumulate over multiple steps, hence the simulated trajectories diverge from what would happen in reality. In our work we show the need to consider another important aspect: the mismatch in simulating control. We bring attention to the need for modeling control as well as dynamics, since oversimplifying assumptions about applying actions of RL policies could make the policies fail on real-world systems. We design a simulator for solving a pivoting task (of interest in Robotics) and demonstrate that even a simple simulator designed with RL in mind outperforms high-fidelity simulators when it comes to learning a policy that is to be deployed on a real robotic system. We show that a phenomenon that is hard to model - friction - could be exploited successfully, even when RL is performed using a simulator with a simple dynamics and noise model. Hence, we demonstrate that as long as the main sources of uncertainty are identified, it could be possible to learn policies applicable to real systems even using a simple simulator. RL-compatible simulators could open the possibilities for applying a wide range of RL algorithms in various fields. This is important, since currently data sparsity in fields like healthcare and education frequently forces researchers and engineers to only consider sample-efficient RL approaches. Successful simulator-aided RL could increase flexibility of experimenting with RL algorithms and help applying RL policies to real-world settings in fields where data is scarce. We believe that lessons learned in Robotics could help other fields design RL-compatible simulators, so we summarize our experience and conclude with suggestions.", "Simulations are attractive environments for training agents as they provide an abundant source of data and alleviate certain safety concerns during the training process. But the behaviours developed by agents in simulation are often specific to the characteristics of the simulator. Due to modeling error, strategies that are successful in simulation may not transfer to their real world counterparts. In this paper, we demonstrate a simple method to bridge this \"reality gap.\" By randomizing the dynamics of the simulator during training, we are able to develop policies that are capable of adapting to very different dynamics, including ones that differ significantly from the dynamics on which the policies were trained. This adaptivity enables the policies to generalize to the dynamics of the real world without any training on the physical system. Our approach is demonstrated on an object pushing task using a robotic arm. Despite being trained exclusively in simulation, our policies are able to maintain a similar level of performance when deployed on a real robot, reliably moving an object to a desired location from random initial configurations. We explore the impact of various design decisions and show that the resulting policies are robust to significant calibration error.", "While a lot of progress has recently been made in dynamic motion planning for humanoid robots, much of this work has remained limited to simulation. Here we show that executing the resulting trajectories on a Darwin-OP robot, even with local feedback derived from the optimizer, does not result in stable movements. We then develop a new trajectory optimization method, adapting our earlier CIO algorithm to plan through ensembles of perturbed models. 
This makes the plan robust to model uncertainty, and leads to successful execution on the robot. We obtain a high rate of task completion without trajectory divergence (falling) in dynamic forward walking, sideways walking, and turning, and a similarly high success rate in getting up from the floor (the robot broke before we could quantify the latter). Even though the planning is still done offline, the present work represents a significant step towards automating the tedious scripting of complex movements." ] }
1907.04685
2960705509
Learning robot control policies from physics simulations is of great interest to the robotics community as it may render the learning process faster, cheaper, and safer by alleviating the need for expensive real-world experiments. However, the direct transfer of learned behavior from simulation to reality is a major challenge. Optimizing a policy on a slightly faulty simulator can easily lead to the maximization of the 'Simulation Optimization Bias' (SOB). In this case, the optimizer exploits modeling errors of the simulator such that the resulting behavior can potentially damage the robot. We tackle this challenge by applying domain randomization, i.e., randomizing the parameters of the physics simulations during learning. We propose an algorithm called Simulation-based Policy Optimization with Transferability Assessment (SPOTA) which uses an estimator of the SOB to formulate a stopping criterion for training. The introduced estimator quantifies the over-fitting to the set of domains experienced while training. Our experimental results in two different environments show that the new simulation-based policy search algorithm is able to learn a control policy exclusively from a randomized simulator, which can be applied directly to the real system without any additional training on the latter.
Another approach to learning robust policies in simulation is to apply adversarial disturbances during training. Mandlekar et al. @cite_28 proposed physically plausible perturbations by randomly deciding when to add a rescaled gradient of the expected return. Pinto et al. @cite_19 introduced the idea of a second agent whose goal is to hinder the first agent from fulfilling its task. Both agents are trained simultaneously and make up a zero-sum game. In general, adversarial approaches may provide a particularly robust policy. However, without any further restrictions, it is always possible to create scenarios in which the protagonist agent can never win, i.e., the policy will not learn the task.
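The following toy sketch illustrates the zero-sum idea in the spirit of these works, with a bounded adversary keeping the task solvable. The 1-D point-mass dynamics, the random-search policy update, and the force bound are illustrative assumptions, not the cited algorithms:

```python
# Toy zero-sum loop: an adversary picks a bounded disturbance that lowers
# the protagonist's return; the protagonist adapts against that adversary.
import numpy as np

rng = np.random.default_rng(1)
FORCE_BOUND = 0.5  # restricting the adversary keeps the task winnable

def rollout(gain, disturbance):
    x, v, ret = 1.0, 0.0, 0.0           # start away from the origin
    for _ in range(50):
        force = -gain * x - 0.5 * v + disturbance
        v += 0.1 * force
        x += 0.1 * v
        ret -= x * x                     # reward: stay near the origin
    return ret

gain = 0.0
for _ in range(200):
    # adversary: worst bounded constant disturbance for the current policy
    worst = min(np.linspace(-FORCE_BOUND, FORCE_BOUND, 11),
                key=lambda d: rollout(gain, d))
    # protagonist: simple random-search update against that adversary
    candidate = gain + rng.normal(0.0, 0.1)
    if rollout(candidate, worst) > rollout(gain, worst):
        gain = candidate
print(f"robust gain: {gain:.2f}")
```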
{ "cite_N": [ "@cite_28", "@cite_19" ], "mid": [ "2773691349", "2602963933" ], "abstract": [ "Policy search methods in reinforcement learning have demonstrated success in scaling up to larger problems beyond toy examples. However, deploying these methods on real robots remains challenging due to the large sample complexity required during learning and their vulnerability to malicious intervention. We introduce Adversarially Robust Policy Learning (ARPL), an algorithm that leverages active computation of physically-plausible adversarial examples during training to enable robust policy learning in the source domain and robust performance under both random and adversarial input perturbations. We evaluate ARPL on four continuous control tasks and show superior resilience to changes in physical environment dynamics parameters and environment state as compared to state-of-the-art robust policy learning methods. Code, data, and additional experimental results are available at: stanfordvl.github.io ARPL", "Deep neural networks coupled with fast simulation and improved computation have led to recent successes in the field of reinforcement learning (RL). However, most current RL-based approaches fail to generalize since: (a) the gap between simulation and real world is so large that policy-learning approaches fail to transfer; (b) even if policy learning is done in real world, the data scarcity leads to failed generalization from training to test scenarios (e.g., due to different friction or object masses). Inspired from H∞ control methods, we note that both modeling errors and differences in training and test scenarios can be viewed as extra forces disturbances in the system. This paper proposes the idea of robust adversarial reinforcement learning (RARL), where we train an agent to operate in the presence of a destabilizing adversary that applies disturbance forces to the system. The jointly trained adversary is reinforced - that is, it learns an optimal destabilization policy. We formulate the policy learning as a zero-sum, minimax objective function. Extensive experiments in multiple environments (InvertedPendulum, HalfCheetah, Swimmer, Hopper, Walker2d and Ant) conclusively demonstrate that our method (a) improves training stability; (b) is robust to differences in training test conditions; and c) outperform the baseline even in the absence of the adversary." ] }
1907.04889
2960851860
Persistent cycles, especially the minimal ones, are useful geometric features functioning as augmentations for the intervals in the purely topological persistence diagrams (also termed as barcodes). In our earlier work, we showed that computing minimal 1-dimensional persistent cycles (persistent 1-cycles) for finite intervals is NP-hard while the same for infinite intervals is polynomially tractable. In this paper, we address this problem for general dimensions with @math coefficients. In addition to proving that it is NP-hard to compute minimal persistent d-cycles (d>1) for both types of intervals given arbitrary simplicial complexes, we identify two interesting cases which are polynomially tractable. These two cases assume the complex to be a certain generalization of manifolds which we term as weak pseudomanifolds. For finite intervals from the d-th persistence diagram of a weak (d+1)-pseudomanifold, we utilize the fact that persistent cycles of such intervals are null-homologous and reduce the problem to a minimal cut problem. Since the same problem for infinite intervals is NP-hard, we further assume the weak (d+1)-pseudomanifold to be embedded in @math so that the complex has a natural dual graph structure and the problem reduces to a minimal cut problem. Experiments with both algorithms on scientific data indicate that the minimal persistent cycles capture various significant features of the data.
In terms of computing minimal cycles for homology groups, two problems are of most interest: the localization problem and the minimal basis problem. The localization problem asks for computing a minimal cycle in a homology class and the minimal basis problem asks for computing a set of generating cycles for a homology group whose sum of weights is minimal. With @math coefficients, these two problems are in general hard. Specifically, Chambers et al. @cite_11 proved that the localization problem over dimension one is NP-hard when the given simplicial complex is a 2-manifold. Chen and Freedman @cite_17 proved that the localization problem is NP-hard to approximate with fixed ratio over arbitrary dimension. They also showed that the minimal basis problem is NP-hard to approximate with fixed ratio over dimension greater than one. For one-dimensional homology, Dey et al. @cite_8 proposed a polynomial time algorithm for the minimal basis problem. Several other works @cite_10 @cite_23 @cite_22 address variants of the two problems while considering special input classes, alternative cycle measures, or coefficients for homology other than @math .
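For intuition on the tractable one-dimensional case, a minimum-weight cycle basis of a graph (a 1-complex with Z2 coefficients) can be computed with off-the-shelf tooling; this is a small illustration with an assumed toy graph, not the algorithm of @cite_8 :

```python
# Sketch: a minimum-weight cycle basis of a weighted graph corresponds to
# a minimal basis of its first homology group.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 1.0), ("b", "c", 1.0), ("c", "a", 1.0),  # light triangle
    ("c", "d", 2.0), ("d", "e", 2.0), ("e", "c", 2.0),  # heavier triangle
])

for cycle in nx.minimum_cycle_basis(G, weight="weight"):
    print(cycle)  # each basis cycle is returned as a list of nodes
```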
{ "cite_N": [ "@cite_11", "@cite_22", "@cite_8", "@cite_23", "@cite_10", "@cite_17" ], "mid": [ "2066108505", "2093766264", "1976067114", "2029958010", "2118382603", "1991566340" ], "abstract": [ "We describe the first algorithms to compute minimum cuts in surface-embedded graphs in near-linear time. Given an undirected graph embedded on an orientable surface of genus g, with two specified vertices s and t, our algorithm computes a minimum (s,t)-cut in gO(g) n log n time. Except for the special case of planar graphs, for which O(n log n)-time algorithms have been known for more than 20 years, the best previous time bounds for finding minimum cuts in embedded graphs follow from algorithms for general sparse graphs. A slight generalization of our minimum-cut algorithm computes a minimum-cost subgraph in every Z2-homology class. We also prove that finding a minimum-cost subgraph homologous to a single input cycle is NP -hard.", "We describe simple greedy algorithms to construct the shortest set of loops that generates either the fundamental group (with a given basepoint) or the first homology group (over any fixed coefficient field) of any oriented 2-manifold. In particular, we show that the shortest set of loops that generate the fundamental group of any oriented combinatorial 2-manifold, with any given basepoint, can be constructed in O(n log n) time using a straightforward application of Dijkstra's shortest path algorithm. This solves an open problem of Colin de Verdiere and Lazarus.", "Inference of topological and geometric attributes of a hidden manifold from its point data is a fundamental problem arising in many scientific studies and engineering applications. In this paper we present an algorithm to compute a set of loops from a point data that presumably sample a smooth manifold M ⊂ Rd. These loops approximate a shortest basis of the one dimensional homology group H1(M) over coefficients in finite field Z2. Previous results addressed the issue of computing the rank of the homology groups from point data, but there is no result on approximating the shortest basis of a manifold from its point sample. In arriving our result, we also present a polynomial time algorithm for computing a shortest basis of H1 (Κ) for any finite simplicial complex Κ whose edges have non-negative weights.", "Given a simplicial complex with weights on its simplices, and a nontrivial cycle on it, we are interested in finding the cycle with minimal weight which is homologous to the given one. Assuming that the homology is defined with integer ( @math ) coefficients, we show the following (Theorem 5.2): For a finite simplicial complex @math of dimension greater than @math , the boundary matrix @math is totally unimodular if and only if @math is torsion-free for all pure subcomplexes @math in @math of dimensions @math and @math , respectively, where @math . Because of the total unimodularity of the boundary matrix, we can solve the optimization problem, which is inherently an integer programming problem, as a linear program and obtain an integer solution. Thus, the problem of finding optimal cycles in a given homology class can be solved in polynomial time. This result is surprising in the backdrop of a recent result which says that the problem is NP-hard under @math coefficients which, being a field, is in general easier to deal with. 
Our result implies, among other things, that one can compute in polynomial time an optimal @math -cycle in a given homology class for any triangulation of an orientable compact @math -manifold or for any finite simplicial complex embedded in @math . Our optimization approach can also be used for various related problems, such as finding an optimal chain homologous to a given one when these are not cycles. Our result can also be viewed as providing a topological characterization of total unimodularity.", "We develop a method for measuring homology classes. This involves two problems. First, we define the size of a homology class, using ideas from relative homology. Second, we define an optimal basis of a homology group to be the basis whose elements' size have the minimal sum. We provide a greedy algorithm to compute the optimal basis and measure classes in it. The algorithm runs in O(@bn^3log^2n) time, where n is the size of the simplicial complex and @b is the Betti number of the homology group. Finally, we prove the stability of our result. The algorithm can be adapted to measure any given class.", "We address the problem of localizing homology classes, namely, finding the cycle representing a given class with the most concise geometric measure. We focus on the volume measure, that is, the 1-norm of a cycle. Two main results are presented. First, we prove the problem is NP-hard to approximate within any constant factor. Second, we prove that for homology of dimension two or higher, the problem is NP-hard to approximate even when the Betti number is O(1). A side effect is the inapproximability of the problem of computing the nonbounding cycle with the smallest volume, and computing cycles representing a homology basis with the minimal total volume. We also discuss other geometric measures (diameter and radius) and show their disadvantages in homology localization. Our work is restricted to homology over the Z2 field." ] }
1907.04889
2960851860
Persistent cycles, especially the minimal ones, are useful geometric features functioning as augmentations for the intervals in the purely topological persistence diagrams (also termed as barcodes). In our earlier work, we showed that computing minimal 1-dimensional persistent cycles (persistent 1-cycles) for finite intervals is NP-hard while the same for infinite intervals is polynomially tractable. In this paper, we address this problem for general dimensions with @math coefficients. In addition to proving that it is NP-hard to compute minimal persistent d-cycles (d>1) for both types of intervals given arbitrary simplicial complexes, we identify two interesting cases which are polynomially tractable. These two cases assume the complex to be a certain generalization of manifolds which we term as weak pseudomanifolds. For finite intervals from the d-th persistence diagram of a weak (d+1)-pseudomanifold, we utilize the fact that persistent cycles of such intervals are null-homologous and reduce the problem to a minimal cut problem. Since the same problem for infinite intervals is NP-hard, we further assume the weak (d+1)-pseudomanifold to be embedded in @math so that the complex has a natural dual graph structure and the problem reduces to a minimal cut problem. Experiments with both algorithms on scientific data indicate that the minimal persistent cycles capture various significant features of the data.
In this work, we use graph cuts and their duality extensively. The duality between cuts on a planar graph and separating cycles on the dual graph has long been utilized to efficiently compute maximal flows and minimal cuts on planar graphs, a topic for which Chambers et al. @cite_11 provide a comprehensive review. In their paper @cite_11 , Chambers et al. discovered the duality between minimal cuts of a surface-embedded graph and minimal homologous cycles in a dual complex, and then devised @math algorithms for both problems assuming the genus of the surface to be fixed. Chen and Freedman @cite_17 proposed an algorithm which computes a minimal non-bounding @math -cycle given a @math -complex embedded in @math , utilizing a natural duality of @math -cycles in the complex and cuts in the dual graph. The minimal non-bounding cycle algorithm can be further extended to solve the localization problem and the minimal basis problem over dimension @math given a @math -complex embedded in @math .
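The cut side of this duality reduces to a standard minimum (s, t)-cut computation. In the constructions above, the nodes would be dual to top-dimensional simplices (plus a node for the unbounded outside region) with capacities taken from the weights of the dual d-simplices; the small directed graph below is purely an illustrative assumption:

```python
# Sketch: minimum (s, t)-cut on a toy "dual graph"; in the cited reductions
# the cut edges dualize back to a minimal (persistent) cycle.
import networkx as nx

dual = nx.DiGraph()
dual.add_edge("s", "u", capacity=3.0)
dual.add_edge("s", "v", capacity=1.0)
dual.add_edge("u", "v", capacity=1.0)
dual.add_edge("u", "t", capacity=2.0)
dual.add_edge("v", "t", capacity=4.0)

cut_value, (source_side, sink_side) = nx.minimum_cut(dual, "s", "t")
cut_edges = [(a, b) for a, b in dual.edges
             if a in source_side and b in sink_side]
print(cut_value, cut_edges)  # edges crossing the cut, with total capacity
```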
{ "cite_N": [ "@cite_17", "@cite_11" ], "mid": [ "1991566340", "2066108505" ], "abstract": [ "We address the problem of localizing homology classes, namely, finding the cycle representing a given class with the most concise geometric measure. We focus on the volume measure, that is, the 1-norm of a cycle. Two main results are presented. First, we prove the problem is NP-hard to approximate within any constant factor. Second, we prove that for homology of dimension two or higher, the problem is NP-hard to approximate even when the Betti number is O(1). A side effect is the inapproximability of the problem of computing the nonbounding cycle with the smallest volume, and computing cycles representing a homology basis with the minimal total volume. We also discuss other geometric measures (diameter and radius) and show their disadvantages in homology localization. Our work is restricted to homology over the Z2 field.", "We describe the first algorithms to compute minimum cuts in surface-embedded graphs in near-linear time. Given an undirected graph embedded on an orientable surface of genus g, with two specified vertices s and t, our algorithm computes a minimum (s,t)-cut in gO(g) n log n time. Except for the special case of planar graphs, for which O(n log n)-time algorithms have been known for more than 20 years, the best previous time bounds for finding minimum cuts in embedded graphs follow from algorithms for general sparse graphs. A slight generalization of our minimum-cut algorithm computes a minimum-cost subgraph in every Z2-homology class. We also prove that finding a minimum-cost subgraph homologous to a single input cycle is NP -hard." ] }
1907.04889
2960851860
Persistent cycles, especially the minimal ones, are useful geometric features functioning as augmentations for the intervals in the purely topological persistence diagrams (also termed as barcodes). In our earlier work, we showed that computing minimal 1-dimensional persistent cycles (persistent 1-cycles) for finite intervals is NP-hard while the same for infinite intervals is polynomially tractable. In this paper, we address this problem for general dimensions with @math coefficients. In addition to proving that it is NP-hard to compute minimal persistent d-cycles (d>1) for both types of intervals given arbitrary simplicial complexes, we identify two interesting cases which are polynomially tractable. These two cases assume the complex to be a certain generalization of manifolds which we term as weak pseudomanifolds. For finite intervals from the d-th persistence diagram of a weak (d+1)-pseudomanifold, we utilize the fact that persistent cycles of such intervals are null-homologous and reduce the problem to a minimal cut problem. Since the same problem for infinite intervals is NP-hard, we further assume the weak (d+1)-pseudomanifold to be embedded in @math so that the complex has a natural dual graph structure and the problem reduces to a minimal cut problem. Experiments with both algorithms on scientific data indicate that the minimal persistent cycles capture various significant features of the data.
As pointed out earlier, our main focus is the optimality of representative cycles in the persistence framework. Some early works @cite_12 @cite_9 address the representative cycle problem for persistence by computing minimal cycles at the birth points of intervals without considering what actually dies at the death points. Wu et al. @cite_20 proposed an algorithm computing minimal persistent 1-cycles for finite intervals using an annotation technique and heuristic search. However, the time complexity of the algorithm is exponential in the worst case. Obayashi @cite_24 casts the minimal persistent cycle problem for finite intervals into an integer program, but the rounded result of the relaxed linear program is not guaranteed to be optimal. Dey et al. @cite_2 formalize the definition of persistent cycles for both finite and infinite intervals. They also proved the NP-hardness of computing minimal persistent 1-cycles for finite intervals and proposed a polynomial time algorithm for computing non-optimal ones which are still good in practice.
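For context, the persistence intervals whose representative cycles are at issue above can be computed with standard software. The sketch below uses the gudhi library on an assumed noisy-circle point cloud; note that it returns only the intervals, not minimal persistent cycles:

```python
# Sketch: computing 1-dimensional persistence intervals with gudhi.
import numpy as np
import gudhi

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 60)
points = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0.0, 0.05, (60, 2))

rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=2)
st.persistence(homology_coeff_field=2)  # Z/2Z coefficients
for birth, death in st.persistence_intervals_in_dimension(1):
    print(f"H1 interval: [{birth:.3f}, {death:.3f})")
```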
{ "cite_N": [ "@cite_9", "@cite_24", "@cite_2", "@cite_12", "@cite_20" ], "mid": [ "2227949165", "2963904897", "2897738333", "2150304504", "2618638433" ], "abstract": [ "In this work, we discuss the problem of finding optimal cycles for homology groups of simplicial complexes and for persistent homology of filtrations. We review the linear programming formulation of the optimal homologous cycle problem and its extension to allow for multiple cycles. By inserting these linear programming problems into the persistent homology algorithm, we are able to compute an optimal cycle, that has been optimized at birth, for every persistent interval in the persistent diagram.", "The present paper shows a mathematical formalization of---as well as algorithms and software for computing---volume-optimal cycles. Volume-optimal cycles are useful for understanding geometric features appearing in a persistence diagram. Volume-optimal cycles provide concrete and optimal homologous structures, such as rings or cavities, on a given dataset. The key idea is the optimality on a @math -chain complex for a @math th homology generator. This optimality formalization is suitable for persistent homology. We can solve the optimization problem using linear programming. For an alpha filtration on @math , volume-optimal cycles on an @math st persistence diagram are more efficiently computable using a merge-tree algorithm. The merge-tree algorithm also provides a tree structure on the diagram containing richer information than volume-optimal cycles. The key mathematical idea used here is Alexander duality.", "Persistence diagrams, which summarize the birth and death of homological features extracted from data, are employed as stable signatures for applications in image analysis and other areas. Besides simply considering the multiset of intervals included in a persistence diagram, some applications need to find representative cycles for the intervals. In this paper, we address the problem of computing these representative cycles, termed as persistent 1-cycles. The definition of persistent cycles is based on the interval module decomposition of persistence modules, which reveals the structure of persistent homology. After showing that the computation of the optimal persistent 1-cycles is NP-hard, we propose an alternative set of meaningful persistent 1-cycles that can be computed with an efficient polynomial time algorithm. We also inspect the stability issues of the optimal persistent 1-cycles and the persistent 1-cycles computed by our algorithm with the observation that the perturbations of both cannot be properly bounded. We design a software which applies our algorithm to various datasets. Experiments on 3D point clouds, mineral structures, and images show the effectiveness of our algorithm in practice.", "The three dimensional structure of DNA in the nucleus (chromatin) plays an important role in many cellular processes. Recent experimental advances have led to high-throughput methods of capturing information about chromatin conformation on genome-wide scales. New models are needed to quantitatively interpret this data at a global scale. Here we introduce the use of tools from topological data analysis to study chromatin conformation. We use persistent homology to identify and characterize conserved loops and voids in contact map data and identify scales of interaction. We demonstrate the utility of the approach on simulated data and then look data from both a bacterial genome and a human cell line. 
We identify substantial multiscale topology in these datasets.", "In cardiac image analysis, it is important yet challenging to reconstruct the trabeculae, namely, fine muscle columns whose ends are attached to the ventricular walls. To extract these fine structures, traditional image segmentation methods are insufficient. In this paper, we propose a novel method to jointly detect salient topological handles and compute the optimal representations of them. The detected handles are considered hypothetical trabeculae structures. They are further screened using a classifier and are then included in the final segmentation. We show in experiments the significance of our contribution compared with previous standard segmentation methods without topological priors, as well as with previous topological method in which non-optimal representations of topological handles are used." ] }
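The reduction described in the abstract above (minimal persistent cycles for finite intervals in a weak (d+1)-pseudomanifold reduce to a minimal cut) can be illustrated with a toy computation. The sketch below is a simplification under assumed names: nodes stand for (d+1)-cells plus a dummy "outside" node, edge capacities stand for the volumes of shared d-faces, and the source is taken to be the cell whose insertion ends the interval; it is not the authors' implementation.

    # A toy min-cut computation on a hand-built dual graph (all names made up).
    import networkx as nx

    G = nx.Graph()
    # (cell_u, cell_v, capacity = volume of the shared d-face)
    for u, v, w in [("sigma1", "sigma2", 1.0), ("sigma2", "sigma3", 2.0),
                    ("sigma1", "outside", 1.5), ("sigma3", "outside", 0.5)]:
        G.add_edge(u, v, capacity=w)

    # Source: the cell whose insertion kills the interval; sink: "outside".
    cut_value, (side_s, side_t) = nx.minimum_cut(G, "sigma2", "outside")
    # The d-faces dual to the cut edges form the candidate minimal persistent cycle.
    cycle_faces = [(u, v) for u in side_s for v in G[u] if v in side_t]
    print(cut_value, cycle_faces)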
1907.04733
2958944212
We initiate the study of coresets for clustering in graph metrics, i.e., the shortest-path metric of edge-weighted graphs. Such clustering problems (on graph metrics) are essential to data analysis and used for example in road networks and data visualization. Specifically, we consider @math -Clustering, where given a metric space @math , the goal is to minimize, over all @math -point center sets @math , the objective @math . This problem is a well-known generalization of both k-Median ( @math ) and k-Means ( @math ). A coreset is a compact summary of the data that approximately preserves the clustering objective for every possible center set. Coresets offer significant efficiency improvements in terms of running time, storage, and communication, including in streaming and distributed settings. Our main result is a near-linear time construction of a coreset of size @math for @math -Clustering in a graph @math whose treewidth is @math . The construction is based on the framework of Feldman and Langberg [STOC 2011], and our main technical contribution, as required by this framework, is a uniform bound of @math on the shattering dimension under any point weights. Previously, the only construction applicable to graph metrics, even for @math , was a generic one with size @math where @math [Feldman and Langberg, STOC 2011]. We complement our construction with an @math size lower bound, which matches our construction's linear dependence on @math . This further provides the first proof that the @math factor in the generic upper bound is indeed necessary, and also justifies restricting the graph topology.
Coresets for clustering in Euclidean spaces @math have been well studied. @cite_3 constructed the first strong coreset for both k-Median and k-Means, with an exponential size dependence on @math . @cite_12 improved the dependence on the dimension to polynomial for both problems. @cite_32 designed coresets for k-Means with size independent of @math , and @cite_38 generalized this result. Recently, coresets for generalized clustering objectives have received attention from the research community; for example, @cite_19 obtained a simultaneous coreset. For another special case, @math , which is k-Center clustering, an @math -coreset of size @math can be constructed in near-linear time @cite_13 @cite_26 . Many NP-hard graph optimization problems can be solved in polynomial or even linear time on bounded-treewidth graphs, including maximum independent set, Hamiltonian path, and chromatic number @cite_8 @cite_23 @cite_6 ; the main approach to solving these problems is dynamic programming. More generally, Courcelle's Theorem @cite_4 states that any graph optimization problem that can be described in Monadic Second-Order Logic is solvable in linear time on bounded-treewidth graphs.
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_4", "@cite_8", "@cite_32", "@cite_3", "@cite_6", "@cite_19", "@cite_23", "@cite_13", "@cite_12" ], "mid": [ "2220402431", "", "2013967419", "2016056456", "2229238337", "2045964207", "1537449969", "2964100282", "1991755800", "1999939497", "2094048240" ], "abstract": [ "We design a data stream algorithm for the k-means problem, called BICO, that combines the data structure of the SIGMOD Test of Time award winning algorithm BIRCH [27] with the theoretical concept of coresets for clustering problems. The k-means problem asks for a set C of k centers minimizing the sum of the squared distances from every point in a set P to its nearest center in C. In a data stream, the points arrive one by one in arbitrary order and there is limited storage space.", "", "Abstract Every graph generated by a hyperedge replacement graph-grammar can be represented by a tree, namely the derivation tree of the derivation sequence that produced it. Certain functions on graphs can be computed recursively on the derivation trees of these graphs. By using monadic second-order logic and semiring homomorphisms, we describe in a single formalism a large class of such functions. Polynomial and even linear algorithms can be constructed for some of these functions. We unify similar results obtained by Takamizawa (1982), Bern (1987), Arnborg (1991) and Habel (1989).", "Abstract A general problem in computational graph theory is that of finding an optimal subgraph H of a given weighted graph G . The matching problem (which is easy) and the traveling salesman problem (which is not) are well-known examples of this general problem. In the literature one can also find a variety of ad hoc algorithms for solving certain special cases in linear time. We suggest a general approach for constructing linear-time algorithms in the case where the graph G is defined by certain rules of composition (as are trees, series-parallel graphs, and outerplanar graphs) and the desired subgraph H satisfies a property that is “regular” with respect to these rules of composition (as do matchings, dominating sets, and independent sets for all the classes just mentioned). This approach is applied to obtain a linear-time algorithm for computing the irredundance number of a tree, a problem for which no polynomial-time algorithm was previously known.", "@d can be approximated up to (1 + e)-factor, for an arbitrary small e > 0, using the O(k e2)-rank approximation of A and a constant. This implies, for example, that the optimal k-means clustering of the rows of A is (1 + e)-approximated by an optimal k-means clustering of their projection on the O(k e2) first right singular vectors (principle components) of A. A (j, k)-coreset for projective clustering is a small set of points that yields a (1 + e)-approximation to the sum of squared distances from the n rows of A to any set of k affine subspaces, each of dimension at most j. Our embedding yields (0, k)-coresets of size O(k) for handling k-means queries, (j, 1)-coresets of size O(j) for PCA queries, and (j, k)-coresets of size (log n)O(jk) for any j, k ≥ 1 and constant e e (0, 1 2). Previous coresets usually have a size which is linearly or even exponentially dependent of d, which makes them useless when d n. Using our coresets with the merge-and-reduce approach, we obtain embarrassingly parallel streaming algorithms for problems such as k-means, PCA and projective clustering. 
These algorithms use update time per point and memory that is polynomial in log n and only linear in d. For cost functions other than squared Euclidean distances we suggest a simple recursive coreset construction that produces coresets of size", "In this paper, we show the existence of small coresets for the problems of computing k-median and k-means clustering for points in low dimension. In other words, we show that given a point set P in Rd, one can compute a weighted set S ⊆ P, of size O(k e-d log n), such that one can compute the k-median means clustering on S instead of on P, and get an (1+e)-approximation. As a result, we improve the fastest known algorithms for (1+e)-approximate k-means and k-median. Our algorithms have linear running time for a fixed k and e. In addition, we can maintain the (1+e)-approximate k-median or k-means clustering of a stream when points are being only inserted, using polylogarithmic space and update time.", "This paper gives an overview of several results and techniques for graphs algorithms that compute the treewidth of a graph or that solve otherwise intractable problems when restricted graphs with bounded treewidth more efficiently. Also, several results on graph minors are reviewed.", "", "Abstract We present and illustrate by a sequence of examples an algorithm paradigm for solving NP- hard problems on graphs restricted to partial graphs of k -trees and given with an embedding in a k -tree. Such algorithms, linear in the size of the graph but exponential or superexponential in k , exist for most NP-hard problems that have linear time algorithms for trees. The examples used are optimization problems involving independent sets, dominating sets, graph coloring, Hamiltonian circuits, network reliability and minimum vertex deletion forbidden subgraphs. The results generalize previous results for series-parallel graphs, bandwidth-constrained graphs, and non- serial dynamic programming.", "In this paper we present an n^ O(k1-1 d) -time algorithm for solving the k -center problem in , under Lźfty - and L2 -metrics. The algorithm extends to other metrics, and to the discrete k -center problem. We also describe a simple (1+ź) -approximation algorithm for the k -center problem, with running time O(nlog k) + (k ź)^ O(k1-1 d) . Finally, we present an n^ O(k1-1 d) -time algorithm for solving the L -capacitated k -center problem, provided that L=Ω(n k1-1 d) or L=O(1) .", "We present new approximation algorithms for the @math -median and @math -means clustering problems. To this end, we obtain small coresets for @math -median and @math -means clustering in general metric spaces and in Euclidean spaces. In @math , these coresets are of size with polynomial dependency on the dimension @math . This leads to @math -approximation algorithms to the optimal @math -median and @math -means clustering in @math , with running time @math , where @math is the number of points. This improves over previous results. We use those coresets to maintain a @math -approximate @math -median and @math -means clustering of a stream of points in @math , using @math space. These are the first streaming algorithms, for those problems, that have space complexity with polynomial dependency on the dimension." ] }
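To make the Feldman-Langberg-style construction discussed in the related work above concrete, here is a minimal sensitivity-sampling sketch for k-Median in the plane: sample points with probability proportional to a crude upper bound on their sensitivity and reweight by the inverse probability. The sensitivity bound, the bicriteria center set, and all sizes are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    rng = np.random.default_rng(0)
    P = rng.normal(size=(1000, 2))                        # input points
    C = P[rng.choice(len(P), size=5, replace=False)]      # rough bicriteria centers

    dist = np.min(np.linalg.norm(P[:, None] - C[None], axis=2), axis=1)
    s = dist / dist.sum() + 1.0 / len(P)                  # crude sensitivity bound
    prob = s / s.sum()

    m = 100                                               # target coreset size
    idx = rng.choice(len(P), size=m, p=prob)
    coreset, weights = P[idx], 1.0 / (m * prob[idx])      # unbiased reweighting
    # The weighted cost over the coreset now approximates the cost over all of P.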
1907.04840
2956434358
We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving performance levels competitive with dense networks. We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently. Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer. Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights. We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms. Furthermore, we show that our algorithm can reliably find the equivalent of winning lottery tickets from random initialization: our algorithm finds sparse configurations with 20% or fewer weights which perform as well as, or better than, their dense counterparts. Sparse momentum also decreases the training time: it requires a single training run -- no re-training is required -- and increases training speed up to 11.85x. In our analysis, we show that our sparse networks might be able to reach dense performance levels by learning more general features which are useful to a broader range of classes than dense networks.
@cite_0 show that "winning lottery tickets" exist for deep neural networks -- sparse initializations which reach predictive performance similar to dense networks and train just as fast. However, finding these winning lottery tickets is computationally expensive and involves multiple prune and re-train cycles starting from a dense network. Follow-up work concentrated on finding these configurations faster. In contrast, we reach dense performance levels with a sparse network from random initialization with a single training run while accelerating training.
{ "cite_N": [ "@cite_0" ], "mid": [ "2805003733" ], "abstract": [ "Neural network pruning techniques can reduce the parameter counts of trained networks by over 90 , decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the \"lottery ticket hypothesis:\" dense, randomly-initialized, feed-forward networks contain subnetworks (\"winning tickets\") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20 of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy." ] }
1907.04707
2958092922
Graph classification is practically important in many domains. To solve this problem, one usually calculates a low-dimensional representation for each node in the graph with supervised or unsupervised approaches. Most existing approaches consider all the edges between nodes while overlooking whether the edge will bring positive or negative influence to the node representation learning. In many real-world applications, however, some connections among the nodes can be noisy for graph convolution, and not all the edges deserve your attention. In this work, we distinguish the positive and negative impacts of the neighbors on the node in graph node classification, and propose to enhance the graph convolutional network by considering the labels of the neighboring edges. We present a novel GCN framework, called Label-aware Graph Convolutional Network (LAGCN), which incorporates supervised and unsupervised learning by introducing an edge label predictor. As a general model, LAGCN can be easily adapted to various previous GCNs and enhance their performance with some theoretical guarantees. Experimental results on multiple real-world datasets show that LAGCN is competitive against various state-of-the-art methods in graph classification.
When dealing with graph-structured data, it is important to design efficient graph models to learn an embedding for each node. These methods can be categorized into supervised and unsupervised learning methods, depending on whether they utilize the training labels. In recent years, convolution-based methods have achieved better performance by aggregating information from localized neighbors. The spectral and localized approach was first proposed in @cite_33 , which defined the graph convolution operation in the Fourier domain. Later, @cite_14 and @cite_22 introduced localized filters and the Chebyshev expansion to avoid the eigendecomposition. The GCN proposed in @cite_15 simplifies the previous convolution operations to a matrix multiplication of the normalized adjacency matrix and the hidden features, with the convolution computed layer by layer and activated with non-linear functions at each layer. To accelerate the training and inference of GCN, SGC @cite_9 reduces the excess complexity of GCNs by removing the nonlinearities between GCN layers and collapsing the resulting function into a single linear transformation; the authors verified that SGC exhibits performance comparable to GCN.
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_33", "@cite_9", "@cite_15" ], "mid": [ "637153065", "2964321699", "1662382123", "2916106175", "2964015378" ], "abstract": [ "Deep Learning's recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate.", "In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.", "Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.", "Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches, and as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity through successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. 
Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.", "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin." ] }
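A minimal dense-NumPy sketch of the two propagation rules contrasted in the related work above: a GCN layer computes ReLU(A_hat H W) with A_hat the self-loop-augmented, symmetrically normalized adjacency matrix, while SGC drops the nonlinearities so that stacked layers collapse into A_hat^K X W with a single learned weight matrix. The graph, features, and weights below are toy placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
    A_tilde = A + np.eye(3)                         # add self-loops
    d = A_tilde.sum(axis=1)
    A_hat = A_tilde / np.sqrt(np.outer(d, d))       # D^-1/2 (A+I) D^-1/2

    X = rng.normal(size=(3, 4))                     # node features
    W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))

    H = np.maximum(A_hat @ X @ W1, 0)               # GCN layer 1 (ReLU)
    gcn_out = A_hat @ H @ W2                        # GCN layer 2 (pre-softmax)

    # SGC: no nonlinearity, so both layers collapse into one linear map
    # (W1 @ W2 here only stands in for SGC's single learned weight matrix).
    sgc_out = A_hat @ A_hat @ X @ (W1 @ W2)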
1907.04707
2958092922
Graph classification is practically important in many domains. To solve this problem, one usually calculates a low-dimensional representation for each node in the graph with supervised or unsupervised approaches. Most existing approaches consider all the edges between nodes while overlooking whether the edge will bring positive or negative influence to the node representation learning. In many real-world applications, however, some connections among the nodes can be noisy for graph convolution, and not all the edges deserve your attention. In this work, we distinguish the positive and negative impacts of the neighbors on the node in graph node classification, and propose to enhance the graph convolutional network by considering the labels of the neighboring edges. We present a novel GCN framework, called Label-aware Graph Convolutional Network (LAGCN), which incorporates supervised and unsupervised learning by introducing an edge label predictor. As a general model, LAGCN can be easily adapted to various previous GCNs and enhance their performance with some theoretical guarantees. Experimental results on multiple real-world datasets show that LAGCN is competitive against various state-of-the-art methods in graph classification.
To further enhance the performance of GCN, two types of methods have been proposed: 1) sampling and 2) attention. There are two kinds of sampling-based methods, GraphSAGE @cite_10 and FastGCN @cite_12 , which introduced node-wise sampling and layer-wise sampling, respectively. GraphSAGE computes a node representation by sampling the 1-step and 2-step neighbors of the center node to construct a sub-graph for it, and fuses the information of the sub-graph with convolutional aggregators. FastGCN interprets graph convolutions as layer-wise integral transformations and samples the nodes in each layer independently.
{ "cite_N": [ "@cite_10", "@cite_12" ], "mid": [ "2962767366", "2786915849" ], "abstract": [ "Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node's local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.", "The graph convolutional networks (GCN) recently proposed by Kipf and Welling are an effective graph model for semi-supervised learning. This model, however, was originally designed to be learned with the presence of both training and test data. Moreover, the recursive neighborhood expansion across layers poses time and memory challenges for training with large, dense graphs. To relax the requirement of simultaneous availability of test data, we interpret graph convolutions as integral transforms of embedding functions under probability measures. Such an interpretation allows for the use of Monte Carlo approaches to consistently estimate the integrals, which in turn leads to a batched training scheme as we propose in this work---FastGCN. Enhanced with importance sampling, FastGCN not only is efficient for training but also generalizes well for inference. We show a comprehensive set of experiments to demonstrate its effectiveness compared with GCN and related models. In particular, training is orders of magnitude more efficient while predictions remain comparably accurate." ] }
1907.04707
2958092922
Graph classification is practically important in many domains. To solve this problem, one usually calculates a low-dimensional representation for each node in the graph with supervised or unsupervised approaches. Most existing approaches consider all the edges between nodes while overlooking whether the edge will bring positive or negative influence to the node representation learning. In many real-world applications, however, some connections among the nodes can be noisy for graph convolution, and not all the edges deserve your attention. In this work, we distinguish the positive and negative impacts of the neighbors on the node in graph node classification, and propose to enhance the graph convolutional network by considering the labels of the neighboring edges. We present a novel GCN framework, called Label-aware Graph Convolutional Network (LAGCN), which incorporates supervised and unsupervised learning by introducing an edge label predictor. As a general model, LAGCN can be easily adapted to various previous GCNs and enhance their performance with some theoretical guarantees. Experimental results on multiple real-world datasets show that LAGCN is competitive against various state-of-the-art methods in graph classification.
Another class of methods enhances GCNs with the attention mechanism. GAT @cite_35 first applied the idea of self-attention to graph representation learning: it gives different weights to the neighbors of the center node by weighting the similarity between each neighbor and the center node. However, computing attention consumes a large amount of time. GaAN @cite_37 accelerates the training and prediction of GAT by using the node-wise sampling method mentioned in GraphSAGE. ASGCN @cite_19 @cite_29 combines node-wise sampling, layer-wise sampling, and the attention mechanism to achieve better performance. However, these methods are not aware of edge labels, so they may aggregate information from neighbors whose labels differ from the center node's. These methods also suffer from a lack of localized information for nodes that have fewer neighbors than others.
{ "cite_N": [ "@cite_19", "@cite_35", "@cite_37", "@cite_29" ], "mid": [ "2890703109", "2766453196", "2792839479", "" ], "abstract": [ "Graph Convolutional Networks (GCNs) have become a crucial tool on learning representations of graph vertices. The main challenge of adapting GCNs on large-scale graphs is the scalability issue that it incurs heavy cost both in computation and memory due to the uncontrollable neighborhood expansion across layers. In this paper, we accelerate the training of GCNs through developing an adaptive layer-wise sampling method. By constructing the network layer by layer in a top-down passway, we sample the lower layer conditioned on the top one, where the sampled neighborhoods are shared by different parent nodes and the over expansion is avoided owing to the fixed-size sampling. More importantly, the proposed sampler is adaptive and applicable for explicit variance reduction, which in turn enhances the training of our method. Furthermore, we propose a novel and economical approach to promote the message passing over distant nodes by applying skip connections. Intensive experiments on several benchmarks verify the effectiveness of our method regarding the classification accuracy while enjoying faster convergence speed.", "We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).", "We propose a new network architecture, Gated Attention Networks (GaAN), for learning on graphs. Unlike the traditional multi-head attention mechanism, which equally consumes all attention heads, GaAN uses a convolutional sub-network to control each attention head's importance. We demonstrate the effectiveness of GaAN on the inductive node classification problem. Moreover, with GaAN as a building block, we construct the Graph Gated Recurrent Unit (GGRU) to address the traffic speed forecasting problem. Extensive experiments on three real-world datasets show that our GaAN framework achieves state-of-the-art results on both tasks.", "" ] }
1907.04667
2961035135
Click-through rate (CTR) prediction is a critical task in online advertising systems. Models like Deep Neural Networks (DNNs) are simple but stateless. They consider each target ad independently and cannot directly extract useful information contained in users' historical ad impressions and clicks. In contrast, models like Recurrent Neural Networks (RNNs) are stateful but complex. They model temporal dependency between users' sequential behaviors and can achieve better prediction performance than DNNs. However, both the offline training and online prediction processes of RNNs are much more complex and time-consuming. In this paper, we propose Memory Augmented DNN (MA-DNN) for practical CTR prediction services. In particular, we create two external memory vectors for each user, memorizing high-level abstractions of what a user possibly likes and dislikes. The proposed MA-DNN achieves a good compromise between DNN and RNN. It is as simple as a DNN, but has a certain ability to exploit useful information contained in users' historical behaviors, as an RNN does. Both offline and online experiments demonstrate the effectiveness of MA-DNN for practical CTR prediction services. Actually, the memory component can be augmented to other models as well (e.g., the Wide&Deep model).
CTR prediction has attracted lots of attention from both academia and industry @cite_11 @cite_13 @cite_16 . Generalized linear models, such as Logistic Regression (LR) @cite_2 and Follow-The-Regularized-Leader (FTRL) @cite_6 , have shown decent performance in practice. However, a linear model lacks the ability to learn sophisticated feature interactions. Factorization Machines (FMs) @cite_8 were proposed to model pairwise feature interactions, and they show improved performance. In recent years, Deep Neural Networks (DNNs) have been exploited for CTR prediction and item recommendation in order to automatically learn feature representations and high-order feature interactions @cite_1 @cite_18 @cite_0 @cite_16 . @cite_9 propose the Product-based Neural Network, where a product layer is introduced between the embedding layer and the fully connected layer. @cite_13 propose Wide&Deep, which combines LR and DNN to improve both the memorization and generalization abilities of the model. @cite_10 propose DeepFM, which models low-order feature interactions like FM and high-order feature interactions like DNN. To capture the dependency on users' sequential behaviors, @cite_5 propose Recurrent Neural Network (RNN) based models for CTR prediction. Nevertheless, the application of RNNs to practical CTR prediction services is rather complex.
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_10", "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_2", "@cite_5", "@cite_16", "@cite_13", "@cite_11" ], "mid": [ "2443960221", "", "2604662567", "2748787082", "2512971201", "2074694452", "2964182926", "2090883204", "1838102683", "2963323306", "2475334473", "2076618162" ], "abstract": [ "Predicting user responses, such as click-through rate and conversion rate, are critical in many web applications including web search, personalised recommendation, and online advertising. Different from continuous raw features that we usually found in the image and audio domains, the input features in web space are always of multi-field and are mostly discrete and categorical while their dependencies are little known. Major user response prediction models have to either limit themselves to linear models or require manually building up high-order combination features. The former loses the ability of exploring feature interactions, while the latter results in a heavy computation in the large feature space. To tackle the issue, we propose two novel models using deep neural networks (DNNs) to automatically learn effective patterns from categorical feature interactions and make predictions of users’ ad clicks. To get our DNNs efficiently work, we propose to leverage three feature transformation methods, i.e., factorisation machines (FMs), restricted Boltzmann machines (RBMs) and denoising auto-encoders (DAEs). This paper presents the structure of our models and their efficient training algorithms. The large-scale experiments with real-world data demonstrate that our methods work better than major state-of-the-art models.", "", "Learning sophisticated feature interactions behind user behaviors is critical in maximizing CTR for recommender systems. Despite great progress, existing methods seem to have a strong bias towards low- or high-order interactions, or require expertise feature engineering. In this paper, we show that it is possible to derive an end-to-end learning model that emphasizes both low- and high-order feature interactions. The proposed model, DeepFM, combines the power of factorization machines for recommendation and deep learning for feature learning in a new neural network architecture. Compared to the latest Wide & Deep model from Google, DeepFM has a shared input to its \"wide\" and \"deep\" parts, with no need of feature engineering besides raw features. Comprehensive experiments are conducted to demonstrate the effectiveness and efficiency of DeepFM over the existing models for CTR prediction, on both benchmark data and commercial data.", "", "YouTube represents one of the largest scale and most sophisticated industrial recommendation systems in existence. In this paper, we describe the system at a high level and focus on the dramatic performance improvements brought by deep learning. The paper is split according to the classic two-stage information retrieval dichotomy: first, we detail a deep candidate generation model and then describe a separate deep ranking model. We also provide practical lessons and insights derived from designing, iterating and maintaining a massive recommendation system with enormous user-facing impact.", "Predicting ad click-through rates (CTR) is a massive-scale learning problem that is central to the multi-billion dollar online advertising industry. We present a selection of case studies and topics drawn from recent experiments in the setting of a deployed CTR prediction system. 
These include improvements in the context of traditional supervised learning based on an FTRL-Proximal online learning algorithm (which has excellent sparsity and convergence properties) and the use of per-coordinate learning rates. We also explore some of the challenges that arise in a real-world system that may appear at first to be outside the domain of traditional machine learning research. These include useful tricks for memory savings, methods for assessing and visualizing performance, practical methods for providing confidence estimates for predicted probabilities, calibration methods, and methods for automated management of features. Finally, we also detail several directions that did not turn out to be beneficial for us, despite promising results elsewhere in the literature. The goal of this paper is to highlight the close relationship between theoretical advances and practical engineering in this industrial setting, and to show the depth of challenges that appear when applying traditional machine learning methods in a complex dynamic system.", "Feature engineering has been the key to the success of many prediction models. However, the process is nontrivial and often requires manual feature engineering or exhaustive searching. DNNs are able to automatically learn feature interactions; however, they generate all the interactions implicitly, and are not necessarily efficient in learning all types of cross features. In this paper, we propose the Deep & Cross Network (DCN) which keeps the benefits of a DNN model, and beyond that, it introduces a novel cross network that is more efficient in learning certain bounded-degree feature interactions. In particular, DCN explicitly applies feature crossing at each layer, requires no manual feature engineering, and adds negligible extra complexity to the DNN model. Our experimental results have demonstrated its superiority over the state-of-art algorithms on the CTR prediction dataset and dense classification dataset, in terms of both model accuracy and memory usage.", "Search engine advertising has become a significant element of the Web browsing experience. Choosing the right ads for the query and the order in which they are displayed greatly affects the probability that a user will see and click on each ad. This ranking has a strong impact on the revenue the search engine receives from the ads. Further, showing the user an ad that they prefer to click on improves user satisfaction. For these reasons, it is important to be able to accurately estimate the click-through rate of ads in the system. For ads that have been displayed repeatedly, this is empirically measurable, but for new ads, other means must be used. We show that we can use features of ads, terms, and advertisers to learn a model that accurately predicts the click-though rate for new ads. We also show that using our model improves the convergence and performance of an advertising system. As a result, our model increases both revenue and user satisfaction.", "Click prediction is one of the fundamental problems in sponsored search. Most of existing studies took advantage of machine learning approaches to predict ad click for each event of ad view independently. However, as observed in the real-world sponsored search system, user's behaviors on ads yield high dependency on how the user behaved along with the past time, especially in terms of what queries she submitted, what ads she clicked or ignored, and how long she spent on the landing pages of clicked ads, etc. 
Inspired by these observations, we introduce a novel framework based on Recurrent Neural Networks (RNN). Compared to traditional methods, this framework directly models the dependency on user's sequential behaviors into the click prediction process through the recurrent structure in RNN. Large scale evaluations on the click-through logs from a commercial search engine demonstrate that our approach can significantly improve the click prediction accuracy, compared to sequence-independent approaches.", "Many predictive tasks of web applications need to model categorical variables, such as user IDs and demographics like genders and occupations. To apply standard machine learning techniques, these categorical predictors are always converted to a set of binary features via one-hot encoding, making the resultant feature vector highly sparse. To learn from such sparse data effectively, it is crucial to account for the interactions between features. Factorization Machines (FMs) are a popular solution for efficiently using the second-order feature interactions. However, FM models feature interactions in a linear way, which can be insufficient for capturing the non-linear and complex inherent structure of real-world data. While deep neural networks have recently been applied to learn non-linear feature interactions in industry, such as the Wide&Deep by Google and DeepCross by Microsoft, the deep structure meanwhile makes them difficult to train. In this paper, we propose a novel model Neural Factorization Machine (NFM) for prediction under sparse settings. NFM seamlessly combines the linearity of FM in modelling second-order feature interactions and the non-linearity of neural network in modelling higher-order feature interactions. Conceptually, NFM is more expressive than FM since FM can be seen as a special case of NFM without hidden layers. Empirical results on two regression tasks show that with one hidden layer only, NFM significantly outperforms FM with a 7.3 relative improvement. Compared to the recent deep learning methods Wide&Deep and DeepCross, our NFM uses a shallower structure but offers better performance, being much easier to train and tune in practice.", "Generalized linear models with nonlinear feature transformations are widely used for large-scale regression and classification problems with sparse inputs. Memorization of feature interactions through a wide set of cross-product feature transformations are effective and interpretable, while generalization requires more feature engineering effort. With less feature engineering, deep neural networks can generalize better to unseen feature combinations through low-dimensional dense embeddings learned for the sparse features. However, deep neural networks with embeddings can over-generalize and recommend less relevant items when the user-item interactions are sparse and high-rank. In this paper, we present Wide & Deep learning---jointly trained wide linear models and deep neural networks---to combine the benefits of memorization and generalization for recommender systems. We productionized and evaluated the system on Google Play, a commercial mobile app store with over one billion active users and over one million apps. Online experiment results show that Wide & Deep significantly increased app acquisitions compared with wide-only and deep-only models. We have also open-sourced our implementation in TensorFlow.", "Online advertising allows advertisers to only bid and pay for measurable user responses, such as clicks on ads. 
As a consequence, click prediction systems are central to most online advertising systems. With over 750 million daily active users and over 1 million active advertisers, predicting clicks on Facebook ads is a challenging machine learning task. In this paper we introduce a model which combines decision trees with logistic regression, outperforming either of these methods on its own by over 3 , an improvement with significant impact to the overall system performance. We then explore how a number of fundamental parameters impact the final prediction performance of our system. Not surprisingly, the most important thing is to have the right features: those capturing historical information about the user or ad dominate other types of features. Once we have the right features and the right model (decisions trees plus logistic regression), other factors play small roles (though even small improvements are important at scale). Picking the optimal handling for data freshness, learning rate schema and data sampling improve the model slightly, though much less than adding a high-value feature, or picking the right model to begin with." ] }
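As a concrete instance of the pairwise feature interactions attributed to FMs in the related work above, the following sketch scores one sparse feature vector using the standard O(nk) identity sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * sum_f [(sum_i v_{if} x_i)^2 - sum_i v_{if}^2 x_i^2]. All weights are random placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 10, 4
    x = rng.integers(0, 2, size=n).astype(float)      # sparse binary features
    w0, w = 0.1, rng.normal(size=n)                   # bias and linear weights
    V = rng.normal(size=(n, k))                       # latent factor vectors

    linear = w0 + w @ x
    pairwise = 0.5 * np.sum((V.T @ x) ** 2 - (V ** 2).T @ (x ** 2))
    ctr = 1.0 / (1.0 + np.exp(-(linear + pairwise)))  # sigmoid -> predicted CTR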
1907.04667
2961035135
Click-through rate (CTR) prediction is a critical task in online advertising systems. Models like Deep Neural Networks (DNNs) are simple but stateless. They consider each target ad independently and cannot directly extract useful information contained in users' historical ad impressions and clicks. In contrast, models like Recurrent Neural Networks (RNNs) are stateful but complex. They model temporal dependency between users' sequential behaviors and can achieve improved prediction performance than DNNs. However, both the offline training and online prediction process of RNNs are much more complex and time-consuming. In this paper, we propose Memory Augmented DNN (MA-DNN) for practical CTR prediction services. In particular, we create two external memory vectors for each user, memorizing high-level abstractions of what a user possibly likes and dislikes. The proposed MA-DNN achieves a good compromise between DNN and RNN. It is as simple as DNN, but has certain ability to exploit useful information contained in users' historical behaviors as RNN. Both offline and online experiments demonstrate the effectiveness of MA-DNN for practical CTR prediction services. Actually, the memory component can be augmented to other models as well (e.g., the Wide&Deep model).
In this paper, we propose Memory Augmented Deep Neural Network (MA-DNN) for CTR prediction. The proposed MA-DNN achieves a good compromise between DNN and RNN. We are aware of recent work like @cite_12 that also utilizes memory networks @cite_17 . However, @cite_12 is designed for recommender systems, and our way of designing the user memory component differs from that in @cite_12 .
{ "cite_N": [ "@cite_12", "@cite_17" ], "mid": [ "2783944588", "2950527759" ], "abstract": [ "User preferences are usually dynamic in real-world recommender systems, and a user»s historical behavior records may not be equally important when predicting his her future interests. Existing recommendation algorithms -- including both shallow and deep approaches -- usually embed a user»s historical records into a single latent vector representation, which may have lost the per item- or feature-level correlations between a user»s historical records and future interests. In this paper, we aim to express, store, and manipulate users» historical records in a more explicit, dynamic, and effective manner. To do so, we introduce the memory mechanism to recommender systems. Specifically, we design a memory-augmented neural network (MANN) integrated with the insights of collaborative filtering for recommendation. By leveraging the external memory matrix in MANN, we store and update users» historical records explicitly, which enhances the expressiveness of the model. We further adapt our framework to both item- and feature-level versions, and design the corresponding memory reading writing operations according to the nature of personalized recommendation scenarios. Compared with state-of-the-art methods that consider users» sequential behavior for recommendation, e.g., sequential recommenders with recurrent neural networks (RNN) or Markov chains, our method achieves significantly and consistently better performance on four real-world datasets. Moreover, experimental analyses show that our method is able to extract the intuitive patterns of how users» future actions are affected by previous behaviors.", "We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples." ] }
1907.04752
2957510905
We present the first algorithm for regular expression matching that can take advantage of sparsity in the input instance. Our main result is a new algorithm that solves regular expression matching in @math time, where @math is the number of positions in the regular expression, @math is the length of the string, and @math is the density of the instance, defined as the total number of active states in a simulation of the position automaton. This measure is a lower bound on the total number of active states in simulations of all classic polynomial-sized finite automata. Our bound improves the best known bounds for regular expression matching by almost a linear factor in the density of the problem. The key component in the result is a novel linear-space representation of the position automaton that supports state-set transition computation in near-linear time in the size of the input and output state sets.
A related construction, by Chang and Paige @cite_26 , considered compact representations of the position automaton that support efficiently implementing NFA-to-DFA conversion by subset construction. They presented a linear-space representation that supports efficiently computing the set of states @math reachable via a character from a state-set @math in time @math . Note that this does not imply an efficient representation for computing state-set transitions, since @math can be significantly larger than @math . Another related result, by Groz and Maneth @cite_0 , considered the special case of deterministic position automata, that is, position automata that are always in a single state in any state-set simulation and where state-set transitions always map a singleton state-set to a singleton state-set. They present a linear-space representation that supports these restricted singleton state-set transition computations in @math time. Theorem captures this result as a special case (i.e., when @math ) and generalizes it to all position automata.
{ "cite_N": [ "@cite_0", "@cite_26" ], "mid": [ "2604580999", "1966773178" ], "abstract": [ "Abstract A linear time algorithm is presented for testing determinism of a regular expression. It is shown that an input word of length n can be matched against a deterministic regular expression of length m in time O ( m + n log ⁡ log ⁡ m ) . If the deterministic regular expression has bounded depth of alternating union and concatenation operators, then matching can be performed in time O ( m + n ) . These results extend to regular expressions containing numerical occurrence indicators.", "There are two principal methods for turning regular expressions into NFA's — one due to McNaughton and Yamada and another due to Thompson. Unfortunately, both have drawbacks. Given a regular expression R of length r and with s occurrences of alphabet symbols, Chang and Paige (1992) and Bruggemann-Klein (1993) gave Θ(m + r) time and O(r) space algorithms to produce a Θ(m) space representation of McNaughton and Yamada's NFA with s + 1 states and m transitions. The problem with this NFA is that m = Θ(s2) in the worst case. Thompson's method takes Θ(r) time and space to construct a Θ(r) space NFA with Θ(r) states and Θ(r) transitions. The problem with this NFA is that r can be arbitrarily larger than s. We overcome drawbacks of both methods with a Θ(r) time Θ(s) space algorithm to construct an O(s) space representation of McNaughton and Yamada's NFA. Given any set V of NFA states, our representation can be used to compute the set U of states one transition away from the states in V in optimal time O(¦V¦ + ¦U¦). McNaughton and Yamada's NFA requires Θ(¦V¦ × ¦U¦) time in the worst case. Using Thompson's NFA, the equivalent calculation requires Θ(r) time in the worst case. Comparative benchmarks show that an implementation of our method outperforms implementations of competing methods with respect to time for NFA construction, NFA accepting testing, and NFA-to-DFA conversion by subset construction. Throughout this paper program transformations are used to design algorithms and derive programs. A transformation of special importance is a form of finite differencing used previously by Douglas Smith to improve the efficiency of functional programs." ] }
1907.04658
2958996718
The game of Go has a long history in East Asian countries, but the field of Computer Go did not catch up to humans until the past couple of years. While the rules of Go are simple, the strategy and combinatorics of the game are immensely complex. Even within the past couple of years, new programs that rely on neural networks to evaluate board positions still explore many orders of magnitude more board positions per second than a professional can. We attempt to mimic human intuition in the game by creating a convolutional neural policy network which, without any sort of tree search, should play the game at or above the level of most humans. We introduce three structures and training methods that aim to create a strong Go player: non-rectangular convolutions, which will better learn the shapes on the board; supervised learning, training on a data set of 53,000 professional games; and reinforcement learning, training on games played between different versions of the network. Our network has already surpassed the skill level of intermediate amateurs simply using supervised learning. Further training and implementation of non-rectangular convolutions and reinforcement learning will likely increase this skill level much further.
Computer Go, the creation of Go-playing agents for computers, has existed since as early as 1968 @cite_8 . As mentioned previously, before convolutional neural networks became popular, MCTS was the most powerful method to play Go. These techniques generally required many enhancements and optimizations. For example, the MCTS-Solver variant described in @cite_18 , which detects forced wins and losses, allowed bots to quickly solve games. Rapid Action Value Estimation, used in @cite_7 , allowed bots to share value statistics across related game states, thereby significantly reducing the number of computations necessary. The problem with these techniques was that there were too many states to evaluate. Therefore, programs such as those introduced in @cite_12 and @cite_9 were only truly competitive on @math boards, achieving only moderate success on @math boards.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_8", "@cite_9", "@cite_12" ], "mid": [ "1603772156", "1568143599", "1598695809", "2155625359", "2101101673" ], "abstract": [ "Recently, Monte-Carlo Tree Search (MCTS) has advanced the field of computer Go substantially. In this article we investigate the application of MCTS for the game Lines of Action (LOA). A new MCTS variant, called MCTS-Solver, has been designed to play narrow tactical lines better in sudden-death games such as LOA. The variant differs from the traditional MCTS in respect to backpropagation and selection strategy. It is able to prove the game-theoretical value of a position given sufficient time. Experiments show that a Monte-Carlo LOA program using MCTS-Solver defeats a program using MCTS by a winning score of 65 . Moreover, MCTS-Solver performs much better than a program using MCTS against several different versions of the world-class ?βprogram MIA. Thus, MCTS-Solver constitutes genuine progress in using simulation-based search approaches in sudden-death games, significantly improving upon MCTS-based programs.", "The Monte-Carlo Tree Search algorithm has been successfully applied in various domains. However, its performance heavily depends on the Monte-Carlo part. In this paper, we propose a generic way of improving the Monte-Carlo simulations by using RAVE values, which already strongly improved the tree part of the algorithm. We prove the generality and efficiency of our approach by showing improvements on two different applications: the game of Havannah and the game of Go.", "", "In order to promote computer Go and stimulate further development and research in the field, the event activities, Computational Intelligence Forum and World 9times9 Computer Go Championship, were held in Taiwan. This study focuses on the invited games played in the tournament Taiwanese Go players versus the computer program MoGo held at the National University of Tainan (NUTN), Tainan, Taiwan. Several Taiwanese Go players, including one 9-Dan (9D) professional Go player and eight amateur Go players, were invited by NUTN to play against MoGo from August 26 to October 4, 2008. The MoGo program combines all-moves-as-first (AMAF) rapid action value estimation (RAVE) values, online \"upper confidence tree (UCT)-like\" values, offline values extracted from databases, and expert rules. Additionally, four properties of MoGo are analyzed including: (1) the weakness in corners, (2) the scaling over time, (3) the behavior in handicap games, and (4) the main strength of MoGo in contact fights. The results reveal that MoGo can reach the level of 3 Dan (3D) with: (1) good skills for fights, (2) weaknesses in corners, in particular, for \"semeai\" situations, and (3) weaknesses in favorable situations such as handicap games. It is hoped that the advances in AI and computational power will enable considerable progress in the field of computer Go, with the aim of achieving the same levels as computer chess or Chinese chess in the future.", "FUEGO is both an open-source software framework and a state-of-the-art program that plays the game of Go. The framework supports developing game engines for full-information two-player board games, and is used successfully in a substantial number of projects. The FUEGO Go program became the first program to win a game against a top professional player in 9 × 9 Go. It has won a number of strong tournaments against other programs, and is competitive for 19 × 19 as well. 
This paper gives an overview of the development and current state of the FUEGO project. It describes the reusable components of the software framework and specific algorithms used in the Go engine." ] }
1907.04658
2958996718
The game of Go has a long history in East Asian countries, but the field of Computer Go had not caught up to human play until the past couple of years. While the rules of Go are simple, the strategy and combinatorics of the game are immensely complex. Even within the past couple of years, new programs that rely on neural networks to evaluate board positions still explore many orders of magnitude more board positions per second than a professional can. We attempt to mimic human intuition in the game by creating a convolutional neural policy network which, without any sort of tree search, should play the game at or above the level of most humans. We introduce three structures and training methods that aim to create a strong Go player: non-rectangular convolutions, which will better learn the shapes on the board; supervised learning, training on a data set of 53,000 professional games; and reinforcement learning, training on games played between different versions of the network. Our network has already surpassed the skill level of intermediate amateurs using supervised learning alone. Further training and implementation of non-rectangular convolutions and reinforcement learning will likely increase this skill level much further.
Previous work has been done on using only convolutional neural networks to play Go. These networks offered boosts over traditional MCTS but were not able to achieve the same level of play as AlphaGo. In 2008, @cite_6 created a convolutional neural network to play Go using an ensemble of networks. They were only able to achieve a then state-of-the-art 36.9% move-prediction accuracy. More recent approaches such as @cite_16 have introduced fancier convolutional neural networks that rely on long-term predictions for extended play; @cite_16 achieved a slightly better accuracy of around 56%. It's important to remember that deep convolutional neural networks are not just used to play Go. Many games such as Chess, Stratego, Hexagon and, more obviously, Atari games can be treated as images with labels being where to move next @cite_22 @cite_0 @cite_20 @cite_21 . This shows the flexibility of deep convolutional neural networks as a tool to model many hard-to-play (and hard-to-understand) games.
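To make the shared architecture of these works concrete, here is a minimal sketch of a convolutional move-prediction (policy) network in PyTorch; the plane count, layer widths, and kernel sizes are illustrative assumptions, not the configuration of any cited network.

```python
import torch
import torch.nn as nn

class GoPolicyNet(nn.Module):
    """Minimal convolutional move-prediction (policy) network.

    Input: (batch, planes, 19, 19) board feature planes (e.g., own
    stones, opponent stones, empty points). Output: log-probabilities
    over the 361 board points. Plane count, widths, and kernel sizes
    are illustrative, not those of any cited network.
    """
    def __init__(self, planes=3, width=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(planes, width, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, kernel_size=1),   # one logit per board point
        )

    def forward(self, x):
        return torch.log_softmax(self.trunk(x).flatten(1), dim=1)

# One supervised step on (position, expert move) pairs:
net = GoPolicyNet()
boards = torch.zeros(8, 3, 19, 19)          # dummy batch of positions
moves = torch.randint(0, 361, (8,))         # expert move indices
loss = nn.NLLLoss()(net(boards), moves)
loss.backward()
```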
{ "cite_N": [ "@cite_22", "@cite_21", "@cite_6", "@cite_0", "@cite_16", "@cite_20" ], "mid": [ "", "1757796397", "1589775371", "", "2963284097", "1947291763" ], "abstract": [ "", "We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.", "Building a strong computer Go player is a longstanding open problem. In this paper we consider the related problem of predicting the moves made by Go experts in professional games. The ability to predict experts' moves is useful, because it can, in principle, be used to narrow the search done by a computer Go player. We applied an ensemble of convolutional neural networks to this problem. Our main result is that the ensemble learns to predict 36.9 of the moves made in test expert Go games, improving upon the state of the art, and that the best single convolutional neural network of the ensemble achieves 34 accuracy. This network has less than 104parameters.", "", "Abstract: Competing with top human players in the ancient game of Go has been a long-term goal of artificial intelligence. Go's high branching factor makes traditional search techniques ineffective, even on leading-edge hardware, and Go's evaluation function could change drastically with one stone change. Recent works [ (2015); Clark & Storkey (2015)] show that search is not strictly necessary for machine Go players. A pure pattern-matching approach, based on a Deep Convolutional Neural Network (DCNN) that predicts the next move, can perform as well as Monte Carlo Tree Search (MCTS)-based open source Go engines such as Pachi [Baudis & Gailly (2012)] if its search budget is limited. We extend this idea in our bot named darkforest, which relies on a DCNN designed for long-term predictions. Darkforest substantially improves the win rate for pattern-matching approaches against MCTS-based approaches, even with looser search budgets. Against human players, the newest versions, darkfores2, achieve a stable 3d level on KGS Go Server as a ranked bot, a substantial improvement upon the estimated 4k-5k ranks for DCNN reported in Clark & Storkey (2015) based on games against other machine players. Adding MCTS to darkfores2 creates a much stronger player named darkfmcts3: with 5000 rollouts, it beats Pachi with 10k rollouts in all 250 games; with 75k rollouts it achieves a stable 5d level in KGS server, on par with state-of-the-art Go AIs (e.g., Zen, DolBaram, CrazyStone) except for AlphaGo [ (2016)]; with 110k rollouts, it won the 3rd place in January KGS Go Tournament.", "Abstract: The game of Go is more challenging than other board games, due to the difficulty of constructing a position or move evaluation function. In this paper we investigate whether deep convolutional networks can be used to directly represent and learn this knowledge. We train a large 12-layer convolutional neural network by supervised learning from a database of human professional games. The network correctly predicts the expert move in 55 of positions, equalling the accuracy of a 6 dan human player. 
When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GnuGo in 97 of games, and matched the performance of a state-of-the-art Monte-Carlo tree search that simulates a million positions per move." ] }
1907.04662
2960490173
What is a good exploration strategy for an agent that interacts with an environment in the absence of external rewards? Ideally, we would like to get a policy driving towards a uniform state-action visitation (highly exploring) in a minimum number of steps (fast mixing), in order to ease efficient learning of any goal-conditioned policy later on. Unfortunately, it is remarkably arduous to directly learn an optimal policy of this nature. In this paper, we propose a novel surrogate objective for learning highly exploring and fast mixing policies, which focuses on maximizing a lower bound to the entropy of the steady-state distribution induced by the policy. In particular, we introduce three novel lower bounds, that lead to as many optimization problems, that tradeoff the theoretical guarantees with computational complexity. Then, we present a model-based reinforcement learning algorithm, IDE @math AL, to learn an optimal policy according to the introduced objective. Finally, we provide an empirical evaluation of this algorithm on a set of hard-exploration tasks.
Other works propose to intrinsically motivate the agent towards learning to reach all possible states in the environment @cite_19 . To extend this same idea from the tabular setting to the context of a continuous, high-dimensional state space, @cite_5 employ a generative model to seek a maximum-entropy goal distribution. In @cite_21 , the authors propose an approach, called Go-Explore, to methodically reach any state by keeping an archive of visited states and the best trajectory that brought the agent to each of them. At each iteration, the agent draws a promising state from the archive, returns there by replicating the stored trajectory (Go), and then explores from this state trying to discover new states (Explore).
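A skeletal version of that archive-and-return loop is sketched below. It assumes a deterministic, resettable, Gym-like environment and a domain-specific `cell` discretization, and the selection heuristic is a placeholder, so treat it as an illustration of the control flow rather than the cited implementation.

```python
import random

def cell(state):
    """Discretize a state into an archive cell (domain-specific; a stub here)."""
    return tuple(state) if hasattr(state, "__iter__") else state

def select_promising(archive, rng):
    """Placeholder selection heuristic; Go-Explore favors novel/promising cells."""
    return rng.choice(list(archive.keys()))

def go_explore(env, iterations=1000, explore_steps=20, seed=0):
    """Skeletal Go-Explore loop (deterministic, resettable env assumed).

    `archive` maps each discovered cell to the shortest action sequence
    known to reach it from the initial state, so the Go phase can return
    there exactly by replaying that sequence.
    """
    rng = random.Random(seed)
    archive = {cell(env.reset()): []}
    for _ in range(iterations):
        target = select_promising(archive, rng)
        env.reset()
        for a in archive[target]:            # Go: replay stored trajectory
            state, _, _, _ = env.step(a)
        trajectory = list(archive[target])
        for _ in range(explore_steps):       # Explore: act (here, randomly)
            a = env.action_space.sample()
            state, _, done, _ = env.step(a)
            trajectory.append(a)
            c = cell(state)
            if c not in archive or len(trajectory) < len(archive[c]):
                archive[c] = list(trajectory)  # keep the best (shortest) path
            if done:
                break
    return archive
```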
{ "cite_N": [ "@cite_19", "@cite_5", "@cite_21" ], "mid": [ "2293729149", "2922007426", "2914261249" ], "abstract": [ "While intrinsically motivated learning agents hold considerable promise to overcome limitations of more supervised learning systems, quantitative evaluation and theoretical analysis of such agents are difficult. We propose to consider a restricted setting for autonomous learning where systematic evaluation of learning performance is possible. In this setting the agent needs to learn to navigate in a Markov Decision Process where extrinsic rewards are not present or are ignored. We present a learning algorithm for this scenario and evaluate it by the amount of exploration it uses to learn the environment.", "In standard reinforcement learning, each new skill requires a manually-designed reward function, which takes considerable manual effort and engineering. Self-supervised goal setting has the potential to automate this process, enabling an agent to propose its own goals and acquire skills that achieve these goals. However, such methods typically rely on manually-designed goal distributions, or heuristics to force the agent to explore a wide range of states. We propose a formal exploration objective for goal-reaching policies that maximizes state coverage. We show that this objective is equivalent to maximizing the entropy of the goal distribution together with goal reaching performance, where goals correspond to entire states. We present an algorithm called Skew-Fit for learning such a maximum-entropy goal distribution, and show that under certain regularity conditions, our method converges to a uniform distribution over the set of possible states, even when we do not know this set beforehand. Skew-Fit enables self-supervised agents to autonomously choose and practice diverse goals. Our experiments show that it can learn a variety of manipulation tasks from images, including opening a door with a real robot, entirely from scratch and without any manually-designed reward function.", "A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember previously visited states, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through any available means (including by introducing determinism), then robustify via imitation learning. The combined effect of these principles is a dramatic performance improvement on hard-exploration problems. On Montezuma's Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the previous state of the art. Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record, meeting even the strictest definition of \"superhuman\" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean score of almost 60k points exceeds expert human performance. 
Because Go-Explore produces high-performing demonstrations automatically and cheaply, it also outperforms imitation learning work where humans provide solution demonstrations. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in many domains, especially those that harness a simulator during training (e.g. robotics)." ] }
1907.04662
2960490173
What is a good exploration strategy for an agent that interacts with an environment in the absence of external rewards? Ideally, we would like to get a policy driving towards a uniform state-action visitation (highly exploring) in a minimum number of steps (fast mixing), in order to ease efficient learning of any goal-conditioned policy later on. Unfortunately, it is remarkably arduous to directly learn an optimal policy of this nature. In this paper, we propose a novel surrogate objective for learning highly exploring and fast mixing policies, which focuses on maximizing a lower bound to the entropy of the steady-state distribution induced by the policy. In particular, we introduce three novel lower bounds, that lead to as many optimization problems, that tradeoff the theoretical guarantees with computational complexity. Then, we present a model-based reinforcement learning algorithm, IDE @math AL, to learn an optimal policy according to the introduced objective. Finally, we provide an empirical evaluation of this algorithm on a set of hard-exploration tasks.
Another promising intrinsic objective is to make value out of the exploration phase by acquiring a set of reusable skills, typically formalized by means of the options framework @cite_10 , that can be combined hierarchically to achieve challenging goals. In @cite_2 , a set of options is learned by maximizing an intrinsic reward generated at the occurrence of some user-defined salient event. The approach proposed in @cite_15 , which presents some similarities with the work in @cite_21 , is based on learning a set of options to return with high probability to promising states. In their context, a promising state is both hard to reach and a doorway to many other states. In this way, the learned options heuristically favor an even exploration of the state space.
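For reference, an option in the sense of @cite_10 is a triple (initiation set, intra-option policy, termination condition); the sketch below encodes that triple directly, with a Gym-like `env.step` interface assumed for execution.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Option:
    """An option per @cite_10: (initiation set, intra-option policy, termination).

    initiation(s)  -> bool: may the option start in state s?
    policy(s)      -> action to take while the option is executing
    termination(s) -> probability of terminating in state s
    """
    initiation: Callable[[Any], bool]
    policy: Callable[[Any], Any]
    termination: Callable[[Any], float]

def run_option(env, state, option, rng):
    """Execute an option until its (stochastic) termination; return final state."""
    assert option.initiation(state), "option not available in this state"
    while True:
        state, _, done, _ = env.step(option.policy(state))
        if done or rng.random() < option.termination(state):
            return state
```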
{ "cite_N": [ "@cite_15", "@cite_21", "@cite_10", "@cite_2" ], "mid": [ "1601389419", "2914261249", "2109910161", "1486707268" ], "abstract": [ "A central role in the development process of children is played by self-exploratory activities Through a playful interaction with the surrounding environment, they test their own capabilities, explore novel situations, and understand how their actions affect the world During this kind of exploration, interesting situations may be discovered By learning to reach these situations, a child incrementally develops more and more complex skills Inspired by studies from psychology, neuroscience, and machine learning, we designed SMILe (Self-Motivated Incremental Learning), a learning framework that allows artificial agents to autonomously identify and learn a set of abilities useful to face several different tasks, through an iterated three phase process: by means of a random exploration of the environment (babbling phase), the agent identifies interesting situations and generates an intrinsic motivation (motivating phase) aimed at learning how to get into these situations (skill acquisition phase) This process incrementally increases the skills of the agent, so that new interesting configurations can be experienced We present results on two gridworld environments to show how SMILe makes it possible to learn skills that enable the agent to perform well and robustly in many different tasks.", "A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember previously visited states, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through any available means (including by introducing determinism), then robustify via imitation learning. The combined effect of these principles is a dramatic performance improvement on hard-exploration problems. On Montezuma's Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the previous state of the art. Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record, meeting even the strictest definition of \"superhuman\" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean score of almost 60k points exceeds expert human performance. Because Go-Explore produces high-performing demonstrations automatically and cheaply, it also outperforms imitation learning work where humans provide solution demonstrations. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in many domains, especially those that harness a simulator during training (e.g. robotics).", "Learning, planning, and representing knowledge at multiple levels of temporal ab- straction are key, longstanding challenges for AI. 
In this paper we consider how these challenges can be addressed within the mathematical framework of reinforce- ment learning and Markov decision processes (MDPs). We extend the usual notion of action in this framework to include options—closed-loop policies for taking ac- tion over a period of time. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as mus- cle twitches and joint torques. Overall, we show that options enable temporally abstract knowledge and action to be included in the reinforcement learning frame- work in a natural and general way. In particular, we show that options may be used interchangeably with primitive actions in planning methods such as dynamic pro- gramming and in learning methods such as Q-learning. Formally, a set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), and the theory of SMDPs provides the foundation for the theory of options. However, the most interesting issues concern the interplay between the underlying MDP and the SMDP and are thus beyond SMDP theory. We present results for three such cases: 1) we show that the results of planning with options can be used during execution to interrupt options and thereby perform even better than planned, 2) we introduce new intra-option methods that are able to learn about an option from fragments of its execution, and 3) we propose a notion of subgoal that can be used to improve the options themselves. All of these results have precursors in the existing literature; the contribution of this paper is to establish them in a simpler and more general setting with fewer changes to the existing reinforcement learning framework. In particular, we show that these results can be obtained without committing to (or ruling out) any particular approach to state abstraction, hierarchy, function approximation, or the macro-utility problem.", "Humans and other animals often engage in activities for their own sakes rather than as steps toward solving practical problems. Psychologists call these intrinsically motivated behaviors. What we learn during intrinsically motivated behavior is essential for our development as competent autonomous entities able to efficiently solve a wide range of practical problems as they arise. In this paper we present initial results from a computational study of intrinsically motivated learning aimed at allowing artificial agents to construct and extend hierarchies of reusable skills that are needed for competent autonomy. At the core of the model are recent theoretical and algorithmic advances in computational reinforcement learning, specifically, new concepts related to skills and new learning algorithms for learning with skill hierarchies." ] }
1907.04580
2962430776
This paper studies the stabilization for a kind of linear and impulse control systems in finite-dimensional spaces, where impulse instants appear periodically. We present several characterizations on the stabilization; show how to design feedback laws; and provide locations for impulse instants to ensure the stabilization. In the proofs of these results, we set up a discrete LQ problem; derived a discrete dynamic programming principle, built up a variant of Riccati's equation; applied repeatedly the Kalman controllability decomposition; and used a controllability result built up in [17].
In @cite_1 , the author built a Kalman-type controllability decomposition for the system @math . Based on this decomposition, a necessary condition, as well as a sufficient condition, for the stabilization of the above system was given; both results are related to some kind of reachability. The stabilization of the above system was also studied in @cite_12 .
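The system itself is masked in this excerpt; for concreteness, a common form for such linear impulsive control systems, and the block structure a Kalman-type decomposition produces for a pair of matrices (A, B), are sketched below. This is an illustrative form, and the specific system studied in @cite_1 may differ.

```latex
% A generic linear impulsive control system: free flow between impulse
% instants \tau_k, with control acting only at the instants (illustrative).
\dot{x}(t) = A x(t), \quad t \in (\tau_k, \tau_{k+1}), \qquad
x(\tau_k^+) = x(\tau_k) + B u_k, \quad k \in \mathbb{N}.

% Kalman controllability decomposition: a change of coordinates T splits
% the state into a controllable part and an uncontrollable part.
T^{-1} A T = \begin{pmatrix} A_c & A_{12} \\ 0 & A_{\bar{c}} \end{pmatrix},
\qquad
T^{-1} B = \begin{pmatrix} B_c \\ 0 \end{pmatrix}.
```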
{ "cite_N": [ "@cite_1", "@cite_12" ], "mid": [ "2139559232", "2161442655" ], "abstract": [ "In this paper, we address an output feedback stabilization problem for a class of linear impulsive systems that accommodate arbitrarily-spaced impulse times and possibly singular state transition matrices. By combining recent results for state feedback stabilization and state estimation, we show that a separation property holds and formulate an output feedback compensation scheme in which the feedback loop is closed between a discrete-time measurement and a continuoustime control input. Rather than directly adopting an observerbased structure involving the time-varying gains associated with the separate stabilization and estimation problems, we construct a purely discrete-time compensator followed by a memoryless generalized hold device that achieves closed-loop exponential stability.", "This paper establishes the equivalence of three stabilizability-related properties for a class of linear impulsive systems. The first involves a gramian-based condition inspired by results for time-varying, discrete-time linear systems introduced decades ago. The second is the ability to achieve closed-loop exponential stability via state feedback. Finally, the third property is exponential stability of an ‘unreachable’ subsystem identified from a decomposition of the original system derived from an invariant subspace that characterizes the set of reachable states. A consequence of this analysis is that full state reachability of a linear impulsive system is not necessary for state feedback stabilization, a well-known fact for linear time-invariant systems. The main ideas of the paper are applied to the problem of synchronizing two Lorenz oscillators using underactuated impulsive control." ] }
1907.04580
2962430776
This paper studies the stabilization for a kind of linear and impulse control systems in finite-dimensional spaces, where impulse instants appear periodically. We present several characterizations on the stabilization; show how to design feedback laws; and provide locations for impulse instants to ensure the stabilization. In the proofs of these results, we set up a discrete LQ problem; derived a discrete dynamic programming principle, built up a variant of Riccati's equation; applied repeatedly the Kalman controllability decomposition; and used a controllability result built up in [17].
Regarding the controllability of impulse control systems, we mention the works @cite_10 @cite_13 @cite_19 @cite_16 @cite_9 @cite_14 @cite_15 and the references therein.
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_9", "@cite_19", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2074704816", "2031217568", "2581030625", "2029314834", "1989542040", "2045019399", "2152531925" ], "abstract": [ "This paper studies the controllability and observability of a class of linear piecewise constant impulsive systems. Necessary and sufficient criteria for reachability and controllability are established, respectively. It is proved that the reachability is equivalent to the controllability under some mild conditions. Then, necessary and sufficient criteria for observability and determinability of such systems are established, respectively. It is also proved that the observability is equivalent to the determinability under some mild conditions. Our criteria are of geometric type, they can be transformed into algebraic type conveniently. Finally, a numerical example is given to illustrate the utility of our criteria.", "The present note investigates the stabilization, controllability and optimal control problem of Boolean networks with impulsive effects and state constraints. By using the semi-tensor product method, the algebraic form of the Boolean networks with impulsive effects and state constraints is derived. The stabilization and controllability issues of the systems are investigated and some necessary and sufficient conditions are obtained. In addition, the Mayer-type optimal control problem is also studied and algorithms are provided to design the control sequence. Furthermore, examples are given to illustrate the main results.", "Abstract This paper studies the approximate and null controllability for impulse controlled systems of heat equations coupled by a pair ( A , B ) of constant matrices. We present a necessary and sufficient condition for the approximate controllability, which is exactly Kalman's controllability rank condition of ( A , B ) . We prove that when such a system is approximately controllable, the approximate controllability over an interval [ 0 , T ] can be realized by adding controls at arbitrary q ( A , B ) different control instants 0 τ 1 τ 2 ⋯ τ q ( A , B ) T , provided that τ q ( A , B ) − τ 1 d A , where d A ≜ min ⁡ π | Im λ | : λ ∈ σ ( A ) and q ( A , B ) ≤ n . We also show that in general, such systems are not null controllable.", "Many dynamic systems in physics, chemistry, biology, engineering, and information science have impulsive dynamical behaviors due to abrupt jumps at certain instants during the dynamical processes. These complex dynamic behaviors can be modeled by impulsive differential systems. This paper studies the controllability and observability for a class of time-varying impulsive control systems. Several sufficient and necessary conditions for state controllability and state observability of such systems are established and the corresponding criteria for time-invariant impulsive control systems are also obtained. Meanwhile, several new results associated with variation of parameters for time-varying impulsive control systems are derived.", "This paper is concerned with the controllability and observability for a class of piecewise linear time-varying impulsive systems. Several sufficient and necessary conditions for state controllability and observability of such systems are established. 
Meanwhile, corresponding criteria for time-invariant impulsive systems are also obtained and the criteria are compared with the existing results.", "For a linear impulsive system, the set of states that are reachable from the origin when the initial time, impulse times, and final time are fixed is contained in an invariant subspace determined by the system data. It is known that reversibility of the system is sufficient to yield, for a specified initial time, the existence of some impulse time set and final time for which the reachable set equals the invariant subspace. In this paper, we relax the reversibility requirement and present a condition that is necessary as well as sufficient under which this property holds. This new condition involves the property of achieving reversibility via feedback and admits an explicit geometric characterization. Moreover, this feedback-reversibility property only needs to hold for the subsystem defined as the full system restricted to the invariant subspace. We further show that feedback-reversibility of the restricted system ensures that the reachable set equals the invariant subspace for almost any impulse time set and final time for which the number of impulse times contained in the underlying time interval exceeds a lower bound.", "Many practical systems in physics, chemistry, biology, engineering, and information science have impulsive dynamical behaviors due to abrupt changes at certain instants during the dynamical processes. These complex dynamical behaviors can be modeled by impulsive differential systems. This paper studies the controllability and observability issues for a general time-varying impulsive control systems. Sufficient and necessary conditions for state controllability and state observability of the impulsive control systems are established and their applications to time-invariant impulsive control systems are also discussed. Furthermore, several new results associated with variation of parameters for time-varying impulsive control systems are derived." ] }
1907.04580
2962430776
This paper studies the stabilization for a kind of linear and impulse control systems in finite-dimensional spaces, where impulse instants appear periodically. We present several characterizations on the stabilization; show how to design feedback laws; and provide locations for impulse instants to ensure the stabilization. In the proofs of these results, we set up a discrete LQ problem; derived a discrete dynamic programming principle, built up a variant of Riccati's equation; applied repeatedly the Kalman controllability decomposition; and used a controllability result built up in [17].
In @cite_9 , the authors studied the controllability of the system @math (here @math , @math and @math ). They found @math (defined in ) with @math and @math such that for each @math and each @math with @math , the above system is controllable, provided that @math satisfies the Kalman controllability rank condition. This result is used in the proofs of Theorem , as well as of Theorem .
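For reference, the Kalman controllability rank condition for a pair (A, B), together with the spacing condition on the impulse instants stated in the abstract of @cite_9 , reads:

```latex
% Kalman controllability rank condition for (A, B), with A of size n-by-n:
\operatorname{rank}\,[\, B \;\; AB \;\; \cdots \;\; A^{n-1} B \,] = n.

% Impulse-instant spacing condition from @cite_9: controls at
% q(A,B) \le n instants \tau_1 < \cdots < \tau_{q(A,B)} suffice, provided
\tau_{q(A,B)} - \tau_1 < d_A, \qquad
d_A \,\triangleq\, \min\{\, \pi / |\operatorname{Im}\lambda| \;:\; \lambda \in \sigma(A) \,\}.
```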
{ "cite_N": [ "@cite_9" ], "mid": [ "2581030625" ], "abstract": [ "Abstract This paper studies the approximate and null controllability for impulse controlled systems of heat equations coupled by a pair ( A , B ) of constant matrices. We present a necessary and sufficient condition for the approximate controllability, which is exactly Kalman's controllability rank condition of ( A , B ) . We prove that when such a system is approximately controllable, the approximate controllability over an interval [ 0 , T ] can be realized by adding controls at arbitrary q ( A , B ) different control instants 0 τ 1 τ 2 ⋯ τ q ( A , B ) T , provided that τ q ( A , B ) − τ 1 d A , where d A ≜ min ⁡ π | Im λ | : λ ∈ σ ( A ) and q ( A , B ) ≤ n . We also show that in general, such systems are not null controllable." ] }
1907.04580
2962430776
This paper studies the stabilization for a kind of linear and impulse control systems in finite-dimensional spaces, where impulse instants appear periodically. We present several characterizations on the stabilization; show how to design feedback laws; and provide locations for impulse instants to ensure the stabilization. In the proofs of these results, we set up a discrete LQ problem; derived a discrete dynamic programming principle, built up a variant of Riccati's equation; applied repeatedly the Kalman controllability decomposition; and used a controllability result built up in [17].
Regarding optimal control for impulse control systems, we mention the works @cite_24 @cite_4 @cite_11 @cite_8 @cite_7 and the references therein.
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_8", "@cite_24", "@cite_11" ], "mid": [ "2073506225", "1985491254", "2127615550", "1524099673", "2326361197" ], "abstract": [ "This paper addresses the problem of impulsive control and optimization for linear dynamical systems. An essential benefit of impulsive control is that such controls may be simpler to implement and involve cheaper control mechanisms. We shall establish some impulsive controllability criteria and obtain some conditions for the existence of optimal solutions. An example is also worked though employing Maple symbolic computation.", "Optimal control problem of semilinear evolutionary distributed parameter systems with impulse controls is considered. Necessary conditions of optimal controls are derived. The result generalises the usual Pontryagin's maximum principle.", "We consider the optimal control problem of minimizing some quadratic functional over all possible solutions of an internally controlled multidimensional heat equation with a periodic terminal state constraint. This problem has a unique optimal solution, which can be characterized by an optimality system derived from the Pontryagin maximum principle. We define two approximations of this optimal control problem. The first one is an impulse approximation and consists of considering a system of linear heat equations with impulse control. The second one is obtained by the sample-and-hold procedure applied to the control, resulting in a sampled-data approximation of the controlled heat equation. We prove that both problems have a unique optimal solution, and we establish precise error estimates for the optimal controls and optimal states of the initial problem with respect to its impulse and sampled-data approximations.", "", "This paper is mainly concerned with a class of optimal control problems of systems governed by the nonlinear impulsive differential equation on time scale. The reasonable weak solution of nonlinear impulsive differential equation on time scale is introduced and the existence and uniqueness of the weak solution and its properties are presented. By @math strong @math weak lower semicontinuity of integral functional on time scale, we give the existence of optimal controls. Using integration by parts formula on time scale, the necessary conditions of optimality are derived. An example on mathematical programming is also presented for demonstration." ] }
1907.04580
2962430776
This paper studies the stabilization for a kind of linear and impulse control systems in finite-dimensional spaces, where impulse instants appear periodically. We present several characterizations on the stabilization; show how to design feedback laws; and provide locations for impulse instants to ensure the stabilization. In the proofs of these results, we set up a discrete LQ problem; derived a discrete dynamic programming principle, built up a variant of Riccati's equation; applied repeatedly the Kalman controllability decomposition; and used a controllability result built up in [17].
For the general theory of impulse systems, we refer readers to @cite_6 @cite_22 @cite_23 and the references therein.
{ "cite_N": [ "@cite_23", "@cite_22", "@cite_6" ], "mid": [ "1484739396", "2043104638", "2050240106" ], "abstract": [ "Geared primarily to an audience consisting of mathematically advanced undergraduate or beginning graduate students, this text may additionally be used by engineering students interested in a rigorous, proof-oriented systems course that goes beyond the classical frequency-domain material and more applied courses. The minimal mathematical background required is a working knowledge of linear algebra and differential equations. The book covers what constitutes the common core of control theory and is unique in its emphasis on foundational aspects. While covering a wide range of topics written in a standard theorem proof style, it also develops the necessary techniques from scratch. In this second edition, new chapters and sections have been added, dealing with time optimal control of linear systems, variational and numerical approaches to nonlinear control, nonlinear controllability via Lie-algebraic methods, and controllability of recurrent nets and of linear systems with bounded controls.", "A new hyperchaotic system has been proposed recently. It is generated by controlling a unified chaotic system to hyperchaotic via a simple technique using a sinusoidal parameter perturbation control input. In this paper, we further investigate its dynamical behaviors, its circuit implementation and its impulsive control. Different chaotic attractors are illustrated by both numerical simulations and electronic experiments. It is also shown that the new hyperchaotic system can be stabilized by impulsive control.", "Many evolution processes are characterized by the fact that at certain moments of time they experience a change of state abruptly. These processes are subject to short-term perturbations whose duration is negligible in comparison with the duration of the process. Consequently, it is natural to assume that these perturbations act instantaneously, that is, in the form of impulses. It is known, for example, that many biological phenomena involving thresholds, bursting rhythm models in medicine and biology, optimal control models in economics, pharmacokinetics and frequency modulated systems, do exhibit impulsive effects. Thus impulsive differential equations, that is, differential equations involving impulse effects, appear as a natural description of observed evolution phenomena of several real world problems." ] }
1901.00898
2905297777
This work examines the role of reinforcement learning in reducing the severity of on-road collisions by controlling velocity and steering in situations in which contact is imminent. We construct a model, given camera images as input, that is capable of learning and predicting the dynamics of obstacles, cars and pedestrians, and train our policy using this model. Two policies that control both braking and steering are compared against a baseline where the only action taken is (conventional) braking in a straight line. The two policies are trained using two distinct reward structures, one where any and all collisions incur a fixed penalty, and a second one where the penalty is calculated based on already established delta-v models of injury severity. The results show that both policies exceed the performance of the baseline, with the policy trained using injury models having the highest performance.
A system capable of detecting early a pedestrian's intention to cross the road and of performing an evasive maneuver when avoidance by braking is impossible is presented in @cite_13 . However, it relies on the existence of a Road Side Unit, placed at dangerous road spots, to detect pedestrian intention and send this information to the On Board Unit in the vehicle.
{ "cite_N": [ "@cite_13" ], "mid": [ "2010256323" ], "abstract": [ "We present an active pedestrian protection system that performs an autonomous lane-keeping evasive maneuver in urban traffic scenarios when collision avoidance by braking is no longer possible. The system focuses on pedestrians standing at the curb and intending to cross the street despite an approaching car. It is demonstrated that the evasive maneuver of the car can be initiated before the pedestrian's foot hits the lane, by means of video-based motion contour histograms of oriented gradients and stationary detection. Using clothoid-based real-time trajectory planning and a lateral control of the car, combining feedforward and feedback control, the difference between the driven and the calculated trajectories is kept below 10 cm at maximum lateral accelerations of 4 ms-2 and -5 ms-2. We present the technical realization of the system and its precision with respect to intention recognition and driven trajectories. A case study showed that the system reacted faster than human drivers in five out of 11 cases, with an average time gain of 214 ms, even though the drivers were able to pay the utmost attention to the behavior of the crossing pedestrian." ] }
1901.00898
2905297777
This work examines the role of reinforcement learning in reducing the severity of on-road collisions by controlling velocity and steering in situations in which contact is imminent. We construct a model, given camera images as input, that is capable of learning and predicting the dynamics of obstacles, cars and pedestrians, and train our policy using this model. Two policies that control both braking and steering are compared against a baseline where the only action taken is (conventional) braking in a straight line. The two policies are trained using two distinct reward structures, one where any and all collisions incur a fixed penalty, and a second one where the penalty is calculated based on already established delta-v models of injury severity. The results show that both policies exceed the performance of the baseline, with the policy trained using injury models having the highest performance.
As opposed to most previous research, @cite_7 propose a model-free collision avoidance system using Deep Reinforcement Learning (DRL). They derive a balanced reward function for an autonomous braking system based on DRL, where the action space allows four choices: no braking, or weak, medium, or strong braking. The reward function consists of one component that penalizes the agent for braking too early, while the second is a penalty for collision with the pedestrian that takes the velocity of the vehicle into account in order to reflect the degree of damage. Due to unstable learning performance (collisions rarely occurring), the authors use a replay memory @cite_1 in order to remind the agent of collisions, whatever the present policy. The results show that collision rates are zero for time-to-collision (TTC) values greater than or equal to 1.5 seconds. This prompts us to study TTC values below @math .
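The reward structure just described can be sketched as follows; the functional forms and coefficients are hypothetical stand-ins chosen to show the two competing terms, not the exact function derived in @cite_7 .

```python
def braking_reward(speed, brake_level, collided, t):
    """Illustrative two-term reward in the spirit of @cite_7 (coefficients
    and functional forms are hypothetical, not the derived function).

    - early-braking penalty: braking at small t (far from the pedestrian)
      costs more, discouraging overly conservative policies;
    - collision penalty: scaled by impact speed to reflect damage severity.
    """
    alpha, beta = 0.1, 1.0          # hypothetical weights
    r = 0.0
    if brake_level > 0:             # 0 = no braking, 1/2/3 = weak/medium/strong
        r -= alpha * brake_level / (t + 1.0)
    if collided:
        r -= beta * speed ** 2
    return r
```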
{ "cite_N": [ "@cite_1", "@cite_7" ], "mid": [ "2145339207", "2586886183" ], "abstract": [ "An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.", "In this paper, we propose a new autonomous braking system based on deep reinforcement learning. The proposed autonomous braking system automatically decides whether to apply the brake at each time step when confronting the risk of collision using the information on the obstacle obtained by the sensors. The problem of designing brake control is formulated as searching for the optimal policy in Markov decision process (MDP) model where the state is given by the relative position of the obstacle and the vehicle's speed, and the action space is defined as whether brake is stepped or not. The policy used for brake control is learned through computer simulations using the deep reinforcement learning method called deep Q-network (DQN). In order to derive desirable braking policy, we propose the reward function which balances the damage imposed to the obstacle in case of accident and the reward achieved when the vehicle runs out of risk as soon as possible. DQN is trained for the scenario where a vehicle is encountered with a pedestrian crossing the urban road. Experiments show that the control agent exhibits desirable control behavior and avoids collision without any mistake in various uncertain environments." ] }
1901.00898
2905297777
This work examines the role of reinforcement learning in reducing the severity of on-road collisions by controlling velocity and steering in situations in which contact is imminent. We construct a model, given camera images as input, that is capable of learning and predicting the dynamics of obstacles, cars and pedestrians, and train our policy using this model. Two policies that control both braking and steering are compared against a baseline where the only action taken is (conventional) braking in a straight line. The two policies are trained using two distinct reward structures, one where any and all collisions incur a fixed penalty, and a second one where the penalty is calculated based on already established delta-v models of injury severity. The results show that both policies exceed the performance of the baseline, with the policy trained using injury models having the highest performance.
Various metrics are employed in the literature as measures of crash severity; while this makes for grim reading, they provide widely adopted, quantitative, data-driven models of accident outcomes. They include the Acceleration Severity Index (ASI), the Occupant Impact Velocity (OIV), and Delta-V. However, @cite_19 show that the former two do not offer a significant predictive advantage over the latter. Since its emergence in the 1970s, Delta-V has been the traditional metric for crash severity; it is defined as the absolute change between the pre-collision velocity and the post-collision velocity, with the assumption that larger differences in velocity are correlated with more severe injuries ( @cite_9 ).
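Written out, the definition in the previous sentence reads as below (vector form; in practice scalar speeds along the impact direction are often used):

```latex
% Delta-V: magnitude of the change in vehicle velocity across the collision.
\Delta v \;=\; \left\lVert \vec{v}_{\mathrm{post}} - \vec{v}_{\mathrm{pre}} \right\rVert .
```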
{ "cite_N": [ "@cite_19", "@cite_9" ], "mid": [ "1969207082", "594023927" ], "abstract": [ "The occupant impact velocity (OIV) and acceleration severity index (ASI) are competing measures of crash severity used to assess occupant injury risk in full-scale crash tests involving roadside safety hardware, e.g. guardrail. Delta-V, or the maximum change in vehicle velocity, is the traditional metric of crash severity for real world crashes. This study compares the ability of the OIV, ASI, and delta-V to discriminate between serious and non-serious occupant injury in real world frontal collisions. Vehicle kinematics data from event data recorders (EDRs) were matched with detailed occupant injury information for 180 real world crashes. Cumulative probability of injury risk curves were generated using binary logistic regression for belted and unbelted data subsets. By comparing the available fit statistics and performing a separate ROC curve analysis, the more computationally intensive OIV and ASI were found to offer no significant predictive advantage over the simpler delta-V.", "Delta-V (Δv) is a measure of the severity of a traffic collision, defined as the change in velocity between pre-collision and post-collision trajectories of a vehicle. Delta-V emerged in the 1970s in the context of crash reconstruction analysis, and is considered by some researchers to be the best single predictor of crash severity. However, this indicator has not been applied to the analysis of traffic conflicts, until recently when it was incorporated into the automated conflict analysis algorithms of the Surrogate Safety Assessment Model (SSAM). This paper introduces Delta-V and demonstrates how it overcomes shortcomings present in several traditional measures of traffic conflict severity. We discuss the ambiguity present in the literature on the topic of traffic conflict severity, and suggest the adoption of alternative terminology and definitions. We demonstrate a new approach, incorporating Delta-V, to estimate the collision propensity and potential collision severity of a traffic conflict." ] }
1901.00898
2905297777
This work examines the role of reinforcement learning in reducing the severity of on-road collisions by controlling velocity and steering in situations in which contact is imminent. We construct a model, given camera images as input, that is capable of learning and predicting the dynamics of obstacles, cars and pedestrians, and train our policy using this model. Two policies that control both braking and steering are compared against a baseline where the only action taken is (conventional) braking in a straight line. The two policies are trained using two distinct reward structures, one where any and all collisions incur a fixed penalty, and a second one where the penalty is calculated based on already established delta-v models of injury severity. The results show that both policies exceed the performance of the baseline, with the policy trained using injury models having the highest performance.
In @cite_6 , the fatality risk of pedestrians as a function of the vehicle's speed on impact is studied using the GIDAS (German In-Depth Accident Study) dataset. The dataset includes data from 2127 pedestrians who were involved in accidents between 1999 and 2007. They present a now widely-adopted approximation of the fatality risk as a function of @math , the velocity when impacting the pedestrian (in km/h).
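The formula itself is not recoverable from this excerpt. Fatality-risk curves of this kind are typically logistic in the impact speed, so its general shape can be written as below; this form is an assumption, not a quotation of the cited work, and the coefficients beta_0 and beta_1 are fitted to the GIDAS data in @cite_6 and not reproduced here.

```latex
% Generic logistic fatality-risk curve in impact speed v (km/h);
% beta_0, beta_1 are data-fitted coefficients (values not given here).
P_{\mathrm{fatal}}(v) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 v)}} .
```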
{ "cite_N": [ "@cite_6" ], "mid": [ "2062417984" ], "abstract": [ "Knowledge of the amount of violence tolerated by the human body is essential when developing and implementing pedestrian safety strategies. When estimating the potential benefits of new countermeasures, the pedestrian fatality risk as a function of impact speed is of particular importance. Although this function has been analysed previously, we state that a proper understanding does not exist. Based on the largest in-depth, pedestrian accident study undertaken to date, we derive an improved risk function for adult pedestrians hit by the front of passenger cars. Our results show far lower fatality risks than generally reported in the traffic safety literature. This discrepancy is primarily explained by sample bias towards severe injury accidents in earlier studies. Nevertheless, a strong dependence on impact speed is found, with the fatality risk at 50 km h being more than twice as high as the risk at 40 km h and more than five times higher than the risk at 30 km h. Our findings should have important implications for the development of pedestrian accident countermeasures worldwide. In particular, the scope of future pedestrian safety policies and research should be broadened to include accidents with impact speeds exceeding 50 km h." ] }
1901.00826
2907783554
The last decade has witnessed an unprecedented growth in the demand for data-driven real-time services. These services are fueled by emerging applications that require rapidly injecting data streams and computing updated analytics results in real-time. In many of such applications, the computing resources are often shared for processing both updates from information sources and queries from end users. This requires joint scheduling of updates and queries because the service provider needs to make a critical decision upon receiving a user query: either it responds immediately with currently available but possibly stale information, or it first processes new updates and then responds with fresher information. Hence, the tradeoff between service performance and information freshness naturally arises in this context. To that end, we propose a simple single-server two-queue model that captures the coupled scheduling of updates and queries and aim to design scheduling policies that can properly address the important tradeoff between performance and freshness. Specifically, we consider the response time as a performance metric and the Age of Information (AoI) as a freshness metric. After demonstrating the limitations of the simplest FCFS policy, we propose two threshold-based policies: the Query-k policy that prioritizes queries and the Update-k policy that prioritizes updates. Then, we rigorously analyze both the response time and the Peak AoI (PAoI) of the threshold-based policies. Further, we propose the Joint-(M,N) policy, which allows flexibly prioritizing updates or queries through choosing different values of two thresholds M and N. Finally, we conduct simulations to evaluate the response time and the PAoI of the proposed policies. The results show that our proposed threshold-based policies can effectively control the balance between performance and freshness.
The notion of AoI was formally introduced in @cite_14 , where the authors analyze the time-average AoI in M/M/1, M/D/1, and D/M/1 systems under the FCFS policy. Since this seminal work, the study of the AoI has attracted a lot of research interest. There is a large body of work that focuses on the analysis of the AoI under a number of queueing models. For example, the work of @cite_7 @cite_6 @cite_12 focuses on the model where the updates arrive according to a Poisson process and are served by a single server. Another body of work considers how to minimize the AoI by carefully designing scheduling policies in different scenarios (e.g., wireless networks @cite_15 @cite_9 and energy harvesting networks @cite_11 @cite_13 ). In @cite_22 , the authors propose the Pull model for investigating the expected AoI at the user's side and discover a new tradeoff between different levels of information freshness and different response times across the servers. Besides the above work on the analysis and optimization of the AoI, several other works consider applications where the AoI is highly relevant (see, e.g., @cite_17 @cite_18 ).
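To make the metric concrete, the toy function below computes the time-average AoI of a single source from (generation time, reception time) pairs of delivered updates: the sawtooth sample path grows linearly between receptions and resets to the age of each fresher update on delivery. This illustrates the definition only, not the queueing analysis of any cited paper.

```python
def time_average_aoi(updates, horizon):
    """Time-average Age of Information over [0, horizon] for one source.

    `updates` is a list of (generation_time, reception_time) pairs of
    delivered updates, sorted by reception time and received in order.
    The age grows linearly between receptions and resets to
    reception_time - generation_time on each delivery, so the integral
    of the sawtooth path is a sum of trapezoids.
    """
    area, t_prev, age_prev = 0.0, 0.0, 0.0   # assume age 0 at t = 0
    for gen, rec in updates:
        age_at_rec = age_prev + (rec - t_prev)
        area += 0.5 * (age_prev + age_at_rec) * (rec - t_prev)
        t_prev, age_prev = rec, rec - gen    # reset to the update's own age
    area += 0.5 * (2 * age_prev + (horizon - t_prev)) * (horizon - t_prev)
    return area / horizon

# Two updates: generated at t = 0.5 and 1.5, received at t = 1.0 and 3.0.
print(time_average_aoi([(0.5, 1.0), (1.5, 3.0)], horizon=4.0))  # 1.375
```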
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_22", "@cite_7", "@cite_9", "@cite_17", "@cite_6", "@cite_15", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2076618162", "1993918491", "2962807737", "2069689609", "2809579289", "1205143274", "2069417241", "2782340910", "2744248483", "1970679118", "1640967807" ], "abstract": [ "Online advertising allows advertisers to only bid and pay for measurable user responses, such as clicks on ads. As a consequence, click prediction systems are central to most online advertising systems. With over 750 million daily active users and over 1 million active advertisers, predicting clicks on Facebook ads is a challenging machine learning task. In this paper we introduce a model which combines decision trees with logistic regression, outperforming either of these methods on its own by over 3 , an improvement with significant impact to the overall system performance. We then explore how a number of fundamental parameters impact the final prediction performance of our system. Not surprisingly, the most important thing is to have the right features: those capturing historical information about the user or ad dominate other types of features. Once we have the right features and the right model (decisions trees plus logistic regression), other factors play small roles (though even small improvements are important at scale). Picking the optimal handling for data freshness, learning rate schema and data sampling improve the model slightly, though much less than adding a high-value feature, or picking the right model to begin with.", "Increasingly ubiquitous communication networks and connectivity via portable devices have engendered a host of applications in which sources, for example people and environmental sensors, send updates of their status to interested recipients. These applications desire status updates at the recipients to be as timely as possible; however, this is typically constrained by limited network resources. In this paper, we employ a time-average age metric for the performance evaluation of status update systems. We derive general methods for calculating the age metric that can be applied to a broad class of service systems. We apply these methods to queue-theoretic system abstractions consisting of a source, a service facility and monitors, with the model of the service facility (physical constraints) a given. The queue discipline of first-come-first-served (FCFS) is explored. We show the existence of an optimal rate at which a source must generate its information to keep its status as timely as possible at all its monitors. This rate differs from those that maximize utilization (throughput) or minimize status packet delivery delay. While our abstractions are simpler than their real-world counterparts, the insights obtained, we believe, are a useful starting point in understanding and designing systems that support real time status updates.", "The Age-of-Information (AoI) has recently been proposed as an important metric for investigating the timeliness performance in information-update systems. Prior studies on AoI optimization often consider a Push model, which is concerned about when and how to \"push\" (i.e., generate and transmit) the updated information to the user. In stark contrast, in this paper we introduce a new Pull model, which is more relevant for certain applications (such as the real-time stock quotes service), where a user sends requests to the servers to proactively \"pull\" the information of interest. 
Moreover, we propose to employ request replication to reduce the AoI. Interestingly, we find that under this new Pull model, replication schemes capture a novel tradeoff between different levels of information freshness and different response times across the servers, which can be exploited to minimize the expected AoI at the user's side. Specifically, assuming Poisson updating process at the servers and exponentially distributed response time, we derive a closed-form formula for computing the expected AoI and obtain the optimal number of responses to wait for to minimize the expected AoI. Finally, we conduct numerical simulations to elucidate our theoretical results. Our findings show that waiting for more than one response can significantly reduce the AoI in most scenarios.", "", "We consider the problem of scheduling real-time traffic with hard deadlines in a wireless ad hoc network. In contrast to existing real-time scheduling policies that merely ensure a minimal timely throughput, our design goal is to provide guarantees on both the timely throughput and data freshness in terms of age-of-information (AoI), which is a newly proposed metric that captures the \"age\" of the most recently received information at the destination of a link. The main idea is to introduce the AoI as one of the driving factors in making scheduling decisions. We first prove that the proposed scheduling policy is feasibility-optimal, i.e., satisfying the per-traffic timely throughput requirement. Then, we derive an upper bound on a considered data freshness metric in terms of AoI, demonstrating that the network-wide data freshness is guaranteed and can be tuned under the proposed scheduling policy. Interestingly, we reveal that the improvement of network data freshness is at the cost of slowing down the convergence of the timely throughput. Extensive simulations are performed to validate our analytical results. Both analytical and simulation results confirm the capability of the proposed scheduling policy to improve the data freshness without sacrificing the feasibility optimality.", "Recent advances in vehicular networks have enforced researchers to focus on various information dissemination techniques. Exchanging information among the vehicles is imperative due to the ever-changing network topology in vehicular networks. However, random transmitter selection in traditional CSMA based channel access mechanism limits the delay performance. Data, such as state information, is often time critical, and hence, efficient information dissemination techniques to improve delay performance are essential. In this work, we aim to minimize the average system age which is the mean number of time slots old a vehicle's information is at all other vehicles in the network. To achieve this, we explore the benefits of simultaneous transmission along with piggybacking of information for multi-hop communication. While allowing simultaneous transmission guarantees faster dissemination of information, piggybacking facilitates dissemination of more information per transmission, thereby keeping the network more updated. We have also analysed the relationship between piggybacked information and number of vehicles in the network. Simulation results show improvement in network performance. Our analytical results are in good agreement with the simulation results.", "We consider the system where a source randomly generates status update messages and transmits them via a network cloud to the intended destination. 
These update messages can take different times to traverse the network, which we model as exponential service times, and may result in packets reaching the destination out of order, rendering some of the earlier transmissions obsolete. We analyze the status update age for such a system, and show that it tracks well with simulation results.", "We consider a wireless broadcast network with a base station sending time-sensitive information to a number of clients through unreliable channels. The Age of Information (AoI), namely the amount of time that elapsed since the most recently delivered packet was generated, captures the freshness of the information. We formulate a discrete-time decision problem to find a transmission scheduling policy that minimizes the expected weighted sum AoI of the clients in the network. We first show that in symmetric networks a Greedy policy, which transmits the packet with highest current age, is optimal. For general networks, we develop three low-complexity scheduling policies: a randomized policy, a Max-Weight policy and a Whittle's Index policy, and derive performance guarantees as a function of the network configuration. To the best of our knowledge, this is the first work to derive performance guarantees for scheduling policies that attempt to minimize AoI in wireless networks with unreliable channels. Numerical results show that both Max-Weight and Whittle's Index policies outperform the other scheduling policies in every configuration simulated, and achieve near optimal performance.", "In this paper, we study how to optimally manage the freshness of information updates sent from a source node to a destination via a channel. A proper metric for data freshness at the destination is the age-of-information, or simply age, which is defined as how old the freshest received update is, since the moment that this update was generated at the source node (e.g., a sensor). A reasonable update policy is the zero-wait policy, i.e., the source node submits a fresh update once the previous update is delivered, which achieves the maximum throughput and the minimum delay. Surprisingly, this zero-wait policy does not always minimize the age. This counter-intuitive phenomenon motivates us to study how to optimally control information updates to keep the data fresh and to understand when the zero-wait policy is optimal. We introduce a general age penalty function to characterize the level of dissatisfaction on data staleness and formulate the average age penalty minimization problem as a constrained semi-Markov decision problem with an uncountable state space. We develop efficient algorithms to find the optimal update policy among all causal policies and establish sufficient and necessary conditions for the optimality of the zero-wait policy. Our investigation shows that the zero-wait policy is far from the optimum if: 1) the age penalty function grows quickly with respect to the age; 2) the packet transmission times over the channel are positively correlated over time; or 3) the packet transmission times are highly random (e.g., following a heavy-tail distribution).", "We examine multiple independent sources providing status updates to a monitor through a first-come-first-served M/M/1 queue. We formulate a status-age timeliness metric and find the region of feasible average status ages for a pair of updating sources. 
In the presence of interfering traffic with a given offered load, we show the existence of an optimal rate at which a source should generate its updates.", "A source submits status updates to a service facility for delivery to a monitor. Each update requires energy and the source is powered by a stochastic energy harvesting system. With knowledge of the service facility state, the source avoids queue-induced delays by submitting a fresh update only after the service completion of a prior update. For a source with a large battery, we evaluate updating policies using a status age timeliness metric. We show that an optimal policy is lazy; following a service completion, the service facility is frequently left idle even though the server may have sufficient energy to submit an update." ] }
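For readers unfamiliar with the two freshness metrics referenced throughout this record, the sketch below computes the time-average AoI (the area under the sawtooth age curve) and the mean Peak AoI from a log of update timestamps. It is a minimal illustration of the standard definitions from the AoI literature, assuming updates are delivered in order as under FCFS; the function name and data layout are our own.

```python
def aoi_stats(updates):
    """Time-average AoI and mean Peak AoI from a time-ordered list of
    (generation_time, delivery_time) pairs, assuming each delivered
    update is fresher than the previous one (as under FCFS)."""
    # Peak AoI just before the i-th delivery: time elapsed since the
    # generation of the previously freshest update.
    peaks = [updates[i][1] - updates[i - 1][0] for i in range(1, len(updates))]
    # Time-average AoI: integrate the sawtooth age curve between
    # consecutive deliveries (each piece is a trapezoid).
    area = 0.0
    for i in range(1, len(updates)):
        (g_prev, d_prev), (_, d_cur) = updates[i - 1], updates[i]
        lo, hi = d_prev - g_prev, d_cur - g_prev  # age after / just before delivery
        area += 0.5 * (lo + hi) * (d_cur - d_prev)
    horizon = updates[-1][1] - updates[0][1]
    return area / horizon, sum(peaks) / len(peaks)

# Example: updates generated at t = 0, 2, 5 and delivered at t = 1, 3, 7.
avg_aoi, mean_paoi = aoi_stats([(0, 1), (2, 3), (5, 7)])
print(avg_aoi, mean_paoi)  # 16/6 ≈ 2.67 and (3 + 5) / 2 = 4.0
```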
1901.00826
2907783554
The last decade has witnessed an unprecedented growth in the demand for data-driven real-time services. These services are fueled by emerging applications that require rapidly injecting data streams and computing updated analytics results in real-time. In many such applications, the computing resources are often shared for processing both updates from information sources and queries from end users. This requires joint scheduling of updates and queries because the service provider needs to make a critical decision upon receiving a user query: either it responds immediately with currently available but possibly stale information, or it first processes new updates and then responds with fresher information. Hence, the tradeoff between service performance and information freshness naturally arises in this context. To that end, we propose a simple single-server two-queue model that captures the coupled scheduling of updates and queries and aim to design scheduling policies that can properly address the important tradeoff between performance and freshness. Specifically, we consider the response time as a performance metric and the Age of Information (AoI) as a freshness metric. After demonstrating the limitations of the simplest FCFS policy, we propose two threshold-based policies: the Query-k policy that prioritizes queries and the Update-k policy that prioritizes updates. Then, we rigorously analyze both the response time and the Peak AoI (PAoI) of the threshold-based policies. Further, we propose the Joint-(M,N) policy, which allows flexibly prioritizing updates or queries through choosing different values of the two thresholds M and N. Finally, we conduct simulations to evaluate the response time and the PAoI of the proposed policies. The results show that our proposed threshold-based policies can effectively control the balance between performance and freshness.
Despite the aforementioned studies on service performance and information freshness, the tradeoff between them has often been neglected in the literature (partially due to the nature of the considered applications), except for the following limited work. In @cite_3 , the tradeoff between performance and freshness is considered for database-driven web servers, where the goal is to optimize performance under a freshness constraint. The work of @cite_4 proposes to combine performance and freshness into a single compound metric and addresses the tradeoff between them by optimizing this compound metric. Further, the work of @cite_4 has been extended to account for user preferences for performance and freshness @cite_16 . In stark contrast to these studies, which provide only heuristic solutions, in this paper we aim to systematically understand this tradeoff by providing theoretical results with rigorous analysis.
{ "cite_N": [ "@cite_16", "@cite_4", "@cite_3" ], "mid": [ "2145068111", "2137536766", "2031513076" ], "abstract": [ "Typical Web-database systems receive read-only queries, that generate dynamic Web pages as a response, and write-only updates, that keep information up-to-date. Users expect short response times and low staleness. However, it may be extremely hard to apply all updates on time, i.e., keep zero staleness, and also get fast response times, especially in periods of bursty traffic. In this paper, we present the concept of quality contracts (QCs) which combines the two incomparable performance metrics: response time or quality of service (QoS), and staleness or quality of data (QoD). QCs allows individual users to express their preferences for the expected QoS and QoD of their queries by assigning \"profit\" values. To maximize the total profit from submitted QCs, we propose an adaptive algorithm, called QUTS. QUTS addresses the problem of prioritizing the scheduling of updates over queries using a two-level scheduling scheme that dynamically allocates CPU resources to updates and queries according to user preferences. We present the results of an extensive experimental study using real data (taken from a stock information Web site), where we show that QUTS performs better than baseline algorithms under the entire spectrum of QCs; QUTS also adapts fast to changing workloads.", "Web-database systems are nowadays an integral part of everybody’s life, with applications ranging from monitoring trading stock portfolios, to personalized blog aggregation and news services, to personalized weather tracking services. For most of these services to be successful (and their users to be kept satisfied), two criteria need to be met: user requests must be answered in a timely fashion and using fresh data. This paper presents a framework to balance both requirements from the users’ perspective. Toward this, we propose a user satisfaction metric to measure the overall effectiveness of the Web-database system. We also provide a set of algorithms to dynamically optimize this metric, through query admission control and update frequency modulation. Finally, we present extensive experimental results which compare our proposed algorithms to the current state of the art and show that we outperform competitors under various workloads (generated based on real traces) and user requirements.", "Personalization, advertising, and the sheer volume of online data generate a staggering amount of dynamic Web content. In addition to Web caching, view materialization has been shown to accelerate the generation of dynamic Web content. View materialization is an attractive solution as it decouples the serving of access requests from the handling of updates. In the context of the Web, selecting which views to materialize must be decided online and needs to consider both performance and data freshness, which we refer to as the online view selection problem. In this paper, we define data freshness metrics, provide an adaptive algorithm for the online view selection problem that is based on user-specified data freshness requirements, and present experimental results. Furthermore, we examine alternative metrics for data freshness and extend our proposed algorithm to handle multiple users and alternative definitions of data freshness." ] }
1901.00942
2907266819
Decision-making problems can be modeled as combinatorial optimization problems with Constraint Programming formalisms such as Constrained Optimization Problems. However, few Constraint Programming formalisms can deal with both optimization and uncertainty at the same time, and none of them is convenient for modeling the problems we tackle in this paper. Here, we propose a way to deal with combinatorial optimization problems under uncertainty within the classical Constrained Optimization Problems formalism by injecting the Rank Dependent Utility from decision theory. We also propose a proof of concept of our method to show that it is implementable and can solve concrete decision-making problems using a regular constraint solver, and we present a bot that won the partially observable track of the 2018 RTS AI competition. Our results show that it is possible to handle uncertainty with regular Constraint Programming solvers, without having to define a new formalism or develop dedicated solvers. This brings a new perspective on tackling uncertainty in Constraint Programming.
Although the following papers do not deal with uncertainty, they all focus on solving optimization problems in RTS games, in particular StarCraft. Thus, @cite_3 propose a constraint optimization approach to find building placements that wall off a base entrance, making the base easier to defend. @cite_0 @cite_9 propose a combinatorial optimization framework and solver, GHOST, which we used for our experiments. Their Constraint Programming solver has been designed to output good-quality solutions within some tens of milliseconds, making it usable in RTS games.
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_3" ], "mid": [ "1425820592", "2406883374", "1716013926" ], "abstract": [ "GHOST is a framework to help game developers to model and implement their own optimization problems, or to simply instantiate a problem already encoded in GHOST. Previous works show that GHOST leads to high-quality solutions in some tens of milliseconds for three RTS-related problems: build order, wall-in placement and target selection. In this paper, we present two new problems in GHOST: pathfinding and resource allocation. The goal of this paper is to show the robustness of the framework, having very good results for a problem it is not designed for (pathfinding), and to show its flexibility, where it is easy to propose different models of the same problem (resource allocation problem).", "This paper presents GHOST , a combinatorial optimization framework that a real-time strategy (RTS) AI developer can use to model and solve any problem encoded as a constraint satisfaction optimization problem (CSP COP). We show a way to model three different problems as a CSP COP, using instances from the RTS game StarCraft as test beds. Each problem belongs to a specific level of abstraction (the target selection as reactive control problem, the wall-in as a tactics problem, and the build order planning as a strategy problem). In our experiments, GHOST shows good results computed within some tens of milliseconds. We also show that GHOST outperforms state-of-the-art constraint solvers, matching them on the resources allocation problem, a common combinatorial optimization problem.", "This paper presents a constraint optimization approach to walling in real-time strategy (RTS) games. Walling is a specific type of spatial reasoning, typically employed by human expert players and not currently fully exploited in RTS game AI, consisting on finding configurations of buildings to completely or partially block paths. Our approach is based on local search, and is specifically designed for the real-time nature of RTS games. We present experiments in the context of the RTS game StarCraft showing promising results." ] }
1901.00942
2907266819
Decision-making problems can be modeled as combinatorial optimization problems with Constraint Programming formalisms such as Constrained Optimization Problems. However, few Constraint Programming formalisms can deal with both optimization and uncertainty at the same time, and none of them are convenient to model problems we tackle in this paper. Here, we propose a way to deal with combinatorial optimization problems under uncertainty within the classical Constrained Optimization Problems formalism by injecting the Rank Dependent Utility from decision theory. We also propose a proof of concept of our method to show it is implementable and can solve concrete decision-making problems using a regular constraint solver, and propose a bot that won the partially observable track of the 2018 RTS AI competition. Our result shows it is possible to handle uncertainty with regular Constraint Programming solvers, without having to define a new formalism neither to develop dedicated solvers. This brings new perspective to tackle uncertainty in Constraint Programming.
Beyond Constraint Programming but close enough, @cite_7 use a branch-and-bound algorithm to optimize build orders in the RTS game StarCraft. Like @cite_3 , @cite_6 tackle the problem of optimizing wall-in building placement in StarCraft, but through the prism of Answer Set Programming.
{ "cite_N": [ "@cite_3", "@cite_6", "@cite_7" ], "mid": [ "1716013926", "1599388449", "2098487995" ], "abstract": [ "This paper presents a constraint optimization approach to walling in real-time strategy (RTS) games. Walling is a specific type of spatial reasoning, typically employed by human expert players and not currently fully exploited in RTS game AI, consisting on finding configurations of buildings to completely or partially block paths. Our approach is based on local search, and is specifically designed for the real-time nature of RTS games. We present experiments in the context of the RTS game StarCraft showing promising results.", "In real-time strategy games like StarCraft, skilled players often block the entrance to their base with buildings to prevent the opponent's units from getting inside. This technique, called \"walling-in\", is a vital part of player's skill set, allowing him to survive early aggression. However, current artificial players (bots) do not possess this skill, due to numerous inconveniences surfacing during its implementation in imperative languages like C++ or Java. In this text, written as a guide for bot programmers, we address the problem of finding an appropriate building placement that would block the entrance to player's base, and present a ready to use declarative solution employing the paradigm of answer set programming (ASP). We also encourage the readers to experiment with different declarative approaches to this problem.", "In recent years, real-time strategy (RTS) games have gained interest in the AI research community for their multitude of challenging subproblems — such as collaborative pathfinding, effective resource allocation and unit targeting, to name a few. In this paper we consider the build order problem in RTS games in which we need to find concurrent action sequences that, constrained by unit dependencies and resource availability, create a certain number of units and structures in the shortest possible time span. We present abstractions and heuristics that speed up the search for approximative solutions considerably in the game of StarCraft, and show the efficacy of our method by comparing its real-time performance with that of professional StarCraft players." ] }