Schema: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict)
1901.00811
2907904605
Programming a robot to deal with open-ended tasks remains a challenge, in particular if the robot has to manipulate objects. Launching, grasping, pushing or any other object interaction can be simulated, but the corresponding models are not reversible, and the robot behavior thus cannot be directly deduced. These behaviors are hard to learn without a demonstration, as the search space is large and the reward sparse. We propose a method to autonomously generate a diverse repertoire of simple object interaction behaviors in simulation. Our goal is to bootstrap a robot learning and development process with limited information about what the robot has to achieve and how. This repertoire can be exploited to solve different tasks in reality thanks to a proposed adaptation method, or could be used as a training set for data-hungry algorithms. The proposed approach relies on the definition of a goal space and generates a repertoire of trajectories to reach attainable goals, thus allowing the robot to control this goal space. The repertoire is built with an off-the-shelf simulation thanks to a quality diversity algorithm. The result is a set of solutions tested in simulation only. This may result in two different problems: (1) as the repertoire is discrete and finite, it may not contain the trajectory to deal with a given situation, or (2) some trajectories may lead to a behavior in reality that differs from simulation because of the reality gap. We propose an approach to deal with both issues by using a local linearization between the motion parameters and the observed effects. Furthermore, we present an approach to update the existing solution repertoire with the tests done on the real robot. The approach has been validated in two different experiments on the Baxter robot: a ball launching task and a joystick manipulation task.
Several methods have been put forward to make robots autonomously explore their environment and learn new skills to reach various goals. An important part of this literature treats this as a control problem, where the system learns an inverse model (or multiple inverse models) to generate the motor commands needed to reach an arbitrary position in a given goal space. Following work showing that human babies learn how to control their body through "body babbling" @cite_9 , several methods use a similar motor babbling approach in robots, with random motor commands @cite_16 @cite_23 . The two main issues of these approaches are that (1) learning the inverse model from the collected samples is a challenging supervised machine learning problem for non-toy tasks and (2) sampling the motor space - which may be large - in a sample-efficient way is required and also difficult to achieve. Several approaches have been proposed to tackle these issues.
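To make the motor babbling idea concrete, here is a minimal Python sketch (an illustration only, not any of the cited methods): random motor commands are sampled, their effects are observed through a hypothetical simulate() forward function standing in for the robot or its simulator, and a simple nearest-neighbor regressor is fitted as the inverse model.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def simulate(motor_command):
    # Hypothetical forward model: effect reached for a given motor command.
    return np.tanh(motor_command[:2] + 0.1 * motor_command[2:4])

rng = np.random.default_rng(0)
motor_dim, n_samples = 6, 500

commands = rng.uniform(-1.0, 1.0, size=(n_samples, motor_dim))  # random motor babbling
effects = np.array([simulate(c) for c in commands])             # observed effects

# Inverse model: from a desired effect (goal) back to a motor command.
inverse_model = KNeighborsRegressor(n_neighbors=5).fit(effects, commands)

goal = np.array([0.3, -0.2])
print("command proposed for goal:", inverse_model.predict([goal])[0])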
{ "cite_N": [ "@cite_9", "@cite_16", "@cite_23" ], "mid": [ "2026593493", "2132558143", "1559736362" ], "abstract": [ "A long-standing puzzle in developmental psychology is how infants imitate gestures they cannot see themselves perform (facial gestures). Two critical issues are: (a) the metric infants use to detect cross-modal equivalences in human acts and (b) the process by which they correct their imitative errors. We address these issues in a detailed model of the mechanisms underlying facial imitation. The model can be extended to encompass other types of imitation. The model capitalizes on three new theoretical concepts. First, organ identification is the means by which infants relate parts of their own bodies to corresponding ones of the adult's. Second, body babbling (infants' movement practice gained through self-generated activity) provides experience mapping movements to the resulting body configurations. Third, organ relations provide the metric by which infant and adult acts are perceived in commensurate terms. In imitating, infants attempt to match the organ relations they see exhibited by the adults with those they feel themselves make. We show how development restructures the meaning and function of early imitation. We argue that important aspects of later social cognition are rooted in the initial cross-modal equivalence between self and other found in newborns. ©1997 John Wiley & Sons, Ltd.", "Real-time control of the end-effector of a humanoid robot in external coordinates requires computationally efficient solutions of the inverse kinematics problem. In this context, this paper investigates inverse kinematics learning for resolved motion rate control (RMRC) employing an optimization criterion to resolve kinematic redundancies. Our learning approach is based on the key observations that learning an inverse of a nonuniquely invertible function can be accomplished by augmenting the input representation to the inverse model and by using a spatially localized learning approach. We apply this strategy to inverse kinematics learning and demonstrate how a recently developed statistical learning algorithm, locally weighted projection regression, allows efficient learning of inverse kinematic mappings in an incremental fashion even when input spaces become rather high dimensional. Our results are illustrated with a 30-DOF humanoid robot.", "How does an individual use the knowledge acquired through self exploration as a manipulable model through which to understand others and benefit from their knowledge? How can developmental and social learning be combined for their mutual benefit? In this paper we review a hierarchical architecture (HAMMER) which allows a principled way for combining knowledge through exploration and knowledge from others, through the creation and use of multiple inverse and forward models. We describe how Bayesian Belief Networks can be used to learn the association between a robot’s motor commands and sensory consequences (forward models), and how the inverse association can be used for imitation. Inverse models created through self exploration, as well as those from observing others can coexist and compete in a principled unified framework, that utilises the simulation theory of mind approach to mentally rehearse and understand the actions of others." ] }
1901.00811
2907904605
A proposal to make the sampling process more efficient is to sample in the goal space rather than in the motor space: points are chosen in the goal space and the robot's current inverse model is used to reach them, generating new samples that further train the inverse model online. This "goal babbling" approach has been found to be more sample-efficient than motor babbling when using a small set of simple predefined goal space targets @cite_40 , or random goals that can be generated without any prior knowledge of the expected robot behavior @cite_21 . Since an inverse model is required to generate motions at any time, this model must first be bootstrapped, for example with a random initialization. Further developments of this goal babbling approach improve the sampling efficiency through intrinsic motivations that choose goals maximizing the learning progress @cite_25 .
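A rough sketch of such a goal-babbling loop, under the same assumptions as the previous snippet (simulate() is a hypothetical forward function, and the inverse model is a plain nearest-neighbor regressor rather than any of the cited architectures):

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def simulate(cmd):
    # Hypothetical forward model standing in for the robot or its simulator.
    return np.tanh(cmd[:2] + 0.1 * cmd[2:4])

rng = np.random.default_rng(1)
motor_dim = 6

# Bootstrap: a handful of random commands initializes the inverse model.
cmds = rng.uniform(-1, 1, size=(20, motor_dim))
effects = np.array([simulate(c) for c in cmds])

for _ in range(200):
    goal = rng.uniform(-1, 1, size=2)                        # sample a goal, not a command
    inverse = KNeighborsRegressor(n_neighbors=3).fit(effects, cmds)
    cmd = inverse.predict([goal])[0]
    cmd = cmd + rng.normal(0, 0.05, size=motor_dim)          # small exploration noise
    cmds = np.vstack([cmds, cmd])                            # the new sample refines the model
    effects = np.vstack([effects, simulate(cmd)])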
{ "cite_N": [ "@cite_40", "@cite_21", "@cite_25" ], "mid": [ "2052519881", "2116086091", "2101524054" ], "abstract": [ "We present a neural network approach to early motor learning. The goal is to explore the needs for boot-strapping the control of hand movements in a biologically plausible learning scenario. The model is applied to the control of hand postures of the humanoid robot ASIMO by means of full upper body movements. For training, we use an efficient online scheme for recurrent reservoir networks consisting of supervised backpropagation-decorrelation output adaptation and an unsupervised intrinsic plasticity reservoir optimization. We demonstrate that the network can acquire accurate inverse models for the highly redundant ASIMO, applying bi-manual target motions and exploiting all upper body degrees of freedom. We show that very few, but highly symmetric training data is sufficient to generate excellent generalization capabilities to untrained target motions. We also succeed in reproducing real motion recorded from a human demonstrator, massively differing from the training data in range and dynamics. The demonstrated generalization capabilities provide a fundamental prerequisite for an autonomous and incremental motor learning in an developmentally plausible way. Our exploration process - though not yet fully autonomous - clearly shows that goal-directed exploration can, in contrast to “babbling” of joints angles, be done very efficiently even for many degrees of freedom and non-linear kinematic configurations as ASIMOs.", "We present an approach to learn inverse kinematics of redundant systems without prior- or expert-knowledge. The method allows for an iterative bootstrapping and refinement of the inverse kinematics estimate. The essential novelty lies in a path-based sampling approach: we generate training data along paths, which result from execution of the currently learned estimate along a desired path towards a goal. The information structure thereby induced enables an efficient detection and resolution of inconsistent samples solely from directly observable data. We derive and illustrate the exploration and learning process with a low-dimensional kinematic example that provides direct insight into the bootstrapping process. We further show that the method scales for high dimensional problems, such as the Honda humanoid robot or hyperredundant planar arms with up to 50 degrees of freedom.", "Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research coming from developmental psychology, neuroscience, developmental robotics, and active learning, this paper presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. 
In one of them, a physical robot is placed on a baby play mat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learned. Finally, these various results are discussed in relation to more complex forms of behavioral organization and data coming from developmental psychology" ] }
1901.00811
2907904605
In order to simplify inverse model learning, it has been proposed to divide the goal space into several regions according to a spatial segmentation @cite_35 @cite_38 , or into several independent subspaces corresponding to the state of different objects @cite_1 , and to learn a separate inverse model for each subspace. Another alternative is to use unsupervised learning to learn a goal space representation that improves the goal babbling process @cite_0 @cite_28 .
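A minimal sketch of the modular idea, assuming a hypothetical split of the effect space into two per-object subspaces (the names and the dummy forward mapping are illustrative, not the cited architectures):

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)
subspaces = {"object_A": slice(0, 2), "object_B": slice(2, 4)}   # hypothetical goal subspaces

# Dummy data: motor commands and the resulting 4-D effect vector (2 dims per object).
commands = rng.uniform(-1, 1, size=(300, 6))
effects = np.tanh(np.column_stack([commands[:, :2], 0.5 * commands[:, 2:4]]))

# One inverse model per subspace, trained only on that subspace's effect dimensions.
models = {name: KNeighborsRegressor(n_neighbors=3).fit(effects[:, dims], commands)
          for name, dims in subspaces.items()}

goal_for_A = np.array([0.2, -0.4])
print(models["object_A"].predict([goal_for_A])[0])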
{ "cite_N": [ "@cite_35", "@cite_38", "@cite_28", "@cite_1", "@cite_0" ], "mid": [ "2896027395", "2004303440", "2810132790", "2565678376", "2963973554" ], "abstract": [ "Intelligent adaptive curiosity (IAC) was initially introduced as a developmental mechanism allowing a robot to self-organize developmental trajectories of increasing complexity without preprogramming the particular developmental stages. In this paper, we argue that IAC and other intrinsically motivated learning heuristics could be viewed as active learning algorithms that are particularly suited for learning forward models in unprepared sensorimotor spaces with large unlearnable subspaces. Then, we introduce a novel formulation of IAC, called robust intelligent adaptive curiosity (R-IAC), and show that its performances as an intrinsically motivated active learning algorithm are far superior to IAC in a complex sensorimotor space where only a small subspace is neither unlearnable nor trivial. We also show results in which the learnt forward model is reused in a control scheme. Finally, an open source accompanying software containing these algorithms as well as tools to reproduce all the experiments presented in this paper is made publicly available.", "We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills policies that solve a corresponding distribution of parameterized tasks goals. The architecture makes the robot sample actively novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow to solve it. For both learning and generalization, the system leverages regression techniques which allow to infer the motor policy parameters corresponding to a given novel parameterized task, and based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: (1) learning the inverse kinematics in a highly-redundant robotic arm, (2) learning omnidirectional locomotion with motor primitives in a quadruped robot, and (3) an arm learning to control a fishing rod with a flexible wire. We show that (1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; (2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; (3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which part it cannot.", "Intrinsically motivated goal exploration processes enable agents to autonomously sample goals to explore efficiently complex environments with high-dimensional continuous actions. They have been applied successfully to real world robots to discover repertoires of policies producing a wide diversity of effects. 
Often these algorithms relied on engineered goal spaces but it was recently shown that one can use deep representation learning algorithms to learn an adequate goal space in simple environments. However, in the case of more complex environments containing multiple objects or distractors, an efficient exploration requires that the structure of the goal space reflects the one of the environment. In this paper we show that using a disentangled goal space leads to better exploration performances than an entangled goal space. We further show that when the representation is disentangled, one can leverage it by sampling goals that maximize learning progress in a modular manner. Finally, we show that the measure of learning progress, used to drive curiosity-driven exploration, can be used simultaneously to discover abstract independently controllable features of the environment.", "This article studies algorithms used by a learner to explore high-dimensional structured sensorimotor spaces such as in tool use discovery. In particular, we consider goal babbling architectures that were designed to explore and learn solutions to fields of sensorimotor problems, i.e. to acquire inverse models mapping a space of parameterized sensorimotor problems effects to a corresponding space of parameterized motor primitives. However, so far these architectures have not been used in high-dimensional spaces of effects. Here, we show the limits of existing goal babbling architectures for efficient exploration in such spaces, and introduce a novel exploration architecture called Model Babbling (MB). MB exploits efficiently a modular representation of the space of parameterized problems effects. We also study an active version of Model Babbling (the MACOB architecture). These architectures are compared in a simulated experimental setup with an arm that can discover and learn how to move objects using two tools with different properties, embedding structured high-dimensional continuous motor and sensory spaces.", "Intrinsically motivated goal exploration algorithms enable machines to discover repertoires of policies that produce a diversity of effects in complex environments. These exploration algorithms have been shown to allow real world robots to acquire skills such as tool use in high-dimensional continuous state and action spaces. However, they have so far assumed that self-generated goals are sampled in a specifically engineered feature space, limiting their autonomy. In this work, we propose an approach using deep representation learning algorithms to learn an adequate goal space. This is a developmental 2-stage approach: first, in a perceptual learning stage, deep learning algorithms use passive raw sensor observations of world changes to learn a corresponding latent space; then goal exploration happens in a second stage by sampling goals in this latent space. We present experiments with a simulated robot arm interacting with an object, and we show that exploration algorithms using such learned representations can closely match, and even sometimes improve, the performance obtained using engineered representations." ] }
1901.00811
2907904605
Unsupervised learning algorithms can extract a small set of primitive actions from direct policy search learning traces @cite_39 , thus allowing reinforcement learning to be applied with the acquired action set. Although this approach works well for problems like navigation, where a policy can naturally be described as a sequence of lower-level decisions and actions, it is not straightforward to apply to problems such as learning object manipulation primitives, which lack this structure.
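One simple way to picture this extraction step is clustering: short segments of the continuous learning traces are grouped, and the cluster centers serve as a discrete action set. The sketch below is an assumption-based illustration (random data, plain k-means), not the method of the cited work.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Hypothetical learning traces: short motor segments collected during policy search.
traces = rng.uniform(-1, 1, size=(1000, 4))

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(traces)
primitive_actions = kmeans.cluster_centers_       # a small discrete action set for RL
print(primitive_actions.shape)                    # (8, 4)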
{ "cite_N": [ "@cite_39" ], "mid": [ "2595700172" ], "abstract": [ "Reinforcement learning (RL) problems are hard to solve in a robotics context as classical algorithms rely on discrete representations of actions and states, but in robotics both are continuous. A discrete set of actions and states can be defined, but it requires an expertise that may not be available, in particular in open environments. It is proposed to define a process to make a robot build its own representation for an RL algorithm. The principle is to first use a direct policy search in the sensori-motor space, i.e., with no predefined discrete sets of states nor actions, and then extract from the corresponding learning traces discrete actions and identify the relevant dimensions of the state to estimate the value function. Once this is done, the robot can apply RL: 1) to be more robust to new domains and, if required and 2) to learn faster than a direct policy search. This approach allows to take the best of both worlds: first learning in a continuous space to avoid the need of a specific representation, but at a price of a long learning process and a poor generalization, and then learning with an adapted representation to be faster and more robust." ] }
1901.00811
2907904605
Approaches based on divergent evolutionary algorithms, such as novelty search @cite_8 , learn a repertoire of actions: they exploit the principles of variation and selection to gradually build a discrete set of actions covering a given space, such as a goal space. Since these approaches do not explicitly build an inverse model, they explore directly in the motor space, and because they do not rely on a unique (or locally unique) inverse model, they can find multiple actions that reach the same goal in different ways. Later works introduced quality-diversity algorithms @cite_4 @cite_41 @cite_20 , which combine behavior space exploration with a global or local quality metric.
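The snippet below sketches a greatly simplified MAP-Elites-style loop to illustrate the quality-diversity principle (keep the best genome found so far in each cell of a discretized behavior space); the evaluate() function and all numbers are hypothetical placeholders, not the cited algorithms.

import numpy as np

rng = np.random.default_rng(4)
GRID, GENOME_DIM = 20, 8
archive = {}   # behavior-space cell -> (genome, fitness)

def evaluate(genome):
    # Hypothetical evaluation: a 2-D behavior descriptor and a quality score.
    behavior = np.tanh(genome[:2])
    fitness = -np.sum(genome ** 2)
    return behavior, fitness

def cell(behavior):
    idx = np.floor((behavior + 1.0) / 2.0 * GRID).astype(int)
    return tuple(np.clip(idx, 0, GRID - 1))

for it in range(5000):
    if archive and it > 100:
        key = list(archive)[rng.integers(len(archive))]
        genome = archive[key][0] + rng.normal(0, 0.1, GENOME_DIM)   # mutate an elite
    else:
        genome = rng.uniform(-1, 1, GENOME_DIM)                     # random bootstrap
    behavior, fitness = evaluate(genome)
    c = cell(behavior)
    if c not in archive or fitness > archive[c][1]:
        archive[c] = (genome, fitness)                              # keep the best per cell

print("repertoire size:", len(archive))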
{ "cite_N": [ "@cite_41", "@cite_4", "@cite_20", "@cite_8" ], "mid": [ "1974150877", "2099746672", "2962687375", "" ], "abstract": [ "In contrast to the conventional role of evolution in evolutionary computation (EC) as an optimization algorithm, a new class of evolutionary algorithms has emerged in recent years that instead aim to accumulate as diverse a collection of discoveries as possible, yet where each variant in the collection is as fit as it can be. Often applied in both neuroevolution and morphological evolution, these new quality diversity (QD) algorithms are particularly well-suited to evolution's inherent strengths, thereby offering a promising niche for EC within the broader field of machine learning. However, because QD algorithms are so new, until now no comprehensive study has yet attempted to systematically elucidate their relative strengths and weaknesses under different conditions. Taking a first step in this direction, this paper introduces a new benchmark domain designed specifically to compare and contrast QD algorithms. It then shows how the degree of alignment between the measure of quality and the behavior characterization (which is an essential component of all QD algorithms to date) impacts the ultimate performance of different such algorithms. The hope is that this initial study will help to stimulate interest in QD and begin to unify the disparate ideas in the area.", "An ambitious challenge in artificial life is to craft an evolutionary process that discovers a wide diversity of well-adapted virtual creatures within a single run. Unlike in nature, evolving creatures in virtual worlds tend to converge to a single morphology because selection therein greedily rewards the morphology that is easiest to exploit. However, novelty search, a technique that explicitly rewards diverging, can potentially mitigate such convergence. Thus in this paper an existing creature evolution platform is extended with multi-objective search that balances drives for both novelty and performance. However, there are different ways to combine performance-driven search and novelty search. The suggested approach is to provide evolution with both a novelty objective that encourages diverse morphologies and a local competition objective that rewards individuals outperforming those most similar in morphology. The results in an experiment evolving locomoting virtual creatures show that novelty search with local competition discovers more functional morphological diversity within a single run than models with global competition, which are more predisposed to converge. The conclusions are that novelty search with local competition may complement recent advances in evolving virtual creatures and may in general be a principled approach to combining novelty search with pressure to achieve.", "The optimization of functions to find the best solution according to one or several objectives has a central role in many engineering and research fields. Recently, a new family of optimization algorithms, named quality-diversity (QD) optimization, has been introduced, and contrasts with classic algorithms. Instead of searching for a single solution, QD algorithms are searching for a large collection of both diverse and high-performing solutions. The role of this collection is to cover the range of possible solution types as much as possible, and to contain the best solution for each type. The contribution of this paper is threefold. 
First, we present a unifying framework of QD optimization algorithms that covers the two main algorithms of this family (multidimensional archive of phenotypic elites and the novelty search with local competition), and that highlights the large variety of variants that can be investigated within this family. Second, we propose algorithms with a new selection mechanism for QD algorithms that outperforms all the algorithms tested in this paper. Lastly, we present a new collection management that overcomes the erosion issues observed when using unstructured collections. These three contributions are supported by extensive experimental comparisons of QD algorithms on three different experimental scenarios.", "" ] }
1901.00811
2907904605
Exploiting this repertoire raises challenges that differ from those of learning an inverse model. Reaching a known goal for which an action is already present in the repertoire @cite_8 @cite_41 is simple and inexpensive, as the corresponding action just needs to be selected and executed. Adapting to changes in the problem domain (for example a slightly different environment, or a damaged robot, which modifies the effect of actions) and finding the right action to reach an unknown goal can be done in a sample-efficient way by discretizing the behavior space and using Bayesian optimization to quickly discover the best action @cite_32 @cite_12 @cite_44 . Some recent works propose to use the repertoire as a training set and learn an inverse model with a conditional GAN @cite_7 , but the learned inverse model still has limited accuracy.
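In the spirit of that trial-and-error adaptation scheme, here is a hedged sketch: the simulated quality of each repertoire entry acts as a prior, a Gaussian process models the gap to the real performance from a few real trials, and the next action to try maximizes an upper confidence bound. The real_trial() function and the fitness definitions are hypothetical stand-ins, not the cited implementations.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)
behaviors = rng.uniform(-1, 1, size=(50, 2))       # behavior descriptors of repertoire entries
sim_fitness = -np.linalg.norm(behaviors, axis=1)   # simulated quality, used as a prior

def real_trial(idx):
    # Hypothetical real execution: the optimum is shifted and the outcome noisy.
    return -np.linalg.norm(behaviors[idx] - np.array([0.3, 0.1])) + rng.normal(0, 0.02)

tried, scores = [], []
for _ in range(10):
    if tried:
        gp = GaussianProcessRegressor(kernel=RBF(0.5)).fit(
            behaviors[tried], np.array(scores) - sim_fitness[tried])
        mu, sigma = gp.predict(behaviors, return_std=True)
        ucb = sim_fitness + mu + 1.0 * sigma        # prior + learned correction + exploration
    else:
        ucb = sim_fitness
    idx = int(np.argmax(ucb))
    tried.append(idx)
    scores.append(real_trial(idx))

print("best real score found:", max(scores))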
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_41", "@cite_32", "@cite_44", "@cite_12" ], "mid": [ "2900424942", "", "1974150877", "758372786", "2532372495", "1738827650" ], "abstract": [ "Learning algorithms are enabling robots to solve increasingly challenging real-world tasks. These approaches often rely on demonstrations and reproduce the behavior shown. Unexpected changes in the environment may require using different behaviors to achieve the same effect, for instance to reach and grasp an object in changing clutter. An emerging paradigm addressing this robustness issue is to learn a diverse set of successful behaviors for a given task, from which a robot can select the most suitable policy when faced with a new environment. In this paper, we explore a novel realization of this vision by learning a generative model over policies. Rather than learning a single policy, or a small fixed repertoire, our generative model for policies compactly encodes an unbounded number of policies and allows novel controller variants to be sampled. Leveraging our generative policy network, a robot can sample novel behaviors until it finds one that works for a new environment. We demonstrate this idea with an application of robust ball-throwing in the presence of obstacles. We show that this approach achieves a greater diversity of behaviors than an existing evolutionary approach, while maintaining good efficacy of sampled behaviors, allowing a Baxter robot to hit targets more often when ball throwing in the presence of obstacles.", "", "In contrast to the conventional role of evolution in evolutionary computation (EC) as an optimization algorithm, a new class of evolutionary algorithms has emerged in recent years that instead aim to accumulate as diverse a collection of discoveries as possible, yet where each variant in the collection is as fit as it can be. Often applied in both neuroevolution and morphological evolution, these new quality diversity (QD) algorithms are particularly well-suited to evolution's inherent strengths, thereby offering a promising niche for EC within the broader field of machine learning. However, because QD algorithms are so new, until now no comprehensive study has yet attempted to systematically elucidate their relative strengths and weaknesses under different conditions. Taking a first step in this direction, this paper introduces a new benchmark domain designed specifically to compare and contrast QD algorithms. It then shows how the degree of alignment between the measure of quality and the behavior characterization (which is an essential component of all QD algorithms to date) impacts the ultimate performance of different such algorithms. The hope is that this initial study will help to stimulate interest in QD and begin to unify the disparate ideas in the area.", "Many fields use search algorithms, which automatically explore a search space to find high-performing solutions: chemists search through the space of molecules to discover new drugs; engineers search for stronger, cheaper, safer designs, scientists search for models that best explain data, etc. The goal of search algorithms has traditionally been to return the single highest-performing solution in a search space. Here we describe a new, fundamentally different type of algorithm that is more useful because it provides a holistic view of how high-performing solutions are distributed throughout a search space. 
It creates a map of high-performing solutions at each point in a space defined by dimensions of variation that a user gets to choose. This Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) algorithm illuminates search spaces, allowing researchers to understand how interesting attributes of solutions combine to affect performance, either positively or, equally of interest, negatively. For example, a drug company may wish to understand how performance changes as the size of molecules and their cost-to-produce vary. MAP-Elites produces a large diversity of high-performing, yet qualitatively different solutions, which can be more helpful than a single, high-performing solution. Interestingly, because MAP-Elites explores more of the search space, it also tends to find a better overall solution than state-of-the-art search algorithms. We demonstrate the benefits of this new algorithm in three different problem domains ranging from producing modular neural networks to designing simulated and real soft robots. Because MAP- Elites (1) illuminates the relationship between performance and dimensions of interest in solutions, (2) returns a set of high-performing, yet diverse solutions, and (3) improves finding a single, best solution, it will advance science and engineering.", "The recently introduced Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) is an evolutionary algorithm capable of producing a large archive of diverse, high-performing solutions in a single run. It works by discretizing a continuous feature space into unique regions according to the desired discretization per dimension. While simple, this algorithm has a main drawback: it cannot scale to high-dimensional feature spaces since the number of regions increase exponentially with the number of dimensions. In this paper, we address this limitation by introducing a simple extension of MAP-Elites that has a constant, pre-defined number of regions irrespective of the dimensionality of the feature space. Our main insight is that methods from computational geometry could partition a high-dimensional space into well-spread geometric regions. In particular, our algorithm uses a centroidal Voronoi tessellation (CVT) to divide the feature space into a desired number of regions; it then places every generated individual in its closest region, replacing a less fit one if the region is already occupied. We demonstrate the effectiveness of the new \"CVT-MAP-Elites\" algorithm in high-dimensional feature spaces through comparisons against MAP-Elites in a hexapod locomotion task.", "An intelligent trial-and-error learning algorithm is presented that allows robots to adapt in minutes to compensate for a wide variety of types of damage." ] }
1901.00811
2907904605
The previously described skill learning techniques are generally performed in a simulator, which acts as a direct model of the robotic system and its environment. Simulation can be much more practical than real robotics: it is cheaper, avoids damaging the robot and its environment during experiments, and is much faster, since simulations can run faster than real time and be massively parallelized. However, the actions learnt in a simulator often do not transfer well to a real robot and environment, because the exact physical properties of the robot body and its environment can never be perfectly modeled. The behavioral differences between simulated and real experiments have been termed the reality gap @cite_22 .
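The transferability idea cited above can be pictured as a regression problem: from a few real-robot trials, predict how well each simulated solution will transfer. The sketch below is an assumption-based illustration (random descriptors, an arbitrary discrepancy function, a generic random forest), not the cited implementation.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
descriptors = rng.uniform(-1, 1, size=(200, 4))        # solution descriptors from simulation
tested = rng.choice(200, size=15, replace=False)       # few solutions tried on the real robot
# Hypothetical measured discrepancy between simulated and real behavior.
discrepancy = np.abs(descriptors[tested, 0]) + rng.normal(0, 0.05, size=15)

transferability = RandomForestRegressor(n_estimators=50, random_state=0)
transferability.fit(descriptors[tested], discrepancy)

# Prefer solutions predicted to cross the reality gap well (low predicted discrepancy).
predicted = transferability.predict(descriptors)
print("indices of the 10 most transferable solutions:", np.argsort(predicted)[:10])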
{ "cite_N": [ "@cite_22" ], "mid": [ "1625577255" ], "abstract": [ "In robotics, gradient-free optimization algorithms (e.g. evolutionary algorithms) are often used only in simulation because they require the evaluation of many candidate solutions. Nevertheless, solutions obtained in simulation often do not work well on the real device. The transferability approach aims at crossing this gap between simulation and reality by making the optimization algorithm aware of the limits of the simulation. In the present paper, we first describe the transferability function, that maps solution descriptors to a score representing how well a simulator matches the reality. We then show that this function can be learned using a regression algorithm and a few experiments with the real devices. Our results are sup- ported by an extensive study of the reality gap for a simple quadruped robot whose control parameters are optimized. In particular, we mapped the whole search space in reality and in simulation to understand the differences between the fitness landscapes." ] }
1901.01007
2908087012
Deep Neural Networks (DNNs) have revolutionized numerous applications, but the demand for ever more performance remains unabated. Scaling DNN computations to larger clusters is generally done by distributing tasks in batch mode using methods such as distributed synchronous SGD. Among the issues with this approach is that to make the distributed cluster work with high utilization, the workload distributed to each node must be large, which implies nontrivial growth in the SGD mini-batch size. In this paper, we propose a framework called FPDeep, which uses a hybrid of model and layer parallelism to configure distributed reconfigurable clusters to train DNNs. This approach has numerous benefits. First, the design does not suffer from batch size growth. Second, novel workload and weight partitioning leads to balanced loads of both among nodes. And third, the entire system is a fine-grained pipeline. This leads to high parallelism and utilization and also minimizes the time features need to be cached while waiting for back-propagation. As a result, storage demand is reduced to the point where only on-chip memory is used for the convolution layers. We evaluate FPDeep with the AlexNet, VGG-16, and VGG-19 benchmarks. Experimental results show that FPDeep has good scalability to a large number of FPGAs, with the limiting factor being the FPGA-to-FPGA bandwidth. With 6 transceivers per FPGA, FPDeep shows linearity up to 83 FPGAs. Energy efficiency is evaluated with respect to GOPs/J. FPDeep provides, on average, 6.36x higher energy efficiency than comparable GPU servers.
Much work has addressed the mapping of CNN inference and training to clusters with programmable accelerators, including @cite_32 @cite_23 . Many frameworks and libraries have been deployed, e.g., MXNet @cite_24 , Caffe @cite_10 , and TensorFlow @cite_2 . These systems hide the complexity of workload decomposition and provide friendly programming interfaces, including Python, R, and Scala. For FPGA-based clouds, the prior work is more limited. Microsoft's Catapult project @cite_22 @cite_11 implements a parameterized CNN accelerator cluster which can deliver over 1 TFLOPS with very high energy efficiency. Zhang's CDSC FPGA-Enabled Cluster accelerates CNNs on top of Spark and Hadoop @cite_18 @cite_29 . In @cite_29 , researchers build a deeply pipelined FPGA cluster with 6 Xilinx VC709 boards to accelerate CNNs. In @cite_38 , an FPGA-based framework for CNN training is proposed, but it focuses mainly on single-FPGA designs.
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_22", "@cite_29", "@cite_32", "@cite_24", "@cite_23", "@cite_2", "@cite_10", "@cite_11" ], "mid": [ "2557355796", "", "2272300165", "2475840367", "2769918628", "2186615578", "", "2271840356", "2155893237", "2542189141" ], "abstract": [ "This paper presents a novel reconfigurable framework for training Convolutional Neural Networks (CNNs). The proposed framework is based on reconfiguring a streaming datapath at runtime to cover the training cycle for the various layers in a CNN. The streaming datapath can support various parameterized modules which can be customized to produce implementations with different trade-offs in performance and resource usage. The modules follow the same input and output data layout, simplifying configuration scheduling. For different layers, instances of the modules contain different computation kernels in parallel, which can be customized with different layer configurations and data precision. The associated models on performance, resource and bandwidth can be used in deriving parameters for the datapath to guide the analysis of design trade-offs to meet application requirements or platform constraints. They enable estimation of the implementation specifications given different layer configurations, to maximize performance under the constraints on bandwidth and hardware resources. Experimental results indicate that the proposed module design targeting Maxeler technology can achieve a performance of 62.06 GFLOPS for 32-bit floating-point arithmetic, outperforming existing accelerators. Further evaluation based on training LeNet-5 shows that the proposed framework achieves about 4 times faster than CPU implementation of Caffe and about 7.5 times more energy efficient than the GPU implementation of Caffe.", "", "Recent breakthroughs in the development of multi-layer convolutional neural networks have led to stateof-the-art improvements in the accuracy of non-trivial recognition tasks such as large-category image classification and automatic speech recognition [1]. These many-layered neural networks are large, complex, and require substantial computing resources to train and evaluate [2]. Unfortunately, these demands come at an inopportune moment due to the recent slowing of gains in commodity processor performance. Hardware specialization in the form of GPGPUs, FPGAs, and ASICs offers a promising path towards major leaps in processing capability while achieving high energy efficiency. To harness specialization, an effort is underway at Microsoft to accelerate Deep Convolutional Neural Networks (CNN) using servers augmented with FPGAs—similar to the hardware that is being integrated into some of Microsoft’s datacenters [3]. Initial efforts to implement a single-node CNN accelerator on a mid-range FPGA show significant promise, resulting in respectable performance relative to prior FPGA designs and high-end GPGPUs, at a fraction of the power. In the future, combining multiple FPGAs over a low-latency communication fabric offers further opportunity to train and evaluate models of unprecedented size and quality. Background State-of-the-art deep convolutional neural networks are typically organized into alternating convolutional and max-pooling neural network layers followed by a number of dense, fully-connected layers—as illustrated in the well-known topology by in Figure 1 [1]. Each 3D volume represents an input to a layer, and is transformed into a new 3D volume feeding the subsequent layer. 
In the example below, there are five convolutional layers, three max-pooling layers, and three fully-connected layers. Figure 1. Example of Deep Convolutional Neural Network for Image Classification. Image source: [1]. 1 General Purpose Computing on Graphics Processing Units, Field Programmable Gate Arrays, ApplicationSpecific Integrated Circuits.", "Recently, FPGA-based CNN accelerators have demonstrated superior energy efficiency compared to high-performance devices like GPGPUs. However, due to the constrained on-chip resource and many other factors, single-board FPGA designs may have difficulties in achieving optimal energy efficiency. In this paper we present a deeply pipelined multi-FPGA architecture that expands the design space for optimal performance and energy efficiency. A dynamic programming algorithm is proposed to map the CNN computing layers efficiently to different FPGA boards. To demonstrate the potential of the architecture, we built a prototype system with seven FPGA boards connected with high-speed serial links. The experimental results on AlexNet and VGG-16 show that the prototype can achieve up to 21x and 2x energy efficiency compared to optimized multi-core CPU and GPU implementations, respectively.", "Convolutional Neural Networks have dramatically improved in recent years, surpassing human accuracy on certain problems and performance exceeding that of traditional computer vision algorithms. While the compute pattern in itself is relatively simple, significant compute and memory challenges remain as CNNs may contain millions of floating-point parameters and require billions of floating-point operations to process a single image. These computational requirements, combined with storage footprints that exceed typical cache sizes, pose a significant performance and power challenge for modern compute architectures. One of the promising opportunities to scale performance and power efficiency is leveraging reduced precision representations for all activations and weights as this allows to scale compute capabilities, reduce weight and feature map buffering requirements as well as energy consumption. While a small reduction in accuracy is encountered, these Quantized Neural Networks have been shown to achieve state-of-the-art accuracy on standard benchmark datasets, such as MNIST, CIFAR-10, SVHN and even ImageNet, and thus provide highly attractive design trade-offs. Current research has focused mainly on the implementation of extreme variants with full binarization of weights and or activations, as well typically smaller input images. Within this paper, we investigate the scalability of dataflow architectures with respect to supporting various precisions for both weights and activations, larger image dimensions, and increasing numbers of feature map channels. Key contributions are a formalized approach to understanding the scalability of the existing hardware architecture with cost models and a performance prediction as a function of the target device size. We provide validating experimental results for an ImageNet classification on a server class platform, namely the AWS F1 node.", "MXNet is a multi-language machine learning (ML) library to ease the development of ML algorithms, especially for deep neural networks. Embedded in the host language, it blends declarative symbolic expression with imperative tensor computation. It offers auto differentiation to derive gradients. 
MXNet is computation and memory efficient and runs on various heterogeneous systems, ranging from mobile devices to distributed GPU clusters. This paper describes both the API design and the system implementation of MXNet, and explains how embedding of both symbolic expression and tensor operation is handled in a unified fashion. Our preliminary experiments reveal promising results on large scale deep neural network applications using multiple GPU machines.", "", "TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery. This paper describes the TensorFlow interface and an implementation of that interface that we have built at Google. The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license in November, 2015 and are available at www.tensorflow.org.", "Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models. The framework is a BSD-licensed C++ library with Python and MATLAB bindings for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures. Caffe fits industry and internet-scale media needs by CUDA GPU computation, processing over 40 million images a day on a single K40 or Titan GPU (approx 2 ms per image). By separating model representation from actual implementation, Caffe allows experimentation and seamless switching among platforms for ease of development and deployment from prototyping machines to cloud environments. Caffe is maintained and developed by the Berkeley Vision and Learning Center (BVLC) with the help of an active community of contributors on GitHub. It powers ongoing research projects, large-scale industrial applications, and startup prototypes in vision, speech, and multimedia.", "Hyperscale datacenter providers have struggled to balance the growing need for specialized hardware (efficiency) with the economic benefits of homogeneity (manageability). In this paper we propose a new cloud architecture that uses reconfigurable logic to accelerate both network plane functions and applications. This Configurable Cloud architecture places a layer of reconfigurable logic (FPGAs) between the network switches and the servers, enabling network flows to be programmably transformed at line rate, enabling acceleration of local applications running on the server, and enabling the FPGAs to communicate directly, at datacenter scale, to harvest remote FPGAs unused by their local servers. 
We deployed this design over a production server bed, and show how it can be used for both service acceleration (Web search ranking) and network acceleration (encryption of data in transit at high-speeds). This architecture is much more scalable than prior work which used secondary rack-scale networks for inter-FPGA communication. By coupling to the network plane, direct FPGA-to-FPGA messages can be achieved at comparable latency to previous work, without the secondary network. Additionally, the scale of direct inter-FPGA messaging is much larger. The average round-trip latencies observed in our measurements among 24, 1000, and 250,000 machines are under 3, 9, and 20 microseconds, respectively. The Configurable Cloud architecture has been deployed at hyperscale in Microsoft's production datacenters worldwide." ] }
1901.01007
2908087012
Deep Neural Networks (DNNs) have revolutionized numerous applications, but the demand for ever more performance remains unabated. Scaling DNN computations to larger clusters is generally done by distributing tasks in batch mode using methods such as distributed synchronous SGD. Among the issues with this approach is that to make the distributed cluster work with high utilization, the workload distributed to each node must be large, which implies nontrivial growth in the SGD mini-batch size. In this paper, we propose a framework called FPDeep, which uses a hybrid of model and layer parallelism to configure distributed reconfigurable clusters to train DNNs. This approach has numerous benefits. First, the design does not suffer from batch size growth. Second, novel workload and weight partitioning leads to balanced loads of both among nodes. And third, the entire system is a fine-grained pipeline. This leads to high parallelism and utilization and also minimizes the time features need to be cached while waiting for back-propagation. As a result, storage demand is reduced to the point where only on-chip memory is used for the convolution layers. We evaluate FPDeep with the Alexnet, VGG-16, and VGG-19 benchmarks. Experimental results show that FPDeep has good scalability to a large number of FPGAs, with the limiting factor being the FPGA-to-FPGA bandwidth. With 6 transceivers per FPGA, FPDeep shows linearity up to 83 FPGAs. Energy efficiency is evaluated with respect to GOPs J. FPDeep provides, on average, 6.36x higher energy efficiency than comparable GPU servers.
Most distributed CNN systems, including TensorFlow and CNTK, are based on the distributed synchronous SGD algorithm (Centralized Parallel SGD algorithm - C-PSGD, see Fig. (A)). The Parameter Server topology @cite_28 uses a central parameter node connected to multiple worker nodes. Clearly, there are several bottlenecks: communication load on the central node @cite_20 and idle time spent waiting for straggling worker nodes @cite_39 . Also, for large-scale clusters, the growth in the SGD mini-batch size limits scalability. Lian et al. use a decentralized parallel SGD algorithm (D-PSGD) to build a large-scale cluster @cite_20 . As shown in Fig. (B), each node must maintain its own local copy of the model, so data duplication is inevitable. We complete the taxonomy of mapping CNN applications to distributed clusters in the remainder of the figure. Fig. (C) shows the primary design choices; note that the present work is decentralized MP (model parallelism).
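To make the topology contrast concrete, the following is a minimal, hypothetical NumPy sketch of one D-PSGD round on a ring (an illustration only, not code from the cited systems): every node holds its own model copy, takes a local SGD step on its own mini-batch, and then averages parameters with its ring neighbours instead of synchronizing through a central parameter node.

```python
# Illustrative sketch only (not the cited systems' code): one D-PSGD round on a
# ring topology. Each node keeps a local model copy, applies a local SGD step,
# then gossip-averages with its two ring neighbours; no central parameter node.
import numpy as np

def dpsgd_round(params, grads, lr=0.1):
    """params, grads: lists of per-node parameter/gradient vectors."""
    n = len(params)
    stepped = [p - lr * g for p, g in zip(params, grads)]      # local SGD step
    return [(stepped[i] + stepped[(i - 1) % n] + stepped[(i + 1) % n]) / 3.0
            for i in range(n)]                                  # ring averaging

# toy usage: 4 nodes, 3-dimensional parameters, random local gradients
rng = np.random.default_rng(0)
params = [np.zeros(3) for _ in range(4)]
grads = [rng.normal(size=3) for _ in range(4)]
params = dpsgd_round(params, grads)
print(params[0])
```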
{ "cite_N": [ "@cite_28", "@cite_20", "@cite_39" ], "mid": [ "2060393849", "2963228337", "2336650964" ], "abstract": [ "Big data may contain big values, but also brings lots of challenges to the computing theory, architecture, framework, knowledge discovery algorithms, and domain specific tools and applications. Beyond the 4-V or 5-V characters of big datasets, the data processing shows the features like inexact, incremental, and inductive manner. This brings new research opportunities to research community across theory, systems, algorithms, and applications. Is there some new \"theory\" for the big data? How to handle the data computing algorithms in an operatable manner? This report shares some view on new challenges identified, and covers some of the application scenarios such as micro-blog data analysis and data processing in building next generation search engines.", "Most distributed machine learning systems nowadays, including TensorFlow and CNTK, are built in a centralized fashion. One bottleneck of centralized algorithms lies on high communication cost on the central node. Motivated by this, we ask, can decentralized algorithms be faster than its centralized counterpart? Although decentralized PSGD (D-PSGD) algorithms have been studied by the control community, existing analysis and theory do not show any advantage over centralized PSGD (C-PSGD) algorithms, simply assuming the application scenario where only the decentralized network is available. In this paper, we study a D-PSGD algorithm and provide the first theoretical analysis that indicates a regime in which decentralized algorithms might outperform centralized algorithms for distributed stochastic gradient descent. This is because D-PSGD has comparable total computational complexities to C-PSGD but requires much less communication cost on the busiest node. We further conduct an empirical study to validate our theoretical analysis across multiple frameworks (CNTK and Torch), different network configurations, and computation platforms up to 112 GPUs. On network configurations with low bandwidth or high latency, D-PSGD can be up to one order of magnitude faster than its well-optimized centralized counterparts.", "Distributed training of deep learning models on large-scale training data is typically conducted with asynchronous stochastic optimization to maximize the rate of updates, at the cost of additional noise introduced from asynchrony. In contrast, the synchronous approach is often thought to be impractical due to idle time wasted on waiting for straggling workers. We revisit these conventional beliefs in this paper, and examine the weaknesses of both approaches. We demonstrate that a third approach, synchronous optimization with backup workers, can avoid asynchronous noise while mitigating for the worst stragglers. Our approach is empirically validated and shown to converge faster and to better test accuracies." ] }
1901.01007
2908087012
Deep Neural Networks (DNNs) have revolutionized numerous applications, but the demand for ever more performance remains unabated. Scaling DNN computations to larger clusters is generally done by distributing tasks in batch mode using methods such as distributed synchronous SGD. Among the issues with this approach is that to make the distributed cluster work with high utilization, the workload distributed to each node must be large, which implies nontrivial growth in the SGD mini-batch size. In this paper, we propose a framework called FPDeep, which uses a hybrid of model and layer parallelism to configure distributed reconfigurable clusters to train DNNs. This approach has numerous benefits. First, the design does not suffer from batch size growth. Second, novel workload and weight partitioning leads to balanced loads of both among nodes. And third, the entire system is a fine-grained pipeline. This leads to high parallelism and utilization and also minimizes the time features need to be cached while waiting for back-propagation. As a result, storage demand is reduced to the point where only on-chip memory is used for the convolution layers. We evaluate FPDeep with the Alexnet, VGG-16, and VGG-19 benchmarks. Experimental results show that FPDeep has good scalability to a large number of FPGAs, with the limiting factor being the FPGA-to-FPGA bandwidth. With 6 transceivers per FPGA, FPDeep shows linearity up to 83 FPGAs. Energy efficiency is evaluated with respect to GOPs J. FPDeep provides, on average, 6.36x higher energy efficiency than comparable GPU servers.
The figure shows the design space for mapping CNNs onto distributed nodes. We use the terminology introduced by @cite_25 .
{ "cite_N": [ "@cite_25" ], "mid": [ "2788193959" ], "abstract": [ "Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design. In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. Specifically, we present trends in DNN architectures and the resulting implications on parallelization strategies. We discuss the different types of concurrency in DNNs; synchronous and asynchronous stochastic gradient descent; distributed system architectures; communication schemes; and performance modeling. Based on these approaches, we extrapolate potential directions for parallelism in deep learning." ] }
1901.01007
2908087012
Deep Neural Networks (DNNs) have revolutionized numerous applications, but the demand for ever more performance remains unabated. Scaling DNN computations to larger clusters is generally done by distributing tasks in batch mode using methods such as distributed synchronous SGD. Among the issues with this approach is that to make the distributed cluster work with high utilization, the workload distributed to each node must be large, which implies nontrivial growth in the SGD mini-batch size. In this paper, we propose a framework called FPDeep, which uses a hybrid of model and layer parallelism to configure distributed reconfigurable clusters to train DNNs. This approach has numerous benefits. First, the design does not suffer from batch size growth. Second, novel workload and weight partitioning leads to balanced loads of both among nodes. And third, the entire system is a fine-grained pipeline. This leads to high parallelism and utilization and also minimizes the time features need to be cached while waiting for back-propagation. As a result, storage demand is reduced to the point where only on-chip memory is used for the convolution layers. We evaluate FPDeep with the Alexnet, VGG-16, and VGG-19 benchmarks. Experimental results show that FPDeep has good scalability to a large number of FPGAs, with the limiting factor being the FPGA-to-FPGA bandwidth. With 6 transceivers per FPGA, FPDeep shows linearity up to 83 FPGAs. Energy efficiency is evaluated with respect to GOPs J. FPDeep provides, on average, 6.36x higher energy efficiency than comparable GPU servers.
Data parallelism (Fig. (A)) is the most popular approach in CPU and GPU clouds @cite_24 @cite_2 . It is also widely used in existing FPGA clouds, such as Catapult and CDSC @cite_18 . This method has the drawbacks mentioned in Section I. In CNNs, the configuration of each layer, such as kernel size, pooling size, and stride size, varies greatly, requiring different hardware designs to obtain optimal performance. Thus, FPGAs need to be reconfigured between layers, leading to significant overhead. In addition, as each FPGA executes all layers in sequential order, each layer starts only after the previous layer has completed. Thus, intermediate features and weights need to be stored to and loaded from the host upon completion of each layer, leading to heavy communication with off-chip memory.
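As a point of reference for the data-parallel baseline discussed above, here is a minimal, hypothetical NumPy sketch (not from the cited frameworks): every worker computes gradients on its own data shard, the gradients are averaged, and all workers apply the same update, which is why the effective mini-batch grows with the number of workers.

```python
# Illustrative sketch only: one synchronous data-parallel SGD step. Each worker
# computes a gradient on its own shard; gradients are averaged (an all-reduce or
# parameter-server step) and every worker applies the identical update.
import numpy as np

def data_parallel_step(w, shards, grad_fn, lr=0.1):
    """w: shared parameters; shards: list of per-worker mini-batches."""
    worker_grads = [grad_fn(w, shard) for shard in shards]   # local gradients
    avg_grad = np.mean(worker_grads, axis=0)                  # average (all-reduce)
    return w - lr * avg_grad                                  # identical update everywhere

# toy usage with a hypothetical least-squares gradient on random shards
rng = np.random.default_rng(1)
shards = [(rng.normal(size=(8, 3)), rng.normal(size=8)) for _ in range(4)]
grad_fn = lambda w, s: 2 * s[0].T @ (s[0] @ w - s[1]) / len(s[1])
print(data_parallel_step(np.zeros(3), shards, grad_fn))
```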
{ "cite_N": [ "@cite_24", "@cite_18", "@cite_2" ], "mid": [ "2186615578", "", "2271840356" ], "abstract": [ "MXNet is a multi-language machine learning (ML) library to ease the development of ML algorithms, especially for deep neural networks. Embedded in the host language, it blends declarative symbolic expression with imperative tensor computation. It offers auto differentiation to derive gradients. MXNet is computation and memory efficient and runs on various heterogeneous systems, ranging from mobile devices to distributed GPU clusters. This paper describes both the API design and the system implementation of MXNet, and explains how embedding of both symbolic expression and tensor operation is handled in a unified fashion. Our preliminary experiments reveal promising results on large scale deep neural network applications using multiple GPU machines.", "", "TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery. This paper describes the TensorFlow interface and an implementation of that interface that we have built at Google. The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license in November, 2015 and are available at www.tensorflow.org." ] }
1901.01007
2908087012
Deep Neural Networks (DNNs) have revolutionized numerous applications, but the demand for ever more performance remains unabated. Scaling DNN computations to larger clusters is generally done by distributing tasks in batch mode using methods such as distributed synchronous SGD. Among the issues with this approach is that to make the distributed cluster work with high utilization, the workload distributed to each node must be large, which implies nontrivial growth in the SGD mini-batch size. In this paper, we propose a framework called FPDeep, which uses a hybrid of model and layer parallelism to configure distributed reconfigurable clusters to train DNNs. This approach has numerous benefits. First, the design does not suffer from batch size growth. Second, novel workload and weight partitioning leads to balanced loads of both among nodes. And third, the entire system is a fine-grained pipeline. This leads to high parallelism and utilization and also minimizes the time features need to be cached while waiting for back-propagation. As a result, storage demand is reduced to the point where only on-chip memory is used for the convolution layers. We evaluate FPDeep with the Alexnet, VGG-16, and VGG-19 benchmarks. Experimental results show that FPDeep has good scalability to a large number of FPGAs, with the limiting factor being the FPGA-to-FPGA bandwidth. With 6 transceivers per FPGA, FPDeep shows linearity up to 83 FPGAs. Energy efficiency is evaluated with respect to GOPs J. FPDeep provides, on average, 6.36x higher energy efficiency than comparable GPU servers.
Layer Parallelism (Fig. (B)) maps layers of the CNN onto individual nodes and pipelines the CNN computation. It has been employed by both GPU and FPGA frameworks. In @cite_0 , multiple GPUs are used in a pipelined manner: each LSTM layer is assigned to a different GPU. After GPU 1 finishes computing layer 1 for the first sentence, it passes its output to GPU 2; at the same time, GPU 1 fetches the next sentence and starts training. In that work, each layer is allocated to a particular GPU; thus, workloads are not balanced among devices. For multi-FPGA systems, @cite_29 only focuses on inference; also, the parallelism is coarse-grained, the workload is unbalanced, and there is heavy off-chip memory communication. So while Layer Parallelism mitigates some of the problems with batch size and frequent reconfiguration, it suffers from other problems: load imbalance and stalls as some nodes wait for others to finish.
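The pipelining idea can be pictured with a toy schedule (an illustrative sketch under simple assumptions, not the cited implementations): with one layer per device, device k works on sample t while device k+1 works on sample t-1, so all devices are busy once the pipeline is full, but devices hosting cheaper layers stall when the per-layer workload is unbalanced.

```python
# Toy illustration of layer parallelism: device d hosts layer d, and successive
# samples flow through the pipeline, so at time step t device d processes sample t-d.
def pipeline_schedule(num_layers, num_samples):
    schedule = []
    for t in range(num_samples + num_layers - 1):
        busy = [(d, t - d) for d in range(num_layers) if 0 <= t - d < num_samples]
        schedule.append(busy)
    return schedule

for t, work in enumerate(pipeline_schedule(num_layers=4, num_samples=6)):
    print(f"t={t}: " + ", ".join(f"dev{d} <- sample{s}" for d, s in work))
```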
{ "cite_N": [ "@cite_0", "@cite_29" ], "mid": [ "2525778437", "2475840367" ], "abstract": [ "Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. Unfortunately, NMT systems are known to be computationally expensive both in training and in translation inference. Also, most NMT systems have difficulty with rare words. These issues have hindered NMT's use in practical deployments and services, where both accuracy and speed are essential. In this work, we present GNMT, Google's Neural Machine Translation system, which attempts to address many of these issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder layers using attention and residual connections. To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder. To accelerate the final translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a limited set of common sub-word units (\"wordpieces\") for both input and output. This method provides a good balance between the flexibility of \"character\"-delimited models and the efficiency of \"word\"-delimited models, naturally handles translation of rare words, and ultimately improves the overall accuracy of the system. Our beam search technique employs a length-normalization procedure and uses a coverage penalty, which encourages generation of an output sentence that is most likely to cover all the words in the source sentence. On the WMT'14 English-to-French and English-to-German benchmarks, GNMT achieves competitive results to state-of-the-art. Using a human side-by-side evaluation on a set of isolated simple sentences, it reduces translation errors by an average of 60 compared to Google's phrase-based production system.", "Recently, FPGA-based CNN accelerators have demonstrated superior energy efficiency compared to high-performance devices like GPGPUs. However, due to the constrained on-chip resource and many other factors, single-board FPGA designs may have difficulties in achieving optimal energy efficiency. In this paper we present a deeply pipelined multi-FPGA architecture that expands the design space for optimal performance and energy efficiency. A dynamic programming algorithm is proposed to map the CNN computing layers efficiently to different FPGA boards. To demonstrate the potential of the architecture, we built a prototype system with seven FPGA boards connected with high-speed serial links. The experimental results on AlexNet and VGG-16 show that the prototype can achieve up to 21x and 2x energy efficiency compared to optimized multi-core CPU and GPU implementations, respectively." ] }
1901.00921
2907758432
The solution convergence of Markov Decision Processes (MDPs) can be accelerated by prioritized sweeping of states ranked by their potential impacts to other states. In this paper, we present new heuristics to speed up the solution convergence of MDPs. First, we quantify the level of reachability of every state using the Mean First Passage Time (MFPT) and show that such reachability characterization very well assesses the importance of states which is used for effective state prioritization. Then, we introduce the notion of backup differentials as an extension to the prioritized sweeping mechanism, in order to evaluate the impacts of states at an even finer scale. Finally, we extend the state prioritization to the temporal process, where only partial sweeping can be performed during certain intermediate value iteration stages. To validate our design, we have performed numerical evaluations by comparing the proposed new heuristics with corresponding classic baseline mechanisms. The evaluation results showed that our reachability based framework and its differential variants have outperformed the state-of-the-art solutions in terms of both practical runtime and number of iterations.
Another important heuristic for efficiently solving MDPs is prioritized sweeping @cite_11 , which has been broadly employed to further speed up the value iteration process. This heuristic evaluates each state and assigns it a score based on the state's contribution to convergence, and then sorts all states by their scores (e.g., states with a larger difference in value between two consecutive iterations get higher scores) @cite_9 @cite_4 . In the immediately following dynamic programming iteration, the states are evaluated in this newly prioritized order. The prioritized sweeping heuristic is also leveraged in our MFPT-based value iteration procedure, and comparisons with baseline approaches have been conducted in our experimental section.
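For concreteness, here is a minimal sketch of the prioritization idea (a toy tabular MDP interface is assumed; this is not the paper's implementation): states are scored by their Bellman residual under the previous values and then backed up in decreasing order of that score.

```python
# Minimal sketch of one prioritized sweep (toy MDP interface assumed).
# States with the largest Bellman residual from the previous values are backed
# up first, which is the prioritization heuristic described above.
import heapq

def prioritized_sweep(states, actions, P, R, V, gamma=0.95):
    """P[s][a]: list of (prob, next_state); R[s][a]: immediate reward."""
    def backup(s):
        return max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a]) for a in actions)
    heap = [(-abs(backup(s) - V[s]), s) for s in states]   # score = Bellman residual
    heapq.heapify(heap)
    while heap:
        _, s = heapq.heappop(heap)
        V[s] = backup(s)                                    # back up in priority order
    return V

# toy usage: two states, one action; state 1 is absorbing with reward 1
states, actions = [0, 1], [0]
P = {0: {0: [(1.0, 1)]}, 1: {0: [(1.0, 1)]}}
R = {0: {0: 0.0}, 1: {0: 1.0}}
print(prioritized_sweep(states, actions, P, R, V={0: 0.0, 1: 0.0}))
```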
{ "cite_N": [ "@cite_9", "@cite_4", "@cite_11" ], "mid": [ "2159420891", "2121733891", "2048226872" ], "abstract": [ "Prioritized sweeping is a model-based reinforcement learning method that attempts to focus an agent's limited computational resources to achieve a good estimate of the value of environment states. To choose effectively where to spend a costly planning step, classic prioritized sweeping uses a simple heuristic to focus computation on the states that are likely to have the largest errors. In this paper, we introduce generalized prioritized sweeping, a principled method for generating such estimates in a representation-specific manner. This allows us to extend prioritized sweeping beyond an explicit, state-based representation to deal with compact representations that are necessary for dealing with large state spaces. We apply this method for generalized model approximators (such as Bayesian networks), and describe preliminary experiments that compare our approach with classical prioritized sweeping.", "The performance of value and policy iteration can be dramatically improved by eliminating redundant or useless backups, and by backing up states in the right order. We study several methods designed to accelerate these iterative solvers, including prioritization, partitioning, and variable reordering. We generate a family of algorithms by combining several of the methods discussed, and present extensive empirical evidence demonstrating that performance can improve by several orders of magnitude for many problems, while preserving accuracy and convergence guarantees.", "We present a new algorithm, prioritized sweeping, for efficient prediction and control of stochastic Markov systems. Incremental learning methods such as temporal differencing and Q-learning have real-time performance. Classical methods are slower, but more accurate, because they make full use of the observations. Prioritized sweeping aims for the best of both worlds. It uses all previous experiences both to prioritize important dynamic programming sweeps and to guide the exploration of state-space. We compare prioritized sweeping with other reinforcement learning schemes for a number of different stochastic optimal control problems. It successfully solves large state-space real-time problems with which other methods have difficulty." ] }
1901.00921
2907758432
The solution convergence of Markov Decision Processes (MDPs) can be accelerated by prioritized sweeping of states ranked by their potential impacts to other states. In this paper, we present new heuristics to speed up the solution convergence of MDPs. First, we quantify the level of reachability of every state using the Mean First Passage Time (MFPT) and show that such reachability characterization very well assesses the importance of states which is used for effective state prioritization. Then, we introduce the notion of backup differentials as an extension to the prioritized sweeping mechanism, in order to evaluate the impacts of states at an even finer scale. Finally, we extend the state prioritization to the temporal process, where only partial sweeping can be performed during certain intermediate value iteration stages. To validate our design, we have performed numerical evaluations by comparing the proposed new heuristics with corresponding classic baseline mechanisms. The evaluation results showed that our reachability based framework and its differential variants have outperformed the state-of-the-art solutions in terms of both practical runtime and number of iterations.
The reachability of the state space has been investigated in existing work. For example, structured reachability analysis of MDPs @cite_2 has been proposed to evaluate whether a state is reachable or not, so that the dynamic programming can be restricted to only the reachable states, reducing the computational burden of solving an MDP. Note that reachability in that work is binary: a state is labeled reachable if it can eventually be reached from a given starting state, and unreachable otherwise. This differs from our reachability landscape, where each state's reachability is measured by a real-valued quantity.
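A compact way to picture the binary-reachability idea (an illustrative sketch over a toy tabular MDP, not the cited algorithm, which works on structured Bayes-net representations): first collect the set of states reachable from the start state under any action, then run dynamic programming only over that set.

```python
# Illustrative sketch: restrict value iteration to states reachable from s0.
# Reachability here is binary (reachable or not); unreachable states are skipped.
from collections import deque

def reachable_states(s0, actions, P):
    seen, frontier = {s0}, deque([s0])
    while frontier:
        s = frontier.popleft()
        for a in actions:
            for prob, t in P[s][a]:
                if prob > 0 and t not in seen:
                    seen.add(t)
                    frontier.append(t)
    return seen

def restricted_value_iteration(s0, states, actions, P, R, gamma=0.95, sweeps=50):
    live = reachable_states(s0, actions, P)
    V = {s: 0.0 for s in states}
    for _ in range(sweeps):
        for s in live:                      # only reachable states are backed up
            V[s] = max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                       for a in actions)
    return V
```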
{ "cite_N": [ "@cite_2" ], "mid": [ "1545472701" ], "abstract": [ "Recent research in decision theoretic planning has focussed on making the solution of Markov decision processes (MDPs) more feasible. We develop a family of algorithms for structured reachability analysis of MDPs that are suitable when an initial state (or set of states) is known. Usin compact, structured representations of MDPs (e.g., Bayesian networks), our methods, which vary in the tradeoff between complexity and accurac roduce structured descriptions of (estimated) reacpagle states that can be used to eliminate variables oy variable values from the problem description, reducing the size of the MDP and making it easier to solve. One contribution of our work is the extension of ideas from GRAPHPLAN to deal with the distributed nature of action reoresentations typically embodied within Bayes nets and the problem of correlated action effects. We also demonstrate that our algorithm can be made more complete by using k-ary constraints instead of binary constraints. Another contribution is the illustration of how the compact representation of reachability constraints can be exploited by several existing (exact and approximate) abstraction algorithms for MDPs." ] }
1901.00921
2907758432
The solution convergence of Markov Decision Processes (MDPs) can be accelerated by prioritized sweeping of states ranked by their potential impacts to other states. In this paper, we present new heuristics to speed up the solution convergence of MDPs. First, we quantify the level of reachability of every state using the Mean First Passage Time (MFPT) and show that such reachability characterization very well assesses the importance of states which is used for effective state prioritization. Then, we introduce the notion of backup differentials as an extension to the prioritized sweeping mechanism, in order to evaluate the impacts of states at an even finer scale. Finally, we extend the state prioritization to the temporal process, where only partial sweeping can be performed during certain intermediate value iteration stages. To validate our design, we have performed numerical evaluations by comparing the proposed new heuristics with corresponding classic baseline mechanisms. The evaluation results showed that our reachability based framework and its differential variants have outperformed the state-of-the-art solutions in terms of both practical runtime and number of iterations.
Important related frameworks for solving MDPs also include compact representations such as linear function representation and approximation @cite_7 @cite_16 used in policy iteration algorithms. The linear-equation-based techniques do not exploit regions of uniformity in the value functions associated with states, but rather exploit a compact form of state features that can somewhat reflect the values @cite_19 . Our method for computing the MFPT can also be formulated as a linear system. However, the intermediate results generated from the MFPT are more direct: the resulting reachability landscape, represented as a "grid map", very well captures, and also allows us to visualize, the relevance or importance of states, and can lead to faster convergence, as demonstrated in the experiments.
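For reference, the textbook mean-first-passage-time linear system is shown below; it is given for illustration only, and the cited work's exact construction may differ.

```latex
% Textbook MFPT system (illustration only). For a fixed goal state j and
% transition matrix P under the current policy, the MFPT values satisfy:
\begin{align}
  \mu_{jj} &= 0, \\
  \mu_{ij} &= 1 + \sum_{k \neq j} P_{ik}\, \mu_{kj}, \qquad i \neq j,
\end{align}
% equivalently (I - Q)\,\mu = \mathbf{1}, where Q is P with the row and column of j
% removed, so the reachability landscape requires one linear solve per goal state.
```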
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_7" ], "mid": [ "1997477668", "2119567691", "2028145673" ], "abstract": [ "Abstract Markov decision processes (MDPs) have proven to be popular models for decision-theoretic planning, but standard dynamic programming algorithms for solving MDPs rely on explicit, state-based specifications and computations. To alleviate the combinatorial problems associated with such methods, we propose new representational and computational techniques for MDPs that exploit certain types of problem structure. We use dynamic Bayesian networks (with decision trees representing the local families of conditional probability distributions) to represent stochastic actions in an MDP, together with a decision-tree representation of rewards. Based on this representation, we develop versions of standard dynamic programming algorithms that directly manipulate decision-tree representations of policies and value functions. This generally obviates the need for state-by-state computation, aggregating states at the leaves of these trees and requiring computations only for each aggregate state. The key to these algorithms is a decision-theoretic generalization of classic regression analysis, in which we determine the features relevant to predicting expected value. We demonstrate the method empirically on several planning problems, showing significant savings for certain types of domains. We also identify certain classes of problems for which this technique fails to perform well and suggest extensions and related ideas that may prove useful in such circumstances. We also briefly describe an approximation scheme based on this approach.", "From the Publisher: The past decade has seen considerable theoretical and applied research on Markov decision processes, as well as the growing use of these models in ecology, economics, communications engineering, and other fields where outcomes are uncertain and sequential decision-making processes are needed. A timely response to this increased activity, Martin L. Puterman's new work provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models. It discusses all major research directions in the field, highlights many significant applications of Markov decision processes models, and explores numerous important topics that have previously been neglected or given cursory coverage in the literature. Markov Decision Processes focuses primarily on infinite horizon discrete time models and models with discrete time spaces while also examining models with arbitrary state spaces, finite horizon models, and continuous-time discrete state models. The book is organized around optimality criteria, using a common framework centered on the optimality (Bellman) equation for presenting results. The results are presented in a \"theorem-proof\" format and elaborated on through both discussion and examples, including results that are not available in any other book. A two-state Markov decision process model, presented in Chapter 3, is analyzed repeatedly throughout the book and demonstrates many results and algorithms. Markov Decision Processes covers recent research advances in such areas as countable state space models with average reward criterion, constrained models, and models with risk sensitive optimality criteria. 
It also explores several topics that have received little or no attention in other books, including modified policy iteration, multichain models with average reward criterion, and sensitive optimality. In addition, a Bibliographic Remarks section in each chapter comments on relevant historic", "" ] }
1901.00889
2907963126
Thermal to visible face verification is a challenging problem due to the large domain discrepancy between the modalities. Existing approaches either attempt to synthesize visible faces from thermal faces or extract robust features from these modalities for cross-modal matching. In this paper, we take a different approach in which we make use of the attributes extracted from the visible image to synthesize the attribute-preserved visible image from the input thermal image for cross-modal matching. A pre-trained VGG-Face network is used to extract the attributes from the visible image. Then, a novel Attribute Preserved Generative Adversarial Network (AP-GAN) is proposed to synthesize the visible image from the thermal image guided by the extracted attributes. Finally, a deep network is used to extract features from the synthesized image and the input visible image for verification. Extensive experiments on the ARL Polarimetric face dataset show that the proposed method achieves significant improvements over the state-of-the-art methods.
As described in the figure, traditional thermal to visible face verification methods first extract features from the visible and thermal images and then verify the identity based on the extracted features. Both hand-crafted and learned features have been investigated in the literature. Hu et al. @cite_17 proposed a partial least squares (PLS) regression-based approach for cross-modal matching. Klare et al. @cite_16 developed a generic framework for heterogeneous face recognition based on kernel prototype nonlinear similarities. Another method, based on fusing multiple texture descriptors, was proposed by Bourlai et al. in @cite_4 for cross-modal face recognition. In @cite_11 , PLS-based discriminant analysis approaches were used to correlate the thermal face signatures with the visible face signatures. Some of the other visible to thermal cross-modal matching methods include @cite_21 @cite_20 @cite_30 .
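To make the traditional pipeline concrete, a minimal sketch with scikit-learn's PLSRegression is given below. It is an illustration under simplifying assumptions (random vectors stand in for real hand-crafted or learned feature descriptors, and cosine similarity is used for matching); it is not the cited authors' exact pipeline.

```python
# Illustrative sketch of PLS-based cross-modal matching: learn a mapping from
# thermal features to visible features on paired training data, project a
# thermal probe, then match it against a visible gallery by cosine similarity.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
thermal = rng.normal(size=(100, 64))          # paired training features (stand-ins)
visible = thermal @ rng.normal(size=(64, 64)) * 0.1 + 0.05 * rng.normal(size=(100, 64))

pls = PLSRegression(n_components=16)
pls.fit(thermal, visible)                     # learn thermal -> visible mapping

probe = pls.predict(thermal[:1])              # project one thermal probe
scores = (probe @ visible.T) / (np.linalg.norm(probe) * np.linalg.norm(visible, axis=1))
print("best gallery match:", int(scores.argmax()))
```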
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_21", "@cite_17", "@cite_16", "@cite_20", "@cite_11" ], "mid": [ "2032213187", "1998165562", "1991410148", "2021242760", "2152788298", "2204652942", "2012648224" ], "abstract": [ "We investigate the performance of polarimetric imaging in the long-wave infrared (LWIR) spectrum for cross-modal face recognition. For this work, polarimetric imagery is generated as stacks of three components: the conventional thermal intensity image (referred to as S0), and the two Stokes images, S1 and S2, which contain combinations of different polarizations. The proposed face recognition algorithm extracts and combines local gradient magnitude and orientation information from S0, S1, and S2 to generate a robust feature set that is well-suited for cross-modal face recognition. Initial results show that polarimetric LWIR-to-visible face recognition achieves an 18 increase in Rank-1 identification rate compared to conventional LWIR-to-visible face recognition. We conclude that a substantial improvement in automatic face recognition performance can be achieved by exploiting the polarization-state of radiance, as compared to using conventional thermal imagery.", "The problem of face identication in the Mid-Wave InfraRed (MWIR) spectrum is studied in order to understand the performance of intra-spectral (MWIR to MWIR) and cross-spectral (visible to MWIR) matching. The contributions of this work are two-fold. First, a database of 50 subjects is assembled and used to illustrate the challenges associated with the problem. Second, a set of experiments is performed in order to demonstrate the possibility of MWIR intra-spectral and cross-spectral matching. Experiments show that images captured in the MWIR band can be eciently matched to MWIR images using existing techniques (originally not designed to address such a problem). These results are comparable to the baseline results, i.e., when comparing visible to visible face images. Experiments also show that cross-spectral matching (the heterogeneous problem, where gallery and probe sets have face images acquired in dierent spectral bands) is a very challenging problem. In order to perform cross-spectral matching, we use multiple texture descriptors and demonstrate that fusing these descriptors improves recognition performance. Experiments on a small database, suggests that the problem of cross-spectral matching requires further investigation.© (2012) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.", "We present a series of long-wave-infrared (LWIR) polarimetric-based thermal images of facial profiles in which polarization-state information of the image-forming radiance is retained and displayed. The resultant polarimetric images show enhanced facial features, additional texture, and details that are not present in corresponding conventional thermal imagery. It has been generally thought that conventional thermal imagery (MidIR or LWIR) could not produce the detailed spatial information required for reliable human identification due to the so-called “ghosting” effect often seen in thermal imagery of human subjects. By using polarimetric information, we are able to extract subtle surface features of the human face, thus improving subject identification. 
Polarimetric image sets considered include the conventional thermal intensity image, S0, the two Stokes images, S1 and S2, and a Stokes image product called the degree-of-linear-polarization image.", "Although visible face recognition has been an active area of research for several decades, cross-modal face recognition has only been explored by the biometrics community relatively recently. Thermal-to-visible face recognition is one of the most difficult cross-modal face recognition challenges, because of the difference in phenomenology between the thermal and visible imaging modalities. We address the cross-modal recognition problem using a partial least squares (PLS) regression-based approach consisting of preprocessing, feature extraction, and PLS model building. The preprocessing and feature extraction stages are designed to reduce the modality gap between the thermal and visible facial signatures, and facilitate the subsequent one-vs-all PLS-based model building. We incorporate multi-modal information into the PLS model building stage to enhance cross-modal recognition. The performance of the proposed recognition algorithm is evaluated on three challenging datasets containing visible and thermal imagery acquired under different experimental scenarios: time-lapse, physical tasks, mental tasks, and subject-to-camera range. These scenarios represent difficult challenges relevant to real-world applications. We demonstrate that the proposed method performs robustly for the examined scenarios.", "Heterogeneous face recognition (HFR) involves matching two face images from alternate imaging modalities, such as an infrared image to a photograph or a sketch to a photograph. Accurate HFR systems are of great value in various applications (e.g., forensics and surveillance), where the gallery databases are populated with photographs (e.g., mug shot or passport photographs) but the probe images are often limited to some alternate modality. A generic HFR framework is proposed in which both probe and gallery images are represented in terms of nonlinear similarities to a collection of prototype face images. The prototype subjects (i.e., the training set) have an image in each modality (probe and gallery), and the similarity of an image is measured against the prototype images from the corresponding modality. The accuracy of this nonlinear prototype representation is improved by projecting the features into a linear discriminant subspace. Random sampling is introduced into the HFR framework to better handle challenges arising from the small sample size problem. The merits of the proposed approach, called prototype random subspace (P-RS), are demonstrated on four different heterogeneous scenarios: 1) near infrared (NIR) to photograph, 2) thermal to photograph, 3) viewed sketch to photograph, and 4) forensic sketch to photograph.", "Face recognition research has primarily focused on the visible spectrum, due to the prevalence and low cost of visible cameras. However, face recognition in the visible spectrum is sensitive to illumination variations, and is infeasible in low-light or nighttime settings. In contrast, thermal imaging acquires naturally emitted radiation from facial skin tissue, and is therefore ideal for nighttime surveillance and intelligence gathering operations. However, conventional thermal face imagery lacks textural and geometrics details that are present in visible spectrum face signatures. 
In this work, we further explore the impact of polarimetric imaging in the LWIR spectrum for face recognition. Polarization-state information provides textural and geometric facial details unavailable with conventional thermal imaging. Since the frequency content of the conventional thermal, polarimetric thermal, and visible images is quite different, we propose a spatial correlation based procedure to optimize the filtering of polarimetric thermal and visible face images to further facilitate cross-spectrum face recognition. Additionally, we use a more extensive gallery database to more robustly demonstrate an improvement in the performance of cross-spectrum face recognition using polarimetric thermal imaging.", "ABSTRACT In low light conditions, visible light face identification is infeasible due to the lack of illumination. For nighttime surveillance, thermal imaging is commonly used because of the intrinsic emissivity of thermal radiation from the human body. However, matching thermal images of faces acquired at nighttime to the predominantly visible light face imagery in existing government databases and watch lists is a challenging task. The difficulty arises from the significant difference between the face's thermal signature and its visible signature (i.e. the modality gap). To match the thermal face to the visible face acquired by the two different modalities, we applied face recognition algorithms that reduce the modality gap in each step of face identification, from low-level analysis to machine learning techniques. Specifically, partial least squares-discriminant analysis (PLS-DA) based approaches were used to correlate the thermal face signatures to the visible face signatures, yielding a thermal-to-visible face identification rate of 49.9 . While this work makes progress for thermal-to-visible face recognition, more efforts need to be devoted to solving this difficult task. Successful development of a thermal-to-visible face recognition system would significantly enhance the Nation's nighttime surveillance capabilities. Keywords: thermal, visible, thermal-to-visible face recognition, face, recognition, multi, modal"
1901.00889
2907963126
Thermal to visible face verification is a challenging problem due to the large domain discrepancy between the modalities. Existing approaches either attempt to synthesize visible faces from thermal faces or extract robust features from these modalities for cross-modal matching. In this paper, we take a different approach in which we make use of the attributes extracted from the visible image to synthesize the attribute-preserved visible image from the input thermal image for cross-modal matching. A pre-trained VGG-Face network is used to extract the attributes from the visible image. Then, a novel Attribute Preserved Generative Adversarial Network (AP-GAN) is proposed to synthesize the visible image from the thermal image guided by the extracted attributes. Finally, a deep network is used to extract features from the synthesized image and the input visible image for verification. Extensive experiments on the ARL Polarimetric face dataset show that the proposed method achieves significant improvements over the state-of-the-art methods.
Unlike the above-mentioned traditional methods, synthesis-based thermal to visible face verification algorithms leverage synthesized visible faces for verification. Due to the success of CNNs and of the recently introduced generative adversarial networks (GANs) in synthesizing realistic images, various deep learning-based approaches have been proposed in the literature for thermal to visible face synthesis @cite_26 @cite_28 @cite_0 @cite_9 . For example, Riggan et al. @cite_9 proposed a two-step procedure (visible feature estimation and visible image reconstruction) to solve the thermal-to-visible verification problem. Zhang et al. @cite_28 proposed an end-to-end GAN-based approach for synthesizing photo-realistic visible face images from their corresponding polarimetric images. Recently, Riggan et al. @cite_26 proposed a new synthesis method that enhances the discriminative quality of the generated visible face images by leveraging both global and local facial regions.
{ "cite_N": [ "@cite_28", "@cite_9", "@cite_26", "@cite_0" ], "mid": [ "2963639219", "2566614872", "2963294002", "2963276927" ], "abstract": [ "The large domain discrepancy between faces captured in polarimetric (or conventional) thermal and visible domain makes cross-domain face recognition quite a challenging problem for both human-examiners and computer vision algorithms. Previous approaches utilize a two-step procedure (visible feature estimation and visible image reconstruction) to synthesize the visible image given the corresponding polarimetric thermal image. However, these are regarded as two disjoint steps and hence may hinder the performance of visible face reconstruction. We argue that joint optimization would be a better way to reconstruct more photo-realistic images for both computer vision algorithms and human-examiners to examine. To this end, this paper proposes a Generative Adversarial Network-based Visible Face Synthesis (GAN-VFS) method to synthesize more photo-realistic visible face images from their corresponding polarimetric images. To ensure that the encoded visible-features contain more semantically meaningful information in reconstructing the visible face image, a guidance sub-network is involved into the training procedure. To achieve photo realistic property while preserving discriminative characteristics for the reconstructed outputs, an identity loss combined with the perceptual loss are optimized in the framework. Multiple experiments evaluated on different experimental protocols demonstrate that the proposed method achieves state-of-the-art performance.", "A method for synthesizing visible spectrum face imagery from polarimetric-thermal face imagery is presented. This work extends recent within-spectrum (i.e., visible-to-visible) reconstruction techniques for image representation understanding using convolutional neural networks. Despite the challenging task, we effectively demonstrate the ability to produce a visible image from a probe polarimetric-thermal image. Moreover, we are able to demonstrate the same capability with conventional thermal imagery, but we report a significant improvement by incorporating polarization-state information. These reconstructions, or estimates, can be used to aid human examiners performing one-to-one verification of matches retrieved from automated cross-spectrum face recognition algorithms.", "Synthesis of visible spectrum faces from thermal facial imagery is a promising approach for heterogeneous face recognition; enabling existing face recognition software trained on visible imagery to be leveraged, and allowing human analysts to verify cross-spectrum matches more effectively. We propose a new synthesis method to enhance the discriminative quality of synthesized visible face imagery by leveraging both global (e.g., entire face) and local regions (e.g., eyes, nose, and mouth). Here, each region provides (1) an independent representation for the corresponding area, and (2) additional regularization terms, which impact the overall quality of synthesized images. We analyze the effects of using multiple regions to synthesize a visible face image from a thermal face. We demonstrate that our approach improves cross-spectrum verification rates over recently published synthesis approaches. 
Moreover, using our synthesized imagery, we report the results on facial landmark detection—commonly used for image registration— which is a critical part of the face recognition process.", "This work tackles the face recognition task on images captured using thermal camera sensors which can operate in the non-light environment. While it can greatly increase the scope and benefits of the current security surveillance systems, performing such a task using thermal images is a challenging problem compared to face recognition task in the Visible Light Domain (VLD). This is partly due to the significantly smaller amount of thermal imagery data collected compared to the VLD data. Unfortunately, direct application of the existing very strong face recognition models trained using VLD data into the thermal imagery data will not produce a satisfactory performance. This is due to the existence of the domain gap between the thermal and VLD images. To this end, we propose a Thermal-to-Visible Generative Adversarial Network (TV-GAN) that is able to transform thermal face images into their corresponding VLD images whilst maintaining identity information which is sufficient enough for the existing VLD face recognition models to perform recognition. Some examples are presented in Figure 1. Unlike the previous methods, our proposed TV-GAN uses an explicit closed-set face recognition loss to regularize the discriminator network training. This information will then be conveyed into the generator network in the form of gradient loss. In the experiment, we show that by using this additional explicit regularization for the discriminator network, the TV-GAN is able to preserve more identity information when translating a thermal image of a person which is not seen before by the TV-GAN." ] }
1901.00893
2907919191
We present a method for improving segmentation tasks on images affected by adherent rain drops and streaks. We introduce a novel stereo dataset recorded using a system that allows one lens to be affected by real water droplets while keeping the other lens clear. We train a denoising generator using this dataset and show that it is effective at removing the effect of real water droplets, in the context of image reconstruction and road marking segmentation. To further test our de-noising approach, we describe a method of adding computer-generated adherent water droplets and streaks to any images, and use this technique as a proxy to demonstrate the effectiveness of our model in the context of general semantic segmentation. We benchmark our results using the CamVid road marking segmentation dataset, Cityscapes semantic segmentation datasets and our own real-rain dataset, and show significant improvement on all tasks.
Generally speaking, the quality of an image can be affected in two ways by bad weather conditions. Firstly, contaminants in the atmosphere, such as falling rain, fog, smog or snow, will hinder visibility or partially occlude a scene but do not significantly distort the image. Secondly, adherent contaminants such as water droplets, which stick to transparent surfaces or lenses, tend to heavily distort the image, essentially acting as a secondary lens with various degrees of blurring. Several techniques have been employed to clean the first type of image, such as those used by @cite_15 @cite_35 @cite_34 @cite_36 @cite_24 ; however, these techniques cannot be used to restore images affected by adherent rain, as the optics involved differ significantly from those of atmospheric droplets. The remainder of this section outlines some of the techniques used to tackle the effects of adherent rain droplets and adherent streaks.
{ "cite_N": [ "@cite_35", "@cite_36", "@cite_24", "@cite_15", "@cite_34" ], "mid": [ "2077946335", "2519481857", "2509784253", "1977808497", "1909316225" ], "abstract": [ "A novel rain (or snow) streak removal algorithm for stereo video sequences is proposed in this work. We observe that rain streaks appear at different locations in spatiotemporally adjacent frames. Thus, to derain a left-view frame, we synthesize it by warping the spatially adjacent right-view frame and the temporally previous and next frames, respectively. We subtract each warped frame from the original frame, and apply the median filter to the three difference images to obtain a reliable rain mask. Then, we remove rain streaks by replacing each rainy pixel value with a weighted average of non-locally neighboring pixel values. Experimental results demonstrate that the proposed algorithm removes rain streaks reliably and recovers original scene contents faithfully.", "The performance of existing image dehazing methods is limited by hand-designed features, such as the dark channel, color disparity and maximum contrast, with complex fusion schemes. In this paper, we propose a multi-scale deep neural network for single-image dehazing by learning the mapping between hazy images and their corresponding transmission maps. The proposed algorithm consists of a coarse-scale net which predicts a holistic transmission map based on the entire image, and a fine-scale net which refines results locally. To train the multi-scale deep network, we synthesize a dataset comprised of hazy images and corresponding transmission maps based on the NYU Depth dataset. Extensive experiments demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both synthetic and real-world images in terms of quality and speed.", "We introduce a deep network architecture called DerainNet for removing rain streaks from an image. Based on the deep convolutional neural network (CNN), we directly learn the mapping relationship between rainy and clean image detail layers from data. Because we do not possess the ground truth corresponding to real-world rainy images, we synthesize images with rain for training. In contrast to other common strategies that increase depth or breadth of the network, we use image processing domain knowledge to modify the objective function and improve deraining with a modestly sized CNN. Specifically, we train our DerainNet on the detail (high-pass) layer rather than in the image domain. Though DerainNet is trained on synthetic data, we find that the learned network translates very effectively to real-world images for testing. Moreover, we augment the CNN framework with image enhancement to improve the visual results. Compared with the state-of-the-art single image de-raining methods, our method has improved rain removal and much faster computation time after network training.", "Rain removal is a very useful and important technique in applications such as security surveillance and movie editing. Several rain removal algorithms have been proposed these years, where photometric, chromatic, and probabilistic properties of the rain have been exploited to detect and remove the rainy effect. Current methods generally work well with light rain and relatively static scenes, when dealing with heavier rainfall in dynamic scenes, these methods give very poor visual results. The proposed algorithm is based on motion segmentation of dynamic scene. 
After applying photometric and chromatic constraints for rain detection, rain removal filters are applied on pixels such that their dynamic property as well as motion occlusion clue are considered; both spatial and temporal informations are then adaptively exploited during rain pixel recovery. Results show that the proposed algorithm has a much better performance for rainy scenes with large motion than existing algorithms.", "A novel algorithm to remove rain or snow streaks from a video sequence using temporal correlation and low-rank matrix completion is proposed in this paper. Based on the observation that rain streaks are too small and move too fast to affect the optical flow estimation between consecutive frames, we obtain an initial rain map by subtracting temporally warped frames from a current frame. Then, we decompose the initial rain map into basis vectors based on the sparse representation, and classify those basis vectors into rain streak ones and outliers with a support vector machine. We then refine the rain map by excluding the outliers. Finally, we remove the detected rain streaks by employing a low-rank matrix completion technique. Furthermore, we extend the proposed algorithm to stereo video deraining. Experimental results demonstrate that the proposed algorithm detects and removes rain or snow streaks efficiently, outperforming conventional algorithms." ] }
1901.00893
2907919191
We present a method for improving segmentation tasks on images affected by adherent rain drops and streaks. We introduce a novel stereo dataset recorded using a system that allows one lens to be affected by real water droplets while keeping the other lens clear. We train a denoising generator using this dataset and show that it is effective at removing the effect of real water droplets, in the context of image reconstruction and road marking segmentation. To further test our de-noising approach, we describe a method of adding computer-generated adherent water droplets and streaks to any images, and use this technique as a proxy to demonstrate the effectiveness of our model in the context of general semantic segmentation. We benchmark our results using the CamVid road marking segmentation dataset, Cityscapes semantic segmentation datasets and our own real-rain dataset, and show significant improvement on all tasks.
We base our simple synthetic droplet model on the works of @cite_30 @cite_27 and @cite_20 , by storing proto-droplet normal maps which are subsequently warped and combined at run time using an approach similar to meta-balls @cite_33 .
{ "cite_N": [ "@cite_30", "@cite_27", "@cite_33", "@cite_20" ], "mid": [ "2038133417", "1585341626", "2244686166", "" ], "abstract": [ "In this paper we present a novel approach to improved image registration in rainy weather situations. To this end, we perform monocular raindrop detection in single images based on a photometric raindrop model. Our method is capable of detecting raindrops precisely, even in front of complex backgrounds. The effectiveness is demonstrated by a significant increase in image registration accuracy which also allows for successful image restoration. Experiments on video sequences taken from within a moving vehicle prove the applicability to real-world scenarios.", "In this paper, we propose a novel raindrop shape model for the detection of view-disturbing, adherent raindrops on inclined surfaces. Whereas state-of-the-art techniques do not consider inclined surfaces because they assume the droplets as sphere sections with equal contact angles, our model incorporates cubic Bezier curves that provide a low dimensional and physically interpretable representation of a raindrop surface. The parameters are empirically deduced from numerous observations of different raindrop sizes and surface inclination angles. It can be easily integrated into a probabilistic framework for raindrop recognition, using geometrical optics to simulate the visual raindrop appearance. In comparison to a sphere section model, the proposed model yields an improved droplet surface accuracy up to three orders of magnitude.", "The mathematical description of three dimensional surfaces usually falls in one of two classifications: parametric and algebraic. The form is defined as all points which satisfy some equation: F(x,y,z)=0. This form is ideally suited for image space shaded picture drawing, the pixel coordinates are substituted for x and y and the equation is solved for z. Algorithms for drawing such objects have been developed primarily for first and second order polynomial functions. This paper presents a new algorithm applicable to other functional forms, in particular to the summation of several gaussian density distributions. The algorithm was created to model electron density maps of molecular structures but can be used for other artistically interesting shapes.", "" ] }
1901.00858
2907038162
Breakthroughs in the fields of deep learning and mobile system-on-chips are radically changing the way we use our smartphones. However, deep neural network inference is still a challenging task for edge AI devices due to the computational overhead on mobile CPUs and a severe drain on the batteries. In this paper, we present a deep neural network inference engine named HG-Caffe, which supports GPUs with half precision. HG-Caffe provides up to 20 times speedup with GPUs compared to the original implementations. In addition to the speedup, the peak memory usage is also reduced to about 80%. With HG-Caffe, more innovative and fascinating mobile applications will be turned into reality.
On the other hand, researchers have also demonstrated many mobile deep learning frameworks that provide various novel features. Deep Compression @cite_16 is a series of techniques aiming to reduce the size of deep neural networks. With pruning, trained quantization and Huffman coding, Deep Compression provides up to @math memory reduction, which reduces the overhead of deploying deep neural networks to embedded devices. DeepX @cite_8 has a pair of resource control algorithms, which decompose monolithic deep model networks into several unit-blocks and perform principled resource scaling. DeepX supports CPUs and GPUs heterogeneously. DeepSense @cite_11 is a deep learning framework dedicated to time-series tasks. However, most of these programs are not general deep learning frameworks and may lack support for common neural network layers. Meanwhile, as these frameworks usually have different weight file formats, they may not be compatible with pre-trained deep learning models. HG-Caffe, in contrast, is a general deep learning framework that supports all neural network layers of BVLC Caffe. Furthermore, HG-Caffe adopts the same weight file format as BVLC Caffe, so deep learning models trained with BVLC Caffe can be used for inference with HG-Caffe.
{ "cite_N": [ "@cite_16", "@cite_11", "@cite_8" ], "mid": [ "2119144962", "2626129225", "2297325673" ], "abstract": [ "Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce \"deep compression\", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.", "The rapid emergence of head-mounted devices such as the Microsoft Holo-lens enables a wide variety of continuous vision applications. Such applications often adopt deep-learning algorithms such as CNN and RNN to extract rich contextual information from the first-person-view video streams. Despite the high accuracy, use of deep learning algorithms in mobile devices raises critical challenges, i.e., high processing latency and power consumption. In this paper, we propose DeepMon, a mobile deep learning inference system to run a variety of deep learning inferences purely on a mobile device in a fast and energy-efficient manner. For this, we designed a suite of optimization techniques to efficiently offload convolutional layers to mobile GPUs and accelerate the processing; note that the convolutional layers are the common performance bottleneck of many deep learning models. Our experimental results show that DeepMon can classify an image over the VGG-VeryDeep-16 deep learning model in 644ms on Samsung Galaxy S7, taking an important step towards continuous vision without imposing any privacy concerns nor networking cost.", "Breakthroughs from the field of deep learning are radically changing how sensor data are interpreted to extract the high-level information needed by mobile apps. It is critical that the gains in inference accuracy that deep models afford become embedded in future generations of mobile apps. In this work, we present the design and implementation of DeepX, a software accelerator for deep learning execution. DeepX signif- icantly lowers the device resources (viz. memory, computation, energy) required by deep learning that currently act as a severe bottleneck to mobile adoption. 
The foundation of DeepX is a pair of resource control algorithms, designed for the inference stage of deep learning, that: (1) decompose monolithic deep model network architectures into unit- blocks of various types, that are then more efficiently executed by heterogeneous local device processors (e.g., GPUs, CPUs); and (2), perform principled resource scaling that adjusts the architecture of deep models to shape the overhead each unit-blocks introduces. Experiments show, DeepX can allow even large-scale deep learning models to execute efficently on modern mobile processors and significantly outperform existing solutions, such as cloud-based offloading." ] }
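A rough, hedged illustration of the pruning-plus-quantization idea cited above (not the actual Deep Compression pipeline, which uses trained k-means quantization and Huffman coding): magnitude pruning followed by uniform quantization of the surviving weights. The sparsity level, bit width and tensor shape are arbitrary.

import numpy as np

def prune_and_quantize(weights, sparsity=0.9, n_bits=5):
    # Drop the smallest-magnitude weights, then quantize the survivors to
    # 2**n_bits shared levels. The dense tensor can be reconstructed as
    # codebook[codes] * mask.
    flat = weights.ravel()
    threshold = np.quantile(np.abs(flat), sparsity)   # prune the smallest 90%
    mask = np.abs(weights) > threshold

    kept = weights[mask]
    levels = 2 ** n_bits
    codebook = np.linspace(kept.min(), kept.max(), levels)  # uniform codebook
    codes = np.zeros(weights.shape, dtype=np.int32)
    codes[mask] = np.abs(kept[:, None] - codebook[None, :]).argmin(axis=1)
    return codes, codebook, mask

w = np.random.randn(256, 256).astype(np.float32)
codes, codebook, mask = prune_and_quantize(w)
w_hat = codebook[codes] * mask
print("fraction kept:", mask.mean(), "mean error on kept weights:", np.abs(w - w_hat)[mask].mean())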
1901.00579
2907187671
As Internet streaming of live content has gained on traditional cable TV viewership, we have also seen significant growth of free live streaming services which illegally provide free access to copyrighted content over the Internet. Some of these services draw millions of viewers each month. Moreover, this viewership has continued to increase, despite the consistent coupling of this free content with deceptive advertisements and user-hostile tracking. In this paper, we explore the ecosystem of free illegal live streaming services by collecting and examining the behavior of a large corpus of illegal sports streaming websites. We explore and quantify evidence of user tracking via third-party HTTP requests, cookies, and fingerprinting techniques on more than @math unique video streams provided by @math unique illegal live streaming domains. We compare the behavior of illegal live streaming services with legitimate services and find that the illegal services go to much greater lengths to track users than most legitimate services, and use more obscure tracking services. Similarly, we find that moderated sites that aggregate links to illegal live streaming content fail to moderate out sites that go to significant lengths to track users. In addition, we perform several case studies which highlight deceptive behavior and modern techniques used by some domains to avoid detection, monetize traffic, or otherwise exploit their viewers. Overall, we find that despite recent improvements in mechanisms for detecting malicious browser extensions, ad-blocking, and browser warnings, users of free illegal live streaming services are still exposed to deceptive ads, malicious browser extensions, scams, and extensive tracking. We conclude with insights into the ecosystem and recommendations for addressing the challenges highlighted by this study.
Measuring Online Tracking. @cite_26 presents extensive measurements of online tracking across the Alexa top million websites and introduces OpenWPM, the tool we utilize in our work to collect data on illegal stream URLs. Similarly, @cite_14 studies third-party tracking on websites and mobile applications, while @cite_16 examines the differences in tracking activity between geographic locations. While these studies measure tracking on the web generally, they do not differentiate between sites, and none focuses specifically on sites where visiting them could be considered criminal activity.
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_16" ], "mid": [ "2963967433", "2535603283", "2272969119" ], "abstract": [ "Third-party networks collect vast amounts of data about users via websites and mobile applications. Consolidations among tracker companies can significantly increase their individual tracking capabilities, prompting scrutiny by competition regulators. Traditional measures of market share, based on revenue or sales, fail to represent the tracking capability of a tracker, especially if it spans both web and mobile. This article proposes a new approach to measure the concentration of tracking capability, based on the reach of a tracker on popular websites and apps. Our results reveal that tracker prominence and parent–subsidiary relationships have significant impact on accurately measuring concentration.", "We present the largest and most detailed measurement of online tracking conducted to date, based on a crawl of the top 1 million websites. We make 15 types of measurements on each site, including stateful (cookie-based) and stateless (fingerprinting-based) tracking, the effect of browser privacy tools, and the exchange of tracking data between different sites (\"cookie syncing\"). Our findings include multiple sophisticated fingerprinting techniques never before measured in the wild. This measurement is made possible by our open-source web privacy measurement tool, OpenWPM, which uses an automated version of a full-fledged consumer browser. It supports parallelism for speed and scale, automatic recovery from failures of the underlying browser, and comprehensive browser instrumentation. We demonstrate our platform's strength in enabling researchers to rapidly detect, quantify, and characterize emerging online tracking behaviors.", "Different countries have different privacy regulatory models. These models impact the perspectives and laws surrounding internet privacy. However, little is known about how effective the regulatory models are when it comes to limiting online tracking and advertising activity. In this paper, we propose a method for investigating tracking behavior by analyzing cookies and HTTP requests from browsing sessions originating in different countries. We collect browsing data from visits to top websites in various countries that utilize different regulatory models. We found that there are significant differences in tracking activity between different countries using several metrics. We also suggest various ways to extend this study which may yield a more complete representation of tracking from a global perspective." ] }
1901.00579
2907187671
As Internet streaming of live content has gained on traditional cable TV viewership, we have also seen significant growth of free live streaming services which illegally provide free access to copyrighted content over the Internet. Some of these services draw millions of viewers each month. Moreover, this viewership has continued to increase, despite the consistent coupling of this free content with deceptive advertisements and user-hostile tracking. In this paper, we explore the ecosystem of free illegal live streaming services by collecting and examining the behavior of a large corpus of illegal sports streaming websites. We explore and quantify evidence of user tracking via third-party HTTP requests, cookies, and fingerprinting techniques on more than @math unique video streams provided by @math unique illegal live streaming domains. We compare the behavior of illegal live streaming services with legitimate services and find that the illegal services go to much greater lengths to track users than most legitimate services, and use more obscure tracking services. Similarly, we find that moderated sites that aggregate links to illegal live streaming content fail to moderate out sites that go to significant lengths to track users. In addition, we perform several case studies which highlight deceptive behavior and modern techniques used by some domains to avoid detection, monetize traffic, or otherwise exploit their viewers. Overall, we find that despite recent improvements in mechanisms for detecting malicious browser extensions, ad-blocking, and browser warnings, users of free illegal live streaming services are still exposed to deceptive ads, malicious browser extensions, scams, and extensive tracking. We conclude with insights into the ecosystem and recommendations for addressing the challenges highlighted by this study.
Illegal Media Streaming. @cite_9 studies security and privacy concerns related to on-demand media streaming services and targets platforms that are known to host illegal content. Specifically, they study over 20 media streaming platforms (e.g., Kodi, Enigma 2, and MediaTomb) and their attack surfaces, and find that there are over @math devices using these platforms that are discoverable through simple search queries. Similarly, @cite_2 explores the ecosystem of illegal streaming from the perspective of video piracy, where content is streamed on-demand, as opposed to our work, which focuses specifically on live-streamed content. @cite_8 studies the architectures and protocols used to stream illegal content over the Internet and explores the value chain spanning content acquisition, preparation and distribution, web hosting, and content discovery. This study considers peer-to-peer streaming as well as web streaming, but does not study malicious behavior beyond copyright infringement.
{ "cite_N": [ "@cite_8", "@cite_9", "@cite_2" ], "mid": [ "2331802876", "2792199623", "2796057488" ], "abstract": [ "Over recent years, a major shift has occurred in piracy of paid-for content services toward illegal redistribution of live content in real-time over the Internet. This paper will provide insight into pirate content platforms, covering the various architectures and protocols used, from peer-to-peer protocols adapted for live streaming to more traditional Web streaming protocols. More specifically, it will focus on the methods generally employed to set up and scale ad-based illegal services using some of the above-mentioned protocols with streaming media platforms, while securing streaming servers, enabling these sites to remain hidden. A thorough analysis of the used architectures and protocols makes it possible to measure the actual audience viewing illegal streams, typically leveraging peer-to-peer networks data. This enables content service providers to assess the piracy threat level of any content, while illustrating the need for a business intelligence tool that provides relevant information on viewers' behavior.", "Abstract Streaming media are currently conquering traditional multimedia by means of services like Netflix, Amazon Prime and Hulu which provide to millions of users worldwide with paid subscriptions in order to watch the desired content on-demand. Simultaneously, numerous applications and services infringing this content by sharing it for free have emerged. The latter has given ground to a new market based on illegal downloads which monetizes from ads and custom hardware, often aggregating peers to maximize multimedia content sharing. Regardless of the ethical and legal issues involved, the users of such streaming services are millions and they are severely exposed to various threats, mainly due to poor hardware and software configurations. Recent attacks have also shown that they may, in turn, endanger others as well. This work details these threats and presents new attacks on these systems as well as forensic evidence that can be collected in specific cases.", "Online video piracy (OVP) is a contentious topic, with strong proponents on both sides of the argument. Recently, a number of illegal websites, called streaming cyberlockers, have begun to dominate OVP. These websites specialise in distributing pirated content, underpinned by third party indexing services offering easy-to-access directories of content. This paper performs the first exploration of this new ecosystem. It characterises the content, as well the streaming cyberlockers' individual attributes. We find a remarkably centralised system with just a few networks, countries and cyberlockers underpinning most provisioning. We also investigate the actions of copyright enforcers. We find they tend to target small subsets of the ecosystem, although they appear quite successful. 84 of copyright notices see content removed." ] }
1901.00579
2907187671
As Internet streaming of live content has gained on traditional cable TV viewership, we have also seen significant growth of free live streaming services which illegally provide free access to copyrighted content over the Internet. Some of these services draw millions of viewers each month. Moreover, this viewership has continued to increase, despite the consistent coupling of this free content with deceptive advertisements and user-hostile tracking. In this paper, we explore the ecosystem of free illegal live streaming services by collecting and examining the behavior of a large corpus of illegal sports streaming websites. We explore and quantify evidence of user tracking via third-party HTTP requests, cookies, and fingerprinting techniques on more than @math unique video streams provided by @math unique illegal live streaming domains. We compare the behavior of illegal live streaming services with legitimate services and find that the illegal services go to much greater lengths to track users than most legitimate services, and use more obscure tracking services. Similarly, we find that moderated sites that aggregate links to illegal live streaming content fail to moderate out sites that go to significant lengths to track users. In addition, we perform several case studies which highlight deceptive behavior and modern techniques used by some domains to avoid detection, monetize traffic, or otherwise exploit their viewers. Overall, we find that despite recent improvements in mechanisms for detecting malicious browser extensions, ad-blocking, and browser warnings, users of free illegal live streaming services are still exposed to deceptive ads, malicious browser extensions, scams, and extensive tracking. We conclude with insights into the ecosystem and recommendations for addressing the challenges highlighted by this study.
Illegal Live Media Streaming. @cite_17 studies the ecosystem of free live streaming websites with an analysis of over 5600 live-streaming domains discovered through aggregator websites. This study does not focus on user tracking; instead, it highlights other aspects of these sites' behavior, such as trademark infringements, malware distribution, and anti-ad-block techniques, and uses these insights to build a classifier for such sites. Though this study is over two years old, it notably does not include Reddit as one of its aggregators, despite our finding that Reddit is now one of the most popular aggregators (see ).
{ "cite_N": [ "@cite_17" ], "mid": [ "2467763829" ], "abstract": [ "Recent years have seen extensive growth of services enabling free broadcasts of live streams on the Web. Free live streaming (FLIS) services attract millions of viewers and make heavy use of deceptive advertisements. Despite the immense popularity of these services, little is known about the parties that facilitate it and maintain webpages to index links for free viewership. This paper presents a comprehensive analysis of the FLIS ecosystem by mapping all parties involved in the anonymous broadcast of live streams, discovering their modus operandi, and quantifying the consequences for common Internet users who utilize these services. We develop an infrastructure that enables us to perform more than 850,000 visits by identifying 5,685 free live streaming domains, and analyze more than 1 Terabyte of traffic to map the parties that constitute the FLIS ecosystem. On the one hand, our analysis reveals that users of FLIS websites are generally exposed to deceptive advertisements, malware, malicious browser extensions, and fraudulent scams. On the other hand, we find that FLIS parties are often reported for copyright violations and host their infrastructure predomi- nantly in Europe and Belize. At the same time, we encounter substandard advertisement set-ups by the FLIS parties, along with potential trademark infringements through the abuse of domain names and logos of popular TV channels. Given the magnitude of the discovered abuse, we engineer features that characterize FLIS pages and build a classifier to identify FLIS pages with high accuracy and low false positives, in an effort to help human analysts identify malicious services and, whenever appropriate, initiate content-takedown requests." ] }
1901.00484
2908138876
We describe a novel cross-modal embedding space for actions, named Action2Vec, which combines linguistic cues from class labels with spatio-temporal features derived from video clips. Our approach uses a hierarchical recurrent network to capture the temporal structure of video features. We train our embedding using a joint loss that combines classification accuracy with similarity to Word2Vec semantics. We evaluate Action2Vec by performing zero shot action recognition and obtain state of the art results on three standard datasets. In addition, we present two novel analogy tests which quantify the extent to which our joint embedding captures distributional semantics. This is the first joint embedding space to combine verbs and action videos, and the first to be thoroughly evaluated with respect to its distributional semantics.
One domain where joint models of video and language arise naturally is video captioning @cite_4 @cite_26 . The task of video captioning faces challenges similar to our problem, but in captioning the focus is not on building and testing a distributed representation, but rather on the mapping from video to a stream of text. Video captioning methods require strong encoder and decoder networks. In our evaluations we demonstrate the effectiveness of our HRNN architecture as an encoder that could possibly be used in captioning models.
{ "cite_N": [ "@cite_26", "@cite_4" ], "mid": [ "1573040851", "2963843052" ], "abstract": [ "Automatically describing video content with natural language is a fundamental challenge of computer vision. Re-current Neural Networks (RNNs), which models sequence dynamics, has attracted increasing attention on visual interpretation. However, most existing approaches generate a word locally with the given previous words and the visual content, while the relationship between sentence semantics and visual content is not holistically exploited. As a result, the generated sentences may be contextually correct but the semantics (e.g., subjects, verbs or objects) are not true. This paper presents a novel unified framework, named Long Short-Term Memory with visual-semantic Embedding (LSTM-E), which can simultaneously explore the learning of LSTM and visual-semantic embedding. The former aims to locally maximize the probability of generating the next word given previous words and visual content, while the latter is to create a visual-semantic embedding space for enforcing the relationship between the semantics of the entire sentence and visual content. The experiments on YouTube2Text dataset show that our proposed LSTM-E achieves to-date the best published performance in generating natural sentences: 45.3 and 31.0 in terms of BLEU@4 and METEOR, respectively. Superior performances are also reported on two movie description datasets (M-VAD and MPII-MD). In addition, we demonstrate that LSTM-E outperforms several state-of-the-art techniques in predicting Subject-Verb-Object (SVO) triplets.", "Recently, deep learning approach, especially deep Convolutional Neural Networks (ConvNets), have achieved overwhelming accuracy with fast processing speed for image classification. Incorporating temporal structure with deep ConvNets for video representation becomes a fundamental problem for video content analysis. In this paper, we propose a new approach, namely Hierarchical Recurrent Neural Encoder (HRNE), to exploit temporal information of videos. Compared to recent video representation inference approaches, this paper makes the following three contributions. First, our HRNE is able to efficiently exploit video temporal structure in a longer range by reducing the length of input information flow, and compositing multiple consecutive inputs at a higher level. Second, computation operations are significantly lessened while attaining more non-linearity. Third, HRNE is able to uncover temporal tran-sitions between frame chunks with different granularities, i.e. it can model the temporal transitions between frames as well as the transitions between segments. We apply the new method to video captioning where temporal information plays a crucial role. Experiments demonstrate that our method outperforms the state-of-the-art on video captioning benchmarks." ] }
1901.00484
2908138876
We describe a novel cross-modal embedding space for actions, named Action2Vec, which combines linguistic cues from class labels with spatio-temporal features derived from video clips. Our approach uses a hierarchical recurrent network to capture the temporal structure of video features. We train our embedding using a joint loss that combines classification accuracy with similarity to Word2Vec semantics. We evaluate Action2Vec by performing zero shot action recognition and obtain state of the art results on three standard datasets. In addition, we present two novel analogy tests which quantify the extent to which our joint embedding captures distributional semantics. This is the first joint embedding space to combine verbs and action videos, and the first to be thoroughly evaluated with respect to its distributional semantics.
The final body of related work consists of recent deep learning approaches that construct video representations for supervised prediction tasks such as action recognition. We build on these approaches in our own work. In particular, our model utilizes the C3D @cite_13 architecture to extract features from video frames. We also experimented with two-stream approaches similar to @cite_31 @cite_19 @cite_36 , although in our application we found only minimal benefit from the additional network structure.
{ "cite_N": [ "@cite_36", "@cite_19", "@cite_31", "@cite_13" ], "mid": [ "2156303437", "2342662179", "2619082050", "1522734439" ], "abstract": [ "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters, (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy, finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.", "The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. 
We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9 on HMDB-51 and 98.0 on UCF-101.", "We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets, 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets, and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use." ] }
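A minimal sketch of zero-shot classification in a joint embedding space, as described in the abstract above: the video embedding is assigned the label of the closest class vector under cosine similarity. The vectors below are random placeholders rather than real Word2Vec or Action2Vec outputs.

import numpy as np

def zero_shot_classify(video_emb, class_embs):
    # Return the label whose class embedding is closest in cosine similarity.
    v = video_emb / np.linalg.norm(video_emb)
    names, mat = zip(*class_embs.items())
    mat = np.stack(mat)
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    return names[int(np.argmax(mat @ v))]

rng = np.random.default_rng(0)
# Placeholder 300-d "verb" vectors for classes unseen during training.
unseen = {verb: rng.normal(size=300) for verb in ["juggle", "fence", "surf"]}
video_embedding = rng.normal(size=300)      # stand-in for an encoder output
print(zero_shot_classify(video_embedding, unseen))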
1901.00568
2907940234
Three Dimensional Integrated Circuits (3D IC) offer lower power consumption, higher performance, higher bandwidth, and scalability over the conventional two dimensional ICs. Through-Silicon Via (TSV) is one of the fabrication mechanisms that connects stacked dies to each other. The large size of TSVs and the proximity between them lead to undesirable coupling capacitance. This interference causes mutual influences between adjacent TSVs and produces crosstalk noise. Furthermore, this effect threatens the reliability of data during traversal between layers. This paper proposes a mechanism that efficiently reduces crosstalk noise between TSVs with lower area overhead as compared to previous works. This mechanism revolves around the fact that retaining TSV value in current state can reduce coupling in some cases. To evaluate the mechanism, gem5 simulator is used for data extraction and several benchmarks are taken from the SPEC2006 suite. The simulation results show that the proposed mechanism reduces crosstalk noise with only 30% imposed TSV overhead while delay decreased by up to 25.7% as compared to a recent related work.
In the context of two-dimensional Networks-on-Chip (2D NoC), there are plenty of works that target power consumption @cite_5 @cite_2 @cite_25 , reliability @cite_5 , security @cite_17 , or performance @cite_5 of the interconnections. In particular, crosstalk minimization methods can be classified into three categories: physical-level, transistor-level, and Register Transfer Level (RTL) techniques. Wire spacing @cite_4 , active and passive shielding @cite_9 , and buffer insertion @cite_11 are examples of physical-level techniques. @cite_15 is a transistor-level mechanism which reduces the crosstalk noise by skewing simultaneous opposite transitions. Although this approach reduces the crosstalk, it requires timing adjustment between senders and receivers and suffers from run-time management overhead. The general idea behind RTL techniques is to eliminate undesirable transition patterns by using coding schemes. A variety of works have focused on the analytical aspects and coding concepts @cite_26 @cite_7 . Error detection and error correction codes @cite_0 , joint crosstalk avoidance mechanisms @cite_7 , and crosstalk avoidance codes (CACs) @cite_0 are examples of these coding schemes.
{ "cite_N": [ "@cite_11", "@cite_4", "@cite_26", "@cite_7", "@cite_9", "@cite_0", "@cite_2", "@cite_5", "@cite_15", "@cite_25", "@cite_17" ], "mid": [ "2130315896", "2153878017", "2115468262", "", "2117549011", "1999402023", "2528200010", "2149935279", "2043630110", "", "1977738269" ], "abstract": [ "Capacitive crosstalk between adjacent signal wires has significant effect on performance and delay uncertainty of point-to-point on-chip buses in deep submicrometer (DSM) VLSI technologies. We propose a hybrid polarity repeater insertion technique that combines inverting and non-inverting repeater insertion to achieve constant average effective coupling capacitance per wire transition for all possible switching patterns. Theoretical analysis shows the superiority of the proposed method in terms of performance and delay uncertainty compared to conventional and staggered repeater insertion methods. Simulations at the 90-nm node on semi-global METAL5 layer show around 25 reduction in worst case delay and around 86 delay uncertainty minimization compared to standard bus with optimal repeater configuration. The reduction in worst case capacitive coupling reduces peak energy which is a critical factor for thermal regulation and packaging. Isodelay comparisons with standard bus show that the proposed technique achieves considerable reduction in total buffers area, which in turn reduces average energy and peak current. Comparisons with staggered repeater which is one of the simplest and most effective crosstalk reduction techniques in the literature show that hybrid polarity repeater offers higher performance, less delay uncertainty, and reduced sensitivity to repeater placement variation.", "In this paper, statistical models for the efficient analysis of interconnect delay and crosstalk noise in the presence of back-end process variations are developed. The proposed models enable closed-form computation of means and variances of interconnect-delay, crosstalk-noise peak, and coupling-induced-delay change for given magnitudes of variation in relevant process parameters, such as linewidth, met al thickness, met al spacing, and interlayer dielectric (ILD) thickness. The proposed approach is based on the observation that if the variations in different physical dimensions are assumed to be independent normal random variables, then the interconnect behavior also tends to have a Gaussian distribution. In the proposed statistical models, delay and noise are expressed directly as functions of changes in physical parameters. This formulation allows us to preserve all correlations and can be very useful in evaluating delay and noise sensitivities due to changes in various physical dimensions. For interconnect-delay computation, the authors express the resistance and capacitance of a line as a linear function of random variables and then use these to compute circuit moments. They show that ignoring higher order terms in the resulting variational moments does not result in a loss of accuracy. Finally, these variability-aware moments are used in known closed-form delay and slew metrics to compute interconnect-delay probability density functions (pdfs). Similarly for coupling noise and dynamic-delay analysis, the authors rely on the linearity (Gaussian) assumption, allowing us to truncate nonlinear terms and express noise and dynamic-delay pdfs as linear functions of variations in relevant geometric dimensions. 
They compare their approach to SPICE-based Monte Carlo simulations and report the error in mean and standard deviation of interconnect delay to be 1 and 4 on average, respectively", "Interconnect delay has become a limiting factor for circuit performance in deep sub-micrometer designs. As the crosstalk in an on-chip bus is highly dependent on the data patterns transmitted on the bus, different crosstalk avoidance coding schemes have been proposed to boost the bus speed and or reduce the overall energy consumption. Despite the availability of the codes, no systematic mapping of data words to codewords has been proposed for CODEC design. This is mainly due to the nonlinear nature of the crosstalk avoidance codes (CAC). The lack of practical CODEC construction schemes has hampered the use of such codes in practical designs. This work presents guidelines for the CODEC design of the ldquoforbidden pattern free crosstalk avoidance coderdquo (FPF-CAC). We analyze the properties of the FPF-CAC and show that mathematically, a mapping scheme exists based on the representation of numbers in the Fibonacci numeral system. Our first proposed CODEC design offers a near-optimal area overhead performance. An improved version of the CODEC is then presented, which achieves theoretical optimal performance. We also investigate the implementation details of the CODECs, including design complexity and the speed. Optimization schemes are provided to reduce the size of the CODEC and improve its speed.", "", "Placing shields around a victim signal line is a common way to enhance signal integrity while minimizing delay uncertainty. For two coupled interconnects with a shield between the lines, the coupling noise can produce a peak noise of 15 of V sub dd in a 0.18 spl mu m CMOS technology. A pseudo-2 spl pi RC model is used to develop an analytic estimate of the peak noise for shielded interconnects. The peak noise model is accurate within an average error of 4.4 as compared to SPICE. The effects of the shield width, length, separation between the shield and the signal, and the number of connections tieing the shield to ground on the overall crosstalk noise are described in this paper. Based on the peak noise model, a minimum number of ground connections for a target shield line with noise constraints is obtained. Inserting a shield line between two coupled interconnects is shown to be more effective in reducing crosstalk noise than increasing the physical separation.", "Network on Chip (NoC) is an enabling methodology of integrating a very high number of intellectual property (IP) blocks in a single System on Chip (SoC). A major challenge that NoC design is expected to face is the intrinsic unreliability of the interconnect infrastructure under technology limitations. Research must address the combination of new device-level defects or error-prone technologies within systems that must deliver high levels of reliability and dependability while satisfying other hard constraints such as low energy consumption. By incorporating novel error correcting codes it is possible to protect the NoC communication fabric against transient errors and at the same time lower the energy dissipation. We propose a novel, simple coding scheme called Crosstalk Avoiding Double Error Correction Code (CADEC). 
Detailed analysis followed by simulations with three commonly used NoC architectures show that CADEC provides significant energy savings compared to previously proposed crosstalk avoiding single error correcting codes and error-detection retransmission schemes.", "With the advent in technology and shrinking the transistor size down to nano scale, static power may become the dominant power component in Networks-on-Chip (NoCs). Powergating is an efficient technique to reduce the static power of under-utilized resources in different types of circuits. For NoC, routers are promising candidates for power gating, since they present high idle time. However, routers in a NoC are not usually idle for long consecutive cycles due to distribution of resources in NoC and its communication-based nature, even in low network utilizations. Therefore, power-gating loses its efficiency due to performance and power overhead of the packets that encounter powered-off routers. In this paper, we propose Turn-on on Turn (TooT) which reduces the number of wake-ups by leveraging the characteristics of deterministic routing algorithms and mesh topology. In the proposed method, we avoid powering a router on when it forwards a straight packet or ejects a packet, i.e., a router is powered on only when either a packet turns through it or its associated node injects a packet. Experimental results on PARSEC benchmarks demonstrate that, compared with the conventional power-gating, the proposed method improves static power and performance by 57.9 and 35.3 , respectively, at the cost of a negligible area overhead.", "To alleviate the complex communication problems that arise as the number of on-chip components increases, network-on-chip (NoC) architectures have been recently proposed to replace global interconnects. In this paper, we first provide a general description of NoC architectures and applications. Then, we enumerate several related research problems organized under five main categories: Application characterization, communication paradigm, communication infrastructure, analysis, and solution evaluation. Motivation, problem description, proposed approaches, and open issues are discussed for each problem from system, microarchitecture, and circuit perspectives. Finally, we address the interactions among these research problems and put the NoC design process into perspective.", "As the CMOS technology scaled down, the horizontal coupling capacitance between adjacent wires plays a dominant part in wire load, crosstalk interference becomes a serious problem for VLSI design. We focused on the delay increase caused by crosstalk. On-chip bus delay is maximized by the crosstalk effect when adjacent wires simultaneously switch for opposite signal transition directions. This paper proposes a bus delay reduction technique by intentional skewing signal transition timing of adjacent wires. An approximated equation of bus delay shows our delay reduction technique is effective for a repeater-inserted bus. The result of SPICE simulation shows that the total bus delay reduction by from 5 to 20 can be achieved.", "", "Systems-on-chip (SoCs) based on many core architectures can be attacked. Malicious processes can infer secrets from on-chip sensible traffic by evaluating the degradation on their communication performance. Such a threat rises from the resource sharing. In order to avoid such time-driven attacks, the network-on-chip (NoC) can integrate mechanisms to isolate different communication flows. 
In this letter, we propose two mechanisms, random arbitration and adaptive routing, that dynamically allocate the SoC resources to avoid such attacks. We compare our approach to the unique previous work under several traffic conditions. We demonstrate that our mechanisms are effective to protect the SoC while increasing the overall performance." ] }
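An illustrative sketch of the transition-pattern reasoning behind the crosstalk avoidance codes mentioned above: for two consecutive words on a one-dimensional bus, count for each wire how many neighbours switch in the opposite direction, since such opposing transitions produce the worst-case coupling. The example words are arbitrary, and real TSV analyses consider two-dimensional neighbourhoods rather than a simple line of wires.

def crosstalk_classes(prev_word, next_word):
    # For each wire, count adjacent wires switching opposite to it between
    # two consecutive bus words; crosstalk avoidance codes forbid exactly
    # these opposing-transition patterns.
    def bits(word, width):
        return [(word >> i) & 1 for i in range(width)]

    width = max(prev_word.bit_length(), next_word.bit_length(), 1)
    p, n = bits(prev_word, width), bits(next_word, width)
    delta = [n[i] - p[i] for i in range(width)]      # -1, 0 or +1 per wire
    classes = []
    for i in range(width):
        opposing = 0
        for j in (i - 1, i + 1):
            if 0 <= j < width and delta[i] != 0 and delta[j] == -delta[i]:
                opposing += 1
        classes.append(opposing)
    return classes

# 0b0101 -> 0b1010 makes every wire switch opposite to all of its neighbours.
print(crosstalk_classes(0b0101, 0b1010))   # [1, 2, 2, 1]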
1901.00568
2907940234
Three Dimensional Integrated Circuits (3D IC) offer lower power consumption, higher performance, higher bandwidth, and scalability over the conventional two dimensional ICs. Through-Silicon Via (TSV) is one of the fabrication mechanisms that connects stacked dies to each other. The large size of TSVs and the proximity between them lead to undesirable coupling capacitance. This interference causes mutual influences between adjacent TSVs and produces crosstalk noise. Furthermore, this effect threatens the reliability of data during traversal between layers. This paper proposes a mechanism that efficiently reduces crosstalk noise between TSVs with lower area overhead as compared to previous works. This mechanism revolves around the fact that retaining TSV value in current state can reduce coupling in some cases. To evaluate the mechanism, gem5 simulator is used for data extraction and several benchmarks are taken from the SPEC2006 suite. The simulation results show that the proposed mechanism reduces crosstalk noise with only 30% imposed TSV overhead while delay decreased by up to 25.7% as compared to a recent related work.
Although the above approaches may cope with crosstalk in 2D ICs, they cannot be directly applied to 3D technologies, because the additional dimension substantially changes the analysis of the crosstalk problem. Placing long and thick TSVs close together causes new reliability issues, which have been studied recently @cite_24 @cite_1 . Several mechanisms have been proposed to make 3D ICs more reliable against crosstalk noise, e.g., @cite_12 @cite_3 @cite_8 @cite_6 @cite_22 . Capacitive and inductive TSV-to-TSV coupling are the two major threats to 3D IC reliability. Previous works have addressed these effects from two perspectives: @cite_12 @cite_8 @cite_6 proposed capacitance-based mechanisms, while @cite_3 @cite_22 @cite_13 proposed inductance-based techniques to reduce crosstalk effects in 3D ICs.
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_1", "@cite_3", "@cite_6", "@cite_24", "@cite_13", "@cite_12" ], "mid": [ "", "1999816809", "2155351061", "2009570976", "2003631910", "2004420027", "2069756028", "2017619982" ], "abstract": [ "", "3D integration is one of the promising solutions to overcome the interconnect bottleneck with vertical interconnect through-silicon vias (TSVs). This paper investigates the crosstalk in 3D IC designs, especially the capacitive crosstalk in TSV interconnects. We propose a novel ω-LAT coding scheme to reduce the capacitive crosstalk and minimize the power consumption overhead in the TSV array. Combining with the Transition Signaling, the LAT coding scheme restricts the number of transitions in every transmission cycle to minimize the crosstalk and power consumption. Compared to other 3D crosstalk minimization coding schemes, the proposed coding can provide the same delay reduction with more affordable overhead. The performance and power analysis show that when ω is 4, the proposed LAT coding scheme can achieve 38 interconnect crosstalk delay reduction compared to the data transmission without coding. By reducing the value of ω, further reduction can be achieved1.", "Three-dimensional integrated circuit (3D IC) with through-silicon-via (TSV) is believed to offer new levels of efficiency, power, performance, and form-factor advantages over the conventional 2D IC. However, 3D IC involves disruptive manufacturing technologies compared to conventional 2D IC. TSVs cause significant thermomechanical stress that may seriously affect performance, leakage, and reliability of circuits. In this paper, we discuss an efficient and accurate full-chip thermomechanical stress and reliability analysis tool as well as a design optimization methodology to alleviate mechanical reliability issues in 3D ICs. First, we analyze detailed thermomechanical stress induced by TSVs in conjunction with various associated structures such as landing pad and dielectric liner. Then, we explore and validate the linear superposition principle of stress tensors and demonstrate the accuracy of this method against detailed finite element analysis (FEA) simulations. Next, we apply this linear superposition method to full-chip stress simulation and a reliability metric named the von Mises yield criterion. Finally, we propose a design optimization methodology to mitigate the mechanical reliability problems in 3D ICs.", "A reliable Three Dimensional Network-on-Chip (3D NoC) is required for future many-core systems. Through-silicon Via (TSV) is the prominent component of 3D NoC to support better performance and lower power consumption. Inductive TSV coupling has large disruptive effects on Signal Integrity (SI) and transmission delay. In this paper, TSV inductive coupling is analyzed based on technology process, TSV length, and TSV radius for a range of frequencies. A classification of inductive coupling voltage is presented for different TSV configurations. A novel coding technique is devised to mitigate the inductive coupling effects by adjusting the current flow pattern. Simulations for a 4×8 TSV matrix show 23 coupled voltage mitigation, imposing 12.5 information redundancy.", "3D IC is a promising technology to meet the demands of high throughput, high scalability, and low power consumption for future generation integrated circuits. One way to implement the 3D IC is to interconnect layers of two-dimensional (2D) IC with Through-Silicon Via (TSV), which shortens the signal lengths. 
Unfortunately, while TSVs are bundled together as a cluster, the crosstalk coupling noise may lead to transmission errors. As a result, the working frequency of TSVs has to be lowered to avoid the errors, leading to narrower bandwidth that TSVs can provide. In this paper, we first derive the crosstalk noise model from the perspective of 3D chip and then propose ShieldUS, a runtime data-to-TSVs remapping strategy. With ShieldUS, the transition patterns of data over TSVs are observed at runtime, and relatively stable bits will be mapped to the TSVs which act as shields to protect the other bits which have more fluctuations. We evaluate the performance of ShieldUS with address lines from real benchmark traces and data lines of different similarities. The results show that ShieldUS is accurate and flexible. We further study dynamic shielding and our design of Interval Equilibration Unit (IEU) can intelligently select suitable parameters for dynamic shielding, which makes dynamic shielding practical and does not need to predefine parameters. This also improves the practicability of ShieldUS.", "In this paper, the reliability of through-silicon via (TSV) daisy chains under thermal cycling conditions was examined. The electrical resistance of TSV daisy chains was found to increase with the number of thermal cycles, due to thermally induced damage leading to the formation and growth of defects. The contributions of each identified damage type to the change in the electrical resistance of the TSV chain were evaluated by electrical modeling. Thermo-mechanical modeling showed a good correlation between the observed damage locations and the simulated stress-concentration regions of the TSV.", "Recently, the development of three-dimensional large-scale integration (3D-LSI) has been accelerated. Its stage has changed from the research level or limited production level to the investigation level with a view to mass production. The 3D-LSI using through-silicon via (TSV) has the simplest structure and is expected to realize a high-performance, high-functionality, and high-density LSI cube. This paper describes the current and future 3D-LSI technologies with TSV.", "In 3D VLSI, through-silicon vias (TSVs) are relatively large, and closely spaced. This results in a situation in which noise on one or more TSVs may deteriorate the delay and signal integrity of neighboring TSVs. In this paper, we first quantify the parasitics in contemporary TSVs, and then come up with a classification of crosstalk sequences as OC, 1C,... 8C sequences. Next, we present inductive approaches to quantify the exact overhead for 8C, 6C and 4C crosstalk avoidance codes (CACs) for a 3 x n mesh arrangement of TSVs. These overheads for different CACs for a 3 x n mesh arrangement of TSVs are used to calculate the lower bounds on the corresponding overheads for an n x n mesh arrangements of TSVs. We also discuss an efficient way to implement the coding and decoding (CODEC) circuitry for limiting the maximum crosstalk to 6C. Our experimental results show that for a TSV mesh arrangement driven by inverters implemented in a 22nm technology, the coding based approaches yields improvements which are in line with the theoretical predictions." ] }
1901.00568
2907940234
Three Dimensional Integrated Circuits (3D IC) offer lower power consumption, higher performance, higher bandwidth, and scalability over the conventional two dimensional ICs. Through-Silicon Via (TSV) is one of the fabrication mechanisms that connects stacked dies to each other. The large size of TSVs and the proximity between them lead to undesirable coupling capacitance. This interference causes mutual influences between adjacent TSVs and produces crosstalk noise. Furthermore, this effect threatens the reliability of data during traversal between layers. This paper proposes a mechanism that efficiently reduces crosstalk noise between TSVs with lower area overhead as compared to previous works. This mechanism revolves around the fact that retaining TSV value in current state can reduce coupling in some cases. To evaluate the mechanism, gem5 simulator is used for data extraction and several benchmarks are taken from the SPEC2006 suite. The simulation results show that the proposed mechanism reduces crosstalk noise with only 30% imposed TSV overhead while delay decreased by up to 25.7% as compared to a recent related work.
Increasing the distance between TSVs, shielding TSVs, inserting buffers at the victim side, inserting buffers or decreasing driver size at the aggressor side, and increasing the load on the wires are the mechanisms examined in @cite_20 to mitigate TSV crosstalk noise. According to their experiments, unlike 2D wires, increasing TSV distances is not an effective solution to the TSV-to-TSV coupling problem, and the other solutions either require high post-design effort or have a negative impact on timing performance.
{ "cite_N": [ "@cite_20" ], "mid": [ "2133279383" ], "abstract": [ "This paper studies TSV-to-TSV coupling in 3D ICs. A full-chip SI analysis flow is proposed based on the proposed coupling model. Analysis results show that TSVs cause significant coupling noise and timing problems despite that TSV count is much smaller com- pared with the gate count. Two approaches are proposed to alleviate TSV-to-TSV coupling, namely TSV shielding and buffer insertion. Analysis results show that both approaches are effective in reducing the TSV-caused-coupling and improving timing." ] }
1901.00568
2907940234
Three Dimensional Integrated Circuits (3D IC) offer lower power consumption, higher performance, higher bandwidth, and scalability over the conventional two dimensional ICs. Through-Silicon Via (TSV) is one of the fabrication mechanisms that connects stacked dies to each other. The large size of TSVs and the proximity between them lead to undesirable coupling capacitance. This interference causes mutual influences between adjacent TSVs and produces crosstalk noise. Furthermore, this effect threatens the reliability of data during traversal between layers. This paper proposes a mechanism that efficiently reduces crosstalk noise between TSVs with lower area overhead as compared to previous works. This mechanism revolves around the fact that retaining a TSV value in its current state can reduce coupling in some cases. To evaluate the mechanism, the gem5 simulator is used for data extraction and several benchmarks are taken from the SPEC2006 suite. The simulation results show that the proposed mechanism reduces crosstalk noise with only 30% imposed TSV overhead while delay is decreased by up to 25.7% as compared to a recent related work.
RTL mechanisms for 3D ICs have been proposed and evaluated recently. @cite_12 proposed a coding scheme that reduces the maximum crosstalk by about 28%. The authors in @cite_8 introduce the use of a less adjacent transition (LAT) code along with transition signaling to minimize the number of transitions. Furthermore, 3DLAT reduces the frequency of the higher crosstalk classes. This scheme has a significant TSV overhead which is not negligible: according to the authors' report, the TSV overhead of 3DLAT is about 80%.
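To make the 0C–8C crosstalk classes mentioned above concrete, the toy function below counts how many neighbours of a victim TSV switch in the opposite direction during one bus transition, under the simplifying assumption that each opposing neighbour contributes one unit (1C) of coupling; a victim whose eight mesh neighbours all switch against it would then be an 8C event. This is only an illustrative counting sketch, not the capacitance model used in the cited papers.

```python
def crosstalk_class(prev_bits, next_bits, victim, neighbours):
    """Toy 0C..8C classification for one victim TSV.

    prev_bits / next_bits: dicts mapping TSV id -> bit value before and
    after the transition.  neighbours: TSV ids adjacent to the victim.
    Simplified rule: each neighbour toggling opposite to the victim
    adds 1C of coupling.
    """
    v_delta = next_bits[victim] - prev_bits[victim]   # -1, 0 or +1
    if v_delta == 0:
        return 0                      # quiet victim: no switching event
    coupling = 0
    for n in neighbours:
        n_delta = next_bits[n] - prev_bits[n]
        if n_delta == -v_delta:       # neighbour switches the opposite way
            coupling += 1
    return coupling

if __name__ == "__main__":
    prev = {"v": 0, "a": 1, "b": 1, "c": 0}
    nxt = {"v": 1, "a": 0, "b": 0, "c": 0}
    print(crosstalk_class(prev, nxt, "v", ["a", "b", "c"]))   # prints 2
```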
{ "cite_N": [ "@cite_12", "@cite_8" ], "mid": [ "2017619982", "1999816809" ], "abstract": [ "In 3D VLSI, through-silicon vias (TSVs) are relatively large, and closely spaced. This results in a situation in which noise on one or more TSVs may deteriorate the delay and signal integrity of neighboring TSVs. In this paper, we first quantify the parasitics in contemporary TSVs, and then come up with a classification of crosstalk sequences as OC, 1C,... 8C sequences. Next, we present inductive approaches to quantify the exact overhead for 8C, 6C and 4C crosstalk avoidance codes (CACs) for a 3 x n mesh arrangement of TSVs. These overheads for different CACs for a 3 x n mesh arrangement of TSVs are used to calculate the lower bounds on the corresponding overheads for an n x n mesh arrangements of TSVs. We also discuss an efficient way to implement the coding and decoding (CODEC) circuitry for limiting the maximum crosstalk to 6C. Our experimental results show that for a TSV mesh arrangement driven by inverters implemented in a 22nm technology, the coding based approaches yields improvements which are in line with the theoretical predictions.", "3D integration is one of the promising solutions to overcome the interconnect bottleneck with vertical interconnect through-silicon vias (TSVs). This paper investigates the crosstalk in 3D IC designs, especially the capacitive crosstalk in TSV interconnects. We propose a novel ω-LAT coding scheme to reduce the capacitive crosstalk and minimize the power consumption overhead in the TSV array. Combining with the Transition Signaling, the LAT coding scheme restricts the number of transitions in every transmission cycle to minimize the crosstalk and power consumption. Compared to other 3D crosstalk minimization coding schemes, the proposed coding can provide the same delay reduction with more affordable overhead. The performance and power analysis show that when ω is 4, the proposed LAT coding scheme can achieve 38 interconnect crosstalk delay reduction compared to the data transmission without coding. By reducing the value of ω, further reduction can be achieved1." ] }
1901.00512
2908315377
A brain-computer interface (BCI) based on the motor imagery (MI) paradigm translates one's motor intention into a control signal by classifying the Electroencephalogram (EEG) signal of different tasks. However, most existing systems either (i) use a high-quality algorithm to train the data off-line and run only classification in real time, since the off-line algorithm is too slow, or (ii) use low-quality heuristics that are sufficiently fast for real-time training but introduce relatively large classification error. In this work, we propose a novel processing pipeline that allows real-time and parallel learning of EEG signals using high-quality but possibly inefficient algorithms. This is done by forging a link between BCI and core-sets, a technique that originated in computational geometry for handling streaming data via data summarization. We suggest an algorithm that maintains the representation of such a coreset tailored to handle the EEG signal, which enables: (i) real-time and continuous computation of the Common Spatial Pattern (CSP) feature extraction method on a coreset representation of the signal (instead of on the signal itself), (ii) improvement of the CSP algorithm's efficiency with provable guarantees by applying the CSP algorithm on the coreset, and (iii) real-time addition of data trials (EEG data windows) to the coreset. For simplicity, we focus on the CSP algorithm, which is a classic algorithm. Nevertheless, we expect that our coreset will be extended to other algorithms in future papers. In the experimental results we show that our system can indeed learn EEG signals in real time, for example a 64-channel setup with hundreds of time samples per second. Full open source code is provided to reproduce the experiments, in the hope that it will be used and extended to more coresets and BCI applications in the future.
Improved techniques for using coresets for distributed data and low communication on the cloud, with both theoretical guarantees and experimental results, were recently suggested in @cite_53 @cite_22 . Classical techniques such as Frank-Wolfe @cite_44 and semi-definite programming @cite_45 appear to produce deterministic and smaller types of coresets. Coresets for matrix approximations were suggested in @cite_59 @cite_19 @cite_29 using random projections, often called sketches. The first coresets with applications to GPS or video data were suggested in @cite_24 @cite_11 @cite_25 . The first results for coresets over uncertain data appeared recently @cite_32 @cite_46
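The row-streaming matrix sketch described in the @cite_22 abstract below can be written in a few lines of numpy; the following is a minimal Frequent-Directions-style sketch for illustration only, and the function name and the policy of shrinking whenever the buffer fills are assumptions of this sketch rather than details from the cited papers.

```python
import numpy as np

def frequent_directions(rows, ell):
    """Maintain an ell x d sketch B of the rows seen so far, such that
    B^T B approximates A^T A for the streamed matrix A."""
    rows = iter(rows)
    first = np.asarray(next(rows), dtype=float)
    d = first.shape[0]
    B = np.zeros((ell, d))
    B[0] = first
    filled = 1
    for r in rows:
        if filled == ell:
            # Buffer is full: shrink all directions by the smallest
            # squared singular value, which zeroes out at least one row.
            U, s, Vt = np.linalg.svd(B, full_matrices=False)
            s = np.sqrt(np.maximum(s ** 2 - s[-1] ** 2, 0.0))
            B = np.zeros((ell, d))
            B[: len(s)] = s[:, None] * Vt
            filled = ell - 1
        B[filled] = np.asarray(r, dtype=float)
        filled += 1
    return B

if __name__ == "__main__":
    A = np.random.randn(500, 20)
    B = frequent_directions(A, ell=8)
    print("covariance error:", np.linalg.norm(A.T @ A - B.T @ B, 2))
```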
{ "cite_N": [ "@cite_22", "@cite_53", "@cite_29", "@cite_32", "@cite_44", "@cite_19", "@cite_24", "@cite_45", "@cite_59", "@cite_46", "@cite_25", "@cite_11" ], "mid": [ "2088424151", "2003895866", "1813460488", "2157754768", "2109706083", "2043804332", "2149880269", "2410099853", "2042465463", "2963075731", "2069544315", "2125954346" ], "abstract": [ "A sketch of a matrix A is another matrix B which is significantly smaller than A but still approximates it well. Finding such sketches efficiently is an important building block in modern algorithms for approximating, for example, the PCA of massive matrices. This task is made more challenging in the streaming model, where each row of the input matrix can only be processed once and storage is severely limited. In this paper we adapt a well known streaming algorithm for approximating item frequencies to the matrix sketching setting. The algorithm receives n rows of a large matrix A e ℜ n x m one after the other in a streaming fashion. It maintains a sketch B ℜ l x m containing only l This gives a streaming algorithm whose error decays proportional to 1 l using O(ml) space. For comparison, random-projection, hashing or sampling based algorithms produce convergence bounds proportional to 1 √l. Sketch updates per row in A require amortized O(ml) operations and the algorithm is perfectly parallelizable. Our experiments corroborate the algorithm's scalability and improved convergence rate. The presented algorithm also stands out in that it is deterministic, simple to implement and elementary to prove.", "We suggest a generic data reduction technique with provable guarantees for computing the low rank approximation of a matrix under some $ellz error, and constrained factorizations, such as the Non-negative Matrix Factorization (NMF). Our main algorithm reduces a given n x d matrix into a small, e-dependent, weighted subset C of its rows (known as a coreset), whose size is independent of both n and d. We then prove that applying existing algorithms on the resulting coreset can be turned into (1+e)-approximations for the original (large) input matrix. In particular, we provide the first linear time approximation scheme (LTAS) for the rank-one NMF. The coreset C can be computed in parallel and using only one pass over a possibly unbounded stream of row vectors. In this sense we improve the result in [4] (Best paper of STOC 2013). Moreover, since C is a subset of these rows, its construction time, as well as its sparsity (number of non-zeroes entries) and the sparsity of the resulting low rank approximation depend on the maximum sparsity of an input row, and not on the actual dimension d. In this sense, we improve the result of Libery [21](Best paper of KDD 2013) and answer affirmably, and in a more general setting, his open question of computing such a coreset. Source code is provided for reproducing the experiments and integration with existing and future algorithms.", "The ep regression problem takes as input a matrix A ∈ ℝn, a vector b ∈ ℝn, and a number p ∈ [1, ∞), and it returns as output a number Z and a vector xOPT ∈ ℝd such that Z = minx∈ℝd ||Ax - b||p = ||AxOPT - b||p. In this paper, we construct coresets and obtain an efficient two-stage sampling-based approximation algorithm for the very overconstrained (n G d) version of this classical problem, for all p ∈ [1, ∞). 
The first stage of our algorithm non-uniformly samples r1 = O(36pdmax p 2+1, p +1) rows of A and the corresponding elements of b, and then it solves the lp regression problem on the sample; we prove this is an 8-approximation. The second stage of our algorithm uses the output of the first stage to resample r1 e2 constraints, and then it solves the lp regression problem on the new sample; we prove this is a (1 + e)-approximation. Our algorithm unifies, improves upon, and extends the existing algorithms for special cases of ep regression, namely p = 1,2 [10, 13]. In course of proving our result, we develop two concepts--well-conditioned bases and subspace-preserving sampling--that are of independent interest.", "This paper deals with computing the smallest enclosing ball of a set of points subject to probabilistic data. In our setting, any of the n points may not or may occur at one of finitely many locations, following its own discrete probability distribution. The objective is therefore considered to be a random variable and we aim at finding a center minimizing the expected maximum distance to the points according to their distributions. Our main contribution presented in this paper is the first polynomial time (1 + e)-approximation algorithm for the probabilistic smallest enclosing ball problem with extensions to the streaming setting.", "The problem of maximizing a concave function f(x) in the unit simplex Δ can be solved approximately by a simple greedy algorithm. For given k, the algorithm can find a point x(k) on a k-dimensional face of Δ, such that f(x(k) ≥ f(x*) − O(1 k). Here f(x*) is the maximum value of f in Δ, and the constant factor depends on f. This algorithm and analysis were known before, and related to problems of statistics and machine learning, such as boosting, regression, and density mixture estimation. In other work, coming from computational geometry, the existence of e-coresets was shown for the minimum enclosing ball problem by means of a simple greedy algorithm. Similar greedy algorithms, which are special cases of the Frank-Wolfe algorithm, were described for other enclosure problems. Here these results are tied together, stronger convergence results are reviewed, and several coreset bounds are generalized or strengthened.", "We present and analyze a sampling algorithm for the basic linear-algebraic problem of l 2 regression. The l 2 regression (or least-squares fit) problem takes as input a matrix A ∈ Rn×d (where we assume n G d) and a target vector b ∈ Rn, and it returns as output Z = min x∈Rd |b - Ax| 2 . Also of interest is x opt = A+b, where A+ is the Moore-Penrose generalized inverse, which is the minimum-length vector achieving the minimum. Our algorithm randomly samples r rows from the matrix A and vector b to construct an induced l 2 regression problem with many fewer rows, but with the same number of columns. A crucial feature of the algorithm is the nonuniform sampling probabilities. These probabilities depend in a sophisticated manner on the lengths, i.e., the Euclidean norms, of the rows of the left singular vectors of A and the manner in which b lies in the complement of the column space of A. Under appropriate assumptions, we show relative error approximations for both Z and x opt . Applications of this sampling methodology are briefly discussed.", "This paper describes a system that takes as input GPS data streams generated by users' phones and creates a searchable database of locations and activities. 
The system is called iDiary and turns large GPS signals collected from smartphones into textual descriptions of the trajectories. The system features a user interface similar to Google Search that allows users to type text queries on their activities (e.g., \"Where did I buy books?\") and receive textual answers based on their GPS signals. iDiary uses novel algorithms for semantic compression (known as coresets) and trajectory clustering of massive GPS signals in parallel to compute the critical locations of a user. Using an external database, we then map these locations to textual descriptions and activities so that we can apply text mining techniques on the resulting data (e.g. LSA or transportation mode recognition). We provide experimental results for both the system and algorithms and compare them to existing commercial and academic state-of-the-art. This is the first GPS system that enables text-searchable activities from GPS data.", "In this paper we provide faster algorithms for solving the geometric median problem: given n points in d compute a point that minimizes the sum of Euclidean distances to the points. This is one of the oldest non-trivial problems in computational geometry yet despite a long history of research the previous fastest running times for computing a (1+є)-approximate geometric median were O(d· n4 3є−8 3) by Chin et. al, O(dexpє−4logє−1) by Badoiu et. al, O(nd+poly(d,є−1)) by Feldman and Langberg, and the polynomial running time of O((nd)O(1)log1 є) by Parrilo and Sturmfels and Xue and Ye. In this paper we show how to compute such an approximate geometric median in time O(ndlog3n є) and O(dє−2). While our O(dє−2) is a fairly straightforward application of stochastic subgradient descent, our O(ndlog3n є) time algorithm is a novel long step interior point method. We start with a simple O((nd)O(1)log1 є) time interior point method and show how to improve it, ultimately building an algorithm that is quite non-standard from the perspective of interior point literature. Our result is one of few cases of outperforming standard interior point theory. Furthermore, it is the only case we know of where interior point methods yield a nearly linear time algorithm for a canonical optimization problem that traditionally requires superlinear time.", "Motivated by applications in which the data may be formulated as a matrix, we consider algorithms for several common linear algebra problems. These algorithms make more efficient use of computational resources, such as the computation time, random access memory (RAM), and the number of passes over the data, than do previously known algorithms for these problems. In this paper, we devise two algorithms for the matrix multiplication problem. Suppose @math and @math (which are @math and @math , respectively) are the two input matrices. In our main algorithm, we perform @math independent trials, where in each trial we randomly sample an element of @math with an appropriate probability distribution @math on @math . We form an @math matrix @math consisting of the sampled columns of @math , each scaled appropriately, and we form a @math matrix @math using the corresponding rows of @math , again scaled appropriately. The choice of @math and the column and row scaling are crucial features of the algorithm. When these are chosen judiciously, we show that @math is a good approximation to @math . More precisely, we show that @math where @math denotes the Frobenius norm, i.e., @math . 
This algorithm can be implemented without storing the matrices @math and @math in RAM, provided it can make two passes over the matrices stored in external memory and use @math additional RAM to construct @math and @math . We then present a second matrix multiplication algorithm which is similar in spirit to our main algorithm. In addition, we present a model (the pass-efficient model) in which the efficiency of these and other approximate matrix algorithms may be studied and which we argue is well suited to many applications involving massive data sets. In this model, the scarce computational resources are the number of passes over the data and the additional space and time required by the algorithm. The input matrices may be presented in any order of the entries (and not just row or column order), as is the case in many applications where, e.g., the data has been written in by multiple agents. In addition, the input matrices may be presented in a sparse representation, where only the nonzero entries are written.", "With the dramatic growth in the number of application domains that generate probabilistic, noisy and uncertain data, there has been an increasing interest in designing algorithms for geometric or combinatorial optimization problems over such data. In this paper, we initiate the study of constructing epsilon-kernel coresets for uncertain points. We consider uncertainty in the existential model where each point's location is fixed but only occurs with a certain probability, and the locational model where each point has a probability distribution describing its location. An epsilon-kernel coreset approximates the width of a point set in any direction. We consider approximating the expected width (an epsilon-EXP-KERNEL), as well as the probability distribution on the width (an (epsilon, tau)-QUANT-KERNEL) for any direction. We show that there exists a set of O(epsilon^ -(d-1) 2 ) deterministic points which approximate the expected width under the existential and locational models, and we provide efficient algorithms for constructing such coresets. We show, however, it is not always possible to find a subset of the original uncertain points which provides such an approximation. However, if the existential probability of each point is lower bounded by a constant, an epsilon-EXP-KERNEL is still possible. We also provide efficient algorithms for construct an (epsilon, tau)-QUANT-KERNEL coreset in nearly linear time. Our techniques utilize or connect to several important notions in probability and geometry, such as Kolmogorov distances, VC uniform convergence and Tukey depth, and may be useful in other geometric optimization problem in stochastic settings. Finally, combining with known techniques, we show a few applications to approximating the extent of uncertain functions, maintaining extent measures for stochastic moving points and some shape fitting problems under uncertainty.", "We investigate a data-driven approach to robotic path planning and analyze its performance in the context of interception tasks. Trajectories of moving objects often contain repeated patterns of motion, and learning those patterns can yield interception paths that succeed more often. We therefore propose an original trajectory clustering algorithm for extracting motion patterns from trajectory data and demonstrate its effectiveness over the more common clustering approach of using k-means. We use the results to build a Hidden Markov Model of a target's motion and predict movement. 
Our simulations show that these predictions lead to more effective interception. The results of this work have potential applications in coordination of multi-robot systems, tracking and surveillance tasks, and dynamic obstacle avoidance.", "Life-logging video streams, financial time series, and Twitter tweets are a few examples of high-dimensional signals over practically unbounded time. We consider the problem of computing optimal segmentation of such signals by a k-piecewise linear function, using only one pass over the data by maintaining a coreset for the signal. The coreset enables fast further analysis such as automatic summarization and analysis of such signals. A coreset (core-set) is a compact representation of the data seen so far, which approximates the data well for a specific task - in our case, segmentation of the stream. We show that, perhaps surprisingly, the segmentation problem admits coresets of cardinality only linear in the number of segments k, independently of both the dimension d of the signal, and its number n of points. More precisely, we construct a representation of size O(k log n e2) that provides a (1+e)-approximation for the sum of squared distances to any given k-piecewise linear function. Moreover, such coresets can be constructed in a parallel streaming approach. Our results rely on a novel reduction of statistical estimations to problems in computational geometry. We empirically evaluate our algorithms on very large synthetic and real data sets from GPS, video and financial domains, using 255 machines in Amazon cloud." ] }
1901.00366
2908467437
Knowledge Distillation (KD) has been used in image classification for model compression. However, few studies apply this technology to single-stage object detectors. Focal loss shows that the accumulated errors of easily-classified samples dominate the overall loss in the training process. This problem is also encountered when applying KD to the detection task. For KD, the teacher-defined hard samples are far more important than any others. We propose ADL to address this issue by adaptively mimicking the teacher's logits, with more attention paid to two types of hard samples: hard-to-learn samples predicted by the teacher with low certainty, and hard-to-mimic samples with a large gap between the teacher's and the student's predictions. ADL enlarges the distillation loss for hard-to-learn and hard-to-mimic samples and reduces the distillation loss for the dominant easy samples, enabling distillation to work on single-stage detectors for the first time, even if the student and the teacher are identical. Besides, ADL is effective in both the supervised setting and the semi-supervised setting, even when the labeled data and unlabeled data are from different distributions. For distillation on unlabeled data, ADL achieves better performance than existing data distillation, which simply utilizes hard targets, making the student detector surpass its teacher. On the COCO database, semi-supervised adaptive distillation (SAD) makes a student detector with a ResNet-50 backbone surpass its teacher with a ResNet-101 backbone, while the student has half of the teacher's computational complexity. The code is available at this https URL
Many works have been proposed to accelerate convolutional neural networks due to the demand from practical applications. Knowledge transfer is one approach, in which knowledge is transferred from a teacher model to a student model. Previous work explores this area by representing knowledge in different forms. FitNet @cite_3 makes the student mimic the full feature maps of the teacher. KD @cite_2 proposes to supervise the student with soft targets predicted by the teacher; the probability distribution from the teacher model provides extra information compared with one-hot target encoding. Our work is closely related to knowledge distillation.
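For readers unfamiliar with the soft-target supervision just mentioned, the snippet below is a generic numpy illustration of the usual temperature-scaled distillation loss of @cite_2 (KL divergence between the teacher's and the student's softened distributions, scaled by T^2). The function names, the temperature value and the toy logits are our own illustrative choices, not code from the papers being summarised.

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)       # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0, eps=1e-12):
    """KL(teacher_soft || student_soft) * T^2, averaged over samples."""
    p = softmax(teacher_logits, T)               # soft targets
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)
    return float(np.mean(kl) * T * T)

if __name__ == "__main__":
    teacher = np.array([[5.0, 1.0, -2.0]])       # confident teacher
    student = np.array([[2.0, 2.0, -1.0]])       # less certain student
    print(distillation_loss(student, teacher))
```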
{ "cite_N": [ "@cite_3", "@cite_2" ], "mid": [ "1690739335", "1821462560" ], "abstract": [ "While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network.", "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel." ] }
1901.00366
2908467437
Knowledge Distillation (KD) has been used in image classification for model compression. However, few studies apply this technology to single-stage object detectors. Focal loss shows that the accumulated errors of easily-classified samples dominate the overall loss in the training process. This problem is also encountered when applying KD to the detection task. For KD, the teacher-defined hard samples are far more important than any others. We propose ADL to address this issue by adaptively mimicking the teacher's logits, with more attention paid to two types of hard samples: hard-to-learn samples predicted by the teacher with low certainty, and hard-to-mimic samples with a large gap between the teacher's and the student's predictions. ADL enlarges the distillation loss for hard-to-learn and hard-to-mimic samples and reduces the distillation loss for the dominant easy samples, enabling distillation to work on single-stage detectors for the first time, even if the student and the teacher are identical. Besides, ADL is effective in both the supervised setting and the semi-supervised setting, even when the labeled data and unlabeled data are from different distributions. For distillation on unlabeled data, ADL achieves better performance than existing data distillation, which simply utilizes hard targets, making the student detector surpass its teacher. On the COCO database, semi-supervised adaptive distillation (SAD) makes a student detector with a ResNet-50 backbone surpass its teacher with a ResNet-101 backbone, while the student has half of the teacher's computational complexity. The code is available at this https URL
Recently, model compression has been studied to facilitate the deployment of CNN-based object detectors on devices with limited computational resources. Chen et al. @cite_17 utilize soft targets to guide the student model in both the region proposal network and the region convolutional neural network, and balance positive and negative examples by re-weighting the loss of positive and negative samples. Instead of addressing the class imbalance problem directly, Li et al. @cite_7 propose to match the feature maps after the RoI-pooling layer, where the number of candidate regions has been significantly reduced. These methods are designed for two-stage detectors and cannot be applied to single-stage detectors directly. In contrast, our carefully designed loss is another way to address the class imbalance problem.
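The positive/negative re-weighting mentioned for @cite_17 can be illustrated with a weighted binary cross-entropy in which the abundant background (negative) samples receive a smaller weight than the rare foreground (positive) samples. The weights and names below are made up for illustration; this is not the exact loss of any cited work.

```python
import numpy as np

def weighted_binary_ce(probs, labels, w_pos=1.0, w_neg=0.1, eps=1e-12):
    """Binary cross-entropy with down-weighted negatives.

    probs:  predicted foreground probabilities in (0, 1).
    labels: 1 for foreground (positive), 0 for background (negative).
    """
    probs = np.clip(probs, eps, 1.0 - eps)
    pos_term = -w_pos * labels * np.log(probs)
    neg_term = -w_neg * (1 - labels) * np.log(1.0 - probs)
    return float(np.mean(pos_term + neg_term))

if __name__ == "__main__":
    scores = np.array([0.9, 0.2, 0.1, 0.05])     # detector confidences
    labels = np.array([1, 1, 0, 0])              # 2 foreground, 2 background
    print(weighted_binary_ce(scores, labels))
```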
{ "cite_N": [ "@cite_7", "@cite_17" ], "mid": [ "2750432752", "2750784772" ], "abstract": [ "Current CNN based object detectors need initialization from pre-trained ImageNet classification models, which are usually time-consuming. In this paper, we present a fully convolutional feature mimic framework to train very efficient CNN based detectors, which do not need ImageNet pre-training and achieve competitive performance as the large and slow models. We add supervision from high-level features of the large networks in training to help the small network better learn object representation. More specifically, we conduct a mimic method for the features sampled from the entire feature map and use a transform layer to map features from the small network onto the same dimension of the large network. In training the small network, we optimize the similarity between features sampled from the same region on the feature maps of both networks. Extensive experiments are conducted on pedestrian and common object detection tasks using VGG, Inception and ResNet. On both Caltech and Pascal VOC, we show that the modified 2.5× accelerated Inception network achieves competitive performance as the full Inception Network. Our faster model runs at 80 FPS for a 1000×1500 large input with only a minor degradation of performance on Caltech.", "Despite significant accuracy improvement in convolutional neural networks (CNN) based object detectors, they often require prohibitive runtimes to process an image for real-time applications. State-of-the-art models often use very deep networks with a large number of floating point operations. Efforts such as model compression learn compact models with fewer number of parameters, but with much reduced accuracy. In this work, we propose a new framework to learn compact and fast object detection networks with improved accuracy using knowledge distillation [20] and hint learning [34]. Although knowledge distillation has demonstrated excellent improvements for simpler classification setups, the complexity of detection poses new challenges in the form of regression, region proposals and less voluminous labels. We address this through several innovations such as a weighted cross-entropy loss to address class imbalance, a teacher bounded loss to handle the regression component and adaptation layers to better learn from intermediate teacher distributions. We conduct comprehensive empirical evaluation with different distillation configurations over multiple datasets including PASCAL, KITTI, ILSVRC and MS-COCO. Our results show consistent improvement in accuracy-speed trade-offs for modern multi-class detection models." ] }
1907.04251
2959979976
We propose an algorithm for low rank matrix completion for matrices with binary entries which obtains explicit binary factors. Our algorithm, which we call TBMC (Tiling for Binary Matrix Completion), gives interpretable output in the form of binary factors which represent a decomposition of the matrix into tiles. Our approach is inspired by a popular algorithm from the data mining community called PROXIMUS: it adopts the same recursive partitioning approach while extending to missing data. The algorithm relies upon rank-one approximations of incomplete binary matrices, and we propose a linear programming (LP) approach for solving this subproblem. We also prove a @math -approximation result for the LP approach which holds for any level of subsampling and for any subsampling pattern. Our numerical experiments show that TBMC outperforms existing methods on recommender systems arising in the context of real datasets.
We propose TBMC (Tiling for Binary Matrix Completion), a low rank binary matrix completion algorithm ( sec:TBMC ). The algorithm is inspired by the approach in @cite_9 for BMF, which recursively partitions the database by means of rank-one approximations. In particular, we propose using an LP rank-one approximation for missing data. We support this choice with a guarantee that it provides a @math -approximation to the optimal objective value, showing that the reasoning of @cite_14 holds in the missing data case ( sec:approximation ). We show that our algorithm outperforms alternatives based on related heuristics and techniques for non-negative matrix completion and binary matrix completion, when tested on synthetic and real life data ( sec:numerical ). @math The most closely related work we are aware of is the Spectral method proposed in @cite_25 for bi-clustering databases with missing data. The authors use low rank completion to cluster neighbour rows and then redefine the column clusters based on cluster membership, in a similar fashion to @math -means. We show that our algorithm outperforms the Spectral method when solving Problem for real world datasets.
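The rank-one subproblem referred to above (approximating a partially observed binary matrix by the outer product of two binary vectors, scoring only the observed entries) can be illustrated with a simple alternating heuristic. This is a plain greedy sketch written for illustration, not the LP formulation proposed in the paper; the initialisation and iteration count are arbitrary choices.

```python
import numpy as np

def rank_one_binary(M, mask, iters=20):
    """Greedy rank-one binary approximation of a partially observed matrix.

    M:    n x m binary matrix (entries outside the mask are ignored).
    mask: n x m boolean matrix, True where the entry is observed.
    Returns binary vectors u (n,) and v (m,) so that u v^T approximates
    M on the observed entries.
    """
    n, m = M.shape
    v = np.ones(m, dtype=int)          # arbitrary initial column pattern
    u = np.zeros(n, dtype=int)
    for _ in range(iters):
        # Fix v, choose each u[i] to minimise observed mismatches in row i.
        for i in range(n):
            obs = mask[i]
            err1 = np.sum(M[i, obs] != v[obs])   # cost of setting u[i] = 1
            err0 = np.sum(M[i, obs] != 0)        # cost of setting u[i] = 0
            u[i] = 1 if err1 < err0 else 0
        # Fix u, choose each v[j] symmetrically.
        for j in range(m):
            obs = mask[:, j]
            err1 = np.sum(M[obs, j] != u[obs])
            err0 = np.sum(M[obs, j] != 0)
            v[j] = 1 if err1 < err0 else 0
    return u, v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u_true, v_true = rng.integers(0, 2, 8), rng.integers(0, 2, 6)
    M = np.outer(u_true, v_true)
    mask = rng.random(M.shape) < 0.6             # observe ~60% of entries
    u, v = rank_one_binary(M, mask)
    print(np.sum((np.outer(u, v) != M) & mask), "observed mismatches")
```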
{ "cite_N": [ "@cite_9", "@cite_14", "@cite_25" ], "mid": [ "2167851099", "1977877565", "1995565521" ], "abstract": [ "This article presents the design and implementation of a software tool, PROXIMUS, for error-bounded approximation of high-dimensional binary attributed datasets based on nonorthogonal decomposition of binary matrices. This tool can be used for analyzing data arising in a variety of domains ranging from commercial to scientific applications. Using a combination of innovative algorithms, novel data structures, and efficient implementation, PROXIMUS demonstrates excellent accuracy, performance, and scalability to large datasets. We experimentally demonstrate these on diverse applications in association rule mining and DNA microarray analysis. In limited beta release, PROXIMUS currently has over 300 installations in over 10 countries.", "Mining discrete patterns in binary data is important for subsampling, compression, and clustering. We consider rank-one binary matrix approximations that identify the dominant patterns of the data, while preserving its discrete property. A best approximation on such data has a minimum set of inconsistent entries, i.e., mismatches between the given binary data and the approximate matrix. Due to the hardness of the problem, previous accounts of such problems employ heuristics and the resulting approximation may be far away from the optimal one. In this paper, we show that the rank-one binary matrix approximation can be reformulated as a 0-1 integer linear program (ILP). However, the ILP formulation is computationally expensive even for small-size matrices. We propose a linear program (LP) relaxation, which is shown to achieve a guaranteed approximation error bound. We further extend the proposed formulations using the regularization technique, which is commonly employed to address overfitting. The LP formulation is restricted to medium-size matrices, due to the large number of variables involved for large matrices. Interestingly, we show that the proposed approximate formulation can be transformed into an instance of the minimum s-t cut problem, which can be solved efficiently by finding maximum flows. Our empirical study shows the efficiency of the proposed algorithm based on the maximum flow. Results also confirm the established theoretical bounds.", "In standard clustering problems, data points are represented by vectors, and by stacking them together, one forms a data matrix with row or column cluster structure. In this paper, we consider a class of binary matrices, arising in many applications, which exhibit both row and column cluster structure, and our goal is to exactly recover the underlying row and column clusters by observing only a small fraction of noisy entries. We first derive a lower bound on the minimum number of observations needed for exact cluster recovery. Then, we study three algorithms with different running time and compare the number of observations needed by them for successful cluster recovery. Our analytical results show smooth time-data trade offs: one can gradually reduce the computational complexity when increasingly more observations are available." ] }
1907.03928
2958621236
Probabilistic game structures combine both nondeterminism and stochasticity, where players repeatedly take actions simultaneously to move to the next state of the concurrent game. Probabilistic alternating simulation is an important tool to compare the behaviour of different probabilistic game structures. In this paper, we present a sound and complete modal characterisation of this simulation relation by proposing a new logic based on probabilistic distributions. The logic enables a player to enforce a property in the next state or distribution. Its extension with fixpoints, which also characterises the simulation relation, can express a lot of interesting properties in practical applications.
Segala and Lynch @cite_5 introduce a probabilistic simulation relation which preserves probabilistic computation tree logic (PCTL) formulas without negation and existential quantification. Segala introduces the notion of probabilistic forward simulation, which relates states to probability distributions over states and is sound and complete for trace distribution precongruence @cite_27 @cite_7 . Parma and Segala @cite_4 use a probabilistic extension of the Hennessy-Milner logic which allows countable conjunction and admits a new operator @math -- a distribution satisfies @math if the probability on the set of states satisfying @math is at least @math , with a sound and complete logic characterisation. @cite_6 further extend this result for image-infinite probabilistic automata. @cite_24 @cite_10 introduce a few probabilistic operators to derive a probabilistic modal mu-calculus (pMu). A fragment of pMu is proved to characterise (strong) probabilistic simulation in finite-state probabilistic automata.
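The step of relating states to distributions that underlies these simulation relations is usually formalised by lifting a relation R on states to distributions: μ is related to ν iff there is a weight function (coupling) supported inside R whose marginals are μ and ν. The small feasibility check below, written with scipy's linprog purely for illustration, tests exactly that; it is our own sketch, not code from the cited works.

```python
import numpy as np
from scipy.optimize import linprog

def lifted(mu, nu, R):
    """True iff mu is related to nu by the lifting of relation R.

    mu, nu: dicts state -> probability (each summing to 1).
    R:      set of pairs (s, t) meaning s is related to t.
    We search for w(s, t) >= 0 supported on R with row marginals mu
    and column marginals nu.
    """
    pairs = [(s, t) for (s, t) in R if s in mu and t in nu]
    if not pairs:
        return not mu and not nu
    rows, rhs = [], []
    for s, p in mu.items():                      # mass leaving each s
        rows.append([1.0 if ps == s else 0.0 for (ps, _) in pairs])
        rhs.append(p)
    for t, p in nu.items():                      # mass arriving at each t
        rows.append([1.0 if pt == t else 0.0 for (_, pt) in pairs])
        rhs.append(p)
    res = linprog(c=np.zeros(len(pairs)), A_eq=np.array(rows),
                  b_eq=np.array(rhs), bounds=[(0, None)] * len(pairs),
                  method="highs")
    return res.success

if __name__ == "__main__":
    R = {("s1", "t1"), ("s1", "t2"), ("s2", "t2")}
    mu = {"s1": 0.5, "s2": 0.5}
    nu = {"t1": 0.25, "t2": 0.75}
    print(lifted(mu, nu, R))        # True: split s1's mass as 0.25 + 0.25
```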
{ "cite_N": [ "@cite_4", "@cite_7", "@cite_6", "@cite_24", "@cite_27", "@cite_5", "@cite_10" ], "mid": [ "1505927408", "2037507558", "", "2152206644", "2168098347", "138290785", "1548037201" ], "abstract": [ "We give logical characterizations of bisimulation relations for the probabilistic automata of Segala in terms of three Hennessy-Milner style logics. The three logics characterize strong, strong probabilistic and weak probabilistic bisimulation, and differ only in the kind of diamond operator used. Compared to the Larsen and Skou logic for reactive systems, these logics introduce a new operator that measures the probability of the set of states that satisfy a formula. Moreover, the satisfaction relation is defined on measures rather than single states. We rederive previous results of by defining sublogics for Reactive and Alternating Models viewed as restrictions of probabilistic automata. Finally, we identify restrictions on probabilistic automata, weaker than those imposed by the Alternating Models, that preserve the logical characterization of These restrictions require that each state either enables several ordinary transitions or enables a single probabilistic transition.", "Probabilistic automata (PAs) constitute a general framework for modeling and analyzing discrete event systems that exhibit both nondeterministic and probabilistic behavior, such as distributed algorithms and network protocols. The behavior of PAs is commonly defined using schedulers (also called adversaries or strategies), which resolve all nondeterministic choices based on past history. From the resulting purely probabilistic structures, trace distributions can be extracted, whose intent is to capture the observable behavior of a PA. However, when PAs are composed via an (asynchronous) parallel composition operator, a global scheduler may establish strong correlations between the behavior of system components and, for example, resolve nondeterministic choices in one PA based on the outcome of probabilistic choices in the other. It is well known that, as a result of this, the (linear-time) trace distribution precongruence is not compositional for PAs. In his 1995 Ph.D. thesis, Segala has shown that the (branching-time) probabilistic simulation preorder is compositional for PAs. In this paper, we establish that the simulation preorder is, in fact, the coarsest refinement of the trace distribution preorder that is compositional. We prove our characterization result by providing (1) a context of a given PA @math , called the tester, which may announce the state of @math to the outside world, and (2) a specific global scheduler, called the observer, which ensures that the state information that is announced is actually correct. Now when another PA @math is composed with the tester, it may generate the same external behavior as the observer only when it is able to simulate @math in the sense that whenever @math goes to some state @math , @math can go to a corresponding state @math , from which it may generate the same external behavior. Our result shows that probabilistic contexts together with global schedulers are able to exhibit the branching structure of PAs.", "", "In 1992 Wang & Larsen extended the may- and must preorders of De Nicola and Hennessy to processes featuring probabilistic as well as nondeterministic choice. They concluded with two problems that have remained open throughout the years, namely to find complete axiomatisations and alternative characterisations for these preorders. 
This paper solves both problems for finite processes with silent moves. It characterises the may preorder in terms of simulation, and the must preorder in terms of failure simulation. It also gives a characterisation of both preorders using a modal logic. Finally it axiomatises both preorders over a probabilistic version of CSP.", "We extend the trace semantics for labeled transition systems to a randomized model of concurrent computation. The main objective is to obtain a compositional semantics. The role of a trace in the randomized model is played by a probability distribution over traces, called a trace distribution. We show that the preorder based on trace distribution inclusion is not a precongruence, and we build an elementary context, called the principal context, that is sufficiently powerful to characterize the coarsest precongruence that is contained in the trace distribution preorder. Finally, we introduce a notion of a probabilistic forward simulation and we prove that it is sound for the trace distribution precongruence. An important characteristic of probabilistic forward simulations is that they relate states to probability distributions over states.", "Several probabilistic simulation relations for probabili stic systems are defined and evaluated according to two criteria: compositionality and preservation of \"interesting\" properties. Here, the interesting properties of a system are identified with those that are expressible in an untimed version of the Timed Probabilistic concurrent Computation Tree Logic (TPCTL) of Hansson. The definitions are made, and the evaluations carried out, in terms of a general labeled transition system model for concurrent probabilistic computation. The results cover weak simulations, which abstract from internal computation, as well as strong simulations, which do not", "" ] }
1907.03928
2958621236
Probabilistic game structures combine both nondeterminism and stochasticity, where players repeatedly take actions simultaneously to move to the next state of the concurrent game. Probabilistic alternating simulation is an important tool to compare the behaviour of different probabilistic game structures. In this paper, we present a sound and complete modal characterisation of this simulation relation by proposing a new logic based on probabilistic distributions. The logic enables a player to enforce a property in the next state or distribution. Its extension with fixpoints, which also characterises the simulation relation, can express a lot of interesting properties in practical applications.
Metric-based simulation on game structures has been studied in @cite_15 regarding the probability of winning games whose goals are expressed in the quantitative @math -calculus (qMu) @cite_1 . Two states are equivalent if the players can win the same games with the same probability from both states, and the similarity among states can thus be measured. Algorithmic verification complexities are further studied for MDPs and turn-based games @cite_13 .
{ "cite_N": [ "@cite_13", "@cite_15", "@cite_1" ], "mid": [ "197133740", "2038193812", "2091027967" ], "abstract": [ "A method of producing long life precision abrasive articles for use in met al removal operations. In this method abrasive particles are impinged against the inner surfaces of a rotating cylindrical mold by centrifugal force. During rotation of the mold a met allic matrix is deposited electrolytically on the inner surfaces of the mold until a matrix supporting the abrasive particles is formed. The matrix is then removed to receive core material and the core is then machined to the finished dimensions. The method is carried out at low temperatures, thereby avoiding heat distortion of the end product.", "We consider two-player games played over finite state spaces for an infinite number of rounds. At each state, the players simultaneously choose moves; the moves determine a successor state. It is often advantageous for players to choose probability distributions over moves, rather than single moves. Given a goal, for example, reach a target state, the question of winning is thus a probabilistic one: what is the maximal probability of winning from a given state? On these game structures, two fundamental notions are those of equivalences and metrics. Given a set of winning conditions, two states are equivalent if the players can win the same games with the same probability from both states. Metrics provide a bound on the difference in the probabilities of winning across states, capturing a quantitative notion of state similarity. We introduce equivalences and metrics for two-player game structures, and we show that they characterize the difference in probability of winning games whose goals are expressed in the quantitative mu-calculus. The quantitative mu-calculus can express a large set of goals, including reachability, safety, and omega-regular properties. Thus, we claim that our relations and metrics provide the canonical extensions to games, of the classical notion of bisimulation for transition systems. We develop our results both for equivalences and metrics, which generalize bisimulation, and for asymmetrical versions, which generalize simulation.", "The μ-calculus is a powerful tool for specifying and verifying transition systems, including those with both demonic (universal) and angelic (existential) choice; its quantitative generalization qMμ extends to include probabilistic choice.We make two major contributions to the theory of such systems. The first is to show that for a finite-state system, the logical interpretation of qMμ, via fixed points in a domain of real-valued functions into [0, 1], is equivalent to an operational interpretation given as a turn-based gambling game between two players.The second contribution is to show that each player in the gambling game has an optimal memoryless strategy---that is, a strategy which is independent of the game's history, and with which a player can achieve his optimal expected reward however his opponent chooses to play. Moreover, since qMμ is expressive enough to encode stochastic parity games, our result implies the existence of memoryless strategies in that framework, as well.As an additional feature, we include an extensive case study demonstrating the aforementioned duality between games and logic. Among other things, it shows that the use of algorithmic verification techniques is mathematically justified in the practical computation of probabilistic system properties." ] }
1907.03928
2958621236
Probabilistic game structures combine both nondeterminism and stochasticity, where players repeatedly take actions simultaneously to move to the next state of the concurrent game. Probabilistic alternating simulation is an important tool to compare the behaviour of different probabilistic game structures. In this paper, we present a sound and complete modal characterisation of this simulation relation by proposing a new logic based on probabilistic distributions. The logic enables a player to enforce a property in the next state or distribution. Its extension with fixpoints, which also characterises the simulation relation, can express a lot of interesting properties in practical applications.
More recently, algorithmic verification of turn-based and concurrent games has been implemented as an extension of PRISM @cite_23 @cite_21 . The properties can be specified as state formulas, path formulas and reward formulas. The verification procedure requires solving matrix games for concurrent game structures, and it applies value iteration algorithms to approach the goal (similar to @cite_2 @cite_15 ). For unbounded properties, the synthesised strategies are memoryless (though only @math -optimal strategies are guaranteed). Finite-memory strategies are synthesised for bounded properties.
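To give a flavour of the value iteration mentioned above, the toy below computes the maximal probability with which player 1 can force reaching a target set in a small turn-based stochastic game (player-1 states maximise, player-2 states minimise, and each action is a probability distribution over successors). Concurrent games would additionally require solving a matrix game at every state, which we omit; the data structures and example game are ours, not PRISM-games code.

```python
def reach_value(states, target, iters=200):
    """Value iteration for max-min reachability in a turn-based game.

    states: dict name -> (kind, actions) with kind in {"max", "min"} and
            each action a list of (probability, successor) pairs.
    Returns dict name -> probability that player 1 forces reaching target.
    """
    v = {s: (1.0 if s in target else 0.0) for s in states}
    for _ in range(iters):
        new = {}
        for s, (kind, actions) in states.items():
            if s in target:
                new[s] = 1.0
                continue
            vals = [sum(p * v[t] for p, t in act) for act in actions]
            new[s] = max(vals) if kind == "max" else min(vals)
        v = new
    return v

if __name__ == "__main__":
    # s0 (player 1) can gamble (0.5 goal / 0.5 trap) or move to s1;
    # s1 (player 2) then chooses between goal and trap, so it picks trap.
    states = {
        "s0":   ("max", [[(0.5, "goal"), (0.5, "trap")], [(1.0, "s1")]]),
        "s1":   ("min", [[(1.0, "goal")], [(1.0, "trap")]]),
        "goal": ("max", [[(1.0, "goal")]]),
        "trap": ("max", [[(1.0, "trap")]]),
    }
    print(reach_value(states, target={"goal"}))   # value at s0 is 0.5
```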
{ "cite_N": [ "@cite_15", "@cite_21", "@cite_23", "@cite_2" ], "mid": [ "2038193812", "2886584225", "2771041648", "2060155289" ], "abstract": [ "We consider two-player games played over finite state spaces for an infinite number of rounds. At each state, the players simultaneously choose moves; the moves determine a successor state. It is often advantageous for players to choose probability distributions over moves, rather than single moves. Given a goal, for example, reach a target state, the question of winning is thus a probabilistic one: what is the maximal probability of winning from a given state? On these game structures, two fundamental notions are those of equivalences and metrics. Given a set of winning conditions, two states are equivalent if the players can win the same games with the same probability from both states. Metrics provide a bound on the difference in the probabilities of winning across states, capturing a quantitative notion of state similarity. We introduce equivalences and metrics for two-player game structures, and we show that they characterize the difference in probability of winning games whose goals are expressed in the quantitative mu-calculus. The quantitative mu-calculus can express a large set of goals, including reachability, safety, and omega-regular properties. Thus, we claim that our relations and metrics provide the canonical extensions to games, of the classical notion of bisimulation for transition systems. We develop our results both for equivalences and metrics, which generalize bisimulation, and for asymmetrical versions, which generalize simulation.", "We present automatic verification techniques for concurrent stochastic multi-player games (CSGs) with rewards. To express properties of such models, we adapt the temporal logic rPATL (probabilistic alternating-time temporal logic with rewards), originally introduced for the simpler model of turn-based games, which enables quantitative reasoning about the ability of coalitions of players to achieve goals related to the probability of an event or reward measures. We propose and implement a modelling approach and model checking algorithms for property verification and strategy synthesis of CSGs, as an extension of PRISM-games. We evaluate the performance, scalability and applicability of our techniques on case studies from domains such as security, networks and finance, showing that we can analyse systems with probabilistic, cooperative and competitive behaviour between concurrent components, including many scenarios that cannot be analysed with turn-based models.", "PRISM-games is a tool for modelling, verification and strategy synthesis for stochastic multi-player games. These allow models to incorporate both probability, to represent uncertainty, unreliability or randomisation, and game-theoretic aspects, for systems where different entities have opposing objectives. Applications include autonomous transport, security protocols, energy management systems and many more. We provide a detailed overview of the PRISM-games tool, including its modelling and property specification formalisms, and its underlying architecture and implementation. In particular, we discuss some of its key features, which include multi-objective and compositional approaches to verification and strategy synthesis. 
We also discuss the scalability and efficiency of the tool and give an overview of some of the case studies to which it has been applied.", "We consider two-player games played for an infinite number of rounds, with ω-regular winning conditions. The games may be concurrent, in that the players choose their moves simultaneously and independently, and probabilistic, in that the moves determine a probability distribution for the successor state. We introduce quantitative game µ-calculus, and we show that the maximal probability of winning such games can be expressed as the fixpoint formulas in this calculus. We develop the arguments both for deterministic and for probabilistic concurrent games; as a special case, we solve probabilistic turn-based games with ω-regular winning conditions, which was also open. We also characterize the optimality, and the memory requirements, of the winning strategies. In particular, we show that while memoryless strategies suffice for winning games with safety and reachability conditions, Buchi conditions require the use of strategies with infinite memory. The existence of optimal strategies, as opposed to e-optimal, is only guaranteed in games with safety winning conditions." ] }
1907.03993
2961402400
Many complex networks in the real world have community structures -- groups of well-connected nodes with important functional roles. It has been well recognized that the identification of communities bears numerous practical applications. While existing approaches mainly apply statistical or graph theoretical combinatorial methods for community detection, in this paper, we present a novel geometric approach which enables us to borrow powerful classical geometric methods and properties. By considering networks as geometric objects and communities in a network as a geometric decomposition, we apply curvature and discrete Ricci flow, which have been used to decompose smooth manifolds with astonishing successes in mathematics, to break down communities in networks. We tested our method on networks with ground-truth community structures, and experimentally confirmed the effectiveness of this geometric approach.
Ricci curvature on general spaces without Riemannian structures has been studied recently, in the work of Ollivier @cite_67 @cite_12 on Markov chains, and of Bakry and Emery @cite_13 , Lott and Villani @cite_24 , and Bonciocat and Sturm @cite_62 @cite_3 on general metric spaces. Ricci curvature based on optimal transportation theory, proposed by Ollivier (Ollivier-Ricci curvature) @cite_67 @cite_12 , has become a popular topic and has been applied in various fields -- for distinguishing cancer-related genes from normal genes @cite_37 , for studying financial market fragility @cite_19 , for understanding phylogenetic trees @cite_60 , and for detecting network backbone and congestion @cite_4 @cite_36 @cite_5 . In @cite_15 , Pal et al. proposed to use Jaccard coefficients as a proxy for Ollivier-Ricci curvature. Besides, discrete Ricci curvature has also been defined on cell complexes, as proposed by Forman @cite_54 (Forman curvature or Forman-Ricci curvature). Forman curvature is based on the graph Laplacian. It is easier and faster to compute than Ollivier-Ricci curvature, but is less geometrical, and it is more suitable for large scale network analysis @cite_46 @cite_6 @cite_59 @cite_14 and image processing @cite_51 . We have also experimented with Forman curvature for community detection; the results were less satisfying, so here we focus on Ollivier-Ricci curvature.
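To show what computing Ollivier-Ricci curvature on a network actually involves, the sketch below evaluates κ(x, y) = 1 − W1(m_x, m_y)/d(x, y) for one edge, taking m_v to be the uniform distribution over the neighbours of v (one simple convention; the weighting used in the paper may differ) and obtaining the Wasserstein distance W1 from a small transportation LP. The graph choice and helper names are illustrative assumptions.

```python
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def ollivier_ricci(G, x, y):
    """kappa(x, y) = 1 - W1(m_x, m_y) / d(x, y) for one pair of nodes,
    where m_v is uniform over the neighbours of v and W1 is computed
    by an optimal-transport linear program."""
    src, dst = list(G.neighbors(x)), list(G.neighbors(y))
    mu = np.full(len(src), 1.0 / len(src))
    nu = np.full(len(dst), 1.0 / len(dst))
    # Ground costs are shortest-path distances in the graph.
    cost = np.array([[nx.shortest_path_length(G, a, b) for b in dst]
                     for a in src], dtype=float)
    n_s, n_d = cost.shape
    A_eq, b_eq = [], []
    for i in range(n_s):                 # each source ships all its mass
        row = np.zeros(n_s * n_d)
        row[i * n_d:(i + 1) * n_d] = 1.0
        A_eq.append(row); b_eq.append(mu[i])
    for j in range(n_d):                 # each sink receives its mass
        row = np.zeros(n_s * n_d)
        row[j::n_d] = 1.0
        A_eq.append(row); b_eq.append(nu[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * (n_s * n_d), method="highs")
    return 1.0 - res.fun / nx.shortest_path_length(G, x, y)

if __name__ == "__main__":
    G = nx.karate_club_graph()
    print(ollivier_ricci(G, 0, 1))
```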
{ "cite_N": [ "@cite_36", "@cite_54", "@cite_3", "@cite_5", "@cite_15", "@cite_67", "@cite_4", "@cite_60", "@cite_46", "@cite_37", "@cite_6", "@cite_19", "@cite_12", "@cite_13", "@cite_62", "@cite_14", "@cite_24", "@cite_59", "@cite_51" ], "mid": [ "2483531764", "2033894891", "", "", "2792707137", "1967759674", "2963340087", "2962693999", "2780222030", "", "", "", "", "", "1972708599", "2901370829", "1968781546", "", "2129775270" ], "abstract": [ "This paper proceeds from the premise that the topology of interference constrained wireless networks heavily impacts their node-to-node delay, routing energy, and capacity region. We quantitatively analyze how the discrete Ollivier-Ricci curvature of a network affects the performance metrics of several routing protocols. Since different protocols are optimal relative to different metrics under different topologies, an adaptive control system is proposed that identifies the topology curvature and selects the best protocol under current circumstances subject to user needs. Also, we analyze how sensitive the four routing protocols (Heat Diffusion, Dirichlet, Back Pressure and Shortest Path Routing) under examination are to varying topological environment, as it would commonly be encountered in wireless networks.", "", "", "", "The discrete version of the Ollivier-Ricci (OR) curvature, applicable to networks, has recently found utility in diverse fields. OR curvature requires solving an optimal mass transport problem for each edge, which can be computationally expensive for large and or dense networks. We propose two alternative proxies of curvature to OR that are motivated by the Jaccard index and are demonstrably less computationally intensive. Jaccard curvature (JC) is a simple shift and scaling of the Jaccard index that captures the overlap of edge node neighborhoods. Generalized Jaccard curvature (gJC) captures the shortest path distances in a mass exchange problem. We study the goodness of approximation between the proposed curvatures and an alternative metric, Forman-Ricci curvature, with OR curvature for several network models and real networks. Our results suggest that the gJC exhibits a reasonably good fit to the OR curvature for a wide range of networks, while the JC is shown to be a good proxy only for certain scenarios.", "Publisher Summary This chapter describes Seifert Fibered Spaces in 3-Manifolds. There exist finitely many disjoint, non-contractible, pairwise non-parallel, embedded 2-spheres in M, whose homotopy classes generate π2 (M) as a π2 (M)-module; and modulo the Poincare conjecture, these 2-spheres are unique up to ambient homeomorphism. Thus, all singular 2-spheres in M, that is, maps of S2 into M, may be described, up to homotopy, in terms of a geometric picture in M. The strong version of the sphere theorem presented in the chapter gives a great deal of information about fundamental groups of compact 3-manifolds, for example that they are finite free products of torsion-free groups and finite groups. It also provides in a slightly refined version a reduction of the classification problem for compact, oriented 3-manifolds to the classification problem for compact, irreducible, 3-manifolds.", "Analysis of Internet topologies has shown that the Internet topology has negative curvature, measured by Gromov's “thin triangle condition”, which is tightly related to core congestion and route reliability. In this work we analyze the discrete Ricci curvature of the Internet, defined by Ollivier [1], [2], etc. 
Ricci curvature measures whether local distances diverge or converge. It is a more local measure which allows us to understand the distribution of curvatures in the network. We show by various Internet data sets that the distribution of Ricci cuvature is spread out, suggesting the network topology to be non-homogenous. We also show that the Ricci curvature has interesting connections to both local measures such as node degree and clustering coefficient, global measures such as betweenness centrality and network connectivity, as well as auxilary attributes such as geographical distances. These observations add to the richness of geometric structures in complex network theory.", "Statistical phylogenetic inference methods use tree rearrangement operations such as subtree–prune–regraft (SPR) to perform Markov chain Monte Carlo (MCMC) across tree topologies. The structure of the graph induced by tree rearrangement operations is an important determinant of the mixing properties of MCMC, motivating the study of the underlying SPR graph in greater detail. In this paper, we investigate the SPR graph of rooted trees (rSPR graph) in a new way: by calculating the Ricci–Ollivier curvature with respect to uniform and Metropolis–Hastings random walks. This value quantifies the degree to which a pair of random walkers from specified points move towards each other; negative curvature means that they move away from one another on average, while positive curvature means that they move towards each other. In order to calculate this curvature, we develop fast new algorithms for rSPR graph computation. We then develop formulas characterizing how the number of rSPR neighbors of a tree changes after an rSPR operation is applied to that tree. These give bounds on the curvature, as well as a flatness-in-the-limit theorem indicating that paths of small topology changes are easy to traverse. However, we find that large topology changes (i.e. moving a large subtree) give pairs of trees with negative curvature. We show using simulation that mean access time distributions depend on distance, degree, and curvature, demonstrating the relevance of these results to stochastic tree search.", "We have performed an empirical comparison of two distinct notions of discrete Ricci curvature for graphs or networks, namely, the Forman-Ricci curvature and Ollivier-Ricci curvature. Importantly, these two discretizations of the Ricci curvature were developed based on different properties of the classical smooth notion, and thus, the two notions shed light on different aspects of network structure and behavior. Nevertheless, our extensive computational analysis in a wide range of both model and real-world networks shows that the two discretizations of Ricci curvature are highly correlated in many networks. Moreover, we show that if one considers the augmented Forman-Ricci curvature which also accounts for the two-dimensional simplicial complexes arising in graphs, the observed correlation between the two discretizations is even higher, especially, in real networks. Besides the potential theoretical implications of these observations, the close relationship between the two discretizations has practical implications whereby Forman-Ricci curvature can be employed in place of Ollivier-Ricci curvature for faster computation in larger real-world networks whenever coarse analysis suffices.", "", "", "", "", "", "Abstract We introduce and study rough (approximate) lower curvature bounds for discrete spaces and for graphs. 
This notion agrees with the one introduced in [J. Lott, C. Villani, Ricci curvature for metric-measure spaces via optimal transport, Ann. of Math. 169 (2009), in press] and [K.T. Sturm, On the geometry of metric measure spaces. I, Acta Math. 196 (2006) 65–131], in the sense that the metric measure space which is approximated by a sequence of discrete spaces with rough curvature ⩾ K will have curvature ⩾ K in the sense of [J. Lott, C. Villani, Ricci curvature for metric-measure spaces via optimal transport, Ann. of Math. 169 (2009), in press; K.T. Sturm, On the geometry of metric measure spaces. I, Acta Math. 196 (2006) 65–131]. Moreover, in the converse direction, discretizations of metric measure spaces with curvature ⩾ K will have rough curvature ⩾ K . We apply our results to concrete examples of homogeneous planar graphs.", "", "We dene a notion of a measured length space X having nonnegative N-Ricci curvature, for N 2 [1;1), or having1-Ricci curvature bounded below byK, forK2 R. The denitions are in terms of the displacement convexity of certain functions on the associated Wasserstein metric space P2(X) of probability measures. We show that these properties are preserved under measured Gromov-Hausdor limits. We give geometric and analytic consequences. This paper has dual goals. One goal is to extend results about optimal transport from the setting of smooth Riemannian manifolds to the setting of length spaces. A second goal is to use optimal transport to give a notion for a measured length space to have Ricci curvature bounded below. We refer to [11] and [44] for background material on length spaces and optimal transport, respectively. Further bibliographic notes on optimal transport are in Appendix F. In the present introduction we motivate the questions that we address and we state the main results. To start on the geometric side, there are various reasons to try to extend notions of curvature from smooth Riemannian manifolds to more general spaces. A fairly general setting is that of length spaces, meaning metric spaces (X;d) in which the distance between two points equals the inmum of the lengths of curves joining the points. In the rest of this introduction we assume that X is a compact length space. Alexandrov gave a good notion of a length space having bounded below by K\", with K a real number, in terms of the geodesic triangles in X. In the case of a Riemannian manifold M with the induced length structure, one recovers the Riemannian notion of having sectional curvature bounded below by K. Length spaces with Alexandrov curvature bounded below by K behave nicely with respect to the GromovHausdor topology on compact metric spaces (modulo isometries); they form", "", "A new Combinatorial Ricci curvature and Laplacian oper- ators for grayscale images are introduced and tested on 2D synthetic, natural and medical images. Analogue formulae for voxels are also ob- tained. These notions are based upon more general concepts developed by R. Forman. Further applications, in particular a fltting Ricci ∞ow, are discussed." ] }
1907.03993
2961402400
Many complex networks in the real world have community structures -- groups of well-connected nodes with important functional roles. It has been well recognized that the identification of communities bears numerous practical applications. While existing approaches mainly apply statistical or graph theoretical combinatorial methods for community detection, in this paper, we present a novel geometric approach which enables us to borrow powerful classical geometric methods and properties. By considering networks as geometric objects and communities in a network as a geometric decomposition, we apply curvature and discrete Ricci flow, which have been used to decompose smooth manifolds with astonishing successes in mathematics, to break down communities in networks. We tested our method on networks with ground-truth community structures, and experimentally confirmed the effectiveness of this geometric approach.
Unlike discrete Ricci curvature, discrete Ricci flow has not been studied as much. Chow and Luo introduced the first discrete Ricci flow on surfaces @cite_65 . In @cite_32 , Weber et al. suggested applying Forman-Ricci flow for anomaly detection in complex networks. In @cite_38 , Ni et al. used the Ollivier-Ricci curvature flow to compute the Ricci flow metric as edge weights for the problem of network alignment (noisy graph matching).
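As a rough illustration of how such a flow is typically discretized (a sketch of the general form only, not the exact update rule of the cited works), edge weights can be evolved so that negatively curved edges, which tend to bridge communities, are stretched while positively curved edges inside communities are shrunk:

```latex
% Illustrative discrete Ricci flow step on edge weights (assumed generic form):
% d^{(t)} is the shortest-path distance under the current weights,
% \kappa^{(t)} the Ollivier-Ricci curvature, and \epsilon a small step size.
w^{(t+1)}(x,y) \;=\; \bigl(1 - \epsilon\,\kappa^{(t)}(x,y)\bigr)\, d^{(t)}(x,y)
```

After a few such iterations, thresholding the edges whose weights have grown large is one natural way to cut a network into communities.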
{ "cite_N": [ "@cite_38", "@cite_65", "@cite_32" ], "mid": [ "2891096760", "2143075606", "2963757685" ], "abstract": [ "In this paper, we consider the problem of approximately aligning matching two graphs. Given two graphs (G_ 1 =(V_ 1 ,E_ 1 ) ) and (G_ 2 =(V_ 2 ,E_ 2 ) ), the objective is to map nodes (u, v G_1 ) to nodes (u',v' G_2 ) such that when u, v have an edge in (G_1 ), very likely their corresponding nodes (u', v' ) in (G_2 ) are connected as well. This problem with subgraph isomorphism as a special case has extra challenges when we consider matching complex networks exhibiting the small world phenomena. In this work, we propose to use ‘Ricci flow metric’, to define the distance between two nodes in a network. This is then used to define similarity of a pair of nodes in two networks respectively, which is the crucial step of network alignment. Specifically, the Ricci curvature of an edge describes intuitively how well the local neighborhood is connected. The graph Ricci flow uniformizes discrete Ricci curvature and induces a Ricci flow metric that is insensitive to node edge insertions and deletions. With the new metric, we can map a node in (G_1 ) to a node in (G_2 ) whose distance vector to only a few preselected landmarks is the most similar. The robustness of the graph metric makes it outperform other methods when tested on various complex graph models and real world network data sets (Emails, Internet, and protein interaction networks) (The source code of computing Ricci curvature and Ricci flow metric are available: https: github.com saibalmars GraphRicciCurvature).", "We show that the analogue of Hamilton’s Ricci flow in the combinatorial setting produces solutions which converge exponentially fast to Thurston’s circle packing on surfaces. As a consequence, a new proof of Thurston’s existence of circle packing theorem is obtained. As another consequence, Ricci flow suggests a new algorithm to find circle packings.", "" ] }
1907.03993
2961402400
Many complex networks in the real world have community structures -- groups of well-connected nodes with important functional roles. It has been well recognized that the identification of communities bears numerous practical applications. While existing approaches mainly apply statistical or graph theoretical combinatorial methods for community detection, in this paper, we present a novel geometric approach which enables us to borrow powerful classical geometric methods and properties. By considering networks as geometric objects and communities in a network as a geometric decomposition, we apply curvature and discrete Ricci flow, which have been used to decompose smooth manifolds with astonishing successes in mathematics, to break down communities in networks. We tested our method on networks with ground-truth community structures, and experimentally confirmed the effectiveness of this geometric approach.
Taking a geometric view of complex networks is an emerging trend, as shown in a number of recent works. For example, the community structure of a network has been used as a coarse version of its embedding in a hidden space with hyperbolic geometry @cite_29 . Topological data analysis, a characteristically geometric approach to data analysis, has been applied to the analysis of complex systems @cite_8 .
{ "cite_N": [ "@cite_29", "@cite_8" ], "mid": [ "2888988444", "2896791121" ], "abstract": [ "We show that the community structure of a network can be used as a coarse version of its embedding in a hidden space with hyperbolic geometry. The finding emerges from a systematic analysis of several real-world and synthetic networks. We take advantage of the analogy for reinterpreting results originally obtained through network hyperbolic embedding in terms of community structure only. First, we show that the robustness of a multiplex network can be controlled by tuning the correlation between the community structures across different layers. Second, we deploy an efficient greedy protocol for network navigability that makes use of routing tables based on community structure.", "We provide a short introduction to the field of topological data analysis and discuss its possible relevance for the study of complex systems. Topological data analysis provides a set of tools to characterise the shape of data, in terms of the presence of holes or cavities between the points. The methods, based on notion of simplicial complexes, generalise standard network tools by naturally allowing for many-body interactions and providing results robust under continuous deformations of the data. We present strengths and weaknesses of current methods, as well as a range of empirical studies relevant to the field of complex systems, before identifying future methodological challenges to help understand the emergence of collective phenomena." ] }
1907.03956
2958276832
This paper presents planning algorithms for a robotic manipulator with a fixed base in order to grasp a target object in cluttered environments. We consider a configuration of objects in a confined space with a high density so no collision-free path to the target exists. The robot must relocate some objects to retrieve the target while avoiding collisions. For fast completion of the retrieval task, the robot needs to compute a plan optimizing an appropriate objective value directly related to the execution time of the relocation plan. We propose planning algorithms that aim to minimize the number of objects to be relocated. Our objective value is appropriate for the object retrieval task because grasping and releasing objects often dominate the total running time. In addition to the algorithm working in fully known and static environments, we propose algorithms that can deal with uncertain and dynamic situations incurred by occluded views. The proposed algorithms are shown to be complete and run in polynomial time. Our methods reduce the total running time significantly compared to a baseline method (e.g., 25.1 of reduction in a known static environment with 10 objects
The work presented in @cite_13 proposes a planning framework to grasp a target in cluttered and known environments. It removes the obstacles that lie on the shortest path of the end-effector to the target (like Fig. -L). Although this method finds the distance-optimal path, some obstacles may be removed unnecessarily since the objective is not the number of obstacles to be removed. Other works, such as @cite_12 @cite_17 @cite_20 , also do not directly optimize the relocation plan but are mainly concerned with its validity.
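To convey the selection criterion behind such path-based approaches (a simplified 2D sketch of our own, not the cited planner), one can collect the obstacles whose footprints intersect the straight segment from the end-effector to the target:

```python
# Simplified 2D illustration: which circular obstacles intersect the straight-line
# path from the end-effector to the target? (A real planner would use the actual
# shortest path and 3D geometry; this only conveys the selection criterion.)
import numpy as np

def blocking_obstacles(ee, target, obstacles):
    """obstacles: list of (center (x, y), radius). Returns indices that block the segment."""
    ee, target = np.asarray(ee, float), np.asarray(target, float)
    seg = target - ee
    blocked = []
    for idx, (center, radius) in enumerate(obstacles):
        c = np.asarray(center, float)
        # distance from the obstacle center to the segment ee -> target
        t = np.clip(np.dot(c - ee, seg) / np.dot(seg, seg), 0.0, 1.0)
        closest = ee + t * seg
        if np.linalg.norm(c - closest) <= radius:
            blocked.append(idx)
    return blocked

print(blocking_obstacles((0, 0), (1, 0), [((0.5, 0.05), 0.1), ((0.5, 0.5), 0.1)]))  # -> [0]
```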
{ "cite_N": [ "@cite_20", "@cite_13", "@cite_12", "@cite_17" ], "mid": [ "2963566599", "1989021449", "2141841102", "1608892862" ], "abstract": [ "Planning motions to grasp an object in cluttered and uncertain environments is a challenging task, particularly when a collision-free trajectory does not exist and objects obstructing the way are required to be carefully grasped and moved out. This letter takes a different approach and proposes to address this problem by using a randomized physics-based motion planner that permits robot-object and object-object interactions. The main idea is to avoid an explicit high-level reasoning of the task by providing the motion planner with a physics engine to evaluate possible complex multibody dynamical interactions. The approach is able to solve the problem in complex scenarios, also considering uncertainty in the objects' pose and in the contact dynamics. The work enhances the state validity checker, the control sampler, and the tree exploration strategy of a kinodynamic motion planner called KPIECE. The enhanced algorithm, called p-KPIECE, has been validated in simulation and with real experiments. The results have been compared with an ontological physics-based motion planner and with task and motion planning approaches, resulting in a significant improvement in terms of planning time, success rate, and quality of the solution path.", "Robotic manipulation systems suffer from two main problems in unstructured human environments: uncertainty and clutter. We introduce a planning framework addressing these two issues. The framework plans rearrangement of clutter using non-prehensile actions, such as pushing. Pushing actions are also used to manipulate object pose uncertainty. The framework uses an action library that is derived analytically from the mechanics of pushing and is provably conservative. The framework reduces the problem to one of combinatorial search, and demonstrates planning times on the order of seconds. With the extra functionality, our planner succeeds where traditional grasp planners fail, and works under high uncertainty by utilizing the funneling effect of pushing. We demonstrate our results with experiments in simulation and on HERB, a robotic platform developed at the Personal Robotics Lab at Carnegie Mellon University.", "The need for combined task and motion planning in robotics is well understood. Solutions to this problem have typically relied on special purpose, integrated implementations of task planning and motion planning algorithms. We propose a new approach that uses off-the-shelf task planners and motion planners and makes no assumptions about their implementation. Doing so enables our approach to directly build on, and benefit from, the vast literature and latest advances in task planning and motion planning. It uses a novel representational abstraction and requires only that failures in computing a motion plan for a high-level action be identifiable and expressible in the form of logical predicates at the task level. We evaluate the approach and illustrate its robustness through a number of experiments using a state-of-the-art robotics simulator and a PR2 robot. 
These experiments show the system accomplishing a diverse set of challenging tasks such as taking advantage of a tray when laying out a table for dinner and picking objects from cluttered environments where other objects need to be re-arranged before the target object can be reached.", "In this work we present a fast kinodynamic RRT-planner that uses dynamic nonprehensile actions to rearrange cluttered environments. In contrast to many previous works, the presented planner is not restricted to quasi-static interactions and monotonicity. Instead the results of dynamic robot actions are predicted using a black box physics model. Given a general set of primitive actions and a physics model, the planner randomly explores the configuration space of the environment to find a sequence of actions that transform the environment into some goal configuration." ] }
1907.03956
2958276832
This paper presents planning algorithms for a robotic manipulator with a fixed base in order to grasp a target object in cluttered environments. We consider a configuration of objects in a confined space with a high density so no collision-free path to the target exists. The robot must relocate some objects to retrieve the target while avoiding collisions. For fast completion of the retrieval task, the robot needs to compute a plan optimizing an appropriate objective value directly related to the execution time of the relocation plan. We propose planning algorithms that aim to minimize the number of objects to be relocated. Our objective value is appropriate for the object retrieval task because grasping and releasing objects often dominate the total running time. In addition to the algorithm working in fully known and static environments, we propose algorithms that can deal with uncertain and dynamic situations incurred by occluded views. The proposed algorithms are shown to be complete and run in polynomial time. Our methods reduce the total running time significantly compared to a baseline method (e.g., 25.1 of reduction in a known static environment with 10 objects
Some recent work considers partially known environments. The algorithm proposed in @cite_7 computes a sequence of objects to be removed while minimizing the expected time to find a hidden target. The strength of this work is the mathematical formalization of the search and grasp planning problem. However, the algorithm has exponential running time and thus may not be practically useful in environments with densely packed objects. In the experiment with five objects, planning takes longer than 25 sec. Another work @cite_18 finds a sequence of actions of a mobile manipulator that minimizes the expected time to reveal all possible hidden target poses. This work defines admissible costs for its A @math search, but planning takes a long time owing to the high branching factor of the search (e.g., 40 sec with five objects). Several approaches @cite_1 @cite_9 model the problem as a Partially Observable Markov Decision Process (POMDP), but they do not seem to scale even to moderate-sized instances.
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_1", "@cite_7" ], "mid": [ "2561079189", "1600604169", "2419360507", "2084006613" ], "abstract": [ "We study the problem of objects search in clutter. In cluttered environments, partial occlusion among objects prevents vision systems from correctly recognizing objects. Hence, the agent needs to move objects around to gather information, which helps reduce uncertainty in perception. At the same time, the agent needs to minimize the efforts of moving objects to reduce the time required to complete the task. We model the problem as a Partially Observable Markov Decision Process (POMDP), formulating it as a problem of optimal decision making under uncertainty. By exploiting spatial constraints, we are able to adapt online POMDP planners to handle objects search problems with large state space and action space. Experiments show that the POMDP solution outperforms greedy approaches, especially in cases where multi-step manipulation is required.", "Object search is a fundamental ability for a service robot to provide higher level services. We focus on object search in an environment with limited free space to place objects and constrained viewpoints to observe the environment, such as a shelf or a cupboard. We propose an object search planner based on A* search algorithm with tree node sampling. The proposed approach also combines visual and arm manipulation search. In other words, the robot searches occluded target object by either repositioning one of the accessible object with its arm or moving its platform to view the environment from a different pose. We evaluate the proposed approach with experiment performed by real robot in the scenario which objects may occlude or block access to one another.", "We address the problem of a mobile manipulation robot searching for an object in a cluttered domain that is populated with an unknown number of objects in an unknown arrangement. The robot must move around its environment, looking in containers, moving occluding objects to improve its view, and reasoning about collocation of objects of different types, all in service of finding a desired object. The key contribution in reasoning is a Markov-chain Monte Carlo (MCMC) method for drawing samples of the arrangements of objects in an occluded container, conditioned on previous observations of other objects as well as spatial constraints. The key contribution in planning is a receding-horizon forward search in the space of distributions over arrangements (including number and type) of objects in the domain; to maintain tractability the search is formulated in a model that abstracts both the observations and actions available to the robot. The strategy is shown empirically to improve upon a baseline systematic search strategy, and sometimes outperforms a method from previous work.", "We investigate the problem of a robot searching for an object. This requires reasoning about both perception and manipulation: certain objects are moved because the target may be hidden behind them and others are moved because they block the manipulator's access to other objects. We contribute a formulation of the object search by manipulation problem using visibility and accessibility relations between objects. We also propose a greedy algorithm and show that it is optimal under certain conditions. We propose a second algorithm which is optimal under all conditions. 
This algorithm takes advantage of the structure of the visibility and accessibility relations between objects to quickly generate optimal plans. Finally, we demonstrate an implementation of both algorithms on a real robot using a real object detection system." ] }
1907.03956
2958276832
This paper presents planning algorithms for a robotic manipulator with a fixed base in order to grasp a target object in cluttered environments. We consider a configuration of objects in a confined space with a high density so no collision-free path to the target exists. The robot must relocate some objects to retrieve the target while avoiding collisions. For fast completion of the retrieval task, the robot needs to compute a plan optimizing an appropriate objective value directly related to the execution time of the relocation plan. We propose planning algorithms that aim to minimize the number of objects to be relocated. Our objective value is appropriate for the object retrieval task because grasping and releasing objects often dominate the total running time. In addition to the algorithm working in fully known and static environments, we propose algorithms that can deal with uncertain and dynamic situations incurred by occluded views. The proposed algorithms are shown to be complete and run in polynomial time. Our methods reduce the total running time significantly compared to a baseline method (e.g., 25.1 of reduction in a known static environment with 10 objects
Among these, no work has formulated the problem as an optimization problem whose objective value is the number of obstacles to be relocated. The methods presented in these works require substantial planning time in clutter. The examples that we will consider are significantly more cluttered, so we need faster planning algorithms. In our own work @cite_8 , we present a fast algorithm for relocation in known environments by employing a collision avoidance method called Vector Field Histogram+ (VFH+) @cite_10 . Although it shows good performance in dense clutter, it does not aim to find a globally optimal solution since VFH+ is a local planning method that focuses on the vicinity of the target rather than the entire space. The present work sets out to achieve the global optimum and considers partially known environments.
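As a rough sketch of why a histogram-based local method remains local (a drastic simplification of VFH+ @cite_10 shown only for intuition; real VFH+ additionally accounts for robot width, hysteresis, and cost terms), one can bin obstacles into angular sectors around the robot and steer toward a free sector closest to the goal direction:

```python
# Drastically simplified VFH+-style direction selection: build a polar obstacle-density
# histogram around the robot, mask "blocked" sectors, and steer toward the free sector
# closest to the goal direction. The decision only looks at the local neighborhood,
# which is why such a method cannot guarantee a globally optimal relocation plan.
import numpy as np

def vfh_like_direction(robot, goal, obstacles, n_sectors=36, block_threshold=1.0):
    robot, goal = np.asarray(robot, float), np.asarray(goal, float)
    hist = np.zeros(n_sectors)
    for obs in obstacles:
        d = np.asarray(obs, float) - robot
        r = np.linalg.norm(d)
        sector = int(np.arctan2(d[1], d[0]) % (2 * np.pi) / (2 * np.pi) * n_sectors)
        hist[sector] += 1.0 / max(r, 1e-6)          # closer obstacles weigh more
    free = np.where(hist < block_threshold)[0]
    if len(free) == 0:
        return None                                  # no locally free direction
    goal_dir = np.arctan2(*(goal - robot)[::-1]) % (2 * np.pi)
    goal_sector = goal_dir / (2 * np.pi) * n_sectors
    # pick the free sector whose center is angularly closest to the goal sector
    diff = np.minimum(np.abs(free + 0.5 - goal_sector),
                      n_sectors - np.abs(free + 0.5 - goal_sector))
    best = free[np.argmin(diff)]
    return (best + 0.5) * 2 * np.pi / n_sectors      # heading in radians

# steers just below the +x axis to skirt the two obstacles sitting on the direct path
print(vfh_like_direction((0, 0), (1, 0), [(0.5, 0.0), (0.5, 0.1)]))
```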
{ "cite_N": [ "@cite_10", "@cite_8" ], "mid": [ "2114476723", "2968156685" ], "abstract": [ "This paper presents further improvements on the earlier vector field histogram (VFH) method developed by Borenstein-Koren (1991) for real-time mobile robot obstacle avoidance. The enhanced method, called VFH+, offers several improvements that result in smoother robot trajectories and greater reliability. VFH+ reduces some of the parameter tuning of the original VFH method by explicitly compensating for the robot width. Also added in VFH+ is a better approximation of the mobile robot trajectory, which results in higher reliability.", "We present an algorithm that produces a plan for relocating obstacles in order to grasp a target in clutter by a robotic manipulator without collisions. We consider configurations where objects are densely populated in a constrained and confined space. Thus, there exists no collision-free path for the manipulator without relocating obstacles. Since the problem of planning for object rearrangement has shown to be NP-hard, it is difficult to perform manipulation tasks efficiently which could frequently happen in service domains (e.g., taking out a target from a shelf or a fridge).Our proposed planner employs a collision avoidance scheme which has been widely used in mobile robot navigation. The planner determines an obstacle to be removed quickly in real time. It also can deal with dynamic changes in the configuration (e.g., changes in object poses). Our method is shown to be complete and runs in polynomial time. Experimental results in a realistic simulated environment show that our method improves up to 31 of the execution time compared to other competitors." ] }
1907.04068
2961307394
We consider the hypothesis testing problem of detecting conditional dependence, with a focus on high-dimensional feature spaces. Our contribution is a new test statistic based on samples from a generative adversarial network designed to approximate directly a conditional distribution that encodes the null hypothesis, in a manner that maximizes power (the rate of true negatives). We show that such an approach requires only that density approximation be viable in order to ensure that we control type I error (the rate of false positives); in particular, no assumptions need to be made on the form of the distributions or feature dependencies. Using synthetic simulations with high-dimensional data we demonstrate significant gains in power over competing methods. In addition, we illustrate the use of our test to discover causal markers of disease in genetic data.
A recently favoured line of research has characterized conditional independence in a reproducing kernel Hilbert space (RKHS) @cite_26 @cite_24 . The dependence between variables is assessed by considering all moments of the joint distributions, which potentially captures finer differences between them. @cite_26 uses a measure of partial association in an RKHS to define the KCIT test, with provable asymptotic control of the type I error in the number of samples. Numerous extensions have been proposed to remedy the high computational cost, such as @cite_5 , which approximates the KCIT with random Fourier features and is thus significantly faster. However, the limiting distribution of the test statistic becomes harder to estimate accurately in practice @cite_26 , and different bandwidth parameters give widely divergent results as dimensionality grows @cite_29 , which affects power.
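As a concrete example of the random-Fourier-feature idea behind such fast approximations (a generic sketch of the technique, not the RCIT implementation), a Gaussian kernel can be replaced by an inner product of explicit low-dimensional features:

```python
# Random Fourier features approximating the Gaussian kernel k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)).
# Kernel-based CI tests can then replace expensive Gram-matrix operations with these
# explicit features, which is the idea behind fast approximations such as RCIT.
import numpy as np

def random_fourier_features(X, n_features=200, sigma=1.0, seed=None):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))   # frequencies ~ N(0, 1/sigma^2)
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)        # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = random_fourier_features(X, n_features=5000, sigma=1.0, seed=0)
approx = Z @ Z.T                                               # ~ exp(-||x - x'||^2 / (2 sigma^2))
exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)
print(np.max(np.abs(approx - exact)))                          # small approximation error
```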
{ "cite_N": [ "@cite_24", "@cite_5", "@cite_29", "@cite_26" ], "mid": [ "609741286", "2601307951", "2964340499", "2951039901" ], "abstract": [ "Determining conditional independence (CI) relationships between random variables is a challenging but important task for problems such as Bayesian network learning and causal discovery. We propose a new kernel CI test that uses a single, learned permutation to convert the CI test problem into an easier two-sample test problem. The learned permutation leaves the joint distribution unchanged if and only if the null hypothesis of CI holds. Then, a kernel two-sample test, which has been studied extensively in prior work, can be applied to a permuted and an unpermuted sample to test for CI. We demonstrate that the test (1) easily allows the incorporation of prior knowledge during the permutation step, (2) has power competitive with state-of-the-art kernel CI tests, and (3) accurately estimates the null distribution of the test statistic, even as the dimensionality of the conditioning variable grows.", "Constraint-based causal discovery (CCD) algorithms require fast and accurate conditional independence (CI) testing. The Kernel Conditional Independence Test (KCIT) is currently one of the most popular CI tests in the non-parametric setting, but many investigators cannot use KCIT with large datasets because the test scales cubicly with sample size. We therefore devise two relaxations called the Randomized Conditional Independence Test (RCIT) and the Randomized conditional Correlation Test (RCoT) which both approximate KCIT by utilizing random Fourier features. In practice, both of the proposed tests scale linearly with sample size and return accurate p-values much faster than KCIT in the large sample size context. CCD algorithms run with RCIT or RCoT also return graphs at least as accurate as the same algorithms run with KCIT but with large reductions in run time.", "This paper is about two related decision theoretic problems, nonparametric two-sample testing and independence testing. There is a belief that two recently proposed solutions, based on kernels and distances between pairs of points, behave well in high-dimensional settings. We identify different sources of misconception that give rise to the above belief. Specifically, we differentiate the hardness of estimation of test statistics from the hardness of testing whether these statistics are zero or not, and explicitly discuss a notion of \"fair\" alternative hypotheses for these problems as dimension increases. We then demonstrate that the power of these tests actually drops polynomially with increasing dimension against fair alternatives. We end with some theoretical insights and shed light on the median heuristic for kernel bandwidth selection. Our work advances the current understanding of the power of modern nonpara-metric hypothesis tests in high dimensions.", "Conditional independence testing is an important problem, especially in Bayesian network learning and causal discovery. Due to the curse of dimensionality, testing for conditional independence of continuous variables is particularly challenging. We propose a Kernel-based Conditional Independence test (KCI-test), by constructing an appropriate test statistic and deriving its asymptotic distribution under the null hypothesis of conditional independence. The proposed method is computationally efficient and easy to implement. 
Experimental results show that it outperforms other methods, especially when the conditioning set is large or the sample size is not very large, in which case other methods encounter difficulties." ] }
1907.04214
2956996691
An optimal feedback controller for a given Markov decision process (MDP) can in principle be synthesized by value or policy iteration. However, if the system dynamics and the reward function are unknown, a learning agent must discover an optimal controller via direct interaction with the environment. Such interactive data gathering commonly leads to divergence towards dangerous or uninformative regions of the state space unless additional regularization measures are taken. Prior works proposed bounding the information loss measured by the Kullback–Leibler (KL) divergence at every policy improvement step to eliminate instability in the learning dynamics. In this paper, we consider a broader family of f-divergences, and more concretely α -divergences, which inherit the beneficial property of providing the policy improvement step in closed form at the same time yielding a corresponding dual objective for policy evaluation. Such entropic proximal policy optimization view gives a unified perspective on compatible actor-critic architectures. In particular, common least-squares value function estimation coupled with advantage-weighted maximum likelihood policy improvement is shown to correspond to the Pearson χ 2 -divergence penalty. Other actor-critic pairs arise for various choices of the penalty-generating function f. On a concrete instantiation of our framework with the α -divergence, we carry out asymptotic analysis of the solutions for different values of α and demonstrate the effects of the divergence function choice on common standard reinforcement learning problems.
Apart from computational advantages, information-theoretic approaches provide a solid framework for describing and studying aspects of intelligent behavior @cite_27 , from autonomy @cite_39 and curiosity @cite_42 to bounded rationality @cite_17 and game theory @cite_11 .
{ "cite_N": [ "@cite_11", "@cite_42", "@cite_39", "@cite_27", "@cite_17" ], "mid": [ "1487708124", "2020920737", "2054162326", "2239029832", "2211766770" ], "abstract": [ "A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality of all red-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. This paper shows that the same information theoretic mathematical structure, known as Product Distribution (PD) theory, addresses both issues. In this, PD theory not only provides a principle formulation of bounded rationality and a set of new types of mean field theory in statistical physics; it also shows that those topics are fundamentally one and the same.", "We provide a fresh look at the problem of exploration in reinforcement learning, drawing on ideas from information theory. First, we show that Boltzmann-style exploration, one of the main exploration methods used in reinforcement learning, is optimal from an information-theoretic point of view, in that it optimally trades expected return for the coding cost of the policy. Second, we address the problem of curiosity-driven learning. We propose that, in addition to maximizing the expected return, a learner should choose a policy that also maximizes the learner’s predictive power. This makes the world both interesting and exploitable. Optimal policies then have the form of Boltzmann-style exploration with a bonus, containing a novel exploration–exploitation trade-off which emerges naturally from the proposed optimization principle. Importantly, this exploration–exploitation trade-off persists in the optimal deterministic policy, i.e., when there is no exploration due to randomness. As a result, exploration is understood as an emerging behavior that optimizes information gain, rather than being modeled as pure randomization of action choices.", "Abstract We present a tentative proposal for a quantitative measure of autonomy. This is something that, surprisingly, is rarely found in the literature, even though autonomy is considered to be a basic concept in many disciplines, including artificial life. We work in an information theoretic setting for which the distinction between system and environment is the starting point. As a first measure for autonomy, we propose the conditional mutual information between consecutive states of the system conditioned on the history of the environment. This works well when the system cannot influence the environment at all and the environment does not interact synergetically with the system. When, in contrast, the system has full control over its environment, we should instead neglect the environment history and simply take the mutual information between consecutive system states as a measure of autonomy. In the case of mutual interaction between system and environment there remains an ambiguity regarding whether system or environment has caused observed correlations. If the interaction structure of the system is known, we define a “causal” autonomy measure which allows this ambiguity to be resolved. Synergetic interactions still pose a problem since in this case causation cannot be attributed to the system or the environment alone. 
Moreover, our analysis reveals some subtle facets of the concept of autonomy, in particular with respect to the seemingly innocent system–environment distinction we took for granted, and raises the issue of the attribution of control, i.e. the responsibility for observed effects. To further explore these issues, we evaluate our autonomy measure for simple automata, an agent moving in space, gliders in the game of life, and the tessellation automaton for autopoiesis of [Varela, F.J., Maturana, H.R., Uribe, R., 1974. Autopoiesis: the organization of living systems, its characterization and a model. BioSystems 5, 187–196].", "The perception–action cycle is often defined as “the circular flow of information between an organism and its environment in the course of a sensory guided sequence of actions towards a goal” (Fuster, Neuron 30:319–333, 2001; International Journal of Psychophysiology 60(2):125–132, 2006). The question we address in this chapter is in what sense this “flow of information” can be described by Shannon’s measures of information introduced in his mathematical theory of communication. We provide an affirmative answer to this question using an intriguing analogy between Shannon’s classical model of communication and the perception–action cycle. In particular, decision and action sequences turn out to be directly analogous to codes in communication, and their complexity – the minimal number of (binary) decisions required for reaching a goal – directly bounded by information measures, as in communication. This analogy allows us to extend the standard reinforcement learning framework. The latter considers the future expected reward in the course of a behaviour sequence towards a goal (value-to-go). Here, we additionally incorporate a measure of information associated with this sequence: the cumulated information processing cost or bandwidth required to specify the future decision and action sequence (information-to-go). Using a graphical model, we derive a recursive Bellman optimality equation for information measures, in analogy to reinforcement learning; from this, we obtain new algorithms for calculating the optimal trade-off between the value-to-go and the required information-to-go, unifying the ideas behind the Bellman and the Blahut–Arimoto iterations. This trade-off between value-to-go and information-to-go provides a complete analogy with the compression–distortion trade-off in source coding. The present new formulation connects seemingly unrelated optimization problems. The algorithm is demonstrated on grid world examples.", "Abstraction and hierarchical information-processing are hallmarks of human and animal intelligence underlying the unrivaled flexibility of behavior in biological systems. Achieving such a flexibility in artificial systems is challenging, even with more and more computational power. Here we investigate the hypothesis that abstraction and hierarchical information-processing might in fact be the consequence of limitations in information-processing power. In particular, we study an information-theoretic framework of bounded rational decision-making that trades off utility maximization against information-processing costs. We apply the basic principle of this framework to perception-action systems with multiple information-processing nodes and derive bounded optimal solutions. We show how the formation of abstractions and decision-making hierarchies depends on information-processing costs. 
We illustrate the theoretical ideas with example simulations and conclude by formalizing a mathematically unifying optimization principle that could potentially be extended to more complex systems." ] }
1907.04214
2956996691
An optimal feedback controller for a given Markov decision process (MDP) can in principle be synthesized by value or policy iteration. However, if the system dynamics and the reward function are unknown, a learning agent must discover an optimal controller via direct interaction with the environment. Such interactive data gathering commonly leads to divergence towards dangerous or uninformative regions of the state space unless additional regularization measures are taken. Prior works proposed bounding the information loss measured by the Kullback–Leibler (KL) divergence at every policy improvement step to eliminate instability in the learning dynamics. In this paper, we consider a broader family of f-divergences, and more concretely α -divergences, which inherit the beneficial property of providing the policy improvement step in closed form at the same time yielding a corresponding dual objective for policy evaluation. Such entropic proximal policy optimization view gives a unified perspective on compatible actor-critic architectures. In particular, common least-squares value function estimation coupled with advantage-weighted maximum likelihood policy improvement is shown to correspond to the Pearson χ 2 -divergence penalty. Other actor-critic pairs arise for various choices of the penalty-generating function f. On a concrete instantiation of our framework with the α -divergence, we carry out asymptotic analysis of the solutions for different values of α and demonstrate the effects of the divergence function choice on common standard reinforcement learning problems.
Entropic proximal mappings were introduced in @cite_29 as a general framework for constructing approximation and smoothing schemes for optimization problems. The problem formulation presented here can be considered as an application of this general theory to policy optimization in Markov decision processes. Following the recent work @cite_22 , which establishes links between the KL-divergence-regularized policy iteration algorithms popular in reinforcement learning @cite_44 @cite_14 and the stochastic mirror descent algorithm well known in optimization @cite_12 @cite_34 , one can view our Algorithm as an analog of mirror descent with an @math -divergence penalty.
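For the KL case, the policy improvement step underlying these links has a well-known closed form (stated here only as a reminder of the standard result): maximizing the expected advantage with a KL penalty of strength 1/eta toward the current policy yields an exponentiated, advantage-weighted update,

```latex
% Closed-form maximizer, per state s, of
%   E_{a \sim \pi}[A^{\pi_k}(s,a)] - \tfrac{1}{\eta}\,\mathrm{KL}\bigl(\pi(\cdot \mid s)\,\|\,\pi_k(\cdot \mid s)\bigr):
\pi_{k+1}(a \mid s) \;\propto\; \pi_k(a \mid s)\,\exp\!\bigl(\eta\, A^{\pi_k}(s,a)\bigr)
```

Replacing the KL term by a general f-divergence (in particular an alpha-divergence) changes the form of this proximal mapping, which is precisely the generalization considered here.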
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_29", "@cite_44", "@cite_34", "@cite_12" ], "mid": [ "1771410628", "2619268125", "2009274429", "1499669280", "2016384870", "1505731132" ], "abstract": [ "In this article, we describe a method for optimizing control policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified scheme, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.", "We propose a general framework for entropy-regularized average-reward reinforcement learning in Markov decision processes (MDPs). Our approach is based on extending the linear-programming formulation of policy optimization in MDPs to accommodate convex regularization functions. Our key result is showing that using the conditional entropy of the joint state-action distributions as regularization yields a dual optimization problem closely resembling the Bellman optimality equations. This result enables us to formalize a number of state-of-the-art entropy-regularized reinforcement learning algorithms as approximate variants of Mirror Descent or Dual Averaging, and thus to argue about the convergence properties of these methods. In particular, we show that the exact version of the TRPO algorithm of (2015) actually converges to the optimal policy, while the entropy-regularized policy gradient methods of (2016) may fail to converge to a fixed point. Finally, we illustrate empirically the effects of using various regularization techniques on learning performance in a simple reinforcement learning setup.", "We introduce a family of new transforms based on imitating the proximal mapping of Moreau and the associated Moreau-Yosida proximal approximation of a function. The transforms are constructed in terms of the AÂ†-divergence functional a generalization of the relative entropy and of Bregman's measure of distance. An analogue of Moreau's theorem associated with these entropy-like distances is proved. We show that the resulting Entropic Proximal Maps share properties similar to the proximal mapping and provide a fairly general framework for constructing approximation and smoothing schemes for optimization problems. Applications of the results to the construction of generalized augmented Lagrangians for nonlinear programs and the minimax problem are presented.", "Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, it has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients (Bagnell and Schneider 2003), many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. 
It works well on typical reinforcement learning benchmark problems.", "The mirror descent algorithm (MDA) was introduced by Nemirovsky and Yudin for solving convex optimization problems. This method exhibits an efficiency estimate that is mildly dependent in the decision variables dimension, and thus suitable for solving very large scale optimization problems. We present a new derivation and analysis of this algorithm. We show that the MDA can be viewed as a nonlinear projected-subgradient type method, derived from using a general distance-like function instead of the usual Euclidean squared distance. Within this interpretation, we derive in a simple way convergence and efficiency estimates. We then propose an Entropic mirror descent algorithm for convex minimization over the unit simplex, with a global efficiency estimate proven to be mildly dependent in the dimension of the problem.", "" ] }
1907.04214
2956996691
An optimal feedback controller for a given Markov decision process (MDP) can in principle be synthesized by value or policy iteration. However, if the system dynamics and the reward function are unknown, a learning agent must discover an optimal controller via direct interaction with the environment. Such interactive data gathering commonly leads to divergence towards dangerous or uninformative regions of the state space unless additional regularization measures are taken. Prior works proposed bounding the information loss measured by the Kullback–Leibler (KL) divergence at every policy improvement step to eliminate instability in the learning dynamics. In this paper, we consider a broader family of f-divergences, and more concretely α -divergences, which inherit the beneficial property of providing the policy improvement step in closed form at the same time yielding a corresponding dual objective for policy evaluation. Such entropic proximal policy optimization view gives a unified perspective on compatible actor-critic architectures. In particular, common least-squares value function estimation coupled with advantage-weighted maximum likelihood policy improvement is shown to correspond to the Pearson χ 2 -divergence penalty. Other actor-critic pairs arise for various choices of the penalty-generating function f. On a concrete instantiation of our framework with the α -divergence, we carry out asymptotic analysis of the solutions for different values of α and demonstrate the effects of the divergence function choice on common standard reinforcement learning problems.
An alternative proximal reinforcement learning scheme was introduced in @cite_28 , based on the extragradient method for solving variational inequalities and leveraging operator splitting techniques. Although the idea of exploiting proximal maps and updates in the primal and dual spaces is similar to ours, the regularization in @cite_28 is applied in the value function space to smooth generalized TD learning algorithms, whereas we study regularization in the primal space.
{ "cite_N": [ "@cite_28" ], "mid": [ "1835716857" ], "abstract": [ "In this paper, we set forth a new vision of reinforcement learning developed by us over the past few years, one that yields mathematically rigorous solutions to longstanding important questions that have remained unresolved: (i) how to design reliable, convergent, and robust reinforcement learning algorithms (ii) how to guarantee that reinforcement learning satisfies pre-specified \"safety\" guarantees, and remains in a stable region of the parameter space (iii) how to design \"off-policy\" temporal difference learning algorithms in a reliable and stable manner, and finally (iv) how to integrate the study of reinforcement learning into the rich theory of stochastic optimization. In this paper, we provide detailed answers to all these questions using the powerful framework of proximal operators. The key idea that emerges is the use of primal dual spaces connected through the use of a Legendre transform. This allows temporal difference updates to occur in dual spaces, allowing a variety of important technical advantages. The Legendre transform elegantly generalizes past algorithms for solving reinforcement learning problems, such as natural gradient methods, which we show relate closely to the previously unconnected framework of mirror descent methods. Equally importantly, proximal operator theory enables the systematic development of operator splitting methods that show how to safely and reliably decompose complex products of gradients that occur in recent variants of gradient-based temporal difference learning. This key technical innovation makes it possible to finally design \"true\" stochastic gradient methods for reinforcement learning. Finally, Legendre transforms enable a variety of other benefits, including modeling sparsity and domain geometry. Our work builds extensively on recent work on the convergence of saddle-point algorithms, and on the theory of monotone operators." ] }
1907.04072
2957984643
Online social media platforms have made the world more connected than ever before, thereby making it easier for everyone to spread their content across a wide variety of audiences. Twitter is one such popular platform where people publish tweets to spread their messages to everyone. Twitter allows users to Retweet other users' tweets in order to broadcast it to their network. The more retweets a particular tweet gets, the faster it spreads. This creates incentives for people to obtain artificial growth in the reach of their tweets by using certain blackmarket services to gain inorganic appraisals for their content. In this paper, we attempt to detect such tweets that have been posted on these blackmarket services in order to gain artificially boosted retweets. We use a multitask learning framework to leverage soft parameter sharing between a classification and a regression based task on separate inputs. This allows us to effectively detect tweets that have been posted to these blackmarket services, achieving an F1-score of 0.89 when classifying tweets as blackmarket or genuine.
: The problem of fake and spam tweets is not new, and many solutions have been proposed to tackle it. @cite_16 showed that the network structure of spammers differs from that of non-spammers, and also tracked the life cycle of endogenous Twitter content. @cite_15 conducted a comprehensive evaluation of several machine learning algorithms for the timely detection of spam. Fake tweets, on the other hand, are tweets that spread misinformation. @cite_14 provided an extensive survey on fake tweet detection. Unlike spam tweets, fake tweets are mostly associated with major events, and the accounts that produce this fake content are mostly created during these events @cite_12 @cite_11 .
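As a rough, hedged illustration of the kind of supervised spam-tweet classifiers evaluated in work such as @cite_15 , the snippet below trains a simple bag-of-words classifier on labelled tweets; the features, the model, and the two toy examples are placeholders and do not reproduce the cited experimental setups.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

# Placeholder data: (tweet text, label) with 1 = spam, 0 = legitimate.
train_texts = ["win a free iphone now!!!", "great talk at the conference today"]
train_labels = [1, 0]
test_texts = ["free gift card, click here", "lunch with the team"]
test_labels = [1, 0]

# Light-weight lexical features plus a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)
print("F1:", f1_score(test_labels, clf.predict(test_texts)))
```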
{ "cite_N": [ "@cite_14", "@cite_15", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "2280128323", "1526831942", "2074835059", "1796766288", "1085730058" ], "abstract": [ "Viral marketing, marketing techniques that use pre-existing social networks, has experienced a significant encouragement in the last years. In this scope, Twitter is the most studied social network in viral marketing and the rumor spread is a widely researched problem. This paper contributes with a survey of research works which study rumor diffusion in Twitter. Moreover, the most useful aspects of these works to build new multi-agent based simulations dealing with this interesting and complex problem are discussed. The main four research lines in rumor dissemination found and discussed in this paper are: exploratory data analysis, rumor detection, epidemiological modeling, and multi-agent based social simulation. The survey shows that the reproducibility in the specialized literature has to be considerably improved. Finally, a free and open-source simulation tool implementing several of the models considered in this survey is presented.", "Twitter has changed the way of communication and getting news for people's daily life in recent years. Meanwhile, due to the popularity of Twitter, it also becomes a main target for spamming activities. In order to stop spammers, Twitter is using Google SafeBrowsing to detect and block spam links. Despite that blacklists can block malicious URLs embedded in tweets, their lagging time hinders the ability to protect users in real-time. Thus, researchers begin to apply different machine learning algorithms to detect Twitter spam. However, there is no comprehensive evaluation on each algorithms' performance for real-time Twitter spam detection due to the lack of large groundtruth. To carry out a thorough evaluation, we collected a large dataset of over 600 million public tweets. We further labelled around 6.5 million spam tweets and extracted 12 light-weight features, which can be used for online detection. In addition, we have conducted a number of experiments on six machine learning algorithms under various conditions to better understand their effectiveness and weakness for timely Twitter spam detection. We will make our labelled dataset for researchers who are interested in validating or extending our work.", "Spam becomes a problem as soon as an online communication medium becomes popular. Twitter’s behavioral and structural properties make it a fertile breeding ground for spammers to proliferate. In this article we examine spam around a one-time Twitter meme—“robotpickuplines”. We show the existence of structural network differences between spam accounts and legitimate users. We conclude by highlighting challenges in disambiguating spammers from legitimate users.", "In today's world, online social media plays a vital role during real world events, especially crisis events. There are both positive and negative effects of social media coverage of events, it can be used by authorities for effective disaster management or by malicious entities to spread rumors and fake news. The aim of this paper, is to highlight the role of Twitter, during Hurricane Sandy (2012) to spread fake images about the disaster. We identified 10,350 unique tweets containing fake images that were circulated on Twitter, during Hurricane Sandy. We performed a characterization analysis, to understand the temporal, social reputation and influence patterns for the spread of fake images. 
Eighty six percent of tweets spreading the fake images were retweets, hence very few were original tweets. Our results showed that top thirty users out of 10,215 users (0.3 ) resulted in 90 of the retweets of fake images; also network links such as follower relationships of Twitter, contributed very less (only 11 ) to the spread of these fake photos URLs. Next, we used classification models, to distinguish fake images from real images of Hurricane Sandy. Best results were obtained from Decision Tree classifier, we got 97 accuracy in predicting fake images from real. Also, tweet based features were very effective in distinguishing fake images tweets from real, while the performance of user based features was very poor. Our results, showed that, automated techniques can be used in identifying real images from fake images posted on Twitter.", "During natural disasters or crises, users on social media tend to easily believe contents of postings related to the events, and retweet the postings with hoping them to be reached to many other users. Unfortunately, there are malicious users who understand the tendency and post misinformation such as spam and fake messages with expecting wider propagation. To resolve the problem, in this paper we conduct a case study of 2013 Moore Tornado and Hurricane Sandy. Concretely, we (i) understand behaviors of these malicious users, (ii) analyze properties of spam, fake and legitimate messages, (iii) propose flat and hierarchical classification approaches, and (iv) detect both fake and spam messages with even distinguishing between them. Our experimental results show that our proposed approaches identify spam and fake messages with 96.43 accuracy and 0.961 F-measure." ] }
1907.04072
2957984643
Online social media platforms have made the world more connected than ever before, thereby making it easier for everyone to spread their content across a wide variety of audiences. Twitter is one such popular platform where people publish tweets to spread their messages to everyone. Twitter allows users to Retweet other users' tweets in order to broadcast it to their network. The more retweets a particular tweet gets, the faster it spreads. This creates incentives for people to obtain artificial growth in the reach of their tweets by using certain blackmarket services to gain inorganic appraisals for their content. In this paper, we attempt to detect such tweets that have been posted on these blackmarket services in order to gain artificially boosted retweets. We use a multitask learning framework to leverage soft parameter sharing between a classification and a regression based task on separate inputs. This allows us to effectively detect tweets that have been posted to these blackmarket services, achieving an F1-score of 0.89 when classifying tweets as blackmarket or genuine.
: Blackmarket services have recently received considerable attention due to the growing number of users relying on them. Analysis of such underground services was first documented in @cite_2 , where the authors examined six underground forums, studying the properties of the social network structures that form within them and the services that are exchanged. @cite_4 proposed DetectVC, which incorporates the graph structure and prior knowledge about collusive followers to solve the voluntary following problem. @cite_10 investigated the customers involved in gaining fake retweets. @cite_0 proposed CoReRank, an unsupervised model, and CoReRank+, a semi-supervised extension of CoReRank, to detect collusive users involved in retweeting activities.
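The credibility/merit recursion of CoReRank @cite_0 is reminiscent of HITS-style mutual reinforcement on the user-tweet retweet graph; the sketch below shows only that generic pattern (a placeholder update rule, not the axioms or normalization used in the paper).

```python
import numpy as np

def mutual_scores(R, iters=50):
    """R[u, t] = 1 if user u retweeted tweet t.
    Returns (score per user, score per tweet) via a HITS-like recursion:
    a tweet's score aggregates the scores of its retweeters, and a user's
    score aggregates the scores of the tweets they retweet."""
    cred = np.ones(R.shape[0])
    merit = np.ones(R.shape[1])
    for _ in range(iters):
        merit = R.T @ cred
        merit /= np.linalg.norm(merit) + 1e-12
        cred = R @ merit
        cred /= np.linalg.norm(cred) + 1e-12
    return cred, merit

# Toy retweet graph: 3 users x 4 tweets.
R = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 0, 1, 1]], dtype=float)
cred, merit = mutual_scores(R)
```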
{ "cite_N": [ "@cite_0", "@cite_10", "@cite_4", "@cite_2" ], "mid": [ "2907374582", "2809692089", "2572911049", "2122551442" ], "abstract": [ "Twitter's popularity has fostered the emergence of various illegal user activities - one such activity is to artificially bolster visibility of tweets by gaining large number of retweets within a short time span. The natural way to gain visibility is time-consuming. Therefore, users who want their tweets to get quick visibility try to explore shortcuts - one such shortcut is to approach the blackmarket services, and gain retweets for their own tweets by retweeting other customers' tweets. Thus the users intrinsically become a part of a collusive ecosystem controlled by these services. In this paper, we propose CoReRank, an unsupervised framework to detect collusive users (who are involved in producing artificial retweets), and suspicious tweets (which are submitted to the blackmarket services) simultaneously. CoReRank leverages the retweeting (or quoting) patterns of users, and measures two scores - the 'credibility' of a user and the 'merit' of a tweet. We propose a set of axioms to derive the interdependency between these two scores, and update them in a recursive manner. The formulation is further extended to handle the cold start problem. CoReRank is guaranteed to converge in a finite number of iterations and has linear time complexity. We also propose a semi-supervised version of CoReRank (called CoReRank+) which leverages a partial ground-truth labeling of users and tweets. Extensive experiments are conducted to show the superiority of CoReRank compared to six baselines on a novel dataset we collected and annotated. CoReRank beats the best unsupervised baseline method by 269 (20 ) (relative) average precision and 300 (22.22 ) (relative) average recall in detecting collusive (genuine) users. CoReRank+ beats the best supervised baseline method by 33.18 AUC. CoReRank also detects suspicious tweets with 0.85 (0.60) average precision (recall). To our knowledge, CoReRank is the first unsupervised method to detect collusive users and suspicious tweets simultaneously with theoretical guarantees.", "Twitter has increasingly become a popular platform to share news and user opinion. A tweet is considered to be important if it receives high number of affirmative reactions from other Twitter users via Retweets. Retweet count is thus considered as a surrogate measure for positive crowd-sourced reactions - high number of retweets of a tweet not only help the tweet being broadcasted, but also aid in making its topic trending. This in turn bolsters the social reputation of the author of the tweet. Since social reputation impact of users t weets influences many decisions (such as promoting brands, advertisement etc.), several blackmarket syndicates have actively been engaged in producing fake retweets in a collusive manner. Users who want to boost the impact of their tweets approach the blackmarket services, and gain retweets for their own tweets by retweeting other customers' tweets. Thus they become customers of blackmarket syndicates and engage in fake activities. Interestingly, these customers are neither bots, nor even fake users - they are usually normal human beings; they express a mix of organic and inorganic retweeting activities, and there is no synchronicity across their behaviors. In this paper, we make a first attempt to investigate such blackmarket customers engaged in producing fake retweets. 
We collected and annotated a novel dataset comprising of customers of many blackmarket services and characterize them using a set of 64 novel features. We show how their social behavior differs from genuine users. We then use state-of-the-art supervised models to detect three types of customers (bots, promotional, normal) and genuine users. We achieve a Macro Fl-score of 0.87 with SVM, outperforming four other baselines significantly. We further design a browser extension, SCoRe which, given the link of a tweet, spots its fake retweeters in real-time. We also collected users' feedback on the performance of SCoRe and obtained 85 accuracy.", "A number of existing works have focused on the problem of malicious following activity detection in microblog services. However, most of them make the assumption that the spamming following relationships are either from fraudulent accounts or compromised legitimate users. They therefore developed detection methodologies based on the features derived from this assumption. Recently, a new type of malicious crowdturfing following relationship is provided by the follower market, called voluntary following. Followers who provide voluntary following services (or named volowers) are normal users who are willing to trade their following activities for profit. Since most of their behaviors follow normal patterns, it is difficult for existing methods to detect volowers and their corresponding customers. In this work, we try to solve the voluntary following problem through a newly proposed detection method named DetectVC. This method incorporates both structure information in user following behavior graphs and prior knowledge collected from follower markets. Experimental results on large scale practical microblog data set show that DetectVC is able to detect volowers and their customers simultaneously and it also significantly outperforms existing solutions.", "Underground forums, where participants exchange information on abusive tactics and engage in the sale of illegal goods and services, are a form of online social network (OSN). However, unlike traditional OSNs such as Facebook, in underground forums the pattern of communications does not simply encode pre-existing social relationships, but instead captures the dynamic trust relationships forged between mutually distrustful parties. In this paper, we empirically characterize six different underground forums --- BlackHatWorld, Carders, HackSector, HackE1ite, Freehack, and L33tCrew --- examining the properties of the social networks formed within, the content of the goods and services being exchanged, and lastly, how individuals gain and lose trust in this setting." ] }
1907.04072
2957984643
Online social media platforms have made the world more connected than ever before, thereby making it easier for everyone to spread their content across a wide variety of audiences. Twitter is one such popular platform where people publish tweets to spread their messages to everyone. Twitter allows users to Retweet other users' tweets in order to broadcast it to their network. The more retweets a particular tweet gets, the faster it spreads. This creates incentives for people to obtain artificial growth in the reach of their tweets by using certain blackmarket services to gain inorganic appraisals for their content. In this paper, we attempt to detect such tweets that have been posted on these blackmarket services in order to gain artificially boosted retweets. We use a multitask learning framework to leverage soft parameter sharing between a classification and a regression based task on separate inputs. This allows us to effectively detect tweets that have been posted to these blackmarket services, achieving an F1-score of 0.89 when classifying tweets as blackmarket or genuine.
: Multitask learning is used whenever two or more related tasks can be optimised together. Most related studies on multitask learning focus on how the tasks can be learned together more effectively. @cite_7 classified multitask learning models into five categories and reported the characteristics of each approach. Cross-stitch units were introduced by @cite_8 ; they learn an optimal combination of shared and task-specific representations. @cite_13 proposed GIRNet, a unified position-sensitive multitask recurrent neural network architecture.
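For intuition, a cross-stitch unit @cite_8 can be viewed as a small learnable 2x2 mixing of the activations of two task-specific networks at a given layer; the minimal PyTorch module below sketches this mixing for flattened activations and is a simplification of the original formulation.

```python
import torch
import torch.nn as nn

class CrossStitch(nn.Module):
    """Learnable linear mixing of activations from two task networks:
    new_a = a_aa * a + a_ab * b ;  new_b = a_ba * a + a_bb * b."""
    def __init__(self):
        super().__init__()
        # Initialise near the identity so each task mostly keeps its own features.
        self.alpha = nn.Parameter(torch.tensor([[0.9, 0.1],
                                                [0.1, 0.9]]))

    def forward(self, act_a, act_b):
        new_a = self.alpha[0, 0] * act_a + self.alpha[0, 1] * act_b
        new_b = self.alpha[1, 0] * act_a + self.alpha[1, 1] * act_b
        return new_a, new_b

# Usage: mix the outputs of two task-specific layers.
stitch = CrossStitch()
a, b = torch.randn(8, 64), torch.randn(8, 64)
a_mixed, b_mixed = stitch(a, b)
```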
{ "cite_N": [ "@cite_13", "@cite_7", "@cite_8" ], "mid": [ "2903014193", "2742079690", "2963877604" ], "abstract": [ "In several natural language tasks, labeled sequences are available in separate domains (say, languages), but the goal is to label sequences with mixed domain (such as code-switched text). Or, we may have available models for labeling whole passages (say, with sentiments), which we would like to exploit toward better position-specific label inference (say, target-dependent sentiment annotation). A key characteristic shared across such tasks is that different positions in a primary instance can benefit from different experts' trained from auxiliary data, but labeled primary instances are scarce, and labeling the best expert for each position entails unacceptable cognitive burden. We propose GITNet, a unified position-sensitive multi-task recurrent neural network (RNN) architecture for such applications. Auxiliary and primary tasks need not share training instances. Auxiliary RNNs are trained over auxiliary instances. A primary instance is also submitted to each auxiliary RNN, but their state sequences are gated and merged into a novel composite state sequence tailored to the primary inference task. Our approach is in sharp contrast to recent multi-task networks like the cross-stitch and sluice network, which do not control state transfer at such fine granularity. We demonstrate the superiority of GIRNet using three applications: sentiment classification of code-switched passages, part-of-speech tagging of code-switched text, and target position-sensitive annotation of sentiment in monolingual passages. In all cases, we establish new state-of-the-art performance beyond recent competitive baselines.", "Multi-Task Learning (MTL) is a learning paradigm in machine learning and its aim is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks. In this paper, we give a survey for MTL. First, we classify different MTL algorithms into several categories, including feature learning approach, low-rank approach, task clustering approach, task relation learning approach, and decomposition approach, and then discuss the characteristics of each approach. In order to improve the performance of learning tasks further, MTL can be combined with other learning paradigms including semi-supervised learning, active learning, unsupervised learning, reinforcement learning, multi-view learning and graphical models. When the number of tasks is large or the data dimensionality is high, batch MTL models are difficult to handle this situation and online, parallel and distributed MTL models as well as dimensionality reduction and feature hashing are reviewed to reveal their computational and storage advantages. Many real-world applications use MTL to boost their performance and we review representative works. Finally, we present theoretical analyses and discuss several future directions for MTL.", "Multi-task learning in Convolutional Networks has displayed remarkable success in the field of recognition. This success can be largely attributed to learning shared representations from multiple supervisory tasks. However, existing multi-task approaches rely on enumerating multiple network architectures specific to the tasks at hand, that do not generalize. In this paper, we propose a principled approach to learn shared representations in ConvNets using multitask learning. 
Specifically, we propose a new sharing unit: \"cross-stitch\" unit. These units combine the activations from multiple networks and can be trained end-to-end. A network with cross-stitch units can learn an optimal combination of shared and task-specific representations. Our proposed method generalizes across multiple tasks and shows dramatically improved performance over baseline methods for categories with few training examples." ] }
1901.00326
2906748410
In this paper, we propose a novel method to incorporate partial evidence in the inference of deep convolutional neural networks. Contrary to the existing, top performing methods, which either iteratively modify the input of the network or exploit external label taxonomy to take the partial evidence into account, we add separate network modules ("Plugin Networks") to the intermediate layers of a pre-trained convolutional network. The goal of these modules is to incorporate additional signal, ie information about known labels, into the inference procedure and adjust the predicted output accordingly. Since the attached plugins have a simple structure, consisting of only fully connected layers, we drastically reduced the computational cost of training and inference. At the same time, the proposed architecture allows to propagate information about known labels directly to the intermediate layers to improve the final representation. Extensive evaluation of the proposed method confirms that our Plugin Networks outperform the state-of-the-art in a variety of tasks, including scene categorization, multi-label image annotation, and semantic segmentation.
: Exploiting additional contextual cues in visual recognition tasks has gained a lot of attention from the computer vision community @cite_14 @cite_24 @cite_20 . Contextual information related to semantics was used to improve object detection @cite_23 . Social media meta-data was also used in the context of multilabel image annotation in @cite_20 . Although adding context has proved successful in improving the quality of visual recognition, all of the above-mentioned methods use the context in conjunction with the uni-modal (visual) input image during training of the entire system. In this work, we propose a fundamentally different approach, since the context (in the form of known labels) is learned only after training of the main model is finished, and our approach allows this pre-trained model to be extended with additional information a posteriori .
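A minimal sketch of the idea summarized in the abstract above -- adjusting an intermediate representation of a frozen, pre-trained network with a small fully connected module fed by the known (partial-evidence) labels -- could look as follows; the layer sizes, the multi-hot label encoding, and the additive coupling are illustrative assumptions, not the exact published architecture.

```python
import torch
import torch.nn as nn

class Plugin(nn.Module):
    """Small FC module mapping known labels to an additive adjustment of an
    intermediate activation of a frozen base network (illustrative sketch)."""
    def __init__(self, num_known_labels, feat_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_known_labels, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim))

    def forward(self, features, known_labels):
        return features + self.net(known_labels)  # adjust the frozen features

# Toy setup: 512-d features from a frozen CNN, 20 possible context labels, 100 classes.
backbone_feats = torch.randn(4, 512)           # placeholder frozen-backbone output
known = torch.zeros(4, 20); known[:, 3] = 1.0  # multi-hot vector of known labels
plugin = Plugin(num_known_labels=20, feat_dim=512)
classifier = nn.Linear(512, 100)               # placeholder pre-trained head
logits = classifier(plugin(backbone_feats, known))
```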
{ "cite_N": [ "@cite_24", "@cite_14", "@cite_23", "@cite_20" ], "mid": [ "2160254296", "2141364309", "2125215748", "1908139891" ], "abstract": [ "In this work we introduce a novel approach to object categorization that incorporates two types of context-co-occurrence and relative location - with local appearance-based features. Our approach, named CoLA (for co-occurrence, location and appearance), uses a conditional random field (CRF) to maximize object label agreement according to both semantic and spatial relevance. We model relative location between objects using simple pairwise features. By vector quantizing this feature space, we learn a small set of prototypical spatial relationships directly from the data. We evaluate our results on two challenging datasets: PASCAL 2007 and MSRC. The results show that combining co-occurrence and spatial context improves accuracy in as many as half of the categories compared to using co-occurrence alone.", "This paper presents an empirical evaluation of the role of context in a contemporary, challenging object detection task - the PASCAL VOC 2008. Previous experiments with context have mostly been done on home-grown datasets, often with non-standard baselines, making it difficult to isolate the contribution of contextual information. In this work, we present our analysis on a standard dataset, using top-performing local appearance detectors as baseline. We evaluate several different sources of context and ways to utilize it. While we employ many contextual cues that have been used before, we also propose a few novel ones including the use of geographic context and a new approach for using object spatial support.", "In this paper we study the role of context in existing state-of-the-art detection and segmentation approaches. Towards this goal, we label every pixel of PASCAL VOC 2010 detection challenge with a semantic category. We believe this data will provide plenty of challenges to the community, as it contains 520 additional classes for semantic segmentation and object detection. Our analysis shows that nearest neighbor based approaches perform poorly on semantic segmentation of contextual classes, showing the variability of PASCAL imagery. Furthermore, improvements of exist ing contextual models for detection is rather modest. In order to push forward the performance in this difficult scenario, we propose a novel deformable part-based model, which exploits both local context around each candidate detection as well as global context at the level of the scene. We show that this contextual reasoning significantly helps in detecting objects at all scales.", "Some images that are difficult to recognize on their own may become more clear in the context of a neighborhood of related images with similar social-network metadata. We build on this intuition to improve multilabel image annotation. Our model uses image metadata nonparametrically to generate neighborhoods of related images using Jaccard similarities, then uses a deep neural network to blend visual information from the image and its neighbors. Prior work typically models image metadata parametrically, in contrast, our nonparametric treatment allows our model to perform well even when the vocabulary of metadata changes between training and testing. We perform comprehensive experiments on the NUS-WIDE dataset, where we show that our model outperforms state-of-the-art methods for multilabel image annotation even when our model is forced to generalize to new types of metadata." ] }
1901.00326
2906748410
In this paper, we propose a novel method to incorporate partial evidence in the inference of deep convolutional neural networks. Contrary to the existing, top performing methods, which either iteratively modify the input of the network or exploit external label taxonomy to take the partial evidence into account, we add separate network modules ("Plugin Networks") to the intermediate layers of a pre-trained convolutional network. The goal of these modules is to incorporate additional signal, ie information about known labels, into the inference procedure and adjust the predicted output accordingly. Since the attached plugins have a simple structure, consisting of only fully connected layers, we drastically reduced the computational cost of training and inference. At the same time, the proposed architecture allows to propagate information about known labels directly to the intermediate layers to improve the final representation. Extensive evaluation of the proposed method confirms that our Plugin Networks outperform the state-of-the-art in a variety of tasks, including scene categorization, multi-label image annotation, and semantic segmentation.
: Some authors proposed to model the co-occurrence of labels available at training time to improve recognition performance @cite_22 . @cite_26 , on the other hand, uses a dedicated graph structure, designed specifically to capture semantic similarities between labels, to store label relations. Other forms of external knowledge can be found in @cite_13 and @cite_17 , where a WordNet taxonomy of tags is used to increase the accuracy of visual recognition systems. Similarly to @cite_20 , @cite_11 also used social media meta-data to improve image recognition results. Finally, @cite_0 estimated entry-level labels of visual objects by exploiting image captions. Contrary to our method, the above-mentioned approaches focus on finding relationships between labels and driving the training algorithm to encompass those structures. In this work, we do not explicitly model any label structures -- the only label-related input we give to the network is the set of known labels for an image, with no information about their relationship to the other labels.
{ "cite_N": [ "@cite_26", "@cite_11", "@cite_22", "@cite_0", "@cite_13", "@cite_20", "@cite_17" ], "mid": [ "64813323", "2125204570", "2706729717", "2135166986", "2153419563", "1908139891", "2098728436" ], "abstract": [ "In this paper we study how to perform object classification in a principled way that exploits the rich structure of real world labels. We develop a new model that allows encoding of flexible relations between labels. We introduce Hierarchy and Exclusion (HEX) graphs, a new formalism that captures semantic relations between any two labels applied to the same object: mutual exclusion, overlap and subsumption. We then provide rigorous theoretical analysis that illustrates properties of HEX graphs such as consistency, equivalence, and computational implications of the graph structure. Next, we propose a probabilistic classification model based on HEX graphs and show that it enjoys a number of desirable properties. Finally, we evaluate our method using a large-scale benchmark. Empirical results demonstrate that our model can significantly improve object classification by exploiting the label relations.", "Large-scale image retrieval benchmarks invariably consist of images from the Web. Many of these benchmarks are derived from online photo sharing networks, like Flickr, which in addition to hosting images also provide a highly interactive social community. Such communities generate rich metadata that can naturally be harnessed for image classification and retrieval. Here we study four popular benchmark datasets, extending them with social-network metadata, such as the groups to which each image belongs, the comment thread associated with the image, who uploaded it, their location, and their network of friends. Since these types of data are inherently relational, we propose a model that explicitly accounts for the interdependencies between images sharing common properties. We model the task as a binary labeling problem on a network, and use structured learning techniques to learn model parameters. We find that social-network metadata are useful in a variety of classification tasks, in many cases outperforming methods based on image content.", "Common video representations often deploy an average or maximum pooling of pre-extracted frame features over time. Such an approach provides a simple means to encode feature distributions, but is likely to be suboptimal. As an alternative, we here explore combinations of learnable pooling techniques such as Soft Bag-of-words, Fisher Vectors , NetVLAD, GRU and LSTM to aggregate video features over time. We also introduce a learnable non-linear network unit, named Context Gating, aiming at modeling in-terdependencies between features. We evaluate the method on the multi-modal Youtube-8M Large-Scale Video Understanding dataset using pre-extracted visual and audio features. We demonstrate improvements provided by the Context Gating as well as by the combination of learnable pooling methods. We finally show how this leads to the best performance, out of more than 600 teams, in the Kaggle Youtube-8M Large-Scale Video Understanding challenge.", "Entry level categories - the labels people will use to name an object - were originally defined and studied by psychologists in the 1980s. In this paper we study entry-level categories at a large scale and learn the first models for predicting entry-level categories for images. 
Our models combine visual recognition predictions with proxies for word \"naturalness\" mined from the enormous amounts of text on the web. We demonstrate the usefulness of our models for predicting nouns (entry-level words) associated with images by people. We also learn mappings between concepts predicted by existing visual recognition systems and entry-level concepts that could be useful for improving human-focused applications such as natural language image description or retrieval.", "We introduce an approach to learn discriminative visual representations while exploiting external semantic knowledge about object category relationships. Given a hierarchical taxonomy that captures semantic similarity between the objects, we learn a corresponding tree of metrics (ToM). In this tree, we have one metric for each non-leaf node of the object hierarchy, and each metric is responsible for discriminating among its immediate subcategory children. Specifically, a Mahalanobis metric learned for a given node must satisfy the appropriate (dis)similarity constraints generated only among its subtree members' training instances. To further exploit the semantics, we introduce a novel regularizer coupling the metrics that prefers a sparse disjoint set of features to be selected for each metric relative to its ancestor (supercategory) nodes' metrics. Intuitively, this reflects that visual cues most useful to distinguish the generic classes (e.g., feline vs. canine) should be different than those cues most useful to distinguish their component fine-grained classes (e.g., Persian cat vs. Siamese cat). We validate our approach with multiple image datasets using the WordNet taxonomy, show its advantages over alternative metric learning approaches, and analyze the meaning of attribute features selected by our algorithm.", "Some images that are difficult to recognize on their own may become more clear in the context of a neighborhood of related images with similar social-network metadata. We build on this intuition to improve multilabel image annotation. Our model uses image metadata nonparametrically to generate neighborhoods of related images using Jaccard similarities, then uses a deep neural network to blend visual information from the image and its neighbors. Prior work typically models image metadata parametrically, in contrast, our nonparametric treatment allows our model to perform well even when the vocabulary of metadata changes between training and testing. We perform comprehensive experiments on the NUS-WIDE dataset, where we show that our model outperforms state-of-the-art methods for multilabel image annotation even when our model is forced to generalize to new types of metadata.", "When learning features for complex visual recognition problems, labeled image exemplars alone can be insufficient. While an object taxonomy specifying the categories' semantic relationships could bolster the learning process, not all relationships are relevant to a given visual classification task, nor does a single taxonomy capture all ties that are relevant. In light of these issues, we propose a discriminative feature learning approach that leverages multiple hierarchical taxonomies representing different semantic views of the object categories (e.g., for animal classes, one taxonomy could reflect their phylogenicties, while another could reflect their habitats). 
For each taxonomy, we first learn a tree of semantic kernels, where each node has a Mahalanobis kernel optimized to distinguish between the classes in its children nodes. Then, using the resulting semantic kernel forest, we learn class-specific kernel combinations to select only those relationships relevant to recognize each object class. To learn the weights, we introduce a novel hierarchical regularization term that further exploits the taxonomies' structure. We demonstrate our method on challenging object recognition datasets, and show that interleaving multiple taxonomic views yields significant accuracy improvements." ] }
1901.00326
2906748410
In this paper, we propose a novel method to incorporate partial evidence in the inference of deep convolutional neural networks. Contrary to the existing, top performing methods, which either iteratively modify the input of the network or exploit external label taxonomy to take the partial evidence into account, we add separate network modules ("Plugin Networks") to the intermediate layers of a pre-trained convolutional network. The goal of these modules is to incorporate additional signal, ie information about known labels, into the inference procedure and adjust the predicted output accordingly. Since the attached plugins have a simple structure, consisting of only fully connected layers, we drastically reduced the computational cost of training and inference. At the same time, the proposed architecture allows to propagate information about known labels directly to the intermediate layers to improve the final representation. Extensive evaluation of the proposed method confirms that our Plugin Networks outperform the state-of-the-art in a variety of tasks, including scene categorization, multi-label image annotation, and semantic segmentation.
: Somewhat related to our work is the recently thriving area of multi-task learning. Motivated by the phenomenon of catastrophic forgetting, multi-task learning tries to address the problem of lifelong learning and adaptation of a neural network to a set of changing tasks while preserving the network's structure. In @cite_6 , Lee et al. aim to solve this problem by incrementally matching the moments of the network's posterior distribution. In @cite_19 , the same problem is addressed through residual adapters -- neural network modules plugged into a network, similarly to our Plugin Networks -- that are the only structures trained between tasks while the backbone network remains untouched. Although we do not aim to solve the multi-task learning problem in this work, our approach is inspired by the above-mentioned methods that focus on designing robust network architectures that can dynamically adjust to data sources unseen during training.
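A residual adapter in the spirit of @cite_19 can be sketched as a small per-task module added residually around a frozen backbone layer, with only the adapter being trained for each new task; the PyTorch snippet below uses a 1x1 convolution for the adapter, which is an illustrative simplification rather than the exact published design.

```python
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Per-task 1x1 convolution added residually around a frozen backbone layer."""
    def __init__(self, channels):
        super().__init__()
        self.adapter = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.adapter.weight)  # start as the identity mapping
        nn.init.zeros_(self.adapter.bias)

    def forward(self, x):
        return x + self.adapter(x)

# Usage: wrap a frozen backbone layer; only the adapter parameters are optimised.
frozen_layer = nn.Conv2d(64, 64, kernel_size=3, padding=1)
for p in frozen_layer.parameters():
    p.requires_grad = False
adapter = ResidualAdapter(64)
x = torch.randn(2, 64, 32, 32)
y = adapter(frozen_layer(x))
optimizer = torch.optim.SGD(adapter.parameters(), lr=1e-2)
```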
{ "cite_N": [ "@cite_19", "@cite_6" ], "mid": [ "2963211188", "2605043629" ], "abstract": [ "There is a growing interest in learning data representations that work well for many different types of problems and data. In this paper, we look in particular at the task of learning a single visual representation that can be successfully utilized in the analysis of very different types of images, from dog breeds to stop signs and digits. Inspired by recent work on learning networks that predict the parameters of another, we develop a tunable deep network architecture that, by means of adapter residual modules, can be steered on the fly to diverse visual domains. Our method achieves a high degree of parameter sharing while maintaining or even improving the accuracy of domain-specific representations. We also introduce the Visual Decathlon Challenge, a benchmark that evaluates the ability of representations to capture simultaneously ten very different visual domains and measures their ability to recognize well uniformly.", "Catastrophic forgetting is a problem of neural networks that loses the information of the first task after training the second task. Here, we propose a method, i.e. incremental moment matching (IMM), to resolve this problem. IMM incrementally matches the moment of the posterior distribution of the neural network which is trained on the first and the second task, respectively. To make the search space of posterior parameter smooth, the IMM procedure is complemented by various transfer learning techniques including weight transfer, L2-norm of the old and the new parameter, and a variant of dropout with the old parameter. We analyze our approach on a variety of datasets including the MNIST, CIFAR-10, Caltech-UCSD-Birds, and Lifelog datasets. The experimental results show that IMM achieves state-of-the-art performance by balancing the information between an old and a new network." ] }
1901.00326
2906748410
In this paper, we propose a novel method to incorporate partial evidence in the inference of deep convolutional neural networks. Contrary to the existing, top performing methods, which either iteratively modify the input of the network or exploit external label taxonomy to take the partial evidence into account, we add separate network modules ("Plugin Networks") to the intermediate layers of a pre-trained convolutional network. The goal of these modules is to incorporate additional signal, ie information about known labels, into the inference procedure and adjust the predicted output accordingly. Since the attached plugins have a simple structure, consisting of only fully connected layers, we drastically reduced the computational cost of training and inference. At the same time, the proposed architecture allows to propagate information about known labels directly to the intermediate layers to improve the final representation. Extensive evaluation of the proposed method confirms that our Plugin Networks outperform the state-of-the-art in a variety of tasks, including scene categorization, multi-label image annotation, and semantic segmentation.
Finally, most relevant to the work presented in this paper are two methods, proposed by Hu et al. @cite_15 and Wang et al. @cite_7 . Both of them address visual recognition tasks in the presence of partial evidence.
{ "cite_N": [ "@cite_15", "@cite_7" ], "mid": [ "2963513598", "2963175631" ], "abstract": [ "Images of scenes have various objects as well as abundant attributes, and diverse levels of visual categorization are possible. A natural image could be assigned with finegrained labels that describe major components, coarsegrained labels that depict high level abstraction, or a set of labels that reveal attributes. Such categorization at different concept layers can be modeled with label graphs encoding label information. In this paper, we exploit this rich information with a state-of-art deep learning framework, and propose a generic structured model that leverages diverse label relations to improve image classification performance. Our approach employs a novel stacked label prediction neural network, capturing both inter-level and intra-level label semantics. We evaluate our method on benchmark image datasets, and empirical results illustrate the efficacy of our model.", "We propose an inference procedure for deep convolutional neural networks (CNNs) when partial evidence is available. Our method consists of a general feedback-based propagation approach (feedback-prop) that boosts the prediction accuracy for an arbitrary set of unknown target labels when the values for a non-overlapping arbitrary set of target labels are known. We show that existing models trained in a multi-label or multi-task setting can readily take advantage of feedback-prop without any retraining or fine-tuning. Our feedback-prop inference procedure is general, simple, reliable, and works on different challenging visual recognition tasks. We present two variants of feedback-prop based on layer-wise and residual iterative updates. We experiment using several multi-task models and show that feedback-prop is effective in all of them. Our results unveil a previously unreported but interesting dynamic property of deep CNNs. We also present an associated technical approach that takes advantage of this property for inference under partial evidence in general visual recognition tasks." ] }
1901.00326
2906748410
In this paper, we propose a novel method to incorporate partial evidence in the inference of deep convolutional neural networks. Contrary to the existing, top performing methods, which either iteratively modify the input of the network or exploit external label taxonomy to take the partial evidence into account, we add separate network modules ("Plugin Networks") to the intermediate layers of a pre-trained convolutional network. The goal of these modules is to incorporate additional signal, ie information about known labels, into the inference procedure and adjust the predicted output accordingly. Since the attached plugins have a simple structure, consisting of only fully connected layers, we drastically reduced the computational cost of training and inference. At the same time, the proposed architecture allows to propagate information about known labels directly to the intermediate layers to improve the final representation. Extensive evaluation of the proposed method confirms that our Plugin Networks outperform the state-of-the-art in a variety of tasks, including scene categorization, multi-label image annotation, and semantic segmentation.
Hu et al. @cite_15 tackle this challenge by proposing a Structured Inference Neural Network (SINN). The SINN method is designed to discover the hierarchical structure of labels, but it can be used in the partial evidence setup if the labels at a given level of the hierarchy are clamped at inference time. However, since the SINN model uses a CNN and an LSTM to discover label relations, it has a large number of learnable parameters, which makes training difficult. To address this issue, the authors use positive and negative label correlations, inferred from WordNet relations, as prior knowledge. We compare our method with SINN and show that we achieve significantly better performance with a much simpler model.
{ "cite_N": [ "@cite_15" ], "mid": [ "2963513598" ], "abstract": [ "Images of scenes have various objects as well as abundant attributes, and diverse levels of visual categorization are possible. A natural image could be assigned with finegrained labels that describe major components, coarsegrained labels that depict high level abstraction, or a set of labels that reveal attributes. Such categorization at different concept layers can be modeled with label graphs encoding label information. In this paper, we exploit this rich information with a state-of-art deep learning framework, and propose a generic structured model that leverages diverse label relations to improve image classification performance. Our approach employs a novel stacked label prediction neural network, capturing both inter-level and intra-level label semantics. We evaluate our method on benchmark image datasets, and empirical results illustrate the efficacy of our model." ] }
1901.00326
2906748410
In this paper, we propose a novel method to incorporate partial evidence in the inference of deep convolutional neural networks. Contrary to the existing, top performing methods, which either iteratively modify the input of the network or exploit external label taxonomy to take the partial evidence into account, we add separate network modules ("Plugin Networks") to the intermediate layers of a pre-trained convolutional network. The goal of these modules is to incorporate additional signal, ie information about known labels, into the inference procedure and adjust the predicted output accordingly. Since the attached plugins have a simple structure, consisting of only fully connected layers, we drastically reduced the computational cost of training and inference. At the same time, the proposed architecture allows to propagate information about known labels directly to the intermediate layers to improve the final representation. Extensive evaluation of the proposed method confirms that our Plugin Networks outperform the state-of-the-art in a variety of tasks, including scene categorization, multi-label image annotation, and semantic segmentation.
FeedbackProp, proposed by Wang et al. @cite_7 , on the other hand, uses an iterative procedure applied at inference time. The idea is to modify network activations so as to maximize the probabilities of the known labels under the partial evidence. The method does not require re-training the base model. However, due to the iterative procedure introduced at inference time, it requires more computational effort, and it adds hyperparameters such as the number of iterations and the learning rate to the inference phase. Finally, in the case of FeedbackProp, the partial evidence labels can only be a subset of the labels that the base model can recognize, whereas our method can accept any kind of labels as partial evidence. In addition, our method does not introduce any additional computation or parameters at inference time. The comparison shows that our method outperforms FeedbackProp while being significantly faster at inference.
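In simplified form, FeedbackProp-style inference can be illustrated as gradient steps on an intermediate activation so that the network's outputs for the known labels match the given partial evidence; the snippet below is a generic sketch in which the layer split, loss, step size, and iteration count are placeholder assumptions rather than the exact procedure of @cite_7 .

```python
import torch
import torch.nn as nn

# Placeholder split of a pre-trained multi-label model into two frozen halves.
lower = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
upper = nn.Linear(128, 40)                      # 40 labels (placeholder)
for p in list(lower.parameters()) + list(upper.parameters()):
    p.requires_grad = False

x = torch.randn(1, 256)
known_idx = torch.tensor([2, 5])                # labels given as partial evidence
known_val = torch.tensor([1.0, 0.0])            # their known values

# Optimise the intermediate activation (not the weights) at inference time.
h = lower(x).detach().requires_grad_(True)
opt = torch.optim.SGD([h], lr=0.5)
bce = nn.BCEWithLogitsLoss()
for _ in range(20):                             # iteration count is a hyperparameter
    opt.zero_grad()
    loss = bce(upper(h)[0, known_idx], known_val)
    loss.backward()
    opt.step()

refined_predictions = torch.sigmoid(upper(h))   # updated predictions for all labels
```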
{ "cite_N": [ "@cite_7" ], "mid": [ "2963175631" ], "abstract": [ "We propose an inference procedure for deep convolutional neural networks (CNNs) when partial evidence is available. Our method consists of a general feedback-based propagation approach (feedback-prop) that boosts the prediction accuracy for an arbitrary set of unknown target labels when the values for a non-overlapping arbitrary set of target labels are known. We show that existing models trained in a multi-label or multi-task setting can readily take advantage of feedback-prop without any retraining or fine-tuning. Our feedback-prop inference procedure is general, simple, reliable, and works on different challenging visual recognition tasks. We present two variants of feedback-prop based on layer-wise and residual iterative updates. We experiment using several multi-task models and show that feedback-prop is effective in all of them. Our results unveil a previously unreported but interesting dynamic property of deep CNNs. We also present an associated technical approach that takes advantage of this property for inference under partial evidence in general visual recognition tasks." ] }
1901.00117
2907704766
Robust Policy Search is the problem of learning policies that do not degrade in performance when subject to unseen environment model parameters. It is particularly relevant for transferring policies learned in a simulation environment to the real world. Several existing approaches involve sampling large batches of trajectories which reflect the differences in various possible environments, and then selecting some subset of these to learn robust policies, such as the ones that result in the worst performance. We propose an active learning based framework, EffAcTS, to selectively choose model parameters for this purpose so as to collect only as much data as necessary to select such a subset. We apply this framework to an existing method, namely EPOpt, and experimentally validate the gains in sample efficiency and the performance of our approach on standard continuous control tasks. We also present a Multi-Task Learning perspective to the problem of Robust Policy Search, and draw connections from our proposed framework to existing work on Multi-Task Learning.
@cite_29 learn controllers with a specific functional form using trajectories sampled for parameters drawn from an ensemble, and optimize for average-case performance. @cite_28 propose EPOpt, which learns a Neural Network (NN) policy with a model-free DRL algorithm, but on simulated domains sampled from an ensemble of models. An adversarial approach to training is taken, which involves selectively exposing the model-free learner only to data from those sampled models on which the learner exhibits the worst performance. Even though this is a more sophisticated approach than the former and is demonstrated to achieve greater performance and robustness, the number of trajectories collected is still very large. @cite_17 also propose an approach that optimizes average-case performance, but additionally performs explicit system identification, and the estimated model parameters are fed to a NN policy as additional context information alongside the original observations. Again, the data requirements are quite large, both for policy learning and for system identification. Approaches related to learning from an ensemble of models have also been studied under Domain Randomization ( @cite_1 ).
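The adversarial selection step in EPOpt @cite_28 amounts to keeping, in each batch, only the trajectories obtained from the sampled model parameters with the lowest returns (roughly a worst-percentile objective); the sketch below shows only that selection step, with placeholder parameter sampling and return computation standing in for real simulation rollouts.

```python
import numpy as np

def sample_model_params(rng, n):
    """Placeholder: draw physical parameters (e.g. mass, friction) from an ensemble."""
    return rng.normal(loc=[1.0, 0.5], scale=[0.2, 0.1], size=(n, 2))

def rollout_return(params):
    """Placeholder for simulating the current policy in a model with these parameters."""
    return -np.sum((params - 1.0) ** 2)   # stand-in for an episode return

rng = np.random.default_rng(0)
params = sample_model_params(rng, n=100)
returns = np.array([rollout_return(p) for p in params])

# Keep only the worst epsilon-fraction of trajectories for the policy update.
epsilon = 0.1
cutoff = np.quantile(returns, epsilon)
worst_idx = np.where(returns <= cutoff)[0]
# A downstream (hypothetical) step would then run the policy-gradient update
# only on the trajectories indexed by worst_idx.
```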
{ "cite_N": [ "@cite_28", "@cite_29", "@cite_1", "@cite_17" ], "mid": [ "2964173023", "1966784014", "2605102758", "2963614114" ], "abstract": [ "Sample complexity and safety are major challenges when learning policies with reinforcement learning for real-world tasks, especially when the policies are represented using rich function approximators like deep neural networks. Model-based methods where the real-world target domain is approximated using a simulated source domain provide an avenue to tackle the above challenges by augmenting real data with simulated data. However, discrepancies between the simulated source domain and the target domain pose a challenge for simulated training. We introduce the EPOpt algorithm, which uses an ensemble of simulated source domains and a form of adversarial training to learn policies that are robust and generalize to a broad range of possible target domains, including to unmodeled effects. Further, the probability distribution over source domains in the ensemble can be adapted using data from the target domain and approximate Bayesian methods, to progressively make it a better approximation. Thus, learning on a model ensemble, along with source domain adaptation, provides the benefit of both robustness and learning.", "We introduce methods for optimizing physics-based walking controllers for robustness to uncertainty. Many unknown factors, such as external forces, control torques, and user control inputs, cannot be known in advance and must be treated as uncertain. These variables are represented with probability distributions, and a return function scores the desirability of a single motion. Controller optimization entails maximizing the expected value of the return, which is computed by Monte Carlo methods. We demonstrate examples with different sources of uncertainty and task constraints. Optimizing control strategies under uncertainty increases robustness and produces natural variations in style.", "Bridging the ‘reality gap’ that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.", "" ] }
1901.00117
2907704766
Robust Policy Search is the problem of learning policies that do not degrade in performance when subject to unseen environment model parameters. It is particularly relevant for transferring policies learned in a simulation environment to the real world. Several existing approaches involve sampling large batches of trajectories which reflect the differences in various possible environments, and then selecting some subset of these to learn robust policies, such as the ones that result in the worst performance. We propose an active learning based framework, EffAcTS, to selectively choose model parameters for this purpose so as to collect only as much data as necessary to select such a subset. We apply this framework to an existing method, namely EPOpt, and experimentally validate the gains in sample efficiency and the performance of our approach on standard continuous control tasks. We also present a Multi-Task Learning perspective to the problem of Robust Policy Search, and draw connections from our proposed framework to existing work on Multi-Task Learning.
A recent work that learns from an ensemble of models is ( @cite_13 ), but the ensemble there consists of learned DNN dynamics models for use in model-based RL, rather than being induced by changing physical properties of the environment. A similar ensemble, generated by perturbing an already learned model, is used for planning in ( @cite_3 ). This work also does not deal with model uncertainties that have a physical meaning.
{ "cite_N": [ "@cite_13", "@cite_3" ], "mid": [ "2963846183", "2205975260" ], "abstract": [ "Model-free reinforcement learning (RL) methods are succeeding in a growing number of tasks, aided by recent advances in deep learning. They tend to suffer from high sample complexity, however, which hinders their use in real-world domains. Alternatively, model-based reinforcement learning promises to reduce sample complexity, but tends to require careful tuning and to date have succeeded mainly in restrictive domains where simple models are sufficient for learning. In this paper, we analyze the behavior of vanilla model-based reinforcement learning methods when deep neural networks are used to learn both the model and the policy, and show that the learned policy tends to exploit regions where insufficient data is available for the model to be learned, causing instability in training. To overcome this issue, we propose to use an ensemble of models to maintain the model uncertainty and regularize the learning process. We further show that the use of likelihood ratio derivatives yields much more stable learning. Altogether, our approach Model-Ensemble Trust-Region Policy Optimization (ME-TRPO) significantly reduces the sample complexity compared to model-free deep RL methods on challenging continuous control benchmark tasks", "While a lot of progress has recently been made in dynamic motion planning for humanoid robots, much of this work has remained limited to simulation. Here we show that executing the resulting trajectories on a Darwin-OP robot, even with local feedback derived from the optimizer, does not result in stable movements. We then develop a new trajectory optimization method, adapting our earlier CIO algorithm to plan through ensembles of perturbed models. This makes the plan robust to model uncertainty, and leads to successful execution on the robot. We obtain a high rate of task completion without trajectory divergence (falling) in dynamic forward walking, sideways walking, and turning, and a similarly high success rate in getting up from the floor (the robot broke before we could quantify the latter). Even though the planning is still done offline, the present work represents a significant step towards automating the tedious scripting of complex movements." ] }
1901.00117
2907704766
Robust Policy Search is the problem of learning policies that do not degrade in performance when subject to unseen environment model parameters. It is particularly relevant for transferring policies learned in a simulation environment to the real world. Several existing approaches involve sampling large batches of trajectories which reflect the differences in various possible environments, and then selecting some subset of these to learn robust policies, such as the ones that result in the worst performance. We propose an active learning based framework, EffAcTS, to selectively choose model parameters for this purpose so as to collect only as much data as necessary to select such a subset. We apply this framework to an existing method, namely EPOpt, and experimentally validate the gains in sample efficiency and the performance of our approach on standard continuous control tasks. We also present a Multi-Task Learning perspective to the problem of Robust Policy Search, and draw connections from our proposed framework to existing work on Multi-Task Learning.
Although EPOpt uses only an appropriate subset of models to train on, none of the above approaches consider ways to sample trajectories only as necessary. Our proposed framework employs active learning to decide, using data from only a few model parameters, which models the agent requires more training on. Active sampling approaches have also been explored for task selection in Multi-Task learning by @cite_27 , a viewpoint we discuss in more detail in section .
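To make the sample-efficiency argument concrete, here is a hedged toy sketch of the underlying idea, active selection of the worst epsilon-percentile of model parameters; it is not the EffAcTS or EPOpt implementation, and the return landscape, candidate grid and quadratic surrogate are all placeholder assumptions:

```python
# Hedged sketch only -- not the EffAcTS/EPOpt implementation.
# Idea: probe the current policy on a few actively chosen model parameters,
# fit a cheap surrogate of "return vs. parameter", and collect further
# trajectories only on the worst epsilon-percentile of parameters.
import numpy as np

rng = np.random.default_rng(0)

def rollout_return(param):
    """Stand-in for evaluating the current policy in a simulator whose
    physical parameter (e.g. a mass or friction coefficient) is `param`."""
    return -((param - 1.5) ** 2) + 0.1 * rng.normal()  # toy return landscape

# 1. Probe the policy on a small, spread-out set of candidate parameters.
candidates = np.linspace(0.5, 2.5, 50)
probe_params = rng.choice(candidates, size=8, replace=False)
probe_returns = np.array([rollout_return(p) for p in probe_params])

# 2. Fit a cheap surrogate (here a quadratic fit) of return vs. parameter.
coeffs = np.polyfit(probe_params, probe_returns, deg=2)
predicted = np.polyval(coeffs, candidates)

# 3. Keep only the worst epsilon-percentile of parameters for further
#    trajectory collection, mirroring EPOpt-style worst-case selection.
epsilon = 0.2
threshold = np.quantile(predicted, epsilon)
hard_params = candidates[predicted <= threshold]
print("collect more trajectories on:", np.round(hard_params, 2))
```

In an EPOpt-like loop, the policy would then be updated only on trajectories collected from these hard parameter settings, and the probe-fit-select cycle repeated.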
{ "cite_N": [ "@cite_27" ], "mid": [ "2963488722" ], "abstract": [ "One of the long-standing challenges in Artificial Intelligence for learning goal-directed behavior is to build a single agent which can solve multiple tasks. Recent progress in multi-task learning for goal-directed sequential problems has been in the form of distillation based learning wherein a student network learns from multiple task-specific expert networks by mimicking the task-specific policies of the expert networks. While such approaches offer a promising solution to the multi-task learning problem, they require supervision from large expert networks which require extensive data and computation time for training. In this work, we propose an efficient multi-task learning framework which solves multiple goal-directed tasks in an on-line setup without the need for expert supervision. Our work uses active learning principles to achieve multi-task learning by sampling the harder tasks more than the easier ones. We propose three distinct models under our active sampling framework. An adaptive method with extremely competitive multi-tasking performance. A UCB-based meta-learner which casts the problem of picking the next task to train on as a multi-armed bandit problem. A meta-learning method that casts the next-task picking problem as a full Reinforcement Learning problem and uses actor-critic methods for optimizing the multi-tasking performance directly. We demonstrate results in the Atari 2600 domain on seven multi-tasking instances: three 6-task instances, one 8-task instance, two 12-task instances and one 21-task instance." ] }
1901.00148
2907715846
Existing pose estimation approaches fall into two categories: single-stage and multi-stage methods. While multi-stage methods are seemingly more suited for the task, their performance in current practice is not as good as single-stage methods. This work studies this issue. We argue that the current multi-stage methods' unsatisfactory performance comes from the insufficiency in various design choices. We propose several improvements, including the single-stage module design, cross stage feature aggregation, and coarse-to-fine supervision. The resulting method establishes the new state-of-the-art on both MS COCO and MPII Human Pose dataset, justifying the effectiveness of a multi-stage architecture. The source code is publicly available for further research.
Single-Stage Approach Single-stage methods @cite_34 @cite_1 @cite_11 @cite_27 are based on backbone networks that are well tuned on image classification tasks, such as VGG @cite_19 or ResNet @cite_10 . Papandreou et al. @cite_34 design a network that generates heat maps as well as their relative offsets to obtain the final keypoint predictions. He et al. @cite_1 propose Mask R-CNN to first generate person box proposals and then apply single-person pose estimation. Chen et al. @cite_11 , winners of the COCO 2017 keypoint challenge, leverage a Cascaded Pyramid Network (CPN) to refine the pose estimation process; the proposed online hard keypoint mining (OHKM) loss is used to deal with hard keypoints. Xiao et al. @cite_27 provide a baseline method that is simple and effective for the pose estimation task. In spite of their good performance, these methods have encountered a common bottleneck: simply increasing the model capacity does not give rise to much improvement in performance. This is illustrated in both Figure and Table .
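To make the heat-map-plus-offset description above concrete, a minimal decoding sketch follows; it is our own toy code rather than any cited implementation, and the array layout, stride value and random inputs are assumptions:

```python
# Minimal decoding sketch in the spirit of heat-map-plus-offset prediction;
# toy code with assumed array layout and stride, not the cited authors' code.
import numpy as np

def decode_keypoints(heatmaps, offsets, stride=4):
    """heatmaps: (K, H, W) keypoint score maps.
    offsets:  (K, 2, H, W) per-cell (dx, dy) refinements in output-map units.
    Returns an array of shape (K, 2) with keypoint (x, y) in input pixels."""
    K, H, W = heatmaps.shape
    coords = np.zeros((K, 2))
    for k in range(K):
        y, x = np.unravel_index(np.argmax(heatmaps[k]), (H, W))  # coarse peak
        dx, dy = offsets[k, 0, y, x], offsets[k, 1, y, x]
        # refine the coarse grid location with the predicted offset, then map
        # back to input resolution through the network stride
        coords[k] = ((x + dx) * stride, (y + dy) * stride)
    return coords

# toy usage with random maps standing in for network outputs
hm = np.random.rand(17, 64, 48)
off = np.random.randn(17, 2, 64, 48) * 0.25
print(decode_keypoints(hm, off).shape)  # (17, 2)
```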
{ "cite_N": [ "@cite_1", "@cite_19", "@cite_27", "@cite_34", "@cite_10", "@cite_11" ], "mid": [ "", "1686810756", "2796779902", "", "2194775991", "2769331938" ], "abstract": [ "", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "There has been significant progress on pose estimation and increasing interests on pose tracking in recent years. At the same time, the overall algorithm and system complexity increases as well, making the algorithm analysis and evaluation more difficult. This work provides baseline methods that are surprisingly simple and effective, thus helpful for inspiring and evaluating new ideas for the field. State-of-the-art results are achieved on challenging benchmarks. The code will be released.", "", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "The topic of multi-person pose estimation has been largely improved recently, especially with the development of convolutional neural network. However, there still exist a lot of challenging cases, such as occluded keypoints, invisible keypoints and complex background, which cannot be well addressed. In this paper, we present a novel network structure called Cascaded Pyramid Network (CPN) which targets to relieve the problem from these \"hard\" keypoints. More specifically, our algorithm includes two stages: GlobalNet and RefineNet. 
GlobalNet is a feature pyramid network which can successfully localize the \"simple\" keypoints like eyes and hands but may fail to precisely recognize the occluded or invisible keypoints. Our RefineNet tries explicitly handling the \"hard\" keypoints by integrating all levels of feature representations from the GlobalNet together with an online hard keypoint mining loss. In general, to address the multi-person pose estimation problem, a top-down pipeline is adopted to first generate a set of human bounding boxes based on a detector, followed by our CPN for keypoint localization in each human bounding box. Based on the proposed algorithm, we achieve state-of-art results on the COCO keypoint benchmark, with average precision at 73.0 on the COCO test-dev dataset and 72.1 on the COCO test-challenge dataset, which is a 19 relative improvement compared with 60.5 from the COCO 2016 keypoint challenge.Code (this https URL) and the detection results are publicly available for further research." ] }
1901.00148
2907715846
Existing pose estimation approaches fall into two categories: single-stage and multi-stage methods. While multi-stage methods are seemingly more suited for the task, their performance in current practice is not as good as single-stage methods. This work studies this issue. We argue that the current multi-stage methods' unsatisfactory performance comes from the insufficiency in various design choices. We propose several improvements, including the single-stage module design, cross stage feature aggregation, and coarse-to-fine supervision. The resulting method establishes the new state-of-the-art on both MS COCO and MPII Human Pose dataset, justifying the effectiveness of a multi-stage architecture. The source code is publicly available for further research.
Multi-Stage Approach Multi-stage methods @cite_39 @cite_24 @cite_3 @cite_36 @cite_16 @cite_32 aim to produce increasingly refined estimates. They can be bottom-up or top-down. In contrast, single-stage methods are all top-down.
{ "cite_N": [ "@cite_36", "@cite_32", "@cite_3", "@cite_39", "@cite_24", "@cite_16" ], "mid": [ "2307770531", "2795262365", "2555751471", "2964304707", "2559085405", "2742737904" ], "abstract": [ "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "We develop a robust multi-scale structure-aware neural network for human pose estimation. This method improves the recent deep conv-deconv hourglass models with four key improvements: (1) multi-scale supervision to strengthen contextual feature learning in matching body keypoints by combining feature heatmaps across scales, (2) multi-scale regression network at the end to globally optimize the structural matching of the multi-scale features, (3) structure-aware loss used in the intermediate supervision and at the regression to improve the matching of keypoints and respective neighbors to infer a higher-order matching configurations, and (4) a keypoint masking training scheme that can effectively fine-tune our network to robustly localize occluded keypoints via adjacent matches. Our method can effectively improve state-of-the-art pose estimation methods that suffer from difficulties in scale varieties, occlusions, and complex multi-person scenarios. This multi-scale supervision tightly integrates with the regression network to effectively (i) localize keypoints using the ensemble of multi-scale features, and (ii) infer global pose configuration by maximizing structural consistencies across multiple keypoints and scales. The keypoint masking training enhances these advantages to focus learning on hard occlusion samples. Our method achieves the leading position in the MPII challenge leaderboard among the state-of-the-art methods.", "We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. A number of computer vision problems can be framed in this manner including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of detections is achieved with multi-stage pipelines, instead we propose an approach that teaches a network to simultaneously output detections and group assignments. This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to multi-person pose estimation and report state-of-the-art performance on the MPII and MS-COCO datasets.", "Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. 
We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.", "We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency.", "Articulated human pose estimation is a fundamental yet challenging task in computer vision. The difficulty is particularly pronounced in scale variations of human body parts when camera view changes or severe foreshortening happens. Although pyramid methods are widely used to handle scale changes at inference time, learning feature pyramids in deep convolutional neural networks (DCNNs) is still not well explored. In this work, we design a Pyramid Residual Module (PRMs) to enhance the invariance in scales of DCNNs. Given input features, the PRMs learn convolutional filters on various scales of input features, which are obtained with different subsampling ratios in a multibranch network. Moreover, we observe that it is inappropriate to adopt existing methods to initialize the weights of multi-branch networks, which achieve superior performance than plain networks in many tasks recently. Therefore, we provide theoretic derivation to extend the current weight initialization scheme to multi-branch network structures. We investigate our method on two standard benchmarks for human pose estimation. Our approach obtains state-of-the-art results on both benchmarks. Code is available at https: github.com bearpaw PyraNet." ] }
1901.00148
2907715846
Existing pose estimation approaches fall into two categories: single-stage and multi-stage methods. While multi-stage methods are seemingly more suited for the task, their performance in current practice is not as good as single-stage methods. This work studies this issue. We argue that the current multi-stage methods' unsatisfactory performance comes from the insufficiency in various design choices. We propose several improvements, including the single-stage module design, cross stage feature aggregation, and coarse-to-fine supervision. The resulting method establishes the new state-of-the-art on both MS COCO and MPII Human Pose dataset, justifying the effectiveness of a multi-stage architecture. The source code is publicly available for further research.
Bottom-up methods first predict individual joints in the image and then associate these joints into human instances. Cao et al. @cite_24 employ a VGG-19 @cite_19 network as a feature encoder; the output features then go through a multi-stage network that produces heat maps and keypoint associations. Newell et al. @cite_3 propose a network that simultaneously outputs keypoints and group assignments.
{ "cite_N": [ "@cite_24", "@cite_19", "@cite_3" ], "mid": [ "2559085405", "1686810756", "2555751471" ], "abstract": [ "We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "We introduce associative embedding, a novel method for supervising convolutional neural networks for the task of detection and grouping. A number of computer vision problems can be framed in this manner including multi-person pose estimation, instance segmentation, and multi-object tracking. Usually the grouping of detections is achieved with multi-stage pipelines, instead we propose an approach that teaches a network to simultaneously output detections and group assignments. This technique can be easily integrated into any state-of-the-art network architecture that produces pixel-wise predictions. We show how to apply this method to multi-person pose estimation and report state-of-the-art performance on the MPII and MS-COCO datasets." ] }
1901.00148
2907715846
Existing pose estimation approaches fall into two categories: single-stage and multi-stage methods. While multi-stage methods are seemingly more suited for the task, their performance in current practice is not as good as single-stage methods. This work studies this issue. We argue that the current multi-stage methods' unsatisfactory performance comes from the insufficiency in various design choices. We propose several improvements, including the single-stage module design, cross stage feature aggregation, and coarse-to-fine supervision. The resulting method establishes the new state-of-the-art on both MS COCO and MPII Human Pose dataset, justifying the effectiveness of a multi-stage architecture. The source code is publicly available for further research.
Top-down approaches first locate persons using detectors @cite_5 @cite_26 @cite_37 , and a single-person pose estimator is then used to predict the keypoint locations. Wei et al. @cite_39 employ deep convolutional neural networks as the feature encoder to estimate human pose; this work designs a sequential architecture composed of convolutional networks to implicitly model long-range dependencies between joints. The Hourglass network @cite_36 applies intermediate supervision to repeated down-sampling and up-sampling processing for the pose estimation task. @cite_16 adopts Hourglass and further designs a Pyramid Residual Module (PRMs) to enhance invariance across scales. Many recent works @cite_32 @cite_20 @cite_38 @cite_31 are based on Hourglass and propose various improvements. While these multi-stage methods work well on MPII @cite_4 , they are not competitive on the more challenging COCO benchmark @cite_28 . For example, the winners of the COCO keypoint challenge in 2016 @cite_34 and 2017 @cite_11 are all single-stage based, as is the recent simple baseline work @cite_27 . In this work, we propose several modifications to the existing multi-stage architecture and show that the multi-stage architecture is better.
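The top-down pipeline described above can be summarized in a short schematic sketch; `detector` and `pose_net` are placeholder callables standing in for a person detector and a single-person pose estimator, not any particular cited model:

```python
# Schematic top-down pipeline: person detection followed by single-person
# pose estimation on each box. `detector` and `pose_net` are placeholder
# callables, not any particular cited model.
import numpy as np

def crop(image, box):
    x1, y1, x2, y2 = [int(v) for v in box]
    return image[y1:y2, x1:x2]

def top_down_pose(image, detector, pose_net, score_thresh=0.5):
    poses = []
    for box, score in detector(image):             # person proposals
        if score < score_thresh:
            continue
        person = crop(image, box)                  # crop one person instance
        keypoints = pose_net(person)               # (K, 2) in crop coordinates
        keypoints = keypoints + np.array(box[:2])  # back to image coordinates
        poses.append(keypoints)
    return poses

# toy stand-ins so the sketch runs end to end
dummy_detector = lambda img: [((10, 10, 60, 110), 0.9)]
dummy_pose_net = lambda p: np.random.rand(17, 2) * np.array(p.shape[:2][::-1])
print(len(top_down_pose(np.zeros((128, 128, 3)), dummy_detector, dummy_pose_net)))
```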
{ "cite_N": [ "@cite_38", "@cite_37", "@cite_26", "@cite_4", "@cite_36", "@cite_28", "@cite_32", "@cite_34", "@cite_39", "@cite_27", "@cite_5", "@cite_31", "@cite_16", "@cite_20", "@cite_11" ], "mid": [ "", "2797527871", "2565639579", "2080873731", "2307770531", "1861492603", "2795262365", "", "2964304707", "2796779902", "2613718673", "", "2742737904", "", "2769331938" ], "abstract": [ "", "Recent CNN based object detectors, no matter one-stage methods like YOLO, SSD, and RetinaNe or two-stage detectors like Faster R-CNN, R-FCN and FPN are usually trying to directly finetune from ImageNet pre-trained models designed for image classification. There has been little work discussing on the backbone feature extractor specifically designed for the object detection. More importantly, there are several differences between the tasks of image classification and object detection. 1. Recent object detectors like FPN and RetinaNet usually involve extra stages against the task of image classification to handle the objects with various scales. 2. Object detection not only needs to recognize the category of the object instances but also spatially locate the position. Large downsampling factor brings large valid receptive field, which is good for image classification but compromises the object location ability. Due to the gap between the image classification and object detection, we propose DetNet in this paper, which is a novel backbone network specifically designed for object detection. Moreover, DetNet includes the extra stages against traditional backbone network for image classification, while maintains high spatial resolution in deeper layers. Without any bells and whistles, state-of-the-art results have been obtained for both object detection and instance segmentation on the MSCOCO benchmark based on our DetNet (4.8G FLOPs) backbone. The code will be released for the reproduction.", "Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But pyramid representations have been avoided in recent object detectors that are based on deep convolutional networks, partially because they are slow to compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.", "Human pose estimation has made significant progress during the last years. However current datasets are limited in their coverage of the overall pose estimation challenges. Still these serve as the common sources to evaluate, train and compare different models on. In this paper we introduce a novel benchmark \"MPII Human Pose\" that makes a significant advance in terms of diversity and difficulty, a contribution that we feel is required for future developments in human body models. 
This comprehensive dataset was collected using an established taxonomy of over 800 human activities [1]. The collected images cover a wider variety of human activities than previous datasets including various recreational, occupational and householding activities, and capture people from a wider range of viewpoints. We provide a rich set of labels including positions of body joints, full 3D torso and head orientation, occlusion labels for joints and body parts, and activity labels. For each image we provide adjacent video frames to facilitate the use of motion information. Given these rich annotations we perform a detailed analysis of leading human pose estimation approaches and gaining insights for the success and failures of these methods.", "This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a “stacked hourglass” network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.", "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.", "We develop a robust multi-scale structure-aware neural network for human pose estimation. This method improves the recent deep conv-deconv hourglass models with four key improvements: (1) multi-scale supervision to strengthen contextual feature learning in matching body keypoints by combining feature heatmaps across scales, (2) multi-scale regression network at the end to globally optimize the structural matching of the multi-scale features, (3) structure-aware loss used in the intermediate supervision and at the regression to improve the matching of keypoints and respective neighbors to infer a higher-order matching configurations, and (4) a keypoint masking training scheme that can effectively fine-tune our network to robustly localize occluded keypoints via adjacent matches. Our method can effectively improve state-of-the-art pose estimation methods that suffer from difficulties in scale varieties, occlusions, and complex multi-person scenarios. 
This multi-scale supervision tightly integrates with the regression network to effectively (i) localize keypoints using the ensemble of multi-scale features, and (ii) infer global pose configuration by maximizing structural consistencies across multiple keypoints and scales. The keypoint masking training enhances these advantages to focus learning on hard occlusion samples. Our method achieves the leading position in the MPII challenge leaderboard among the state-of-the-art methods.", "", "Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.", "There has been significant progress on pose estimation and increasing interests on pose tracking in recent years. At the same time, the overall algorithm and system complexity increases as well, making the algorithm analysis and evaluation more difficult. This work provides baseline methods that are surprisingly simple and effective, thus helpful for inspiring and evaluating new ideas for the field. State-of-the-art results are achieved on challenging benchmarks. The code will be released.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.", "", "Articulated human pose estimation is a fundamental yet challenging task in computer vision. The difficulty is particularly pronounced in scale variations of human body parts when camera view changes or severe foreshortening happens. 
Although pyramid methods are widely used to handle scale changes at inference time, learning feature pyramids in deep convolutional neural networks (DCNNs) is still not well explored. In this work, we design a Pyramid Residual Module (PRMs) to enhance the invariance in scales of DCNNs. Given input features, the PRMs learn convolutional filters on various scales of input features, which are obtained with different subsampling ratios in a multibranch network. Moreover, we observe that it is inappropriate to adopt existing methods to initialize the weights of multi-branch networks, which achieve superior performance than plain networks in many tasks recently. Therefore, we provide theoretic derivation to extend the current weight initialization scheme to multi-branch network structures. We investigate our method on two standard benchmarks for human pose estimation. Our approach obtains state-of-the-art results on both benchmarks. Code is available at https: github.com bearpaw PyraNet.", "", "The topic of multi-person pose estimation has been largely improved recently, especially with the development of convolutional neural network. However, there still exist a lot of challenging cases, such as occluded keypoints, invisible keypoints and complex background, which cannot be well addressed. In this paper, we present a novel network structure called Cascaded Pyramid Network (CPN) which targets to relieve the problem from these \"hard\" keypoints. More specifically, our algorithm includes two stages: GlobalNet and RefineNet. GlobalNet is a feature pyramid network which can successfully localize the \"simple\" keypoints like eyes and hands but may fail to precisely recognize the occluded or invisible keypoints. Our RefineNet tries explicitly handling the \"hard\" keypoints by integrating all levels of feature representations from the GlobalNet together with an online hard keypoint mining loss. In general, to address the multi-person pose estimation problem, a top-down pipeline is adopted to first generate a set of human bounding boxes based on a detector, followed by our CPN for keypoint localization in each human bounding box. Based on the proposed algorithm, we achieve state-of-art results on the COCO keypoint benchmark, with average precision at 73.0 on the COCO test-dev dataset and 72.1 on the COCO test-challenge dataset, which is a 19 relative improvement compared with 60.5 from the COCO 2016 keypoint challenge.Code (this https URL) and the detection results are publicly available for further research." ] }
1901.00282
2907361665
In the presence of large sets of labeled data, Deep Learning (DL) has accomplished extraordinary triumphs in the avenue of computer vision, particularly in object classification and recognition tasks. However, DL cannot always perform well when the training and testing images come from different distributions or in the presence of domain shift between training and testing images. DL models also suffer in the absence of labeled input data. Domain adaptation (DA) methods have been proposed to make up for the poor performance due to domain shift. In this paper, we present a new unsupervised deep domain adaptation method based on the alignment of second-order statistics (covariances) as well as the maximum mean discrepancy of the source and target data with a two-stream Convolutional Neural Network (CNN). We demonstrate the ability of the proposed approach to achieve state-of-the-art performance for image classification on three benchmark domain adaptation datasets: Office-31 [27], Office-Home [37] and Office-Caltech [8].
Many domain adaptation methods @cite_28 @cite_17 @cite_25 @cite_37 @cite_0 @cite_10 @cite_1 @cite_31 have been proposed in recent years to address the problem of domain bias. These methods fall into two main categories: conventional domain adaptation and deep domain adaptation. Conventional domain adaptation methods build their models in two stages, feature extraction and classification: in the first stage features are extracted, and in the second stage a classifier is trained to classify the objects. However, the performance of these DA methods is not satisfactory.
{ "cite_N": [ "@cite_37", "@cite_28", "@cite_1", "@cite_0", "@cite_31", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "2963403405", "2963767194", "2214409633", "2963864946", "2964288524", "2962687275", "2964057616", "2963993484" ], "abstract": [ "We propose a general framework for unsupervised domain adaptation, which allows deep neural networks trained on a source domain to be tested on a different target domain without requiring any training annotations in the target domain. This is achieved by adding extra networks and losses that help regularize the features extracted by the backbone encoder network. To this end we propose the novel use of the recently proposed unpaired image-to-image translation framework to constrain the features extracted by the encoder network. Specifically, we require that the features extracted are able to reconstruct the images in both domains. In addition we require that the distribution of features extracted from images in the two domains are indistinguishable. Many recent works can be seen as specific cases of our general framework. We apply our method for domain adaptation between MNIST, USPS, and SVHN datasets, and Amazon, Webcam and DSLR Office datasets in classification tasks, and also between GTA5 and Cityscapes datasets for a segmentation task. We demonstrate state of the art performance on each of these datasets.", "Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on a facial attribute transfer and a facial expression synthesis tasks.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.", "Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases exploiting single-source DA methods for learning target classifiers may lead to sub-optimal, if not poor, results. 
In addition, in many applications it is difficult to manually provide the domain labels for all source data points, i.e. latent domains should be automatically discovered. This paper introduces a novel Convolutional Neural Network (CNN) architecture which (i) automatically discovers latent domains in visual datasets and (ii) exploits this information to learn robust target classifiers. Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We test our approach on publicly-available datasets, showing that it outperforms state-of-the-art multi-source DA methods by a large margin.", "Deep neural networks are able to learn powerful representations from large quantities of labeled input data, however they cannot always generalize well across changes in input distributions. Domain adaptation algorithms have been proposed to compensate for the degradation in performance due to domain shift. In this paper, we address the case when the target domain is unlabeled, requiring unsupervised adaptation. CORAL [18] is a simple unsupervised domain adaptation method that aligns the second-order statistics of the source and target distributions with a linear transformation. Here, we extend CORAL to learn a nonlinear transformation that aligns correlations of layer activations in deep neural networks (Deep CORAL). Experiments on standard benchmark datasets show state-of-the-art performance. Our code is available at: https: github.com VisionLearningGroup CORAL.", "In this work, we present a method for unsupervised domain adaptation. Many adversarial learning methods train domain classifier networks to distinguish the features as either a source or target and train a feature generator network to mimic the discriminator. Two problems exist with these methods. First, the domain classifier only tries to distinguish the features as a source or target and thus does not consider task-specific decision boundaries between classes. Therefore, a trained generator can generate ambiguous features near class boundaries. Second, these methods aim to completely match the feature distributions between different domains, which is difficult because of each domain's characteristics. To solve these problems, we introduce a new approach that attempts to align distributions of source and target by utilizing the task-specific decision boundaries. We propose to maximize the discrepancy between two classifiers' outputs to detect target samples that are far from the support of the source. A feature generator learns to generate target features near the support to minimize the discrepancy. Our method outperforms other methods on several datasets of image classification and semantic segmentation. The codes are available at https: github.com mil-tokyo MCD_DA", "Recent works showed that Generative Adversarial Networks (GANs) can be successfully applied in unsupervised domain adaptation, where, given a labeled source dataset and an unlabeled target dataset, the goal is to train powerful classifiers for the target samples. In particular, it was shown that a GAN objective function can be used to learn target features indistinguishable from the source ones. 
In this work, we extend this framework by (i) forcing the learned feature extractor to be domain-invariant, and (ii) training it through data augmentation in the feature space, namely performing feature augmentation. While data augmentation in the image space is a well established technique in deep learning, feature augmentation has not yet received the same level of attention. We accomplish it by means of a feature generator trained by playing the GAN minimax game against source features. Results show that both enforcing domain-invariance and performing feature augmentation lead to superior or comparable performance to state-of-the-art results in several unsupervised domain adaptation benchmarks.", "The objective of unsupervised domain adaptation is to leverage features from a labeled source domain and learn a classifier for an unlabeled target domain, with a similar but different data distribution. Most deep learning approaches to domain adaptation consist of two steps: (i) learn features that preserve a low risk on labeled samples (source domain) and (ii) make the features from both domains to be as indistinguishable as possible, so that a classifier trained on the source can also be applied on the target domain. In general, the classifiers in step (i) consist of fully-connected layers applied directly on the indistinguishable features learned in (ii). In this paper, we propose a different way to do the classification, using similarity learning. The proposed method learns a pairwise similarity function in which classification can be performed by computing similarity between prototype representations of each category. The domain-invariant features and the categorical prototype representations are learned jointly and in an end-to-end fashion. At inference time, images from the target domain are compared to the prototypes and the label associated with the one that best matches the image is outputed. The approach is simple, scalable and effective. We show that our model achieves state-of-the-art performance in different unsupervised domain adaptation scenarios." ] }
1901.00282
2907361665
In the presence of large sets of labeled data, Deep Learning (DL) has accomplished extraordinary triumphs in the avenue of computer vision, particularly in object classification and recognition tasks. However, DL cannot always perform well when the training and testing images come from different distributions or in the presence of domain shift between training and testing images. DL models also suffer in the absence of labeled input data. Domain adaptation (DA) methods have been proposed to make up for the poor performance due to domain shift. In this paper, we present a new unsupervised deep domain adaptation method based on the alignment of second-order statistics (covariances) as well as the maximum mean discrepancy of the source and target data with a two-stream Convolutional Neural Network (CNN). We demonstrate the ability of the proposed approach to achieve state-of-the-art performance for image classification on three benchmark domain adaptation datasets: Office-31 [27], Office-Home [37] and Office-Caltech [8].
Features extracted by deep neural networks, even without any adaptation technique, outperform conventional DA methods by a large margin. In particular, the results achieved with Deep Convolutional Activation Features (DeCAF) @cite_20 , without applying any adaptation to the target data, are remarkably better than those obtained with conventional domain adaptation methods, because DNNs extract more robust features through nonlinear transformations. As a result, deep neural network based domain adaptation methods are becoming increasingly popular.
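The source-only deep-feature baseline alluded to here is simple to spell out. In the hedged sketch below, random arrays stand in for fixed activations of an ImageNet-pretrained CNN (as with DeCAF), and a linear classifier trained on the labeled source features is applied unchanged to the target domain:

```python
# Sketch of the "deep features without adaptation" baseline: train a simple
# classifier on fixed source-domain CNN activations and apply it directly to
# the target domain. The random arrays below are stand-ins for activations of
# a network pretrained on ImageNet (as with DeCAF).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 256                               # feature dimension (e.g. an fc layer)
Xs = rng.normal(size=(200, d))        # source-domain deep features
ys = rng.integers(0, 10, size=200)    # source labels (10 object classes)
Xt = rng.normal(size=(100, d)) + 0.3  # target features (shifted domain)
yt = rng.integers(0, 10, size=100)    # target labels, used only to evaluate

clf = LogisticRegression(max_iter=1000).fit(Xs, ys)
print("source-only accuracy on target:", clf.score(Xt, yt))
```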
{ "cite_N": [ "@cite_20" ], "mid": [ "2155541015" ], "abstract": [ "We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be repurposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms." ] }
1901.00282
2907361665
In the presence of large sets of labeled data, Deep Learning (DL) has accomplished extraordinary triumphs in the avenue of computer vision, particularly in object classification and recognition tasks. However, DL cannot always perform well when the training and testing images come from different distributions or in the presence of domain shift between training and testing images. DL models also suffer in the absence of labeled input data. Domain adaptation (DA) methods have been proposed to make up for the poor performance due to domain shift. In this paper, we present a new unsupervised deep domain adaptation method based on the alignment of second-order statistics (covariances) as well as the maximum mean discrepancy of the source and target data with a two-stream Convolutional Neural Network (CNN). We demonstrate the ability of the proposed approach to achieve state-of-the-art performance for image classification on three benchmark domain adaptation datasets: Office-31 [27], Office-Home [37] and Office-Caltech [8].
MMD is a popular metric for measuring the discrepancy between the distributions of source and target samples. @cite_32 proposed the Deep Domain Confusion (DDC) domain adaptation framework, based on a confusion layer that measures the discrepancy between source and target data. In @cite_1 , this work is extended by introducing a soft label distribution matching loss. @cite_4 proposed the Deep Adaptation Network (DAN), which integrates MMDs defined over several layers, including the soft prediction layer. This idea was further improved by introducing residual transfer networks @cite_16 and Joint Adaptation Networks @cite_34 . @cite_30 proposed a new Deep Hashing Network for unsupervised domain adaptation, in which hash codes are used to address the domain adaptation issue.
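For reference, a minimal numpy version of the MMD term these methods build on is sketched below; a single Gaussian kernel and random stand-in activations are assumptions made for brevity:

```python
# Minimal numpy version of the MMD penalty used by DDC/DAN-style methods.
# A single Gaussian kernel is assumed for simplicity (DAN uses multiple
# kernels); the arrays are random stand-ins for layer activations.
import numpy as np

def mmd2(Xs, Xt, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD between (n, d) and (m, d)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2.0 * k(Xs, Xt).mean()

rng = np.random.default_rng(0)
Xs = rng.normal(size=(64, 128))           # source-batch activations of one layer
Xt = rng.normal(loc=0.5, size=(64, 128))  # target-batch activations (shifted)
print("MMD^2:", mmd2(Xs, Xt))
# In DDC/DAN-style training this term is added to the classification loss so
# that the learned features have matching source and target distributions.
```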
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_1", "@cite_32", "@cite_16", "@cite_34" ], "mid": [ "2627183927", "2159291411", "2214409633", "1565327149", "2279034837", "2964278684" ], "abstract": [ "In recent years, deep neural networks have emerged as a dominant machine learning tool for a wide variety of application domains. However, training a deep neural network requires a large amount of labeled data, which is an expensive process in terms of time, labor and human expertise. Domain adaptation or transfer learning algorithms address this challenge by leveraging labeled data in a different, but related source domain, to develop a model for the target domain. Further, the explosive growth of digital data has posed a fundamental challenge concerning its storage and retrieval. Due to its storage and retrieval efficiency, recent years have witnessed a wide application of hashing in a variety of computer vision applications. In this paper, we first introduce a new dataset, Office-Home, to evaluate domain adaptation algorithms. The dataset contains images of a variety of everyday objects from multiple domains. We then propose a novel deep learning framework that can exploit labeled source data and unlabeled target data to learn informative hash codes, to accurately classify unseen target data. To the best of our knowledge, this is the first research effort to exploit the feature learning capabilities of deep neural networks to learn representative hash codes to address the domain adaptation problem. Our extensive empirical studies on multiple transfer tasks corroborate the usefulness of the framework in learning efficient hash codes which outperform existing competitive baselines for unsupervised domain adaptation.", "Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multikernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. 
Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task.", "The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into deep network to explicitly learn the residual function with reference to the target classifier. We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state of the art methods on standard domain adaptation benchmarks.", "Deep networks have been successfully applied to learn transferable features for adapting models from a source domain to a different target domain. In this paper, we present joint adaptation networks (JAN), which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion. Adversarial training strategy is adopted to maximize JMMD such that the distributions of the source and target domains are made more distinguishable. Learning can be performed by stochastic gradient descent with the gradients computed by back-propagation in linear-time. Experiments testify that our model yields state of the art results on standard datasets." ] }
1901.00140
2907429861
Low-rank matrix factorization (LRMF) has received much popularity owing to its successful applications in both computer vision and data mining. By assuming the noise term to come from a Gaussian, Laplace or a mixture of Gaussian distributions, significant efforts have been made on optimizing the (weighted) @math or @math -norm loss between an observed matrix and its bilinear factorization. However, the type of noise distribution is generally unknown in real applications and inappropriate assumptions will inevitably deteriorate the behavior of LRMF. On the other hand, real data are often corrupted by skew rather than symmetric noise. To tackle this problem, this paper presents a novel LRMF model called AQ-LRMF by modeling noise with a mixture of asymmetric Laplace distributions. An efficient algorithm based on the expectation-maximization (EM) algorithm is also offered to estimate the parameters involved in AQ-LRMF. The AQ-LRMF model possesses the advantage that it can approximate noise well no matter whether the real noise is symmetric or skew. The core idea of AQ-LRMF lies in solving a weighted @math problem with weights being learned from data. The experiments conducted with synthetic and real datasets show that AQ-LRMF outperforms several state-of-the-art techniques. Furthermore, AQ-LRMF also has the superiority over the other algorithms that it can capture local structural information contained in real images.
Recently, the research community began to focus on probabilistic extensions of robust matrix factorization. Generally speaking, it is assumed that @math , where @math is a noise matrix. Lakshminarayanan @cite_9 replaced the Gaussian noise with Gaussian scale mixture noise; nevertheless, this approach may be ineffective when processing heavy-tailed (such as Laplace-type) noise. Wang @cite_1 proposed a probabilistic @math -norm LRMF, but did not employ a fully Bayesian inference process. Beyond Laplace noise, Meng and De la Torre @cite_4 presented a robust LRMF with unknown noise modeled by a mixture of Gaussians (MoG). In essence, the method iteratively optimizes @math , where @math are the MoG parameters, which are automatically updated during optimization, and @math is the weight function of @math . Thanks to its ability to adaptively assign small weights to corrupted entries, MoG-LRMF has been reported to be fairly effective. More recently, Cao @cite_12 presented a novel LRMF model that assumes the noise follows a mixture of exponential power (MoEP) distributions and also offered the corresponding learning algorithm.
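To make the weighting scheme concrete, the following minimal Python sketch (ours, not the cited authors' code) runs an EM-style loop for a simplified MoG-noise LRMF: the E-step computes per-entry component responsibilities for the residuals, and the M-step re-estimates the mixture parameters and refits the factors by weighted alternating least squares, with each entry weighted by its expected noise precision. It assumes a fully observed matrix, a fixed rank, and two mixture components; all function and variable names are hypothetical.

import numpy as np

def mog_lrmf(X, rank=2, K=2, n_iters=50, seed=0):
    # Toy MoG-noise low-rank matrix factorization (illustrative only).
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    pi = np.full(K, 1.0 / K)                     # mixing proportions
    sigma2 = np.linspace(0.1, 10.0, K)           # component variances
    for _ in range(n_iters):
        E = X - U @ V.T                          # residual matrix
        # E-step: responsibility of component k for entry (i, j).
        log_r = (np.log(pi)[:, None, None]
                 - 0.5 * np.log(2.0 * np.pi * sigma2)[:, None, None]
                 - 0.5 * E[None, :, :] ** 2 / sigma2[:, None, None])
        log_r -= log_r.max(axis=0, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=0, keepdims=True)
        # M-step: mixture parameters, with small floors for numerical stability.
        Nk = r.sum(axis=(1, 2))
        pi = np.maximum(Nk / (m * n), 1e-12)
        sigma2 = np.maximum((r * E[None, :, :] ** 2).sum(axis=(1, 2))
                            / np.maximum(Nk, 1e-12), 1e-8)
        # Per-entry weight = expected precision; corrupted entries get small weights.
        W = (r / sigma2[:, None, None]).sum(axis=0)
        # Weighted alternating least squares updates of the factors.
        for i in range(m):
            G = V.T * W[i]                       # rank x n
            U[i] = np.linalg.solve(G @ V + 1e-8 * np.eye(rank), G @ X[i])
        for j in range(n):
            G = U.T * W[:, j]                    # rank x m
            V[j] = np.linalg.solve(G @ U + 1e-8 * np.eye(rank), G @ X[:, j])
    return U, V

# Example: recover a rank-2 matrix whose entries are hit by 10% gross outliers.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
X = A + 0.01 * rng.standard_normal(A.shape)
mask = rng.random(A.shape) < 0.1
X[mask] += rng.normal(0.0, 10.0, mask.sum())
U, V = mog_lrmf(X)
print(np.linalg.norm(U @ V.T - A) / np.linalg.norm(A))   # relative recovery error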
{ "cite_N": [ "@cite_9", "@cite_1", "@cite_12", "@cite_4" ], "mid": [ "2188214461", "199797433", "2492899067", "2138507544" ], "abstract": [ "We analyse the noise arising in collaborative filtering when formalised as a probabilistic matrix factorisation problem. We show empirically that modelling row- and column-specific variances is important, the noise being in general non-Gaussian and heteroscedastic. We also advocate for the use of a Student-t prior for the latent features as the standard Gaussian is included as a special case. We derive several variational inference algorithms and estimate the hyperparameters by type-II maximum likelihood. Experiments on real data show that the predictive performance is significantly improved.", "Matrix factorization underlies a large variety of computer vision applications. It is a particularly challenging problem for large-scale applications and when there exist outliers and missing data. In this paper, we propose a novel probabilistic model called Probabilistic Robust Matrix Factorization (PRMF) to solve this problem. In particular, PRMF is formulated with a Laplace error and a Gaussian prior which correspond to an l1 loss and an l2 regularizer, respectively. For model learning, we devise a parallelizable expectation-maximization (EM) algorithm which can potentially be applied to large-scale applications. We also propose an online extension of the algorithm for sequential data to offer further scalability. Experiments conducted on both synthetic data and some practical computer vision applications show that PRMF is comparable to other state-of-the-art robust matrix factorization methods in terms of accuracy and outperforms them particularly for large data matrices.", "Many computer vision problems can be posed as learning a low-dimensional subspace from high dimensional data. The low rank matrix factorization (LRMF) represents a commonly utilized subspace learning strategy. Most of the current LRMF techniques are constructed on the optimization problem using L_1 norm and L_2 norm, which mainly deal with Laplacian and Gaussian noise, respectively. To make LRMF capable of adapting more complex noise, this paper proposes a new LRMF model by assuming noise as Mixture of Exponential Power (MoEP) distributions and proposes a penalized MoEP model by combining the penalized likelihood method with MoEP distributions. Such setting facilitates the learned LRMF model capable of automatically fitting the real noise through MoEP distributions. Each component in this mixture is adapted from a series of preliminary super-or sub-Gaussian candidates. An Expectation Maximization (EM) algorithm is also designed to infer the parameters involved in the proposed PMoEP model. The advantage of our method is demonstrated by extensive experiments on synthetic data, face modeling and hyperspectral image restoration.", "Many problems in computer vision can be posed as recovering a low-dimensional subspace from high-dimensional visual data. Factorization approaches to low-rank subspace estimation minimize a loss function between the observed measurement matrix and a bilinear factorization. Most popular loss functions include the L1 and L2 losses. While L1 is optimal for Laplacian distributed noise, L2 is optimal for Gaussian noise. However, real data is often corrupted by an unknown noise distribution, which is unlikely to be purely Gaussian or Laplacian. 
To address this problem, this paper proposes a low-rank matrix factorization problem with a Mixture of Gaussians (MoG) noise. The MoG model is a universal approximator for any continuous distribution, and hence is able to model a wider range of real noise distributions. The parameters of the MoG model can be estimated with a maximum likelihood method, while the subspace is computed with standard approaches. We illustrate the benefits of our approach in extensive synthetic, structure from motion, face modeling and background subtraction experiments." ] }
1901.00140
2907429861
Low-rank matrix factorization (LRMF) has received much popularity owing to its successful applications in both computer vision and data mining. By assuming the noise term to come from a Gaussian, Laplace or a mixture of Gaussian distributions, significant efforts have been made on optimizing the (weighted) @math or @math -norm loss between an observed matrix and its bilinear factorization. However, the type of noise distribution is generally unknown in real applications and inappropriate assumptions will inevitably deteriorate the behavior of LRMF. On the other hand, real data are often corrupted by skew rather than symmetric noise. To tackle this problem, this paper presents a novel LRMF model called AQ-LRMF by modeling noise with a mixture of asymmetric Laplace distributions. An efficient algorithm based on the expectation-maximization (EM) algorithm is also offered to estimate the parameters involved in AQ-LRMF. The AQ-LRMF model possesses the advantage that it can approximate noise well no matter whether the real noise is symmetric or skew. The core idea of AQ-LRMF lies in solving a weighted @math problem with weights being learned from data. The experiments conducted with synthetic and real datasets show that AQ-LRMF outperforms several state-of-the-art techniques. Furthermore, AQ-LRMF also has the superiority over the other algorithms that it can capture local structural information contained in real images.
On the other hand, robust principal component analysis (robust PCA) @cite_2 considers a problem closely related to LRMF. The underlying assumption of robust PCA is that the original data can be decomposed into the sum of a low-rank matrix and a sparse outlier matrix (i.e., the number of non-zero elements in @math is small). Clearly, @math plays the same role as the product of @math and @math . Since this formulation involves a non-convex objective function, @cite_2 consider a tractable convex alternative, called principal component pursuit, in which @math denotes the nuclear norm. It is worth noting that principal component pursuit may fail to recover @math when the observation is additionally corrupted by dense, small-magnitude noise. To overcome this shortcoming, Zhou @cite_10 proposed the stable principal component pursuit (SPCP). The underlying assumption of SPCP is @math , where @math is the low-rank component, @math contains the sparse outliers, and @math is the small-magnitude noise that can be modeled as Gaussian. Both theory and experiments have shown that SPCP guarantees the stable recovery of @math .
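The optimization problems this paragraph refers to appear to have been stripped during extraction. As a reading aid, the formulations usually given in the cited principal component pursuit and SPCP papers can be written as follows; the symbol choices (observation X, low-rank part L, sparse part S, noise budget delta) are ours:

% Principal component pursuit (convex surrogate of the non-convex robust PCA problem):
\min_{L,S} \; \|L\|_{*} + \lambda \|S\|_{1} \quad \text{subject to} \quad L + S = X

% Stable principal component pursuit (observation X = L + S + Z, with Z a dense, small-magnitude noise term):
\min_{L,S} \; \|L\|_{*} + \lambda \|S\|_{1} \quad \text{subject to} \quad \|X - L - S\|_{F} \le \delta

Here \|L\|_{*} is the nuclear norm mentioned in the text and \|S\|_{1} is the entry-wise l1 norm.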
{ "cite_N": [ "@cite_10", "@cite_2" ], "mid": [ "2045983409", "2145962650" ], "abstract": [ "In this paper, we study the problem of recovering a low-rank matrix (the principal components) from a high-dimensional data matrix despite both small entry-wise noise and gross sparse errors. Recently, it has been shown that a convex program, named Principal Component Pursuit (PCP), can recover the low-rank matrix when the data matrix is corrupted by gross sparse errors. We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously stable to small entry-wise noise and robust to gross sparse errors. More precisely, our result shows that the proposed convex program recovers the low-rank matrix even though a positive fraction of its entries are arbitrarily corrupted, with an error bound proportional to the noise level. We present simulation results to support our result and demonstrate that the new convex program accurately recovers the principal components (the low-rank matrix) under quite broad conditions. To our knowledge, this is the first result that shows the classical Principal Component Analysis (PCA), optimal for small i.i.d. noise, can be made robust to gross sparse errors; or the first that shows the newly proposed PCP can be made stable to small entry-wise perturbations.", "This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individuallyq We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the e1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces." ] }
1901.00132
2963652257
In 5G research, it is traditionally assumed that vertical industries (a.k.a verticals) set the performance requirements for the services they want to offer to mobile users, and the mobile operators alone are in charge of orchestrating their resources so as to meet such requirements. Motivated by the observation that successful orchestration requires reliable traffic predictions, in this paper we investigate the effects of having the verticals, instead of the mobile operators, performing such predictions. Leveraging a real-world, large-scale, crowd-sourced trace, we find that involving the verticals in the prediction process reduces the prediction errors and improves the quality of the resulting orchestration decisions.
Unlike their fourth-generation counterparts, 5G networks will not only transport data, but also process them. Network, computing, and memory resources controlled by mobile network operators (MNOs) will concurrently support multiple services under the network slicing paradigm @cite_0 @cite_6 . It is universally expected @cite_0 @cite_6 @cite_3 that vertical industries (e.g., automotive or media companies) specify the requirements of their services, i.e., which computations must be performed and the associated target key performance indicators (KPIs). MNOs, on the other hand, have to manage their networks so as to ensure that all target KPIs are met at the lowest cost to themselves, a problem known as service orchestration @cite_10 @cite_11 .
{ "cite_N": [ "@cite_3", "@cite_6", "@cite_0", "@cite_10", "@cite_11" ], "mid": [ "2612074600", "2605961225", "2604174486", "2744111766", "2805731797" ], "abstract": [ "5G is envisioned to be a multi-service network supporting a wide range of verticals with a diverse set of performance and service requirements. Slicing a single physical network into multiple isolated logical networks has emerged as a key to realizing this vision. This article is meant to act as a survey, the first to the authors� knowledge, on this topic of prime interest. We begin by reviewing the state of the art in 5G network slicing and present a framework for bringing together and discussing existing work in a holistic manner. Using this framework, we evaluate the maturity of current proposals and identify a number of open research questions.", "We argue for network slicing as an efficient solution that addresses the diverse requirements of 5G mobile networks, thus providing the necessary flexibility and scalability associated with future network implementations. We elaborate on the challenges that emerge when designing 5G networks based on network slicing. We focus on the architectural aspects associated with the coexistence of dedicated as well as shared slices in the network. In particular, we analyze the realization options of a flexible radio access network with focus on network slicing and their impact on the design of 5G mobile networks. In addition to the technical study, this article provides an investigation of the revenue potential of network slicing, where the applications that originate from this concept and the profit capabilities from the network operator�s perspective are put forward.", "5G networks are expected to be able to satisfy users' different QoS requirements. Network slicing is a promising technology for 5G networks to provide services tailored for users' specific QoS demands. Driven by the increased massive wireless data traffic from different application scenarios, efficient resource allocation schemes should be exploited to improve the flexibility of network resource allocation and capacity of 5G networks based on network slicing. Due to the diversity of 5G application scenarios, new mobility management schemes are greatly needed to guarantee seamless handover in network-slicing-based 5G systems. In this article, we introduce a logical architecture for network-slicing-based 5G systems, and present a scheme for managing mobility between different access networks, as well as a joint power and subchannel allocation scheme in spectrum-sharing two-tier systems based on network slicing, where both the co-tier interference and cross-tier interference are taken into account. Simulation results demonstrate that the proposed resource allocation scheme can flexibly allocate network resources between different slices in 5G systems. Finally, several open issues and challenges in network-slicing-based 5G networks are discussed, including network reconstruction, network slicing management, and cooperation with other 5G technologies.", "Network slicing is a technique for flexible resource provisioning in future wireless networks. With the powerful SDN and NFV technologies available, network slices can be quickly deployed and centrally managed, leading to simplified management, better resource utilization, and cost efficiency by commoditization of resources. 
Departing from the one-type-fits-all design philosophy, future wireless networks will employ the network slicing methodology in order to accommodate applications with widely diverse requirements over the same physical network. On the other hand, deciding how to efficiently allocate, manage, and control the slice resources in real time is very challenging. This article focuses on the algorithmic challenges that emerge in efficient network slicing, necessitating novel techniques from the communities of operation research, networking, and computer science.", "The next generation mobile transport networks shall transform into flexible and agile SDN NFV-based transport and computing platforms, capable of simultaneously supporting the needs of different vertical industries, e.g., automotive, e-health and media, by meeting a diverse range of networking and computing requirements. Network slicing, has emerged as the most promising approach to address this challenge by enabling per-slice management of virtualized resources and provisioning and managing slices tailored to the needs of different vertical industries. Service orchestration is the key enabler for slicing that allows efficient placement of virtual network functions over the infrastructure as well as optimal allocation of virtual resources among all network slices to deliver guaranteed, reliable and scalable services of different verticals. Besides, due to the limited footprint of infrastructure operators, it is also required to enable the interconnection and federation of multiple administrative domains, to effectively allow services to span across several providers. This paper presents the design of Service Orchestrator (SO) in the 5G-TRANSFORMER system, which deals with service orchestration and federation across multiple domains." ] }
1901.00132
2963652257
In 5G research, it is traditionally assumed that vertical industries (a.k.a verticals) set the performance requirements for the services they want to offer to mobile users, and the mobile operators alone are in charge of orchestrating their resources so as to meet such requirements. Motivated by the observation that successful orchestration requires reliable traffic predictions, in this paper we investigate the effects of having the verticals, instead of the mobile operators, performing such predictions. Leveraging a real-world, large-scale, crowd-sourced trace, we find that involving the verticals in the prediction process reduces the prediction errors and improves the quality of the resulting orchestration decisions.
Our purpose in this paper is to study a different model of interaction between vertical industries (henceforth verticals) and MNOs, whereby verticals provide not only the target KPIs but also an estimate of their expected traffic patterns. The reason for this change is that service orchestration is greatly simplified if the evolution of the demand to serve is known @cite_8 or can be reliably predicted @cite_12 , and verticals are in a better position than MNOs to make such a prediction. Indeed, unlike verticals, MNOs cannot access, for technical and legal reasons, detailed, application-layer information on the traffic flowing through their network. It follows that, since network slices are tailored around a single type of service (i.e., services with the same KPIs), the service-specific predictions that verticals can make may be more useful than predictions made by MNOs.
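To make the intuition concrete, the toy Python experiment below (purely illustrative: synthetic series, not the crowd-sourced trace used in the paper, and all numbers are assumptions) compares forecasting the aggregate traffic directly against summing service-specific forecasts. The two hypothetical services have different seasonalities that only a per-service, vertical-side predictor can exploit.

import numpy as np

rng = np.random.default_rng(1)
T = 24 * 14                               # two weeks of hourly samples
t = np.arange(T)

# Two hypothetical verticals with different (daily vs. weekly) patterns.
video = 100 + 40 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, T)
iot = 60 + 30 * np.sin(2 * np.pi * t / (24 * 7)) + rng.normal(0, 5, T)
total = video + iot

def seasonal_naive(series, period):
    # Predict x[t] with x[t - period]; undefined for t < period.
    pred = np.full_like(series, np.nan)
    pred[period:] = series[:-period]
    return pred

def mae(truth, pred):
    return np.abs(truth - pred).mean()

eval_idx = np.arange(24 * 7, T)           # window where both forecasts exist

# MNO-side: forecast the aggregate with a single daily seasonality.
agg_pred = seasonal_naive(total, 24)
print("aggregate forecast MAE:  ", mae(total[eval_idx], agg_pred[eval_idx]))

# Vertical-side: each service forecast with its own seasonality, then summed.
per_service = seasonal_naive(video, 24) + seasonal_naive(iot, 24 * 7)
print("per-service forecast MAE:", mae(total[eval_idx], per_service[eval_idx]))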
{ "cite_N": [ "@cite_12", "@cite_8" ], "mid": [ "2612759037", "2792251914" ], "abstract": [ "The emerging network slicing paradigm for 5G provides new business opportunities by enabling multi-tenancy support. At the same time, new technical challenges are introduced, as novel resource allocation algorithms are required to accommodate different business models. In particular, infrastructure providers need to implement radically new admission control policies to decide on network slices requests depending on their Service Level Agreements (SLA). When implementing such admission control policies, infrastructure providers may apply forecasting techniques in order to adjust the allocated slice resources so as to optimize the network utilization while meeting network slices' SLAs. This paper focuses on the design of three key network slicing building blocks responsible for (i) traffic analysis and prediction per network slice, (ii) admission control decisions for network slice requests, and (iii) adaptive correction of the forecasted load based on measured deviations. Our results show very substantial potential gains in terms of system utilization as well as a trade-off between conservative forecasting configurations versus more aggressive ones (higher gains, SLA risk).", "Thanks to network slicing, 5G networks will support a variety of services in a flexible and swift manner. In this context, we seek to make high-quality, joint optimal decisions concerning the placement of VNFs across the physical hosts for realizing the services, and the allocation of CPU resources in VNFs sharing a host. To this end, we present a queuing-based system model, accounting for all the entities involved in 5G networks. Then, we propose a fast and efficient solution strategy yielding near-optimal decisions. We evaluate our approach in multiple scenarios that well represent real-world services, and find it to consistently outperform state-of-the-art alternatives and closely match the optimum." ] }
1907.03880
2958588938
When designing swarm-robotic systems, systematic comparison of algorithms from different domains is necessary to determine which is capable of scaling up to handle the target problem size and target operating conditions. We propose a set of quantitative metrics for scalability, flexibility, and emergence which are capable of addressing these needs during the system design process. We demonstrate the applicability of our proposed metrics as a design tool by solving a large object gathering problem in temporally varying operating conditions using iterative hypothesis evaluation. We provide experimental results obtained in simulation for swarms of over 10,000 robots.
In recent years, many theoretical SR system design tools have become available @cite_22 @cite_9 @cite_8 @cite_14 . These tools have made it easier to conduct mathematical analysis of algorithms and to derive analytical, rather than weakly inductive, proofs of correctness @cite_25 . Despite this, there has not been a corresponding increase in the average swarm sizes used to evaluate new algorithms (notable exceptions include @cite_8 (600 robots), @cite_11 (768 robots), and @cite_7 (375 robots)). Simple behaviors such as pattern formation, localization, or collective motion, for which design and computational complexity do not inherently limit scalability, are generally evaluated at relatively small scales ( robots @cite_25 ). Methods for more complex behaviors, such as foraging ( @cite_19 @cite_1 , 20 robots; @cite_26 , 30 robots) and task allocation ( @cite_9 , 25 robots), are likewise tested at similar scales.
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_22", "@cite_7", "@cite_8", "@cite_9", "@cite_1", "@cite_19", "@cite_25", "@cite_11" ], "mid": [ "2041478716", "2114481362", "2146086743", "2011995167", "2274806495", "2139610651", "155446141", "", "", "" ], "abstract": [ "Methods of general applicability are searched for in swarm intelligence with the aim of gaining new insights about natural swarms and to develop design methodologies for artificial swarms. An ideal solution could be a ‘swarm calculus’ that allows to calculate key features of swarms such as expected swarm performance and robustness based on only a few parameters. To work towards this ideal, one needs to find methods and models with high degrees of generality. In this paper, we report two models that might be examples of exceptional generality. First, an abstract model is presented that describes swarm performance depending on swarm density based on the dichotomy between cooperation and interference. Typical swarm experiments are given as examples to show how the model fits to several different results. Second, we give an abstract model of collective decision making that is inspired by urn models. The effects of positive-feedback probability, that is increasing over time in a decision making system, are understood by the help of a parameter that controls the feedback based on the swarm’s current consensus. Several applicable methods, such as the description as Markov process, calculation of splitting probabilities, mean first passage times, and measurements of positive feedback, are discussed and applications to artificial and natural swarms are reported.", "Task partitioning is the decomposition of a task into two or more sub-tasks that can be tackled separately. Task partitioning can be observed in many species of social insects, as it is often an advantageous way of organizing the work of a group of individuals. Potential advantages of task partitioning are, among others: reduction of interference between workers, exploitation of individuals’ skills and specializations, energy efficiency, and higher parallelism. Even though swarms of robots can benefit from task partitioning in the same way as social insects do, only few works in swarm robotics are dedicated to this subject. In this paper, we study the case in which a swarm of robots has to tackle a task that can be partitioned into a sequence of two sub-tasks. We propose a method that allows the individual robots in the swarm to decide whether to partition the given task or not. The method is self-organized, relies on the experience of each individual, and does not require explicit communication between robots. We evaluate the method in simulation experiments, using foraging as testbed. We study cases in which task partitioning is preferable and cases in which it is not. We show that the proposed method leads to good performance of the swarm in both cases, by employing task partitioning only when it is advantageous. We also show that the swarm is able to react to changes in the environmental conditions by adapting the behavior on-line. Scalability experiments show that the proposed method performs well across all the tested group sizes.", "We present a decentralized, scalable approach to assembling a group of heterogeneous parts into different products using a swarm of robots. 
While the assembly plans are predetermined, the exact sequence of assembly of parts and the allocation of subassembly tasks to robots are determined by the interactions between robots in a decentralized fashion in real time. Our approach is based on developing a continuous abstraction of the system derived from models of chemical reactions and formulating the strategy as a problem of selecting rates of assembly and disassembly. These rates are mapped onto probabilities that determine stochastic control policies for individual robots, which then produce the desired aggregate behavior. This top-down approach to determining robot controllers also allows us to optimize the rates at the abstract level to achieve fast convergence to the specified target numbers of products. Because the method incorporates programs for assembly and disassembly, changes in demand can lead to reconfiguration in a seamless fashion. We illustrate the methodology using a physics-based simulator with examples involving 15 robots and two types of final products.", "Designing and analyzing self-organizing systems such as robotic swarms is a challenging task even though we have complete knowledge about the robot’s interior. It is difficult to determine the individual robot’s behavior based on the swarm behavior and vice versa due to the high number of agent–agent interactions. A step towards a solution of this problem is the development of appropriate models which accurately predict the swarm behavior based on a specified control algorithm. Such models would reduce the necessary number of time-consuming simulations and experiments during the design process of an algorithm. In this paper we propose a model with focus on an explicit representation of space because the effectiveness of many swarm robotic scenarios depends on spatial inhomogeneity. We use methods of statistical physics to address spatiality. Starting from a description of a single robot we derive an abstract model of swarm motion. The model is then extended to a generic model framework of communicating robots. In two examples we validate models against simulation results. Our experience shows that qualitative correctness is easily achieved, while quantitative correctness is disproportionately more difficult but still possible.", "Currently, the control software of swarm robotics systems is created by ad hoc development. This makes it hard to deploy these systems in real-world scenarios. In particular, it is difficult to maintain, analyse, or verify the systems. Formal methods can contribute to overcome these problems. However, they usually do not guarantee that the implementation matches the specification, because the system’s control code is typically generated manually. Also, there is cultural resistance to apply formal methods; they may be perceived as an additional step that does not add value to the final product. To address these problems, we propose supervisory control theory for the domain of swarm robotics. The advantages of supervisory control theory, and its associated tools, are a reduction in the amount of ad hoc development, the automatic generation of control code from modelled specifications, proofs of properties over generated control code, and the reusability of formally designed controllers between different robotic platforms. These advantages are demonstrated in four case studies using the e-puck and Kilobot robot platforms. 
Experiments with up to 600 physical robots are reported, which show that supervisory control theory can be used to formally develop state-of-the-art solutions to a range of problems in swarm robotics.", "This paper presents a methodology for finding optimal control parameters as well as optimal system parameters for robot swarm controllers using probabilistic, population dynamic models. With distributed task allocation as a case study, we show how optimal control parameters leading to a desired steady-state task distribution for two fully-distributed algorithms can be found even if the parameters of the system are unknown. First, a reactive algorithm in which robots change states independently from each other and which leads to a linear macroscopic model describing the dynamics of the system is considered. Second, a threshold-based algorithm where robots change states based on the number of other robots in this state and which leads to a non-linear model is investigated. Whereas analytical results can be obtained for the linear system, the optimization of the non-linear controller is performed numerically. Finally, we show using stochastic simulations that whereas the presented methodology and models work best if the swarm size is large, useful results can already be obtained for team-sizes below a hundred robots. The methodology presented can be applied to scenarios involving the control of large numbers of entities with limited computational and communication abilities as well as a tight energy budget, such as swarms of robots from the centimeter to nanometer range or sensor networks.", "The performance of large groups of robots is often limited by a commonly shared resource. This effect, termed interference, can have a large impact on robotic swarms. This article studies the issue of interference in a swarm of robots working on a harvesting task. The environment of the robots is spatially constrained, i.e., there is a commonly shared resource, the nest, which limits the group’s performance when used without any arbitration mechanism. The article studies the use of task partitioning for reducing concurrent accesses to the resource, and therefore limiting the impact of interference on the group’s performance. In our study, we spatially partition the environment into two subparts, thereby partitioning the corresponding harvesting task as well. We employ a simple method to allocate individuals to the partitions. The approach is empirically studied both in an environment with a narrow nest area and an environment without this constraint. The results of the task partitioning strategy are analyzed and compared to the case in which task partitioning is not employed.", "", "", "" ] }
1907.03880
2958588938
When designing swarm-robotic systems, systematic comparison of algorithms from different domains is necessary to determine which is capable of scaling up to handle the target problem size and target operating conditions. We propose a set of quantitative metrics for scalability, flexibility, and emergence which are capable of addressing these needs during the system design process. We demonstrate the applicability of our proposed metrics as a design tool by solving a large object gathering problem in temporally varying operating conditions using iterative hypothesis evaluation. We provide experimental results obtained in simulation for swarms of over 10,000 robots.
Within SR, no widely accepted theory of self-organizing systems exists @cite_7 @cite_23 . Cotsaftis presents a control-theoretic model of emergence, distinguishing systems that can be studied by the methods of scientific reductionism from systems in which the interaction between components crosses a threshold beyond which it overtakes outside interactions, leading to self-organization and to new behavior that is not predictable from studying the components in isolation, similar to @cite_24 . While many papers cite evidence that their algorithms exhibit emergent behavior @cite_16 @cite_2 @cite_17 @cite_22 , or even prove simple emergent properties via temporal logic ( @cite_5 ), few provide a quantitative method for measuring emergence (with the exception of @cite_13 , who used robot nearest-neighbor calculations to compute a degree of interaction for the swarm). We present an empirical method for measuring the level of self-organization present in a swarm by measuring the linearity of inter-robot interference as the swarm size is increased, as a small step towards the development of a more general emergence theory.
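One way such a linearity-based measurement could be computed is sketched below: fit a straight line to interference measured at increasing swarm sizes and report the fraction of variance the linear model leaves unexplained. This is an illustrative rendition rather than the paper's exact metric, and the example measurements are made up.

import numpy as np

def nonlinearity_score(swarm_sizes, interference):
    # 1 - R^2 of a straight-line fit of interference vs. swarm size.
    # Values near 0: interference grows (approximately) linearly with swarm size.
    # Larger values: super-linear growth, i.e. interaction effects dominate.
    x = np.asarray(swarm_sizes, dtype=float)
    y = np.asarray(interference, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return ss_res / ss_tot

# Hypothetical measurements: total time lost to collision avoidance (seconds).
sizes = [16, 32, 64, 128, 256, 512]
measured = [40, 85, 190, 430, 980, 2300]     # grows faster than linearly
print(nonlinearity_score(sizes, measured))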
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_24", "@cite_23", "@cite_2", "@cite_5", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2146086743", "2011995167", "1501224446", "2120573220", "", "2108526989", "2014403316", "2107787644", "2807731679" ], "abstract": [ "We present a decentralized, scalable approach to assembling a group of heterogeneous parts into different products using a swarm of robots. While the assembly plans are predetermined, the exact sequence of assembly of parts and the allocation of subassembly tasks to robots are determined by the interactions between robots in a decentralized fashion in real time. Our approach is based on developing a continuous abstraction of the system derived from models of chemical reactions and formulating the strategy as a problem of selecting rates of assembly and disassembly. These rates are mapped onto probabilities that determine stochastic control policies for individual robots, which then produce the desired aggregate behavior. This top-down approach to determining robot controllers also allows us to optimize the rates at the abstract level to achieve fast convergence to the specified target numbers of products. Because the method incorporates programs for assembly and disassembly, changes in demand can lead to reconfiguration in a seamless fashion. We illustrate the methodology using a physics-based simulator with examples involving 15 robots and two types of final products.", "Designing and analyzing self-organizing systems such as robotic swarms is a challenging task even though we have complete knowledge about the robot’s interior. It is difficult to determine the individual robot’s behavior based on the swarm behavior and vice versa due to the high number of agent–agent interactions. A step towards a solution of this problem is the development of appropriate models which accurately predict the swarm behavior based on a specified control algorithm. Such models would reduce the necessary number of time-consuming simulations and experiments during the design process of an algorithm. In this paper we propose a model with focus on an explicit representation of space because the effectiveness of many swarm robotic scenarios depends on spatial inhomogeneity. We use methods of statistical physics to address spatiality. Starting from a description of a single robot we derive an abstract model of swarm motion. The model is then extended to a generic model framework of communicating robots. In two examples we validate models against simulation results. Our experience shows that qualitative correctness is easily achieved, while quantitative correctness is disproportionately more difficult but still possible.", "We propose to investigate the concept of an Emergent Programming Environment enabling the development of complex adaptive systems. For this we use as a foundation the concept of emergence and a multi-agent system technology based on cooperative self-organizing mechanisms. The general objective is then to develop a complete programming language in which each instruction is an autonomous agent trying to be in a cooperative state with the other agents of the system, as well as with the environment of the system. The work presented here aims at showing the feasibility of such a concept by specifying, and experimenting with, a core of instruction-agents needed for a sub-set of mathematical calculus.", "The biologically-inspired swarm paradigm is being used to design self-organizing systems of locally interacting artificial agents. 
A major difficulty in designing swarms with desired characteristics is understanding the causal relation between individual agent and collective behaviors. Mathematical analysis of swarm dynamics can address this difficulty to gain insight into system design. This paper proposes a framework for mathematical modeling of swarms of microscopic robots that may one day be useful in medical applications. While such devices do not yet exist, the modeling approach can be helpful in identifying various design trade-offs for the robots and be a useful guide for their eventual fabrication. Specifically, we examine microscopic robots that reside in a fluid, for example, a bloodstream, and are able to detect and respond to different chemicals. We present the general mathematical model of a scenario in which robots locate a chemical source. We solve the scenario in one-dimension and show how results can be used to evaluate certain design decisions.", "", "It is a characteristic of swarm robotics that specifying overall emergent swarm behaviours in terms of the low-level behaviours of individual robots is very difficult. Yet if swarm robotics is to make the transition from the laboratory to real-world engineering realisation we need such specifications. This paper explores the use of temporal logic to formally specify, and possibly also prove, the emergent behaviours of a robotic swarm. The paper makes use of a simplified wireless connected swarm as a case study with which to illustrate the approach. Such a formal approach could be an important step toward a disciplined design methodology for swarm robotics.", "This article presents a simple adaptation mechanism to automatically adjust the ratio of foragers to resters (division of labor) in a swarm of foraging robots and hence maximize the net energy income to the swarm. Three adaptation rules are introduced based on local sensing and communications. Individual robots use internal cues (successful food retrieval), environmental cues (collisions with team-mates while searching for food) and social cues (team-mate success in food retrieval) to dynamically vary the time spent foraging or resting. Simulation results show that the swarm demonstrates successful adaptive emergent division of labor and robustness to environmental change (in food source density), and we observe that robots need to cooperate more when food is scarce. Furthermore, the adaptation mechanism is able to guide the swarm towards energy optimization despite the limited sensing and communication abilities of the individual robots and the simple social interaction rules. The swarm also exhibits the capacity to collectively perceive environmental changes; a capacity that can only be observed at a group level and cannot be deduced from individual robots.", "Understanding the behavior of complex systems is becoming a crucial issue as systems grow in size, and the interconnection and geographical distribution of their components diversifies. The interaction over time of many components often leads to emergent behavior, which can be harmful to the system. Despite this, very few practical approaches for the identification of emergent behavior exist, and many are unfeasible to implement. Approaches using interaction as a measure of emergence have the potential to alleviate this problem. In this paper, we analyse absolute and relative methods that use interaction as a measure of emergence. Absolute methods compute a degree of interaction that characterizes a system state as being emergent. 
Relative methods compare interaction graphs of the system state with interaction graphs of systems that have been shown previously to exhibit emergence. We present these approaches and discuss their advantages and limitations using theoretical and experimental analysis.", "Swarm robotics (SR) offers promising solutions to real-world problems that can be modeled as foraging tasks, e.g. disaster trash cleanup or object gathering for construction. Yet current SR foraging approaches make limiting assumptions that restrict their applicability to selected real-world environments. We propose an improved self-organized task allocation method based on task partitioning that removes restrictions such as: (1) a priori knowledge of foraging environment, and (2) strict limitations on intermediate drop pickup site behavior. With experiments in simulation, we show that under the proposed constraint relaxation, our approach still provides performance increases when compared to an unpartitioned strategy within some combinations of swarm sizes, robot capabilities, and environmental conditions. This work broadens the applicability of SR foraging approaches, showing that they can be effective under ideal conditions while continuing to perform robustly in more volatile challenging environments." ] }
1907.03904
2968976338
Blockchains and smart contracts are an emerging, promising technology, that has received considerable attention. We use the blockchain technology, and in particular Ethereum, to implement a large-scale event-based Internet of Things (IoT) control system. We argue that the distributed nature of the “ledger,” as well as, Ethereum's capability of parallel execution of replicated “smart contracts”, provide the sought after automation, generality, flexibility, resilience, and high availability. We design a realistic blockchain-based loT architecture, using existing technologies while by taking into consideration the characteristics and limitations of IoT devices and applications. Furthermore, we leverage blockchain's immutability and Ethereum's support for custom tokens to build a robust and efficient token-based access control mechanism. Our evaluation shows that our solution is viable and offers significant security and usability advantages.
Early attempts to incorporate blockchain technology into the IoT proposed new blockchain systems. For example, @cite_1 designed a blockchain-based smart home management system: they proposed a custom blockchain technology in which the home gateways play the role of the miners. Such solutions are hard to deploy since they require a ``critical mass.'' Our approach is built on existing technologies and can be used with already available libraries and wallets.
{ "cite_N": [ "@cite_1" ], "mid": [ "2611626082" ], "abstract": [ "Internet of Things (IoT) security and privacy remain a major challenge, mainly due to the massive scale and distributed nature of IoT networks. Blockchain-based approaches provide decentralized security and privacy, yet they involve significant energy, delay, and computational overhead that is not suitable for most resource-constrained IoT devices. In our previous work, we presented a lightweight instantiation of a BC particularly geared for use in IoT by eliminating the Proof of Work (POW) and the concept of coins. Our approach was exemplified in a smart home setting and consists of three main tiers namely: cloud storage, overlay, and smart home. In this paper we delve deeper and outline the various core components and functions of the smart home tier. Each smart home is equipped with an always online, high resource device, known as “miner” that is responsible for handling all communication within and external to the home. The miner also preserves a private and secure BC, used for controlling and auditing communications. We show that our proposed BC-based smart home framework is secure by thoroughly analysing its security with respect to the fundamental security goals of confidentiality, integrity, and availability. Finally, we present simulation results to highlight that the overheads (in terms of traffic, processing time and energy consumption) introduced by our approach are insignificant relative to its security and privacy gains." ] }
1907.03904
2968976338
Blockchains and smart contracts are an emerging, promising technology, that has received considerable attention. We use the blockchain technology, and in particular Ethereum, to implement a large-scale event-based Internet of Things (IoT) control system. We argue that the distributed nature of the “ledger,” as well as, Ethereum's capability of parallel execution of replicated “smart contracts”, provide the sought after automation, generality, flexibility, resilience, and high availability. We design a realistic blockchain-based loT architecture, using existing technologies while by taking into consideration the characteristics and limitations of IoT devices and applications. Furthermore, we leverage blockchain's immutability and Ethereum's support for custom tokens to build a robust and efficient token-based access control mechanism. Our evaluation shows that our solution is viable and offers significant security and usability advantages.
Recently, @cite_5 explored the potential of smart contracts for machine-to-machine (M2M) communication. To this end, they developed and evaluated an IoT application for automated M2M gasoline purchases that uses Ethereum smart contracts to perform the transactions. Our work is also in this direction. Nevertheless, in addition to merely using smart contracts to provide message transfer and payments, our solution supports group communication and access control.
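The token-based access-control idea can be illustrated with the plain-Python stand-in below. In the actual system the ledger would live in an Ethereum smart contract (e.g., an ERC-20-style token); here an in-memory dictionary plays that role, and every class, method name, and price is a hypothetical choice of ours.

class TokenLedger:
    # In-memory stand-in for an on-chain token balance map (toy only).
    def __init__(self):
        self.balances = {}

    def mint(self, owner, amount):
        self.balances[owner] = self.balances.get(owner, 0) + amount

    def transfer(self, sender, receiver, amount):
        if self.balances.get(sender, 0) < amount:
            raise PermissionError("insufficient access tokens")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

class Actuator:
    # IoT device that executes a command only if the caller spends a token.
    def __init__(self, name, ledger, price=1):
        self.name, self.ledger, self.price = name, ledger, price

    def execute(self, caller, command):
        self.ledger.transfer(caller, self.name, self.price)  # access check + payment
        return f"{self.name}: executed '{command}' for {caller}"

ledger = TokenLedger()
ledger.mint("alice", 2)
lock = Actuator("front-door-lock", ledger)
print(lock.execute("alice", "unlock"))     # succeeds, one token spent
# lock.execute("bob", "unlock")            # would raise: bob holds no tokens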
{ "cite_N": [ "@cite_5" ], "mid": [ "2962852903" ], "abstract": [ "Blockchain technologies, such as smart contracts, present a unique interface for machine-to-machine communication that provides a secure, append-only record that can be shared without trust and without a central administrator. We study the possibilities and limitations of using smart contracts for machine-to-machine communication by designing, implementing, and evaluating AGasP, an application for automated gasoline purchases. We find that using smart contracts allows us to directly address the challenges of transparency, longevity, and trust in IoT applications. However, real-world applications using smart contracts must address their important trade-offs, such as performance, privacy, and the challenge of ensuring they are written correctly." ] }
1907.03843
2960728668
The rise of artificial intelligence (A.I.) based systems has the potential to benefit adopters and society as a whole. However, these systems may also enclose potential conflicts and unintended consequences. Notably, people will only adopt an A.I. system if it confers them an advantage, at which point non-adopters might push for a strong regulation if that advantage for adopters is at a cost for them. Here we propose a stochastic game theoretical model for these conflicts. We frame our results under the current discussion on ethical A.I. and the conflict between individual and societal gains, the societal value alignment problem. We test the arising equilibria in the adoption of A.I. technology under different norms followed by artificial agents, their ensuing benefits, and the emergent levels of wealth inequality. We show that without any regulation, purely selfish A.I. systems will have the strongest advantage, even when a utilitarian A.I. provides a more significant benefit for the individual and the society. Nevertheless, we show that it is possible to develop human conscious A.I. systems that reach an equilibrium where the gains for the adopters are not at a cost for non-adopters while increasing the overall fitness and lowering inequality. However, as shown, a self-organized adoption of such policies would require external regulation.
One major problem for the introduction of safe A.I. systems is the so-called value alignment problem @cite_37 : how can A.I. systems ensure that their behaviour aligns with the values of their owners? Even though this is not yet a solved problem, for this paper we will assume that an A.I. system can accurately estimate the goals of each individual with whom it interacts. With this assumption, we are able to study the problems that emerge at the societal level even once individual value alignment is solved.
{ "cite_N": [ "@cite_37" ], "mid": [ "88368075" ], "abstract": [ "The principal-agent problem concerns delegation in the absence of trust. Given a principal and an agent with different value structures, the principal wants to motivate the agent to address the principal’s aims by providing appropriate incentives. We address this problem in the context of a real-world complication, where the principal and agent lack a common problem frame. This context is especially relevant when the principal is a user, and the agent is a technological artifact with a limited repertoire of percepts and actions. We identify necessary conditions for establishing trust between such disparate actors, and we show, via a constructive proof, that it is always possible to create these necessary conditions. We conclude with several distinctions that let the principal rank the expected quality of agent behavior." ] }
1907.03843
2960728668
The rise of artificial intelligence (A.I.) based systems has the potential to benefit adopters and society as a whole. However, these systems may also enclose potential conflicts and unintended consequences. Notably, people will only adopt an A.I. system if it confers them an advantage, at which point non-adopters might push for a strong regulation if that advantage for adopters is at a cost for them. Here we propose a stochastic game theoretical model for these conflicts. We frame our results under the current discussion on ethical A.I. and the conflict between individual and societal gains, the societal value alignment problem. We test the arising equilibria in the adoption of A.I. technology under different norms followed by artificial agents, their ensuing benefits, and the emergent levels of wealth inequality. We show that without any regulation, purely selfish A.I. systems will have the strongest advantage, even when a utilitarian A.I. provides a more significant benefit for the individual and the society. Nevertheless, we show that it is possible to develop human conscious A.I. systems that reach an equilibrium where the gains for the adopters are not at a cost for non-adopters while increasing the overall fitness and lowering inequality. However, as shown, a self-organized adoption of such policies would require external regulation.
A comparison of a number of such ethical frameworks can be found in the paper ``An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations'' @cite_43 . That paper analyses principles proposed by 6 different entities, including the previously mentioned Asilomar AI principles @cite_41 , yielding 47 principles in total, and compares them to the existing 4 principles of bio-ethics (Non-maleficence; Justice; Beneficence; Autonomy) @cite_42 , finding a considerable overlap. The authors argue that for the bio-ethics principles to be applied to the field of A.I., a fifth principle is needed: Explicability, which incorporates both intelligibility and accountability. They go on to propose 20 action points, that is, recommendations for enabling a beneficial A.I. society.
{ "cite_N": [ "@cite_41", "@cite_43", "@cite_42" ], "mid": [ "", "2902634493", "2884040483" ], "abstract": [ "", "This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.", "Les principes de l'ethique biomedicale constituent, par leur influence, l’ouvrage majeur de l’ethique medicale contemporaine. Au-dela de son contexte nord-americain d’elaboration, de la deontologie medicale traditionnelle et des theories morales classiques, la reflexion proposee a reconfigure l’analyse des questions ethiques liees a la relation de soin et au monde de la sante. Depuis la premiere edition de ce texte (1979), les auteurs n’ont eu de cesse de le remanier et d’en presenter des versions integrant toujours davantage leurs reponses aux objections et critiques qu’ils recevaient ou se formulaient eux-memes. C’est la 5e edition (dans l’attente de la 6e), datant de 2001, qui est ici traduite pour la premiere fois. Livre en debat, ne lui-meme des discussions engagees aux Etats-Unis dans les annees 1970, notamment sur l’ethique des essais cliniques, il vise a determiner des principes-reperes susceptibles d’eclairer les pratiques medicales et les argumentations qui les accompagnent des lors qu’elles engagent un rapport aux normes et aux valeurs : le principe d’autonomie, le principe de non-malfaisance, le principe de bienfaisance et le principe de justice. Le statut de la theorie dans la vie morale, les rapports entre les principes et les elements factuels, les rapports des differents principes entre eux sont les problemes qui structurent continument la reflexion des auteurs. Ainsi s’edifie une approche se voulant fine, souple et rigoureuse de ce qui peut justifier et guider le raisonnement ethique dans les prestations de sante et les relations de soin." ] }
1907.03843
2960728668
The rise of artificial intelligence (A.I.) based systems has the potential to benefit adopters and society as a whole. However, these systems may also enclose potential conflicts and unintended consequences. Notably, people will only adopt an A.I. system if it confers them an advantage, at which point non-adopters might push for a strong regulation if that advantage for adopters is at a cost for them. Here we propose a stochastic game theoretical model for these conflicts. We frame our results under the current discussion on ethical A.I. and the conflict between individual and societal gains, the societal value alignment problem. We test the arising equilibria in the adoption of A.I. technology under different norms followed by artificial agents, their ensuing benefits, and the emergent levels of wealth inequality. We show that without any regulation, purely selfish A.I. systems will have the strongest advantage, even when a utilitarian A.I. provides a more significant benefit for the individual and the society. Nevertheless, we show that it is possible to develop human conscious A.I. systems that reach an equilibrium where the gains for the adopters are not at a cost for non-adopters while increasing the overall fitness and lowering inequality. However, as shown, a self-organized adoption of such policies would require external regulation.
The paper ''Machine Ethics: Creating an Ethical Intelligent Agent'' @cite_22 argues that it may be possible to incorporate an explicit ethical component into a machine using an inductive logic programming approach. The goal is to solve ethical dilemmas by finding the ethical principles that best fit given positive and negative examples. The authors advocate a modified version of the Turing test @cite_9 , the comparative moral Turing test @cite_11 . This test is an elegant answer to the question ''What makes an A.I. system ethical?''. It consists in presenting a human judge with pairs of descriptions of actual, morally significant actions, one performed by a human and one by an A.I. system. If the judge rates the A.I. as a moral equal or superior to the human, the A.I. system passes the comparative moral Turing test.
{ "cite_N": [ "@cite_9", "@cite_22", "@cite_11" ], "mid": [ "2001771035", "2133460105", "2051031454" ], "abstract": [ "", "The newly emerging field of machine ethics (Anderson and Anderson 2006) is concerned with adding an ethical dimension to machines. Unlike computer ethics -- which has traditionally focused on ethical issues surrounding humans' use of machines -- machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. In this article we discuss the importance of machine ethics, the need for machines that represent ethical principles explicitly, and the challenges facing those working on machine ethics. We also give an example of current research in the field that shows that it is possible, at least in a limited domain, for a machine to abstract an ethical principle from examples of correct ethical judgments and use that principle to guide its own behavior.", "As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory itself, and from computational limits to the implementation of such theories. In this paper the ethical disputes are surveyed, the possibility of a ‘moral Turing Test’ is considered and the computational difficulties accompanying the different types of approach are assessed. Human-like performance, which is prone to include immoral actions, may not be acceptable in machines, but moral perfection may be computationally unattainable. The risks posed by autonomous machines ignorantly or deliberately harming people ..." ] }
However, some argue that having an ethical framework, or even A.I. systems that pass the comparative moral Turing test, is not enough @cite_6 . Roman Yampolskiy argues that human-like morality is insufficient for A.I. systems with super-human intelligence: in such agents, small moral mistakes, common among humans, could lead to the extinction of humanity. Furthermore, a moral A.I. system with super-human intelligence will be able to recursively self-improve, with no guarantee that the resulting improvements remain moral. Instead of an ethical approach, Yampolskiy proposes a safety engineering approach that can provide proofs that the developed A.I. systems will remain safe, even under recursive self-improvement @cite_23 . Yampolskiy also proposes A.I. confinement as a possible measure while no safety guarantees are in place @cite_12 @cite_8 . This approach consists in ensuring that an A.I. system can help humanity while having no ability to negatively influence the world around it. The idea of A.I. confinement was first presented in @cite_32 , and discussed by Bostrom @cite_31 and Chalmers @cite_7 . It is, however, more of a preventive measure than a complete solution, as limiting the A.I.'s negative influence also limits its possible positive influence.
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_32", "@cite_6", "@cite_23", "@cite_31", "@cite_12" ], "mid": [ "2215775476", "", "1557461560", "2238205855", "2022546212", "", "2737838988" ], "abstract": [ "What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I. J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”:", "", "Part 1 The foundations of foresight: engines of construction the principles of change predicting and projecting. Part 2 Profiles of the possible: engines of abundance thinking machines the world beyond Earth engines of healing long life in an open world a door to the future the limits to growth. Part 3 Dangers and hopes: engines of destruction strategies and survival finding the facts the network of knowledge world enough and time.", "Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence robotics communities. We will argue that the attempts to allow machines to make ethical decisions or to have rights are misguided. Instead we propose a new science of safety engineering for intelligent artificial agents. In particular we issue a challenge to the scientific community to develop intelligent systems capable of proving that they are in fact safe even under recursive self-improvement.", "Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge the scientific community to develop intelligent systems that have human-friendly values that they provably retain, even under recursive self-improvement.", "", "With almost daily improvements in capabilities of artificial intelligence it is more important than ever to develop safety software for use by the AI research community. Building on our previous work on AI Containment Problem we propose a number of guidelines which should help AI safety researchers to develop reliable sandboxing software for intelligent programs of all levels. Such safety container software will make it possible to study and analyze intelligent artificial agent while maintaining certain level of safety against information leakage, social engineering attacks and cyberattacks from within the container." ] }
Most previous works focused on high-level ethical principles for A.I. systems acting in a society. In most cases they made no claim or prediction about the potential adoption of A.I. systems or their acceptance by non-adopters and by society in general. Only a few works developed computational models of the impact of A.I. For instance, one study analyzed how many safety precautions companies would take when competing with each other to build the dominant A.I. @cite_46 .
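To give a concrete flavour of such computational models, the toy sketch below sets up a two-team development race in which each team chooses a safety level; the payoff structure, functional forms and numbers are illustrative assumptions and not the model of @cite_46 .

```python
# Toy illustration of a two-team A.I. development race. Each team chooses a safety
# level s in [0, 1]; skimping on safety raises the chance of finishing first but also
# the chance of a disaster. All functional forms and numbers here are made up.

def expected_payoff(s_own, s_other, prize=10.0, disaster_loss=50.0):
    denom = (1.0 - s_own) + (1.0 - s_other)
    p_win = 0.5 if denom == 0 else (1.0 - s_own) / denom   # less careful team tends to win
    p_disaster = 0.5 * (1.0 - min(s_own, s_other))         # the least careful team sets the risk
    return p_win * prize - p_disaster * disaster_loss

for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"both teams at safety {s:.2f}: expected payoff {expected_payoff(s, s):+.2f}")
```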
{ "cite_N": [ "@cite_46" ], "mid": [ "1012910110" ], "abstract": [ "This paper presents a simple model of an AI (artificial intelligence) arms race, where several development teams race to build the first AI. Under the assumption that the first AI will be very powerful and transformative, each team is incentivised to finish first--by skimping on safety precautions if need be. This paper presents the Nash equilibrium of this process, where each team takes the correct amount of safety precautions in the arms race. Having extra development teams and extra enmity between teams can increase the danger of an AI disaster, especially if risk-taking is more important than skill in developing the AI. Surprisingly, information also increases the risks: the more teams know about each others' capabilities (and about their own), the more the danger increases. Should these results persist in more realistic models and analysis, it points the way to methods of increasing the chance of the safe development of AI." ] }
1812.11779
2949427863
Video streaming currently accounts for the majority of Internet traffic. One factor that enables video streaming is HTTP Adaptive Streaming (HAS), that allows the users to stream video using a bit rate that closely matches the available bandwidth from the server to the client. MPEG Dynamic Adaptive Streaming over HTTP (DASH) is a widely used standard, that allows the clients to select the resolution to download based on their own estimations. The algorithm for determining the next segment in a DASH stream is not part of the standard, but it is an important factor in the resulting playback quality. Nowadays vehicles are increasingly equipped with mobile communication devices, and in-vehicle multimedia entertainment systems. In this paper, we evaluate the performance of various DASH adaptation algorithms over a vehicular network. We present detailed simulation results highlighting the advantages and disadvantages of various adaptation algorithms in delivering video content to vehicular users, and we show how the different adaptation algorithms perform in terms of throughput, playback interruption time, and number of interruptions.
The Adaptation Algorithm for Adaptive Streaming over HTTP (AAASH) @cite_7 tries to optimize the user's experience by balancing the following goals: a) to prevent video playback interruptions when possible; b) to maintain a high average and minimum video resolution; c) to decrease the number of resolution changes; d) to minimize the initial buffering time when the playback starts.
{ "cite_N": [ "@cite_7" ], "mid": [ "1967330816" ], "abstract": [ "Internet video makes up a significant part of the Internet traffic and its fraction is constantly growing. In order to guarantee best user experience throughout different network access technologies with dynamically varying network conditions, it is fundamental to adopt technologies enabling a proper delivery of the media content. One of such technologies is adaptive streaming. It allows to dynamically adapt the bit-rate of the stream to varying network conditions. There are various approaches to adaptive streaming. In our work, we focus on the receiver-driven approach where the media file is subdivided into segments, each of the segments is provided at multiple bit-rates, and the task of the client is to select the appropriate bit-rate for each of the segments. With this approach, the challenges are (i) to properly estimate the dynamics of the available network throughput, (ii) to control the filling level of the client buffer in order to avoid underflows resulting in playback interruptions, (iii) to maximize the quality of the stream, while avoiding unnecessary quality fluctuations, and, finally, (iv) to minimize the delay between the user's request and the start of the playback. During our work, we designed and implemented a receiver-driven adaptation algorithm for adaptive streaming that does not rely on cross-layer information or server assistance. We integrated the algorithm with a prototype implementation of a streaming client based on the MPEG DASH (Dynamic Adaptive Streaming over HTTP) standard. We evaluated the implemented prototype in real-world scenarios and found that it performes remarkably well even under challenging network conditions. Further, it exhibits stable and fair operation if a common link is shared among multiple clients." ] }
Reference @cite_15 presents an adaptation scheme called Rate Adaptation for Adaptive HTTP Streaming (RAAHS). RAAHS measures the segment fetch time and compares it with the segment's playback duration in order to choose the bit rate of the following segment. Switching up is done step-wise, whereas switching down is done aggressively in a single step. A further mechanism limits the maximum amount of buffered media time.
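As a rough illustration of the step-wise up, single-step down idea, a minimal sketch follows; the threshold value, the use of the fetch-time to duration ratio, and the helper names are assumptions rather than the exact RAAHS rules.

```python
# Illustrative sketch of a step-wise-up / aggressive-down rate selection.
# `bitrates` is sorted ascending; `current` is an index into it.

def next_quality(current: int, fetch_time: float, segment_duration: float,
                 bitrates: list, up_margin: float = 0.8) -> int:
    ratio = fetch_time / segment_duration   # < 1 means we download faster than we play
    if ratio < up_margin and current < len(bitrates) - 1:
        return current + 1                  # switch up one step at a time
    if ratio > 1.0:
        # aggressive switch down: jump straight to a sustainable bitrate
        sustainable = bitrates[current] / ratio
        candidates = [i for i, b in enumerate(bitrates) if b <= sustainable]
        return max(candidates) if candidates else 0
    return current                          # otherwise keep the current representation

# Example: a 3 s segment that took 4 s to fetch forces an aggressive down-switch.
print(next_quality(3, 4.0, 3.0, [500, 1000, 2000, 4000, 8000]))  # -> 2
```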
{ "cite_N": [ "@cite_15" ], "mid": [ "2061700262" ], "abstract": [ "Recently, HTTP has been widely used for the delivery of real-time multimedia content over the Internet, such as in video streaming applications. To combat the varying network resources of the Internet, rate adaptation is used to adapt the transmission rate to the varying network capacity. A key research problem of rate adaptation is to identify network congestion early enough and to probe the spare network capacity. In adaptive HTTP streaming, this problem becomes challenging because of the difficulties in differentiating between the short-term throughput variations, incurred by the TCP congestion control, and the throughput changes due to more persistent bandwidth changes. In this paper, we propose a novel rate adaptation algorithm for adaptive HTTP streaming that detects bandwidth changes using a smoothed HTTP throughput measured based on the segment fetch time (SFT). The smoothed HTTP throughput instead of the instantaneous TCP transmission rate is used to determine if the bitrate of the current media matches the end-to-end network bandwidth capacity. Based on the smoothed throughput measurement, this paper presents a receiver-driven rate adaptation method for HTTP TCP streaming that deploys a step-wise increase aggressive decrease method to switch up down between the different representations of the content that are encoded at different bitrates. Our rate adaptation method does not require any transport layer information such as round trip time (RTT) and packet loss rates which are available at the TCP layer. Simulation results show that the proposed rate adaptation algorithm quickly adapts to match the end-to-end network capacity and also effectively controls buffer underflow and overflow." ] }
The agile Smooth Video Adaptation Algorithm (SVAA) for DASH systems, proposed in @cite_16 , uses the client-side buffered video time as a feedback signal to select the video rate of the next segment to download. The algorithm smoothly increases the video rate with the available network bandwidth and promptly reduces it in response to sudden congestion level shift-ups. Moreover, it keeps the video rate slightly below the estimated bandwidth by a small rate margin in order to limit video rate adjustments. Together, the buffer cap and the small rate margin improve the smoothness of both the video rate and the buffer size.
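A minimal buffer-driven sketch in the spirit of this scheme is given below; the margin, the buffer threshold and the function names are assumptions, not the published algorithm.

```python
# Buffer-feedback sketch: the video rate follows the measured throughput with a small
# safety margin, rises only one level at a time, and drops promptly when the buffer shrinks.

def select_rate(buffer_s, prev_rate, throughput, bitrates,
                margin=0.1, low_buffer_s=10.0):
    target = (1.0 - margin) * throughput             # small rate margin below throughput
    if buffer_s < low_buffer_s:                      # prompt reduction on a shrinking buffer
        target = min(target, throughput * buffer_s / low_buffer_s)
    feasible = [b for b in bitrates if b <= target] or [bitrates[0]]
    candidate = max(feasible)
    if candidate > prev_rate:                        # smooth increase: one level at a time
        higher = [b for b in bitrates if b > prev_rate]
        candidate = min(higher) if higher else prev_rate
    return candidate

print(select_rate(buffer_s=25.0, prev_rate=1000, throughput=2600,
                  bitrates=[500, 1000, 2000, 4000]))  # -> 2000
```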
{ "cite_N": [ "@cite_16" ], "mid": [ "2157394357" ], "abstract": [ "Dynamic Adaptive Streaming over HTTP (DASH) is widely deployed on the Internet for live and on-demand video streaming services. Video adaptation algorithms in existing DASH systems are either too sluggish to respond to congestion level shifts or too sensitive to short-term network bandwidth variations. Both degrade user video experience. In this paper, we formally study the responsiveness and smoothness trade-off in DASH through analysis and experiments. We show that client-side buffered video time is a good feedback signal to guide video adaptation. We then propose novel video rate control algorithms that balance the needs for video rate smoothness and high bandwidth utilization. We show that a small video rate margin can lead to much improved smoothness in video rate and buffer size. The proposed DASH designs are also extended to work with multiple CDN servers. We develop a fully-functional DASH system and evaluate its performance through extensive experiments on a network testbed and the Internet. We demonstrate that our DASH designs are highly efficient and robust in realistic network environment." ] }
The authors in @cite_17 replaced the original quality adaptation algorithm in Adobe's Open Source Media Framework (OSMF) so that the quality level switching follows a pre-defined scenario. The fetch times of the last two video segments are used to estimate the bandwidth available between the server and the client. This bandwidth estimate is then used to select the bit rate of the following segment: the selected rate is the highest one that is smaller than the estimated bandwidth.
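A small sketch of that selection rule might look as follows; the two-sample averaging and the names are assumptions about the implementation rather than the OSMF code itself.

```python
# Estimate available bandwidth from the last two segment downloads and pick the
# highest representation whose bitrate stays below that estimate.

def estimate_bandwidth(seg_bits: list, fetch_times: list) -> float:
    bits = sum(seg_bits[-2:])          # use only the two most recent segments
    secs = sum(fetch_times[-2:])
    return bits / secs if secs > 0 else 0.0

def pick_bitrate(bandwidth_bps: float, bitrates_bps: list) -> int:
    below = [b for b in sorted(bitrates_bps) if b < bandwidth_bps]
    return below[-1] if below else min(bitrates_bps)

bw = estimate_bandwidth(seg_bits=[6_000_000, 9_000_000], fetch_times=[2.0, 2.5])
print(pick_bitrate(bw, [500_000, 1_000_000, 2_000_000, 4_000_000]))  # -> 2000000
```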
{ "cite_N": [ "@cite_17" ], "mid": [ "2104644670" ], "abstract": [ "Dynamic Adaptation Streaming over HTTP (DASH) enhances the Quality of Experience (QoE) for users by automatically switching quality levels according to network conditions. Various adaptation schemes have been proposed to select the most suitable quality level during video playback. Adaptation schemes are currently based on the measured TCP throughput received by the video player. Although video buffer can mitigate throughput fluctuations, it does not take into account the effect of the transition of quality levels on the QoE. In this paper, we propose a QoE-aware DASH system (or QDASH) to improve the user-perceived quality of video watching. We integrate available bandwidth measurement into the video data probes with a measurement proxy architecture. We have found that our available bandwidth measurement method facilitates the selection of video quality levels. Moreover, we assess the QoE of the quality transitions by carrying out subjective experiments. Our results show that users prefer a gradual quality change between the best and worst quality levels, instead of an abrupt switching. Hence, we propose a QoE-aware quality adaptation algorithm for DASH based on our findings. Finally, we integrate both network measurement and the QoE-aware quality adaptation into a comprehensive DASH system." ] }
In @cite_5 , the authors applied a Markov chain analysis to compute the user's QoE metrics, namely the probability of the video being interrupted, the initial buffering delay, the average bit rate of the video, and the rate of bit rate changes.
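To illustrate the kind of computation involved, the snippet below builds a toy birth-death Markov chain over buffer levels and reads the long-run starvation probability off its stationary distribution; the chain structure and parameters are illustrative assumptions, not the model of @cite_5 .

```python
import numpy as np

# Toy buffer model: states are the number of buffered segments (0..N). In each step a
# segment arrives with probability p and one is consumed with probability q.
N, p, q = 10, 0.55, 0.5
P = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    up = p * (1 - q) if i < N else 0.0        # net gain of one segment
    down = q * (1 - p) if i > 0 else 0.0      # net loss of one segment
    P[i, min(i + 1, N)] += up
    P[i, max(i - 1, 0)] += down
    P[i, i] += 1.0 - up - down                # otherwise the buffer level is unchanged

# Stationary distribution: the eigenvector of P^T associated with eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()
print("long-run fraction of time with an empty buffer:", pi[0])
```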
{ "cite_N": [ "@cite_5" ], "mid": [ "2804849494" ], "abstract": [ "Adaptive video streaming improves users' quality of experience (QoE), while using the network efficiently. In the last few years, adaptive video streaming has seen widespread adoption and has attracted significant research effort. We study a dynamic system of random arrivals and departures for different classes of users using the adaptive streaming industry standard DASH (Dynamic Adaptive Streaming over HTTP). Using a Markov chain based analysis, we compute the user QoE metrics: probability of starvation, prefetching delay, average video quality and switching rate. We validate our model by simulations, which show a very close match. Our study of the playout buffer is based on client adaptation scheme, which makes efficient use of the network while improving users' QoE. We prove that for buffer-based variants, the average video bit-rate matches the average channel rate. Hence, we would see quality switches whenever the average channel rate does not match the available video bit rates. We give a sufficient condition for setting the playout buffer threshold to ensure that quality switches only between adjacent quality levels." ] }
The work in @cite_10 @cite_11 takes advantage of fuzzy logic to handle the uncertainty of the network conditions. In @cite_11 , the mobile QoS is improved by using a cumulative moving average that relates near-term past measurements to the current values. To enhance user QoE in DASH video distribution applications, mobile edge computing in LTE and 5G has been considered in @cite_18 @cite_1 @cite_14 . Caching makes it possible to store popular videos at the network edge, close to the users; however, caching cannot be applied to all videos.
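A minimal sketch of the moving-average part of that idea is shown below; the cited work additionally feeds such smoothed values into a fuzzy inference system, which is not reproduced here.

```python
class CumulativeMovingAverage:
    """Cumulative moving average of throughput samples: relates near-term past
    measurements to the current one without storing the full history."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def update(self, sample: float) -> float:
        self.count += 1
        self.mean += (sample - self.mean) / self.count
        return self.mean

cma = CumulativeMovingAverage()
for throughput_kbps in [3200, 900, 2800, 3100, 1200]:
    smoothed = cma.update(throughput_kbps)
    # a client could compare the raw sample against `smoothed` before switching quality
    print(round(smoothed))
```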
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_1", "@cite_10", "@cite_11" ], "mid": [ "2508578164", "2805947310", "2744786564", "2601347938", "2750827144" ], "abstract": [ "In this paper, we present a Mobile Edge Computing (MEC) scheme for enabling network edge-assisted video adaptation based on MPEG-DASH (Dynamic Adaptive Streaming over HTTP). In contrast to the traditional over-the-top (OTT) adaptation performed by DASH clients, the MEC server at the mobile network edge can capture radio access network (RAN) conditions through its intrinsic Radio Network Information Service (RNIS) function, and use the knowledge to provide guidance to clients so that they can perform more intelligent video adaptation. In order to support such MEC-assisted DASH video adaptation, the MEC server needs to locally cache the most popular content segments at the qualities that can be supported by the current network throughput. Towards this end, we introduce a two-dimensional user Quality-of-Experience (QoE)-driven algorithm for making caching replacement decisions based on both content context (e.g., segment popularity) and network context (e.g., RAN downlink throughput). We conducted experiments by deploying a prototype MEC server at a real LTE-A based network testbed. The results show that our QoE-driven algorithm is able to achieve significant improvement on user QoE over 2 benchmark schemes.", "With the rapid development of Mobile Internet, online Video-on-Demand (VoD) services, primarily 4K video, grow tremendously with the key performance indicators of lower latency, higher bandwidth, and higher bitrate. However, due to the long-distance between the user equipment (UE) and Internet Service Provider, Quality-of-Service (QoS) in terms of low playback delay and high transmission rate cannot be guaranteed. Therefore, Mobile Edge Computing (MEC), at the edge of the cellular network, is highly recommended with the benefits of lower uncertainty and end-to-end latency. The UE can enjoy better customized services with more appropriate bitrates as a result. In this paper, we propose a practical framework of MEC-enabled cellular network with radio network-aware edge cache and a radio network-aware cache updating algorithm. The framework uses Dynamic Adaptive Streaming over HTTP (DASH) and Radio Network Information Service (RNIS) is leveraged to accelerate multi-media services. Under this framework, RNIS collects information and delivers it to the MEC server based on the context. A testbed based on a real 4G Long Term Evolution (LTE) Base Station is developed carrying out experiments under this framework. Compared to traditional networks, the result shows that our approach maintains a smooth high quality of experience (QoE).", "Internet video streaming applications have been demanding more bandwidth and higher video quality, especially with the advent of virtual reality and augmented reality appli-cations. While adaptive strea ming protocols like MPEG-DASH (dynamic adaptive streaming over HTTP) allows video quality to be flexibly adapted, e.g., degraded when mobile network condition deteriorates, this is not an option if the application itself requires guaranteed 4K quality at all time. On the other hand, conventional end-to-end transmission control protocol (TCP) has been struggling in supporting 4K video delivery across long-distance Internet paths containing both fixed and mobile network segments with heterogeneous characteristics. 
In this paper, we present a novel and practically feasible system architecture named MVP (mobile edge virtualization with adaptive prefetching), which enables content providers to embed their content intelligence as a virtual network function into the mobile network operator's infrastructure edge. Based on this architecture, we present a context-aware adaptive video prefetching scheme in order to achieve quality of experience (QoE)-assured 4K video on demand (VoD) delivery across the global Internet. Through experiments based on a real LTE-A network infrastructure, we demonstrate that our proposed scheme is able to achieve QoE-assured 4K VoD streaming, especially when the video source is located remotely in the public Internet, in which case none of the state-of-the-art solutions is able to support such an objective at global Internet scale.", "", "Dynamic adaptive streaming over Hypertext Transfer Protocol (HTTP) is an advanced technology in video streaming to deal with the uncertainty of network states. However, this technology has one drawback as the network states frequently and continuously change. The quality of a video streaming fluctuates along with the network changes, and it might reduce the quality of service. In recent years, many researchers have proposed several adaptive streaming algorithms to reduce such changes. However, these algorithms only consider the current state of a network. Thus, these algorithms might result in inaccurate estimates of a video quality in the near term. Therefore, in this paper, we propose a method using fuzzy logic and a mathematics moving average technique, in order to reduce mobile video quality fluctuation in Dynamic Adaptive Streaming over HTTP (DASH). First, we calculate the moving average of the bandwidth and buffer values for a given period. On the basis of differences between real and average values, we propose a fuzzy logic system to deduce the value of the video quality representation for the next request. In addition, we use the entropy rate of a bandwidth measurement sequence to measure the predictable stabilization of our method. The experiment results show that our proposed method reduces video quality fluctuation as well as improves 40 of bandwidth utilization compared to existing methods." ] }
Recent surveys @cite_4 @cite_3 give a good overview of the bit rate adaptation algorithms for DASH based content delivery.
{ "cite_N": [ "@cite_4", "@cite_3" ], "mid": [ "2602023803", "2906364736" ], "abstract": [ "With companies such as Netflix and YouTube accounting for more than 50 of the peak download traffic on North American fixed networks in 2015, video streaming represents a significant source of Internet traffic. Multimedia delivery over the Internet has evolved rapidly over the past few years. The last decade has seen video streaming transitioning from User Datagram Protocol to Transmission Control Protocol-based technologies. Dynamic adaptive streaming over HTTP (DASH) has recently emerged as a standard for Internet video streaming. A range of rate adaptation mechanisms are proposed for DASH systems in order to deliver video quality that matches the throughput of dynamic network conditions for a richer user experience. This survey paper looks at emerging research into the application of client-side, server-side, and in-network rate adaptation techniques to support DASH-based content delivery. We provide context and motivation for the application of these techniques and review significant works in the literature from the past decade. These works are categorized according to the feedback signals used and the end-node that performs or assists with the adaptation. We also provide a review of several notable video traffic measurement and characterization studies and outline open research questions in the field.", "In this survey, we present state-of-the-art bitrate adaptation algorithms for HTTP adaptive streaming (HAS). As a key distinction from other streaming approaches, the bitrate adaptation algorithms in HAS are chiefly executed at each client, i.e. , in a distributed manner. The objective of these algorithms is to ensure a high quality of experience (QoE) for viewers in the presence of bandwidth fluctuations due to factors like signal strength, network congestion, network reconvergence events, etc. While such fluctuations are common in public Internet, they can also occur in home networksor even managed networks where there is often admission control and QoS tools. Bitrate adaptation algorithms may take factors like bandwidth estimations, playback buffer fullness, device features, viewer preferences, and content features into account, albeit with different weights. Since the viewer’s QoE needs to be determined in real-time during playback, objective metrics are generally used including number of buffer stalls, duration of startup delay, frequency and amount of quality oscillations, and video instability. By design, the standards for HAS do not mandate any particular adaptation algorithm, leaving it to system builders to innovate and implement their own method. This survey provides an overview of the different methods proposed over the last several years." ] }
1812.11941
2907670226
Accurate depth estimation from images is a fundamental task in many applications including scene understanding and reconstruction. Existing solutions for depth estimation often produce blurry approximations of low resolution. This paper presents a convolutional neural network for computing a high-resolution depth map given a single RGB image with the help of transfer learning. Following a standard encoder-decoder architecture, we leverage features extracted using high performing pre-trained networks when initializing our encoder along with augmentation and training strategies that lead to more accurate results. We show how, even for a very simple decoder, our method is able to achieve detailed high-resolution depth maps. Our network, with fewer parameters and training iterations, outperforms state-of-the-art on two datasets and also produces qualitatively better results that capture object boundaries more faithfully. Code and corresponding pre-trained weights are made publicly available.
Depth estimation has been considered by many CNN methods that formulate the problem as a regression of the depth map from a single RGB image @cite_2 @cite_18 @cite_14 @cite_6 @cite_26 @cite_36 . While the performance of these methods has been increasing steadily, general problems in both the quality and the resolution of the estimated depth maps leave a lot of room for improvement. Our main focus in this paper is to push towards generating higher quality depth maps with more accurate boundaries using standard neural network architectures. Our preliminary results indicate that improvements on the state-of-the-art can be achieved by leveraging simple existing architectures that perform well on other computer vision tasks.
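For concreteness, a minimal per-pixel regression objective of the kind these methods optimize could look as follows (a plain L1 loss; the cited works use more elaborate objectives such as scale-invariant or ordinal losses).

```python
import torch
import torch.nn.functional as F

def depth_regression_loss(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Plain per-pixel L1 loss between a predicted and a ground-truth depth map.
    Both tensors are expected to have shape (batch, 1, height, width)."""
    return F.l1_loss(pred, gt)

# Toy example with random tensors standing in for the network output and ground truth.
pred = torch.rand(2, 1, 240, 320)
gt = torch.rand(2, 1, 240, 320)
print(depth_regression_loss(pred, gt).item())
```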
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_26", "@cite_36", "@cite_6", "@cite_2" ], "mid": [ "2963591054", "2605938684", "2964014680", "2963488291", "2890173472", "2171740948" ], "abstract": [ "This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.", "This paper addresses the problem of depth estimation from a single still image. Inspired by recent works on multi-scale convolutional neural networks (CNN), we propose a deep model which fuses complementary information derived from multiple CNN side outputs. Different from previous methods, the integration is obtained by means of continuous Conditional Random Fields (CRFs). In particular, we propose two different variations, one based on a cascade of multiple CRFs, the other on a unified graphical model. By designing a novel CNN implementation of mean-field updates for continuous CRFs, we show that both proposed models can be regarded as sequential deep networks and that training can be performed end-to-end. Through extensive experimental evaluation we demonstrate the effectiveness of the proposed approach and establish new state of the art results on publicly available datasets.", "Recent works have shown the benefit of integrating Conditional Random Fields (CRFs) models into deep architectures for improving pixel-level prediction tasks. Following this line of research, in this paper we introduce a novel approach for monocular depth estimation. Similarly to previous works, our method employs a continuous CRF to fuse multi-scale information derived from different layers of a front-end Convolutional Neural Network (CNN). Differently from past works, our approach benefits from a structured attention model which automatically regulates the amount of information transferred between corresponding features at different scales. Importantly, the proposed attention model is seamlessly integrated into the CRF, allowing end-to-end training of the entire architecture. Our extensive experimental evaluation demonstrates the effectiveness of the proposed method which is competitive with previous methods on the KITTI benchmark and outperforms the state of the art on the NYU Depth V2 dataset.", "Monocular depth estimation, which plays a crucial role in understanding 3D scene geometry, is an ill-posed problem. Recent methods have gained significant improvement by exploring image-level information and hierarchical features from deep convolutional neural networks (DCNNs). 
These methods model depth estimation as a regression problem and train the regression networks by minimizing mean squared error, which suffers from slow convergence and unsatisfactory local solutions. Besides, existing depth estimation networks employ repeated spatial pooling operations, resulting in undesirable low-resolution feature maps. To obtain high-resolution depth maps, skip-connections or multilayer deconvolution networks are required, which complicates network training and consumes much more computations. To eliminate or at least largely reduce these problems, we introduce a spacing-increasing discretization (SID) strategy to discretize depth and recast depth network learning as an ordinal regression problem. By training the network using an ordinary regression loss, our method achieves much higher accuracy and faster convergence in synch. Furthermore, we adopt a multi-scale network structure which avoids unnecessary spatial pooling and captures multi-scale information in parallel. The proposed deep ordinal regression network (DORN) achieves state-of-the-art results on three challenging benchmarks, i.e., KITTI [16], Make3D [49], and NYU Depth v2 [41], and outperforms existing methods by a large margin.", "Convolutional Neural Networks have demonstrated superior performance on single image depth estimation in recent years. These works usually use stacked spatial pooling or strided convolution to get high-level information which are common practices in classification task. However, depth estimation is a dense prediction problem and low-resolution feature maps usually generate blurred depth map which is undesirable in application. In order to produce high quality depth map, say clean and accurate, we propose a network consists of a Dense Feature Extractor (DFE) and a Depth Map Generator (DMG). The DFE combines ResNet and dilated convolutions. It extracts multi-scale information from input image while keeping the feature maps dense. As for DMG, we use attention mechanism to fuse multi-scale features produced in DFE. Our Network is trained end-to-end and does not need any post-processing. Hence, it runs fast and can predict depth map in about 15 fps. Experiment results show that our method is competitive with the state-of-the-art in quantitative evaluation, but can preserve better structural details of the scene depth.", "Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation." ] }
Multi-view stereo reconstruction using CNN algorithms has recently been proposed @cite_40 . Prior work considered the subproblem of reconstructing from image pairs @cite_17 or from three consecutive frames @cite_35 . Joint key-frame based dense camera tracking and depth map estimation was presented in @cite_20 . In this work, we seek to push the performance of single image depth estimation. We suspect that the features extracted by monocular depth estimators could also help derive better multi-view stereo reconstruction methods.
{ "cite_N": [ "@cite_35", "@cite_40", "@cite_20", "@cite_17" ], "mid": [ "2806446538", "2964153986", "2887825894", "2561074213" ], "abstract": [ "Per-pixel ground-truth depth data is challenging to acquire at scale. To overcome this limitation, self-supervised learning has emerged as a promising alternative for training models to perform monocular depth estimation. In this paper, we propose a set of improvements, which together result in both quantitatively and qualitatively improved depth maps compared to competing self-supervised methods. Research on self-supervised monocular training usually explores increasingly complex architectures, loss functions, and image formation models, all of which have recently helped to close the gap with fully-supervised methods. We show that a surprisingly simple model, and associated design choices, lead to superior predictions. In particular, we propose (i) a minimum reprojection loss, designed to robustly handle occlusions, (ii) a full-resolution multi-scale sampling method that reduces visual artifacts, and (iii) an auto-masking loss to ignore training pixels that violate camera motion assumptions. We demonstrate the effectiveness of each component in isolation, and show high quality, state-of-the-art results on the KITTI benchmark.", "We present DeepMVS, a deep convolutional neural network (ConvNet) for multi-view stereo reconstruction. Taking an arbitrary number of posed images as input, we first produce a set of plane-sweep volumes and use the proposed DeepMVS network to predict high-quality disparity maps. The key contributions that enable these results are (1) supervised pretraining on a photorealistic synthetic dataset, (2) an effective method for aggregating information across a set of unordered images, and (3) integrating multi-layer feature activations from the pre-trained VGG-19 network. We validate the efficacy of DeepMVS using the ETH3D Benchmark. Our results show that DeepMVS compares favorably against state-of-the-art conventional MVS algorithms and other ConvNet based methods, particularly for near-textureless regions and thin structures.", "We present a system for keyframe-based dense camera tracking and depth map estimation that is entirely learned. For tracking, we estimate small pose increments between the current camera image and a synthetic viewpoint. This significantly simplifies the learning problem and alleviates the dataset bias for camera motions. Further, we show that generating a large number of pose hypotheses leads to more accurate predictions. For mapping, we accumulate information in a cost volume centered at the current depth estimate. The mapping network then combines the cost volume and the keyframe image to update the depth prediction, thereby effectively making use of depth measurements and image-based priors. Our approach yields state-of-the-art results with few images and is robust with respect to noisy camera poses. We demonstrate that the performance of our 6 DOF tracking competes with RGB-D tracking algorithms.We compare favorably against strong classic and deep learning powered dense depth algorithms.", "In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. 
The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training." ] }
Transfer learning approaches have been shown to be very helpful in many different contexts. Recent work investigated the efficiency of transfer learning between different tasks @cite_3 , many of which are related to 3D reconstruction. Our method is heavily based on the idea of transfer learning, where we make use of image encoders originally designed for the problem of image classification @cite_34 . We found that encoders that do not aggressively downsample the spatial resolution of the input tend to produce sharper depth estimations, especially in the presence of skip connections.
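A sketch of this kind of encoder reuse, assuming PyTorch and torchvision, is shown below; the choice of DenseNet-169 and the truncation point are illustrative and do not reproduce the paper's exact configuration.

```python
import torch
import torchvision

# Reuse an ImageNet-pretrained classifier as a dense feature extractor (encoder).
# (Newer torchvision versions spell this torchvision.models.densenet169(weights="DEFAULT").)
backbone = torchvision.models.densenet169(pretrained=True)
encoder = backbone.features            # keep the convolutional features, drop the classifier

x = torch.rand(1, 3, 480, 640)         # a batch with one RGB image
with torch.no_grad():
    feats = encoder(x)                 # coarse feature maps, downsampled 32x spatially
print(feats.shape)                     # torch.Size([1, 1664, 15, 20])
```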
{ "cite_N": [ "@cite_34", "@cite_3" ], "mid": [ "2963446712", "2964185501" ], "abstract": [ "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https: github.com liuzhuang13 DenseNet.", "Do visual tasks have a relationship, or are they unrelated? For instance, could having surface normals simplify estimating the depth of an image? Intuition answers these questions positively, implying existence of a structure among visual tasks. Knowing this structure has notable values; it is the concept underlying transfer learning and provides a principled way for identifying redundancies across tasks, e.g., to seamlessly reuse supervision among related tasks or solve many tasks in one system without piling up the complexity. We proposes a fully computational approach for modeling the structure of space of visual tasks. This is done via finding (first and higher-order) transfer learning dependencies across a dictionary of twenty six 2D, 2.5D, 3D, and semantic tasks in a latent space. The product is a computational taxonomic map for task transfer learning. We study the consequences of this structure, e.g. nontrivial emerged relationships, and exploit them to reduce the demand for labeled data. We provide a set of tools for computing and probing this taxonomical structure including a solver users can employ to find supervision policies for their use cases." ] }
Encoder-decoder networks have made significant contributions to many vision-related problems such as image segmentation @cite_10 , optical flow estimation @cite_25 , and image restoration @cite_27 . In recent years, such architectures have shown great success in both the supervised and the unsupervised settings of the depth estimation problem @cite_8 @cite_17 @cite_40 @cite_20 . These methods typically use one or more encoder-decoder networks as components of a larger network. In this work, we employ a single straightforward encoder-decoder architecture with skip connections (see Fig. ). Our results indicate that it is possible to achieve state-of-the-art high-quality depth maps using a simple encoder-decoder architecture.
{ "cite_N": [ "@cite_8", "@cite_27", "@cite_40", "@cite_10", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2520707372", "2964204553", "2964153986", "1901129140", "764651262", "2887825894", "2561074213" ], "abstract": [ "Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Ex-ploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.", "", "We present DeepMVS, a deep convolutional neural network (ConvNet) for multi-view stereo reconstruction. Taking an arbitrary number of posed images as input, we first produce a set of plane-sweep volumes and use the proposed DeepMVS network to predict high-quality disparity maps. The key contributions that enable these results are (1) supervised pretraining on a photorealistic synthetic dataset, (2) an effective method for aggregating information across a set of unordered images, and (3) integrating multi-layer feature activations from the pre-trained VGG-19 network. We validate the efficacy of DeepMVS using the ETH3D Benchmark. Our results show that DeepMVS compares favorably against state-of-the-art conventional MVS algorithms and other ConvNet based methods, particularly for near-textureless regions and thin structures.", "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. 
The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net .", "Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks CNNs succeeded at. In this paper we construct CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a large synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.", "We present a system for keyframe-based dense camera tracking and depth map estimation that is entirely learned. For tracking, we estimate small pose increments between the current camera image and a synthetic viewpoint. This significantly simplifies the learning problem and alleviates the dataset bias for camera motions. Further, we show that generating a large number of pose hypotheses leads to more accurate predictions. For mapping, we accumulate information in a cost volume centered at the current depth estimate. The mapping network then combines the cost volume and the keyframe image to update the depth prediction, thereby effectively making use of depth measurements and image-based priors. Our approach yields state-of-the-art results with few images and is robust with respect to noisy camera poses. We demonstrate that the performance of our 6 DOF tracking competes with RGB-D tracking algorithms.We compare favorably against strong classic and deep learning powered dense depth algorithms.", "In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training." ] }
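As an illustrative aside to the encoder-decoder-with-skip-connections design summarized in the related-work passage above, the following minimal PyTorch sketch shows one way such a depth network can be wired. It is a hypothetical toy example: the layer sizes, names, and output resolution are assumptions for illustration only, not the architecture of this or any cited paper.

import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: two stride-2 convolutions progressively downsample the RGB input.
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # Decoder: upsample and fuse with the matching encoder features (skip connection).
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = nn.Sequential(nn.Conv2d(64 + 32, 32, 3, padding=1), nn.ReLU())
        self.out = nn.Conv2d(32, 1, 3, padding=1)  # single-channel depth map

    def forward(self, x):
        f1 = self.enc1(x)                     # features at 1/2 resolution
        f2 = self.enc2(f1)                    # features at 1/4 resolution
        d = self.up(f2)                       # back to 1/2 resolution
        d = self.dec1(torch.cat([d, f1], 1))  # skip connection from the encoder
        return self.out(d)                    # predicted depth at 1/2 resolution

if __name__ == "__main__":
    depth = TinyDepthNet()(torch.randn(1, 3, 128, 128))
    print(depth.shape)  # torch.Size([1, 1, 64, 64])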
1812.11671
2907508336
At present, deep learning is increasingly applied to monocular image depth estimation and has shown promising results. The currently preferred approach to monocular depth estimation is supervised learning on ground-truth depth, but this approach requires an abundance of expensive ground-truth depth as supervision labels. Researchers have therefore begun to work on unsupervised depth estimation methods. Although the accuracy of unsupervised depth estimation methods is still lower than that of supervised methods, they are a promising research direction. In this paper, based on the experimental result that stereo matching models outperform monocular depth estimation models under the same unsupervised framework, we propose an unsupervised monocular vision stereo matching method. To achieve monocular stereo matching, we construct two unsupervised deep convolutional network models: one reconstructs the right view from the left view, and the other estimates the depth map from the reconstructed right view and the original left view. The two network models are piped together during the test phase. The results of this method outperform the current mainstream unsupervised depth estimation methods on the challenging KITTI dataset.
Due to the rise of robotics and virtual reality, depth estimation has undoubtedly become one of the most popular research topics at present. Because machine learning and deep learning methods outperform traditional methods, more and more researchers have applied them to depth estimation, and notable results have been achieved. Here we focus on learning-based works related to stereo matching @cite_27 and monocular depth estimation @cite_31 that make no assumptions about the scene geometry or the types of objects present.
{ "cite_N": [ "@cite_27", "@cite_31" ], "mid": [ "2440384215", "2139905387" ], "abstract": [ "In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches.", "We consider the task of 3-d depth estimation from a single still image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured indoor and outdoor environments which include forests, sidewalks, trees, buildings, etc.) and their corresponding ground-truth depthmaps. Then, we apply supervised learning to predict the value of the depthmap as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a hierarchical, multiscale Markov Random Field (MRF) that incorporates multiscale local- and global-image features, and models the depths and the relation between depths at different points in the image. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depthmaps. We further propose a model that incorporates both monocular cues and stereo (triangulation) cues, to obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone." ] }
1812.11671
2907508336
At present, deep learning is increasingly applied to monocular image depth estimation and has shown promising results. The currently preferred approach to monocular depth estimation is supervised learning on ground-truth depth, but this approach requires an abundance of expensive ground-truth depth as supervision labels. Researchers have therefore begun to work on unsupervised depth estimation methods. Although the accuracy of unsupervised depth estimation methods is still lower than that of supervised methods, they are a promising research direction. In this paper, based on the experimental result that stereo matching models outperform monocular depth estimation models under the same unsupervised framework, we propose an unsupervised monocular vision stereo matching method. To achieve monocular stereo matching, we construct two unsupervised deep convolutional network models: one reconstructs the right view from the left view, and the other estimates the depth map from the reconstructed right view and the original left view. The two network models are piped together during the test phase. The results of this method outperform the current mainstream unsupervised depth estimation methods on the challenging KITTI dataset.
As the comparison results in Table show, stereo matching models outperform monocular depth estimation models under the same unsupervised framework. Inspired by @cite_11 , we therefore propose an unsupervised monocular image stereo matching model composed of a view synthesis network and a stereo matching network. For these two networks, we follow Refs. Godard2017Unsupervised and construct unsupervised end-to-end convolutional network models with a similar structure. We synthesize the right view from the left view through the view synthesis network, which is trained with a consistency loss between the predicted view and the original image. We then feed the concatenation of the original left view and the synthesized right view into the stereo matching network for depth estimation. The implementation procedure of our unsupervised monocular vision stereo matching is illustrated in Fig. .
{ "cite_N": [ "@cite_11" ], "mid": [ "2794293902" ], "abstract": [ "Previous monocular depth estimation methods take a single view and directly regress the expected results. Though recent advances are made by applying geometrically inspired loss functions during training, the inference procedure does not explicitly impose any geometrical constraint. Therefore these models purely rely on the quality of data and the effectiveness of learning to generalize. This either leads to suboptimal results or the demand of huge amount of expensive ground truth labelled data to generate reasonable results. In this paper, we show for the first time that the monocular depth estimation problem can be reformulated as two sub-problems, a view synthesis procedure followed by stereo matching, with two intriguing properties, namely i) geometrical constraints can be explicitly imposed during inference; ii) demand on labelled depth data can be greatly alleviated. We show that the whole pipeline can still be trained in an end-to-end fashion and this new formulation plays a critical role in advancing the performance. The resulting model outperforms all the previous monocular depth estimation methods as well as the stereo block matching method in the challenging KITTI dataset by only using a small number of real training data. The model also generalizes well to other monocular depth estimation benchmarks. We also discuss the implications and the advantages of solving monocular depth estimation using stereo methods." ] }
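To make the two-stage pipeline described in the related-work passage above concrete (a view-synthesis network followed by a stereo matching network), here is a minimal, hypothetical PyTorch sketch. The network bodies, channel counts, and class names are illustrative assumptions, not the cited authors' implementation.

import torch
import torch.nn as nn

def conv_relu(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

class ViewSynthesisNet(nn.Module):
    """Stage 1: predicts a synthetic right view from the left view."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_relu(3, 32), conv_relu(32, 32),
                                  nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, left):
        return self.body(left)

class StereoMatchingNet(nn.Module):
    """Stage 2: estimates depth from the concatenated (left, synthetic right) pair."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_relu(6, 32), conv_relu(32, 32),
                                  nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, left, right):
        return self.body(torch.cat([left, right], dim=1))

if __name__ == "__main__":
    left = torch.randn(1, 3, 64, 64)
    synth_right = ViewSynthesisNet()(left)           # stage 1: view synthesis
    depth = StereoMatchingNet()(left, synth_right)   # stage 2: stereo matching
    print(depth.shape)  # torch.Size([1, 1, 64, 64])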
1812.11834
2907413770
Face restoration from low resolution and noise is important for face analysis and recognition applications. However, most existing face restoration models ignore the multi-scale issues of the face restoration problem, which remain not well solved. In this paper, we propose a Sequential Gating Ensemble Network (SGEN) for multi-scale, noise-robust face restoration. To endow the network with multi-scale representation ability, we first employ the principle of ensemble learning in designing the SGEN architecture. The SGEN aggregates multi-level base-encoders and base-decoders into the network, which enables the network to contain multiple scales of receptive field. Instead of combining these base-encoders/decoders directly with non-sequential operations, the SGEN treats base-encoders/decoders from different levels as sequential data. Specifically, visualization shows that SGEN learns to sequentially extract high-level information from base-encoders in a bottom-up manner and to restore low-level information from base-decoders in a top-down manner. In addition, we propose to realize the bottom-up and top-down information combination and selection with a Sequential Gating Unit (SGU). The SGU sequentially takes information from two different levels as inputs and decides the output based on one active input. Experimental results on a benchmark dataset demonstrate that our SGEN is more effective at multi-scale human face restoration, with more image details and less noise, than state-of-the-art image restoration models. Further utilizing an adversarial training scheme, SGEN also produces more visually preferred results than other models under subjective evaluation.
Face restoration is of great importance for vision applications. Therefore, extensive studies have been carried out over the past decades to restore low-quality face images to high-quality ones. Early face restoration algorithms can be categorized into two classes, i.e., global face-based restoration methods and local patch-based restoration methods. Global face-based restoration methods model an LR face image as a linear combination of the LR face images in the training set using different face representation models, such as principal component analysis (PCA) @cite_37 , kernel PCA @cite_10 , locality preserving projections @cite_17 , canonical correlation analysis (CCA) @cite_27 , and non-negative matrix factorization @cite_0 . These methods then reconstruct the target HR face image by replacing the LR training images with the corresponding HR ones while keeping the same coefficients. Although global face-based restoration methods preserve the global shape information well, the details of the input face are usually not well recovered.
{ "cite_N": [ "@cite_37", "@cite_0", "@cite_27", "@cite_10", "@cite_17" ], "mid": [ "", "2121058967", "2015497428", "2103871101", "2171107009" ], "abstract": [ "", "This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large amount of image patch pairs , reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework.", "Super-resolution reconstruction of face image is the problem of reconstructing a high resolution face image from one or more low resolution face images. Assuming that high and low resolution images share similar intrinsic geometries, various recent super-resolution methods reconstruct high resolution images based on a weights determined from nearest neighbors in the local embedding of low resolution images. These methods suffer disadvantages from the finite number of samples and the nature of manifold learning techniques, and hence yield unrealistic reconstructed images. To address the problem, we apply canonical correlation analysis (CCA), which maximizes the correlation between the local neighbor relationships of high and low resolution images. We use it separately for reconstruction of global face appearance, and facial details. Experiments using a collection of frontal human faces show that the proposed algorithm improves reconstruction quality over existing state-of-the-art super-resolution algorithms, both visually, and using a quantitative peak signal-to-noise ratio assessment.", "We present a learning-based method to super-resolve face images using a kernel principal component analysis-based prior model. A prior probability is formulated based on the energy lying outside the span of principal components identified in a higher-dimensional feature space. This is used to regularize the reconstruction of the high-resolution image. 
We demonstrate with experiments that including higher-order correlations results in significant improvements", "A two-phase face hallucination approach is proposed in this paper to infer high-resolution face image from the low-resolution observation based on a set of training image pairs. The proposed locality preserving hallucination (LPH) algorithm combines locality preserving projection (LPP) and radial basis function (RBF) regression together to hallucinate the global high-resolution face. Furthermore, in order to compensate the inferred global face with detailed inartificial facial features, the neighbor reconstruction based face residue hallucination is used. Compared with existing approaches, the proposed LPH algorithm can generate global face more similar to the ground truth face efficiently, moreover, the patch structure and search strategy carefully designed for the neighbor reconstruction algorithm greatly reduce the computational complexity without diminishing the quality of high-resolution face detail. The details of synthetic high-resolution face are further improved by a global linear smoother. Experiments indicate that our approach can synthesize distinct high-resolution faces with various facial appearances such as facial expressions, eyeglasses efficiently." ] }
1812.11834
2907413770
Face restoration from low resolution and noise is important for face analysis and recognition applications. However, most existing face restoration models ignore the multi-scale issues of the face restoration problem, which remain not well solved. In this paper, we propose a Sequential Gating Ensemble Network (SGEN) for multi-scale, noise-robust face restoration. To endow the network with multi-scale representation ability, we first employ the principle of ensemble learning in designing the SGEN architecture. The SGEN aggregates multi-level base-encoders and base-decoders into the network, which enables the network to contain multiple scales of receptive field. Instead of combining these base-encoders/decoders directly with non-sequential operations, the SGEN treats base-encoders/decoders from different levels as sequential data. Specifically, visualization shows that SGEN learns to sequentially extract high-level information from base-encoders in a bottom-up manner and to restore low-level information from base-decoders in a top-down manner. In addition, we propose to realize the bottom-up and top-down information combination and selection with a Sequential Gating Unit (SGU). The SGU sequentially takes information from two different levels as inputs and decides the output based on one active input. Experimental results on a benchmark dataset demonstrate that our SGEN is more effective at multi-scale human face restoration, with more image details and less noise, than state-of-the-art image restoration models. Further utilizing an adversarial training scheme, SGEN also produces more visually preferred results than other models under subjective evaluation.
To overcome the drawbacks of global face-based restoration methods, local patch-based restoration methods decompose the face image into small patches, which can capture more facial details. These methods assume that the LR and HR face patch manifolds are locally isometric. Therefore, once the representation of the input LR patch over the LR training patches is obtained, the target HR patch can be reconstructed by transferring the reconstruction weights to the corresponding HR training patches. The work in @cite_36 proposed a least squares representation (LSR) framework that restores images using all the training patches, which incorporates more face priors. Due to the instability of LSR, @cite_22 introduced a weighted sparse representation (SR) with a sparsity constraint for face super-resolution. However, one main drawback of SR-based methods is their sensitivity to noise. Accordingly, @cite_14 @cite_35 proposed to reconstruct noise-corrupted LR images with weighted local patches, namely locality-constrained representation (LcR).
{ "cite_N": [ "@cite_36", "@cite_35", "@cite_14", "@cite_22" ], "mid": [ "2141631520", "2509704168", "2027325144", "2031349574" ], "abstract": [ "In video surveillance, the faces of interest are often of small size. Image resolution is an important factor affecting face recognition by human and computer. In this paper, we propose a new face hallucination method using eigentransformation. Different from most of the proposed methods based on probabilistic models, this method views hallucination as a transformation between different image styles. We use Principal Component Analysis (PCA) to fit the input face image as a linear combination of the low-resolution face images in the training set. The high-resolution image is rendered by replacing the low-resolution training images with high-resolution ones, while retaining the same combination coefficients. Experiments show that the hallucinated face images are not only very helpful for recognition by humans, but also make the automatic recognition procedure easier, since they emphasize the face difference by adding more high-frequency details.", "Face image super-resolution has attracted much attention in recent years. Many algorithms have been proposed. Among them, sparse representation (SR)-based face image super-resolution approaches are able to achieve competitive performance. However, these SR-based approaches only perform well under the condition that the input is noiseless or has small noise. When the input is corrupted by large noise, the reconstruction weights (or coefficients) of the input low-resolution (LR) patches using SR-based approaches will be seriously unstable, thus leading to poor reconstruction results. To this end, in this paper, we propose a novel SR-based face image super-resolution approach that incorporates smooth priors to enforce similar training patches having similar sparse coding coefficients. Specifically, we introduce the fused least absolute shrinkage and selection operator-based smooth constraint and locality-based smooth constraint to the least squares representation-based patch representation in order to obtain stable reconstruction weights, especially when the noise level of the input LR image is high. Experiments are carried out on the benchmark FEI face database and CMU+MIT face database. Visual and quantitative comparisons show that the proposed face image super-resolution method yields superior reconstruction results when the input LR face image is contaminated by strong noise.", "", "Sparse representation-based face hallucination approaches proposed so far use fixed l1 norm penalty to capture the sparse nature of face images, and thus hardly adapt readily to the statistical variability of underlying images. Additionally, they ignore the influence of spatial distances between the test image and training basis images on optimal reconstruction coefficients. Consequently, they cannot offer a satisfactory performance in practical face hallucination applications. In this paper, we propose a weighted adaptive sparse regularization (WASR) method to promote accuracy, stability and robustness for face hallucination reconstruction, in which a distance-inducing weighted lq norm penalty is imposed on the solution. With the adjustment to shrinkage parameter q , the weighted lq penalty function enables elastic description ability in the sparse domain, leading to more conservative sparsity in an ascending order of q . 
In particular, WASR with an optimal q > 1 can reasonably represent the less sparse nature of noisy images and thus remarkably boosts noise robust performance in face hallucination. Various experimental results on standard face database as well as real-world images show that our proposed method outperforms state-of-the-art methods in terms of both objective metrics and visual quality." ] }
1812.11834
2907413770
Face restoration from low resolution and noise is important for face analysis and recognition applications. However, most existing face restoration models ignore the multi-scale issues of the face restoration problem, which remain not well solved. In this paper, we propose a Sequential Gating Ensemble Network (SGEN) for multi-scale, noise-robust face restoration. To endow the network with multi-scale representation ability, we first employ the principle of ensemble learning in designing the SGEN architecture. The SGEN aggregates multi-level base-encoders and base-decoders into the network, which enables the network to contain multiple scales of receptive field. Instead of combining these base-encoders/decoders directly with non-sequential operations, the SGEN treats base-encoders/decoders from different levels as sequential data. Specifically, visualization shows that SGEN learns to sequentially extract high-level information from base-encoders in a bottom-up manner and to restore low-level information from base-decoders in a top-down manner. In addition, we propose to realize the bottom-up and top-down information combination and selection with a Sequential Gating Unit (SGU). The SGU sequentially takes information from two different levels as inputs and decides the output based on one active input. Experimental results on a benchmark dataset demonstrate that our SGEN is more effective at multi-scale human face restoration, with more image details and less noise, than state-of-the-art image restoration models. Further utilizing an adversarial training scheme, SGEN also produces more visually preferred results than other models under subjective evaluation.
In the past few years, convolutional neural networks (CNNs) @cite_29 have shown explosive popularity and success in various computer vision fields, such as image recognition @cite_6 , object detection @cite_3 , face recognition @cite_40 , and semantic segmentation @cite_32 . CNN-based image restoration algorithms have also shown excellent performance compared with previous state-of-the-art methods. SRCNN @cite_33 is a three-layer fully convolutional network trained end-to-end for image super-resolution. @cite_2 presented an ultra-resolution discriminative generative network (URDGN) that can ultra-resolve a very low-resolution face. Instead of building the network as a simple hierarchical structure, other works have also applied skip connections, which can be viewed as one kind of ensemble structure @cite_18 , to image restoration tasks. @cite_28 proposed SRResNet, which uses ResNet blocks in the generative model and achieves state-of-the-art peak signal-to-noise ratio (PSNR) performance for image super-resolution. In addition, they presented SRGAN, which utilizes adversarial training to achieve better visual quality than SRResNet. @cite_34 proposed a residual encoder-decoder network (RED-Net), which symmetrically links convolutional and deconvolutional layers with skip-layer connections.
{ "cite_N": [ "@cite_18", "@cite_33", "@cite_28", "@cite_29", "@cite_32", "@cite_3", "@cite_6", "@cite_40", "@cite_2", "@cite_34" ], "mid": [ "2963410064", "54257720", "2523714292", "2147800946", "1903029394", "2953106684", "2194775991", "2325939864", "2520930090", "2964046669" ], "abstract": [ "In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. 
Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "Deeper neural networks are more difficult to train. 
We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "The goal of this paper is face recognition – from either a single photograph or from a set of faces tracked in a video. Recent progress in this area has been due to two factors: (i) end to end learning for the task using a convolutional neural network (CNN), and (ii) the availability of very large scale training datasets. We make two contributions: first, we show how a very large scale dataset (2.6M images, over 2.6K people) can be assembled by a combination of automation and human in the loop, and discuss the trade off between data purity and time; second, we traverse through the complexities of deep network training and face recognition to present methods and procedures to achieve comparable state of the art results on the standard LFW and YTF face benchmarks.", "Conventional face super-resolution methods, also known as face hallucination, are limited up to (2 ! ! 4 ) scaling factors where (4 16 ) additional pixels are estimated for each given pixel. Besides, they become very fragile when the input low-resolution image size is too small that only little information is available in the input image. To address these shortcomings, we present a discriminative generative network that can ultra-resolve a very low resolution face image of size (16 16 ) pixels to its (8 ) larger version by reconstructing 64 pixels from a single pixel. We introduce a pixel-wise ( _2 ) regularization term to the generative model and exploit the feedback of the discriminative network to make the upsampled face images more similar to real ones. In our framework, the discriminative network learns the essential constituent parts of the faces and the generative network blends these parts in the most accurate fashion to the input image. Since only frontal and ordinary aligned images are used in training, our method can ultra-resolve a wide range of very low-resolution images directly regardless of pose and facial expression variations. Our extensive experimental evaluations demonstrate that the presented ultra-resolution by discriminative generative networks (UR-DGN) achieves more appealing results than the state-of-the-art.", "In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. 
The network is composed of multiple layers of convolution and deconvolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises corruptions. Deconvolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and deconvolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, the skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to deconvolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than recent state-of-the-art methods." ] }
1812.11834
2907413770
Face restoration from low resolution and noise is important for face analysis and recognition applications. However, most existing face restoration models ignore the multi-scale issues of the face restoration problem, which remain not well solved. In this paper, we propose a Sequential Gating Ensemble Network (SGEN) for multi-scale, noise-robust face restoration. To endow the network with multi-scale representation ability, we first employ the principle of ensemble learning in designing the SGEN architecture. The SGEN aggregates multi-level base-encoders and base-decoders into the network, which enables the network to contain multiple scales of receptive field. Instead of combining these base-encoders/decoders directly with non-sequential operations, the SGEN treats base-encoders/decoders from different levels as sequential data. Specifically, visualization shows that SGEN learns to sequentially extract high-level information from base-encoders in a bottom-up manner and to restore low-level information from base-decoders in a top-down manner. In addition, we propose to realize the bottom-up and top-down information combination and selection with a Sequential Gating Unit (SGU). The SGU sequentially takes information from two different levels as inputs and decides the output based on one active input. Experimental results on a benchmark dataset demonstrate that our SGEN is more effective at multi-scale human face restoration, with more image details and less noise, than state-of-the-art image restoration models. Further utilizing an adversarial training scheme, SGEN also produces more visually preferred results than other models under subjective evaluation.
However, the skip connections in @cite_28 @cite_34 fail to exploit the underlying sequential relationship among multi-level feature maps in the image restoration problem. Therefore, we design our SGEN following the goal of an autoencoder: it sequentially extracts high-level information from base-encoders in a bottom-up manner and restores low-level information from base-decoders in a top-down manner.
{ "cite_N": [ "@cite_28", "@cite_34" ], "mid": [ "2523714292", "2964046669" ], "abstract": [ "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "In this paper, we propose a very deep fully convolutional encoding-decoding framework for image restoration such as denoising and super-resolution. The network is composed of multiple layers of convolution and deconvolution operators, learning end-to-end mappings from corrupted images to the original ones. The convolutional layers act as the feature extractor, which capture the abstraction of image contents while eliminating noises corruptions. Deconvolutional layers are then used to recover the image details. We propose to symmetrically link convolutional and deconvolutional layers with skip-layer connections, with which the training converges much faster and attains a higher-quality local optimum. First, the skip connections allow the signal to be back-propagated to bottom layers directly, and thus tackles the problem of gradient vanishing, making training deep networks easier and achieving restoration performance gains consequently. Second, these skip connections pass image details from convolutional layers to deconvolutional layers, which is beneficial in recovering the original image. Significantly, with the large capacity, we can handle different levels of noises using a single model. Experimental results show that our network achieves better performance than recent state-of-the-art methods." ] }
1812.11741
2907694156
We consider multi-agent systems where agents' actions and beliefs are determined aleatorically, or "by the throw of dice". This system consists of possible worlds that assign distributions to independent random variables, and agents who assign probabilities to these possible worlds. We present a novel syntax and semantics for such a system, and show that they generalise Modal Logic. We also give a sound and complete calculus for reasoning in the base semantics, and a sound calculus for the full modal semantics, which we conjecture to be complete. Finally, we discuss some applications to reasoning about game-playing agents.
These approaches lose the simplicity of Boolean logics, as deductive systems must deal with propositions that are not independent. This limits their practicality, as well-defined semantics require the conditional probabilities of all atoms to be known. However, these approaches have been successfully combined with logic programming @cite_12 and machine learning @cite_14 . Feldman and Harel @cite_10 and Kozen @cite_6 gave a probabilistic variation of propositional dynamic logic for reasoning about the correctness of programs with random variables. Importantly, this work generalises a modal logic (PDL) as a many-valued logic.
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_6", "@cite_12" ], "mid": [ "2738790068", "", "2000108089", "2400548963" ], "abstract": [ "We present an implementation of a probabilistic first-order logic called TensorLog, in which classes of logical queries are compiled into differentiable functions in a neural-network infrastructure such as Tensorflow or Theano. This leads to a close integration of probabilistic logical reasoning with deep-learning infrastructure: in particular, it enables high-performance deep learning frameworks to be used for tuning the parameters of a probabilistic logic. Experimental results show that TensorLog scales to problems involving hundreds of thousands of knowledge-base triples and tens of thousands of examples.", "", "Abstract In this paper we give a probabilistic analog PPDL of Propositional Dynamic Logic. We prove a small model property and give a polynomial space decision procedure for formulas involving well-structured programs. We also give a deductive calculus and illustrate its use by calculating the expected running time of a simple random walk.", "The last two decades has seen the emergence of many different probabilistic logics that use logical languages to specify, and sometimes reason, with probability distributions. Probabilistic logics that support reasoning with probability distributions, such as ProbLog, use an implicit definition of an interaction rule to combine probabilistic evidence about atoms. In this paper, we show that this interaction rule is an example of a more general class of interactions that can be described by nonmonotonic logics. We furthermore show that such local interactions about the probability of an atom can be described by convolution. The resulting extended probabilistic logic supports nonmonotonic reasoning with probabilistic information." ] }
1812.11741
2907694156
We consider multi-agent systems where agents' actions and beliefs are determined aleatorically, or "by the throw of dice". This system consists of possible worlds that assign distributions to independent random variables, and agents who assign probabilities to these possible worlds. We present a novel syntax and semantics for such a system, and show that they generalise Modal Logic. We also give a sound and complete calculus for reasoning in the base semantics, and a sound calculus for the full modal semantics, which we conjecture to be complete. Finally, we discuss some applications to reasoning about game-playing agents.
More general foundational work on probabilistic reasoning was done by de Finetti @cite_3 , who established an epistemic notion of probability based on what an agent would consider to be a rational wager (the Dutch book argument). In @cite_21 , Milne incorporates these ideas into the logic of conditional events. Stalnaker @cite_9 has also considered conditional events and has presented conditional logic @cite_17 . Here, conditional refers to the interpretation of one proposition being contingent on another, although this contingency is neither quantified nor assigned a probability.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_3", "@cite_17" ], "mid": [ "1590902735", "1982537065", "2172298389", "" ], "abstract": [ "A conditional sentence expresses a proposition which is a function of two other propositions, yet not one which is a truth function of those propositions. I may know the truth values of “Willie Mays played in the American League” and “Willie Mays hit four hundred” without knowing whether or not Mays, would have hit four hundred if he had played in the American League. This fact has tended to puzzle, displease, or delight philosophers, and many have felt that it is a fact that calls for some comment or explanation. It has given rise to a number of philosophical problems; I shall discuss three of these.", "This article begins by outlining some of the history--beginning with brief remarks of Quine's-of work on conditional assertions and conditional events. The upshot of the historical narrative is that diverse works from various starting points have circled around a nexus of ideas without convincingly tying them together. Section 3 shows how ideas contained in a neglected article of de Finetti's lead to a unified treatment of the topics based on the identification of conditional events as the objects of conditional bets. The penultimate section explores some of the consequences of the resulting logic of conditional events while the last defends it.", "Part 7 A preliminary survey: heads and tails - preliminary considerations heads and tails - the random process laws of \"large numbers\" the \"central limit theorem\". Part 8 Random processes with independent increments: the case of asymptotic normality the Wiener-Levy process behaviour and asymptotic behaviour ruin problems ballot problems. Part 9 An introduction to other types of stochastic process: Markov processes stationary processes. Part 10 Problems in higher dimensions: second-order characteristics and the normal distribution the discrete case the continuous case the case of spherical symmetry. Part 11 Inductive reasoning, statistical inference: the basic formulation and preliminary clarifications the case of independence and the case of dependence exchangeability. Part 12 Mathematical statistics: the scope and limits of the treatment the likelihood principle and sufficient statistics a Bayesian approach to \"estimation\" and \"hypothesis testing\" the connections with decision theory.", "" ] }
1812.11321
2949347774
A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity. In this paper, we explore capsule networks for relation extraction in a multi-instance multi-label learning framework and propose a novel neural approach based on capsule networks with attention mechanisms. We evaluate our method on different benchmarks, and it is demonstrated that our method improves the precision of the predicted relations. In particular, we show that capsule networks improve relation extraction for multiple entity pairs.
In recent years, neural network (NN) models have shown superior performance over approaches using hand-crafted features in various tasks. CNNs were the first deep learning models applied to relation extraction @cite_14 . Variants of convolutional networks include the piecewise CNN (PCNN) @cite_16 , instance-level selective attention CNN @cite_9 , rank CNN @cite_15 , attention and memory CNN @cite_11 , and syntax-aware CNN @cite_6 . Recurrent neural networks (RNNs) are another popular choice and have been used in recent works in the form of attention RNNs @cite_3 , context-aware long short-term memory units (LSTMs) @cite_23 , graph LSTMs @cite_12 , and ensemble LSTMs @cite_0 .
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_6", "@cite_3", "@cite_0", "@cite_23", "@cite_15", "@cite_16", "@cite_12", "@cite_11" ], "mid": [ "2950371387", "2515462165", "2783378231", "", "2787051437", "", "2963912690", "2251135946", "2612364175", "2740759433" ], "abstract": [ "Relation classification is an important semantic processing task for which state-ofthe-art systems still rely on costly handcrafted features. In this work we tackle the relation classification task using a convolutional neural network that performs classification by ranking (CR-CNN). We propose a new pairwise ranking loss function that makes it easy to reduce the impact of artificial classes. We perform experiments using the the SemEval-2010 Task 8 dataset, which is designed for the task of classifying the relationship between two nominals marked in a sentence. Using CRCNN, we outperform the state-of-the-art for this dataset and achieve a F1 of 84.1 without using any costly handcrafted features. Additionally, our experimental results show that: (1) our approach is more effective than CNN followed by a softmax classifier; (2) omitting the representation of the artificial class Other improves both precision and recall; and (3) using only word embeddings as input features is enough to achieve state-of-the-art results if we consider only the text between the two target nominals.", "", "Distant supervised relation extraction is an efficient approach to scale relation extraction to very large corpora, and has been widely used to find novel relational facts from plain text. Recent studies on neural relation extraction have shown great progress on this task via modeling the sentences in low-dimensional spaces, but seldom considered syntax information to model the entities. In this paper, we propose to learn syntax-aware entity embedding for neural relation extraction. First, we encode the context of entities on a dependency tree as sentence-level entity embedding based on tree-GRU. Then, we utilize both intra-sentence and inter-sentence attentions to obtain sentence set-level entity embedding over all sentences containing the focus entity pair. Finally, we combine both sentence embedding and entity embedding for relation classification. We conduct experiments on a widely used real-world dataset and the experimental results show that our model can make full use of all informative instances and achieve state-of-the-art performance of relation extraction.", "", "Relation extraction has been widely studied to extract new relational facts from open corpus. Previous relation extraction methods are faced with the problem of wrong labels and noisy data, which substantially decrease the performance of the model. In this paper, we propose an ensemble neural network model - Adaptive Boosting LSTMs with Attention, to more effectively perform relation extraction. Specifically, our model first employs the recursive neural network LSTMs to embed each sentence. Then we import attention into LSTMs by considering that the words in a sentence do not contribute equally to the semantic meaning of the sentence. Next via adaptive boosting, we build strategically several such neural classifiers. By ensembling multiple such LSTM classifiers with adaptive boosting, we could build a more effective and robust joint ensemble neural networks based relation extractor. Experiment results on real dataset demonstrate the superior performance of the proposed model, improving F1-score by about 8 compared to the state-of-the-art models. 
The code of this work is publicly available on this https URL", "", "", "Two problems arise when using distant supervision for relation extraction. First, in this method, an already existing knowledge base is heuristically aligned to texts, and the alignment results are treated as labeled data. However, the heuristic alignment can fail, resulting in wrong label problem. In addition, in previous approaches, statistical models have typically been applied to ad hoc features. The noise that originates from the feature extraction process can cause poor performance. In this paper, we propose a novel model dubbed the Piecewise Convolutional Neural Networks (PCNNs) with multi-instance learning to address these two problems. To solve the first problem, distant supervised relation extraction is treated as a multi-instance problem in which the uncertainty of instance labels is taken into account. To address the latter problem, we avoid feature engineering and instead adopt convolutional architecture with piecewise max pooling to automatically learn relevant features. Experiments show that our method is effective and outperforms several competitive baseline methods.", "Past work in relation extraction has focused on binary relations in single sentences. Recent NLP inroads in high-value domains have sparked interest in the more general setting of extracting n-ary relations that span multiple sentences. In this paper, we explore a general relation extraction framework based on graph long short-term memory networks (graph LSTMs) that can be easily extended to cross-sentence n-ary relation extraction. The graph formulation provides a unified way of exploring different LSTM approaches and incorporating various intra-sentential and inter-sentential dependencies, such as sequential, syntactic, and discourse relations. A robust contextual representation is learned for the entities, which serves as input to the relation classifier. This simplifies handling of relations with arbitrary arity, and enables multi-task learning with related relations. We evaluate this framework in two important precision medicine settings, demonstrating its effectiveness with both conventional supervised learning and distant supervision. Cross-sentence extraction produced larger knowledge bases. and multi-task learning significantly improved extraction accuracy. A thorough analysis of various LSTM approaches yielded useful insight the impact of linguistic analysis on extraction accuracy.", "" ] }
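The related-work passage in the record above cites the piecewise CNN (PCNN) among the convolutional variants. As a rough illustration of the core idea only, the Python sketch below shows piecewise max pooling: convolutional features are split at the two entity positions and each segment is pooled separately. The function name, array shapes and toy inputs are assumptions made for this sketch, not code from any of the cited papers.

```python
import numpy as np

def piecewise_max_pool(conv_features, head_pos, tail_pos):
    """Piecewise max pooling over convolutional features (PCNN-style idea).

    conv_features: (seq_len, n_filters) array of convolution outputs.
    head_pos, tail_pos: token indices of the two entities; they split the
    sentence into three segments that are pooled separately, so the sentence
    representation keeps coarse positional structure around the entities.
    """
    lo, hi = sorted((head_pos, tail_pos))
    segments = [conv_features[:lo + 1],
                conv_features[lo + 1:hi + 1],
                conv_features[hi + 1:]]
    pooled = [seg.max(axis=0) if len(seg) > 0 else np.zeros(conv_features.shape[1])
              for seg in segments]
    return np.concatenate(pooled)  # shape: (3 * n_filters,)

# Toy usage: a 10-token sentence, 4 convolution filters, entities at positions 2 and 6.
feats = np.random.randn(10, 4)
print(piecewise_max_pool(feats, head_pos=2, tail_pos=6).shape)  # (12,)
```

In a full relation extractor this pooled vector would feed a softmax classifier, possibly after instance-level attention over all sentences mentioning the same entity pair.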
1812.11321
2949347774
A capsule is a group of neurons, whose activity vector represents the instantiation parameters of a specific type of entity. In this paper, we explore the capsule networks used for relation extraction in a multi-instance multi-label learning framework and propose a novel neural approach based on capsule networks with attention mechanisms. We evaluate our method with different benchmarks, and it is demonstrated that our method improves the precision of the predicted relations. Particularly, we show that capsule networks improve multiple entity pairs relation extraction.
Recently, the capsule network has been proposed to address the representational limitations of CNNs and RNNs. @cite_13 replaced the scalar-output feature detectors of CNNs with vector-output capsules and max-pooling with routing-by-agreement. @cite_21 proposed a new iterative routing procedure among capsule layers based on the EM algorithm. For natural language processing tasks, @cite_1 explored capsule networks for text classification. @cite_19 designed two dynamic routing policies to aggregate the outputs of an RNN or CNN encoding layer into a final encoding vector. @cite_20 proposed an RNN-based capsule model for sentiment analysis. To the best of our knowledge, no work has yet investigated the performance of capsule networks on relation extraction tasks.
{ "cite_N": [ "@cite_21", "@cite_1", "@cite_19", "@cite_13", "@cite_20" ], "mid": [ "2785994986", "2796138868", "2805853672", "2963703618", "2788347302" ], "abstract": [ "A capsule is a group of neurons whose outputs represent different properties of the same entity. Each layer in a capsule network contains many capsules [a group of capsules forms a capsule layer and can be used in place of a traditional layer in a neural net]. We describe a version of capsules in which each capsule has a logistic unit to represent the presence of an entity and a 4x4 matrix which could learn to represent the relationship between that entity and the viewer (the pose). A capsule in one layer votes for the pose matrix of many different capsules in the layer above by multiplying its own pose matrix by trainable viewpoint-invariant transformation matrices that could learn to represent part-whole relationships. Each of these votes is weighted by an assignment coefficient. These coefficients are iteratively updated for each image using the Expectation-Maximization algorithm such that the output of each capsule is routed to a capsule in the layer above that receives a cluster of similar votes. The transformation matrices are trained discriminatively by backpropagating through the unrolled iterations of EM between each pair of adjacent capsule layers. On the smallNORB benchmark, capsules reduce the number of test errors by 45 compared to the state-of-the-art. Capsules also show far more resistance to white box adversarial attack than our baseline convolutional neural network.", "In this study, we explore capsule networks with dynamic routing for text classification. We propose three strategies to stabilize the dynamic routing process to alleviate the disturbance of some noise capsules which may contain \"background\" information or have not been successfully trained. A series of experiments are conducted with capsule networks on six text classification benchmarks. Capsule networks achieve state of the art on 4 out of 6 datasets, which shows the effectiveness of capsule networks for text classification. We additionally show that capsule networks exhibit significant improvement when transfer single-label to multi-label text classification over strong baseline methods. To the best of our knowledge, this is the first work that capsule networks have been empirically investigated for text modeling.", "While much progress has been made in how to encode a text sequence into a sequence of vectors, less attention has been paid to how to aggregate these preceding vectors (outputs of RNN CNN) into fixed-size encoding vector. Usually, a simple max or average pooling is used, which is a bottom-up and passive way of aggregation and lack of guidance by task information. In this paper, we propose an aggregation mechanism to obtain a fixed-size encoding with a dynamic routing policy. The dynamic routing policy is dynamically deciding that what and how much information need be transferred from each word to the final encoding of the text sequence. Following the work of Capsule Network, we design two dynamic routing policies to aggregate the outputs of RNN CNN encoding layer into a final encoding vector. Compared to the other aggregation methods, dynamic routing can refine the messages according to the state of final encoding vector. Experimental results on five text classification tasks show that our method outperforms other aggregating models by a significant margin. 
Related source code is released on our github page.", "A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.", "In this paper, we propose RNN-Capsule, a capsule model based on Recurrent Neural Network (RNN) for sentiment analysis. For a given problem, one capsule is built for each sentiment category e.g., ‘positive’, ‘neutral’, and ‘negative’. Each capsule has an attribute, a state, and three modules: representation module, probability module, and reconstruction module. The attribute of a capsule is the assigned sentiment category. Given an instance encoded in hidden vectors by a typical RNN, the representation module builds capsule representation by the attention mechanism. Based on capsule representation, the probability module computes the capsule’s state probability. A capsule’s state is active if its state probability is the largest among all capsules for the given instance, and inactive otherwise. On two benchmark datasets (i.e., Movie Review and Stanford Sentiment Treebank) and one proprietary dataset (i.e., Hospital Feedback), we show that RNN-Capsule achieves state-of-the-art performance on sentiment classification. More importantly, without using any linguistic knowledge, RNN-Capsule is capable of outputting words with sentiment tendencies reflecting capsules’ attributes. The words well reflect the domain specificity of the dataset. To the best of our knowledge, this is the first capsule model for sentiment analysis." ] }
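The record above repeatedly refers to routing-by-agreement between capsule layers. The NumPy sketch below is a minimal rendering of that iterative procedure (coupling coefficients via softmax, squashed weighted sums, agreement-based logit updates), assuming the prediction vectors are already computed. The shapes, iteration count and variable names are illustrative assumptions, and the learned transformation matrices of a full capsule layer are omitted.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Non-linearity that shrinks short vectors toward 0 and long vectors toward unit length."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iter=3):
    """Routing-by-agreement between two capsule layers.

    u_hat: (n_in, n_out, dim_out) prediction vectors from lower-level capsules.
    Returns the (n_out, dim_out) output capsules.
    """
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                                  # routing logits
    for _ in range(n_iter):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)     # coupling coefficients
        s = (c[:, :, None] * u_hat).sum(axis=0)                  # weighted sum per output capsule
        v = squash(s)                                            # squashed output capsules
        b = b + np.einsum('ijk,jk->ij', u_hat, v)                # reward agreeing predictions
    return v

# Toy usage: 6 lower-level capsules voting for 3 upper-level capsules of dimension 4.
votes = np.random.randn(6, 3, 4)
print(dynamic_routing(votes).shape)  # (3, 4)
```

The length of each output capsule can then be read as the probability that the corresponding entity (or, in a relation-extraction setting, relation label) is present.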
1812.11252
2908253384
As science advances, the academic community has published millions of research papers. Researchers devote time and effort to search relevant manuscripts when writing a paper or simply to keep up with current research. In this paper, we consider the problem of citation recommendation by extending a set of known-to-be-relevant references. Our analysis shows the degrees of cited papers in the subgraph induced by the citations of a paper, called projection graph, follow a power law distribution. Existing popular methods are only good at finding the long tail papers, the ones that are highly connected to others. In other words, the majority of cited papers are loosely connected in the projection graph but they are not going to be found by existing methods. To address this problem, we propose to combine author, venue and keyword information to interpret the citation behavior behind those loosely connected papers. Results show that different methods are finding cited papers with widely different properties. We suggest multiple recommended lists by different algorithms could satisfy various users for a real citation recommendation system. Moreover, we also explore the fast local approximation for combined methods in order to improve the efficiency.
Given a "basket" of citations, @cite_31 explore the use of Collaborative Filtering (CF) to recommend papers that would be suitable additional references for a target research paper. They create a ratings matrix where citing papers correspond to users and citations correspond to items. The experiments show CF could generate high quality recommendations. As a follow-up, @cite_16 describe and test different techniques for combining Collaborative Filtering and Content-Based Filtering. A user study shows many of CF-CBF hybrid recommender algorithms can generate research paper recommendations that users were happy to receive. However, offline experiments show those hybrid algorithms did not perform well. In their opinion, the sequential nature of these hybrid algorithms: the second module is only able to make recommendations seeded by the results of the first module. To address this problem, @cite_19 propose to fuse the two steps by running a CF and a CBF recommender in parallel and blending the resulting ranked lists. The first items on the combined recommendation list are those items which appeared on both lists, ordered by the sum of their ranks. Surprisingly, Collaborative Filtering outperforms all hybrid algorithms in their experiments.
{ "cite_N": [ "@cite_19", "@cite_31", "@cite_16" ], "mid": [ "2139375986", "2116655493", "2142574815" ], "abstract": [ "All new researchers face the daunting task of familiarizing themselves with the existing body of research literature in their respective fields. Recommender algorithms could aid in preparing these lists, but most current algorithms do not understand how to rate the importance of a paper within the literature, which might limit their effectiveness in this domain. We explore several methods for augmenting existing collaborative and content-based filtering algorithms with measures of the influence of a paper within the web of citations. We measure influence using well-known algorithms, such as HITS and PageRank, for measuring a node's importance in a graph. Among these augmentation methods is a novel method for using importance scores to influence collaborative filtering. We present a task-centered evaluation, including both an offline analysis and a user study, of the performance of the algorithms. Results from these studies indicate that collaborative filtering outperforms content-based approaches for generating introductory reading lists.", "Collaborative filtering has proven to be valuable for recommending items in many different domains. In this paper, we explore the use of collaborative filtering to recommend research papers, using the citation web between papers to create the ratings matrix. Specifically, we tested the ability of collaborative filtering to recommend citations that would be suitable additional references for a target research paper. We investigated six algorithms for selecting citations, evaluating them through offline experiments against a database of over 186,000 research papers contained in ResearchIndex. We also performed an online experiment with over 120 users to gauge user opinion of the effectiveness of the algorithms and of the utility of such recommendations for common research tasks. We found large differences in the accuracy of the algorithms in the offline experiment, especially when balanced for coverage. In the online experiment, users felt they received quality recommendations, and were enthusiastic about the idea of receiving recommendations in this domain.", "The number of research papers available is growing at a staggering rate. Researchers need tools to help them find the papers they should read among all the papers published each year. In this paper, we present and experiment with hybrid recommender algorithms that combine Collaborative Filtering and Content-based. Filtering to recommend research papers to users. Our hybrid algorithms combine the strengths of each filtering approach to address their individual weaknesses. We evaluated our algorithms through offline experiments on a database of 102, 000 research papers, and through an online experiment with 110 users. For both experiments we used a dataset created from the CiteSeer repository of computer science research papers. We developed separate English and Portuguese versions of the interface and specifically recruited American and Brazilian users to test for cross-cultural effects. Our results show that users value paper recommendations, that the hybrid algorithms can be successfully combined, that different algorithms are more suitable for recommending different kinds of papers, and that users with different levels of experience perceive recommendations differently These results can be applied to develop recommender systems for other types of digital libraries." ] }