aid (string, 9–15 chars) | mid (string, 7–10 chars) | abstract (string, 78–2.56k chars) | related_work (string, 92–1.77k chars) | ref_abstract (dict) |
---|---|---|---|---|
1812.00045
|
2903445514
|
Deep reinforcement learning (DRL) has achieved great successes in recent years with the help of novel methods and higher compute power. However, there are still several challenges to be addressed, such as convergence to locally optimal policies and long training times. In this paper, firstly, we augment the Asynchronous Advantage Actor-Critic (A3C) method with a novel self-supervised auxiliary task, i.e., measuring temporal closeness to terminal states, namely A3C-TP. Secondly, we propose a new framework where planning algorithms such as Monte Carlo tree search or other sources of (simulated) demonstrators can be integrated into asynchronous distributed DRL methods. Compared to vanilla A3C, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.
|
A3C (Asynchronous Advantage Actor-Critic) @cite_27 is an algorithm that employs an asynchronous training scheme (using multiple CPU cores) for efficiency. It is an on-policy RL method that does not use an experience replay buffer. A3C allows multiple workers to simultaneously interact with the environment and compute gradients locally. All the workers pass their computed local gradients to a global neural network that performs the optimization and synchronizes with the workers asynchronously. There is also the A2C (Advantage Actor-Critic) method, a synchronous variant that waits for all workers to finish and combines their gradients to update the global neural network.
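To make the worker/global-network interaction concrete, below is a minimal, hedged sketch of the A3C update pattern in PyTorch. The one-step toy environment, network sizes, and the thread-plus-lock setup are illustrative simplifications (the actual A3C performs lock-free Hogwild-style updates across processes), not the paper's implementation.

```python
# Minimal sketch of the A3C worker/global-network update pattern.
import threading
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, N_ACTIONS = 4, 2

class ActorCritic(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Linear(OBS_DIM, 32)
        self.pi = nn.Linear(32, N_ACTIONS)   # policy head
        self.v = nn.Linear(32, 1)            # value head
    def forward(self, x):
        h = torch.relu(self.body(x))
        return F.log_softmax(self.pi(h), dim=-1), self.v(h).squeeze(-1)

class ToyEnv:
    """Hypothetical one-step environment: reward 1 for action 0, else 0."""
    def reset(self):
        return torch.randn(OBS_DIM)
    def step(self, action):
        return torch.randn(OBS_DIM), float(action == 0), True  # obs, reward, done

def worker(global_net, optimizer, lock, updates=200):
    local_net, env = ActorCritic(), ToyEnv()
    for _ in range(updates):
        local_net.load_state_dict(global_net.state_dict())  # sync with global net
        obs = env.reset()
        log_pi, value = local_net(obs)
        action = torch.multinomial(log_pi.exp(), 1).item()
        _, reward, _ = env.step(action)
        advantage = torch.tensor(reward) - value            # 1-step return = reward
        policy_loss = -log_pi[action] * advantage.detach()
        value_loss = advantage.pow(2)
        entropy = -(log_pi.exp() * log_pi).sum()
        loss = policy_loss + 0.5 * value_loss - 0.01 * entropy
        local_net.zero_grad()
        loss.backward()                                     # gradients live in local_net
        with lock:                                          # hand local gradients to the global net
            for lp, gp in zip(local_net.parameters(), global_net.parameters()):
                gp.grad = lp.grad.clone()
            optimizer.step()

global_net = ActorCritic()
opt = torch.optim.Adam(global_net.parameters(), lr=1e-3)
lock = threading.Lock()
threads = [threading.Thread(target=worker, args=(global_net, opt, lock)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```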
|
{
"cite_N": [
"@cite_27"
],
"mid": [
"2260756217"
],
"abstract": [
"We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input."
]
}
|
1812.00045
|
2903445514
|
Deep reinforcement learning (DRL) has achieved great successes in recent years with the help of novel methods and higher compute power. However, there are still several challenges to be addressed, such as convergence to locally optimal policies and long training times. In this paper, firstly, we augment the Asynchronous Advantage Actor-Critic (A3C) method with a novel self-supervised auxiliary task, i.e., measuring temporal closeness to terminal states, namely A3C-TP. Secondly, we propose a new framework where planning algorithms such as Monte Carlo tree search or other sources of (simulated) demonstrators can be integrated into asynchronous distributed DRL methods. Compared to vanilla A3C, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.
|
The UNREAL framework @cite_35 is built on top of A3C. In particular, UNREAL proposes unsupervised auxiliary tasks (e.g., reward prediction) to speed up learning; these tasks require no additional feedback from the environment. In contrast to A3C, UNREAL uses an experience replay buffer that is sampled with higher priority given to positively rewarded interactions to improve the critic network.
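As a rough illustration of these two ingredients, the sketch below keeps rewarding and zero-reward transitions in separate pools (so rare rewards are replayed far more often than under uniform sampling) and adds a reward-sign prediction auxiliary loss. The `predictor` head and all sizes are hypothetical placeholders.

```python
# Hedged sketch of UNREAL-style skewed replay and reward prediction.
import random
import torch
import torch.nn as nn

class SkewedReplay:
    """Keep rewarding and zero-reward transitions separate; sample them 50/50."""
    def __init__(self, capacity=2000):
        self.rewarding, self.zero, self.capacity = [], [], capacity
    def add(self, transition, reward):
        bucket = self.rewarding if reward != 0 else self.zero
        bucket.append(transition)
        del bucket[:-self.capacity]                 # keep only the newest entries
    def sample(self):
        pools = [p for p in (self.rewarding, self.zero) if p]
        return random.choice(random.choice(pools)) if pools else None

def reward_prediction_loss(frame_feats, next_reward, predictor):
    # auxiliary task: classify the reward following a short frame history as
    # negative / zero / positive -- no extra environment feedback is needed
    target = torch.tensor([0 if next_reward < 0 else (1 if next_reward == 0 else 2)])
    return nn.functional.cross_entropy(predictor(frame_feats), target)

predictor = nn.Linear(16, 3)                        # toy stand-in for the aux head
loss = reward_prediction_loss(torch.randn(1, 16), 1.0, predictor)
```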
|
{
"cite_N": [
"@cite_35"
],
"mid": [
"2950872548"
],
"abstract": [
"Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880 expert human performance, and a challenging suite of first-person, three-dimensional tasks leading to a mean speedup in learning of 10 @math and averaging 87 expert human performance on Labyrinth."
]
}
|
1812.00045
|
2903445514
|
Deep reinforcement learning (DRL) has achieved great successes in recent years with the help of novel methods and higher compute power. However, there are still several challenges to be addressed, such as convergence to locally optimal policies and long training times. In this paper, firstly, we augment the Asynchronous Advantage Actor-Critic (A3C) method with a novel self-supervised auxiliary task, i.e., measuring temporal closeness to terminal states, namely A3C-TP. Secondly, we propose a new framework where planning algorithms such as Monte Carlo tree search or other sources of (simulated) demonstrators can be integrated into asynchronous distributed DRL methods. Compared to vanilla A3C, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.
|
Monte Carlo Tree Search (MCTS) is a best-first search algorithm that gained traction after its breakthrough performance in Go @cite_17 . Beyond game-playing agents, MCTS has been employed in a variety of domains such as robotics @cite_22 @cite_19 and Sokoban puzzle generation @cite_23 . A recent work @cite_15 provided an excellent unification of MCTS and RL.
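Since the paragraph only names MCTS, here is a compact, self-contained UCT-style sketch of its four phases (selection, expansion, rollout, backup). The one-player "count to 10" toy state exists only to make the code runnable; it is unrelated to Go, Pommerman, or the cited domains.

```python
# Compact UCT-style MCTS sketch: selection / expansion / rollout / backup.
import math, random

class State:
    """Toy game: start at 0, add 1 or 2 per move; reaching exactly 10 scores 1."""
    def __init__(self, total=0):
        self.total = total
    def actions(self):
        return [1, 2]
    def step(self, a):
        return State(self.total + a)
    def terminal(self):
        return self.total >= 10
    def reward(self):
        return 1.0 if self.total == 10 else 0.0

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = {}, 0, 0.0

def uct_child(node, c=1.4):
    # pick the child maximizing the UCB1 score: exploitation + exploration bonus
    return max(node.children.values(),
               key=lambda n: n.value / n.visits + c * math.sqrt(math.log(node.visits) / n.visits))

def mcts(root_state, iters=2000):
    root = Node(root_state)
    for _ in range(iters):
        node = root
        # 1) selection: descend while the node is fully expanded
        while node.children and len(node.children) == len(node.state.actions()):
            node = uct_child(node)
        # 2) expansion: add one unvisited child
        if not node.state.terminal():
            a = random.choice([a for a in node.state.actions() if a not in node.children])
            node.children[a] = Node(node.state.step(a), node)
            node = node.children[a]
        # 3) rollout: random play to a terminal state
        s = node.state
        while not s.terminal():
            s = s.step(random.choice(s.actions()))
        r = s.reward()
        # 4) backup: propagate the outcome to the root
        while node:
            node.visits += 1
            node.value += r
            node = node.parent
    return max(root.children, key=lambda a: root.children[a].visits)

print(mcts(State()))  # most-visited first move under the toy reward
```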
|
{
"cite_N": [
"@cite_22",
"@cite_19",
"@cite_23",
"@cite_15",
"@cite_17"
],
"mid": [
"2573637607",
"2747605198",
"2729964169",
"2778917778",
"1714211023"
],
"abstract": [
"Multi-robot teams are useful in a variety of task allocation domains such as warehouse automation and surveillance. Robots in such domains perform tasks at given locations and specific times, and are allocated tasks to optimize given team objectives. We propose an efficient, satisficing and centralized Monte Carlo Tree Search based algorithm exploiting branch and bound paradigm to solve the multi-robot task allocation problem with spatial, temporal and other side constraints. Unlike previous heuristics proposed for this problem, our approach offers theoretical guarantees and finds optimal solutions for some non-trivial data sets.",
"This paper considers the problem of active object recognition using touch only. The focus is on adaptively selecting a sequence of wrist poses that achieves accurate recognition by enclosure grasps. It seeks to minimize the number of touches and maximize recognition confidence. The actions are formulated as wrist poses relative to each other, making the algorithm independent of absolute workspace coordinates. The optimal sequence is approximated by Monte Carlo tree search. We demonstrate results in a physics engine and on a real robot. In the physics engine, most object instances were recognized in at most 16 grasps. On a real robot, our method recognized objects in 2–9 grasps and outperformed a greedy baseline.",
"",
"Fuelled by successes in Computer Go, Monte Carlo tree search (MCTS) has achieved wide-spread adoption within the games community. Its links to traditional reinforcement learning (RL) methods have been outlined in the past; however, the use of RL techniques within tree search has not been thoroughly studied yet. In this paper we re-examine in depth this close relation between the two fields; our goal is to improve the cross-awareness between the two communities. We show that a straightforward adaptation of RL semantics within tree search can lead to a wealth of new algorithms, for which the traditional MCTS is only one of the variants. We confirm that planning methods inspired by RL in conjunction with online search demonstrate encouraging results on several classic board games and in arcade video game competitions, where our algorithm recently ranked first. Our study promotes a unified view of learning, planning, and search.",
"A Monte-Carlo evaluation consists in estimating a position by averaging the outcome of several random continuations. The method can serve as an evaluation function at the leaves of a min-max tree. This paper presents a new framework to combine tree search with Monte-Carlo evaluation, that does not separate between a min-max phase and a Monte-Carlo phase. Instead of backing-up the min-max value close to the root, and the average value at some depth, a more general backup operator is defined that progressively changes from averaging to minmax as the number of simulations grows. This approach provides a finegrained control of the tree growth, at the level of individual simulations, and allows efficient selectivity. The resulting algorithm was implemented in a 9 × 9 Go-playing program, Crazy Stone, that won the 10th KGS computer-Go tournament."
]
}
|
1812.00045
|
2903445514
|
Deep reinforcement learning (DRL) has achieved great successes in recent years with the help of novel methods and higher compute power. However, there are still several challenges to be addressed, such as convergence to locally optimal policies and long training times. In this paper, firstly, we augment the Asynchronous Advantage Actor-Critic (A3C) method with a novel self-supervised auxiliary task, i.e., measuring temporal closeness to terminal states, namely A3C-TP. Secondly, we propose a new framework where planning algorithms such as Monte Carlo tree search or other sources of (simulated) demonstrators can be integrated into asynchronous distributed DRL methods. Compared to vanilla A3C, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.
|
Approaches such as DAGGER @cite_37 or its extended version @cite_10 formulate imitation learning as a supervised learning problem where the aim is to match the performance of the demonstrator. However, the performance of agents trained with these methods is upper-bounded by that of the demonstrator.
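The aggregation idea behind DAGGER fits in a few lines; the sketch below is a schematic DAgger-style loop in which the learner's own rollouts are relabeled by the demonstrator and aggregated into one growing supervised dataset. `ToyEnv` and the lambda expert are hypothetical stand-ins, and the standard algorithm's decaying expert/learner mixing coefficient is omitted for brevity.

```python
# Schematic DAgger-style loop: roll out the learner, relabel with the expert,
# aggregate, and retrain a supervised classifier as the new policy.
import numpy as np
from sklearn.linear_model import LogisticRegression

class ToyEnv:
    """Hypothetical environment: observations are random 2-D points."""
    def reset(self):
        self.t = 0
        return np.random.randn(2)
    def step(self, action):
        self.t += 1
        return np.random.randn(2), self.t >= 20        # (next obs, done)

def dagger(env, expert, iterations=5, horizon=100):
    X, y = [], []                                       # aggregated dataset D
    policy = None
    for _ in range(iterations):
        obs = env.reset()
        for _ in range(horizon):
            if policy is None:                          # first pass: follow the expert
                action = expert(obs)
            else:                                       # later passes: follow the learner
                action = int(policy.predict(obs.reshape(1, -1))[0])
            X.append(obs)
            y.append(expert(obs))                       # expert relabels every visited state
            obs, done = env.step(action)
            if done:
                break
        policy = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
    return policy

expert = lambda obs: int(obs[0] > 0)                    # toy demonstrator to imitate
learned_policy = dagger(ToyEnv(), expert)
```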
|
{
"cite_N": [
"@cite_37",
"@cite_10"
],
"mid": [
"1931877416",
"2950735232"
],
"abstract": [
"Sequential prediction problems such as imitation learning, where future observations depend on previous predictions (actions), violate the common i.i.d. assumptions made in statistical learning. This leads to poor performance in theory and often in practice. Some recent approaches provide stronger guarantees in this setting, but remain somewhat unsatisfactory as they train either non-stationary or stochastic policies and require a large number of iterations. In this paper, we propose a new iterative algorithm, which trains a stationary deterministic policy, that can be seen as a no regret algorithm in an online learning setting. We show that any such no regret algorithm, combined with additional reduction assumptions, must find a policy with good performance under the distribution of observations it induces in such sequential settings. We demonstrate that this new approach outperforms previous approaches on two challenging imitation learning problems and a benchmark sequence labeling problem.",
"Researchers have demonstrated state-of-the-art performance in sequential decision making problems (e.g., robotics control, sequential prediction) with deep neural network models. One often has access to near-optimal oracles that achieve good performance on the task during training. We demonstrate that AggreVaTeD --- a policy gradient extension of the Imitation Learning (IL) approach of (Ross & Bagnell, 2014) --- can leverage such an oracle to achieve faster and better solutions with less training data than a less-informed Reinforcement Learning (RL) technique. Using both feedforward and recurrent neural network predictors, we present stochastic gradient procedures on a sequential prediction task, dependency-parsing from raw image data, as well as on various high dimensional robotics control problems. We also provide a comprehensive theoretical study of IL that demonstrates we can expect up to exponentially lower sample complexity for learning with AggreVaTeD than with RL algorithms, which backs our empirical findings. Our results and theory indicate that the proposed approach can achieve superior performance with respect to the oracle when the demonstrator is sub-optimal."
]
}
|
1812.00045
|
2903445514
|
Deep reinforcement learning (DRL) has achieved great successes in recent years with the help of novel methods and higher compute power. However, there are still several challenges to be addressed, such as convergence to locally optimal policies and long training times. In this paper, firstly, we augment the Asynchronous Advantage Actor-Critic (A3C) method with a novel self-supervised auxiliary task, i.e., measuring temporal closeness to terminal states, namely A3C-TP. Secondly, we propose a new framework where planning algorithms such as Monte Carlo tree search or other sources of (simulated) demonstrators can be integrated into asynchronous distributed DRL methods. Compared to vanilla A3C, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.
|
Previously, (lagoudakis2003reinforcement) proposed a classification-based RL method that uses Monte-Carlo rollouts for each action to construct a training dataset and improve the policy iteratively. Other, more recent works such as Expert Iteration @cite_1 extend imitation learning to the RL setting, where the demonstrator is also continuously improved during training. There has been a growing body of work on imitation learning where human or simulated demonstrators' data is used to speed up policy learning in RL @cite_11 @cite_21 @cite_33 @cite_32 @cite_31 .
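A rough, hedged sketch of the classification-based idea in the first sentence: Monte-Carlo rollouts score each action at sampled states, and the empirically best actions become classification labels for the improved policy. `ToySim`, the scalar state space, and the decision tree are illustrative stand-ins, not the cited setup.

```python
# Classification-based policy improvement: rollouts -> action labels -> classifier.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class ToySim:
    """Hypothetical resettable simulator: scalar state, actions shift it by +/-1
    (plus noise), reward 1 whenever the state is near the origin."""
    def rollout_return(self, state, action, policy, horizon=10, gamma=0.9):
        s, ret, disc = state + action, 0.0, 1.0
        for _ in range(horizon):
            ret += disc * (1.0 if abs(s) < 0.5 else 0.0)
            disc *= gamma
            s += policy(s) + np.random.normal(0, 0.1)
        return ret

def improve_policy(sim, states, actions, policy, n_rollouts=8):
    def q(s, a):   # Monte-Carlo estimate of the value of action a in state s
        return np.mean([sim.rollout_return(s, a, policy) for _ in range(n_rollouts)])
    X = np.array(states).reshape(-1, 1)
    y = np.array([max(actions, key=lambda a: q(s, a)) for s in states])
    clf = DecisionTreeClassifier().fit(X, y)            # distill labels into a policy
    return lambda s: float(clf.predict(np.array([[s]]))[0])

states = list(np.random.uniform(-2, 2, 40))
policy = lambda s: -1.0 if s > 0 else 1.0               # initial hand-coded policy
policy = improve_policy(ToySim(), states, [-1.0, 1.0], policy)  # one improvement step
```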
|
{
"cite_N": [
"@cite_33",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_31",
"@cite_11"
],
"mid": [
"2754799999",
"2415726935",
"2618097077",
"2626804490",
"2756826236",
"2788862220"
],
"abstract": [
"Disclosed herein are a system and method for providing a machine learning architecture based on monitored demonstrations. The system may include: a non-transitory computer-readable memory storage; at least one processor configured for dynamically training a machine learning architecture for performing one or more sequential tasks, the at least one processor configured to provide: a data receiver for receiving one or more demonstrator data sets, each demonstrator data set including a data structure representing the one or more state-action pairs; a neural network of the machine learning architecture, the neural network including a group of nodes in one or more layers; and a pre-training engine configured for processing the one or more demonstrator data sets to extract one or more features, the extracted one or more features used to pre-train the neural network based on the one or more state-action pairs observed in one or more interactions with the environment.",
"Reinforcement Learning (RL) has been effectively used to solve complex problems given careful design of the problem and algorithm parameters. However standard RL approaches do not scale particularly well with the size of the problem and often require extensive engineering on the part of the designer to minimize the search space. To alleviate this problem, we present a model-free policy-based approach called Exploration from Demonstration (EfD) that uses human demonstrations to guide search space exploration. We use statistical measures of RL algorithms to provide feedback to the user about the agent's uncertainty and use this to solicit targeted demonstrations useful from the agent's perspective. The demonstrations are used to learn an exploration policy that actively guides the agent towards important aspects of the problem. We instantiate our approach in a gridworld and a popular arcade game and validate its performance under different experimental conditions. We show how EfD scales to large problems and provides convergence speed-ups over traditional exploration and interactive learning methods.",
"Sequential decision making problems, such as structured prediction, robotic control, and game playing, require a combination of planning policies and generalisation of those plans. In this paper, we present Expert Iteration (ExIt), a novel reinforcement learning algorithm which decomposes the problem into separate planning and generalisation tasks. Planning new policies is performed by tree search, while a deep neural network generalises those plans. Subsequently, tree search is improved by using the neural network policy to guide search, increasing the strength of new plans. In contrast, standard deep Reinforcement Learning algorithms rely on a neural network not only to generalise plans, but to discover them too. We show that ExIt outperforms REINFORCE for training a neural network to play the board game Hex, and our final tree search agent, trained tabula rasa, defeats MoHex, the previous state-of-the-art Hex player.",
"For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.",
"Exploration in environments with sparse rewards has been a persistent problem in reinforcement learning (RL). Many tasks are natural to specify with a sparse reward, and manually shaping a reward function can result in suboptimal performance. However, finding a non-zero reward is exponentially more difficult with increasing task horizon or action dimensionality. This puts many real-world tasks out of practical reach of RL methods. In this work, we use demonstrations to overcome the exploration problem and successfully learn to perform long-horizon, multi-step robotics tasks with continuous control such as stacking blocks with a robot arm. Our method, which builds on top of Deep Deterministic Policy Gradients and Hindsight Experience Replay, provides an order of magnitude of speedup over RL on simulated robotics tasks. It is simple to implement and makes only the additional assumption that we can collect a small set of demonstrations. Furthermore, our method is able to solve tasks not solvable by either RL or behavior cloning alone, and often ends up outperforming the demonstrator policy.",
"Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator's actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD's performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN."
]
}
|
1812.00045
|
2903445514
|
Deep reinforcement learning (DRL) has achieved great successes in recent years with the help of novel methods and higher compute power. However, there are still several challenges to be addressed, such as convergence to locally optimal policies and long training times. In this paper, firstly, we augment the Asynchronous Advantage Actor-Critic (A3C) method with a novel self-supervised auxiliary task, i.e., measuring temporal closeness to terminal states, namely A3C-TP. Secondly, we propose a new framework where planning algorithms such as Monte Carlo tree search or other sources of (simulated) demonstrators can be integrated into asynchronous distributed DRL methods. Compared to vanilla A3C, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.
|
In some domains, such as robotics, the tasks can be too difficult or time-consuming for humans to provide full demonstrations. Instead, humans can provide sparser feedback @cite_29 @cite_32 on alternative agent trajectories that RL can use to speed up learning. Along this direction, (christiano2017deep) proposed a method that constructs a reward function from human feedback on agent trajectories and showed that a small amount of non-expert human feedback suffices to learn complex agent behaviours.
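The reward-construction step can be sketched as a Bradley-Terry model over trajectory segments, as is common in preference-based RL: the probability that segment A is preferred over B follows a softmax of the summed predicted rewards. The network, segment shapes, and random data below are illustrative, not the cited implementation.

```python
# Minimal sketch of learning a reward model from pairwise human preferences.
import torch
import torch.nn as nn

reward_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(reward_net.parameters(), lr=1e-3)

def preference_loss(seg_a, seg_b, human_prefers_a):
    # each segment: (T, obs_dim) tensor of states; sum predicted reward over time
    r_a = reward_net(seg_a).sum()
    r_b = reward_net(seg_b).sum()
    # Bradley-Terry: P(A preferred) = exp(r_a) / (exp(r_a) + exp(r_b))
    logits = torch.stack([r_a, r_b])
    target = torch.tensor(0 if human_prefers_a else 1)
    return nn.functional.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))

# one illustrative update on a random "preference" label
seg_a, seg_b = torch.randn(25, 4), torch.randn(25, 4)
loss = preference_loss(seg_a, seg_b, human_prefers_a=True)
opt.zero_grad(); loss.backward(); opt.step()
```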
|
{
"cite_N": [
"@cite_29",
"@cite_32"
],
"mid": [
"745775011",
"2626804490"
],
"abstract": [
"This paper introduces two novel algorithms for learning behaviors from human-provided rewards. The primary novelty of these algorithms is that instead of treating the feedback as a numeric reward signal, they interpret feedback as a form of discrete communication that depends on both the behavior the trainer is trying to teach and the teaching strategy used by the trainer. For example, some human trainers use a lack of feedback to indicate whether actions are correct or incorrect, and interpreting this lack of feedback accurately can significantly improve learning speed. Results from user studies show that humans use a variety of training strategies in practice and both algorithms can learn a contextual bandit task faster than algorithms that treat the feed-back as numeric. Simulated trainers are also employed to evaluate the algorithms in both contextual bandit and sequential decision-making tasks with similar results.",
"For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback."
]
}
|
1812.00045
|
2903445514
|
Deep reinforcement learning (DRL) has achieved great successes in recent years with the help of novel methods and higher compute power. However, there are still several challenges to be addressed, such as convergence to locally optimal policies and long training times. In this paper, firstly, we augment the Asynchronous Advantage Actor-Critic (A3C) method with a novel self-supervised auxiliary task, i.e., measuring temporal closeness to terminal states, namely A3C-TP. Secondly, we propose a new framework where planning algorithms such as Monte Carlo tree search or other sources of (simulated) demonstrators can be integrated into asynchronous distributed DRL methods. Compared to vanilla A3C, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.
|
We conclude our literature review with a description of the AlphaGo @cite_12 and AlphaGo Zero @cite_3 methods, which combined techniques from the aforementioned research areas to achieve breakthrough success.
|
{
"cite_N": [
"@cite_3",
"@cite_12"
],
"mid": [
"2766447205",
"2257979135"
],
"abstract": [
"Starting from zero knowledge and without human data, AlphaGo Zero was able to teach itself to play Go and to develop novel strategies that provide new insights into the oldest of games.",
"The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of stateof-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8 winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away."
]
}
|
1812.00329
|
2903520223
|
Learning visual features from unlabeled image data is an important yet challenging task, which is often achieved by training a model on some annotation-free information. We consider spatial contexts, for which we solve so-called jigsaw puzzles, i.e., each image is cut into grids and then disordered, and the goal is to recover the correct configuration. Existing approaches formulated it as a classification task by defining a fixed mapping from a small subset of configurations to a class set, but these approaches ignore the underlying relationship between different configurations and also limit their application to more complex scenarios. This paper presents a novel approach which applies to jigsaw puzzles with an arbitrary grid size and dimensionality. We provide a fundamental and generalized principle: weaker cues are easier to learn in an unsupervised manner and also transfer better. In the context of puzzle recognition, we use an iterative approach which, instead of solving the puzzle all at once, adjusts the order of the patches in each step until convergence. In each step, we combine both unary and binary features of each patch into a cost function judging the correctness of the current configuration. Our approach, by taking the similarity between puzzles into consideration, enjoys a more reasonable way of learning visual knowledge. We verify the effectiveness of our approach in two aspects. First, it is able to solve arbitrarily complex puzzles, including high-dimensional puzzles, that prior methods have difficulty handling. Second, it serves as a reliable way of network initialization, which leads to better transfer performance in a few visual recognition tasks including image classification, object detection, and semantic segmentation.
|
Deep neural networks have been playing an important role in modern computer vision systems. With the availability of large-scale datasets @cite_14 and powerful computational devices such as GPUs, researchers have designed network structures with tens @cite_0 @cite_15 @cite_6 or hundreds @cite_8 @cite_40 of layers towards better recognition performance. Also, networks pre-trained on ImageNet have been transferred to other recognition tasks by either extracting visual features directly @cite_20 @cite_26 @cite_10 or fine-tuning on a new loss function @cite_34 @cite_2 . Despite their effectiveness, these networks still strongly rely on labeled image data, but in some areas such as medical imaging, data collection and annotation can be expensive, time-consuming, or require expertise. Thus, there have been efforts to design unsupervised @cite_36 @cite_41 or weakly supervised @cite_22 approaches which learn visual knowledge from unlabeled data, or semi-supervised learning algorithms @cite_47 @cite_33 which aim at combining a limited amount of labeled data with a large corpus of unlabeled data towards better performance. It has been verified that unsupervised pre-training helps supervised learning, especially deep learning @cite_9 .
|
{
"cite_N": [
"@cite_47",
"@cite_14",
"@cite_26",
"@cite_22",
"@cite_33",
"@cite_8",
"@cite_36",
"@cite_41",
"@cite_9",
"@cite_6",
"@cite_0",
"@cite_40",
"@cite_2",
"@cite_15",
"@cite_34",
"@cite_10",
"@cite_20"
],
"mid": [
"1529410181",
"2108598243",
"2102605133",
"2100031962",
"2790079029",
"2949650786",
"",
"2950789693",
"2138857742",
"2950179405",
"",
"",
"2953106684",
"1686810756",
"2952632681",
"2953391683",
"2953360861"
],
"abstract": [
"Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at this https URL",
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"Convolutional networks trained on large supervised datasets produce visual features which form the basis for the state-of-the-art in many computer-vision problems. Further improvements of these visual features will likely require even larger manually labeled data sets, which severely limits the pace at which progress can be made. In this paper, we explore the potential of leveraging massive, weakly-labeled image collections for learning good visual features. We train convolutional networks on a dataset of 100 million Flickr photos and comments, and show that these networks produce features that perform well in a range of vision problems. We also show that the networks appropriately capture word similarity and learn correspondences between different languages.",
"In this paper, we study the problem of semi-supervised image recognition, which is to learn classifiers using both labeled and unlabeled images. We present Deep Co-Training, a deep learning based method inspired by the Co-Training framework. The original Co-Training learns two classifiers on two views which are data from different sources that describe the same instances. To extend this concept to deep learning, Deep Co-Training trains multiple deep neural networks to be the different views and exploits adversarial examples to encourage view difference, in order to prevent the networks from collapsing into each other. As a result, the co-trained networks provide different and complementary information about the data, which is necessary for the Co-Training framework to achieve good results. We test our method on SVHN, CIFAR-10 100 and ImageNet datasets, and our method outperforms the previous state-of-the-art methods by a large margin.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"",
"We consider the problem of building high- level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization on a large dataset of images (the model has 1 bil- lion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a clus- ter with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental re- sults reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bod- ies. Starting with these learned features, we trained our network to obtain 15.8 accu- racy in recognizing 20,000 object categories from ImageNet, a leap of 70 relative im- provement over the previous state-of-the-art.",
"Much recent research has been devoted to learning algorithms for deep architectures such as Deep Belief Networks and stacks of auto-encoder variants, with impressive results obtained in several areas, mostly on vision and language data sets. The best results obtained on supervised learning tasks involve an unsupervised learning component, usually in an unsupervised pre-training phase. Even though these new algorithms have enabled training deep models, many questions remain as to the nature of this difficult learning problem. The main question investigated here is the following: how does unsupervised pre-training work? Answering this questions is important if learning in deep architectures is to be further improved. We propose several explanatory hypotheses and test them through extensive simulations. We empirically show the influence of pre-training with respect to architecture depth, model capacity, and number of training examples. The experiments confirm and clarify the advantage of unsupervised pre-training. The results suggest that unsupervised pre-training guides the learning towards basins of attraction of minima that support better generalization from the training data set; the evidence from these results supports a regularization explanation for the effect of pre-training.",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"",
"",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the network which was trained to perform object classification on ILSVRC13. We use features extracted from the network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or @math distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.",
"We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms."
]
}
|
1812.00329
|
2903520223
|
Learning visual features from unlabeled image data is an important yet challenging task, which is often achieved by training a model on some annotation-free information. We consider spatial contexts, for which we solve so-called jigsaw puzzles, i.e., each image is cut into grids and then disordered, and the goal is to recover the correct configuration. Existing approaches formulated it as a classification task by defining a fixed mapping from a small subset of configurations to a class set, but these approaches ignore the underlying relationship between different configurations and also limit their application to more complex scenarios. This paper presents a novel approach which applies to jigsaw puzzles with an arbitrary grid size and dimensionality. We provide a fundamental and generalized principle: weaker cues are easier to learn in an unsupervised manner and also transfer better. In the context of puzzle recognition, we use an iterative approach which, instead of solving the puzzle all at once, adjusts the order of the patches in each step until convergence. In each step, we combine both unary and binary features of each patch into a cost function judging the correctness of the current configuration. Our approach, by taking the similarity between puzzles into consideration, enjoys a more reasonable way of learning visual knowledge. We verify the effectiveness of our approach in two aspects. First, it is able to solve arbitrarily complex puzzles, including high-dimensional puzzles, that prior methods have difficulty handling. Second, it serves as a reliable way of network initialization, which leads to better transfer performance in a few visual recognition tasks including image classification, object detection, and semantic segmentation.
|
The key factor in learning from unlabeled data is to establish some kind of prior, or some weak constraints that naturally exist, i.e., no annotations are required. Such a prior can be either (1) embedded into the network architecture or (2) encoded as a weak supervision signal to optimize the network. For the first type, researchers designed clustering-based approaches which optimize the visual representation so as to be beneficial to clustering @cite_37 @cite_35 , as well as generator-based approaches which assume that all images can be represented in a low-dimensional space and train encoders and/or decoders to recover the image and/or representation @cite_29 @cite_38 . Network architectures of these approaches are often heavily modified, e.g., with a set of clustering layers or encoder-decoder modules.
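For the clustering-based branch, a minimal sketch of the alternate-and-train pattern (in the spirit of @cite_35): cluster the current features with k-means, then use the assignments as pseudo-labels for supervised training. The tiny MLP and random inputs are placeholders for a real ConvNet and unlabeled image set, and the real method also re-initializes the classifier after each re-clustering, which is omitted here.

```python
# DeepCluster-style sketch: k-means pseudo-labels alternate with training.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

features = nn.Sequential(nn.Linear(128, 64), nn.ReLU())     # stand-in backbone
classifier = nn.Linear(64, 10)                              # one logit per cluster
opt = torch.optim.Adam(list(features.parameters()) + list(classifier.parameters()), lr=1e-3)
images = torch.randn(512, 128)                              # unlabeled "dataset"

for epoch in range(5):
    with torch.no_grad():                                   # step 1: cluster current features
        feats = features(images).numpy()
    pseudo = torch.tensor(KMeans(n_clusters=10, n_init=10).fit_predict(feats), dtype=torch.long)
    for _ in range(20):                                     # step 2: train on the pseudo-labels
        loss = nn.functional.cross_entropy(classifier(features(images)), pseudo)
        opt.zero_grad(); loss.backward(); opt.step()
```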
|
{
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_29",
"@cite_38"
],
"mid": [
"2883725317",
"2337374958",
"2173520492",
"2962793481"
],
"abstract": [
"Clustering is a class of unsupervised learning methods that has been extensively applied and studied in computer vision. Little work has been done to adapt it to the end-to-end training of visual features on large-scale datasets. In this work, we present DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features. DeepCluster iteratively groups the features with a standard clustering algorithm, k-means, and uses the subsequent assignments as supervision to update the weights of the network. We apply DeepCluster to the unsupervised training of convolutional neural networks on large datasets like ImageNet and YFCC100M. The resulting model outperforms the current state of the art by a significant margin on all the standard benchmarks.",
"In this paper, we propose a recurrent framework for Joint Unsupervised LEarning (JULE) of deep representations and image clusters. In our framework, successive operations in a clustering algorithm are expressed as steps in a recurrent process, stacked on top of representations output by a Convolutional Neural Network (CNN). During training, image clusters and representations are updated jointly: image clustering is conducted in the forward pass, while representation learning in the backward pass. Our key idea behind this framework is that good representations are beneficial to image clustering and clustering results provide supervisory signals to representation learning. By integrating two processes into a single model with a unified weighted triplet loss and optimizing it end-to-end, we can obtain not only more powerful representations, but also more precise image clusters. Extensive experiments show that our method outperforms the state-of-the-art on image clustering across a variety of image datasets. Moreover, the learned representations generalize well when transferred to other tasks.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach."
]
}
|
1812.00329
|
2903520223
|
Learning visual features from unlabeled image data is an important yet challenging task, which is often achieved by training a model on some annotation-free information. We consider spatial contexts, for which we solve so-called jigsaw puzzles, i.e., each image is cut into grids and then disordered, and the goal is to recover the correct configuration. Existing approaches formulated it as a classification task by defining a fixed mapping from a small subset of configurations to a class set, but these approaches ignore the underlying relationship between different configurations and also limit their application to more complex scenarios. This paper presents a novel approach which applies to jigsaw puzzles with an arbitrary grid size and dimensionality. We provide a fundamental and generalized principle: weaker cues are easier to learn in an unsupervised manner and also transfer better. In the context of puzzle recognition, we use an iterative approach which, instead of solving the puzzle all at once, adjusts the order of the patches in each step until convergence. In each step, we combine both unary and binary features of each patch into a cost function judging the correctness of the current configuration. Our approach, by taking the similarity between puzzles into consideration, enjoys a more reasonable way of learning visual knowledge. We verify the effectiveness of our approach in two aspects. First, it is able to solve arbitrarily complex puzzles, including high-dimensional puzzles, that prior methods have difficulty handling. Second, it serves as a reliable way of network initialization, which leads to better transfer performance in a few visual recognition tasks including image classification, object detection, and semantic segmentation.
|
This paper mainly considers the second type which, in comparison to the first type, is much easier in algorithmic design. Typical examples include temporal consistency, which assumes that neighboring video frames contain similar visual contents @cite_25 ; the spatial relationship between pairs of unlabeled patches @cite_23 ; learning an additive function on different regions as well as the entire image @cite_11 ; etc. Among these priors, spatial contexts are widely believed to contain rich information which a vision system should be able to capture. Going one step beyond modeling patch relationships @cite_23 , researchers designed so-called jigsaw puzzles @cite_16 @cite_28 , which are more complex, so that networks are trained better by learning to solve them. Consequently, such networks perform better in transfer learning.
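As a concrete instance of the spatial-context prior @cite_23 , the sketch below samples a patch and one of its eight neighbours and trains a network to classify their relative position. The patch size, network shapes, and the random input tensor (standing in for a real unlabeled image) are all illustrative.

```python
# Relative-patch-position pretext task: classify which of 8 neighbours patch B is.
import random
import torch
import torch.nn as nn

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
P = 32  # patch size

def sample_pair(image):
    """Return (center patch, neighbour patch, relative-position label)."""
    _, H, W = image.shape
    y = random.randrange(P, H - 2 * P)
    x = random.randrange(P, W - 2 * P)
    label = random.randrange(8)
    dy, dx = OFFSETS[label]
    a = image[:, y:y + P, x:x + P]
    b = image[:, y + dy * P:y + (dy + 1) * P, x + dx * P:x + (dx + 1) * P]
    return a, b, label

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * P * P, 128), nn.ReLU())
head = nn.Linear(256, 8)   # concatenated pair features -> 8 relative positions
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

image = torch.rand(3, 256, 256)          # placeholder for an unlabeled image
a, b, label = sample_pair(image)
logits = head(torch.cat([encoder(a.unsqueeze(0)), encoder(b.unsqueeze(0))], dim=1))
loss = nn.functional.cross_entropy(logits, torch.tensor([label]))
opt.zero_grad(); loss.backward(); opt.step()
```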
|
{
"cite_N": [
"@cite_28",
"@cite_23",
"@cite_16",
"@cite_25",
"@cite_11"
],
"mid": [
"2799113232",
"2950187998",
"2321533354",
"219040644",
"2750549109"
],
"abstract": [
"In self-supervised learning, one trains a model to solve a so-called pretext task on a dataset without the need for human annotation. The main objective, however, is to transfer this model to a target domain and task. Currently, the most effective transfer strategy is fine-tuning, which restricts one to use the same model or parts thereof for both pretext and target tasks. In this paper, we present a novel framework for self-supervised learning that overcomes limitations in designing and comparing different tasks, models, and data domains. In particular, our framework decouples the structure of the self-supervised model from the final task-specific fine-tuned model. This allows us to: 1) quantitatively assess previously incompatible models including handcrafted features; 2) show that deeper neural network models can learn better representations from the same pretext task; 3) transfer knowledge learned with a deep model to a shallower one and thus boost its learning. We use this framework to design a novel self-supervised task, which achieves state-of-the-art performance on the common benchmarks in PASCAL VOC 2007, ILSVRC12 and Places by a significant margin. Our learned features shrink the mAP gap between models trained via self-supervised learning and supervised learning from 5.9 to 2.6 in object detection on PASCAL VOC 2007.",
"This work explores the use of spatial context as a source of free and plentiful supervisory signal for training a rich visual representation. Given only a large, unlabeled image collection, we extract random pairs of patches from each image and train a convolutional neural net to predict the position of the second patch relative to the first. We argue that doing well on this task requires the model to learn to recognize objects and their parts. We demonstrate that the feature representation learned using this within-image context indeed captures visual similarity across images. For example, this representation allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset. Furthermore, we show that the learned ConvNet can be used in the R-CNN framework and provides a significant boost over a randomly-initialized ConvNet, resulting in state-of-the-art performance among algorithms which use only Pascal-provided training set annotations.",
"We propose a novel unsupervised learning approach to build features suitable for object detection and classification. The features are pre-trained on a large dataset without human annotation and later transferred via fine-tuning on a different, smaller and labeled dataset. The pre-training consists of solving jigsaw puzzles of natural images. To facilitate the transfer of features to other tasks, we introduce the context-free network (CFN), a siamese-ennead convolutional neural network. The features correspond to the columns of the CFN and they process image tiles independently (i.e., free of context). The later layers of the CFN then use the features to identify their geometric arrangement. Our experimental evaluations show that the learned features capture semantically relevant content. We pre-train the CFN on the training set of the ILSVRC2012 dataset and transfer the features on the combined training and validation set of Pascal VOC 2007 for object detection (via fast RCNN) and classification. These features outperform all current unsupervised features with (51.8 , ) for detection and (68.6 , ) for classification, and reduce the gap with supervised learning ( (56.5 , ) and (78.2 , ) respectively).",
"Is strong supervision necessary for learning a good visual representation? Do we really need millions of semantically-labeled images to train a Convolutional Neural Network (CNN)? In this paper, we present a simple yet surprisingly powerful approach for unsupervised learning of CNN. Specifically, we use hundreds of thousands of unlabeled videos from the web to learn visual representations. Our key idea is that visual tracking provides the supervision. That is, two patches connected by a track should have similar visual representation in deep feature space since they probably belong to same object or object part. We design a Siamese-triplet network with a ranking loss function to train this CNN representation. Without using a single image from ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train an ensemble of unsupervised networks that achieves 52 mAP (no bounding box regression). This performance comes tantalizingly close to its ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4 . We also show that our unsupervised network can perform competitively in other tasks such as surface-normal estimation.",
"We introduce a novel method for representation learning that uses an artificial supervision signal based on counting visual primitives. This supervision signal is obtained from an equivariance relation, which does not require any manual annotation. We relate transformations of images to transformations of the representations. More specifically, we look for the representation that satisfies such relation rather than the transformations that match a given representation. In this paper, we use two image transformations in the context of counting: scaling and tiling. The first transformation exploits the fact that the number of visual primitives should be invariant to scale. The second transformation allows us to equate the total number of visual primitives in each tile to that in the whole image. These two transformations are combined in one constraint and used to train a neural network with a contrastive loss. The proposed task produces representations that perform on par or exceed the state of the art in transfer learning benchmarks."
]
}
|
1812.00329
|
2903520223
|
Learning visual features from unlabeled image data is an important yet challenging task, which is often achieved by training a model on some annotation-free information. We consider spatial contexts, for which we solve so-called jigsaw puzzles, i.e., each image is cut into grids and then disordered, and the goal is to recover the correct configuration. Existing approaches formulated it as a classification task by defining a fixed mapping from a small subset of configurations to a class set, but these approaches ignore the underlying relationship between different configurations and also limit their application to more complex scenarios. This paper presents a novel approach which applies to jigsaw puzzles with an arbitrary grid size and dimensionality. We provide a fundamental and generalized principle: weaker cues are easier to learn in an unsupervised manner and also transfer better. In the context of puzzle recognition, we adopt an iterative procedure which, instead of solving the puzzle all at once, adjusts the order of the patches in each step until convergence. In each step, we combine both unary and binary features on each patch into a cost function judging the correctness of the current configuration. Our approach, by taking similarity between puzzles into consideration, enjoys a more reasonable way of learning visual knowledge. We verify the effectiveness of our approach in two aspects. First, it is able to solve arbitrarily complex puzzles, including high-dimensional puzzles, which prior methods struggle to handle. Second, it serves as a reliable way of network initialization, which leads to better transfer performance in a few visual recognition tasks including image classification, object detection, and semantic segmentation.
|
Researchers believe that learning from these weakly-supervised cues can help visual recognition, because many problems are indeed built on understanding and integrating this type of information. Regarding spatial contexts, a wide range of recognition tasks can benefit from understanding the relative position of two (or more) patches, such as image classification @cite_4, semantic segmentation @cite_39, and parsing @cite_24.
|
{
"cite_N": [
"@cite_24",
"@cite_4",
"@cite_39"
],
"mid": [
"2755542034",
"",
"2114740909"
],
"abstract": [
"In this paper, we study the task of detecting semantic parts of an object. This is very important in computer vision, as it provides the possibility to parse an object as human do, and helps us better understand object detection algorithms. Also, detecting semantic parts is very challenging especially when the parts are partially or fully occluded. In this scenario, the popular proposal-based methods like Faster-RCNN often produce unsatisfactory results, because both the proposal extraction and classification stages may be confused by the irrelevant occluders. To this end, we propose a novel detection framework, named DeepVoting, which accumulates local visual cues, called visual concepts (VC), to locate the semantic parts. Our approach involves adding two layers after the intermediate outputs of a deep neural network. The first layer is used to extract VC responses, and the second layer performs a voting mechanism to capture the spatial relationship between VC's and semantic parts. The benefit is that each semantic part is supported by multiple VC's. Even if some of the supporting VC's are missing due to occlusion, we can still infer the presence of the target semantic part using the remaining ones. To avoid generating an exponentially large training set to cover all occlusion cases, we train our model without seeing occlusion and transfer the learned knowledge to deal with occlusions. This setting favors learning the models which are naturally robust and adaptive to occlusions instead of over-fitting the occlusion patterns in the training data. In experiments, DeepVoting shows significantly better performance on semantic part detection in occlusion scenarios, compared with Faster-RCNN, with one order of magnitude fewer parameters and 2.5x testing speed. In addition, DeepVoting is explainable as the detection result can be diagnosed via looking up the voted VC's.",
"",
"We propose in this work a patch-based image labeling method relying on a label propagation framework. Based on image intensity similarities between the input image and an anatomy textbook, an original strategy which does not require any nonrigid registration is presented. Following recent developments in nonlocal image denoising, the similarity between images is represented by a weighted graph computed from an intensity-based distance between patches. Experiments on simulated and in vivo magnetic resonance images show that the proposed method is very successful in providing automated human brain labeling."
]
}
|
1812.00281
|
2902083265
|
This paper presents a new dataset called HUMBI - a large corpus of high fidelity models of behavioral signals in 3D from a diverse population measured by a massive multi-camera system. With our novel design of a portable imaging system (consisting of 107 HD cameras), we collect human behaviors from 164 subjects across gender, ethnicity, age, and physical condition at a public venue. Using the multiview image streams, we reconstruct high fidelity models of five elementary parts: gaze, face, hands, body, and cloth. As a byproduct, the 3D model provides geometrically consistent image annotation via 2D projection, e.g., body part segmentation. This dataset is a significant departure from the existing human datasets, which suffer from limited subject diversity. We hope that HUMBI opens up a new opportunity for the development of behavioral imaging.
|
Humans transmit and respond to many different behavioral signals such as gaze movement, facial expression, and body gestures when they interact with others @cite_10 @cite_25. Effective signaling and interpretation of signals are the basis of successful social performance, for example, in business @cite_0 @cite_12 @cite_21. Researchers have developed various computational models to measure, model, and predict behavioral signals @cite_19 @cite_15. Some behavioral signals such as hand-flapping, repeating sounds, and deficits of joint attention have been shown to be early markers of autism spectrum disorder, and computational tools have been designed to detect these symptoms @cite_43 @cite_3. Data on behavioral signals are the key enabling factor for building such computational models. Here, we briefly review the existing datasets for gaze, face, hand, body, and cloth. These datasets are summarized in Table .
|
{
"cite_N": [
"@cite_21",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_43",
"@cite_15",
"@cite_10",
"@cite_25",
"@cite_12"
],
"mid": [
"2152928398",
"",
"171934365",
"2165734786",
"2167462312",
"2143554828",
"1493363822",
"2097128017",
"2156333962"
],
"abstract": [
"Humans and other animals express power through open, expansive postures, and they express powerlessness through closed, contractive postures. But can these postures actually cause power? The results of this study confirmed our prediction that posing in high-power nonverbal displays (as opposed to low-power nonverbal displays) would cause neuroendocrine and behavioral changes for both male and female participants: High-power posers experienced elevations in testosterone, decreases in cortisol, and increased feelings of power and tolerance for risk; low-power posers exhibited the opposite pattern. In short, posing in displays of power caused advantaged and adaptive psychological, physiological, and behavioral changes, and these findings suggest that embodiment extends beyond mere thinking and feeling, to physiology and subsequent behavioral choices. That a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful has real-world, actionable implications.",
"",
"Nonlinguistic social signals (e.g., tone of voice’) are often as important as linguistic content in predicting behavioural outcomes [1,2]. This paper describes four automated measure of such social signalling, and shows that they can be used to form powerful predictosr of objective and subjective outcomes in several important situations. Finally, it is argued that such signals are important determinants of social position.",
"Although developers of communication-support tools have certainly tried to create products that support group thinking, they usually do so without adequately accounting for social context, so that all too often these systems are jarring and even downright rude. In fact, most people would agree that today's communication technology seems to be at war with human society. Technology must account for this by recognizing that communication is always socially situated and that discussions are not just words but part of a larger social dialogue. This web of social interaction forms a sort of collective intelligence; it is the unspoken shared understanding that enforces the dominance hierarchy and passes judgment about it. We have found nonlinguistic social signals to be particularly powerful for analyzing and predicting human behavior, sometimes exceeding even expert human capabilities. Psychologists have firmly established that social signals are a powerful determinant of human behavior and speculate that they might have evolved as a way to establish hierarchy and group cohesion.",
"We introduce a new problem domain for activity recognition: the analysis of children's social and communicative behaviors based on video and audio data. We specifically target interactions between children aged 1-2 years and an adult. Such interactions arise naturally in the diagnosis and treatment of developmental disorders such as autism. We introduce a new publicly-available dataset containing over 160 sessions of a 3-5 minute child-adult interaction. In each session, the adult examiner followed a semi-structured play interaction protocol which was designed to elicit a broad range of social behaviors. We identify the key technical challenges in analyzing these behaviors, and describe methods for decoding the interactions. We present experimental results that demonstrate the potential of the dataset to drive interesting research questions, and show preliminary results for multi-modal activity recognition.",
"We introduce a system for sensing complex social systems with data collected from 100 mobile phones over the course of 9 months. We demonstrate the ability to use standard Bluetooth-enabled mobile telephones to measure information access and use in different contexts, recognize social patterns in daily user activity, infer relationships, identify socially significant locations, and model organizational rhythms.",
"How can you know when someone is bluffing? Paying attention? Genuinely interested? The answer, writes Sandy Pentland in Honest Signals, is that subtle patterns in how we interact with other people reveal our attitudes toward them. These unconscious social signals are not just a back channel or a complement to our conscious language; they form a separate communication network. Biologically based \"honest signaling,\" evolved from ancient primate signaling mechanisms, offers an unmatched window into our intentions, goals, and values. If we understand this ancient channel of communication, Pentland claims, we can accurately predict the outcomes of situations ranging from job interviews to first dates. Pentland, an MIT professor, has used a specially designed digital sensor worn like an ID badgea \"sociometer\"to monitor and analyze the back-and-forth patterns of signaling among groups of people. He and his researchers found that this second channel of communication, revolving not around words but around social relations, profoundly influences major decisions in our liveseven though we are largely unaware of it. Pentland presents the scientific background necessary for understanding this form of communication, applies it to examples of group behavior in real organizations, and shows how by \"reading\" our social networks we can become more successful at pitching an idea, getting a job, or closing a deal. Using this \"network intelligence\" theory of social signaling, Pentland describes how we can harness the intelligence of our social network to become better managers, workers, and communicators.",
"The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence - the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement - in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and similar, design and development of automated systems for social signal processing (SSP) are rather difficult. This paper surveys the past efforts in solving these problems by a computer, it summarizes the relevant findings in social psychology, and it proposes a set of recommendations for enabling the development of the next generation of socially aware computing.",
"In this research we examine whether conversational dynamics occurring within the first five minutes of a negotiation can predict negotiated outcomes. In a simulated employment negotiation, micro-coding conducted by a computer showed that activity level, conversational engagement, prosodic emphasis, and vocal mirroring predicted 30 of the variance in individual outcomes. The conversational dynamics associated with individual success among high-status parties were different from those associated with individual success among low-status parties. Results are interpreted in light of theory and research exploring the predictive power of \"thin slices\" (Ambady & Rosenthal, 1992). Implications include the development of new technology to diagnose and improve negotiation processes."
]
}
|
1812.00281
|
2902083265
|
This paper presents a new dataset called HUMBI - a large corpus of high fidelity models of behavioral signals in 3D from a diverse population measured by a massive multi-camera system. With our novel design of a portable imaging system (consisting of 107 HD cameras), we collect human behaviors from 164 subjects across gender, ethnicity, age, and physical condition at a public venue. Using the multiview image streams, we reconstruct high fidelity models of five elementary parts: gaze, face, hands, body, and cloth. As a byproduct, the 3D model provides geometrically consistent image annotation via 2D projection, e.g., body part segmentation. This dataset is a significant departure from the existing human datasets, which suffer from limited subject diversity. We hope that HUMBI opens up a new opportunity for the development of behavioral imaging.
|
Sigal et al. @cite_38 proposed a passive, appearance-based approach that focuses on gaze locking instead of gaze tracking, which can sense eye contact in an image. Sugano et al. @cite_31 proposed a method for reconstructing gaze from low-resolution eye images. Unlike many other methods, which assume person-specific training data, a large amount of cross-subject training data was created and used to train a 3D gaze estimator. Mora et al. @cite_2 introduced a novel database along with a common framework for the training and evaluation of gaze estimation approaches. To drive the work on appearance-based gaze estimation, Zhang et al. @cite_34 presented the MPII-Gaze dataset, which collected gazes in the wild.
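As a rough illustration of the appearance-based gaze estimation paradigm these datasets support, the following is a minimal sketch of a CNN regressor from eye crops to gaze angles. The input size, architecture, and loss are assumptions for illustration only, not any of the cited systems.

```python
# Illustrative appearance-based gaze regressor (PyTorch); not a cited model.
import torch
import torch.nn as nn

class GazeNet(nn.Module):
    """Maps a grey-scale eye crop (1 x 36 x 60) to (yaw, pitch) in radians."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 20, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(20, 50, 5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(50 * 6 * 12, 500), nn.ReLU(),
            nn.Linear(500, 2),
        )

    def forward(self, x):
        return self.head(self.features(x))

# Training would minimise, e.g., nn.MSELoss() between predicted and
# ground-truth gaze angles over (eye crop, gaze) pairs from such datasets.
```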
|
{
"cite_N": [
"@cite_38",
"@cite_31",
"@cite_34",
"@cite_2"
],
"mid": [
"2099333815",
"1995694455",
"",
"2042906110"
],
"abstract": [
"While research on articulated human motion and pose estimation has progressed rapidly in the last few years, there has been no systematic quantitative evaluation of competing methods to establish the current state of the art. We present data obtained using a hardware system that is able to capture synchronized video and ground-truth 3D motion. The resulting HumanEva datasets contain multiple subjects performing a set of predefined actions with a number of repetitions. On the order of 40,000 frames of synchronized motion capture and multi-view video (resulting in over one quarter million image frames in total) were collected at 60 Hz with an additional 37,000 time instants of pure motion capture data. A standard set of error measures is defined for evaluating both 2D and 3D pose estimation and tracking algorithms. We also describe a baseline algorithm for 3D articulated tracking that uses a relatively standard Bayesian framework with optimization in the form of Sequential Importance Resampling and Annealed Particle Filtering. In the context of this baseline algorithm we explore a variety of likelihood functions, prior models of human motion and the effects of algorithm parameters. Our experiments suggest that image observation models and motion priors play important roles in performance, and that in a multi-view laboratory environment, where initialization is available, Bayesian filtering tends to perform well. The datasets and the software are made available to the research community. This infrastructure will support the development of new articulated motion and pose estimation algorithms, will provide a baseline for the evaluation and comparison of new methods, and will help establish the current state of the art in human pose estimation and tracking.",
"Inferring human gaze from low-resolution eye images is still a challenging task despite its practical importance in many application scenarios. This paper presents a learning-by-synthesis approach to accurate image-based gaze estimation that is person- and head pose-independent. Unlike existing appearance-based methods that assume person-specific training data, we use a large amount of cross-subject training data to train a 3D gaze estimator. We collect the largest and fully calibrated multi-view gaze dataset and perform a 3D reconstruction in order to generate dense training data of eye images. By using the synthesized dataset to learn a random regression forest, we show that our method outperforms existing methods that use low-resolution eye images.",
"",
"The lack of a common benchmark for the evaluation of the gaze estimation task from RGB and RGB-D data is a serious limitation for distinguishing the advantages and disadvantages of the many proposed algorithms found in the literature. This paper intends to overcome this limitation by introducing a novel database along with a common framework for the training and evaluation of gaze estimation approaches. In particular, we have designed this database to enable the evaluation of the robustness of algorithms with respect to the main challenges associated to this task: i) Head pose variations; ii) Person variation; iii) Changes in ambient and sensing conditions and iv) Types of target: screen or 3D object."
]
}
|
1812.00281
|
2902083265
|
This paper presents a new dataset called HUMBI - a large corpus of high fidelity models of behavioral signals in 3D from a diverse population measured by a massive multi-camera system. With our novel design of a portable imaging system (consisting of 107 HD cameras), we collect human behaviors from 164 subjects across gender, ethnicity, age, and physical condition at a public venue. Using the multiview image streams, we reconstruct high fidelity models of five elementary parts: gaze, face, hands, body, and cloth. As a byproduct, the 3D model provides geometrically consistent image annotation via 2D projection, e.g., body part segmentation. This dataset is a significant departure from the existing human datasets, which suffer from limited subject diversity. We hope that HUMBI opens up a new opportunity for the development of behavioral imaging.
|
Dexterous hand manipulation during behavioral signaling frequently introduces severe self-occlusion, which is the main challenge in recovering 3D finger configurations. A depth image, which provides trivial hand segmentation in conjunction with tracking, has been used to establish ground truth hand poses @cite_20 @cite_14 @cite_56 @cite_1. However, as occlusion still plays a key role, these datasets involve large manual adjustments, which limits the size of the data. This has been addressed by using magnetic sensors on the hands that can precisely measure the joint angles, which allows automatically computing the 3D hand pose using forward kinematics @cite_17 @cite_22. Notably, a multi-camera system has been used to annotate the hand using 3D bootstrapping @cite_49, which can provide hand annotations for RGB data.
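To make the forward-kinematics annotation step concrete, here is a minimal 2D sketch: given joint angles measured by magnetic sensors and known bone lengths, joint positions follow by chaining rotations along the kinematic chain. A real hand model uses full 3D rotations and calibrated per-subject bone lengths; the function and variable names below are illustrative assumptions.

```python
# Minimal 2D forward-kinematics sketch for one finger (illustrative only).
import numpy as np

def finger_joint_positions(joint_angles, bone_lengths):
    """joint_angles: flexion angle (radians) at each joint of one finger;
    bone_lengths: length of each phalanx. Returns joint positions in the
    finger's plane, starting from the knuckle at the origin."""
    positions = [np.zeros(2)]
    heading = 0.0
    for angle, length in zip(joint_angles, bone_lengths):
        heading += angle  # angles accumulate along the kinematic chain
        positions.append(positions[-1] +
                         length * np.array([np.cos(heading), np.sin(heading)]))
    return np.array(positions)
```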
|
{
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_1",
"@cite_56",
"@cite_49",
"@cite_20",
"@cite_17"
],
"mid": [
"1928739709",
"",
"2214145768",
"2093414253",
"2609211631",
"2075156252",
""
],
"abstract": [
"We extends the previous 2D cascaded object pose regression work [9] in two aspects so that it works better for 3D articulated objects. Our first contribution is 3D pose-indexed features that generalize the previous 2D parameterized features and achieve better invariance to 3D transformations. Our second contribution is a principled hierarchical regression that is adapted to the articulated object structure. It is therefore more accurate and faster. Comprehensive experiments verify the state-of-the-art accuracy and efficiency of the proposed approach on the challenging 3D hand pose estimation problem, on a public dataset and our new dataset.",
"",
"Hand pose estimation has matured rapidly in recent years. The introduction of commodity depth sensors and a multitude of practical applications have spurred new advances. We provide an extensive analysis of the state-of-the-art, focusing on hand pose estimation from a single depth frame. To do so, we have implemented a considerable number of systems, and will release all software and evaluation code. We summarize important conclusions here: (1) Pose estimation appears roughly solved for scenes with isolated hands. However, methods still struggle to analyze cluttered scenes where hands may be interacting with nearby objects and surfaces. To spur further progress we introduce a challenging new dataset with diverse, cluttered scenes. (2) Many methods evaluate themselves with disparate criteria, making comparisons difficult. We define a consistent evaluation criteria, rigorously motivated by human experiments. (3) We introduce a simple nearest-neighbor baseline that outperforms most existing systems. This implies that most systems do not generalize beyond their training sets. This also reinforces the under-appreciated point that training data is as important as the model itself. We conclude with directions for future progress.",
"In this paper we present the Latent Regression Forest (LRF), a novel framework for real-time, 3D hand pose estimation from a single depth image. In contrast to prior forest-based methods, which take dense pixels as input, classify them independently and then estimate joint positions afterwards, our method can be considered as a structured coarse-to-fine search, starting from the centre of mass of a point cloud until locating all the skelet al joints. The searching process is guided by a learnt Latent Tree Model which reflects the hierarchical topology of the hand. Our main contributions can be summarised as follows: (i) Learning the topology of the hand in an unsupervised, data-driven manner. (ii) A new forest-based, discriminative framework for structured search in images, as well as an error regression step to avoid error accumulation. (iii) A new multi-view hand pose dataset containing 180K annotated images from 10 different subjects. Our experiments show that the LRF out-performs state-of-the-art methods in both accuracy and efficiency.",
"We present an approach that uses a multi-camera system to train fine-grained detectors for keypoints that are prone to occlusion, such as the joints of a hand. We call this procedure multiview bootstrapping: first, an initial keypoint detector is used to produce noisy labels in multiple views of the hand. The noisy detections are then triangulated in 3D using multiview geometry or marked as outliers. Finally, the reprojected triangulations are used as new labeled training data to improve the detector. We repeat this process, generating more labeled data in each iteration. We derive a result analytically relating the minimum number of views to achieve target true and false positive rates for a given detector. The method is used to train a hand keypoint detector for single images. The resulting keypoint detector runs in realtime on RGB images and has accuracy comparable to methods that use depth sensors. The single view detector, triangulated over multiple views, enables 3D markerless hand motion capture with complex object interactions.",
"We present a novel method for real-time continuous pose recovery of markerless complex articulable objects from a single depth image. Our method consists of the following stages: a randomized decision forest classifier for image segmentation, a robust method for labeled dataset generation, a convolutional network for dense feature extraction, and finally an inverse kinematics stage for stable real-time pose recovery. As one possible application of this pipeline, we show state-of-the-art results for real-time puppeteering of a skinned hand-model.",
""
]
}
|
1812.00281
|
2902083265
|
This paper presents a new dataset called HUMBI - a large corpus of high fidelity models of behavioral signals in 3D from a diverse population measured by a massive multi-camera system. With our novel design of a portable imaging system (consisting of 107 HD cameras), we collect human behaviors from 164 subjects across gender, ethnicity, age, and physical condition at a public venue. Using the multiview image streams, we reconstruct high fidelity models of five elementary parts: gaze, face, hands, body, and cloth. As a byproduct, the 3D model provides geometrically consistent image annotation via 2D projection, e.g., body part segmentation. This dataset is a significant departure from the existing human datasets, which suffer from limited subject diversity. We hope that HUMBI opens up a new opportunity for the development of behavioral imaging.
|
For simulation purposes, many previous works have proposed to capture the natural properties of cloth affected by the human body, using 3D segmentation of scanned humans in cloth with a 4D scanner @cite_26 or multiple synchronized cameras @cite_13 @cite_33. However, these methods do not ensure the same topology across time, which is a key requirement of recent learning approaches, and the diversity of pose and shape in cloth is too limited to serve as a dataset. To the best of our knowledge, this is the first attempt to propose a public 3D cloth dataset with associated 2D image pairs captured under natural human performance. Our method is not based on 3D segmentation but on fitting expert-designed cloth templates to the 3D reconstruction scanned by the multiview system.
|
{
"cite_N": [
"@cite_26",
"@cite_13",
"@cite_33"
],
"mid": [
"2737762407",
"2124527723",
"2082145490"
],
"abstract": [
"Designing and simulating realistic clothing is challenging. Previous methods addressing the capture of clothing from 3D scans have been limited to single garments and simple motions, lack detail, or require specialized texture patterns. Here we address the problem of capturing regular clothing on fully dressed people in motion. People typically wear multiple pieces of clothing at a time. To estimate the shape of such clothing, track it over time, and render it believably, each garment must be segmented from the others and the body. Our ClothCap approach uses a new multi-part 3D model of clothed bodies, automatically segments each piece of clothing, estimates the minimally clothed body shape and pose under the clothing, and tracks the 3D deformations of the clothing over time. We estimate the garments and their motion from 4D scans; that is, high-resolution 3D scans of the subject in motion at 60 fps. ClothCap is able to capture a clothed person in motion, extract their clothing, and retarget the clothing to new body shapes; this provides a step towards virtual try-on.",
"We capture the shape of moving cloth using a custom set of color markers printed on the surface of the cloth. The output is a sequence of triangle meshes with static connectivity and with detail at the scale of individual markers in both smooth and folded regions. We compute markers' coordinates in space using correspondence across multiple synchronized video cameras. Correspondence is determined from color information in small neighborhoods and refined using a novel strain pruning process. Final correspondence does not require neighborhood information. We use a novel data driven hole-filling technique to fill occluded regions. Our results include several challenging examples: a wrinkled shirt sleeve, a dancing pair of pants, and a rag tossed onto a cup. Finally, we demonstrate that cloth capture is reusable by animating a pair of pants using human motion capture data.",
"A lot of research has recently focused on the problem of capturing the geometry and motion of garments. Such work usually relies on special markers printed on the fabric to establish temporally coherent correspondences between points on the garment's surface at different times. Unfortunately, this approach is tedious and prevents the capture of off-the-shelf clothing made from interesting fabrics. In this paper, we describe a marker-free approach to capturing garment motion that avoids these downsides. We establish temporally coherent parameterizations between incomplete geometries that we extract at each timestep with a multiview stereo algorithm. We then fill holes in the geometry using a template. This approach, for the first time, allows us to capture the geometry and motion of unpatterned, off-the-shelf garments made from a range of different fabrics."
]
}
|
1906.12263
|
2954128485
|
Motion estimation is an important component of video codecs and various applications in computer vision. Especially in video compression, the compact representation of motion fields is crucial, as modern video codecs use them for inter frame prediction. In recent years, compression methods relying on diffusion-based inpainting have become an increasingly competitive alternative to classical transform-based codecs. They perform particularly well on piecewise smooth data, suggesting that motion fields can be efficiently represented by such approaches. However, they have so far not been used for the compression of motion data. Therefore, we assess the potential of flow field compression based on homogeneous diffusion with a specifically designed new framework: Our codec stores only a few representative flow vectors and reconstructs the flow field with edge-aware homogeneous diffusion inpainting. Additionally stored edge data thereby ensure the accurate representation of discontinuities in the flow field. Our experiments show that this approach can outperform state-of-the-art codecs such as JPEG2000 and BPG HEVC intra.
|
Compression methods with diffusion-based inpainting were introduced by Galić et al. @cite_3 in 2005. Their method stores only a few selected pixels of an image and reconstructs the missing data with edge-enhancing anisotropic diffusion @cite_10. The R-EED algorithm of @cite_22 improves this idea with an efficient tree structure to adaptively encode mask pixels and can beat the quality of JPEG2000.
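A minimal sketch of the inpainting idea behind these codecs: stored mask pixels are kept fixed, and the remaining pixels are filled in by diffusion. The sketch below uses homogeneous diffusion with plain Jacobi iterations and periodic boundaries for brevity; R-EED's edge-enhancing anisotropic diffusion and its optimized solvers are considerably more involved, so treat this as an illustration of the principle only.

```python
# Illustrative homogeneous diffusion inpainting via Jacobi iterations.
import numpy as np

def diffusion_inpaint(values, mask, iterations=2000):
    """values: 2D array with known data where mask is True; returns a
    reconstruction that keeps mask pixels fixed and is smooth (harmonic)
    everywhere else."""
    u = np.where(mask, values, values[mask].mean())  # rough initialisation
    for _ in range(iterations):
        # average of the four neighbours (periodic boundaries for brevity)
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, values, avg)  # re-impose the stored pixels
    return u
```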
|
{
"cite_N": [
"@cite_10",
"@cite_22",
"@cite_3"
],
"mid": [
"1549562450",
"1586517188",
"2157028296"
],
"abstract": [
"Theoretical Foundations of Anisotropic Diffusion in Image Processing. A frequent problem in low-level vision consists of eliminating noise and small-scale details from an image while still preserving or even enhancing the edge structure. Nonlinear anisotropic diffusion filtering may be one possibility to achieve these goals. The objective of the present paper is to review the author’s results on a scale-space interpretation of a class of diffusion filters which comprises also several nonlinear anisotropic models. It is demonstrated that these models—which use an adapted diffusion tensor instead of a scalar diffusivity—offer advantages over isotropic filters. Most of the restoration and scale-space properties carry over from the continuous to the discrete case. Applications are presented ranging from preprocessing of medical images and postprocessing of fluctuating numerical data to visualizing quality relevant features for the grading of wood surfaces and fabrics.",
"Although widely used standards such as JPEG and JPEG 2000 exist in the literature, lossy image compression is still a subject of ongoing research. (2008) have shown that compression based on edge-enhancing anisotropic diffusion can outperform JPEG for medium to high compression ratios when the interpolation points are chosen as vertices of an adaptive triangulation. In this paper we demonstrate that it is even possible to beat the quality of the much more advanced JPEG 2000 standard when one uses subdivisions on rectangles and a number of additional optimisations. They include improved entropy coding, brightness rescaling, diffusivity optimisation, and interpolation swapping. Experiments on classical test images are presented that illustrate the potential of our approach.",
"While methods based on partial differential equations (PDEs) and variational techniques are powerful tools for denoising and inpainting digital images, their use for image compression was mainly focussing on pre- or postprocessing so far. In our paper we investigate their potential within the decoding step. We start with the observation that edge-enhancing diffusion (EED), an anisotropic nonlinear diffusion filter with a diffusion tensor, is well-suited for scattered data interpolation: Even when the interpolation data are very sparse, good results are obtained that respect discontinuities and satisfy a maximum–minimum principle. This property is exploited in our studies on PDE-based image compression. We use an adaptive triangulation method based on B-tree coding for removing less significant pixels from the image. The remaining points serve as scattered interpolation data for the EED process. They can be coded in a compact and elegant way that reflects the B-tree structure. Our experiments illustrate that for high compression rates and non-textured images, this PDE-based approach gives visually better results than the widely-used JPEG coding."
]
}
|
1906.12263
|
2954128485
|
Motion estimation is an important component of video codecs and various applications in computer vision. Especially in video compression, the compact representation of motion fields is crucial, as modern video codecs use them for inter frame prediction. In recent years, compression methods relying on diffusion-based inpainting have become an increasingly competitive alternative to classical transform-based codecs. They perform particularly well on piecewise smooth data, suggesting that motion fields can be efficiently represented by such approaches. However, they have so far not been used for the compression of motion data. Therefore, we assess the potential of flow field compression based on homogeneous diffusion with a specifically designed new framework: Our codec stores only a few representative flow vectors and reconstructs the flow field with edge-aware homogeneous diffusion inpainting. Additionally stored edge data thereby ensure the accurate representation of discontinuities in the flow field. Our experiments show that this approach can outperform state-of-the-art codecs such as JPEG2000 and BPG HEVC intra.
|
The concept of diffusion-based inpainting has also been extended to video compression. @cite_21 developed a method based on R-EED that allows decoding in real time. However, this approach compresses each frame individually and does not exploit temporal redundancies. @cite_6 proposed a proof-of-concept video codec that additionally uses optical flow methods for inter frame prediction. The motion fields are compressed with a simple subsampling, resulting again in block artefacts. Ottaviano and Kohli @cite_4 developed a motion estimation algorithm that incorporates coding costs for a wavelet-based compression of the resulting flow field.
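To illustrate the inter frame prediction step these codecs share, here is a hedged sketch of motion compensation with a dense flow field: warp the previous frame to predict the current one, and encode only the residual. Nearest-neighbour sampling and the function names are simplifying assumptions; real codecs use sub-pixel interpolation.

```python
# Illustrative dense motion compensation for inter frame prediction.
import numpy as np

def predict_frame(prev_frame, flow):
    """prev_frame: H x W array; flow: H x W x 2 backward flow (dx, dy)
    mapping each pixel of the current frame into the previous one."""
    h, w = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.rint(xs + flow[..., 0]), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(ys + flow[..., 1]), 0, h - 1).astype(int)
    return prev_frame[src_y, src_x]

# The encoder then stores the compressed flow plus the residual
# current_frame - predict_frame(prev_frame, flow).
```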
|
{
"cite_N": [
"@cite_21",
"@cite_4",
"@cite_6"
],
"mid": [
"",
"2124871034",
"2611859123"
],
"abstract": [
"",
"Traditional video compression methods obtain a compact representation for image frames by computing coarse motion fields defined on patches of pixels called blocks, in order to compensate for the motion in the scene across frames. This piecewise constant approximation makes the motion field efficiently encodable, but it introduces block artifacts in the warped image frame. In this paper, we address the problem of estimating dense motion fields that, while accurately predicting one frame from a given reference frame by warping it with the field, are also compressible. We introduce a representation for motion fields based on wavelet bases, and approximate the compressibility of their coefficients with a piecewise smooth surrogate function that yields an objective function similar to classical optical flow formulations. We then show how to quantize and encode such coefficients with adaptive precision. We demonstrate the effectiveness of our approach by comparing its performance with a state-of-the-art wavelet video encoder. Experimental results on a number of standard flow and video datasets reveal that our method significantly outperforms both block-based and optical-flow-based motion compensation algorithms.",
"In image compression, codecs that rely on interpolation with partial differential equations (PDEs) are becoming increasingly popular. However, there have not been many attempts to transfer this concept to video compression. Since real-time performance is challenging for PDE-based reconstruction, first efficient approaches work on a frame-by-frame basis and focus on parallel implementations without considering coding quality. So far, there is no fully PDE-based video codec that exploits temporal redundancies. As a remedy, we propose a modular framework that combines PDE-based compression with motion compensation: Intra frames are predicted with PDE-based inpainting and inter frames with dense optic flow fields. We use this framework to develop a proof-of-concept codec that combines homogeneous diffusion inpainting with the variational optic flow model of (2004). Even without sophisticated parallelisation, we are able to perform real-time decompression of colour videos for the first time in PDE-based video compression."
]
}
|
1906.12263
|
2954128485
|
Motion estimation is an important component of video codecs and various applications in computer vision. Especially in video compression, the compact representation of motion fields is crucial, as modern video codecs use them for inter frame prediction. In recent years, compression methods relying on diffusion-based inpainting have become an increasingly competitive alternative to classical transform-based codecs. They perform particularly well on piecewise smooth data, suggesting that motion fields can be efficiently represented by such approaches. However, they have so far not been used for the compression of motion data. Therefore, we assess the potential of flow field compression based on homogeneous diffusion with a specifically designed new framework: Our codec stores only a few representative flow vectors and reconstructs the flow field with edge-aware homogeneous diffusion inpainting. Additionally stored edge data thereby ensure the accurate representation of discontinuities in the flow field. Our experiments show that this approach can outperform state-of-the-art codecs such as JPEG2000 and BPG HEVC intra.
|
Our method is related to codecs for the compression of depth maps, as they also have a piecewise smooth structure. The approach of @cite_2 stores mask pixels on both sides of edges and uses homogeneous diffusion inpainting to reconstruct smooth regions in-between. A similar approach by @cite_12 for cartoon-like images also selects mask pixels along edges. @cite_5 extended this idea by explicitly storing segment boundaries with chain codes and selecting mask pixels on a hexagonal grid.
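A small sketch of the edge-adjacent mask selection described above, under assumed thresholds: pixels next to depth discontinuities (on either side) are stored, together with a sparse regular grid, and the smooth interior is left to diffusion inpainting (see the sketch earlier in this section). This illustrates the principle, not the cited codecs' exact heuristics.

```python
# Illustrative mask selection for piecewise smooth depth maps.
import numpy as np

def select_mask(depth, edge_threshold=5.0, grid_step=16):
    """Returns a boolean mask of pixels worth storing: those adjacent to a
    depth discontinuity, plus a sparse grid for the smooth interior."""
    gx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    gy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    edges = (gx > edge_threshold) | (gy > edge_threshold)
    # mark pixels on both sides of each discontinuity
    near_edge = edges | np.roll(edges, -1, 0) | np.roll(edges, -1, 1)
    sparse_grid = np.zeros_like(edges)
    sparse_grid[::grid_step, ::grid_step] = True
    return near_edge | sparse_grid
```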
|
{
"cite_N": [
"@cite_5",
"@cite_12",
"@cite_2"
],
"mid": [
"2155665474",
"",
"2043675455"
],
"abstract": [
"The efficient compression of depth maps is becoming more and more important. We present a novel codec specifically suited for this task. In the encoding step we segment the image and extract between-pixel contours. Subsequently we optimise the grey values at carefully selected mask points, including both hexagonal grid locations as well as freely chosen points. We use a chain code to store the contours. For the decoding we apply a segment-based homogeneous diffusion inpainting. The segmentation allows parallel processing of the individual segments. Experiments show that our compression algorithm outperforms comparable methods such as JPEG or JPEG2000, while being competitive with HEVC (High Efficiency Video Coding).",
"",
"The multi-view plus depth video (MVD) format has recently been introduced for 3DTV and free-viewpoint video (FVV) scene rendering. Given one view (or several views) with its depth information, depth image-based rendering techniques have the ability to generate intermediate views. The MVD format however generates large volumes of data which need to be compressed for storage and transmission. This paper describes a new depth map encoding algorithm which aims at exploiting the intrinsic depth maps properties. Depth images indeed represent the scene surface and are characterized by areas of smoothly varying grey levels separated by sharp edges at the position of object boundaries. Preserving these characteristics is important to enable high quality view rendering at the receiver side. The proposed algorithm proceeds in three steps: the edges at object boundaries are first detected using a Sobel operator. The positions of the edges are encoded using the JBIG algorithm. The luminance values of the pixels along the edges are then encoded using an optimized path encoder. The decoder runs a fast diffusion-based inpainting algorithm which fills in the unknown pixels within the objects by starting from their boundaries. The performance of the algorithm is assessed against JPEG-2000 and HEVC, both in terms of PSNR of the depth maps versus rate as well as in terms of PSNR of the synthesized virtual views."
]
}
|
1906.12237
|
2954918751
|
The Sybil attack plagues all peer-to-peer systems, and modern open distributed ledgers employ a number of tactics to prevent it, ranging from proof of work or other resources such as space, stake or memory, to traditional admission control in permissioned settings. With SybilQuorum we propose an alternative approach to securing an open distributed ledger against Sybil attacks, and ensuring consensus amongst honest participants, leveraging social network based Sybil defences. We show how nodes expressing their trust relationships through the ledger can bootstrap and operate a value system, and a general transaction system, and how Sybil attacks are thwarted. We empirically evaluate our system as a secure Federated Byzantine Agreement System, and extend the theory of those systems to do so.
|
Besides blockchains, systems leveraging social networks --- and explicit trust judgments of users about each other --- have been proposed to combat Sybil attacks. Early work considers leveraging the 'introduction graph', by which nodes get access to a Distributed Hash Table through other nodes, to ensure routing security @cite_2. Raph Levien productized those ideas to extract the reputation of developers in 'Advogato' @cite_3; and Sam Lessin @cite_16 proposed using trust graphs backed by financial commitments to infer the financial trustworthiness of users in a graph in the context of blockchains.
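To give a flavour of how such explicit trust graphs can be turned into per-user scores, below is a generic power-iteration trust propagation sketch in the personalised-PageRank style. It is not Advogato's actual metric nor Lessin's proposal; the damping factor, iteration count, and names are assumptions for illustration.

```python
# Generic trust propagation over an explicit trust graph (illustrative).
import numpy as np

def propagate_trust(adj, seed, alpha=0.85, iterations=50):
    """adj: n x n matrix, adj[i, j] = weight of i's trust edge to j, with
    each row summing to 1; seed: n-vector of a priori trusted mass.
    Returns a personalised-PageRank-style trust score per node."""
    seed = seed / seed.sum()
    t = seed.copy()
    for _ in range(iterations):
        t = alpha * adj.T @ t + (1 - alpha) * seed  # walk + restart at seeds
    return t
```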
|
{
"cite_N": [
"@cite_16",
"@cite_3",
"@cite_2"
],
"mid": [
"",
"2103438541",
"1587208850"
],
"abstract": [
"",
"The Internet is an amazingly powerful tool for connecting people together, unmatched in human history. Yet, with that power comes great potential for spam and abuse. Trust metrics are an attempt to compute the set of which people are trustworthy and which are likely attackers. This chapter presents two specific trust metrics developed and deployed on the Advogato Website, which is a community blog for free software developers. This real-world experience demonstrates that the trust metrics fulfilled their goals, but that for good results, it is important to match the assumptions of the abstract trust metric computation to the real-world implementation.",
"Distributed Hash Tables (DHTs) are very efficient distributed systems for routing, but at the same time vulnerable to disruptive nodes. Designers of such systems want them used in open networks, where an adversary can perform a sybil attack by introducing a large number of corrupt nodes in the network, considerably degrading its performance. We introduce a routing strategy that alleviates some of the effects of such an attack by making sure that lookups are performed using a diverse set of nodes. This ensures that at least some of the nodes queried are good, and hence the search makes forward progress. This strategy makes use of latent social information present in the introduction graph of the network."
]
}
|
1906.12237
|
2954918751
|
The Sybil attack plagues all peer-to-peer systems, and modern open distributed ledgers employ a number of tactics to prevent it, ranging from proof of work or other resources such as space, stake or memory, to traditional admission control in permissioned settings. With SybilQuorum we propose an alternative approach to securing an open distributed ledger against Sybil attacks, and ensuring consensus amongst honest participants, leveraging social network based Sybil defences. We show how nodes expressing their trust relationships through the ledger can bootstrap and operate a value system, and a general transaction system, and how Sybil attacks are thwarted. We empirically evaluate our system as a secure Federated Byzantine Agreement System, and extend the theory of those systems to do so.
|
Academic works within this family of systems consider general social network information distributed in a peer to peer network to allow each node to determine which other nodes are genuine or Sybils. In this line of work SybilGuard @cite_1 and SybilLimit @cite_13 perform a distributed computation, using random walks in a network, to determine the honest regions within it. SybilInfer @cite_19 takes a centralized approach, and analyzes a stored social graph to identify potential Sybil regions.
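The random-walk intuition behind SybilGuard and SybilLimit can be illustrated with a toy sketch: because the cut between the honest and Sybil regions is small, short walks started from a known-honest node rarely end inside the Sybil region, so tail-node frequencies concentrate on honest nodes. This shows only the underlying principle, not either protocol's verification procedure; all names are illustrative.

```python
# Toy illustration of the random-walk principle behind these defences.
import random
from collections import Counter

def walk_tail_counts(graph, start, walk_length=10, num_walks=1000):
    """graph: dict node -> list of neighbours. Returns how often each node
    ends a random walk started at `start`; nodes that are (almost) never
    reached are candidate Sybils."""
    counts = Counter()
    for _ in range(num_walks):
        node = start
        for _ in range(walk_length):
            node = random.choice(graph[node])
        counts[node] += 1
    return counts
```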
|
{
"cite_N": [
"@cite_13",
"@cite_19",
"@cite_1"
],
"mid": [
"2110801527",
"1551760018",
"2153305401"
],
"abstract": [
"Open-access distributed systems such as peer-to-peer systems are particularly vulnerable to sybil attacks, where a malicious user creates multiple fake identities (called sybil nodes). Without a trusted central authority that can tie identities to real human beings, defending against sybil attacks is quite challenging. Among the small number of decentralized approaches, our recent SybilGuard protocol leverages a key insight on social networks to bound the number of sybil nodes accepted. Despite its promising direction, SybilGuard can allow a large number of sybil nodes to be accepted. Furthermore, SybilGuard assumes that social networks are fast-mixing, which has never been confirmed in the real world. This paper presents the novel SybilLimit protocol that leverages the same insight as SybilGuard, but offers dramatically improved and near-optimal guarantees. The number of sybil nodes accepted is reduced by a factor of Θ(√n), or around 200 times in our experiments for a million-node system. We further prove that SybilLimit's guarantee is at most a log n factor away from optimal when considering approaches based on fast-mixing social networks. Finally, based on three large-scale real-world social networks, we provide the first evidence that real-world social networks are indeed fast-mixing. This validates the fundamental assumption behind SybilLimit's and SybilGuard's approach.",
"SybilInfer is an algorithm for labelling nodes in a social network as honest users or Sybils controlled by an adversary. At the heart of SybilInfer lies a probabilistic model of honest social networks, and an inference engine that returns potential regions of dishonest nodes. The Bayesian inference approach to Sybil detection comes with the advantage label has an assigned probability, indicating its degree of certainty. We prove through analytical results as well as experiments on simulated and real-world network topologies that, given standard constraints on the adversary, SybilInfer is secure, in that it successfully distinguishes between honest and dishonest nodes and is not susceptible to manipulation by the adversary. Furthermore, our results show that SybilInfer outperforms state of the art algorithms, both in being more widely applicable, as well as providing vastly more accurate results.",
"Peer-to-peer and other decentralized, distributed systems are known to be particularly vulnerable to sybil attacks. In a sybil attack, a malicious user obtains multiple fake identities and pretends to be multiple, distinct nodes in the system. By controlling a large fraction of the nodes in the system, the malicious user is able to ldquoout voterdquo the honest users in collaborative tasks such as Byzantine failure defenses. This paper presents SybilGuard, a novel protocol for limiting the corruptive influences of sybil attacks. Our protocol is based on the ldquosocial networkrdquo among user identities, where an edge between two identities indicates a human-established trust relationship. Malicious users can create many identities but few trust relationships. Thus, there is a disproportionately small ldquocutrdquo in the graph between the sybil nodes and the honest nodes. SybilGuard exploits this property to bound the number of identities a malicious user can create. We show the effectiveness of SybilGuard both analytically and experimentally."
]
}
|
1906.12237
|
2954918751
|
The Sybil attack plagues all peer-to-peer systems, and modern open distributed ledgers employ a number of tactics to prevent it, ranging from proof of work or other resources such as space, stake or memory, to traditional admission control in permissioned settings. With SybilQuorum we propose an alternative approach to securing an open distributed ledger against Sybil attacks, and ensuring consensus amongst honest participants, leveraging social network based Sybil defences. We show how nodes expressing their trust relationships through the ledger can bootstrap and operate a value system, and a general transaction system, and how Sybil attacks are thwarted. We empirically evaluate our system as a secure Federated Byzantine Agreement System, and extend the theory of those systems to do so.
|
Subsequent work questions a number of assumptions based on the analysis of real-world social graphs @cite_11. This work is influential in that it highlights that the social graphs on which these defences rest must truly capture trust judgments, and must provide incentives for users not to accept arbitrary links, including links to malicious nodes. In this work we also highlight a further limit of SybilInfer as originally proposed: it is an effective mechanism to detect Sybil regions in the presence of an attack, but it also produces a large number of "false positives" when the network is free of such attacks --- by misclassifying a large number of honest nodes as Sybils. We provide a solution to this problem.
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"2148667671"
],
"abstract": [
"Many graphs in general, and social graphs in particular, are directed by nature. However, applications built on top of social networks, including Sybil defenses, information routing and dissemination, and anonymous communication require mutual relationships which produce undirected graphs. When undirected graphs are used as testing tools for these applications to bring insight on their usability and potential deployment, directed graphs are converted into undirected graphs by omitting edge directions or by augmenting graphs. Unfortunately, it is unclear how altering these graphs affects the quality of their mixing time. Motivated by the lack of prior work on this problem, we investigate mathematical tools for measuring the mixing time of directed social graphs and its associated error bounds. We use these tools to measure the mixing time of several benchmarking directed graphs and their undirected counterparts. We then measure how this difference impacts two applications built on top of social networks: a Sybil defense mechanism and an anonymous communication system."
]
}
|
1811.12786
|
2903311229
|
In this paper, we propose a novel scene text detection method named TextMountain. The key idea of TextMountain is making full use of border-center information. Different from previous works that treat center-border as a binary classification problem, we predict text center-border probability (TCBP) and text center-direction (TCD). The TCBP is just like a mountain whose top is the text center and whose foot is the text border. The mountaintop can separate text instances, which cannot easily be achieved using a semantic segmentation map, and its rising direction can plan a road to the top for each pixel at the mountain foot in the grouping stage. The TCD helps TCBP learn better. Our labeling rules do not lead to ambiguity under angle transformations, so the proposed method is robust to multi-oriented text and also handles curved text well. In the inference stage, each pixel at the mountain foot needs to search the path to the mountaintop, and this process can be completed efficiently in parallel, yielding the efficiency of our method compared with others. The experiments on the MLT, ICDAR2015, RCTW-17 and SCUT-CTW1500 databases demonstrate that the proposed method achieves better or comparable performance in terms of both accuracy and efficiency. It is worth mentioning that our method achieves an F-measure of 76.85 on MLT, which outperforms the previous methods by a large margin. Code will be made available.
|
Traditional text detection methods mainly use extremal regions, border information or characters' morphological information to locate text, such as the Stroke Width Transform (SWT) @cite_34 and Maximally Stable Extremal Regions (MSER) @cite_37 @cite_49 . With the emergence of deep learning, many methods use deep neural networks to solve this problem and greatly exceed traditional methods in both performance and robustness. The deep learning based methods can be roughly divided into two categories: regression-based methods and segmentation-based methods.
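To make the traditional pipeline concrete, the following sketch uses OpenCV's MSER detector to propose candidate character regions; the input path is a placeholder and the post-filtering named in the comments is only indicative, not the exact recipe of the cited methods.

```python
# A minimal sketch of MSER-based text region proposal, assuming opencv-python.
import cv2

img = cv2.imread("scene.jpg")                    # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()                         # default parameters; tune per dataset
regions, bboxes = mser.detectRegions(gray)

# Draw candidate boxes; a real pipeline would filter by aspect ratio and
# stroke-width consistency, then group survivors into words or lines.
for (x, y, w, h) in bboxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("mser_candidates.jpg", img)
```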
|
{
"cite_N": [
"@cite_37",
"@cite_34",
"@cite_49"
],
"mid": [
"2061802763",
"2142159465",
"1026856040"
],
"abstract": [
"An end-to-end real-time scene text localization and recognition method is presented. The real-time performance is achieved by posing the character detection problem as an efficient sequential selection from the set of Extremal Regions (ERs). The ER detector is robust to blur, illumination, color and texture variation and handles low-contrast text. In the first classification stage, the probability of each ER being a character is estimated using novel features calculated with O(1) complexity per region tested. Only ERs with locally maximal probability are selected for the second stage, where the classification is improved using more computationally expensive features. A highly efficient exhaustive search with feedback loops is then applied to group ERs into words and to select the most probable character segmentation. Finally, text is recognized in an OCR stage trained using synthetic fonts. The method was evaluated on two public datasets. On the ICDAR 2011 dataset, the method achieves state-of-the-art text localization results amongst published methods and it is the first one to report results for end-to-end text recognition. On the more challenging Street View Text dataset, the method achieves state-of-the-art recall. The robustness of the proposed method against noise and low contrast of characters is demonstrated by “false positives” caused by detected watermark text in the dataset.",
"We present a novel image operator that seeks to find the value of stroke width for each image pixel, and demonstrate its use on the task of text detection in natural images. The suggested operator is local and data dependent, which makes it fast and robust enough to eliminate the need for multi-scale computation or scanning windows. Extensive testing shows that the suggested scheme outperforms the latest published algorithms. Its simplicity allows the algorithm to detect texts in many fonts and languages.",
"Text localization from scene images is a challenging task that finds application in many areas. In this work, we propose a novel hybrid text localization approach that exploits Multi-resolution Maximally Stable Extremal Regions to discard false-positive detections from the text confidence maps generated by a Fast Feature Pyramid based sliding window classifier. The use of a multi-scale approach during both feature computation and connected component extraction allows our method to identify uncommon text elements that are usually not detected by competing algorithms, while the adoption of approximated features and appropriately filtered connected components assures a low overall computational complexity of the proposed system."
]
}
|
1811.12666
|
2949904587
|
This paper presents FSNet, a deep generative model for image-based face swapping. Traditionally, face-swapping methods are based on three-dimensional morphable models (3DMMs), and facial textures are replaced between the estimated three-dimensional (3D) geometries in two images of different individuals. However, the estimation of 3D geometries along with different lighting conditions using 3DMMs is still a difficult task. We herein represent the face region with a latent variable that is assigned by the proposed deep neural network (DNN) instead of facial textures. The proposed DNN synthesizes a face-swapped image using the latent variable of the face region and another image of the non-face region. The proposed method does not require fitting a 3DMM; additionally, it performs face swapping simply by feeding two face images to the proposed network. Consequently, our DNN-based face swapping performs better than previous approaches for challenging inputs with different face orientations and lighting conditions. Through several experiments, we demonstrated that the proposed method performs face swapping in a more stable manner than the state-of-the-art method, and that its results are comparable with those of that method.
|
Several recent studies have applied deep neural networks to image-based face swapping. @cite_4 indicated that their conditional image generation technique can alter face identities by conditioning the generated images with an identity vector. Meanwhile, @cite_24 applied neural style transfer @cite_19 to face swapping by treating face identities as the artistic styles of the original style transfer. However, these recent approaches still share a problem: they require at least dozens of images of an individual to obtain a face-swapped image. Collecting that many images is possible, albeit unreasonable, for most non-celebrities.
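To make the "identity as style" idea concrete, the sketch below computes the Gram-matrix style loss minimized in neural style transfer @cite_19 ; in the face-swapping variant @cite_24 , feature statistics like these are computed from photographs of the target identity. The feature tensors and their shapes here are hypothetical stand-ins for CNN activations.

```python
# A minimal sketch of the Gram-matrix style loss, assuming PyTorch.
import torch

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """feat: (B, C, H, W) feature maps from some CNN layer."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)   # (B, C, C), normalized

def style_loss(feat_generated: torch.Tensor, feat_identity: torch.Tensor) -> torch.Tensor:
    # Matching Gram statistics pushes the generated face toward the
    # "style" (here: identity appearance) of the reference photographs.
    return torch.mean((gram_matrix(feat_generated) - gram_matrix(feat_identity)) ** 2)

# Hypothetical feature maps standing in for activations of the two images.
fg = torch.randn(1, 256, 32, 32)
fi = torch.randn(1, 256, 32, 32)
print(style_loss(fg, fi).item())
```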
|
{
"cite_N": [
"@cite_24",
"@cite_19",
"@cite_4"
],
"mid": [
"2952357609",
"2475287302",
"2604433135"
],
"abstract": [
"We consider the problem of face swapping in images, where an input identity is transformed into a target identity while preserving pose, facial expression, and lighting. To perform this mapping, we use convolutional neural networks trained to capture the appearance of the target identity from an unstructured collection of his her photographs.This approach is enabled by framing the face swapping problem in terms of style transfer, where the goal is to render an image in the style of another one. Building on recent advances in this area, we devise a new loss function that enables the network to produce highly photorealistic results. By combining neural networks with simple pre- and post-processing steps, we aim at making face swap work in real-time with no input from the user.",
"Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.",
"We present variational generative adversarial networks, a general learning framework that combines a variational auto-encoder with a generative adversarial network, for synthesizing images in fine-grained categories, such as faces of a specific person or objects in a category. Our approach models an image as a composition of label and latent attributes in a probabilistic model. By varying the fine-grained category label fed into the resulting generative model, we can generate images in a specific category with randomly drawn values on a latent attribute vector. Our approach has two novel aspects. First, we adopt a cross entropy loss for the discriminative and classifier network, but a mean discrepancy objective for the generative network. This kind of asymmetric loss function makes the GAN training more stable. Second, we adopt an encoder network to learn the relationship between the latent space and the real image space, and use pairwise feature matching to keep the structure of generated images. We experiment with natural images of faces, flowers, and birds, and demonstrate that the proposed models are capable of generating realistic and diverse samples with fine-grained category labels. We further show that our models can be applied to other tasks, such as image inpainting, super-resolution, and data augmentation for training better face recognition models."
]
}
|
1811.12666
|
2949904587
|
This paper presents FSNet, a deep generative model for image-based face swapping. Traditionally, face-swapping methods are based on three-dimensional morphable models (3DMMs), and facial textures are replaced between the estimated three-dimensional (3D) geometries in two images of different individuals. However, the estimation of 3D geometries along with different lighting conditions using 3DMMs is still a difficult task. We herein represent the face region with a latent variable that is assigned with the proposed deep neural network (DNN) instead of facial textures. The proposed DNN synthesizes a face-swapped image using the latent variable of the face region and another image of the non-face region. The proposed method is not required to fit to the 3DMM; additionally, it performs face swapping only by feeding two face images to the proposed network. Consequently, our DNN-based face swapping performs better than previous approaches for challenging inputs with different face orientations and lighting conditions. Through several experiments, we demonstrated that the proposed method performs face swapping in a more stable manner than the state-of-the-art method, and that its results are compatible with the method thereof.
|
Another recent study @cite_1 proposed an identity-preserving GAN for transferring image appearances between two face images. While the purpose of this study is close to that of face swapping, it does not preserve the appearances of non-face regions, including hairstyles and backgrounds. Several studies on DNN-based image completion @cite_15 @cite_7 have demonstrated face appearance manipulation by filling in parts of an input image with their DNNs. However, users can hardly predict the results of these approaches, because the networks only fill in the user-specified regions such that the completed results imitate the images in the training data.
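The completion-based manipulation described above boils down to compositing network output into a user-specified hole. The sketch below isolates that masking and compositing step; the shapes are arbitrary and the generator is a placeholder, not the architecture of @cite_15 or @cite_7 .

```python
# A minimal sketch of mask-based image completion compositing, assuming numpy.
import numpy as np

img = np.random.rand(256, 256, 3)        # placeholder input image in [0, 1]
mask = np.zeros((256, 256, 1))           # 1 inside the region to be filled
mask[96:160, 96:160] = 1.0               # user-specified hole

def generator(x, m):
    # Placeholder for a trained completion network; it just returns the
    # image mean here so the example runs end to end.
    return np.full_like(x, x.mean())

completed = generator(img, mask)
# Only the hole is replaced; everything outside it is kept verbatim,
# which is why non-hole regions survive untouched in these methods.
output = mask * completed + (1.0 - mask) * img
```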
|
{
"cite_N": [
"@cite_15",
"@cite_1",
"@cite_7"
],
"mid": [
"2738588019",
"2794512294",
"2784649957"
],
"abstract": [
"We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling-in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool the both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with the patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete the images of objects with familiar and highly specific structures, such as faces.",
"We propose a framework based on Generative Adversarial Networks to disentangle the identity and attributes of faces, such that we can conveniently recombine different identities and attributes for identity preserving face synthesis in open domains. Previous identity preserving face synthesis processes are largely confined to synthesizing faces with known identities that are already in the training dataset. To synthesize a face with identity outside the training dataset, our framework requires one input image of that subject to produce an identity vector, and any other input face image to extract an attribute vector capturing, e.g., pose, emotion, illumination, and even the background. We then recombine the identity vector and the attribute vector to synthesize a new face of the subject with the extracted attribute. Our proposed framework does not need to annotate the attributes of faces in any way. It is trained with an asymmetric loss function to better preserve the identity and stabilize the training process. It can also effectively leverage large amounts of unlabeled training face images to further improve the fidelity of the synthesized faces for subjects that are not presented in the labeled training face dataset. Our experiments demonstrate the efficacy of the proposed framework. We also present its usage in a much broader set of applications including face frontalization, face attribute morphing, and face adversarial example detection.",
"We present a deep learning approach for high resolution face completion with multiple controllable attributes (e.g., male and smiling) under arbitrary masks. Face completion entails understanding both structural meaningfulness and appearance consistency locally and globally to fill in \"holes\" whose content do not appear elsewhere in an input image. It is a challenging task with the difficulty level increasing significantly with respect to high resolution, the complexity of \"holes\" and the controllable attributes of filled-in fragments. Our system addresses the challenges by learning a fully end-to-end framework that trains generative adversarial networks (GANs) progressively from low resolution to high resolution with conditional vectors encoding controllable attributes. We design novel network architectures to exploit information across multiple scales effectively and efficiently. We introduce new loss functions encouraging sharp completion. We show that our system can complete faces with large structural and appearance variations using a single feed-forward pass of computation with mean inference time of 0.007 seconds for images at 1024 x 1024 resolution. We also perform a pilot human study that shows our approach outperforms state-of-the-art face completion methods in terms of rank analysis. The code will be released upon publication."
]
}
|
1811.12556
|
2903195088
|
In this empirical paper, we investigate how learning agents can be arranged in more efficient communication topologies for improved learning. This is an important problem because a common technique to improve speed and robustness of learning in deep reinforcement learning and many other machine learning algorithms is to run multiple learning agents in parallel. The standard communication architecture typically involves all agents intermittently communicating with each other (fully connected topology) or with a centralized server (star topology). Unfortunately, optimizing the topology of communication over the space of all possible graphs is a hard problem, so we borrow results from the networked optimization and collective intelligence literatures which suggest that certain families of network topologies can lead to strong improvements over fully-connected networks. We start by introducing alternative network topologies to DRL benchmark tasks under the Evolution Strategies paradigm which we call Network Evolution Strategies. We explore the relative performance of the four main graph families and observe that one such family (Erdos-Renyi random graphs) empirically outperforms all other families, including the de facto fully-connected communication topologies. Additionally, the use of alternative network topologies has a multiplicative performance effect: we observe that when 1000 learning agents are arranged in a carefully designed communication topology, they can compete with 3000 agents arranged in the de facto fully-connected topology. Overall, our work suggests that distributed machine learning algorithms would learn more efficiently if the communication topology between learning agents was optimized.
|
There is significant evidence from the decentralized optimization literature that the network structure of communication between nodes significantly affects the convergence rate and accuracy of learning @cite_26 @cite_20 @cite_22 . Similarly, in the collective intelligence literature, alternative network structures have been shown to result in increased exploration, higher overall maximum reward, and higher diversity of solutions in both simulated high-dimensional optimization @cite_11 and human experiments @cite_8 . We know of only one piece of prior work that has examined network topology in distributed machine learning @cite_16 ; however, topology was only an aside in that work, which offered little analysis or motivation for its brief investigation of the effect. Another recent work examines periodic broadcasting of successful parameter settings in deep learning, but does not leverage complex network topologies @cite_24 .
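A common way such topologies enter distributed learning is through gossip averaging: each agent mixes its parameters with those of its neighbours according to the communication graph. The sketch below builds an Erdos-Renyi topology and applies one gossip round; the agent count, edge probability, and parameter dimension are hypothetical.

```python
# A minimal sketch of gossip-style parameter mixing on an Erdos-Renyi
# communication topology, assuming networkx and numpy.
import networkx as nx
import numpy as np

n_agents, dim = 100, 10
G = nx.erdos_renyi_graph(n=n_agents, p=0.1, seed=0)

# Row-stochastic mixing matrix: uniform weight over self plus neighbours.
W = nx.to_numpy_array(G) + np.eye(n_agents)
W = W / W.sum(axis=1, keepdims=True)

theta = np.random.randn(n_agents, dim)   # each row: one agent's parameters

# One communication round: every agent averages with its neighbours.
# A fully connected topology would make every row of W identical.
theta = W @ theta
```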
|
{
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_8",
"@cite_24",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"2760049195",
"2130263842",
"",
"2770298516",
"2765172389",
"",
"2097141357"
],
"abstract": [
"In decentralized optimization, nodes cooperate to minimize an overall objective function that is the sum (or average) of per-node private objective functions. Algorithms interleave local computations with communication among all or a subset of the nodes. Motivated by a variety of applications---distributed estimation in sensor networks, fitting models to massive data sets, and distributed control of multi-robot systems, to name a few---significant advances have been made towards the development of robust, practical algorithms with theoretical performance guarantees. This paper presents an overview of recent work in this area. In general, rates of convergence depend not only on the number of nodes involved and the desired level of accuracy, but also on the structure and nature of the network over which nodes communicate (e.g., whether links are directed or undirected, static or time-varying). We survey the state-of-the-art algorithms and their analyses tailored to these different scenarios, highlighting the role of the network topology.",
"We consider a distributed multi-agent network system where each agent has its own convex objective function, which can be evaluated with stochastic errors. The problem consists of minimizing the sum of the agent functions over a commonly known constraint set, but without a central coordinator and without agents sharing the explicit form of their objectives. We propose an asynchronous broadcast-based algorithm where the communications over the network are subject to random link failures. We investigate the convergence properties of the algorithm for a diminishing (random) stepsize and a constant stepsize, where each agent chooses its own stepsize independently of the other agents. Under some standard conditions on the gradient errors, we establish almost sure convergence of the method to an optimal point for diminishing stepsize. For constant stepsize, we establish some error bounds on the expected distance from the optimal point and the expected function value. We also provide numerical results.",
"",
"Neural networks dominate the modern machine learning landscape, but their training and success still suffer from sensitivity to empirical choices of hyperparameters such as model architecture, loss function, and optimisation algorithm. In this work we present , a simple asynchronous optimisation algorithm which effectively utilises a fixed computational budget to jointly optimise a population of models and their hyperparameters to maximise performance. Importantly, PBT discovers a schedule of hyperparameter settings rather than following the generally sub-optimal strategy of trying to find a single fixed set to use for the whole course of training. With just a small modification to a typical distributed hyperparameter training framework, our method allows robust and reliable training of models. We demonstrate the effectiveness of PBT on deep reinforcement learning problems, showing faster wall-clock convergence and higher final performance of agents by optimising over a suite of hyperparameters. In addition, we show the same method can be applied to supervised learning for machine translation, where PBT is used to maximise the BLEU score directly, and also to training of Generative Adversarial Networks to maximise the Inception score of generated images. In all cases PBT results in the automatic discovery of hyperparameter schedules and model selection which results in stable training and better final performance.",
"We propose a multiagent distributed actor-critic algorithm for multitask reinforcement learning (MRL), named Diff-DAC. The agents are connected, forming a (possibly sparse) network. Each agent is assigned a task and has access to data from this local task only. During the learning process, the agents are able to communicate some parameters to their neighbors. Since the agents incorporate their neighbors' parameters into their own learning rules, the information is diffused across the network, and they can learn a common policy that generalizes well across all tasks. Diff-DAC is scalable since the computational complexity and communication overhead per agent grow with the number of neighbors, rather than with the total number of agents. Moreover, the algorithm is fully distributed in the sense that agents self-organize, with no need for coordinator node. Diff-DAC follows an actor-critic scheme where the value function and the policy are approximated with deep neural networks, being able to learn expressive policies from raw data. As a by-product of Diff-DAC's derivation from duality theory, we provide novel insights into the standard actor-critic framework, showing that it is actually an instance of the dual ascent method to approximate the solution of a linear program. Experiments illustrate the performance of the algorithm in the cart-pole, inverted pendulum, and swing-up cart-pole environments.",
"",
"Whether as team members brainstorming or cultures experimenting with new technologies, problem solvers communicate and share ideas. This paper examines how the structure of communication networks among actors can affect system-level performance. We present an agent-based computer simulation model of information sharing in which the less successful emulate the more successful. Results suggest that when agents are dealing with a complex problem, the more efficient the network at disseminating information, the better the short-run but the lower the long-run performance of the system. The dynamic underlying this result is that an inefficient network maintains diversity in the system and is thus better for exploration than an efficient network, supporting a more thorough search for solutions in the long run. For intermediate time frames, there is an inverted-U relationship between connectedness and performance, in which both poorly and well-connected systems perform badly, and moderately connected systems perf..."
]
}
|
1811.12608
|
2902806643
|
Computing object skeletons in natural images is challenging, owing to large variations in object appearance and scale, and the complexity of handling background clutter. Many recent methods frame object skeleton detection as a binary pixel classification problem, which is similar in spirit to learning-based edge detection, as well as to semantic segmentation methods. In the present article, we depart from this strategy by training a CNN to predict a two-dimensional vector field, which maps each scene point to a candidate skeleton pixel, in the spirit of flux-based skeletonization algorithms. This "image context flux" representation has two major advantages over previous approaches. First, it explicitly encodes the relative position of skeletal pixels to semantically meaningful entities, such as the image points in their spatial context, and hence also the implied object boundaries. Second, since the skeleton detection context is a region-based vector field, it is better able to cope with object parts of large width. We evaluate the proposed method on three benchmark datasets for skeleton detection and two for symmetry detection, achieving consistently superior performance over state-of-the-art methods.
|
Many early skeleton detection algorithms @cite_21 @cite_25 @cite_22 @cite_18 @cite_42 @cite_28 @cite_49 are based on gradient intensity maps. In @cite_45 , the authors study the limiting average outward flux of the gradient of a Euclidean distance function to a 2D or 3D object boundary. The skeleton is associated with those locations where an energy principle is violated, where there is a net inward flux. Other researchers have constructed the skeleton by merging local skeleton segments with a learned segment-linking model. Levinshtein et al. @cite_34 propose a method that works directly on images, using multi-scale superpixels and a learned affinity between adjacent superpixels to group proximal medial points. A graph-based clustering algorithm is then applied to form the complete skeleton. Lee et al. @cite_24 improve the approach in @cite_34 by using a deformable disc model, which can detect curved and tapered symmetric parts. A novel definition of an appearance medial axis transform (AMAT) has been proposed in @cite_23 to detect symmetry in the wild in a purely bottom-up, unsupervised fashion. In @cite_30 , the authors present an unconventional method based on joint co-skeletonization and co-segmentation.
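The flux-based skeletonization of @cite_45 can be sketched directly: compute the Euclidean distance transform of the object, take its gradient field, and mark points where the flux through a small neighbourhood is strongly negative (net inward flux). The sketch below is a simplified 2D version; the toy mask and the threshold are hypothetical.

```python
# A minimal 2D sketch of flux-based skeleton detection, assuming scipy/numpy.
import numpy as np
from scipy import ndimage

mask = np.zeros((128, 128), dtype=bool)
mask[32:96, 16:112] = True               # hypothetical binary object

# The gradient of the Euclidean distance transform points away from the boundary.
dist = ndimage.distance_transform_edt(mask)
gy, gx = np.gradient(dist)

# Average outward flux ~ divergence of the gradient field; strongly
# negative values (net inward flux) mark medial/skeletal points.
div = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
skeleton = (div < -0.3) & mask           # hypothetical threshold
print(skeleton.sum(), "candidate skeleton pixels")
```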
|
{
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_22",
"@cite_28",
"@cite_21",
"@cite_42",
"@cite_24",
"@cite_45",
"@cite_49",
"@cite_23",
"@cite_34",
"@cite_25"
],
"mid": [
"",
"2122737923",
"2111644809",
"2067663391",
"1495971627",
"2101698374",
"2114379931",
"",
"2162819370",
"",
"2133814640",
"2160255746"
],
"abstract": [
"",
"In this paper we describe a method for skeletonization of gray-scale images without segmentation. Our method is based on anisotropic vector diffusion. The skeleton strength map, calculated from the diffused vector field, provides us a measure of how possible each pixel could be on the skeletons. The final skeletons are traced from the skeleton strength map, which mimics the behavior of edge detection from the edge strength map of the original image. A couple of real or synthesized images will be shown to demonstrate the performance of our algorithm.",
"In this paper we introduce a new tool, called a pseudo-distance map (PDM), for extracting skeletons from grayscale images without region segmentation or edge detection. Given an edge-strength function (ESF) of a gray-scale image, the PDM is computed from the ESF using the partial differential equations we propose. The PDM can be thought of as a relaxed version of a Euclidean distance map. Therefore, its ridges correspond to the skeleton of the original gray-scale image and it provides information on the approximate width of skeletonized structures. Since the PDM is directly computed from the ESF without thresholding it, the skeletonization result is generally robust and less noisy. We tested our method using a variety of synthetic and real images. The experimental results show that our method works well on such images.",
"Centerline detection and line width estimation are important for many computer vision applications, e.g., road network extraction from high resolution remotely sensed imagery. Radon transform-based linear feature detection has many advantages over other approaches: for example, its robustness in noisy images. However, it usually fails to detect the centerline of a thick line due to the peak selection problem. In this paper, several key issues that affect the centerline detection using the radon transform are investigated. A mean filter is proposed to locate the true peak in the radon image and a profile analysis technique is used to further refine the line parameters. The thetas-boundary problem of the radon transform is also discussed and the erroneous line parameters are corrected. Intensive experiments have shown that the proposed methodology is effective in finding the centerline and estimating the line width of thick lines",
"When computing descriptors of image data, the type of information that can be extracted may be strongly dependent on the scales at which the image operators are applied. This article presents a systematic methodology for addressing this problem. A mechanism is presented for automatic selection of scale levels when detecting one-dimensional image features, such as edges and ridges. A novel concept of a scale-space edge is introduced, defined as a connected set of points in scale-space at which: (i) the gradient magnitude assumes a local maximum in the gradient direction, and (ii) a normalized measure of the strength of the edge response is locally maximal over scales. An important consequence of this definition is that it allows the scale levels to vary along the edge. Two specific measures of edge strength are analyzed in detail, the gradient magnitude and a differential expression derived from the third-order derivative in the gradient direction. For a certain way of normalizing these differential descriptors, by expressing them in terms of so-called γ-normalized derivatives, an immediate consequence of this definition is that the edge detector will adapt its scale levels to the local image structure. Specifically, sharp edges will be detected at fine scales so as to reduce the shape distortions due to scale-space smoothing, whereas sufficiently coarse scales will be selected at diffuse edges, such that an edge model is a valid abstraction of the intensity profile across the edge. Since the scale-space edge is defined from the intersection of two zero-crossing surfaces in scale-space, the edges will by definition form closed curves. This simplifies selection of salient edges, and a novel significance measure is proposed, by integrating the edge strength along the edge. Moreover, the scale information associated with each edge provides useful clues to the physical nature of the edge. With just slight modifications, similar ideas can be used for formulating ridge detectors with automatic selection, having the characteristic property that the selected scales on a scale-space ridge instead reflect the width of the ridge. It is shown how the methodology can be implemented in terms of straightforward visual front-end operations, and the validity of the approach is supported by theoretical analysis as well as experiments on real-world and synthetic data.",
"In this paper, the algorithm for thinning of grey-scale images is proposed that is based on a pseudo-distance map (PDM). The PDM is a simplified distance map of gray-scale image and uses only that features of image and objects that are necessary to build a skeleton. The algorithm works fast for large gray-scale images and allows constructing a high quality skeleton",
"Symmetry is a powerful shape regularity that's been exploited by perceptual grouping researchers in both human and computer vision to recover part structure from an image without a priori knowledge of scene content. Drawing on the concept of a medial axis, defined as the locus of centers of maximal inscribed discs that sweep out a symmetric part, we model part recovery as the search for a sequence of deformable maximal inscribed disc hypotheses generated from a multiscale super pixel segmentation, a framework proposed by LEV09. However, we learn affinities between adjacent super pixels in a space that's invariant to bending and tapering along the symmetry axis, enabling us to capture a wider class of symmetric parts. Moreover, we introduce a global cost that perceptually integrates the hypothesis space by combining a pair wise and a higher-level smoothing term, which we minimize globally using dynamic programming. The new framework is demonstrated on two datasets, and is shown to significantly outperform the baseline LEV09.",
"",
"Scale-invariant interest points have found several highly successful applications in computer vision, in particular for image-based matching and recognition. This paper presents a theoretical analy ...",
"",
"Skeletonization algorithms typically decompose an object's silhouette into a set of symmetric parts, offering a powerful representation for shape categorization. However, having access to an object's silhouette assumes correct figure-ground segmentation, leading to a disconnect with the mainstream categorization community, which attempts to recognize objects from cluttered images. In this paper, we present a novel approach to recovering and grouping the symmetric parts of an object from a cluttered scene. We begin by using a multiresolution superpixel segmentation to generate medial point hypotheses, and use a learned affinity function to perceptually group nearby medial points likely to belong to the same medial branch. In the next stage, we learn higher granularity affinity functions to group the resulting medial branches likely to belong to the same object. The resulting framework yields a skelet al approximation that is free of many of the instabilities that occur with traditional skeletons. More importantly, it does not require a closed contour, enabling the application of skeleton-based categorization systems to more realistic imagery.",
"We introduce a method for segmenting a shape from an image and simultaneously determining its symmetry axis. The symmetry is used to help the segmentation and in turn the segmentation determines the symmetry. The problem is formulated as one of minimizing a goodness of fitness function and Dijkstra's algorithm is used to find the global minimum of the cost function. The results are illustrated on real images."
]
}
|
1811.12608
|
2902806643
|
Computing object skeletons in natural images is challenging, owing to large variations in object appearance and scale, and the complexity of handling background clutter. Many recent methods frame object skeleton detection as a binary pixel classification problem, which is similar in spirit to learning-based edge detection, as well as to semantic segmentation methods. In the present article, we depart from this strategy by training a CNN to predict a two-dimensional vector field, which maps each scene point to a candidate skeleton pixel, in the spirit of flux-based skeletonization algorithms. This "image context flux" representation has two major advantages over previous approaches. First, it explicitly encodes the relative position of skeletal pixels to semantically meaningful entities, such as the image points in their spatial context, and hence also the implied object boundaries. Second, since the skeleton detection context is a region-based vector field, it is better able to cope with object parts of large width. We evaluate the proposed method on three benchmark datasets for skeleton detection and two for symmetry detection, achieving consistently superior performance over state-of-the-art methods.
|
In other literature @cite_50 @cite_32 @cite_43 , object skeleton detection is treated as a pixel-wise classification or regression problem. Tsogkas and Kokkinos @cite_50 extract hand-designed features at each pixel and train a classifier for symmetry detection. They employ a multiple instance learning (MIL) framework to accommodate the unknown scale and orientation of symmetry axes. Shen et al. @cite_32 extend the approach in @cite_50 by training a group of MIL classifiers to capture the diversity of symmetry patterns. Sironi et al. @cite_43 propose a regression-based approach to improve the accuracy of skeleton locations. They train regressors which learn the distances to the closest skeleton in scale-space and identify the skeleton by finding the local maxima.
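The regression formulation of @cite_43 can be illustrated in two steps: turn a ground-truth skeleton map into a dense distance-based training target, and, at test time, recover skeleton points as local maxima of the predicted score map. The sketch below shows both; the toy maps, decay constant, and window size are hypothetical.

```python
# A minimal sketch of distance-regression targets and local-maxima
# skeleton extraction, assuming scipy/numpy.
import numpy as np
from scipy import ndimage

skeleton_gt = np.zeros((64, 64), dtype=bool)
skeleton_gt[32, 8:56] = True             # hypothetical ground-truth skeleton

# Training target: a score that decays with distance to the closest
# skeleton pixel (the regressors learn such a quantity).
d = ndimage.distance_transform_edt(~skeleton_gt)
target = np.exp(-d / 2.0)

# Inference: treat `target` as a predicted score map and keep pixels
# that are local maxima above a threshold.
local_max = ndimage.maximum_filter(target, size=5)
pred_skeleton = (target == local_max) & (target > 0.5)
```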
|
{
"cite_N": [
"@cite_43",
"@cite_32",
"@cite_50"
],
"mid": [
"2052516389",
"2160306297",
"174734558"
],
"abstract": [
"We propose a robust and accurate method to extract the centerlines and scale of tubular structures in 2D images and 3D volumes. Existing techniques rely either on filters designed to respond to ideal cylindrical structures, which lose accuracy when the linear structures become very irregular, or on classification, which is inaccurate because locations on centerlines and locations immediately next to them are extremely difficult to distinguish. We solve this problem by reformulating centerline detection in terms of a regression problem. We first train regressors to return the distances to the closest centerline in scale-space, and we apply them to the input images or volumes. The centerlines and the corresponding scale then correspond to the regressors local maxima, which can be easily identified. We show that our method outperforms state-of-the-art techniques for various 2D and 3D datasets.",
"Local reflection symmetry detection in nature images is a quite important but challenging task in computer vision. The main obstacle is both the scales and the orientations of symmetric structure are unknown. The multiple instance learning (MIL) framework sheds lights onto this task owing to its capability to well accommodate the unknown scales and orientations of the symmetric structures. However, to differentiate symmetry vs non-symmetry remains to face extreme confusions caused by clutters scenes and ambiguous object structures. In this paper, we propose a novel multiple instance learning framework for local reflection symmetry detection, named multiple instance subspace learning (MISL), which instead learns a group of models respectively on well partitioned subspaces. To obtain such subspaces, we propose an efficient dividing strategy under MIL setting, named partial random projection tree (PRPT), by taking advantage of the fact that each sample (bag) is represented by the proposed symmetry features computed at specific scale and orientation combinations (instances). Encouraging experimental results on two datasets demonstrate that the proposed local reflection symmetry detection method outperforms current state-of-the-arts. HighlightsWe perform clustering on samples represented by multiple instances.We learn a group of MIL classifiers on subspaces.We report state-of-the-arts results on the symmetry detection benchmark.",
"In this work we propose a learning-based approach to symmetry detection in natural images. We focus on ribbon-like structures, i.e. contours marking local and approximate reflection symmetry and make three contributions to improve their detection. First, we create and make publicly available a ground-truth dataset for this task by building on the Berkeley Segmentation Dataset. Second, we extract features representing multiple complementary cues, such as grayscale structure, color, texture, and spectral clustering information. Third, we use supervised learning to learn how to combine these cues, and employ MIL to accommodate the unknown scale and orientation of the symmetric structures. We systematically evaluate the performance contribution of each individual component in our pipeline, and demonstrate that overall we consistently improve upon results obtained using existing alternatives."
]
}
|
1811.12608
|
2902806643
|
Computing object skeletons in natural images is challenging, owing to large variations in object appearance and scale, and the complexity of handling background clutter. Many recent methods frame object skeleton detection as a binary pixel classification problem, which is similar in spirit to learning-based edge detection, as well as to semantic segmentation methods. In the present article, we depart from this strategy by training a CNN to predict a two-dimensional vector field, which maps each scene point to a candidate skeleton pixel, in the spirit of flux-based skeletonization algorithms. This "image context flux" representation has two major advantages over previous approaches. First, it explicitly encodes the relative position of skeletal pixels to semantically meaningful entities, such as the image points in their spatial context, and hence also the implied object boundaries. Second, since the skeleton detection context is a region-based vector field, it is better able to cope with object parts of large width. We evaluate the proposed method on three benchmark datasets for skeleton detection and two for symmetry detection, achieving consistently superior performance over state-of-the-art methods.
|
Though the method we propose in the present paper benefits from CNN-based learning, it differs from the methods in @cite_13 @cite_39 @cite_47 @cite_14 @cite_40 @cite_48 in a fundamental way, due to its different learning objective. Instead of treating object skeleton detection in natural images as a binary classification problem, DeepFlux focuses on learning the context flux of skeletons, and as such includes more informative non-local cues, such as the relative position of skeleton points to image points in their vicinity, and thus also, implicitly, the associated object boundaries. A direct consequence of this powerful image context flux representation is that a simple post-processing step can recover the skeleton directly from the learned flux, avoiding inaccurate localizations of skeletal points caused by non-maximum suppression in previous deep learning methods. In addition, DeepFlux enlarges the spatial extent used by the CNN to detect the skeleton, through the use of skeleton context flux. This allows our approach to capture larger object parts.
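A plausible construction of such a "context flux" ground truth: for every pixel within some distance of the skeleton, store the unit vector pointing toward its nearest skeleton pixel. The sketch below builds such a field with a distance transform; the toy skeleton and context radius are hypothetical choices, not the paper's exact recipe.

```python
# A minimal sketch of a skeleton "context flux" ground-truth field,
# assuming scipy/numpy.
import numpy as np
from scipy import ndimage

skeleton = np.zeros((64, 64), dtype=bool)
skeleton[32, 8:56] = True                # hypothetical skeleton mask

# For each pixel: distance to, and indices of, the nearest skeleton pixel.
dist, (nearest_r, nearest_c) = ndimage.distance_transform_edt(
    ~skeleton, return_indices=True)

rows, cols = np.indices(skeleton.shape)
flux = np.stack([nearest_r - rows, nearest_c - cols]).astype(float)
norm = np.maximum(np.linalg.norm(flux, axis=0), 1e-8)
flux /= norm                             # unit vectors pointing at the skeleton

context_radius = 7                       # hypothetical context band width
flux[:, dist > context_radius] = 0.0     # zero flux outside the context region
```

At test time, the inverse reasoning applies: pixels that many context vectors point to are recovered as skeleton points, which is the simple post-processing step referred to above.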
|
{
"cite_N": [
"@cite_14",
"@cite_48",
"@cite_39",
"@cite_40",
"@cite_47",
"@cite_13"
],
"mid": [
"2765819532",
"2883596444",
"2518902831",
"",
"2952438563",
""
],
"abstract": [
"Extracting skeletons from natural images is a challenging problem, due to complex backgrounds in the scene and various scales of objects. To address this problem, we propose a two-stream fully convolutional neural network which uses the original image and its corresponding semantic segmentation probability map as inputs and predicts the skeleton map using merged multi-scale features. We find that the semantic segmentation probability map is complementary to the corresponding color image and can boost the performance of our baseline model which trained only on color images. We conduct experiments on SK-LARGE dataset and the F-measure of our method on validation set is 0.738 which outperforms current state-of-the-art significantly and demonstrates the effectiveness of our proposed approach.",
"Robust object skeleton detection requires to explore rich representative visual features and effective feature fusion strategies. In this paper, we first re-visit the implementation of HED, the essential principle of which can be ideally described with a linear reconstruction model. Hinted by this, we formalize a Linear Span framework, and propose Linear Span Network (LSN) which introduces Linear Span Units (LSUs) to minimizes the reconstruction error. LSN further utilizes subspace linear span besides the feature linear span to increase the independence of convolutional features and the efficiency of feature integration, which enhances the capability of fitting complex ground-truth. As a result, LSN can effectively suppress the cluttered backgrounds and reconstruct object skeletons. Experimental results validate the state-of-the-art performance of the proposed LSN.",
"Object skeletons are useful for object representation and object detection. They are complementary to the object contour, and provide extra information, such as how object scale (thickness) varies among object parts. But object skeleton extraction from natural images is very challenging, because it requires the extractor to be able to capture both local and non-local image context in order to determine the scale of each skeleton pixel. In this paper, we present a novel fully convolutional network with multiple scale-associated side outputs to address this problem. By observing the relationship between the receptive field sizes of the different layers in the network and the skeleton scales they can capture, we introduce two scale-associated side outputs to each stage of the network. The network is trained by multi-task learning, where one task is skeleton localization to classify whether a pixel is a skeleton pixel or not, and the other is skeleton scale prediction to regress the scale of each skeleton pixel. Supervision is imposed at different stages by guiding the scale-associated side outputs toward the ground-truth skeletons at the appropriate scales. The responses of the multiple scale-associated side outputs are then fused in a scale-specific way to detect skeleton pixels using multiple scales effectively. Our method achieves promising results on two skeleton extraction datasets, and significantly outperforms other competitors. In addition, the usefulness of the obtained skeletons and scales (thickness) are verified on two object detection applications: foreground object segmentation and object proposal detection.",
"",
"In this paper, we establish a baseline for object symmetry detection in complex backgrounds by presenting a new benchmark and an end-to-end deep learning approach, opening up a promising direction for symmetry detection in the wild. The new benchmark, named Sym-PASCAL, spans challenges including object diversity, multi-objects, part-invisibility, and various complex backgrounds that are far beyond those in existing datasets. The proposed symmetry detection approach, named Side-output Residual Network (SRN), leverages output Residual Units (RUs) to fit the errors between the object symmetry groundtruth and the outputs of RUs. By stacking RUs in a deep-to-shallow manner, SRN exploits the 'flow' of errors among multiple scales to ease the problems of fitting complex outputs with limited layers, suppressing the complex backgrounds, and effectively matching object symmetry of different scales. Experimental results validate both the benchmark and its challenging aspects related to realworld images, and the state-of-the-art performance of our symmetry detection approach. The benchmark and the code for SRN are publicly available at this https URL.",
""
]
}
|
1811.12597
|
2903089869
|
The advent of isogeometric analysis has prompted a need for methods to generate Trivariate B-spline Solids (TBS) with positive Jacobian. However, it is difficult to guarantee a positive Jacobian of a TBS since the geometric pre-condition for ensuring the positive Jacobian is very complicated. In this paper, we propose a method for generating TBSs with guaranteed positive Jacobian. For the study, we used a tetrahedral (tet) mesh model and segmented it into sub-volumes using the pillow operation. Then, to reduce the difficulty in ensuring a positive Jacobian, we separately fitted the boundary curves and surfaces and the sub-volumes using a geometric iterative fitting algorithm. Finally, the smoothness between adjacent TBSs is improved. The experimental examples presented in this paper demonstrate the effectiveness and efficiency of the developed algorithm.
|
To analyze arterial blood flow by IGA, a trivariate NURBS solid modeling the artery was constructed using a skeleton-based method. Following volume parameterization by a harmonic function, a cylinder-like trivariate B-spline solid was generated with a singular centric curve @cite_5 . Moreover, trivariate B-spline solids with positive Jacobian values have been produced from boundary representations using optimization-based approaches. However, the optimization method may fail when the objective function is highly nonlinear. Based on given boundary conditions and guiding curves, a NURBS solid was constructed to model a swept volume by a variational approach.
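The recurring validity requirement here, a positive Jacobian, can be checked numerically by sampling. The sketch below evaluates the Jacobian determinant of a simple trilinear volume map on a parameter grid; it is a toy stand-in for a full trivariate B-spline evaluation, and the control lattice and perturbation are hypothetical.

```python
# A minimal sketch of sampled Jacobian-positivity checking for a
# trivariate (here: trilinear) volume map, assuming numpy.
import numpy as np

def volume_map(u, v, w, C):
    """Trilinear map from [0,1]^3 to R^3 with a 2x2x2 control lattice C."""
    pts = 0.0
    for i in range(2):
        for j in range(2):
            for k in range(2):
                b = ((1 - u) if i == 0 else u) * \
                    ((1 - v) if j == 0 else v) * \
                    ((1 - w) if k == 0 else w)
                pts = pts + b[..., None] * C[i, j, k]
    return pts

C = np.array(np.meshgrid([0, 1], [0, 1], [0, 1], indexing="ij"))
C = np.moveaxis(C, 0, -1).astype(float)          # unit-cube control lattice
C[1, 1, 1] += 0.2                                # hypothetical perturbation

u, v, w = np.meshgrid(*(np.linspace(0.01, 0.99, 20),) * 3, indexing="ij")
eps = 1e-4
# Finite-difference columns of the Jacobian, then its determinant.
Ju = (volume_map(u + eps, v, w, C) - volume_map(u - eps, v, w, C)) / (2 * eps)
Jv = (volume_map(u, v + eps, w, C) - volume_map(u, v - eps, w, C)) / (2 * eps)
Jw = (volume_map(u, v, w + eps, C) - volume_map(u, v, w - eps, C)) / (2 * eps)
detJ = np.linalg.det(np.stack([Ju, Jv, Jw], axis=-1))
print("min det J over samples:", detJ.min())     # should stay > 0 for validity
```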
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2054892990"
],
"abstract": [
"In this paper, we present a novel algorithm for constructing a volumetric T-spline from B-reps inspired by constructive solid geometry Boolean operations. By solving a harmonic field with proper boundary conditions, the input surface is automatically decomposed into regions that are classified into two groups represented, topologically, by either a cube or a torus. We perform two Boolean operations (union and difference) with the primitives and convert them into polycubes through parametric mapping. With these polycubes, octree subdivision is carried out to obtain a volumetric T-mesh, and sharp features detected from the input model are also preserved. An optimization is then performed to improve the quality of the volumetric T-spline. The obtained T-spline surface is C 2 everywhere except the local region surrounding irregular nodes, where the surface continuity is elevated from C 0 to G 1. Finally, we extract trivariate Bezier elements from the volumetric T-spline and use them directly in isogeometric analysis."
]
}
|
1811.12597
|
2903089869
|
The advent of isogeometric analysis has prompted a need for methods to generate Trivariate B-spline Solids (TBS) with positive Jacobian. However, it is difficult to guarantee a positive Jacobian of a TBS since the geometric pre-condition for ensuring the positive Jacobian is very complicated. In this paper, we propose a method for generating TBSs with guaranteed positive Jacobian. For the study, we used a tetrahedral (tet) mesh model and segmented it into sub-volumes using the pillow operation. Then, to reduce the difficulty in ensuring a positive Jacobian, we separately fitted the boundary curves and surfaces and the sub-volumes using a geometric iterative fitting algorithm. Finally, the smoothness between adjacent TBSs is improved. The experimental examples presented in this paper demonstrate the effectiveness and efficiency of the developed algorithm.
|
The methods described above usually generate a trivariate solid to fill a given B-rep model. However, generating a TBS by fitting a tet mesh model is much easier than filling a B-rep model, because it is very easy to produce a tet mesh using popular software such as TetGen @cite_4 and NetGen @cite_6 . Hence, it is feasible to generate a TBS by fitting a tet mesh model. In Ref. , a tet mesh model is fitted by the geometric iterative method to generate a TBS; however, some regions close to the boundary have negative Jacobian values. In this paper, a tet mesh is first segmented into seven sub-volumes, each of which is fitted with a TBS by a geometric iterative fitting method. In this way, the generated TBSs are ensured to be valid, i.e., the Jacobian value at any point of the TBSs is positive.
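The geometric iterative fitting mentioned above is commonly realized as progressive iteration approximation (PIA): each control point is repeatedly displaced by the residual between its target data point and the current spline value. The sketch below shows PIA for a univariate B-spline curve under hypothetical knots, parameters, and data; the trivariate case applies the same update tensor-product-wise, and convergence relies on a nonsingular collocation matrix.

```python
# A minimal sketch of progressive iteration approximation (PIA) for
# B-spline curve fitting, assuming scipy/numpy.
import numpy as np
from scipy.interpolate import BSpline

k = 3
n = 12                                     # number of control points
t = np.concatenate([np.zeros(k), np.linspace(0, 1, n - k + 1), np.ones(k)])

params = np.linspace(0, 1, n)              # one parameter per data point
targets = np.stack([params, np.sin(2 * np.pi * params)], axis=1)  # toy data

ctrl = targets.copy()                      # PIA starts from the data itself
for _ in range(50):
    curve = BSpline(t, ctrl, k)(params)    # evaluate current fit at the params
    ctrl = ctrl + (targets - curve)        # move control points by the residual

err = np.linalg.norm(BSpline(t, ctrl, k)(params) - targets, axis=1).max()
print(f"max fitting error after PIA: {err:.2e}")
```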
|
{
"cite_N": [
"@cite_4",
"@cite_6"
],
"mid": [
"1973938346",
"2003238405"
],
"abstract": [
"In this paper, we study the progressive iteration approximation property of a curve (tensor product surface) generated by blending a given data point set and a set of basis functions. The curve (tensor product surface) has the progressive iteration approximation property as long as the basis is totally positive and the corresponding collocation matrix is nonsingular. Thus, the B-spline and NURBS curve (surface) have the progressive iteration approximation property, and Bezier curve (surface) also has the property if the corresponding collocation matrix is nonsingular.",
"A dual channel time domain reflectometer includes a pair of input lines connected at respective nodes to a reference flat current pulse generator and a traveling wave sampling gate, respectively. The sampling gates are actuated by a balanced strobe generator which includes a waveguide coupler for coupling a high amplitude fast rise time pulse to each of the gates simultaneously. Pulses of requisite amplitude and shape are generated by a circuit responsive to a strobe trigger input which drives a step recovery diode."
]
}
|
1811.12599
|
2971953368
|
In isogeometric analysis, it is frequently required to handle the geometric models enclosed by four-sided or non-four-sided boundary patches, such as trimmed surfaces. In this paper, we develop a Gregory solid based method to parameterize those models. First, we extend the Gregory patch representation to the trivariate Gregory solid representation. Second, the trivariate Gregory solid representation is employed to interpolate the boundary patches of a geometric model, thus generating the polyhedral volume parametrization. To improve the regularity of the polyhedral volume parametrization, we formulate the construction of the trivariate Gregory solid as a sparse optimization problem, where the optimization objective function is a linear combination of some terms, including a sparse term aiming to reduce the negative Jacobian area of the Gregory solid. Then, the alternating direction method of multipliers (ADMM) is used to solve the sparse optimization problem. Lots of experimental examples illustrated in this paper demonstrate the effectiveness and efficiency of the developed method.
|
Triangular mesh parametrization is a commonly employed technique in curve and surface fitting @cite_24 , texture mapping @cite_17 , remeshing @cite_10 , and so on. A triangular mesh parametrization constructs a bijective mapping from a mesh in three dimensions to a planar domain. Depending on the requirements of the application, the frequently used mappings in mesh parametrization include discrete harmonic mapping @cite_24 , discrete equiareal mapping @cite_25 , and discrete conformal mapping @cite_22 . For more details on triangular mesh parametrization methods and their applications, please refer to @cite_11 @cite_16 .
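Discrete harmonic (Tutte-style) parametrization reduces to a linear solve: pin the boundary vertices to a convex shape and require every interior vertex to be the average of its neighbours. A minimal sketch with a hypothetical toy connectivity is shown below; real meshes would use sparse solvers and cotangent weights.

```python
# A minimal sketch of discrete harmonic (uniform-weight Tutte)
# parametrization, assuming numpy.
import numpy as np

# Hypothetical toy connectivity: a square boundary around one interior vertex.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]
n, boundary = 5, [0, 1, 2, 3]             # vertex 4 is interior

# Pin boundary vertices to the unit circle (any convex shape works).
uv = np.zeros((n, 2))
for i, b in enumerate(boundary):
    a = 2 * np.pi * i / len(boundary)
    uv[b] = [np.cos(a), np.sin(a)]

# Uniform-weight graph Laplacian.
L = np.zeros((n, n))
for i, j in edges:
    L[i, j] -= 1; L[j, i] -= 1
    L[i, i] += 1; L[j, j] += 1

# Interior vertices solve L_II x = -L_IB uv_B (each coordinate separately).
interior = [v for v in range(n) if v not in boundary]
A = L[np.ix_(interior, interior)]
rhs = -L[np.ix_(interior, boundary)] @ uv[boundary]
uv[interior] = np.linalg.solve(A, rhs)
print(uv)                                  # the interior vertex lands at the centroid
```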
|
{
"cite_N": [
"@cite_11",
"@cite_22",
"@cite_24",
"@cite_16",
"@cite_10",
"@cite_25",
"@cite_17"
],
"mid": [
"1621614599",
"",
"2111501452",
"2064233581",
"120859022",
"",
"2097246665"
],
"abstract": [
"This paper provides a tutorial and survey of methods for parameterizing surfaces with a view to applications in geometric modelling and computer graphics. We gather various concepts from differential geometry which are relevant to surface mapping and use them to understand the strengths and weaknesses of the many methods for parameterizing piecewise linear surfaces and their relationship to one another.",
"",
"A method based on graph theory is investigated for creating global parametrizations for surface triangulations for the purpose of smooth surface fitting. The parametrizations, which are planar triangulations, are the solutions of linear systems based on convex combinations. A particular parametrization, called shape-preserving, is found to lead to visually smooth surface approximations.",
"We present a survey of recent methods for creating piecewise linear mappings between triangulations in 3D and simpler domains such as planar regions, simplicial complexes, and spheres. We also discuss emerging tools such as global parameterization, inter-surface mapping, and parameterization with constraints. We start by describing the wide range of applications where parameterization tools have been used in recent years. We then briefly review the pertinent mathematical background and terminology, before proceeding to survey the existing parameterization techniques. Our survey summarizes the main ideas of each technique and discusses its main properties, comparing it to other methods available. Thus it aims to provide guidance to researchers and developers when assessing the suitability of different methods for various applications. This survey focuses on the practical aspects of the methods available, such as time complexity and robustness and shows multiple examples of parameterizations generated using different methods, allowing the reader to visually evaluate and compare the results.",
"Remeshing is a key component of many geometric algorithms, including modeling, editing, animation and simulation. As such, the rapidly developing field of geometry processing has produced a profusion of new remeshing techniques over the past few years. In this paper we survey recent developments in remeshing of surfaces, focusing mainly on graphics applications. We classify the techniques into five categories based on their end goal: structured, compatible, high quality, feature and error-driven remeshing. We limit our description to the main ideas and intuition behind each technique, and a brief comparison between some of the techniques. We also list some open questions and directions for future research.",
"",
"Given an arbitrary mesh, we present a method to construct a progressive mesh (PM) such that all meshes in the PM sequence share a common texture parametrization. Our method considers two important goals simultaneously. It minimizes texture stretch (small texture distances mapped onto large surface distances) to balance sampling rates over all locations and directions on the surface. It also minimizes texture deviation (“slippage” error based on parametric correspondence) to obtain accurate textured mesh approximations. The method begins by partitioning the mesh into charts using planarity and compactness heuristics. It creates a stretch-minimizing parametrization within each chart, and resizes the charts based on the resulting stretch. Next, it simplifies the mesh while respecting the chart boundaries. The parametrization is re-optimized to reduce both stretch and deviation over the whole PM sequence. Finally, the charts are packed into a texture atlas. We demonstrate using such atlases to sample color and normal maps over several models."
]
}
|
1811.12599
|
2971953368
|
In isogeometric analysis, it is frequently required to handle the geometric models enclosed by four-sided or non-four-sided boundary patches, such as trimmed surfaces. In this paper, we develop a Gregory solid based method to parameterize those models. First, we extend the Gregory patch representation to the trivariate Gregory solid representation. Second, the trivariate Gregory solid representation is employed to interpolate the boundary patches of a geometric model, thus generating the polyhedral volume parametrization. To improve the regularity of the polyhedral volume parametrization, we formulate the construction of the trivariate Gregory solid as a sparse optimization problem, where the optimization objective function is a linear combination of some terms, including a sparse term aiming to reduce the negative Jacobian area of the Gregory solid. Then, the alternating direction method of multipliers (ADMM) is used to solve the sparse optimization problem. Lots of experimental examples illustrated in this paper demonstrate the effectiveness and efficiency of the developed method.
|
In this paper, we developed the representation of the trivariate Gregory solid and employed it to fill models enclosed by boundary patches, thus generating the polyhedral volume parametrization of the input models. The Gregory patch @cite_8 @cite_18 arose from Gregory's method @cite_4 , which produces the eight inner control points from the four boundary edges and four corner points, one pair per corner. The four pairs of inner control points are then blended so that the generated patch interpolates the boundary straight line segments. Similarly, a triangular Gregory patch can be constructed using the method proposed in @cite_12 . Moreover, @cite_28 defined the Gregory patch as a mapping from an @math -sided parametric domain with straight line boundaries to an @math -sided parametric domain of a trimmed surface, on which non-self-overlapping structured grids can be generated, as well as on the trimmed surface.
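The corner blend that characterizes Gregory patches can be sketched in a few lines. Under one common convention, the interior control point near the (0,0) corner is a rational, parameter-dependent combination of its two twist candidates, so cross-boundary derivative compatibility is not required. The candidate values below are illustrative only, not taken from any cited construction.

import numpy as np

def gregory_corner_blend(u, v, p_u, p_v):
    # Rational blend of the two twist candidates at the (0,0) corner:
    # near the u = 0 boundary the p_v candidate dominates, and vice versa.
    eps = 1e-12
    return (u * p_u + v * p_v) / (u + v + eps)

# Two hypothetical twist candidates for one corner, one per boundary curve.
p_u = np.array([1.0, 0.2, 0.0])  # candidate consistent with the u-boundary
p_v = np.array([1.0, 0.0, 0.3])  # candidate consistent with the v-boundary
for (u, v) in [(0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]:
    print((u, v), gregory_corner_blend(u, v, p_u, p_v))

The same blend, applied per corner in each parametric direction of a trivariate box, is what the Gregory solid extension generalizes.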
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_8",
"@cite_28",
"@cite_12"
],
"mid": [
"1904788812",
"1559488781",
"2160676410",
"2167150269",
"2090480945"
],
"abstract": [
"A surface interpolation method for irregular meshes of curves is proposed. When a face in the mesh is interpolated, a surface patch on the face is individually generated from localized boundary information. The tangent planes of generated patches are continuous. This method employs an extended Bezier patch as a surface equation. The patch can be defined, specifying independently normal derivatives along the boundary curves. The total procedure of generating patches is simple and quick.",
"Publisher Summary Smooth or blending function interpolants, which match a given function and slopes on the boundary of a rectangle or a triangle, usually require that the cross derivative or twist terms be defined unambiguously at vertices. Smooth interpolation schemes that avoid such restrictions could be useful for the piecewise generation of surfaces in computer-aided geometric design. This chapter discusses two such schemes—one over a rectangle and the other over a triangle. This chapter describes a new interpolation scheme for the triangle. This scheme has a relatively simple construction, it is symmetric in that each side of the triangle is treated in the same way, and it involves no compatibility constraints. The set of polynomials that are reproduced by an interpolation scheme is defined as the precision set of the interpolant.",
"We propose a unified method of generating a wide range of three dimensional objects from polyhedra to solids with free-form surfaces. Modeling systems for polyhedra and systems for free-form surfaces have been developed independently in the past because of the difference in their underlying theory and practices. However, this is not desirable for a designer. So in this paper, we have shown a method in which a wide range of shapes are generated in one system by using local modifications. Local modifications are procedures used to change the shape of solids locally. The construction and the modification of three dimensional shapes by these procedures are natural and easy for a designer in many cases. The implementation of these procedures in a computer is easy and their execution does not require much time. Our method to construct a solid with free-form surfaces consists of following three phases. 1) A solid which serves as a basis of free-form shape design is generated by local modifications. Edges of this solid are straight lines but its faces are not necessarily flat planes. 2) From this model, a curve model which adequately represents the characteristics of a free-form shape is generated. 3) Surface equations interpolating over the curve model are generated. We have made a geometric modeling system MODIF. Using this system, a complicated solid with free-form surfaces can be designed easily. MODIF can generate color shaded pictures and cutter path data for making a real object model by NC machining tool.",
"Most existing meshing algorithms for a 2D or shell figure requires the figure to have exactly four sides. Generating structured grids in the n-sided parametric region of a trimmed surface thus usually requires to first partition the region into four-sided sub-regions. We address the automatic structured grid generation problem in an n-sided region by fitting a planar Gregory patch so that the partition requirement is naturally avoided. However, self-overlapping may occur in some portions of the algebraically generated grid; this severely limits its usage in most of engineering and scientific applications where a grid system with no self-intersecting is strictly required. To solve the problem, we use a functional optimization approach to move grid nodes in the u−v domain of the trimmed surface to eliminate the self-overlapping. The derivatives of a Gregory patch, which are extremely difficult to compute analytically, are not required in our method. Thus, our optimization algorithm compares favourably at least in terms of speed with some other mesh optimization algorithms, such as the elliptic PDE method. In addition, to overcome the difficulty of guessing a good initial position of every grid node for the conjugate gradient method, a progressive optimization algorithm is incorporated in our optimization. Experiment results are given to illustrate the usefulness and effectiveness of the presented method. Copyright © 2004 John Wiley & Sons, Ltd.",
"We describe the theoretical frame for a method of creating and describing rounded objects of arbitrary topology in CAD, and its implementation for UNIGRAFIX, a polygon-based modeler developed at UC Berkeley that generates black-and-white, smooth-shaded images on several output devices. The mathematical foundation for building triangular patches interpolating cubic edges and blending with geometric continuity is given, and various approaches are discussed. To represent curvature information, we extended the UNIGRAFIX language to UniCubix, and we implemented uci, an interactive shell that interprets a UniCubix description and converts it into UNIGRAFIX wireframes or polyhedral nets that approximate curved patches. Uci also provides a prototype of a global smoothing operation, that takes a polyhedral object of arbitrary topology and creates the UniCubix representation of a smooth object interpolating the input vertices."
]
}
|
1811.12326
|
2903186170
|
The goal of data selection is to capture the most structural information from a set of data. This paper presents a fast and accurate data selection method, in which the selected samples are optimized to span the subspace of all data. We propose a new selection algorithm, referred to as iterative projection and matching (IPM), with linear complexity w.r.t. the number of data, and without any parameter to be tuned. In our algorithm, at each iteration, the maximum information from the structure of the data is captured by one selected sample, and the captured information is neglected in the next iterations by projection on the null-space of previously selected samples. The computational efficiency and the selection accuracy of our proposed algorithm outperform those of the conventional methods. Furthermore, the superiority of the proposed algorithm is shown on active learning for video action recognition dataset on UCF-101; learning using representatives on ImageNet; training a generative adversarial network (GAN) to generate multi-view images from a single-view input on CMU Multi-PIE dataset; and video summarization on UTE Egocentric dataset.
|
A method for sampling from a set of data was proposed by Elhamifar et al., based on sparse modeling representative selection (SMRS) @cite_20 . Their cost function for data selection is the error of projecting all the data onto the subspace spanned by the selected data. Mathematically, the optimization problem in @cite_20 is formulated as a sparse multiple measurement vector problem, which is NP-hard. Their main contribution is solving this problem via convex relaxation. However, there is no guarantee that convex relaxation provides the best approximation for an NP-hard problem. In this paper, we propose a new fast algorithm for solving the original selection problem.
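The greedy select-then-project loop of iterative projection and matching (IPM), as described in the abstract above, can be sketched as follows. This is an illustrative reconstruction from that description, not the authors' code, and the exact matching criterion may differ from the paper's.

import numpy as np

def ipm_select(X, k):
    # X: d x n data matrix (columns are samples); select k column indices.
    R = X.astype(float).copy()
    selected = []
    for _ in range(k):
        # Match: pick the column whose direction best spans the residual data.
        norms = np.linalg.norm(R, axis=0) + 1e-12
        U = R / norms
        scores = np.linalg.norm(U.T @ R, axis=1)  # alignment with all data
        j = int(np.argmax(scores))
        selected.append(j)
        # Project: remove the chosen direction from every residual column,
        # so its information is neglected in later iterations.
        u = R[:, j] / (np.linalg.norm(R[:, j]) + 1e-12)
        R = R - np.outer(u, u @ R)
    return selected

X = np.random.RandomState(0).randn(16, 200)
print(ipm_select(X, 5))

Each iteration costs one matrix product and one rank-one update, which is where the linear complexity in the number of samples comes from.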
|
{
"cite_N": [
"@cite_20"
],
"mid": [
"1966872876"
],
"abstract": [
"We consider the problem of finding a few representatives for a dataset, i.e., a subset of data points that efficiently describes the entire dataset. We assume that each data point can be expressed as a linear combination of the representatives and formulate the problem of finding the representatives as a sparse multiple measurement vector problem. In our formulation, both the dictionary and the measurements are given by the data matrix, and the unknown sparse codes select the representatives via convex optimization. In general, we do not assume that the data are low-rank or distributed around cluster centers. When the data do come from a collection of low-rank models, we show that our method automatically selects a few representatives from each low-rank model. We also analyze the geometry of the representatives and discuss their relationship to the vertices of the convex hull of the data. We show that our framework can be extended to detect and reject outliers in datasets, and to efficiently deal with new observations and large datasets. The proposed framework and theoretical foundations are illustrated with examples in video summarization and image classification using representatives."
]
}
|
1811.12506
|
2902917913
|
We propose a novel framework, uncertainty-aware multi-view co-training (UMCT), to address semi-supervised learning on 3D data, such as volumetric data in medical imaging. The original co-training method was applied to non-visual data. It requires different sources, or representations, of the data, which are called different views and differ from viewpoint in computer vision. Co-training was recently applied to visual tasks where the views were deep networks learnt by adversarial training. In our work, targeted at 3D data, co-training is achieved by exploiting multi-viewpoint consistency. We generate different views by rotating the 3D data and utilize asymmetrical 3D kernels to further encourage diversified features of each sub-net. In addition, we propose an uncertainty-aware attention mechanism to estimate the reliability of each view prediction with Bayesian deep learning. As one view requires the supervision from other views in co-training, our self-adaptive approach computes a confidence score for the prediction of each unlabeled sample, in order to assign a reliable pseudo label and thus achieve better performance. We show the effectiveness of our proposed method on several open datasets from medical image segmentation tasks (NIH pancreas & LiTS liver tumor dataset). A method based on our approach achieved the state-of-the-art performances on both the LiTS liver tumor segmentation and the Medical Segmentation Decathlon (MSD) challenge, demonstrating the robustness and value of our framework even when fully supervised training is feasible.
|
Semi-supervised learning approaches aim at learning models from limited labeled data and a large proportion of unlabeled data @cite_40 @cite_34 @cite_3 @cite_31 . Emerging semi-supervised approaches have been successfully applied to image recognition using deep neural networks @cite_33 @cite_12 @cite_15 @cite_27 @cite_0 . These algorithms are mostly based on adding regularization terms that train networks to be resistant to specific perturbations of the input or of the network. A recent approach @cite_36 extended the co-training strategy to 2D deep networks and multiple views, using adversarial examples to encourage view differences and boost performance. Tri-Net @cite_7 trains a three-branch network whose branches supervise each other, which can also be viewed as a multi-view learning @cite_2 approach that encourages view differences through classifiers of diverse structures.
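A minimal sketch of the consistency-regularization idea shared by several of these methods: two stochastic forward passes over the same unlabeled sample are penalized for disagreeing, and this penalty is added to the supervised loss. The linear "network" and noise model below are toy stand-ins, not any cited architecture.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3))  # toy linear "network" with 3 classes

def noisy_forward(x):
    # Stochastic pass: input perturbation plus dropout-style masking.
    x = x + 0.05 * rng.normal(size=x.shape)
    h = (x @ W) * (rng.random(W.shape[1]) > 0.2)
    e = np.exp(h - h.max())
    return e / e.sum()  # softmax probabilities

def consistency_loss(x):
    # Penalize disagreement between two stochastic predictions.
    p1, p2 = noisy_forward(x), noisy_forward(x)
    return float(np.mean((p1 - p2) ** 2))

x = rng.normal(size=10)      # an unlabeled sample
print(consistency_loss(x))   # added to the supervised loss on labeled data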
|
{
"cite_N": [
"@cite_33",
"@cite_7",
"@cite_15",
"@cite_36",
"@cite_3",
"@cite_0",
"@cite_40",
"@cite_27",
"@cite_2",
"@cite_31",
"@cite_34",
"@cite_12"
],
"mid": [
"",
"",
"",
"2962804657",
"",
"2431080869",
"2048679005",
"",
"1670132599",
"",
"",
"2952229419"
],
"abstract": [
"",
"",
"",
"In this paper, we study the problem of semi-supervised image recognition, which is to learn classifiers using both labeled and unlabeled images. We present Deep Co-Training, a deep learning based method inspired by the Co-Training framework. The original Co-Training learns two classifiers on two views which are data from different sources that describe the same instances. To extend this concept to deep learning, Deep Co-Training trains multiple deep neural networks to be the different views and exploits adversarial examples to encourage view difference, in order to prevent the networks from collapsing into each other. As a result, the co-trained networks provide different and complementary information about the data, which is necessary for the Co-Training framework to achieve good results. We test our method on SVHN, CIFAR-10 100 and ImageNet datasets, and our method outperforms the previous state-of-the-art methods by a large margin.",
"",
"Effective convolutional neural networks are trained on large sets of labeled data. However, creating large labeled datasets is a very costly and time-consuming task. Semi-supervised learning uses unlabeled data to train a model with higher accuracy when there is a limited set of labeled data available. In this paper, we consider the problem of semi-supervised learning with convolutional neural networks. Techniques such as randomized data augmentation, dropout and random max-pooling provide better generalization and stability for classifiers that are trained using gradient descent. Multiple passes of an individual sample through the network might lead to different predictions due to the non-deterministic behavior of these techniques. We propose an unsupervised loss function that takes advantage of the stochastic nature of these methods and minimizes the difference between the predictions of multiple passes of a training sample through the network. We evaluate the proposed method on several benchmark datasets.",
"We consider the problem of using a large unlabeled sample to boost performance of a learning algorit,hrn when only a small set of labeled examples is available. In particular, we consider a problem setting motivated by the task of learning to classify web pages, in which the description of each example can be partitioned into two distinct views. For example, the description of a web page can be partitioned into the words occurring on that page, and the words occurring in hyperlinks t,hat point to that page. We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment, a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm’s predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice. *This research was supported in part by the DARPA HPKB program under contract F30602-97-1-0215 and by NSF National Young investigator grant CCR-9357793. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. TO copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and or a fee. COLT 98 Madison WI USA Copyright ACM 1998 l-58113-057--0 98 7... 5.00 92 Tom Mitchell School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213-3891 mitchell+@cs.cmu.edu",
"",
"In recent years, a great many methods of learning from multi-view data by considering the diversity of different views have been proposed. These views may be obtained from multiple sources or different feature subsets. In trying to organize and highlight similarities and differences between the variety of multi-view learning approaches, we review a number of representative multi-view learning algorithms in different areas and classify them into three groups: 1) co-training, 2) multiple kernel learning, and 3) subspace learning. Notably, co-training style algorithms train alternately to maximize the mutual agreement on two distinct views of the data; multiple kernel learning algorithms exploit kernels that naturally correspond to different views and combine kernels either linearly or non-linearly to improve learning performance; and subspace learning algorithms aim to obtain a latent subspace shared by multiple views by assuming that the input views are generated from this latent subspace. Though there is significant variance in the approaches to integrating multiple views to improve learning performance, they mainly exploit either the consensus principle or the complementary principle to ensure the success of multi-view learning. Since accessing multiple views is the fundament of multi-view learning, with the exception of study on learning a model from multiple views, it is also valuable to study how to construct multiple views and how to evaluate these views. Overall, by exploring the consistency and complementary properties of different views, multi-view learning is rendered more effective, more promising, and has better generalization ability than single-view learning.",
"",
"",
"We combine supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pre-training. Our work builds on the Ladder network proposed by Valpola (2015), which we extend by combining the model with supervision. We show that the resulting model reaches state-of-the-art performance in semi-supervised MNIST and CIFAR-10 classification, in addition to permutation-invariant MNIST classification with all labels."
]
}
|
1811.12506
|
2902917913
|
We propose a novel framework, uncertainty-aware multi-view co-training (UMCT), to address semi-supervised learning on 3D data, such as volumetric data in medical imaging. The original co-training method was applied to non-visual data. It requires different sources, or representations, of the data, which are called different views and differ from viewpoint in computer vision. Co-training was recently applied to visual tasks where the views were deep networks learnt by adversarial training. In our work, targeted at 3D data, co-training is achieved by exploiting multi-viewpoint consistency. We generate different views by rotating the 3D data and utilize asymmetrical 3D kernels to further encourage diversified features of each sub-net. In addition, we propose an uncertainty-aware attention mechanism to estimate the reliability of each view prediction with Bayesian deep learning. As one view requires the supervision from other views in co-training, our self-adaptive approach computes a confidence score for the prediction of each unlabeled sample, in order to assign a reliable pseudo label and thus achieve better performance. We show the effectiveness of our proposed method on several open datasets from medical image segmentation tasks (NIH pancreas & LiTS liver tumor dataset). A method based on our approach achieved the state-of-the-art performances on both the LiTS liver tumor segmentation and the Medical Segmentation Decathlon (MSD) challenge, demonstrating the robustness and value of our framework even when fully supervised training is feasible.
|
@cite_4 mentioned that current semi-supervised medical image analysis methods fall into three types: self-training (teacher-student models), co-training (with hand-crafted features), and graph-based approaches (mostly applications of graph-cut optimization). @cite_17 introduced a deep-network-based self-training framework with conditional random field (CRF) based iterative refinements for medical image segmentation. @cite_19 trained three 2D networks on the three planar slices of the 3D data and fused them in each self-training iteration to obtain a stronger student model. @cite_23 extended the self-ensembling @math model @cite_33 with 90-degree rotations, making the network rotation-invariant. GAN-based approaches have also become popular recently for medical imaging @cite_5 @cite_21 @cite_16 .
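The self-training pattern these works share reduces to confidence-filtered pseudo-labeling, sketched below with hypothetical teacher outputs: only unlabeled samples whose teacher prediction clears a confidence threshold receive pseudo labels and join the student's training set.

import numpy as np

def pseudo_label(probs, threshold=0.9):
    # probs: n x c teacher softmax outputs on unlabeled data.
    # Keep only predictions whose max probability clears the threshold.
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    keep = conf >= threshold
    return labels[keep], np.nonzero(keep)[0]

probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.10, 0.90]])
labels, idx = pseudo_label(probs)
print(idx, labels)  # the retained samples are added to the student's training set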
|
{
"cite_N": [
"@cite_4",
"@cite_33",
"@cite_21",
"@cite_19",
"@cite_23",
"@cite_5",
"@cite_16",
"@cite_17"
],
"mid": [
"2964317695",
"",
"2891179298",
"2796152334",
"",
"2876381398",
"2891451067",
"2751665805"
],
"abstract": [
"Abstract Machine learning (ML) algorithms have made a tremendous impact in the field of medical imaging. While medical imaging datasets have been growing in size, a challenge for supervised ML algorithms that is frequently mentioned is the lack of annotated data. As a result, various methods that can learn with less other types of supervision, have been proposed. We give an overview of semi-supervised, multiple instance, and transfer learning in medical imaging, both in diagnosis or segmentation tasks. We also discuss connections between these learning scenarios, and opportunities for future research. A dataset with the details of the surveyed papers is available via https: figshare.com articles Database_of_surveyed_literature_in_Not-so-supervised_a_survey_of_semi-supervised_multi-instance_and_transfer_learning_in_medical_image_analysis_ 7479416 .",
"",
"We present an adversarial domain adaptation based deep learning approach for automatic tumor segmentation from T2-weighted MRI. Our approach is composed of two steps: (i) a tumor-aware unsupervised cross-domain adaptation (CT to MRI), followed by (ii) semi-supervised tumor segmentation using Unet trained with synthesized and limited number of original MRIs. We introduced a novel target specific loss, called tumor-aware loss, for unsupervised cross-domain adaptation that helps to preserve tumors on synthesized MRIs produced from CT images. In comparison, state-of-the art adversarial networks trained without our tumor-aware loss produced MRIs with ill-preserved or missing tumors. All networks were trained using labeled CT images from 377 patients with non-small cell lung cancer obtained from the Cancer Imaging Archive and unlabeled T2w MRIs from a completely unrelated cohort of 6 patients with pre-treatment and 36 on-treatment scans. Next, we combined 6 labeled pre-treatment MRI scans with the synthesized MRIs to boost tumor segmentation accuracy through semi-supervised learning. Semi-supervised training of cycle-GAN produced a segmentation accuracy of 0.66 computed using Dice Score Coefficient (DSC). Our method trained with only synthesized MRIs produced an accuracy of 0.74 while the same method trained in semi-supervised setting produced the best accuracy of 0.80 on test. Our results show that tumor-aware adversarial domain adaptation helps to achieve reasonably accurate cancer segmentation from limited MRI data by leveraging large CT datasets.",
"Multi-organ segmentation is a critical problem in medical image analysis due to its great value for computer-aided diagnosis, computer-aided surgery, and radiation therapy. Although fully-supervised segmentation methods can achieve good performance, they usually require a large amount of 3D data, such as CT scans, with voxel-wised annotations which are usually difficult, expensive, and slow to obtain. By contrast, large unannotated datasets of CT images are available. Inspired by the well-known semi-supervised learning framework co-training, we propose multi-planar co-training (MPCT), to generate more reliable pseudo-labels by enforcing consistency among multiple planes, i.e., saggital, coronal, and axial planes, of 3D unlabeled medical data, which play a vital role in our framework. Empirical results show that generating pseudo-labels by the multi-planar fusion rather than a single plane leads to a significant performance gain. We evaluate our approach on a new collected dataset and show that MPCT boosts the performance of a typical segmentation model, fully convolutional networks, by a large margin, when only a small set of labeled 3D data is available, i.e., 77.49 vs. 73.14 .",
"",
"The cardiothoracic ratio (CTR), a clinical metric of heart size in chest X-rays (CXRs), is a key indicator of cardiomegaly. Manual measurement of CTR is time-consuming and can be affected by human subjectivity, making it desirable to design computer-aided systems that assist clinicians in the diagnosis process. Automatic CTR estimation through chest organ segmentation, however, requires large amounts of pixel-level annotated data, which is often unavailable. To alleviate this problem, we propose an unsupervised domain adaptation framework based on adversarial networks. The framework learns domain invariant feature representations from openly available data sources to produce accurate chest organ segmentation for unlabeled datasets. Specifically, we propose a model that enforces our intuition that prediction masks should be domain independent. Hence, we introduce a discriminator that distinguishes segmentation predictions from ground truth masks. We evaluate our system’s prediction based on the assessment of radiologists and demonstrate the clinical practicability for the diagnosis of cardiomegaly. We finally illustrate on the JSRT dataset that the semi-supervised performance of our model is also very promising.",
"Segmentation is a key step for various medical image analysis tasks. Recently, deep neural networks could provide promising solutions for automatic image segmentation. The network training usually involves a large scale of training data with corresponding ground truth label maps. However, it is very challenging to obtain the ground-truth label maps due to the requirement of expertise knowledge and also intensive labor work. To address such challenges, we propose a novel semi-supervised deep learning framework, called “Attention based Semi-supervised Deep Networks” (ASDNet), to fulfill the segmentation tasks in an end-to-end fashion. Specifically, we propose a fully convolutional confidence network to adversarially train the segmentation network. Based on the confidence map from the confidence network, we then propose a region-attention based semi-supervised learning strategy to include the unlabeled data for training. Besides, sample attention mechanism is also explored to improve the network training. Experimental results on real clinical datasets show that our ASDNet can achieve state-of-the-art segmentation accuracy. Further analysis also indicates that our proposed network components contribute most to the improvement of performance.",
"Training a fully convolutional network for pixel-wise (or voxel-wise) image segmentation normally requires a large number of training images with corresponding ground truth label maps. However, it is a challenge to obtain such a large training set in the medical imaging domain, where expert annotations are time-consuming and difficult to obtain. In this paper, we propose a semi-supervised learning approach, in which a segmentation network is trained from both labelled and unlabelled data. The network parameters and the segmentations for the unlabelled data are alternately updated. We evaluate the method for short-axis cardiac MR image segmentation and it has demonstrated a high performance, outperforming a baseline supervised method. The mean Dice overlap metric is 0.92 for the left ventricular cavity, 0.85 for the myocardium and 0.89 for the right ventricular cavity. It also outperforms a state-of-the-art multi-atlas segmentation method by a large margin and the speed is substantially faster."
]
}
|
1811.12506
|
2902917913
|
We propose a novel framework, uncertainty-aware multi-view co-training (UMCT), to address semi-supervised learning on 3D data, such as volumetric data in medical imaging. The original co-training method was applied to non-visual data. It requires different sources, or representations, of the data, which are called different views and differ from viewpoint in computer vision. Co-training was recently applied to visual tasks where the views were deep networks learnt by adversarial training. In our work, targeted at 3D data, co-training is achieved by exploiting multi-viewpoint consistency. We generate different views by rotating the 3D data and utilize asymmetrical 3D kernels to further encourage diversified features of each sub-net. In addition, we propose an uncertainty-aware attention mechanism to estimate the reliability of each view prediction with Bayesian deep learning. As one view requires the supervision from other views in co-training, our self-adaptive approach computes a confidence score for the prediction of each unlabeled sample, in order to assign a reliable pseudo label and thus achieve better performance. We show the effectiveness of our proposed method on several open datasets from medical image segmentation tasks (NIH pancreas & LiTS liver tumor dataset). A method based on our approach achieved the state-of-the-art performances on both the LiTS liver tumor segmentation and the Medical Segmentation Decathlon (MSD) challenge, demonstrating the robustness and value of our framework even when fully supervised training is feasible.
|
2D networks and 3D networks both have their advantages and limitations. The former benefit from 2D pre-trained weights and well-studied architectures from natural image processing, while the latter better exploit 3D information through 3D convolutional kernels. @cite_22 @cite_24 use either 2D probability maps or 2D feature maps to build 3D models. @cite_38 proposed a 3D architecture that can be initialized with 2D pre-trained models. Moreover, @cite_30 @cite_29 illustrate the effectiveness of multi-view training on 2D slices, even by simply averaging multi-planar results, indicating that complementary latent information exists in the biases of 2D networks. This inspired us to jointly train 3D multi-view networks with 2D initializations using an additional loss function that encourages the networks to learn from one another.
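A toy sketch of the multi-view consistency signal described here: the volume is rotated to produce views, each per-view predictor runs on its view, predictions are rotated back to a common frame, and pairwise disagreement is penalized. The "networks" below are hypothetical voxel-wise scalings, so only the mechanics are shown.

import numpy as np

def multiview_consistency(volume, models, axes=((0, 1), (0, 2), (1, 2))):
    # Rotate the volume to generate views, predict per view, rotate each
    # prediction back to the common frame, and penalize disagreement.
    preds = []
    for model, ax in zip(models, axes):
        view = np.rot90(volume, k=1, axes=ax)
        pred = model(view)  # voxel-wise prediction, same shape as `view`
        preds.append(np.rot90(pred, k=-1, axes=ax))
    loss = 0.0
    for i in range(len(preds)):
        for j in range(i + 1, len(preds)):
            loss += np.mean((preds[i] - preds[j]) ** 2)
    return loss

# Hypothetical per-view "networks": simple voxel-wise scalings.
models = [lambda v: 0.9 * v, lambda v: 0.8 * v, lambda v: 0.85 * v]
vol = np.random.default_rng(1).random((8, 8, 8))
print(multiview_consistency(vol, models))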
|
{
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_22",
"@cite_29",
"@cite_24"
],
"mid": [
"1871050032",
"2952234052",
"2796194485",
"2618237340",
"2964227007"
],
"abstract": [
"Automated computer-aided detection (CADe) has been an important tool in clinical practice and research. State-of-the-art methods often show high sensitivities at the cost of high false-positives (FP) per patient rates. We design a two-tiered coarse-to-fine cascade framework that first operates a candidate generation system at sensitivities @math of but at high FP levels. By leveraging existing CADe systems, coordinates of regions or volumes of interest (ROI or VOI) are generated and function as input for a second tier, which is our focus in this study. In this second stage, we generate 2D (two-dimensional) or 2.5D views via sampling through scale transformations, random translations and rotations. These random views are used to train deep convolutional neural network (ConvNet) classifiers. In testing, the ConvNets assign class (e.g., lesion, pathology) probabilities for a new set of random views that are then averaged to compute a final per-candidate classification probability. This second tier behaves as a highly selective process to reject difficult false positives while preserving high sensitivities. The methods are evaluated on three data sets: 59 patients for sclerotic metastasis detection, 176 patients for lymph node detection, and 1,186 patients for colonic polyp detection. Experimental results show the ability of ConvNets to generalize well to different medical imaging CADe applications and scale elegantly to various data sets. Our proposed methods improve performance markedly in all cases. Sensitivities improved from 57 to 70 , 43 to 77 , and 58 to 75 at 3 FPs per patient for sclerotic metastases, lymph nodes and colonic polyps, respectively.",
"While deep convolutional neural networks (CNN) have been successfully applied for 2D image analysis, it is still challenging to apply them to 3D anisotropic volumes, especially when the within-slice resolution is much higher than the between-slice resolution and when the amount of 3D volumes is relatively small. On one hand, direct learning of CNN with 3D convolution kernels suffers from the lack of data and likely ends up with poor generalization; insufficient GPU memory limits the model size or representational power. On the other hand, applying 2D CNN with generalizable features to 2D slices ignores between-slice information. Coupling 2D network with LSTM to further handle the between-slice information is not optimal due to the difficulty in LSTM learning. To overcome the above challenges, we propose a 3D Anisotropic Hybrid Network (AH-Net) that transfers convolutional features learned from 2D images to 3D anisotropic volumes. Such a transfer inherits the desired strong generalization capability for within-slice information while naturally exploiting between-slice information for more effective modelling. The focal loss is further utilized for more effective end-to-end learning. We experiment with the proposed 3D AH-Net on two different medical image analysis tasks, namely lesion detection from a Digital Breast Tomosynthesis volume, and liver and liver tumor segmentation from a Computed Tomography volume and obtain the state-of-the-art results.",
"There has been a debate on whether to use 2D or 3D deep neural networks for volumetric organ segmentation. Both 2D and 3D models have their advantages and disadvantages. In this paper, we present an alternative framework, which trains 2D networks on different viewpoints for segmentation, and builds a 3D Volumetric Fusion Net (VFN) to fuse the 2D segmentation results. VFN is relatively shallow and contains much fewer parameters than most 3D networks, making our framework more efficient at integrating 3D information for segmentation. We train and test the segmentation and fusion modules individually, and propose a novel strategy, named cross-cross-augmentation, to make full use of the limited training data. We evaluate our framework on several challenging abdominal organs, and verify its superiority in segmentation accuracy and stability over existing 2D and 3D approaches.",
"Deep neural networks have been widely adopted for automatic organ segmentation from abdominal CT scans. However, the segmentation accuracy of some small organs (e.g., the pancreas) is sometimes below satisfaction, arguably because deep networks are easily disrupted by the complex and variable background regions which occupies a large fraction of the input volume. In this paper, we formulate this problem into a fixed-point model which uses a predicted segmentation mask to shrink the input region. This is motivated by the fact that a smaller input region often leads to more accurate segmentation. In the training process, we use the ground-truth annotation to generate accurate input regions and optimize network weights. On the testing stage, we fix the network parameters and update the segmentation results in an iterative manner. We evaluate our approach on the NIH pancreas segmentation dataset, and outperform the state-of-the-art by more than (4 ), measured by the average Dice-Sorensen Coefficient (DSC). In addition, we report (62.43 ) DSC in the worst case, which guarantees the reliability of our approach in clinical applications.",
"Liver cancer is one of the leading causes of cancer death. To assist doctors in hepatocellular carcinoma diagnosis and treatment planning, an accurate and automatic liver and tumor segmentation method is highly demanded in the clinical practice. Recently, fully convolutional neural networks (FCNs), including 2-D and 3-D FCNs, serve as the backbone in many volumetric image segmentation. However, 2-D convolutions cannot fully leverage the spatial information along the third dimension while 3-D convolutions suffer from high computational cost and GPU memory consumption. To address these issues, we propose a novel hybrid densely connected UNet (H-DenseUNet), which consists of a 2-D DenseUNet for efficiently extracting intra-slice features and a 3-D counterpart for hierarchically aggregating volumetric contexts under the spirit of the auto-context algorithm for liver and tumor segmentation. We formulate the learning process of the H-DenseUNet in an end-to-end manner, where the intra-slice representations and inter-slice features can be jointly optimized through a hybrid feature fusion layer. We extensively evaluated our method on the data set of the MICCAI 2017 Liver Tumor Segmentation Challenge and 3DIRCADb data set. Our method outperformed other state-of-the-arts on the segmentation results of tumors and achieved very competitive performance for liver segmentation even with a single model."
]
}
|
1811.12328
|
2903434551
|
We show how to train a fully convolutional neural network to perform inverse rendering from a single, uncontrolled image. The network takes an RGB image as input, regresses albedo and normal maps from which we compute lighting coefficients. Our network is trained using large uncontrolled image collections without ground truth. By incorporating a differentiable renderer, our network can learn from self-supervision. Since the problem is ill-posed we introduce additional supervision: 1. We learn a statistical natural illumination prior, 2. Our key insight is to perform offline multiview stereo (MVS) on images containing rich illumination variation. From the MVS pose and depth maps, we can cross project between overlapping views such that Siamese training can be used to ensure consistent estimation of photometric invariants. MVS depth also provides direct coarse supervision for normal map estimation. We believe this is the first attempt to use MVS supervision for learning inverse rendering.
|
Classical methods estimate intrinsic properties by fitting photometric or geometric models. Most methods require multiple images. From multiview images, a structure-from-motion plus multiview stereo pipeline enables recovery of dense mesh models @cite_35 @cite_5 , though illumination effects are baked into the texture. From images with a fixed viewpoint but varying illumination, photometric stereo can be applied. Variants consider statistical BRDF models @cite_55 , the use of outdoor time-lapse images @cite_38 , and spatially-varying BRDFs @cite_31 . Attempts to combine geometric and photometric methods are limited. Haber et al. @cite_27 assume known geometry (which can be provided by MVS) and inverse render reflectance and lighting from community photo collections. Kim et al. @cite_9 represent the state of the art and again use an MVS initialisation for joint optimisation of geometry, illumination and albedo. Some methods consider a single-image setting. Jeon et al. @cite_42 introduce a locally adaptive reflectance smoothness constraint for intrinsic image decomposition on texture-free input images, which are acquired with a texture separation algorithm. Barron et al. @cite_2 present SIRFS, a classical optimisation-based approach that recovers all of shape, illumination and albedo using a sophisticated combination of generic priors.
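The simplest fixed-viewpoint variant mentioned above, Lambertian photometric stereo, reduces to per-pixel least squares: stacked intensities satisfy I = L g, with g the albedo-scaled normal. A minimal sketch with a synthetic single-pixel example follows; the cited papers are of course not limited to this model.

import numpy as np

def photometric_stereo(I, L):
    # I: m x p intensities (m lights, p pixels); L: m x 3 light directions.
    # Lambertian model I = L @ (albedo * normal): per-pixel least squares.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)  # 3 x p
    albedo = np.linalg.norm(G, axis=0)
    normals = G / (albedo + 1e-12)
    return normals, albedo

rng = np.random.default_rng(0)
L = rng.normal(size=(6, 3))
L[:, 2] = np.abs(L[:, 2]) + 1.0             # keep lights in front (no shadows)
L /= np.linalg.norm(L, axis=1, keepdims=True)
n_true = np.array([[0.0], [0.0], [1.0]])    # one pixel, facing the camera
I = L @ (0.8 * n_true)                      # rendered Lambertian intensities
n_est, rho = photometric_stereo(I, L)
print(n_est.ravel(), rho)                   # recovers [0, 0, 1] and albedo 0.8

The variants cited above generalize exactly the parts this sketch fixes: the reflectance model, the known calibrated lights, and the shadow-free assumption.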
|
{
"cite_N": [
"@cite_35",
"@cite_38",
"@cite_55",
"@cite_9",
"@cite_42",
"@cite_27",
"@cite_2",
"@cite_5",
"@cite_31"
],
"mid": [
"1992642990",
"2078020442",
"2097839898",
"2520013826",
"2581345",
"1607067788",
"",
"",
"2134019950"
],
"abstract": [
"Poisson surface reconstruction creates watertight surfaces from oriented point sets. In this work we extend the technique to explicitly incorporate the points as interpolation constraints. The extension can be interpreted as a generalization of the underlying mathematical framework to a screened Poisson equation. In contrast to other image and geometry processing techniques, the screening term is defined over a sparse set of points rather than over the full domain. We show that these sparse constraints can nonetheless be integrated efficiently. Because the modified linear system retains the same finite-element discretization, the sparsity structure is unchanged, and the system can still be solved using a multigrid approach. Moreover we present several algorithmic improvements that together reduce the time complexity of the solver to linear in the number of points, thereby enabling faster, higher-quality surface reconstructions.",
"We present a photometric stereo technique that operates on time-lapse sequences captured by static outdoor webcams over the course of several months. Outdoor webcams produce a large set of uncontrolled images subject to varying lighting and weather conditions. We first automatically select a suitable subset of the captured frames for further processing, reducing the dataset size by several orders of magnitude. A camera calibration step is applied to recover the camera response function, the absolute camera orientation, and to compute the light directions for each image. Finally, we describe a new photometric stereo technique for non-Lambertian scenes and unknown light source intensities to recover normal maps and spatially varying materials of the scene.",
"We present a method for simultaneously recovering shape and spatially varying reflectance of a surface from photometric stereo images. The distinguishing feature of our approach is its generality; it does not rely on a specific parametric reflectance model and is therefore purely ldquodata-drivenrdquo. This is achieved by employing novel bi-variate approximations of isotropic reflectance functions. By combining this new approximation with recent developments in photometric stereo, we are able to simultaneously estimate an independent surface normal at each point, a global set of non-parametric ldquobasis materialrdquo BRDFs, and per-point material weights. Our experimental results validate the approach and demonstrate the utility of bi-variate reflectance functions for general non-parametric appearance capture.",
"3D shape reconstruction with multi-view stereo (MVS) relies on a robust evaluation of photo consistencies across images. The robustness is ensured by isolating surface albedo and scene illumination from the shape recovery, i.e. shading and colour variation are regarded as a nuisance in MVS. This yields a gap in the qualities between the recovered shape and the images used. We present a method to address it by jointly estimating detailed shape, illumination and albedo using the initial shape robustly recovered by MVS. This is achieved by solving the multi-view inverse rendering problem using the geometric and photometric smoothness terms and the normalized spherical harmonics illumination model. Our method allows spatially-varying albedo and per image illumination without any prerequisites such as training data or image segmentation. We demonstrate that our method can clearly improve the 3D shape and recover illumination and albedo on real world scenes.",
"While intrinsic image decomposition has been studied extensively during the past a few decades, it is still a challenging problem. This is partly because commonly used constraints on shading and reflectance are often too restrictive to capture an important property of natural images, i.e., rich textures. In this paper, we propose a novel image model for handling textures in intrinsic image decomposition, which enables us to produce high quality results even with simple constraints. We also propose a novel constraint based on surface normals obtained from an RGB-D image. Assuming Lambertian surfaces, we formulate the constraint based on a locally linear embedding framework to promote local and global consistency on the shading layer. We demonstrate that combining the novel texture-aware image model and the novel surface normal based constraint can produce superior results to existing approaches.",
"",
"",
"",
"This paper describes a photometric stereo method designed for surfaces with spatially-varying BRDFs, including surfaces with both varying diffuse and specular properties. Our optimization-based method builds on the observation that most objects are composed of a small number of fundamental materials by constraining each pixel to be representable by a combination of at most two such materials. This approach recovers not only the shape but also material BRDFs and weight maps, yielding accurate rerenderings under novel lighting conditions for a wide variety of objects. We demonstrate examples of interactive editing operations made possible by our approach."
]
}
|
1811.12328
|
2903434551
|
We show how to train a fully convolutional neural network to perform inverse rendering from a single, uncontrolled image. The network takes an RGB image as input, regresses albedo and normal maps from which we compute lighting coefficients. Our network is trained using large uncontrolled image collections without ground truth. By incorporating a differentiable renderer, our network can learn from self-supervision. Since the problem is ill-posed we introduce additional supervision: 1. We learn a statistical natural illumination prior, 2. Our key insight is to perform offline multiview stereo (MVS) on images containing rich illumination variation. From the MVS pose and depth maps, we can cross project between overlapping views such that Siamese training can be used to ensure consistent estimation of photometric invariants. MVS depth also provides direct coarse supervision for normal map estimation. We believe this is the first attempt to use MVS supervision for learning inverse rendering.
|
Deep depth prediction. Direct estimation of shape alone using deep neural networks has attracted a lot of attention. Eigen et al. @cite_18 @cite_30 were the first to apply deep learning in this context. Subsequently, performance gains were obtained using improved architectures @cite_28 , post-processing with classical CRF-based methods @cite_39 @cite_56 @cite_0 , and using ordinal relationships for objects within the scenes @cite_15 @cite_19 @cite_50 . Zheng et al. @cite_29 use synthetic images for training but improve generalisation with a synthetic-to-real transform GAN. However, all of this work requires supervision by ground truth depth. An alternative branch of methods explores self-supervision from augmented data. For example, binocular stereo pairs can provide a supervisory signal through the consistency of cross-projected images @cite_3 @cite_51 @cite_32 @cite_20 . Alternatively, video data can provide a similar source of supervision @cite_52 @cite_13 @cite_41 . Other specialized forms of supervision have also been proposed recently: Tulsiani et al. @cite_25 use multiview supervision in a ray tracing network, and, while all of these methods take a single image as input, Ji et al. @cite_23 tackle the MVS problem itself using deep learning.
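The stereo self-supervision signal used by @cite_3 @cite_51 @cite_32 @cite_20 can be sketched for rectified pairs: a predicted disparity warps the right image toward the left, and the photometric error supervises the prediction without any ground truth depth. The toy images and constant disparity below are illustrative, and borders are handled crudely by clipping.

import numpy as np

def warp_right_to_left(right, disparity):
    # Rectified stereo: left(x) ~ right(x - disparity(x)).
    # Linear interpolation keeps the warp differentiable in practice.
    H, W = right.shape
    xs = np.arange(W)[None, :] - disparity        # sample locations
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 2)
    t = xs - x0
    rows = np.arange(H)[:, None]
    return (1 - t) * right[rows, x0] + t * right[rows, x0 + 1]

def photometric_loss(left, right, disparity):
    return float(np.mean(np.abs(left - warp_right_to_left(right, disparity))))

rng = np.random.default_rng(0)
right = rng.random((4, 32))
true_disp = np.full((4, 32), 3.0)
left = warp_right_to_left(right, true_disp)          # synthetic left view
print(photometric_loss(left, right, true_disp))      # ~0 at the true disparity
print(photometric_loss(left, right, true_disp + 1))  # larger when wrong

The monocular video variants replace the fixed stereo baseline with an estimated camera motion, but the reconstruction loss has the same form.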
|
{
"cite_N": [
"@cite_30",
"@cite_41",
"@cite_29",
"@cite_3",
"@cite_15",
"@cite_20",
"@cite_18",
"@cite_52",
"@cite_39",
"@cite_23",
"@cite_28",
"@cite_32",
"@cite_56",
"@cite_19",
"@cite_50",
"@cite_51",
"@cite_25",
"@cite_0",
"@cite_13"
],
"mid": [
"",
"",
"2949634581",
"2561074213",
"2798410215",
"",
"2951234442",
"2609883120",
"1915250530",
"2745144057",
"2963591054",
"",
"",
"",
"",
"",
"2609026071",
"",
""
],
"abstract": [
"",
"",
"A significant weakness of most current deep Convolutional Neural Networks is the need to train them using vast amounts of manu- ally labelled data. In this work we propose a unsupervised framework to learn a deep convolutional neural network for single view depth predic- tion, without requiring a pre-training stage or annotated ground truth depths. We achieve this by training the network in a manner analogous to an autoencoder. At training time we consider a pair of images, source and target, with small, known camera motion between the two such as a stereo pair. We train the convolutional encoder for the task of predicting the depth map for the source image. To do so, we explicitly generate an inverse warp of the target image using the predicted depth and known inter-view displacement, to reconstruct the source image; the photomet- ric error in the reconstruction is the reconstruction loss for the encoder. The acquisition of this training data is considerably simpler than for equivalent systems, requiring no manual annotation, nor calibration of depth sensor to camera. We show that our network trained on less than half of the KITTI dataset (without any further augmentation) gives com- parable performance to that of the state of art supervised methods for single view depth estimation.",
"In this paper we formulate structure from motion as a learning problem. We train a convolutional network end-to-end to compute depth and camera motion from successive, unconstrained image pairs. The architecture is composed of multiple stacked encoder-decoder networks, the core part being an iterative network that is able to improve its own predictions. The network estimates not only depth and motion, but additionally surface normals, optical flow between the images and confidence of the matching. A crucial component of the approach is a training loss based on spatial relative differences. Compared to traditional two-frame structure from motion methods, results are more accurate and more robust. In contrast to the popular depth-from-single-image networks, DeMoN learns the concept of matching and, thus, better generalizes to structures not seen during training.",
"Monocular depth estimation, which plays a crucial role in understanding 3D scene geometry, is an ill-posed problem. Recent methods have gained significant improvement by exploring image-level information and hierarchical features from deep convolutional neural networks (DCNNs). These methods model depth estimation as a regression problem and train the regression networks by minimizing mean squared error, which suffers from slow convergence and unsatisfactory local solutions. Besides, existing depth estimation networks employ repeated spatial pooling operations, resulting in undesirable low-resolution feature maps. To obtain high-resolution depth maps, skip-connections or multi-layer deconvolution networks are required, which complicates network training and consumes much more computations. To eliminate or at least largely reduce these problems, we introduce a spacing-increasing discretization (SID) strategy to discretize depth and recast depth network learning as an ordinal regression problem. By training the network using an ordinary regression loss, our method achieves much higher accuracy and faster convergence in synch . Furthermore, we adopt a multi-scale network structure which avoids unnecessary spatial pooling and captures multi-scale information in parallel. The method described in this paper achieves state-of-the-art results on four challenging benchmarks, i.e., KITTI [17], ScanNet [9], Make3D [50], and NYU Depth v2 [42], and win the 1st prize in Robust Vision Challenge 2018. Code has been made available at: this https URL",
"",
"Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation.",
"We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.",
"Depth estimation and semantic segmentation are two fundamental problems in image understanding. While the two tasks are strongly correlated and mutually beneficial, they are usually solved separately or sequentially. Motivated by the complementary properties of the two tasks, we propose a unified framework for joint depth and semantic prediction. Given an image, we first use a trained Convolutional Neural Network (CNN) to jointly predict a global layout composed of pixel-wise depth values and semantic labels. By allowing for interactions between the depth and semantic information, the joint network provides more accurate depth prediction than a state-of-the-art CNN trained solely for depth prediction [6]. To further obtain fine-level details, the image is decomposed into local segments for region-level depth and semantic prediction under the guidance of global layout. Utilizing the pixel-wise global prediction and region-wise local prediction, we formulate the inference problem in a two-layer Hierarchical Conditional Random Field (HCRF) to produce the final depth and semantic map. As demonstrated in the experiments, our approach effectively leverages the advantages of both tasks and provides the state-of-the-art results.",
"This paper proposes an end-to-end learning framework for multiview stereopsis. We term the network SurfaceNet. It takes a set of images and their corresponding camera parameters as input and directly infers the 3D model. The key advantage of the framework is that both photo-consistency as well geometric relations of the surface structure can be directly learned for the purpose of multiview stereopsis in an end-to-end fashion. SurfaceNet is a fully 3D convolutional network which is achieved by encoding the camera parameters together with the images in a 3D voxel representation. We evaluate SurfaceNet on the large-scale DTU benchmark.",
"This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.",
"",
"",
"",
"",
"",
"We study the notion of consistency between a 3D shape and a 2D observation and propose a differentiable formulation which allows computing gradients of the 3D shape given an observation from an arbitrary view. We do so by reformulating view consistency using a differentiable ray consistency (DRC) term. We show that this formulation can be incorporated in a learning framework to leverage different types of multi-view observations e.g. foreground masks, depth, color images, semantics etc. as supervision for learning single-view 3D prediction. We present empirical analysis of our technique in a controlled setting. We also show that this approach allows us to improve over existing techniques for single-view reconstruction of objects from the PASCAL VOC dataset.",
"",
""
]
}
|
1811.12328
|
2903434551
|
We show how to train a fully convolutional neural network to perform inverse rendering from a single, uncontrolled image. The network takes an RGB image as input, regresses albedo and normal maps from which we compute lighting coefficients. Our network is trained using large uncontrolled image collections without ground truth. By incorporating a differentiable renderer, our network can learn from self-supervision. Since the problem is ill-posed we introduce additional supervision: 1. We learn a statistical natural illumination prior, 2. Our key insight is to perform offline multiview stereo (MVS) on images containing rich illumination variation. From the MVS pose and depth maps, we can cross project between overlapping views such that Siamese training can be used to ensure consistent estimation of photometric invariants. MVS depth also provides direct coarse supervision for normal map estimation. We believe this is the first attempt to use MVS supervision for learning inverse rendering.
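The abstract above describes regressing albedo and normal maps, computing lighting coefficients, and closing the loop with a differentiable renderer. A common concrete choice for such a renderer is Lambertian shading under second-order spherical-harmonics (SH) lighting; the sketch below assumes that choice (constant SH normalisation factors are taken as absorbed into the lighting coefficients), so it illustrates the general self-supervision idea rather than the paper's exact model.

```python
import numpy as np

def sh_basis(normals):
    """Second-order spherical-harmonics basis (9 terms per pixel).
    normals: (H, W, 3) unit vectors; constant factors are omitted here
    and assumed to be absorbed into the lighting coefficients."""
    x, y, z = normals[..., 0], normals[..., 1], normals[..., 2]
    return np.stack([np.ones_like(x), x, y, z,
                     x * y, x * z, y * z,
                     x ** 2 - y ** 2, 3 * z ** 2 - 1], axis=-1)

def render_lambertian(albedo, normals, light):
    """albedo: (H, W, 3); normals: (H, W, 3); light: (9, 3) RGB SH coefficients."""
    shading = sh_basis(normals) @ light          # (H, W, 3)
    return albedo * np.clip(shading, 0.0, None)  # non-negative shading

def self_supervision_loss(image, albedo, normals, light):
    """Photometric error between the input image and its re-rendering."""
    return np.mean((render_lambertian(albedo, normals, light) - image) ** 2)
```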
|
Deep intrinsic image decomposition Intrinsic image decomposition is a partial step towards inverse rendering. It decomposes an image into reflectance (albedo) and shading but does not separate shading into shape and illumination. Even so, the lack of ground truth training data makes this a hard problem to solve with deep learning. Recent work either uses synthetic training data and supervised learning @cite_48 @cite_36 @cite_21 @cite_54 @cite_7 or self-supervision/unsupervised learning. Very recently, Li et al. @cite_16 used uncontrolled time-lapse images, allowing them to combine an image reconstruction loss with reflectance consistency between frames. This work was further extended using photorealistic, synthetic training data @cite_45 . Ma et al. @cite_43 also trained on time-lapse sequences and introduced a new gradient constraint which encourages better explanations for sharp changes caused by shading or reflectance. Baslamisli et al. @cite_11 applied a similar gradient constraint, though they used supervised training. Shelhamer et al. @cite_33 propose a hybrid approach in which a CNN estimates a depth map that is used to constrain a classical optimisation-based intrinsic image estimation.
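The time-lapse training signal described above amounts to a reconstruction loss plus a reflectance-consistency term. The following is a minimal numpy sketch of that loss structure, assuming per-frame albedo and shading predictions are already available; the function name, array shapes, and weighting are illustrative, not taken from the cited papers.

```python
import numpy as np

def intrinsic_sequence_loss(frames, albedos, shadings, w_cons=1.0):
    """frames: list of (H, W, 3) images of one scene under varying light.
    albedos: list of (H, W, 3) per-frame reflectance predictions.
    shadings: list of (H, W, 1) per-frame shading predictions."""
    n = len(frames)
    # each frame should be explained as albedo * shading
    recon = sum(np.mean((a * s - f) ** 2)
                for f, a, s in zip(frames, albedos, shadings)) / n
    # reflectance is an intrinsic property: penalise albedo variation
    mean_albedo = sum(albedos) / n
    cons = sum(np.mean((a - mean_albedo) ** 2) for a in albedos) / n
    return recon + w_cons * cons
```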
|
{
"cite_N": [
"@cite_33",
"@cite_7",
"@cite_36",
"@cite_48",
"@cite_54",
"@cite_21",
"@cite_43",
"@cite_45",
"@cite_16",
"@cite_11"
],
"mid": [
"2245606284",
"2769142325",
"2794766838",
"2951548216",
"",
"",
"2895013824",
"2888277922",
"2795464554",
"2963395931"
],
"abstract": [
"Intrinsic image decomposition factorizes an observed image into its physical causes. This is most commonly framed as a decomposition into reflectance and shading, although recent progress has made full decompositions into shape, illumination, reflectance, and shading possible. However, existing factorization approaches require depth sensing to initialize the optimization of scene intrinsics. Rather than relying on depth sensors, we show that depth estimated purely from monocular appearance can provide sufficient cues for intrinsic image analysis. Our full intrinsic pipeline regresses depth by a fully convolutional network then jointly optimizes the intrinsic factorization to recover the input image. This combination yields full decompositions by uniting feature learning through deep network regression with physical modeling through statistical priors and random field regularization. This work demonstrates the first pipeline for full intrinsic decomposition of scenes from a single color image input alone.",
"While invaluable for many computer vision applications, decomposing a natural image into intrinsic reflectance and shading layers represents a challenging, underdetermined inverse problem. As opposed to strict reliance on conventional optimization or filtering solutions with strong prior assumptions, deep learning based approaches have also been proposed to compute intrinsic image decompositions when granted access to sufficient labeled training data. The downside is that current data sources are quite limited, and broadly speaking fall into one of two categories: either dense fully-labeled images in synthetic narrow settings, or weakly-labeled data from relatively diverse natural scenes. In contrast to many previous learning-based approaches, which are often tailored to the structure of a particular dataset (and may not work well on others), we adopt core network structures that universally reflect loose prior knowledge regarding the intrinsic image formation process and can be largely shared across datasets. We then apply flexibly supervised loss layers that are customized for each source of ground truth labels. The resulting deep architecture achieves state-of-the-art results on all of the major intrinsic image benchmarks, and runs considerably faster than most at test time.",
"Intrinsic image decomposition refers to recover the albedo and shading from images, which is an ill-posed problem in signal processing. As realistic labeled data are severely lacking, it is difficult to apply learning methods in this issue. In this letter, we propose using a synthesized dataset to facilitate the solving of this problem. A physically based renderer is used to generate color images and their underlying ground-truth albedo and shading from three-dimensional models. Additionally, we render a Kinect-like noisy depth map for each instance. We utilize this synthetic dataset to train a deep neural network for intrinsic image decomposition and further fine-tune it for real-world images. Our model supports both RGB and RGB-D as input, and it employs both high-level and low-level features to avoid blurry outputs. Experimental results verify the effectiveness of our model on realistic images.",
"We introduce a new approach to intrinsic image decomposition, the task of decomposing a single image into albedo and shading components. Our strategy, which we term direct intrinsics, is to learn a convolutional neural network (CNN) that directly predicts output albedo and shading channels from an input RGB image patch. Direct intrinsics is a departure from classical techniques for intrinsic image decomposition, which typically rely on physically-motivated priors and graph-based inference algorithms. The large-scale synthetic ground-truth of the MPI Sintel dataset plays a key role in training direct intrinsics. We demonstrate results on both the synthetic images of Sintel and the real images of the classic MIT intrinsic image dataset. On Sintel, direct intrinsics, using only RGB input, outperforms all prior work, including methods that rely on RGB+Depth input. Direct intrinsics also generalizes across modalities; it produces quite reasonable decompositions on the real images of the MIT dataset. Our results indicate that the marriage of CNNs with synthetic training data may be a powerful new technique for tackling classic problems in computer vision.",
"",
"",
"Intrinsic image decomposition—decomposing a natural image into a set of images corresponding to different physical causes—is one of the key and fundamental problems of computer vision. Previous intrinsic decomposition approaches either address the problem in a fully supervised manner, or require multiple images of the same scene as input. These approaches are less desirable in practice, as ground truth intrinsic images are extremely difficult to acquire, and requirement of multiple images pose severe limitation on applicable scenarios. In this paper, we propose to bring the best of both worlds. We present a two stream convolutional neural network framework that is capable of learning the decomposition effectively in the absence of any ground truth intrinsic images, and can be easily extended to a (semi-)supervised setup. At inference time, our model can be easily reduced to a single stream module that performs intrinsic decomposition on a single input image. We demonstrate the effectiveness of our framework through extensive experimental study on both synthetic and real-world datasets, showing superior performance over previous approaches in both single-image and multi-image settings. Notably, our approach outperforms previous state-of-the-art single image methods while using only 50 of ground truth supervision.",
"Intrinsic image decomposition is a challenging, long-standing computer vision problem for which ground truth data is very difficult to acquire. We explore the use of synthetic data for training CNN-based intrinsic image decomposition models, then applying these learned models to real-world images. To that end, we present CGIntrinsics, a new, large-scale dataset of physically-based rendered images of scenes with full ground truth decompositions. The rendering process we use is carefully designed to yield high-quality, realistic images, which we find to be crucial for this problem domain. We also propose a new end-to-end training method that learns better decompositions by leveraging CGIntrinsics, and optionally IIW and SAW, two recent datasets of sparse annotations on real-world images. Surprisingly, we find that a decomposition network trained solely on our synthetic data outperforms the state-of-the-art on both IIW and SAW, and performance improves even further when IIW and SAW data is added during training. Our work demonstrates the suprising effectiveness of carefully-rendered synthetic data for the intrinsic images task.",
"Single-view intrinsic image decomposition is a highly ill-posed problem, and so a promising approach is to learn from large amounts of data. However, it is difficult to collect ground truth training data at scale for intrinsic images. In this paper, we explore a different approach to learning intrinsic images: observing image sequences over time depicting the same scene under changing illumination, and learning single-view decompositions that are consistent with these changes. This approach allows us to learn without ground truth decompositions, and to instead exploit information available from multiple images when training. Our trained model can then be applied at test time to single views. We describe a new learning framework based on this idea, including new loss functions that can be efficiently evaluated over entire sequences. While prior learning-based methods achieve good performance on specific benchmarks, we show that our approach generalizes well to several diverse datasets, including MIT intrinsic images, Intrinsic Images in the Wild and Shading Annotations in the Wild.",
"Most of the traditional work on intrinsic image decomposition rely on deriving priors about scene characteristics. On the other hand, recent research use deep learning models as in-and-out black box and do not consider the well-established, traditional image formation process as the basis of their intrinsic learning process. As a consequence, although current deep learning approaches show superior performance when considering quantitative benchmark results, traditional approaches are still dominant in achieving high qualitative results. In this paper, the aim is to exploit the best of the two worlds. A method is proposed that (1) is empowered by deep learning capabilities, (2) considers a physics-based reflection model to steer the learning process, and (3) exploits the traditional approach to obtain intrinsic images by exploiting reflectance and shading gradient information. The proposed model is fast to compute and allows for the integration of all intrinsic components. To train the new model, an object centered large-scale datasets with intrinsic ground-truth images are created. The evaluation results demonstrate that the new model outperforms existing methods. Visual inspection shows that the image formation loss function augments color reproduction and the use of gradient information produces sharper edges. Datasets, models and higher resolution images are available at https: ivi.fnwi.uva.nl cv retinet."
]
}
|
1811.12328
|
2903434551
|
We show how to train a fully convolutional neural network to perform inverse rendering from a single, uncontrolled image. The network takes an RGB image as input, regresses albedo and normal maps from which we compute lighting coefficients. Our network is trained using large uncontrolled image collections without ground truth. By incorporating a differentiable renderer, our network can learn from self-supervision. Since the problem is ill-posed we introduce additional supervision: 1. We learn a statistical natural illumination prior, 2. Our key insight is to perform offline multiview stereo (MVS) on images containing rich illumination variation. From the MVS pose and depth maps, we can cross project between overlapping views such that Siamese training can be used to ensure consistent estimation of photometric invariants. MVS depth also provides direct coarse supervision for normal map estimation. We believe this is the first attempt to use MVS supervision for learning inverse rendering.
|
Deep inverse rendering To date, this topic has not received much attention. One line of work simplifies the problem by restricting it to a single object class, e.g. faces @cite_4 , meaning that a statistical face model can constrain the geometry and reflectance estimates. This enables entirely self-supervised training. Shu et al. @cite_24 extend this idea with an adversarial loss. Sengupta et al. @cite_22 , on the other hand, initialise with supervised training on synthetic data and fine-tune their network in an unsupervised fashion on real images. Another line of work restricts geometry to almost planar objects and lighting to a flash in the viewing direction @cite_37 @cite_53 , under which assumptions impressive results can be obtained. More general settings have been considered by Kulkarni et al. @cite_12 , who show how to learn latent variables that correspond to extrinsic parameters, allowing image manipulation. Janner et al. @cite_6 is the only prior work we are aware of that tackles the full inverse rendering problem. Like us, they use self-supervision but include a trainable shading model. However, the shader requires supervised training on synthetic data, limiting the ability of the network to generalise to real world scenes.
|
{
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_22",
"@cite_53",
"@cite_6",
"@cite_24",
"@cite_12"
],
"mid": [
"2475362300",
"",
"2772390279",
"2736596523",
"",
"2607170299",
"2953255770"
],
"abstract": [
"We extend parametric texture synthesis to capture rich, spatially varying parametric reflectance models from a single image. Our input is a single head-lit flash image of a mostly flat, mostly stationary (textured) surface, and the output is a tile of SVBRDF parameters that reproduce the appearance of the material. No user intervention is required. Our key insight is to make use of a recent, powerful texture descriptor based on deep convolutional neural network statistics for \"softly\" comparing the model prediction and the examplars without requiring an explicit point-to-point correspondence between them. This is in contrast to traditional reflectance capture that requires pointwise constraints between inputs and outputs under varying viewing and lighting conditions. Seen through this lens, our method is an indirect algorithm for fitting photorealistic SVBRDFs. The problem is severely ill-posed and non-convex. To guide the optimizer towards desirable solutions, we introduce a soft Fourier-domain prior for encouraging spatial stationarity of the reflectance parameters and their correlations, and a complementary preconditioning technique that enables efficient exploration of such solutions by L-BFGS, a standard non-linear numerical optimizer.",
"",
"We present SfSNet, an end-to-end learning framework for producing an accurate decomposition of an unconstrained image of a human face into shape, reflectance and illuminance. Our network is designed to reflect a physical lambertian rendering model. SfSNet learns from a mixture of labeled synthetic and unlabeled real world images. This allows the network to capture low frequency variations from synthetic images and high frequency details from real images through the photometric reconstruction loss. SfSNet consists of a new decomposition architecture with residual blocks that learns a complete separation of albedo and normal. This is used along with the original image to predict lighting. SfSNet produces significantly better quantitative and qualitative results than state-of-the-art methods for inverse rendering and independent normal and illumination estimation.",
"We present a convolutional neural network (CNN) based solution for modeling physically plausible spatially varying surface reflectance functions (SVBRDF) from a single photograph of a planar material sample under unknown natural illumination. Gathering a sufficiently large set of labeled training pairs consisting of photographs of SVBRDF samples and corresponding reflectance parameters, is a difficult and arduous process. To reduce the amount of required labeled training data, we propose to leverage the appearance information embedded in unlabeled images of spatially varying materials to self-augment the training process. Starting from an initial approximative network obtained from a small set of labeled training pairs, we estimate provisional model parameters for each unlabeled training exemplar. Given this provisional reflectance estimate, we then synthesize a novel temporary labeled training pair by rendering the exact corresponding image under a new lighting condition. After refining the network using these additional training samples, we re-estimate the provisional model parameters for the unlabeled data and repeat the self-augmentation process until convergence. We demonstrate the efficacy of the proposed network structure on spatially varying wood, met als, and plastics, as well as thoroughly validate the effectiveness of the self-augmentation training process.",
"",
"Traditional face editing methods often require a number of sophisticated and task specific algorithms to be applied one after the other — a process that is tedious, fragile, and computationally intensive. In this paper, we propose an end-to-end generative adversarial network that infers a face-specific disentangled representation of intrinsic face properties, including shape (i.e. normals), albedo, and lighting, and an alpha matte. We show that this network can be trained on in-the-wild images by incorporating an in-network physically-based image formation module and appropriate loss functions. Our disentangling latent representation allows for semantically relevant edits, where one aspect of facial appearance can be manipulated while keeping orthogonal properties fixed, and we demonstrate its use for a number of facial editing applications.",
"This paper presents the Deep Convolution Inverse Graphics Network (DC-IGN), a model that learns an interpretable representation of images. This representation is disentangled with respect to transformations such as out-of-plane rotations and lighting variations. The DC-IGN model is composed of multiple layers of convolution and de-convolution operators and is trained using the Stochastic Gradient Variational Bayes (SGVB) algorithm. We propose a training procedure to encourage neurons in the graphics code layer to represent a specific transformation (e.g. pose or light). Given a single input image, our model can generate new images of the same object with variations in pose and lighting. We present qualitative and quantitative results of the model's efficacy at learning a 3D rendering engine."
]
}
|
1906.11897
|
2953610242
|
In this paper, we demonstrate a physical adversarial patch attack against object detectors, notably the YOLOv3 detector. Unlike previous work on physical object detection attacks, which required the patch to overlap with the objects being misclassified or avoiding detection, we show that a properly designed patch can suppress virtually all the detected objects in the image. That is, we can place the patch anywhere in the image, causing all existing objects in the image to be missed entirely by the detector, even those far away from the patch itself. This in turn opens up new lines of physical attacks against object detection systems, which require no modification of the objects in a scene. A demo of the system can be found at this https URL.
|
Because of the limitations of the classification setting, several other works have investigated the use of adversarial patches in the object detection setting @cite_11 @cite_14 @cite_5 @cite_6 @cite_1 @cite_8 . However, for the few cases in this domain dealing with physical adversarial examples, virtually all focused on the creation of an object that overlaps the object of interest, to either change its class or suppress detection. In contrast, our approach looks specifically at adversarial patches that do not overlap the objects of interest in the scene.
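The patch attacks surveyed here are typically found by gradient descent on an objectness-suppression objective, with randomised placements or transformations to make the patch robust. Below is a minimal PyTorch sketch of that generic recipe; `detector` is a hypothetical callable returning per-box objectness scores, and all sizes and hyperparameters are illustrative rather than taken from any cited attack.

```python
import torch

def optimize_patch(detector, images, patch_hw=(64, 64), steps=500, lr=0.01):
    """Generic objectness-suppression patch attack (sketch).
    detector(batch) is assumed to return per-image box objectness
    scores of shape (N, num_boxes); images: (N, 3, H, W) in [0, 1]."""
    ph, pw = patch_hw
    patch = torch.rand(3, ph, pw, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        x = images.clone()
        # paste at a random location: a crude expectation over placements
        i = torch.randint(0, x.shape[2] - ph, (1,)).item()
        j = torch.randint(0, x.shape[3] - pw, (1,)).item()
        x[:, :, i:i + ph, j:j + pw] = patch.clamp(0, 1)
        # push down the strongest detection in every image
        loss = detector(x).max(dim=1).values.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```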
|
{
"cite_N": [
"@cite_14",
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_5",
"@cite_11"
],
"mid": [
"2798302089",
"2805329444",
"2797328537",
"2938470389",
"",
"2535873859"
],
"abstract": [
"Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input. Given that that emerging physical systems are using DNNs in safety-critical situations, adversarial examples could mislead these systems and cause dangerous situations. Therefore, understanding adversarial examples in the physical world is an important step towards developing resilient learning algorithms. We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions. Using the real-world case of road sign classification, we show that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints. Due to the current lack of a standardized testing method, we propose a two-stage evaluation methodology for robust physical adversarial examples consisting of lab and field tests. Using this methodology, we evaluate the efficacy of physical adversarial manipulations on real objects. With a perturbation in the form of only black and white stickers, we attack a real stop sign, causing targeted misclassification in 100 of the images obtained in lab settings, and in 84.8 of the captured video frames obtained on a moving vehicle (field test) for the target classifier.",
"Adversarial attacks involve adding, small, often imperceptible, perturbations to inputs with the goal of getting a machine learning model to misclassifying them. While many different adversarial attack strategies have been proposed on image classification models, object detection pipelines have been much harder to break. In this paper, we propose a novel strategy to craft adversarial examples by solving a constrained optimization problem using an adversarial generator network. Our approach is fast and scalable, requiring only a forward pass through our trained generator network to craft an adversarial sample. Unlike in many attack strategies, we show that the same trained generator is capable of attacking new images without explicitly optimizing on them. We evaluate our attack on a trained Faster R-CNN face detector on the cropped 300-W face dataset where we manage to reduce the number of detected faces to @math of all originally detected faces. In a different experiment, also on 300-W, we demonstrate the robustness of our attack to a JPEG compression based defense typical JPEG compression level of @math reduces the effectiveness of our attack from only @math of detected faces to a modest @math .",
"Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work. In this work, we tackle the more challenging problem of crafting physical adversarial perturbations to fool image-based object detectors like Faster R-CNN. Attacking an object detector is more difficult than attacking an image classifier, as it needs to mislead the classification results in multiple bounding boxes with different scales. Extending the digital attack to the physical world adds another layer of difficulty, because it requires the perturbation to be robust enough to survive real-world distortions due to different viewing distances and angles, lighting conditions, and camera limitations. We show that the Expectation over Transformation technique, which was originally proposed to enhance the robustness of adversarial perturbations in image classification, can be successfully adapted to the object detection setting. Our approach can generate adversarially perturbed stop signs that are consistently mis-detected by Faster R-CNN as other objects, posing a potential threat to autonomous vehicles and other safety-critical computer vision systems.",
"Adversarial attacks on machine learning models have seen increasing interest in the past years. By making only subtle changes to the input of a convolutional neural network, the output of the network can be swayed to output a completely different result. The first attacks did this by changing pixel values of an input image slightly to fool a classifier to output the wrong class. Other approaches have tried to learn \"patches\" that can be applied to an object to fool detectors and classifiers. Some of these approaches have also shown that these attacks are feasible in the real-world, i.e. by modifying an object and filming it with a video camera. However, all of these approaches target classes that contain almost no intra-class variety (e.g. stop signs). The known structure of the object is then used to generate an adversarial patch on top of it. In this paper, we present an approach to generate adversarial patches to targets with lots of intra-class variety, namely persons. The goal is to generate a patch that is able successfully hide a person from a person detector. An attack that could for instance be used maliciously to circumvent surveillance systems, intruders can sneak around undetected by holding a small cardboard plate in front of their body aimed towards the surveillance camera. From our results we can see that our system is able significantly lower the accuracy of a person detector. Our approach also functions well in real-life scenarios where the patch is filmed by a camera. To the best of our knowledge we are the first to attempt this kind of attack on targets with a high level of intra-class variety like persons.",
"",
"Machine learning is enabling a myriad innovations, including new algorithms for cancer diagnosis and self-driving cars. The broad use of machine learning makes it important to understand the extent to which machine-learning algorithms are subject to attack, particularly when used in applications where physical security or safety is at risk. In this paper, we focus on facial biometric systems, which are widely used in surveillance and access control. We define and investigate a novel class of attacks: attacks that are physically realizable and inconspicuous, and allow an attacker to evade recognition or impersonate another individual. We develop a systematic method to automatically generate such attacks, which are realized through printing a pair of eyeglass frames. When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognized or to impersonate another individual. Our investigation focuses on white-box face-recognition systems, but we also demonstrate how similar techniques can be used in black-box scenarios, as well as to avoid face detection."
]
}
|
1906.11897
|
2953610242
|
In this paper, we demonstrate a physical adversarial patch attack against object detectors, notably the YOLOv3 detector. Unlike previous work on physical object detection attacks, which required the patch to overlap with the objects being misclassified or avoiding detection, we show that a properly designed patch can suppress virtually all the detected objects in the image. That is, we can place the patch anywhere in the image, causing all existing objects in the image to be missed entirely by the detector, even those far away from the patch itself. This in turn opens up new lines of physical attacks against object detection systems, which require no modification of the objects in a scene. A demo of the system can be found at this https URL.
|
YOLO is a one-shot" object detector with state-of-the-art performance on certain metrics running up to @math faster than other models @cite_2 . It treats the input image as an @math grid, each cell predicting @math bounding boxes and their confidence scores; and each box predicting @math class probabilities, conditioned on there being an object in the box. We specifically use the YOLOv3 model as the object detection system we use for our demonstrations, though other object detectors would be possible as well.
|
{
"cite_N": [
"@cite_2"
],
"mid": [
"2796347433"
],
"abstract": [
"We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at this https URL"
]
}
|
1906.12089
|
2954545858
|
The Wikipedia category graph serves as the taxonomic backbone for large-scale knowledge graphs like YAGO or Probase, and has been used extensively for tasks like entity disambiguation or semantic similarity estimation. Wikipedia's categories are a rich source of taxonomic as well as non-taxonomic information. The category 'German science fiction writers', for example, encodes the type of its resources (Writer), as well as their nationality (German) and genre (Science Fiction). Several approaches in the literature make use of fractions of this encoded information without exploiting its full potential. In this paper, we introduce an approach for the discovery of category axioms that uses information from the category network, category instances, and their lexicalisations. With DBpedia as background knowledge, we discover 703k axioms covering 502k of Wikipedia's categories and populate the DBpedia knowledge graph with additional 4.4M relation assertions and 3.3M type assertions at more than 87% and 90% precision, respectively.
|
With the wider adoption of general purpose knowledge graphs such as DBpedia @cite_9 , YAGO @cite_20 , or Wikidata @cite_10 , their quality has come into the focus of recent research @cite_25 @cite_0 . The systematic analysis of knowledge graph quality has inspired a lot of research on automatic or semi-automatic improvement and refinement @cite_3 .
|
{
"cite_N": [
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_10",
"@cite_25",
"@cite_20"
],
"mid": [
"1552847225",
"2300469216",
"1870959433",
"2080133951",
"2622701666",
"804133461"
],
"abstract": [
"The DBpedia community project extracts structured, multilingual knowledge from Wikipedia and makes it freely available on the Web using Semantic Web and Linked Data technologies. The project extracts knowledge from 111 different language editions of Wikipedia. The largest DBpedia knowledge base which is extracted from the English edition of Wikipedia consists of over 400 million facts that describe 3.7 million things. The DBpedia knowledge bases that are extracted from the other 110 Wikipedia editions together consist of 1.46 billion facts and describe 10 million additional things. The DBpedia project maps Wikipedia infoboxes from 27 different language editions to a single shared ontology consisting of 320 classes and 1,650 properties. The mappings are created via a world-wide crowd-sourcing effort and enable knowledge from the different Wikipedia editions to be combined. The project publishes releases of all DBpedia knowledge bases for download and provides SPARQL query access to 14 out of the 111 language editions via a global network of local DBpedia chapters. In addition to the regular releases, the project maintains a live knowledge base which is updated whenever a page in Wikipedia changes. DBpedia sets 27 million RDF links pointing into over 30 external data sources and thus enables data from these sources to be used together with DBpedia data. Several hundred data sets on the Web publish RDF links pointing to DBpedia themselves and make DBpedia one of the central interlinking hubs in the Linked Open Data (LOD) cloud. In this system report, we give an overview of the DBpedia community project, including its architecture, technical implementation, maintenance, internationalisation, usage statistics and applications.",
"In the recent years, different Web knowledge graphs, both free and commercial, have been created. While Google coined the term \"Knowledge Graph\" in 2012, there are also a few openly available knowledge graphs, with DBpedia, YAGO, and Freebase being among the most prominent ones. Those graphs are often constructed from semi-structured knowledge, such as Wikipedia, or harvested from the web with a combination of statistical and linguistic methods. The result are large-scale knowledge graphs that try to make a good trade-off between completeness and correctness. In order to further increase the utility of such knowledge graphs, various refinement methods have been proposed, which try to infer and add missing knowledge to the graph, or identify erroneous pieces of information. In this article, we provide a survey of such knowledge graph refinement approaches, with a dual look at both the methods being proposed as well as the evaluation methodologies used.",
"The development and standardization of semantic web technologies has resulted in an unprecedented volume of data being published on the Web as Linked Data (LD). However, we observe widely varying data quality ranging from extensively curated datasets to crowdsourced and extracted data of relatively low quality. In this article, we present the results of a systematic review of approaches for assessing the quality of LD. We gather existing approaches and analyze them qualitatively. In particular, we unify and formalize commonly used terminologies across papers related to data quality and provide a comprehensive list of 18 quality dimensions and 69 metrics. Additionally, we qualitatively analyze the 30 core approaches and 12 tools using a set of attributes. The aim of this article is to provide researchers and data curators a comprehensive understanding of existing work, thereby encouraging further experimentation and development of new approaches focused towards data quality, specifically for LD.",
"This collaboratively edited knowledgebase provides a common source of data for Wikipedia, and everyone else.",
"",
"We present YAGO3, an extension of the YAGO knowledge base that combines the information from the Wikipedias in multiple languages. Our technique fuses the multilingual information with the English WordNet to build one coherent knowledge base. We make use of the categories, the infoboxes, and Wikidata, and learn the meaning of infobox attributes across languages. We run our method on 10 different languages, and achieve a precision of 95 -100 in the attribute mapping. Our technique enlarges YAGO by 1m new entities and 7m new facts."
]
}
|
1906.12089
|
2954545858
|
The Wikipedia category graph serves as the taxonomic backbone for large-scale knowledge graphs like YAGO or Probase, and has been used extensively for tasks like entity disambiguation or semantic similarity estimation. Wikipedia's categories are a rich source of taxonomic as well as non-taxonomic information. The category 'German science fiction writers', for example, encodes the type of its resources (Writer), as well as their nationality (German) and genre (Science Fiction). Several approaches in the literature make use of fractions of this encoded information without exploiting its full potential. In this paper, we introduce an approach for the discovery of category axioms that uses information from the category network, category instances, and their lexicalisations. With DBpedia as background knowledge, we discover 703k axioms covering 502k of Wikipedia's categories and populate the DBpedia knowledge graph with additional 4.4M relation assertions and 3.3M type assertions at more than 87% and 90% precision, respectively.
|
There are quite a few refinement strategies using additional sources in Wikipedia, especially for the extraction of new relation assertions (RAs). Most of them use the text of Wikipedia pages @cite_15 @cite_8 @cite_4 @cite_7 , but some also exploit Wikipedia-specific structures such as tables @cite_29 @cite_5 or list pages @cite_18 @cite_21 .
|
{
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_29",
"@cite_21",
"@cite_5",
"@cite_15"
],
"mid": [
"2575075980",
"2794999389",
"2107598941",
"",
"2274241723",
"614773905",
"2020022499",
"2407215154"
],
"abstract": [
"",
"Large-scale knowledge graphs, such as DBpedia, Wikidata, or YAGO, can be enhanced by relation extraction from text, using the data in the knowledge graph as training data, i.e., using distant supervision. While most existing approaches use language-specific methods (usually for English), we present a language-agnostic approach that exploits background knowledge from the graph instead of language-specific techniques and builds machine learning models only from language-independent features. We demonstrate the extraction of relations from Wikipedia abstracts, using the twelve largest language editions of Wikipedia. From those, we can extract 1.6 M new relations in DBpedia at a level of precision of 95 , using a RandomForest classifier trained only on language-independent features. We furthermore investigate the similarity of models for different languages and show an exemplary geographical breakdown of the information extracted. In a second series of experiments, we show how the approach can be transferred to DBkWik, a knowledge graph extracted from thousands of Wikis. We discuss the challenges and first results of extracting relations from a larger set of Wikis, using a less formalized knowledge graph.",
"Modern models of relation extraction for tasks like ACE are based on supervised learning of relations from small hand-labeled corpora. We investigate an alternative paradigm that does not require labeled corpora, avoiding the domain dependence of ACE-style algorithms, and allowing the use of corpora of any size. Our experiments use Freebase, a large semantic database of several thousand relations, to provide distant supervision. For each pair of entities that appears in some Freebase relation, we find all sentences containing those entities in a large unlabeled corpus and extract textual features to train a relation classifier. Our algorithm combines the advantages of supervised IE (combining 400,000 noisy pattern features in a probabilistic classifier) and unsupervised IE (extracting large numbers of relations from large corpora of any domain). Our model is able to extract 10,000 instances of 102 relations at a precision of 67.6 . We also analyze feature performance, showing that syntactic parse features are particularly helpful for relations that are ambiguous or lexically distant in their expression.",
"",
"We are currently investigating methods to triplify the content of Wikipedia's tables. We propose that existing knowledge-bases can be leveraged to semi-automatically extract high-quality facts (in the form of RDF triples) from tables embedded in Wikipedia articles (henceforth called \"Wikitables\"). We present a survey of Wikitables and their content in a recent dump of Wikipedia. We then discuss some ongoing work on using DBpedia to mine novel RDF triples from these tables: we present methods that automatically extract 24.4 million raw triples from the Wikitables at an estimated precision of 52.2 . We believe this precision can be (greatly) improved through machine learning methods and sketch ideas for features that should help classify (in)correct triples.",
"Thanks to its wide coverage and general-purpose ontology, DBpedia is a prominent dataset in the Linked Open Data cloud. DBpedia's content is harvested from Wikipedia's infoboxes, based on manually created mappings. In this paper, we explore the use of a promising source of knowledge for extending DBpedia, i.e., Wikipedia's list pages. We discuss how a combination of frequent pattern mining and natural language processing (NLP) methods can be leveraged in order to extend both the DBpedia ontology, as well as the instance information in DBpedia. We provide an illustrative example to show the potential impact of our approach and discuss its main challenges.",
"Millions of HTML tables containing structured data can be found on the Web. With their wide coverage, these tables are potentially very useful for filling missing values and extending cross-domain knowledge bases such as DBpedia, YAGO, or the Google Knowledge Graph. As a prerequisite for being able to use table data for knowledge base extension, the HTML tables need to be matched with the knowledge base, meaning that correspondences between table rows columns and entities schema elements of the knowledge base need to be found. This paper presents the T2D gold standard for measuring and comparing the performance of HTML table to knowledge base matching systems. T2D consists of 8 700 schema-level and 26 100 entity-level correspondences between the WebDataCommons Web Tables Corpus and the DBpedia knowledge base. In contrast related work on HTML table to knowledge base matching, the Web Tables Corpus (147 million tables), the knowledge base, as well as the gold standard are publicly available. The gold standard is used afterward to evaluate the performance of T2K Match, an iterative matching method which combines schema and instance matching. T2K Match is designed for the use case of matching large quantities of mostly small and narrow HTML tables against large cross-domain knowledge bases. The evaluation using the T2D gold standard shows that T2K Match discovers table-to-class correspondences with a precision of 94 , row-to-entity correspondences with a precision of 90 , and column-to-property correspondences with a precision of 77 .",
"DBpedia is a Semantic Web project aiming to extract structured data from Wikipedia articles. Due to the increasing number of resources linked to it, DBpedia plays a central role in the Linked Open Data community. Currently, the information contained in DBpedia is mainly collected from Wikipedia infoboxes, a set of subject-attribute-value triples that represents a summary of the Wikipedia page. These infoboxes are manually compiled by the Wikipedia contributors, and in more than 50 of the Wikipedia articles the infobox is missing. In this article, we use the distant supervision paradigm to extract the missing information directly from the Wikipedia article, using a Relation Extraction tool trained on the information already present in DBpedia. We evaluate our system on a data set consisting of seven DBpedia properties, demonstrating the suitability of the approach in extending the DBpedia coverage."
]
}
|
1906.12089
|
2954545858
|
The Wikipedia category graph serves as the taxonomic backbone for large-scale knowledge graphs like YAGO or Probase, and has been used extensively for tasks like entity disambiguation or semantic similarity estimation. Wikipedia's categories are a rich source of taxonomic as well as non-taxonomic information. The category 'German science fiction writers', for example, encodes the type of its resources (Writer), as well as their nationality (German) and genre (Science Fiction). Several approaches in the literature make use of fractions of this encoded information without exploiting its full potential. In this paper, we introduce an approach for the discovery of category axioms that uses information from the category network, category instances, and their lexicalisations. With DBpedia as background knowledge, we discover 703k axioms covering 502k of Wikipedia's categories and populate the DBpedia knowledge graph with additional 4.4M relation assertions and 3.3M type assertions at more than 87% and 90% precision, respectively.
|
For extracting information from categories, there are two signals that can be exploited: (1) lexical information from the category's name, and (2) statistical information about the instances belonging to the category. YAGO, as discussed above, uses the first signal. A similar approach is @cite_11 , which exploits manually defined textual patterns (such as ) to identify parent categories that organize instances by the objects of a given relation: for example, the category has child categories whose instances share the same object for the relation , and can thus be used to generate axioms such as the one in Equation 3 above. The Catriple approach does not explicitly extract category axioms, but finds 1.27M relation assertions (RAs). A similar approach is taken in @cite_19 , which utilizes POS tagging to extract patterns from category names, but does not derive any knowledge graph axioms from them.
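As a concrete illustration of the lexical signal, the sketch below derives relation assertions from an 'X by Y' parent category and its children, in the spirit of the Catriple idea; the pattern, the category names, and the output format are purely illustrative and not the rules used by the cited systems.

```python
import re

def axioms_from_by_category(parent, children):
    """Lexical signal (sketch): a parent category 'X by Y' suggests that a
    child category 'Z X' assigns every member m the fact (m, Y, Z).
    Pattern and names are illustrative, not the cited systems' rules."""
    m = re.fullmatch(r"(.+) by (.+)", parent)
    if not m:
        return []
    noun, relation = m.group(1), m.group(2)
    axioms = []
    for child in children:
        if child.lower().endswith(" " + noun.lower()):
            value = child[: -len(noun)].strip()
            axioms.append((child, relation, value))
    return axioms

print(axioms_from_by_category("Novels by genre",
                              ["Science fiction novels", "Horror novels"]))
# [('Science fiction novels', 'genre', 'Science fiction'),
#  ('Horror novels', 'genre', 'Horror')]
```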
|
{
"cite_N": [
"@cite_19",
"@cite_11"
],
"mid": [
"2165615475",
"1802555571"
],
"abstract": [
"This paper presents an approach to acquire knowledge from Wikipedia categories and the category network. Many Wikipedia categories have complex names which reflect human classification and organizing instances, and thus encode knowledge about class attributes, taxonomic and other semantic relations. We decode the names and refer back to the network to induce relations between concepts in Wikipedia represented through pages or categories. The category structure allows us to propagate a relation detected between constituents of a category name to numerous concept links. The results of the process are evaluated against ResearchCyc and a subset also by human judges. The results support the idea that Wikipedia category names are a rich source of useful and accurate knowledge.",
"As an important step towards bootstrapping the Semantic Web, many efforts have been made to extract triples from Wikipedia because of its wide coverage, good organization and rich knowledge. One kind of important triples is about Wikipedia articles and their non-isa properties, e.g. (Beijing, country, China). Previous work has tried to extract such triples from Wikipedia infoboxes, article text and categories. The infobox-based and text-based extraction methods depend on the infoboxes and suffer from a low article coverage. In contrast, the category-based extraction methods exploit the widespread categories. However, they rely on predefined properties, which is too effort-consuming and explores only very limited knowledge in the categories. This paper automatically extracts properties and triples from the less explored Wikipedia categories so as to achieve a wider article coverage with less manual effort. We manage to realize this goal by utilizing the syntax and semantics brought by super-sub category pairs in Wikipedia. Our prototype implementation outputs about 10M triples with a 12-level confidence ranging from 47.0 to 96.4 , which cover 78.2 of Wikipedia articles. Among them, 1.27M triples have confidence of 96.4 . Applications can on demand use the triples with suitable confidence."
]
}
|
1906.12089
|
2954545858
|
The Wikipedia category graph serves as the taxonomic backbone for large-scale knowledge graphs like YAGO or Probase, and has been used extensively for tasks like entity disambiguation or semantic similarity estimation. Wikipedia's categories are a rich source of taxonomic as well as non-taxonomic information. The category 'German science fiction writers', for example, encodes the type of its resources (Writer), as well as their nationality (German) and genre (Science Fiction). Several approaches in the literature make use of fractions of this encoded information without exploiting its full potential. In this paper, we introduce an approach for the discovery of category axioms that uses information from the category network, category instances, and their lexicalisations. With DBpedia as background knowledge, we discover 703k axioms covering 502k of Wikipedia's categories and populate the DBpedia knowledge graph with additional 4.4M relation assertions and 3.3M type assertions at more than 87% and 90% precision, respectively.
|
In the area of taxonomy induction, many approaches make use of lexical information when extracting hierarchies of terms. Using Hearst patterns @cite_28 is one of the best-known methods for extracting hypernymy relations from text. It has been extended multiple times, e.g., by @cite_22 , who enhance precision by starting with a set of pre-defined terms and post-filtering the final results. @cite_16 use an optimal branching algorithm to induce a taxonomy from definitions and hypernym relations that have been extracted from text.
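A minimal, self-contained illustration of the Hearst-pattern idea is given below; the single regular expression, its restriction to one-word terms, and the test sentence are all illustrative simplifications of the cited work.

```python
import re

# one classic pattern: "<hypernym> such as <hyponym> (, <hyponym>)* (and <hyponym>)?"
PATTERN = re.compile(r"([A-Za-z]+) such as ([A-Za-z]+(?:(?:, | and )[A-Za-z]+)*)")

def extract_hypernyms(text):
    """Return (hyponym, hypernym) pairs; only single-word terms are handled."""
    pairs = []
    for m in PATTERN.finditer(text):
        hypernym = m.group(1)
        for hyponym in re.split(r", | and ", m.group(2)):
            pairs.append((hyponym, hypernym))
    return pairs

print(extract_hypernyms("authors such as Lem and Dick wrote science fiction"))
# [('Lem', 'authors'), ('Dick', 'authors')]
```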
|
{
"cite_N": [
"@cite_28",
"@cite_16",
"@cite_22"
],
"mid": [
"2068737686",
"2029344051",
"2107322005"
],
"abstract": [
"We describe a method for the automatic acquisition of the hyponymy lexical relation from unrestricted text. Two goals motivate the approach: (i) avoidance of the need for pre-encoded knowledge and (ii) applicability across a wide range of text. We identify a set of lexico-syntactic patterns that are easily recognizable, that occur frequently and across text genre boundaries, and that indisputably indicate the lexical relation of interest. We describe a method for discovering these patterns and suggest that other lexical relations will also be acquirable in this way. A subset of the acquisition algorithm is implemented and the results are used to augment and critique the structure of a large hand-built thesaurus. Extensions and applications to areas such as information retrieval are suggested.",
"In 2004 we published in this journal an article describing OntoLearn, one of the first systems to automatically induce a taxonomy from documents and Web sites. Since then, OntoLearn has continued to be an active area of research in our group and has become a reference work within the community. In this paper we describe our next-generation taxonomy learning methodology, which we name OntoLearn Reloaded. Unlike many taxonomy learning approaches in the literature, our novel algorithm learns both concepts and relations entirely from scratch via the automated extraction of terms, definitions, and hypernyms. This results in a very dense, cyclic and potentially disconnected hypernym graph. The algorithm then induces a taxonomy from this graph via optimal branching and a novel weighting policy. Our experiments show that we obtain high-quality results, both when building brand-new taxonomies and when reconstructing sub-hierarchies of existing taxonomies.",
"A challenging problem in open information extraction and text mining is the learning of the selectional restrictions of semantic relations. We propose a minimally supervised bootstrapping algorithm that uses a single seed and a recursive lexico-syntactic pattern to learn the arguments and the supertypes of a diverse set of semantic relations from the Web. We evaluate the performance of our algorithm on multiple semantic relations expressed using \"verb\", \"noun\", and \"verb prep\" lexico-syntactic patterns. Human-based evaluation shows that the accuracy of the harvested information is about 90 . We also compare our results with existing knowledge base to outline the similarities and differences of the granularity and diversity of the harvested knowledge."
]
}
|
1906.12089
|
2954545858
|
The Wikipedia category graph serves as the taxonomic backbone for large-scale knowledge graphs like YAGO or Probase, and has been used extensively for tasks like entity disambiguation or semantic similarity estimation. Wikipedia's categories are a rich source of taxonomic as well as non-taxonomic information. The category 'German science fiction writers', for example, encodes the type of its resources (Writer), as well as their nationality (German) and genre (Science Fiction). Several approaches in the literature make use of fractions of this encoded information without exploiting its full potential. In this paper, we introduce an approach for the discovery of category axioms that uses information from the category network, category instances, and their lexicalisations. With DBpedia as background knowledge, we discover 703k axioms covering 502k of Wikipedia's categories and populate the DBpedia knowledge graph with additional 4.4M relation assertions and 3.3M type assertions at more than 87% and 90% precision, respectively.
|
The approach of @cite_14 belongs to the second category, i.e., it relies on statistical signals. In a first step, it uses probabilistic methods on the category entities to identify an initial set of axioms, and from these it mines the extraction patterns for category names automatically. The authors find axioms for more than 60k categories and extract around 700k RAs and 200k TAs.
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2571611815"
],
"abstract": [
"Categories play a fundamental role in human cognition. Defining features (short for DFs) are the key elements to define a category, which enables machines to categorize objects. Categories enriched with their DFs significantly improve the machine's ability of categorization and benefit many applications built upon categorization. However, defining features can rarely be found for categories in current knowledge bases. Traditional efforts such as manual construction by domain experts are not practical to find defining features for millions of categories. In this paper, we make the first attempt to automatically find defining features for millions of categories in the real world. We formalize the defining feature learning problem and propose a bootstrapping solution to learn defining features from the features of entities belonging to a category. Experimental results show the effectiveness and efficiency of our method. Finally, we find defining features for overall 60,247 categories with acceptable accuracy."
]
}
|
1906.12064
|
2953870632
|
An important task when processing sensor data is to distinguish relevant from irrelevant data. This paper describes a method for an iterative singular value decomposition that maintains a model of the background via singular vectors spanning a subspace of the image space, thus providing a way to determine the amount of new information contained in an incoming frame. We update the singular vectors spanning the background space in a computationally efficient manner and provide the ability to perform block-wise updates, leading to a fast and robust adaptive SVD computation. The effects of those two properties and the success of the overall method to perform a state of the art background subtraction are shown in both qualitative and quantitative evaluations.
|
The "philosophical" goal of background modeling is to acquire a background image that does not include any moving objects. In realistic environments, the background may also change, due to influences like illumination or objects being introduced to or removed from the scene. Taking into account these problems as well as robustness and adaptation, background modeling methods can, according to the survey papers @cite_16 @cite_18 @cite_5 , be classified into the following categories: Statistical Background Modeling, Background Modeling via Clustering, Background Estimation and Neural Networks.
|
{
"cite_N": [
"@cite_5",
"@cite_18",
"@cite_16"
],
"mid": [
"2316304551",
"2091741383",
"1988061476"
],
"abstract": [
"Background modeling is currently used to detect moving objects in video acquired from static cameras. Numerous statistical methods have been developed over the recent years. The aim of this paper is firstly to provide an extended and updated survey of the recent researches and patents which concern statistical background modeling and secondly to achieve a comparative evaluation. For this, we firstly classified the statistical methods in terms of category. Then, the original methods are reminded and discussed following the challenges met in video sequences. We classified their respective improvements in terms of strategies used. Furthermore, we discussed them in terms of the critical situations they claim to handle. Finally, we conclude with several promising directions for future research. The survey also discussed relevant patents.",
"Abstract Foreground detection is the first step in video surveillance system to detect moving objects. Recent research on subspace estimation by sparse representation and rank minimization represents a nice framework to separate moving objects from the background. Robust Principal Component Analysis (RPCA) solved via Principal Component Pursuit decomposes a data matrix A in two components such that A = L + S , where L is a low-rank matrix and S is a sparse noise matrix. The background sequence is then modeled by a low-rank subspace that can gradually change over time, while the moving foreground objects constitute the correlated sparse outliers. To date, many efforts have been made to develop Principal Component Pursuit (PCP) methods with reduced computational cost that perform visually well in foreground detection. However, no current algorithm seems to emerge and to be able to simultaneously address all the key challenges that accompany real-world videos. This is due, in part, to the absence of a rigorous quantitative evaluation with synthetic and realistic large-scale dataset with accurate ground truth providing a balanced coverage of the range of challenges present in the real world. In this context, this work aims to initiate a rigorous and comprehensive review of RPCA-PCP based methods for testing and ranking existing algorithms for foreground detection. For this, we first review the recent developments in the field of RPCA solved via Principal Component Pursuit. Furthermore, we investigate how these methods are solved and if incremental algorithms and real-time implementations can be achieved for foreground detection. Finally, experimental results on the Background Models Challenge (BMC) dataset which contains different synthetic and real datasets show the comparative performance of these recent methods.",
"Abstract Background modeling for foreground detection is often used in different applications to model the background and then detect the moving objects in the scene like in video surveillance. The last decade witnessed very significant publications in this field. Furthermore, several surveys can be found in the literature but none of them addresses an overall review in this field. So, the purpose of this paper is to provide a complete survey of the traditional and recent approaches. First, we categorize the different approaches found in the literature. We have classified them in terms of the mathematical models used and we have discussed them in terms of the critical situations that they claim to handle. Furthermore, we present the available resources, datasets and libraries. Then, we conclude with several promising directions for future research."
]
}
|
1906.12064
|
2953870632
|
An important task when processing sensor data is to distinguish relevant from irrelevant data. This paper describes a method for an iterative singular value decomposition that maintains a model of the background via singular vectors spanning a subspace of the image space, thus providing a way to determine the amount of new information contained in an incoming frame. We update the singular vectors spanning the background space in a computationally efficient manner and provide the ability to perform block-wise updates, leading to a fast and robust adaptive SVD computation. The effects of those two properties and the success of the overall method to perform a state of the art background subtraction are shown in both qualitative and quantitative evaluations.
|
Our background subtraction method is based on an iterative calculation of an SVD for matrices augmented by columns, cf. @cite_9 . In this section we review the essential statements and the advantages of using this method for calculating the SVD.
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2892093392"
],
"abstract": [
"Abstract We consider the problem of updating the SVD when augmenting a “tall thin” matrix, i.e., a rectangular matrix A ∈ R m × n with m ≫ n . Supposing that an SVD of A is already known, and given a matrix B ∈ R m × n ′ , we derive an efficient method to compute and efficiently store the SVD of the augmented matrix [ A B ] ∈ R m × ( n + n ′ ) . This is an important tool for two types of applications: in the context of principal component analysis, the dominant left singular vectors provided by this decomposition form an orthonormal basis for the best linear subspace of a given dimension, while from the right singular vectors one can extract an orthonormal basis of the kernel of the matrix. We also describe two concrete applications of these concepts which motivated the development of our method and to which it is very well adapted."
]
}
|
1906.12064
|
2953870632
|
An important task when processing sensor data is to distinguish relevant from irrelevant data. This paper describes a method for an iterative singular value decomposition that maintains a model of the background via singular vectors spanning a subspace of the image space, thus providing a way to determine the amount of new information contained in an incoming frame. We update the singular vectors spanning the background space in a computationally efficient manner and provide the ability to perform block-wise updates, leading to a fast and robust adaptive SVD computation. The effects of those two properties and the success of the overall method to perform a state of the art background subtraction are shown in both qualitative and quantitative evaluations.
|
The method from @cite_9 is outlined, in its basic form, as follows. Given: the SVD of @math , and @math . Aim: compute the SVD for @math @math . Update: @math with [ U_{k+1} = U_k Q , ] [ V_{k+1} = (P'_k P_k)^T , ] where @math results from a QR decomposition, @math ; @math and @math result from the SVD of a @math matrix; @math and @math are permutation matrices. For details, see @cite_9 . In the original version of the iterative SVD, the matrix @math is (formally) of dimension @math . Since in image processing @math equals the number of pixels of one image, an explicit representation of @math consumes too much memory to be efficient, which suggests representing @math in terms of Householder reflections. This ensures that the memory footprint of the SVD of @math is bounded by @math , and the step @math requires @math operations.
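For illustration, the following numpy sketch performs one such augmentation step in its plain (non-Householder, non-pivoted) form: it projects the new columns onto the current subspace, takes a QR decomposition of the residual, and obtains the updated factors from the SVD of a small core matrix. Variable names and the omission of the permutations are our simplifications of the method in @cite_9 .

import numpy as np

def svd_append(U, s, Vt, B):
    """One step of the augmented-matrix SVD update: given the thin SVD
    A = U @ diag(s) @ Vt and new columns B, return the thin SVD of [A B].
    Plain numpy sketch; no pivoting, truncation, or Householder storage."""
    m, n = U.shape
    p = B.shape[1]
    UtB = U.T @ B                        # projection of B onto the current subspace
    resid = B - U @ UtB                  # component of B not yet represented
    Q, R = np.linalg.qr(resid)           # orthonormal basis of the new directions
    # Small (n+p) x (n+p) core matrix whose SVD yields the update
    K = np.block([[np.diag(s), UtB],
                  [np.zeros((p, n)), R]])
    Uk, sk, Vkt = np.linalg.svd(K)
    U_new = np.hstack([U, Q]) @ Uk       # rotate the enlarged left basis
    Vt_new = Vkt @ np.block([[Vt, np.zeros((n, p))],
                             [np.zeros((p, Vt.shape[1])), np.eye(p)]])
    return U_new, sk, Vt_new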
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2892093392"
],
"abstract": [
"Abstract We consider the problem of updating the SVD when augmenting a “tall thin” matrix, i.e., a rectangular matrix A ∈ R m × n with m ≫ n . Supposing that an SVD of A is already known, and given a matrix B ∈ R m × n ′ , we derive an efficient method to compute and efficiently store the SVD of the augmented matrix [ A B ] ∈ R m × ( n + n ′ ) . This is an important tool for two types of applications: in the context of principal component analysis, the dominant left singular vectors provided by this decomposition form an orthonormal basis for the best linear subspace of a given dimension, while from the right singular vectors one can extract an orthonormal basis of the kernel of the matrix. We also describe two concrete applications of these concepts which motivated the development of our method and to which it is very well adapted."
]
}
|
1906.12064
|
2953870632
|
An important task when processing sensor data is to distinguish relevant from irrelevant data. This paper describes a method for an iterative singular value decomposition that maintains a model of the background via singular vectors spanning a subspace of the image space, thus providing a way to determine the amount of new information contained in an incoming frame. We update the singular vectors spanning the background space in a computationally efficient manner and provide the ability to perform block-wise updates, leading to a fast and robust adaptive SVD computation. The effects of those two properties and the success of the overall method to perform a state of the art background subtraction are shown in both qualitative and quantitative evaluations.
|
There already exist iterative methods to calculate an SVD, but for our purpose the approach from @cite_9 has two favorable aspects. The first is the possibility to perform blockwise updates with @math , that is, with several frames. The second is the ability to estimate the effect of appending @math on the singular values of @math . In order to compute the SVD of @math , @math is first calculated and a QR decomposition with column pivoting of @math is determined. The @math matrix contains the information in the added data @math that is not already described by the singular vectors in @math . Then, the matrix @math can be truncated by a significance level @math such that the singular values less than @math are set to zero in the SVD calculation of the augmented matrix. Therefore, one can determine, solely from the (cheap) calculation of a QR decomposition, whether the new data contains significant new information, and the threshold level @math controls how large the gain has to be for a data vector to be added to the current SVD decomposition in an iterative step.
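A minimal numpy sketch of this gating step (our own naming; eps plays the role of the significance level): the triangular factor of the residual's QR decomposition is small, so testing whether the new frames carry significant unexplained information is cheap.

import numpy as np

def information_gain(U, B):
    """Cheap novelty test: QR of the residual of B w.r.t. span(U).
    The singular values of the small R factor bound the contribution
    of B beyond the current background subspace."""
    resid = B - U @ (U.T @ B)            # part of B outside the background subspace
    _, R = np.linalg.qr(resid)
    return np.linalg.svd(R, compute_uv=False)   # candidate new singular values

def should_update(U, B, eps):
    """Append B to the iterative SVD only if it adds information above eps."""
    return information_gain(U, B).max() > eps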
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2892093392"
],
"abstract": [
"Abstract We consider the problem of updating the SVD when augmenting a “tall thin” matrix, i.e., a rectangular matrix A ∈ R m × n with m ≫ n . Supposing that an SVD of A is already known, and given a matrix B ∈ R m × n ′ , we derive an efficient method to compute and efficiently store the SVD of the augmented matrix [ A B ] ∈ R m × ( n + n ′ ) . This is an important tool for two types of applications: in the context of principal component analysis, the dominant left singular vectors provided by this decomposition form an orthonormal basis for the best linear subspace of a given dimension, while from the right singular vectors one can extract an orthonormal basis of the kernel of the matrix. We also describe two concrete applications of these concepts which motivated the development of our method and to which it is very well adapted."
]
}
|
1906.12170
|
2955275164
|
In recent years, deep learning based machine lipreading has gained prominence. To this end, several architectures such as LipNet, LCANet and others have been proposed which perform extremely well compared to traditional lipreading DNN-HMM hybrid systems trained on DCT features. In this work, we propose a simpler architecture of 3D-2D-CNN-BLSTM network with a bottleneck layer. We also present analysis of two different approaches for lipreading on this architecture. In the first approach, 3D-2D-CNN-BLSTM network is trained with CTC loss on characters (ch-CTC). Then BLSTM-HMM model is trained on bottleneck lip features (extracted from 3D-2D-CNN-BLSTM ch-CTC network) in a traditional ASR training pipeline. In the second approach, same 3D-2D-CNN-BLSTM network is trained with CTC loss on word labels (w-CTC). The first approach shows that bottleneck features perform better compared to DCT features. Using the second approach on Grid corpus' seen speaker test set, we report @math WER - a @math improvement relative to LCANet. On unseen speaker test set we report @math WER which is @math improvement relative to LipNet. We also verify the method on a second dataset of @math speakers which we collected. Finally, we also discuss the effect of feature duplication on BLSTM-HMM model performance.
|
Two decades ago, lipreading was treated as a word classification problem, where each input video is classified as one of a limited set of words. The authors in @cite_0 perform word classification using different variations of 3D CNN architectures. Word classification using CNNs followed by RNNs or HMMs is presented in a number of papers @cite_5 @cite_10 @cite_9 . Later, the same authors proposed a network in @cite_6 which uses an encoder-decoder architecture for audio-visual sentence-level speech recognition. They also introduced curriculum learning @cite_20 , a strategy to accelerate training and reduce overfitting. We adopt curriculum learning from this paper, which results in faster convergence.
|
{
"cite_N": [
"@cite_9",
"@cite_6",
"@cite_0",
"@cite_5",
"@cite_10",
"@cite_20"
],
"mid": [
"2963192365",
"2551572271",
"2594690981",
"2060510034",
"",
""
],
"abstract": [
"Traditional visual speech recognition systems consist of two stages, feature extraction and classification. Recently, several deep learning approaches have been presented which automatically extract features from the mouth images and aim to replace the feature extraction stage. However, research on joint learning of features and classification is very limited. In this work, we present an end-to-end visual speech recognition system based on Long-Short Memory (LSTM) networks. To the best of our knowledge, this is the first model which simultaneously learns to extract features directly from the pixels and perform classification and also achieves state-of-the-art performance in visual speech classification. The model consists of two streams which extract features directly from the mouth and difference images, respectively. The temporal dynamics in each stream are modelled by an LSTM and the fusion of the two streams takes place via a Bidirectional LSTM (BLSTM). An absolute improvement of 9.7 over the base line is reported on the OuluVS2 database, and 1.5 on the CUAVE database when compared with other methods which use a similar visual front-end.",
"The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem – unconstrained natural language sentences, and in the wild videos. Our key contributions are: (1) a Watch, Listen, Attend and Spell (WLAS) network that learns to transcribe videos of mouth motion to characters, (2) a curriculum learning strategy to accelerate training and to reduce overfitting, (3) a Lip Reading Sentences (LRS) dataset for visual speech recognition, consisting of over 100,000 natural sentences from British television. The WLAS model trained on the LRS dataset surpasses the performance of all previous work on standard lip reading benchmark datasets, often by a significant margin. This lip reading performance beats a professional lip reader on videos from BBC television, and we also demonstrate that if audio is available, then visual information helps to improve speech recognition performance.",
"Our aim is to recognise the words being spoken by a talking face, given only the video but not the audio. Existing works in this area have focussed on trying to recognise a small number of utterances in controlled environments (e.g. digits and alphabets), partially due to the shortage of suitable datasets.",
"Abstract Visual speech information plays an important role in automatic speech recognition (ASR) especially when audio is corrupted or even inaccessible. Despite the success of audio-based ASR, the problem of visual speech decoding remains widely open. This paper provides a detailed review of recent advances in this research area. In comparison with the previous survey [97] which covers the whole ASR system that uses visual speech information, we focus on the important questions asked by researchers and summarize the recent studies that attempt to answer them. In particular, there are three questions related to the extraction of visual features, concerning speaker dependency, pose variation and temporal information, respectively. Another question is about audio-visual speech fusion, considering the dynamic changes of modality reliabilities encountered in practice. In addition, the state-of-the-art on facial landmark localization is briefly introduced in this paper. Those advanced techniques can be used to improve the region-of-interest detection, but have been largely ignored when building a visual-based ASR system. We also provide details of audio-visual speech databases. Finally, we discuss the remaining challenges and offer our insights into the future research on visual speech decoding.",
"",
""
]
}
|
1906.12170
|
2955275164
|
In recent years, deep learning based machine lipreading has gained prominence. To this end, several architectures such as LipNet, LCANet and others have been proposed which perform extremely well compared to traditional lipreading DNN-HMM hybrid systems trained on DCT features. In this work, we propose a simpler architecture of 3D-2D-CNN-BLSTM network with a bottleneck layer. We also present analysis of two different approaches for lipreading on this architecture. In the first approach, 3D-2D-CNN-BLSTM network is trained with CTC loss on characters (ch-CTC). Then BLSTM-HMM model is trained on bottleneck lip features (extracted from 3D-2D-CNN-BLSTM ch-CTC network) in a traditional ASR training pipeline. In the second approach, same 3D-2D-CNN-BLSTM network is trained with CTC loss on word labels (w-CTC). The first approach shows that bottleneck features perform better compared to DCT features. Using the second approach on Grid corpus' seen speaker test set, we report @math WER - a @math improvement relative to LCANet. On unseen speaker test set we report @math WER which is @math improvement relative to LipNet. We also verify the method on a second dataset of @math speakers which we collected. Finally, we also discuss the effect of feature duplication on BLSTM-HMM model performance.
|
End-to-end sentence-level lipreading with a 3D-CNN-RNN based model (LipNet) trained with CTC loss on character labels is proposed in @cite_23 . We also propose end-to-end sentence-level lipreading in this paper, with a new 3D-2D-CNN-BLSTM network architecture that has fewer parameters than LipNet, and we additionally train our model with CTC loss on word labels. The paper @cite_2 presents experiments that use DCT and AAM visual features in traditional speech-style GMM-HMM models for audio-visual speech recognition; in this paper, we instead use 3D-2D-CNN-BLSTM network features in an RNN-HMM context. Several papers @cite_13 have tried phoneme or viseme labels for the CTC loss, followed by a WFST decoder with a language model. The two commonly used cost functions in end-to-end sequential models are the CTC loss and the sequence-to-sequence loss; a comparison between them under different conditions is given in @cite_1 using 3D-CNN and LSTM based architectures. The conditional independence assumption of the CTC loss is considered one of the drawbacks of CTC-based sequential models. LCANet, proposed in @cite_8 , uses a highway network with bidirectional GRU layers after the 3D CNN layers, together with a cascaded attention-CTC decoder, to mitigate the conditional independence assumption of CTC.
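For concreteness, here is a minimal PyTorch sketch of the CTC training step shared by these models: frame-level log-probabilities from a (here stubbed) front-end are aligned to label sequences by torch.nn.CTCLoss. The tensor shapes follow PyTorch's (T, N, C) convention; the stand-in BLSTM encoder, vocabulary size, and sequence lengths are illustrative assumptions, not the exact configuration of any of the cited systems.

import torch
import torch.nn as nn

T, N, C = 75, 4, 28          # frames, batch size, classes (27 characters + CTC blank)
S = 20                        # maximum target (character) sequence length

# Stand-in for the lipreading front-end: any network emitting per-frame logits works.
encoder = nn.LSTM(input_size=512, hidden_size=256, bidirectional=True)
classifier = nn.Linear(512, C)

visual_feats = torch.randn(T, N, 512)                 # e.g. per-frame CNN features
hidden, _ = encoder(visual_feats)
log_probs = classifier(hidden).log_softmax(dim=-1)    # (T, N, C), required by CTCLoss

targets = torch.randint(1, C, (N, S), dtype=torch.long)   # index 0 is the CTC blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.randint(10, S + 1, (N,), dtype=torch.long)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()               # gradients flow through the whole front-end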
|
{
"cite_N": [
"@cite_8",
"@cite_1",
"@cite_23",
"@cite_2",
"@cite_13"
],
"mid": [
"2963030892",
"2890952074",
"2578229578",
"2086349491",
"2883383043"
],
"abstract": [
"Machine lipreading is a special type of automatic speech recognition (ASR) which transcribes human speech by visually interpreting the movement of related face regions including lips, face, and tongue. Recently, deep neural network based lipreading methods show great potential and have exceeded the accuracy of experienced human lipreaders in some benchmark datasets. However, lipreading is still far from being solved, and existing methods tend to have high error rates on the wild data. In this paper, we propose LCANet, an end-to-end deep neural network based lipreading system. LCANet encodes input video frames using a stacked 3D convolutional neural network (CNN), highway network and bidirectional GRU network. The encoder effectively captures both short-term and long-term spatio-temporal information. More importantly, LCANet incorporates a cascaded attention-CTC decoder to generate output texts. By cascading CTC with attention, it partially eliminates the defect of the conditional independence assumption of CTC within the hidden neural layers, and this yields notably performance improvement as well as faster convergence. The experimental results show the proposed system achieves a 1.3 CER and 3.0 WER on the GRID corpus database, leading to a 12.3 improvement compared to the state-of-the-art methods.",
"The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem -- unconstrained natural language sentences, and in the wild videos. Our key contributions are: (1) we compare two models for lip reading, one using a CTC loss, and the other using a sequence-to-sequence loss. Both models are built on top of the transformer self-attention architecture; (2) we investigate to what extent lip reading is complementary to audio speech recognition, especially when the audio signal is noisy; (3) we introduce and publicly release two new datasets for audio-visual speech recognition: LRS2-BBC, consisting of thousands of natural sentences from British television; and LRS3-TED, consisting of hundreds of hours of TED and TEDx talks obtained from YouTube. The models that we train surpass the performance of all previous work on lip reading benchmark datasets by a significant margin.",
"Lipreading is the task of decoding text from the movement of a speaker's mouth. Traditional approaches separated the problem into two stages: designing or learning visual features, and prediction. More recent deep lipreading approaches are end-to-end trainable (, 2016; Chung & Zisserman, 2016a). However, existing work on models trained end-to-end perform only word classification, rather than sentence-level sequence prediction. Studies have shown that human lipreading performance increases for longer words (Easton & Basala, 1982), indicating the importance of features capturing temporal context in an ambiguous communication channel. Motivated by this observation, we present LipNet, a model that maps a variable-length sequence of video frames to text, making use of spatiotemporal convolutions, a recurrent network, and the connectionist temporal classification loss, trained entirely end-to-end. To the best of our knowledge, LipNet is the first end-to-end sentence-level lipreading model that simultaneously learns spatiotemporal visual features and a sequence model. On the GRID corpus, LipNet achieves 95.2 accuracy in sentence-level, overlapped speaker split task, outperforming experienced human lipreaders and the previous 86.4 word-level state-of-the-art accuracy (, 2016).",
"At least some of a sequence of spoken phonemes are indicated by analysing detected sounds to determine a group of phonemes to which a phoneme belongs, optically detecting the lipshape of the speaker and correlating the respective signals by a computer.",
"This work presents a scalable solution to open-vocabulary visual speech recognition. To achieve this, we constructed the largest existing visual speech recognition dataset, consisting of pairs of text and video clips of faces speaking (3,886 hours of video). In tandem, we designed and trained an integrated lipreading system, consisting of a video processing pipeline that maps raw video to stable videos of lips and sequences of phonemes, a scalable deep neural network that maps the lip videos to sequences of phoneme distributions, and a production-level speech decoder that outputs sequences of words. The proposed system achieves a word error rate (WER) of 40.9 as measured on a held-out set. In comparison, professional lipreaders achieve either 86.4 or 92.9 WER on the same dataset when having access to additional types of contextual information. Our approach significantly improves on other lipreading approaches, including variants of LipNet and of Watch, Attend, and Spell (WAS), which are only capable of 89.8 and 76.8 WER respectively."
]
}
|
1906.12170
|
2955275164
|
In recent years, deep learning based machine lipreading has gained prominence. To this end, several architectures such as LipNet, LCANet and others have been proposed which perform extremely well compared to traditional lipreading DNN-HMM hybrid systems trained on DCT features. In this work, we propose a simpler architecture of 3D-2D-CNN-BLSTM network with a bottleneck layer. We also present analysis of two different approaches for lipreading on this architecture. In the first approach, 3D-2D-CNN-BLSTM network is trained with CTC loss on characters (ch-CTC). Then BLSTM-HMM model is trained on bottleneck lip features (extracted from 3D-2D-CNN-BLSTM ch-CTC network) in a traditional ASR training pipeline. In the second approach, same 3D-2D-CNN-BLSTM network is trained with CTC loss on word labels (w-CTC). The first approach shows that bottleneck features perform better compared to DCT features. Using the second approach on Grid corpus' seen speaker test set, we report @math WER - a @math improvement relative to LCANet. On unseen speaker test set we report @math WER which is @math improvement relative to LipNet. We also verify the method on a second dataset of @math speakers which we collected. Finally, we also discuss the effect of feature duplication on BLSTM-HMM model performance.
|
The CTC loss with word labels has been explored for the ASR task in @cite_7 @cite_11 @cite_16 but had not been attempted for lipreading. In this paper we attempt lipreading with CTC loss on word labels and discuss its limitations. In @cite_3 we used feature duplication for a DNN-HMM model with DCT features of the lip image, to match the frame rate of the audio signal; however, the significance of feature duplication in HMM-based models was not explored there. In this paper we show how feature duplication helps improve performance in the context of HMM-based models. In another paper @cite_12 we used CTC loss on character labels with DCT features of the lip region as input to RNN layers.
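A minimal sketch of the feature-duplication step, assuming illustrative rates of 25 fps video and a 100 Hz acoustic frame rate (the exact rates depend on the corpus): each per-frame visual feature vector is simply repeated so the visual stream lines up with the acoustic frames used by the HMM pipeline.

import numpy as np

def duplicate_features(video_feats, video_fps=25, acoustic_rate=100):
    """Repeat each visual feature vector so the visual stream matches
    the (higher) acoustic frame rate used by the HMM training pipeline."""
    factor = acoustic_rate // video_fps       # e.g. 100 Hz / 25 fps -> 4
    return np.repeat(video_feats, factor, axis=0)

lip_feats = np.random.randn(75, 64)           # 75 video frames, 64-dim features
aligned = duplicate_features(lip_feats)
print(aligned.shape)                          # (300, 64)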
|
{
"cite_N": [
"@cite_7",
"@cite_3",
"@cite_16",
"@cite_12",
"@cite_11"
],
"mid": [
"2951327905",
"2572178803",
"",
"2561086211",
"2953291251"
],
"abstract": [
"We present results that show it is possible to build a competitive, greatly simplified, large vocabulary continuous speech recognition system with whole words as acoustic units. We model the output vocabulary of about 100,000 words directly using deep bi-directional LSTM RNNs with CTC loss. The model is trained on 125,000 hours of semi-supervised acoustic training data, which enables us to alleviate the data sparsity problem for word models. We show that the CTC word models work very well as an end-to-end all-neural speech recognition model without the use of traditional context-dependent sub-word phone units that require a pronunciation lexicon, and without any language model removing the need to decode. We demonstrate that the CTC word models perform better than a strong, more complex, state-of-the-art baseline with sub-word units.",
"Multi-task learning (MTL) involves the simultaneous training of two or more related tasks over shared representations. In this work, we apply MTL to audio-visual automatic speech recognition(AV-ASR). Our primary task is to learn a mapping between audio-visual fused features and frame labels obtained from acoustic GMM HMM model. This is combined with an auxiliary task which maps visual features to frame labels obtained from a separate visual GMM HMM model. The MTL model is tested at various levels of babble noise and the results are compared with a base-line hybrid DNN-HMM AV-ASR model. Our results indicate that MTL is especially useful at higher level of noise. Compared to base-line, upto 7 relative improvement in WER is reported at -3 SNR dB",
"",
"In this work, we propose a training algorithm for an audio-visual automatic speech recognition (AV-ASR) system using deep recurrent neural network (RNN). First, we train a deep RNN acoustic model with a Connectionist Temporal Classification (CTC) objective function. The frame labels obtained from the acoustic model are then used to perform a non-linear dimensionality reduction of the visual features using a deep bottleneck network. Audio and visual features are fused and used to train a fusion RNN. The use of bottleneck features for visual modality helps the model to converge properly during training. Our system is evaluated on GRID corpus. Our results show that presence of visual modality gives significant improvement in character error rate (CER) at various levels of noise even when the model is trained without noisy data. We also provide a comparison of two fusion methods: feature fusion and decision fusion.",
"Recent work on end-to-end automatic speech recognition (ASR) has shown that the connectionist temporal classification (CTC) loss can be used to convert acoustics to phone or character sequences. Such systems are used with a dictionary and separately-trained Language Model (LM) to produce word sequences. However, they are not truly end-to-end in the sense of mapping acoustics directly to words without an intermediate phone representation. In this paper, we present the first results employing direct acoustics-to-word CTC models on two well-known public benchmark tasks: Switchboard and CallHome. These models do not require an LM or even a decoder at run-time and hence recognize speech with minimal complexity. However, due to the large number of word output units, CTC word models require orders of magnitude more data to train reliably compared to traditional systems. We present some techniques to mitigate this issue. Our CTC word model achieves a word error rate of 13.0 18.8 on the Hub5-2000 Switchboard CallHome test sets without any LM or decoder compared with 9.6 16.0 for phone-based CTC with a 4-gram LM. We also present rescoring results on CTC word model lattices to quantify the performance benefits of a LM, and contrast the performance of word and phone CTC models."
]
}
|
1906.12165
|
2955794838
|
Action localization in untrimmed videos is an important topic in the field of video understanding. However, existing action localization methods are restricted to a pre-defined set of actions and cannot localize unseen activities. Thus, we consider a new task to localize unseen activities in videos via image queries, named Image-Based Activity Localization. This task faces three inherent challenges: (1) how to eliminate the influence of semantically inessential contents in image queries; (2) how to deal with the fuzzy localization of inaccurate image queries; (3) how to determine the precise boundaries of target segments. We then propose a novel self-attention interaction localizer to retrieve unseen activities in an end-to-end fashion. Specifically, we first devise a region self-attention method with relative position encoding to learn fine-grained image region representations. Then, we employ a local transformer encoder to build multi-step fusion and reasoning of image and video contents. We next adopt an order-sensitive localizer to directly retrieve the target segment. Furthermore, we construct a new dataset ActivityIBAL by reorganizing the ActivityNet dataset. The extensive experiments show the effectiveness of our method.
|
Temporal action localization aims to detect action instances in untrimmed videos. In the fully-supervised setting, some works @cite_16 @cite_20 apply recurrent neural networks to capture the temporal dynamics of video content: @cite_16 propose an RNN-based agent that observes video frames and decides both where to look next and when to emit a prediction, while @cite_20 devise two additional streams on motion and appearance for fine-grained action detection. With the development of 3D convolution @cite_8 , @cite_24 employ three segment-based 3D ConvNets to explicitly consider temporal overlap in videos, @cite_12 directly encode the video streams using a 3D fully convolutional network without pre-defined segments, and @cite_23 employ temporal upsampling and spatial downsampling operations simultaneously. Furthermore, @cite_22 model the temporal structure of each action instance via a temporal pyramid, @cite_7 skip proposal generation and directly detect action instances based on temporal convolutional layers, and, inspired by the Faster R-CNN object detection framework @cite_1 , @cite_27 develop an improved method for temporal action localization.
|
{
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_24",
"@cite_27",
"@cite_23",
"@cite_16",
"@cite_20",
"@cite_12"
],
"mid": [
"2964216549",
"2766402183",
"1522734439",
"2613718673",
"2394849137",
"",
"2593722617",
"2179401333",
"",
"2963247196"
],
"abstract": [
"Detecting actions in untrimmed videos is an important yet challenging task. In this paper, we present the structured segment network (SSN), a novel framework which models the temporal structure of each action instance via a structured temporal pyramid. On top of the pyramid, we further introduce a decomposed discriminative model comprising two classifiers, respectively for classifying actions and determining completeness. This allows the framework to effectively distinguish positive proposals from background or incomplete ones, thus leading to both accurate recognition and localization. These components are integrated into a unified network that can be efficiently trained in an end-to-end fashion. Additionally, a simple yet effective temporal action proposal scheme, dubbed temporal actionness grouping (TAG) is devised to generate high quality action proposals. On two challenging benchmarks, THUMOS14 and ActivityNet, our method remarkably outperforms previous state-of-the-art methods, demonstrating superior accuracy and strong adaptivity in handling actions with various temporal structures.",
"Temporal action detection is a very important yet challenging problem, since videos in real applications are usually long, untrimmed and contain multiple action instances. This problem requires not only recognizing action categories but also detecting start time and end time of each action instance. Many state-of-the-art methods adopt the \"detection by classification\" framework: first do proposal, and then classify proposals. The main drawback of this framework is that the boundaries of action instance proposals have been fixed during the classification step. To address this issue, we propose a novel Single Shot Action Detector (SSAD) network based on 1D temporal convolutional layers to skip the proposal generation step via directly detecting action instances in untrimmed video. On pursuit of designing a particular SSAD network that can work effectively for temporal action detection, we empirically search for the best network architecture of SSAD due to lacking existing models that can be directly adopted. Moreover, we investigate into input feature types and fusion strategies to further improve detection accuracy. We conduct extensive experiments on two challenging datasets: THUMOS 2014 and MEXaction2. When setting Intersection-over-Union threshold to 0.5 during evaluation, SSAD significantly outperforms other state-of-the-art systems by increasing mAP from @math to @math on THUMOS 2014 and from 7.4 to @math on MEXaction2.",
"We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets, 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets, and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.",
"We address temporal action localization in untrimmed long videos. This is important because videos in real applications are usually unconstrained and contain multiple action instances plus video content of background scenes or other activities. To address this challenging issue, we exploit the effectiveness of deep networks in temporal action localization via three segment-based 3D ConvNets: (1) a proposal network identifies candidate segments in a long video that may contain actions; (2) a classification network learns one-vs-all action classification model to serve as initialization for the localization network; and (3) a localization network fine-tunes on the learned classification network to localize each action instance. We propose a novel loss function for the localization network to explicitly consider temporal overlap and therefore achieve high temporal localization accuracy. Only the proposal network and the localization network are used during prediction. On two large-scale benchmarks, our approach achieves significantly superior performances compared with other state-of-the-art systems: mAP increases from 1.7 to 7.4 on MEXaction2 and increases from 15.0 to 19.0 on THUMOS 2014, when the overlap threshold for evaluation is set to 0.5.",
"",
"Temporal action localization is an important yet challenging problem. Given a long, untrimmed video consisting of multiple action instances and complex background contents, we need not only to recognize their action categories, but also to localize the start time and end time of each instance. Many state-of-the-art systems use segment-level classifiers to select and rank proposal segments of pre-determined boundaries. However, a desirable model should move beyond segment-level and make dense predictions at a fine granularity in time to determine precise temporal boundaries. To this end, we design a novel Convolutional-De-Convolutional (CDC) network that places CDC filters on top of 3D ConvNets, which have been shown to be effective for abstracting action semantics but reduce the temporal length of the input data. The proposed CDC filter performs the required temporal upsampling and spatial downsampling operations simultaneously to predict actions at the frame-level granularity. It is unique in jointly modeling action semantics in space-time and fine-grained temporal dynamics. We train the CDC network in an end-to-end manner efficiently. Our model not only achieves superior performance in detecting actions in every frame, but also significantly boosts the precision of localizing temporal boundaries. Finally, the CDC network demonstrates a very high efficiency with the ability to process 500 frames per second on a single GPU server. Source code and trained models are available online at https: bitbucket.org columbiadvmm cdc.",
"In this work we introduce a fully end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions. Our intuition is that the process of detecting actions is naturally one of observation and refinement: observing moments in video, and refining hypotheses about when an action is occurring. Based on this insight, we formulate our model as a recurrent neural network-based agent that interacts with a video over time. The agent observes video frames and decides both where to look next and when to emit a prediction. Since backpropagation is not adequate in this non-differentiable setting, we use REINFORCE to learn the agent's decision policy. Our model achieves state-of-the-art results on the THUMOS'14 and ActivityNet datasets while observing only a fraction (2 or less) of the video frames.",
"",
"We address the problem of activity detection in continuous, untrimmed video streams. This is a difficult task that requires extracting meaningful spatio-temporal features to capture activities, accurately localizing the start and end times of each activity. We introduce a new model, Region Convolutional 3D Network (R-C3D), which encodes the video streams using a three-dimensional fully convolutional network, then generates candidate temporal regions containing activities, and finally classifies selected regions into specific activities. Computation is saved due to the sharing of convolutional features between the proposal and the classification pipelines. The entire model is trained end-to-end with jointly optimized localization and classification losses. R-C3D is faster than existing methods (569 frames per second on a single Titan X Maxwell GPU) and achieves state-of-the-art results on THUMOS’14. We further demonstrate that our model is a general activity detection framework that does not rely on assumptions about particular dataset properties by evaluating our approach on ActivityNet and Charades. Our code is available at http: ai.bu.edu r-c3d"
]
}
|
1906.12165
|
2955794838
|
Action localization in untrimmed videos is an important topic in the field of video understanding. However, existing action localization methods are restricted to a pre-defined set of actions and cannot localize unseen activities. Thus, we consider a new task to localize unseen activities in videos via image queries, named Image-Based Activity Localization. This task faces three inherent challenges: (1) how to eliminate the influence of semantically inessential contents in image queries; (2) how to deal with the fuzzy localization of inaccurate image queries; (3) how to determine the precise boundaries of target segments. We then propose a novel self-attention interaction localizer to retrieve unseen activities in an end-to-end fashion. Specifically, we first devise a region self-attention method with relative position encoding to learn fine-grained image region representations. Then, we employ a local transformer encoder to build multi-step fusion and reasoning of image and video contents. We next adopt an order-sensitive localizer to directly retrieve the target segment. Furthermore, we construct a new dataset ActivityIBAL by reorganizing the ActivityNet dataset. The extensive experiments show the effectiveness of our method.
|
In the non-fully-supervised setting, @cite_19 directly learn action recognition from untrimmed videos without temporal action annotations, @cite_2 identify a sparse subset of key segments associated with target actions and fuse them through adaptive temporal pooling, and @cite_18 formulate unsupervised action localization as a knapsack problem.
|
{
"cite_N": [
"@cite_19",
"@cite_18",
"@cite_2"
],
"mid": [
"2604113307",
"",
"2772992824"
],
"abstract": [
"Current action recognition methods heavily rely on trimmed videos for model training. However, it is expensive and time-consuming to acquire a large-scale trimmed video dataset. This paper presents a new weakly supervised architecture, called UntrimmedNet, which is able to directly learn action recognition models from untrimmed videos without the requirement of temporal annotations of action instances. Our UntrimmedNet couples two important components, the classification module and the selection module, to learn the action models and reason about the temporal duration of action instances, respectively. These two components are implemented with feed-forward networks, and UntrimmedNet is therefore an end-to-end trainable architecture. We exploit the learned models for action recognition (WSR) and detection (WSD) on the untrimmed video datasets of THUMOS14 and ActivityNet. Although our UntrimmedNet only employs weak supervision, our method achieves performance superior or comparable to that of those strongly supervised approaches on these two datasets.",
"",
"We propose a weakly supervised temporal action localization algorithm on untrimmed videos using convolutional neural networks. Our algorithm predicts temporal intervals of human actions given video-level class labels with no requirement of temporal localization information of actions. This objective is achieved by proposing a novel deep neural network that recognizes actions and identifies a sparse set of key segments associated with the actions through adaptive temporal pooling of video segments. We design the loss function of the network to comprise two terms--one for classification error and the other for sparsity of the selected segments. After recognizing actions with sparse attention weights for key segments, we extract temporal proposals for actions using temporal class activation mappings to estimate time intervals that localize target actions. The proposed algorithm attains state-of-the-art accuracy on the THUMOS14 dataset and outstanding performance on ActivityNet1.3 even with weak supervision."
]
}
|
1906.12165
|
2955794838
|
Action localization in untrimmed videos is an important topic in the field of video understanding. However, existing action localization methods are restricted to a pre-defined set of actions and cannot localize unseen activities. Thus, we consider a new task to localize unseen activities in videos via image queries, named Image-Based Activity Localization. This task faces three inherent challenges: (1) how to eliminate the influence of semantically inessential contents in image queries; (2) how to deal with the fuzzy localization of inaccurate image queries; (3) how to determine the precise boundaries of target segments. We then propose a novel self-attention interaction localizer to retrieve unseen activities in an end-to-end fashion. Specifically, we first devise a region self-attention method with relative position encoding to learn fine-grained image region representations. Then, we employ a local transformer encoder to build multi-step fusion and reasoning of image and video contents. We next adopt an order-sensitive localizer to directly retrieve the target segment. Furthermore, we construct a new dataset ActivityIBAL by reorganizing the ActivityNet dataset. The extensive experiments show the effectiveness of our method.
|
To go beyond a pre-defined set of actions, @cite_6 @cite_11 @cite_21 @cite_0 focus on temporal localization of actions by natural language queries. These methods can localize complex actions according to sentence queries, but still struggle to recognize unseen activities. We therefore consider localizing unseen activities in untrimmed videos via an image query.
|
{
"cite_N": [
"@cite_0",
"@cite_21",
"@cite_6",
"@cite_11"
],
"mid": [
"2948958195",
"2903901502",
"2964089981",
"2742343242"
],
"abstract": [
"Query-based moment retrieval aims to localize the most relevant moment in an untrimmed video according to the given natural language query. Existing works often only focus on one aspect of this emerging task, such as the query representation learning, video context modeling or multi-modal fusion, thus fail to develop a comprehensive system for further performance improvement. In this paper, we introduce a novel Cross-Modal Interaction Network (CMIN) to consider multiple crucial factors for this challenging task, including (1) the syntactic structure of natural language queries; (2) long-range semantic dependencies in video context and (3) the sufficient cross-modal interaction. Specifically, we devise a syntactic GCN to leverage the syntactic structure of queries for fine-grained representation learning, propose a multi-head self-attention to capture long-range semantic dependencies from video context, and next employ a multi-stage cross-modal interaction to explore the potential relations of video and query contents. The extensive experiments demonstrate the effectiveness of our proposed method.",
"In this paper, we consider the task of natural language video localization (NLVL): given an untrimmed video and a natural language description, the goal is to localize a segment in the video which semantically corresponds to the given natural language description. We propose a localizing network (LNet), working in an end-to-end fashion, to tackle the NLVL task. We first match the natural sentence and video sequence by cross-gated attended recurrent networks to exploit their fine-grained interactions and generate a sentence-aware video representation. A self interactor is proposed to perform crossframe matching, which dynamically encodes and aggregates the matching evidences. Finally, a boundary model is proposed to locate the positions of video segments corresponding to the natural sentence description by predicting the starting and ending points of the segment. Extensive experiments conducted on the public TACoS and DiDeMo datasets demonstrate that our proposed model performs effectively and efficiently against the state-of-the-art approaches.",
"This paper focuses on temporal localization of actions in untrimmed videos. Existing methods typically train classifiers for a pre-defined list of actions and apply them in a sliding window fashion. However, activities in the wild consist of a wide combination of actors, actions and objects; it is difficult to design a proper activity list that meets users’ needs. We propose to localize activities by natural language queries. Temporal Activity Localization via Language (TALL) is challenging as it requires: (1) suitable design of text and video representations to allow cross-modal matching of actions and language queries; (2) ability to locate actions accurately given features from sliding windows of limited granularity. We propose a novel Cross-modal Temporal Regression Localizer (CTRL) to jointly model text query and video clips, output alignment scores and action boundary regression results for candidate clips. Lor evaluation, we adopt TaCoS dataset, and build a new dataset for this task on top of Charades by adding sentence temporal annotations, called Charades-STA. We also build complex sentence queries in Charades-STA for test. Experimental results show that CTRL outperforms previous methods significantly on both datasets.",
"We consider retrieving a specific temporal segment, or moment, from a video given a natural language text description. Methods designed to retrieve whole video clips with natural language determine what occurs in a video but not when. To address this issue, we propose the Moment Context Network (MCN) which effectively localizes natural language queries in videos by integrating local and global video features over time. A key obstacle to training our MCN model is that current video datasets do not include pairs of localized video segments and referring expressions, or text descriptions which uniquely identify a corresponding moment. Therefore, we collect the Distinct Describable Moments (DiDeMo) dataset which consists of over 10,000 unedited, personal videos in diverse visual settings with pairs of localized video segments and referring expressions. We demonstrate that MCN outperforms several baseline methods and believe that our initial results together with the release of DiDeMo will inspire further research on localizing video moments with natural language."
]
}
|
1906.12021
|
2954113706
|
Super-Resolution convolutional neural networks have recently demonstrated high-quality restoration for single images. However, existing algorithms often require very deep architectures and long training times. Furthermore, current convolutional neural networks for super-resolution are unable to exploit features at multiple scales and weigh them equally, limiting their learning capability. In this exposition, we present a compact and accurate super-resolution algorithm namely, Densely Residual Laplacian Network (DRLN). The proposed network employs cascading residual on the residual structure to allow the flow of low-frequency information to focus on learning high and mid-level features. In addition, deep supervision is achieved via the densely concatenated residual blocks settings, which also helps in learning from high-level complex features. Moreover, we propose Laplacian attention to model the crucial features to learn the inter and intra-level dependencies between the feature maps. Furthermore, comprehensive quantitative and qualitative evaluations on low-resolution, noisy low-resolution, and real historical image benchmark datasets illustrate that our DRLN algorithm performs favorably against the state-of-the-art methods visually and accurately.
|
In this section, we review the chronological advancement of deep super-resolution. Dong et al. @cite_9 pioneered the area by introducing a fully convolutional network composed of three convolutional layers with ReLU @cite_11 activations, and termed it SRCNN @cite_9 . The input to SRCNN @cite_9 is a bicubic-interpolated image, which diminishes high frequencies and requires additional computation. To reduce this burden on the network, FSRCNN @cite_13 takes the original low-resolution image as input and employs a deconvolution layer to upsample the features to the desired dimensions just before the final objective function. The authors of @cite_13 also use channel shrinking and expansion to make the model run in near real-time on a CPU.
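
As a rough illustration of the three-layer mapping described above, a minimal PyTorch sketch follows. It assumes the commonly cited 9-1-5 layer configuration with 64 and 32 channels and a single-channel (luminance) input; these are illustrative choices, not the authors' exact code.

import torch.nn as nn

class SRCNNSketch(nn.Module):
    # Three-layer SRCNN-style mapping applied to a bicubic-upsampled input.
    # An FSRCNN-style variant would instead take the raw low-resolution image
    # and finish with an nn.ConvTranspose2d (deconvolution) upsampler.
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4),  # patch extraction and representation
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 1),            # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.body(x)
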
|
{
"cite_N": [
"@cite_9",
"@cite_13",
"@cite_11"
],
"mid": [
"1885185971",
"2503339013",
"1677182931"
],
"abstract": [
"We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.",
"As a successful deep model applied in image super-resolution (SR), the Super-Resolution Convolutional Neural Network (SRCNN) [1, 2] has demonstrated superior performance to the previous hand-crafted models either in speed and restoration quality. However, the high computational cost still hinders it from practical usage that demands real-time performance (24 fps). In this paper, we aim at accelerating the current SRCNN, and propose a compact hourglass-shape CNN structure for faster and better SR. We re-design the SRCNN structure mainly in three aspects. First, we introduce a deconvolution layer at the end of the network, then the mapping is learned directly from the original low-resolution image (without interpolation) to the high-resolution one. Second, we reformulate the mapping layer by shrinking the input feature dimension before mapping and expanding back afterwards. Third, we adopt smaller filter sizes but more mapping layers. The proposed model achieves a speed up of more than 40 times with even superior restoration quality. Further, we present the parameter settings that can achieve real-time performance on a generic CPU while still maintaining good performance. A corresponding transfer strategy is also proposed for fast training and testing across different upscaling factors.",
"Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94 top-5 test error on the ImageNet 2012 classification dataset. This is a 26 relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66 [33]). To our knowledge, our result is the first to surpass the reported human-level performance (5.1 , [26]) on this dataset."
]
}
|
1906.12021
|
2954113706
|
Super-Resolution convolutional neural networks have recently demonstrated high-quality restoration for single images. However, existing algorithms often require very deep architectures and long training times. Furthermore, current convolutional neural networks for super-resolution are unable to exploit features at multiple scales and weigh them equally, limiting their learning capability. In this exposition, we present a compact and accurate super-resolution algorithm namely, Densely Residual Laplacian Network (DRLN). The proposed network employs cascading residual on the residual structure to allow the flow of low-frequency information to focus on learning high and mid-level features. In addition, deep supervision is achieved via the densely concatenated residual blocks settings, which also helps in learning from high-level complex features. Moreover, we propose Laplacian attention to model the crucial features to learn the inter and intra-level dependencies between the feature maps. Furthermore, comprehensive quantitative and qualitative evaluations on low-resolution, noisy low-resolution, and real historical image benchmark datasets illustrate that our DRLN algorithm performs favorably against the state-of-the-art methods visually and accurately.
|
Initially, the focus was on linear networks, which bear a simple architecture with no skip-connections: there is only one path for the signal flow, with the layers stacked consecutively. SRCNN @cite_9 and FSRCNN @cite_13 are examples of linear networks. Similarly, the Image Restoration CNN, abbreviated as IRCNN @cite_49 , another straight model, can handle several low-level vision tasks jointly. The aim here is to employ dilation in the convolutional layers to capture a larger receptive field for better learning, coupled with batch normalization and non-linear activation (ReLU) to reduce the depth of the network. Furthermore, SRMD @cite_25 , an extended super-resolution network, can handle different degradations: it takes as input low-resolution images together with their computed degradation maps. The model structure is similar to @cite_9 @cite_49 .
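
A minimal sketch of the dilated convolution, batch normalization, and ReLU stage described for IRCNN; the channel width and dilation rate here are illustrative assumptions, not the paper's exact configuration.

import torch.nn as nn

def dilated_stage(channels=64, dilation=2):
    # padding = dilation keeps the spatial size constant for a 3x3 kernel,
    # while the dilated kernel enlarges the receptive field.
    return nn.Sequential(
        nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation),
        nn.BatchNorm2d(channels),
        nn.ReLU(inplace=True),
    )
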
|
{
"cite_N": [
"@cite_9",
"@cite_13",
"@cite_25",
"@cite_49"
],
"mid": [
"1885185971",
"2503339013",
"",
"2613155248"
],
"abstract": [
"We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.",
"As a successful deep model applied in image super-resolution (SR), the Super-Resolution Convolutional Neural Network (SRCNN) [1, 2] has demonstrated superior performance to the previous hand-crafted models either in speed and restoration quality. However, the high computational cost still hinders it from practical usage that demands real-time performance (24 fps). In this paper, we aim at accelerating the current SRCNN, and propose a compact hourglass-shape CNN structure for faster and better SR. We re-design the SRCNN structure mainly in three aspects. First, we introduce a deconvolution layer at the end of the network, then the mapping is learned directly from the original low-resolution image (without interpolation) to the high-resolution one. Second, we reformulate the mapping layer by shrinking the input feature dimension before mapping and expanding back afterwards. Third, we adopt smaller filter sizes but more mapping layers. The proposed model achieves a speed up of more than 40 times with even superior restoration quality. Further, we present the parameter settings that can achieve real-time performance on a generic CPU while still maintaining good performance. A corresponding transfer strategy is also proposed for fast training and testing across different upscaling factors.",
"",
"Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. Typically, those two kinds of methods have their respective merits and drawbacks, e.g., model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming with sophisticated priors for the purpose of good performance, in the meanwhile, discriminative learning methods have fast testing speed but their application range is greatly restricted by the specialized task. Recent works have revealed that, with the aid of variable splitting techniques, denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration induces considerable advantage when the denoiser is obtained via discriminative learning. However, the study of integration with fast discriminative denoiser prior is still lacking. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into model-based optimization method to solve other inverse problems. Experimental results demonstrate that the learned set of denoisers can not only achieve promising Gaussian denoising results but also can be used as prior to deliver good performance for various low-level vision applications."
]
}
|
1906.12021
|
2954113706
|
Super-Resolution convolutional neural networks have recently demonstrated high-quality restoration for single images. However, existing algorithms often require very deep architectures and long training times. Furthermore, current convolutional neural networks for super-resolution are unable to exploit features at multiple scales and weigh them equally, limiting their learning capability. In this exposition, we present a compact and accurate super-resolution algorithm namely, Densely Residual Laplacian Network (DRLN). The proposed network employs cascading residual on the residual structure to allow the flow of low-frequency information to focus on learning high and mid-level features. In addition, deep supervision is achieved via the densely concatenated residual blocks settings, which also helps in learning from high-level complex features. Moreover, we propose Laplacian attention to model the crucial features to learn the inter and intra-level dependencies between the feature maps. Furthermore, comprehensive quantitative and qualitative evaluations on low-resolution, noisy low-resolution, and real historical image benchmark datasets illustrate that our DRLN algorithm performs favorably against the state-of-the-art methods visually and accurately.
|
With the emergence of skip-connections in CNNs, their usage became a prominent feature in super-resolution. In this regard, very deep super-resolution (VDSR) @cite_58 incorporated a global skip-connection to enforce residual learning, using gradient clipping to avoid vanishing gradients. VDSR @cite_58 improved upon the previous CNN super-resolution methods. Inspired by VDSR @cite_58 , the same authors next presented DRCN @cite_28 , which shares parameters through a deep recursive structure. This sharing technique reduces the number of parameters significantly; however, its performance lags behind VDSR @cite_58 . Subsequently, the deep recursive residual network (DRRN) @cite_51 replicates basic skip-connections across different convolutional blocks to enforce residual learning through a multi-path architecture, aiming to reduce memory cost and computational complexity via parameter sharing. Further, Tai et al. @cite_24 introduced the persistent memory network (MemNet), which is composed of memory blocks stacked together recursively. Each block is densely connected to a gate unit, where each gate unit is a convolutional layer with kernel size 1 @math 1. The networks employing recursive connections perform comparably to one another.
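
The global residual learning of VDSR can be sketched as follows: the network predicts only a residual that is added back to the interpolated input. The 20-layer, 64-channel setting matches the one reported for VDSR, while the single-channel input is an assumption; during training, one would additionally clip gradients (e.g., with torch.nn.utils.clip_grad_norm_) in the spirit of VDSR's adjustable clipping.

import torch.nn as nn

class GlobalResidualNet(nn.Module):
    # Stack of 3x3 conv + ReLU layers; only the residual is learned.
    def __init__(self, depth=20, channels=64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)  # global skip-connection: add the residual back
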
|
{
"cite_N": [
"@cite_28",
"@cite_58",
"@cite_51",
"@cite_24"
],
"mid": [
"2214802144",
"2242218935",
"2747898905",
"2964125708"
],
"abstract": [
"We propose an image super-resolution method (SR) using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Albeit advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection. Our method outperforms previous methods by a large margin.",
"We present a highly accurate single-image superresolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification [19]. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (104 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable.",
"Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https: github.com tyshiwo DRRN_CVPR17.",
"Recently, very deep convolutional neural networks (CNNs) have been attracting considerable attention in image restoration. However, as the depth grows, the longterm dependency problem is rarely realized for these very deep models, which results in the prior states layers having little influence on the subsequent ones. Motivated by the fact that human thoughts have persistency, we propose a very deep persistent memory network (MemNet) that introduces a memory block, consisting of a recursive unit and a gate unit, to explicitly mine persistent memory through an adaptive learning process. The recursive unit learns multi-level representations of the current state under different receptive fields. The representations and the outputs from the previous memory blocks are concatenated and sent to the gate unit, which adaptively controls how much of the previous states should be reserved, and decides how much of the current state should be stored. We apply MemNet to three image restoration tasks, i.e., image denosing, super-resolution and JPEG deblocking. Comprehensive experiments demonstrate the necessity of the MemNet and its unanimous superiority on all three tasks over the state of the arts. Code is available at https: github.com tyshiwo MemNet."
]
}
|
1906.12021
|
2954113706
|
Super-Resolution convolutional neural networks have recently demonstrated high-quality restoration for single images. However, existing algorithms often require very deep architectures and long training times. Furthermore, current convolutional neural networks for super-resolution are unable to exploit features at multiple scales and weigh them equally, limiting their learning capability. In this exposition, we present a compact and accurate super-resolution algorithm namely, Densely Residual Laplacian Network (DRLN). The proposed network employs cascading residual on the residual structure to allow the flow of low-frequency information to focus on learning high and mid-level features. In addition, deep supervision is achieved via the densely concatenated residual blocks settings, which also helps in learning from high-level complex features. Moreover, we propose Laplacian attention to model the crucial features to learn the inter and intra-level dependencies between the feature maps. Furthermore, comprehensive quantitative and qualitative evaluations on low-resolution, noisy low-resolution, and real historical image benchmark datasets illustrate that our DRLN algorithm performs favorably against the state-of-the-art methods visually and accurately.
|
Lim et al. @cite_4 proposed the enhanced deep super-resolution (EDSR) network, which employs residual blocks and a long skip-connection. EDSR @cite_4 rescales the residual features by a factor of 0.1 to avoid exploding gradients, and it improved upon all previous methods by a significant margin. More recently, Ahn et al. @cite_15 proposed the cascading residual network (CARN), which employs a variant of the residual block with three convolutional layers, as compared to the customarily used two, together with cascading connections. CARN @cite_15 lags behind EDSR @cite_4 in terms of PSNR.
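
The residual block with 0.1 feature rescaling described for EDSR can be sketched as below; the 64-channel width is an illustrative assumption (the full EDSR uses much wider blocks).

import torch.nn as nn

class EDSRBlock(nn.Module):
    # Residual block without batch normalization; the residual branch is
    # scaled by 0.1 before being added back, which stabilizes training.
    def __init__(self, channels=64, res_scale=0.1):
        super().__init__()
        self.res_scale = res_scale
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.res_scale * self.body(x)
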
|
{
"cite_N": [
"@cite_15",
"@cite_4"
],
"mid": [
"2963645458",
"2963372104"
],
"abstract": [
"In recent years, deep learning methods have been successfully applied to single-image super-resolution tasks. Despite their great performances, deep learning methods cannot be easily applied to real-world applications due to the requirement of heavy computation. In this paper, we address this issue by proposing an accurate and lightweight deep network for image super-resolution. In detail, we design an architecture that implements a cascading mechanism upon a residual network. We also present variant models of the proposed cascading residual network to further improve efficiency. Our extensive experiments show that even with much fewer parameters and operations, our models achieve performance comparable to that of state-of-the-art methods.",
"Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge[26]."
]
}
|
1906.12021
|
2954113706
|
Super-Resolution convolutional neural networks have recently demonstrated high-quality restoration for single images. However, existing algorithms often require very deep architectures and long training times. Furthermore, current convolutional neural networks for super-resolution are unable to exploit features at multiple scales and weigh them equally, limiting their learning capability. In this exposition, we present a compact and accurate super-resolution algorithm namely, Densely Residual Laplacian Network (DRLN). The proposed network employs cascading residual on the residual structure to allow the flow of low-frequency information to focus on learning high and mid-level features. In addition, deep supervision is achieved via the densely concatenated residual blocks settings, which also helps in learning from high-level complex features. Moreover, we propose Laplacian attention to model the crucial features to learn the inter and intra-level dependencies between the feature maps. Furthermore, comprehensive quantitative and qualitative evaluations on low-resolution, noisy low-resolution, and real historical image benchmark datasets illustrate that our DRLN algorithm performs favorably against the state-of-the-art methods visually and accurately.
|
Driven by the success of the dense-connection architecture proposed in DenseNet @cite_55 by Huang et al. for image classification, super-resolution networks have adopted dense-connections to improve performance. As an example, SRDenseNet @cite_42 utilized dense-connections in which every convolutional layer in a block operates on the output of all prior convolutional layers. To upsample the features, SRDenseNet @cite_42 arranges the blocks sequentially, followed by deconvolutional layers at the end of the network. Likewise, Zhang et al. @cite_0 proposed the residual dense network (RDN) to learn local features from images via dense-connections. Furthermore, to avoid vanishing gradients and to ease the flow of information from low-level to high-level layers, RDN @cite_0 employed skip-connections. Lately, DDBPN @cite_5 aims to model a feedback mechanism with a feed-forward procedure; hence, a series of densely connected upsampling and downsampling layers is used as a single block. To predict the final super-resolved image, the outputs of the intermediate blocks are concatenated as well.
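
The dense-connection pattern shared by SRDenseNet and RDN can be sketched as follows: every layer consumes the concatenation of all earlier feature maps, and a 1x1 convolution performs RDN-style local feature fusion. The growth rate and depth are illustrative assumptions.

import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels=64, growth=32, n_layers=4):
        super().__init__()
        self.convs = nn.ModuleList()
        channels = in_channels
        for _ in range(n_layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(channels, growth, 3, padding=1),
                nn.ReLU(inplace=True)))
            channels += growth  # each new layer sees all earlier feature maps
        self.fuse = nn.Conv2d(channels, in_channels, 1)  # 1x1 local feature fusion

    def forward(self, x):
        features = [x]
        for conv in self.convs:
            features.append(conv(torch.cat(features, dim=1)))
        return self.fuse(torch.cat(features, dim=1))
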
|
{
"cite_N": [
"@cite_0",
"@cite_5",
"@cite_55",
"@cite_42"
],
"mid": [
"2964101377",
"",
"2963446712",
"2780544323"
],
"abstract": [
"A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively-low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose residual dense block (RDB) to extract abundant local features via dense connected convolutional layers. RDB further allows direct connections from the state of preceding RDB to all the layers of current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is then used to adaptively learn more effective features from preceding and current local features and stabilizes the training of wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods.",
"",
"Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https: github.com liuzhuang13 DenseNet.",
"Recent studies have shown that the performance of single-image super-resolution methods can be significantly boosted by using deep convolutional neural networks. In this study, we present a novel single-image super-resolution method by introducing dense skip connections in a very deep network. In the proposed network, the feature maps of each layer are propagated into all subsequent layers, providing an effective way to combine the low-level features and high-level features to boost the reconstruction performance. In addition, the dense skip connections in the network enable short paths to be built directly from the output to each layer, alleviating the vanishing-gradient problem of very deep networks. Moreover, deconvolution layers are integrated into the network to learn the upsampling filters and to speedup the reconstruction process. Further, the proposed method substantially reduces the number of parameters, enhancing the computational efficiency. We evaluate the proposed method using images from four benchmark datasets and set a new state of the art."
]
}
|
1906.12021
|
2954113706
|
Super-Resolution convolutional neural networks have recently demonstrated high-quality restoration for single images. However, existing algorithms often require very deep architectures and long training times. Furthermore, current convolutional neural networks for super-resolution are unable to exploit features at multiple scales and weigh them equally, limiting their learning capability. In this exposition, we present a compact and accurate super-resolution algorithm namely, Densely Residual Laplacian Network (DRLN). The proposed network employs cascading residual on the residual structure to allow the flow of low-frequency information to focus on learning high and mid-level features. In addition, deep supervision is achieved via the densely concatenated residual blocks settings, which also helps in learning from high-level complex features. Moreover, we propose Laplacian attention to model the crucial features to learn the inter and intra-level dependencies between the feature maps. Furthermore, comprehensive quantitative and qualitative evaluations on low-resolution, noisy low-resolution, and real historical image benchmark datasets illustrate that our DRLN algorithm performs favorably against the state-of-the-art methods visually and accurately.
|
To enhance the visual quality of super-resolved images, several methods employ Generative Adversarial Networks (GANs) @cite_22 @cite_26 to improve perceptual quality. The first notable work in this regard is SRResNet @cite_44 , where the generator comprises residual blocks similar to @cite_14 , with a skip-connection from the input to the output, while the discriminator is fully convolutional. SRResNet @cite_44 combines three different losses: perceptual, adversarial, and @math . Next, to create textures faithful to the original image, EnhanceNet @cite_37 added a texture-matching loss to the aforementioned losses. This loss matches the textures of low-resolution and high-resolution patches as Gram matrices computed from deep features via the @math .
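
The texture-matching idea of EnhanceNet can be sketched as a Gram-matrix loss on deep features; the normalization constant below is a common convention, not necessarily the paper's exact choice.

import torch

def gram_matrix(feat):
    # feat: (B, C, H, W) deep features -> (B, C, C) Gram matrices.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def texture_loss(feat_sr, feat_hr):
    # Match the texture statistics of super-resolved and ground-truth patches.
    return torch.mean((gram_matrix(feat_sr) - gram_matrix(feat_hr)) ** 2)
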
|
{
"cite_N": [
"@cite_37",
"@cite_14",
"@cite_26",
"@cite_22",
"@cite_44"
],
"mid": [
"2963037581",
"2194775991",
"2099471712",
"2173520492",
"2523714292"
],
"abstract": [
"Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack highfrequency textures and do not look natural despite yielding high PSNR values.,,We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixelaccurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.",
"Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method."
]
}
|
1906.12021
|
2954113706
|
Super-Resolution convolutional neural networks have recently demonstrated high-quality restoration for single images. However, existing algorithms often require very deep architectures and long training times. Furthermore, current convolutional neural networks for super-resolution are unable to exploit features at multiple scales and weigh them equally, limiting their learning capability. In this exposition, we present a compact and accurate super-resolution algorithm namely, Densely Residual Laplacian Network (DRLN). The proposed network employs cascading residual on the residual structure to allow the flow of low-frequency information to focus on learning high and mid-level features. In addition, deep supervision is achieved via the densely concatenated residual blocks settings, which also helps in learning from high-level complex features. Moreover, we propose Laplacian attention to model the crucial features to learn the inter and intra-level dependencies between the feature maps. Furthermore, comprehensive quantitative and qualitative evaluations on low-resolution, noisy low-resolution, and real historical image benchmark datasets illustrate that our DRLN algorithm performs favorably against the state-of-the-art methods visually and accurately.
|
Similar to @cite_37 , to generate more realistic super-resolved images, Park et al. @cite_36 proposed SRFeat, which utilizes an additional discriminator to help the generator. The results of SRFeat @cite_36 are perceptually better than those of @cite_37 . Inspired by the @cite_44 network, ESRGAN @cite_41 removes batch normalization and uses dense-connections between the convolutional layers of the same segment. A global skip-connection is incorporated for residual learning. Besides changing the elements of the generator, an enhanced discriminator, the relativistic GAN @cite_35 , is used instead of the traditional one. The performance of ESRGAN @cite_41 is the best among current super-resolution GAN algorithms. Furthermore, the GAN super-resolution models have significantly improved perceived quality compared to their CNN competitors.
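
The relativistic average discriminator adopted by ESRGAN can be sketched as below, following the RaGAN formulation of @cite_35 ; the generator side simply swaps the labels. This is a sketch of the loss functions only, not the full training loop.

import torch
import torch.nn.functional as F

def ragan_d_loss(real_logits, fake_logits):
    # Is real data more realistic than the average fake, and vice versa?
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    return (F.binary_cross_entropy_with_logits(real_rel, torch.ones_like(real_rel))
            + F.binary_cross_entropy_with_logits(fake_rel, torch.zeros_like(fake_rel)))

def ragan_g_loss(real_logits, fake_logits):
    # Generator objective: the same relativistic terms with swapped labels.
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    return (F.binary_cross_entropy_with_logits(real_rel, torch.zeros_like(real_rel))
            + F.binary_cross_entropy_with_logits(fake_rel, torch.ones_like(fake_rel)))
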
|
{
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_36",
"@cite_41",
"@cite_44"
],
"mid": [
"2810518847",
"2963037581",
"2895240252",
"2891158090",
"2523714292"
],
"abstract": [
"In standard generative adversarial network (SGAN), the discriminator estimates the probability that the input data is real. The generator is trained to increase the probability that fake data is real. We argue that it should also simultaneously decrease the probability that real data is real because 1) this would account for a priori knowledge that half of the data in the mini-batch is fake, 2) this would be observed with divergence minimization, and 3) in optimal settings, SGAN would be equivalent to integral probability metric (IPM) GANs. We show that this property can be induced by using a relativistic discriminator which estimate the probability that the given real data is more realistic than a randomly sampled fake data. We also present a variant in which the discriminator estimate the probability that the given real data is more realistic than fake data, on average. We generalize both approaches to non-standard GAN loss functions and we refer to them respectively as Relativistic GANs (RGANs) and Relativistic average GANs (RaGANs). We show that IPM-based GANs are a subset of RGANs which use the identity function. Empirically, we observe that 1) RGANs and RaGANs are significantly more stable and generate higher quality data samples than their non-relativistic counterparts, 2) Standard RaGAN with gradient penalty generate data of better quality than WGAN-GP while only requiring a single discriminator update per generator update (reducing the time taken for reaching the state-of-the-art by 400 ), and 3) RaGANs are able to generate plausible high resolutions images (256x256) from a very small sample (N=2011), while GAN and LSGAN cannot; these images are of significantly better quality than the ones generated by WGAN-GP and SGAN with spectral normalization.",
"Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack highfrequency textures and do not look natural despite yielding high PSNR values.,,We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixelaccurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks.",
"Generative adversarial networks (GANs) have recently been adopted to single image super-resolution (SISR) and showed impressive results with realistically synthesized high-frequency textures. However, the results of such GAN-based approaches tend to include less meaningful high-frequency noise that is irrelevant to the input image. In this paper, we propose a novel GAN-based SISR method that overcomes the limitation and produces more realistic results by attaching an additional discriminator that works in the feature domain. Our additional discriminator encourages the generator to produce structural high-frequency features rather than noisy artifacts as it distinguishes synthetic and real images in terms of features. We also design a new generator that utilizes long-range skip connections so that information between distant layers can be transferred more effectively. Experiments show that our method achieves the state-of-the-art performance in terms of both PSNR and perceptual quality compared to recent GAN-based methods.",
"The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied with unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN – network architecture, adversarial loss and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR Challenge (region 3) with the best perceptual index. The code is available at https: github.com xinntao ESRGAN.",
"Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method."
]
}
|
1906.12021
|
2954113706
|
Super-Resolution convolutional neural networks have recently demonstrated high-quality restoration for single images. However, existing algorithms often require very deep architectures and long training times. Furthermore, current convolutional neural networks for super-resolution are unable to exploit features at multiple scales and weigh them equally, limiting their learning capability. In this exposition, we present a compact and accurate super-resolution algorithm namely, Densely Residual Laplacian Network (DRLN). The proposed network employs cascading residual on the residual structure to allow the flow of low-frequency information to focus on learning high and mid-level features. In addition, deep supervision is achieved via the densely concatenated residual blocks settings, which also helps in learning from high-level complex features. Moreover, we propose Laplacian attention to model the crucial features to learn the inter and intra-level dependencies between the feature maps. Furthermore, comprehensive quantitative and qualitative evaluations on low-resolution, noisy low-resolution, and real historical image benchmark datasets illustrate that our DRLN algorithm performs favorably against the state-of-the-art methods visually and accurately.
|
Visual attention @cite_19 was primarily employed in image classification. This concept was brought to image super-resolution by RCAN @cite_20 , which uses a channel attention mechanism for modeling the inter-channel dependencies, coupled with stacked groups of residual blocks. The PSNR values of RCAN @cite_20 are the best among all the algorithms mentioned earlier. In parallel to RCAN @cite_20 , Kim et al. @cite_32 proposed a dual attention mechanism, namely the super-resolution residual attention module (SRRAM). The depth of SRRAM @cite_32 is comparatively smaller than that of RCAN @cite_20 , and it lags behind RCAN @cite_20 in PSNR. On the other hand, our method improves upon RCAN @cite_20 both visually and numerically by exploiting densely connected residual blocks followed by multi-scale attention, using different levels of skip and cascading connections.
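
The channel attention described for RCAN can be sketched as a squeeze-and-gate module: global average pooling summarizes each channel, and two 1x1 convolutions produce per-channel gates. The reduction ratio of 16 is a commonly reported choice, assumed here for illustration.

import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze: global context
            nn.Conv2d(channels, channels // reduction, 1),  # channel bottleneck
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # channel excitation
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)  # rescale each channel by its learned gate
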
|
{
"cite_N": [
"@cite_19",
"@cite_32",
"@cite_20"
],
"mid": [
"2147527908",
"2903251150",
""
],
"abstract": [
"Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.",
"Attention mechanisms are a design trend of deep neural networks that stands out in various computer vision tasks. Recently, some works have attempted to apply attention mechanisms to single image super-resolution (SR) tasks. However, they apply the mechanisms to SR in the same or similar ways used for high-level computer vision problems without much consideration of the different nature between SR and other problems. In this paper, we propose a new attention method, which is composed of new channel-wise and spatial attention mechanisms optimized for SR and a new fused attention to combine them. Based on this, we propose a new residual attention module (RAM) and a SR network using RAM (SRRAM). We provide in-depth experimental analysis of different attention mechanisms in SR. It is shown that the proposed method can construct both deep and lightweight SR networks showing improved performance in comparison to existing state-of-the-art methods.",
""
]
}
|
1906.11887
|
2954533323
|
Deep learning models have a large number of free parameters that need to be calculated by effective training of the models on a great deal of training data to improve their generalization performance. However, data obtaining and labeling is expensive in practice. Data augmentation is one of the methods to alleviate this problem. In this paper, we conduct a preliminary study on how three variables (augmentation method, augmentation rate, and size of basic dataset per label) can affect the accuracy of deep learning for image classification. The study provides some guidelines: (1) it is better to use transformations that alter the geometry of the images rather than those that only alter lighting and color; (2) a 2-3 times augmentation rate is good enough for training; (3) the smaller the amount of data, the more obvious the contribution of augmentation.
|
The image processing methods are implemented with PIL; each accepts an image as input and outputs a processed image @cite_9 . The methods include ShearX/Y, TranslateX/Y, Rotate, AutoContrast, Invert, Equalize, Solarize, Posterize, Contrast, Color, Brightness, and Sharpness, as shown in Figure .
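
A short sketch of how these operations map onto the PIL API; the file name is hypothetical and the enhancement factors are arbitrary illustrative values.

from PIL import Image, ImageEnhance, ImageOps

img = Image.open("example.jpg").convert("RGB")  # hypothetical input file

rotated      = img.rotate(15)                             # Rotate
contrast_fix = ImageOps.autocontrast(img)                 # AutoContrast
inverted     = ImageOps.invert(img)                       # Invert
equalized    = ImageOps.equalize(img)                     # Equalize
solarized    = ImageOps.solarize(img, threshold=128)      # Solarize
posterized   = ImageOps.posterize(img, bits=4)            # Posterize
contrasted   = ImageEnhance.Contrast(img).enhance(1.5)    # Contrast
recolored    = ImageEnhance.Color(img).enhance(0.8)       # Color
brightened   = ImageEnhance.Brightness(img).enhance(1.2)  # Brightness
sharpened    = ImageEnhance.Sharpness(img).enhance(2.0)   # Sharpness
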
|
{
"cite_N": [
"@cite_9"
],
"mid": [
"2804047946"
],
"abstract": [
"Data augmentation is an effective technique for improving the accuracy of modern image classifiers. However, current data augmentation implementations are manually designed. In this paper, we describe a simple procedure called AutoAugment to automatically search for improved data augmentation policies. In our implementation, we have designed a search space where a policy consists of many sub-policies, one of which is randomly chosen for each image in each mini-batch. A sub-policy consists of two operations, each operation being an image processing function such as translation, rotation, or shearing, and the probabilities and magnitudes with which the functions are applied. We use a search algorithm to find the best policy such that the neural network yields the highest validation accuracy on a target dataset. Our method achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data). On ImageNet, we attain a Top-1 accuracy of 83.5 which is 0.4 better than the previous record of 83.1 . On CIFAR-10, we achieve an error rate of 1.5 , which is 0.6 better than the previous state-of-the-art. Augmentation policies we find are transferable between datasets. The policy learned on ImageNet transfers well to achieve significant improvements on other datasets, such as Oxford Flowers, Caltech-101, Oxford-IIT Pets, FGVC Aircraft, and Stanford Cars."
]
}
|
1906.11887
|
2954533323
|
Deep learning models have a large number of free parameters that need to be calculated by effective training of the models on a great deal of training data to improve their generalization performance. However, data obtaining and labeling is expensive in practice. Data augmentation is one of the methods to alleviate this problem. In this paper, we conduct a preliminary study on how three variables (augmentation method, augmentation rate, and size of basic dataset per label) can affect the accuracy of deep learning for image classification. The study provides some guidelines: (1) it is better to use transformations that alter the geometry of the images rather than those that only alter lighting and color; (2) a 2-3 times augmentation rate is good enough for training; (3) the smaller the amount of data, the more obvious the contribution of augmentation.
|
Various geometric and photometric schemes are evaluated on a coarse-grained dataset using a relatively simple CNN @cite_7 . The experimental results indicate that, under these circumstances, cropping in geometric augmentation significantly increases CNN task performance.
|
{
"cite_N": [
"@cite_7"
],
"mid": [
"2746808752"
],
"abstract": [
"Deep artificial neural networks require a large corpus of training data in order to effectively learn, where collection of such training data is often expensive and laborious. Data augmentation overcomes this issue by artificially inflating the training set with label preserving transformations. Recently there has been extensive use of generic data augmentation to improve Convolutional Neural Network (CNN) task performance. This study benchmarks various popular data augmentation schemes to allow researchers to make informed decisions as to which training methods are most appropriate for their data sets. Various geometric and photometric schemes are evaluated on a coarse-grained data set using a relatively simple CNN. Experimental results, run using 4-fold cross-validation and reported in terms of Top-1 and Top-5 accuracy, indicate that cropping in geometric augmentation significantly increases CNN task performance."
]
}
|
1906.11887
|
2954533323
|
Deep learning models have a large number of free parameters that need to be calculated by effective training of the models on a great deal of training data to improve their generalization performance. However, data obtaining and labeling is expensive in practice. Data augmentation is one of the methods to alleviate this problem. In this paper, we conduct a preliminary study on how three variables (augmentation method, augmentation rate, and size of basic dataset per label) can affect the accuracy of deep learning for image classification. The study provides some guidelines: (1) it is better to use transformations that alter the geometry of the images rather than those that only alter lighting and color; (2) a 2-3 times augmentation rate is good enough for training; (3) the smaller the amount of data, the more obvious the contribution of augmentation.
|
Affine transformation, a 2D geometric transform, is based on reflecting, scaling, translating, and rotating the image by different angles. Affine augmentation is very common and is also widely used to correct geometric distortion introduced by perspective @cite_11 .
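
A hedged sketch of affine augmentation with PIL; the file name and coefficients are illustrative. Note that PIL's AFFINE transform expects the inverse mapping from output to input coordinates.

from PIL import Image

img = Image.open("example.jpg")  # hypothetical input file

# Coefficients (a, b, c, d, e, f) map output pixels back to source pixels:
#   x_src = a*x_dst + b*y_dst + c,  y_src = d*x_dst + e*y_dst + f
sheared    = img.transform(img.size, Image.AFFINE, (1, 0.2, 0, 0, 1, 0))   # shear in x
translated = img.transform(img.size, Image.AFFINE, (1, 0, -10, 0, 1, -5))  # shift by (10, 5)
reflected  = img.transpose(Image.FLIP_LEFT_RIGHT)                          # reflection
rotated    = img.rotate(30, expand=True)                                   # rotation
scaled     = img.resize((img.width // 2, img.height // 2))                 # scaling
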
|
{
"cite_N": [
"@cite_11"
],
"mid": [
"1874076486"
],
"abstract": [
"Affine image transformations are performed in an interleaved manner, whereby coordinate transformations and intensity calculations are alternately performed incrementally on small portions of an image. The pixels are processed in rows such that after coordinates of a first pixel are determined for reference, each pixel in a row, and then pixels in vertically adjacent rows, are processed relative to the coordinates of the previously processed adjacent pixels. After coordinate transformation to produce affine translation, rotation, skew, and or scaling, intermediate metapixels are vertically split and shifted to eliminate holes and overlaps. Intensity values of output metapixels are calculated as being proportional to the sum of scaled portions of the intermediate metapixels which cover the output pixels respectively."
]
}
|
1906.11887
|
2954533323
|
Deep learning models have a large number of free parameters that need to be calculated by effective training of the models on a great deal of training data to improve their generalization performance. However, data obtaining and labeling is expensive in practice. Data augmentation is one of the methods to alleviate this problem. In this paper, we conduct a preliminary study on how three variables (augmentation method, augmentation rate, and size of basic dataset per label) can affect the accuracy of deep learning for image classification. The study provides some guidelines: (1) it is better to use transformations that alter the geometry of the images rather than those that only alter lighting and color; (2) a 2-3 times augmentation rate is good enough for training; (3) the smaller the amount of data, the more obvious the contribution of augmentation.
|
Cutout was originally conceived as a targeted method for removing visual features with high activations in the later layers of a convolutional neural network (CNN). However, the results in @cite_2 @cite_8 show that randomly selecting a rectangular region in an image and erasing its pixels with random values improves the overall performance of CNNs.
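For concreteness, the following is a hedged NumPy sketch of Random Erasing / Cutout-style augmentation, not the reference implementation of @cite_2 or @cite_8 ; the function name random_erase and the sampling ranges are assumptions chosen to mirror the description above.

import numpy as np

def random_erase(img, area_frac=(0.02, 0.2), rng=None):
    # Erase a random rectangle with uniform noise (Cutout / Random Erasing
    # style). img is an HxWxC float array with values in [0, 1].
    rng = rng if rng is not None else np.random.default_rng()
    h, w, c = img.shape
    target = rng.uniform(*area_frac) * h * w   # erased area in pixels
    aspect = rng.uniform(0.3, 3.3)             # rectangle aspect ratio
    eh = min(int(round(np.sqrt(target * aspect))), h)
    ew = min(int(round(np.sqrt(target / aspect))), w)
    y = rng.integers(0, h - eh + 1)
    x = rng.integers(0, w - ew + 1)
    out = img.copy()
    out[y:y + eh, x:x + ew] = rng.uniform(size=(eh, ew, c))
    return out

augmented = random_erase(np.random.rand(32, 32, 3))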
|
{
"cite_N": [
"@cite_8",
"@cite_2"
],
"mid": [
"2747685395",
"2746314669"
],
"abstract": [
"In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter learning free, easy to implement, and can be integrated with most of the CNN-based recognition models. Albeit simple, Random Erasing is complementary to commonly used data augmentation techniques such as random cropping and flipping, and yields consistent improvement over strong baselines in image classification, object detection and person re-identification. Code is available at: this https URL",
"Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR-10, CIFAR-100, and SVHN datasets, yielding new state-of-the-art results of 2.56 , 15.20 , and 1.30 test error respectively. Code is available at this https URL"
]
}
|
1906.11887
|
2954533323
|
Deep learning models have a large number of free parameters that need to be calculated by effective training of the models on a great deal of training data to improve their generalization performance. However, data obtaining and labeling is expensive in practice. Data augmentation is one of the methods to alleviate this problem. In this paper, we conduct a preliminary study on how three variables (augmentation method, augmentation rate and size of basic dataset per label) can affect the accuracy of deep learning for image classification. The study provides some guidelines: (1) it is better to use transformations that alter the geometry of the images rather than those that only alter lighting and color; (2) a 2-3 times augmentation rate is good enough for training; (3) the smaller the amount of data, the more obvious the contribution of augmentation.
|
Histogram equalization has been introduced as a data augmentation method @cite_6 . Histogram equalization, solarization, and adjusting image color balance are common methods in digital image processing. These methods can simulate the problems encountered when photos are taken improperly, such as harsh lighting combined with auto-white-balance, which produces over- or under-exposed images.
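A minimal sketch of histogram equalization on a uint8 grayscale image follows; this is textbook CDF remapping rather than the exact procedure of @cite_6 , and it assumes a non-constant image.

import numpy as np

def hist_equalize(img_u8):
    # Remap intensities through the normalized cumulative histogram (CDF);
    # assumes a non-constant uint8 grayscale image.
    hist = np.bincount(img_u8.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = np.round(255 * cdf).astype(np.uint8)  # per-intensity lookup table
    return lut[img_u8]

equalized = hist_equalize(np.random.randint(0, 256, (64, 64), dtype=np.uint8))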
|
{
"cite_N": [
"@cite_6"
],
"mid": [
"2563812708"
],
"abstract": [
"In this research, we apply a Convolutional Neural Network (CNN) to periocular authentication on two datasets. To increase accuracy, we try a variety of data augmentation techniques and compare their relative benefits. We find that, with augmentation appropriate to the dataset, CNN accuracy may be comparable to or significantly higher than traditional methods like Local Binary Pattern Histograms (LBPH) and Eigenface."
]
}
|
1906.11979
|
2954211634
|
From TV news to Google StreetView, face obscuration has been used for privacy protection. Due to recent advances in the field of deep learning, obscuration methods such as Gaussian blurring and pixelation are not guaranteed to conceal identity. In this paper, we propose a utility-preserving generative model, UP-GAN, that is able to provide an effective face obscuration, while preserving facial utility. By utility-preserving we mean preserving facial features that do not reveal identity, such as age, gender, skin tone, pose, and expression. We show that the proposed method achieves the best performance in terms of obscuration and utility preservation.
|
Standard approaches, such as pixelation and Gaussian blurring, achieve good obscuration performance in terms of human perception. However, @cite_21 proposed a deep learning method with a simple structure that is able to defeat these obscuration techniques. To provide better obscuration performance, a variety of approaches have been proposed that balance removing identifiable information against preserving utility information.
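To make the baseline concrete, below is a small NumPy sketch of mosaic pixelation, one of the standard obscurations named above; the function name pixelate and the block size are illustrative, not drawn from any cited system.

import numpy as np

def pixelate(img, block=8):
    # Mosaic obscuration: replace every block x block tile with its mean.
    h, w = img.shape[:2]
    H, W = h - h % block, w - w % block  # crop to whole tiles for simplicity
    tiles = img[:H, :W].reshape(H // block, block, W // block, block, -1)
    means = tiles.mean(axis=(1, 3), keepdims=True)
    return np.broadcast_to(means, tiles.shape).reshape(H, W, -1)

obscured = pixelate(np.random.rand(64, 64, 3))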
|
{
"cite_N": [
"@cite_21"
],
"mid": [
"2516672586"
],
"abstract": [
"We demonstrate that modern image recognition methods based on artificial neural networks can recover hidden information from images protected by various forms of obfuscation. The obfuscation techniques considered in this paper are mosaicing (also known as pixelation), blurring (as used by YouTube), and P3, a recently proposed system for privacy-preserving photo sharing that encrypts the significant JPEG coefficients to make images unrecognizable by humans. We empirically show how to train artificial neural networks to successfully identify faces and recognize objects and handwritten digits even if the images are protected using any of the above obfuscation techniques."
]
}
|
1906.11979
|
2954211634
|
From TV news to Google StreetView, face obscuration has been used for privacy protection. Due to recent advances in the field of deep learning, obscuration methods such as Gaussian blurring and pixelation are not guaranteed to conceal identity. In this paper, we propose a utility-preserving generative model, UP-GAN, that is able to provide an effective face obscuration, while preserving facial utility. By utility-preserving we mean preserving facial features that do not reveal identity, such as age, gender, skin tone, pose, and expression. We show that the proposed method achieves the best performance in terms of obscuration and utility preservation.
|
This family of approaches first groups faces into clusters based on non-identifiable information such as expression, and then generates a surrogate face for each cluster. These methods can guarantee that any face recognition system cannot do better than @math in recognizing who a particular image corresponds to @cite_0 , where @math is the minimum number of faces among all clusters. This property is also known as @math -anonymity @cite_2 . In @cite_11 and @cite_0 , they simply compute the average face for each cluster. Therefore, their obscured faces are blurry and cannot handle various facial poses. In @cite_20 , the use of an active appearance model @cite_18 to generate more realistic surrogate faces is presented. A generative neural network, @math -same-net, that directly generates faces based on the cluster attributes is described in @cite_19 . These two methods are able to produce more realistic obscured faces with the property of @math -anonymity, but cannot handle different poses.
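The k-Same-Pixel idea can be sketched in a few lines; the version below is a simplification that groups faces by a random permutation rather than by visual similarity, assumes aligned and flattened faces whose count is a multiple of k, and uses illustrative names throughout.

import numpy as np

def k_same_pixel(faces, k=5, rng=None):
    # Replace each face with the average of its cluster of size k, so a
    # recognizer can do no better than 1/k at naming the right identity.
    # Real k-Same clusters by similarity; a random permutation stands in
    # here, and len(faces) is assumed to be a multiple of k.
    rng = rng if rng is not None else np.random.default_rng()
    order = rng.permutation(len(faces))
    out = np.empty_like(faces)
    for start in range(0, len(faces), k):
        idx = order[start:start + k]
        out[idx] = faces[idx].mean(axis=0)  # surrogate = cluster mean
    return out

faces = np.random.rand(20, 64 * 64)  # 20 aligned, flattened face images
deidentified = k_same_pixel(faces, k=5)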
|
{
"cite_N": [
"@cite_18",
"@cite_0",
"@cite_19",
"@cite_2",
"@cite_20",
"@cite_11"
],
"mid": [
"2152826865",
"",
"2783131052",
"56293434",
"2003921219",
"2103958416"
],
"abstract": [
"We describe a new method of matching statistical models of appearance to images. A set of model parameters control modes of shape and gray-level variation learned from a training set. We construct an efficient iterative matching algorithm by learning the relationship between perturbations in the model parameters and the induced image errors.",
"",
"Image and video data are today being shared between government entities and other relevant stakeholders on a regular basis and require careful handling of the personal information contained therein. A popular approach to ensure privacy protection in such data is the use of deidentification techniques, which aim at concealing the identity of individuals in the imagery while still preserving certain aspects of the data after deidentification. In this work, we propose a novel approach towards face deidentification, called k-Same-Net, which combines recent Generative Neural Networks (GNNs) with the well-known k-Anonymitymechanism and provides formal guarantees regarding privacy protection on a closed set of identities. Our GNN is able to generate synthetic surrogate face images for deidentification by seamlessly combining features of identities used to train the GNN model. Furthermore, it allows us to control the image-generation process with a small set of appearance-related parameters that can be used to alter specific aspects (e.g., facial expressions, age, gender) of the synthesized surrogate images. We demonstrate the feasibility of k-Same-Net in comprehensive experiments on the XM2VTS and CK+ datasets. We evaluate the efficacy of the proposed approach through reidentification experiments with recent recognition models and compare our results with competing deidentification techniques from the literature. We also present facial expression recognition experiments to demonstrate the utility-preservation capabilities of k-Same-Net. Our experimental results suggest that k-Same-Net is a viable option for facial deidentification that exhibits several desirable characteristics when compared to existing solutions in this area.",
"",
"Face de-identification, the process of preventing a person’ identity from being connected with personal information, is an important privacy protection tool in multimedia data processing. With the advance of face detection algorithms, a natural solution is to blur or block facial regions in visual data so as to obscure identity information. Such solutions however often destroy privacy-insensitive information and hence limit the data utility, e.g., gender and age information. In this paper we address the de-identification problem by proposing a simple yet effective framework, named GARP-Face, that balances utility preservation in face deidentification. In particular, we use modern facial analysis technologies to determine the Gender, Age, and Race attributes of facial images, and Preserving these attributes by seeking corresponding representatives constructed through a gallery dataset. We evaluate the proposed approach using the MORPH dataset in comparison with several stateof-the-art face de-identification solutions. The results show that our method outperforms previous solutions in preserving data utility while achieving similar degree of privacy protection.",
"In the context of sharing video surveillance data, a significant threat to privacy is face recognition software, which can automatically identify known people, such as from a database of drivers' license photos, and thereby track people regardless of suspicion. This paper introduces an algorithm to protect the privacy of individuals in video surveillance data by deidentifying faces such that many facial characteristics remain but the face cannot be reliably recognized. A trivial solution to deidentifying faces involves blacking out each face. This thwarts any possible face recognition, but because all facial details are obscured, the result is of limited use. Many ad hoc attempts, such as covering eyes, fail to thwart face recognition because of the robustness of face recognition methods. This work presents a new privacy-enabling algorithm, named k-Same, that guarantees face recognition software cannot reliably recognize deidentified faces, even though many facial details are preserved. The algorithm determines similarity between faces based on a distance metric and creates new faces by averaging image components, which may be the original image pixels (k-Same-Pixel) or eigenvectors (k-Same-Eigen). Results are presented on a standard collection of real face images with varying k."
]
}
|
1906.11979
|
2954211634
|
From TV news to Google StreetView, face obscuration has been used for privacy protection. Due to recent advances in the field of deep learning, obscuration methods such as Gaussian blurring and pixelation are not guaranteed to conceal identity. In this paper, we propose a utility-preserving generative model, UP-GAN, that is able to provide an effective face obscuration, while preserving facial utility. By utility-preserving we mean preserving facial features that do not reveal identity, such as age, gender, skin tone, pose, and expression. We show that the proposed method achieves the best performance in terms of obscuration and utility preservation.
|
Generative adversarial network (GAN) @cite_6 methods can provide more realistic faces. Their discriminator is designed to guide the generator by distinguishing real faces from generated faces. In @cite_3 , a model that produces obscured faces directly from original faces based on a conditional GAN @cite_10 is proposed. It uses a contrastive loss to enforce that the obscured face is different from the input face. However, since the original faces must be input directly, the obscuration performance is not guaranteed. @cite_9 presents a two-stage model that is able to generate an obscured face without the original identifiable facial information, which prevents the leakage of identifiable information directly from faces. GANs have also been used for face manipulation in videos. These techniques aim to create believable face swaps without tampering traces, for example by altering age @cite_15 or skin color @cite_7 . To prevent scenarios where these videos are used to create political distress or fake terrorism events, @cite_24 designs a deep learning model that is able to detect altered frames using both spatial and temporal information.
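As a rough illustration of how such objectives combine, here is a hedged PyTorch sketch of a generator loss with an adversarial term plus a contrastive hinge pushing the output away from the input face; the margin value and all names are hypothetical, and the sketch is not the exact loss of @cite_3 or @cite_9 .

import torch
import torch.nn.functional as F

def obscuration_generator_loss(d_fake_logits, fake, original, margin=1.0):
    # Non-saturating adversarial term: make the discriminator call the
    # generated face real.
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    # Contrastive hinge: penalize generated faces that stay within `margin`
    # of the identifiable input face (margin is a hypothetical value).
    dist = F.pairwise_distance(fake.flatten(1), original.flatten(1)).mean()
    return adv + F.relu(margin - dist)

loss = obscuration_generator_loss(torch.randn(4, 1),
                                  torch.rand(4, 3, 64, 64),
                                  torch.rand(4, 3, 64, 64))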
|
{
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_3",
"@cite_6",
"@cite_24",
"@cite_15",
"@cite_10"
],
"mid": [
"2949785681",
"2769823506",
"",
"",
"2911424785",
"2587706859",
"2125389028"
],
"abstract": [
"We are interested in attribute-guided face generation: given a low-res face input image, an attribute vector that can be extracted from a high-res image (attribute image), our new method generates a high-res face image for the low-res input that satisfies the given attributes. To address this problem, we condition the CycleGAN and propose conditional CycleGAN, which is designed to 1) handle unpaired training data because the training low high-res and high-res attribute images may not necessarily align with each other, and to 2) allow easy control of the appearance of the generated face via the input attributes. We demonstrate impressive results on the attribute-guided conditional CycleGAN, which can synthesize realistic face images with appearance easily controlled by user-supplied attributes (e.g., gender, makeup, hair color, eyeglasses). Using the attribute image as identity to produce the corresponding conditional vector and by incorporating a face verification network, the attribute-guided network becomes the identity-guided conditional CycleGAN which produces impressive and interesting results on identity transfer. We demonstrate three applications on identity-guided conditional CycleGAN: identity-preserving face superresolution, face swapping, and frontal face generation, which consistently show the advantage of our new method.",
"As more and more personal photos are shared online, being able to obfuscate identities in such photos is becoming a necessity for privacy protection. People have largely resorted to blacking out or blurring head regions, but they result in poor user experience while being surprisingly ineffective against state of the art person recognizers [17]. In this work, we propose a novel head inpainting obfuscation technique. Generating a realistic head inpainting in social media photos is challenging because subjects appear in diverse activities and head orientations. We thus split the task into two sub-tasks: (1) facial landmark generation from image context (e.g. body pose) for seamless hypothesis of sensible head pose, and (2) facial landmark conditioned head inpainting. We verify that our inpainting method generates realistic person images, while achieving superior obfuscation performance against automatic person recognizers.",
"",
"",
"In recent months a machine learning based free software tool has made it easy to create believable face swaps in videos that leaves few traces of manipulation, in what are known as \"deepfake\" videos. Scenarios where these realistic fake videos are used to create political distress, blackmail someone or fake terrorism events are easily envisioned. This paper proposes a temporal-aware pipeline to automatically detect deepfake videos. Our system uses a convolutional neural network (CNN) to extract frame-level features. These features are then used to train a recurrent neural network (RNN) that learns to classify if a video has been subject to manipulation or not. We evaluate our method against a large set of deepfake videos collected from multiple video websites. We show how our system can achieve competitive results in this task while using a simple architecture.",
"It has been recently shown that Generative Adversarial Networks (GANs) can produce synthetic images of exceptional visual fidelity. In this work, we propose the first GAN-based method for automatic face aging. Contrary to previous works employing GANs for altering of facial attributes, we make a particular emphasize on preserving the original person's identity in the aged version of his her face. To this end, we introduce a novel approach for “Identity-Preserving” optimization of GAN's latent vectors. The objective evaluation of the resulting aged and rejuvenated face images by the state-of-the-art face recognition and age estimation solutions demonstrate the high potential of the proposed method.",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels."
]
}
|
1906.12188
|
2954234862
|
Generating textual descriptions for images has been an attractive problem for the computer vision and natural language processing researchers in recent years. Dozens of models based on deep learning have been proposed to solve this problem. The existing approaches are based on neural encoder-decoder structures equipped with the attention mechanism. These methods strive to train decoders to minimize the log likelihood of the next word in a sentence given the previous ones, which results in the sparsity of the output space. In this work, we propose a new approach to train decoders to regress the word embedding of the next word with respect to the previous ones instead of minimizing the log likelihood. The proposed method is able to learn and extract long-term information and can generate longer fine-grained captions without introducing any external memory cell. Furthermore, decoders trained by the proposed technique can take the importance of the generated words into consideration while generating captions. In addition, a novel semantic attention mechanism is proposed that guides attention points through the image, taking the meaning of the previously generated word into account. We evaluate the proposed approach with the MS-COCO dataset. The proposed model outperformed the state of the art models especially in generating longer captions. It achieved a CIDEr score equal to 125.0 and a BLEU-4 score equal to 50.5, while the best scores of the state of the art models are 117.1 and 48.0, respectively.
|
Approaches to image-caption alignment differ with respect to the structure of the CNNs and RNNs used. @cite_18 proposed a deep bidirectional alignment between images and their captions. In this work, the input image is first fragmented into 19 sub-regions using the R-CNN method @cite_22 . In the next step, a CNN @cite_0 is applied to the whole image and the 19 extracted sub-regions, and a 4096-dimensional feature vector is extracted for each of the 20 resulting images. Meanwhile, a dependency tree of each caption in the training set is extracted, and its relationships are used to identify sentence fragments. A simple linear transformation is applied to each sentence fragment to generate a 4096-dimensional meaning vector. An alignment model is then trained to maximize the similarity between the related parts of the image and the fragments of the sentence; a figure in the original paper illustrates the system architecture @cite_18 .
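The structured max-margin objective behind such alignment models can be illustrated with a simplified image-level bidirectional ranking loss (the fragment-level version in @cite_18 is more involved); the sketch below assumes a square similarity matrix with matching pairs on the diagonal.

import torch

def bidirectional_ranking_loss(sim, margin=1.0):
    # sim[i, j] scores image i against sentence j; matching pairs sit on the
    # diagonal and must beat every mismatched pair by `margin`, in both the
    # image-to-sentence and sentence-to-image directions.
    n = sim.size(0)
    pos = sim.diag().unsqueeze(1)                        # (n, 1) matched scores
    cost_s = torch.clamp(margin + sim - pos, min=0)      # wrong sentences
    cost_i = torch.clamp(margin + sim - pos.t(), min=0)  # wrong images
    off_diag = 1.0 - torch.eye(n)
    return ((cost_s + cost_i) * off_diag).sum() / n

loss = bidirectional_ranking_loss(torch.randn(8, 8))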
|
{
"cite_N": [
"@cite_0",
"@cite_18",
"@cite_22"
],
"mid": [
"",
"2112912048",
"2102605133"
],
"abstract": [
"",
"We introduce a model for bidirectional retrieval of images and sentences through a deep, multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. We then introduce a structured max-margin objective that allows our model to explicitly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions for the image-sentence retrieval task since the inferred inter-modal alignment of fragments is explicit.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn."
]
}
|
1906.12188
|
2954234862
|
Generating textual descriptions for images has been an attractive problem for the computer vision and natural language processing researchers in recent years. Dozens of models based on deep learning have been proposed to solve this problem. The existing approaches are based on neural encoder-decoder structures equipped with the attention mechanism. These methods strive to train decoders to minimize the log likelihood of the next word in a sentence given the previous ones, which results in the sparsity of the output space. In this work, we propose a new approach to train decoders to regress the word embedding of the next word with respect to the previous ones instead of minimizing the log likelihood. The proposed method is able to learn and extract long-term information and can generate longer fine-grained captions without introducing any external memory cell. Furthermore, decoders trained by the proposed technique can take the importance of the generated words into consideration while generating captions. In addition, a novel semantic attention mechanism is proposed that guides attention points through the image, taking the meaning of the previously generated word into account. We evaluate the proposed approach with the MS-COCO dataset. The proposed model outperformed the state of the art models especially in generating longer captions. It achieved a CIDEr score equal to 125.0 and a BLEU-4 score equal to 50.5, while the best scores of the state of the art models are 117.1 and 48.0, respectively.
|
The work in @cite_19 also used a bidirectional RNN as a word-embedding model, trained with the captions provided in the training set.
|
{
"cite_N": [
"@cite_19"
],
"mid": [
"2951805548"
],
"abstract": [
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations."
]
}
|
1906.12188
|
2954234862
|
Generating textual descriptions for images has been an attractive problem for the computer vision and natural language processing researchers in recent years. Dozens of models based on deep learning have been proposed to solve this problem. The existing approaches are based on neural encoder-decoder structures equipped with the attention mechanism. These methods strive to train decoders to minimize the log likelihood of the next word in a sentence given the previous ones, which results in the sparsity of the output space. In this work, we propose a new approach to train decoders to regress the word embedding of the next word with respect to the previous ones instead of minimizing the log likelihood. The proposed method is able to learn and extract long-term information and can generate longer fine-grained captions without introducing any external memory cell. Furthermore, decoders trained by the proposed technique can take the importance of the generated words into consideration while generating captions. In addition, a novel semantic attention mechanism is proposed that guides attention points through the image, taking the meaning of the previously generated word into account. We evaluate the proposed approach with the MS-COCO dataset. The proposed model outperformed the state of the art models especially in generating longer captions. It achieved a CIDEr score equal to 125.0 and a BLEU-4 score equal to 50.5, while the best scores of the state of the art models are 117.1 and 48.0, respectively.
|
The encoder-decoder framework proposed by @cite_5 is one of the most popular models used in machine translation. Methods based on this framework split the translation task into two steps: 1) extracting features from the source sentence, performed by the encoder; and 2) generating a new sentence in the destination language from the meaning encoded in the feature vector, performed by the decoder.
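A minimal PyTorch sketch of this encoder-decoder pattern is shown below; vocabulary sizes, dimensions, and the Seq2Seq class name are illustrative, and teacher forcing is assumed for brevity.

import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    # The encoder compresses the source sequence into a fixed-length vector
    # (its final hidden state); the decoder unrolls from that vector and
    # emits target-token logits at every step.
    def __init__(self, src_vocab, tgt_vocab, emb=64, hid=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        _, h = self.encoder(self.src_emb(src_ids))    # h: fixed-length summary
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), h)
        return self.out(dec_out)                      # logits per target step

model = Seq2Seq(src_vocab=1000, tgt_vocab=1200)
logits = model(torch.randint(0, 1000, (2, 7)), torch.randint(0, 1200, (2, 9)))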
|
{
"cite_N": [
"@cite_5"
],
"mid": [
"2950635152"
],
"abstract": [
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases."
]
}
|
1906.12188
|
2954234862
|
Generating textual descriptions for images has been an attractive problem for the computer vision and natural language processing researchers in recent years. Dozens of models based on deep learning have been proposed to solve this problem. The existing approaches are based on neural encoder-decoder structures equipped with the attention mechanism. These methods strive to train decoders to minimize the log likelihood of the next word in a sentence given the previous ones, which results in the sparsity of the output space. In this work, we propose a new approach to train decoders to regress the word embedding of the next word with respect to the previous ones instead of minimizing the log likelihood. The proposed method is able to learn and extract long-term information and can generate longer fine-grained captions without introducing any external memory cell. Furthermore, decoders trained by the proposed technique can take the importance of the generated words into consideration while generating captions. In addition, a novel semantic attention mechanism is proposed that guides attention points through the image, taking the meaning of the previously generated word into account. We evaluate the proposed approach with the MS-COCO dataset. The proposed model outperformed the state of the art models especially in generating longer captions. It achieved a CIDEr score equal to 125.0 and a BLEU-4 score equal to 50.5, while the best scores of the state of the art models are 117.1 and 48.0, respectively.
|
Since the encoder-decoder framework yields an end-to-end model for solving sophisticated problems, it has been employed as a solution to a wide variety of problems in the computer vision and natural language processing fields. @cite_14 first employed the encoder-decoder model in video description generation. @cite_24 also proposed a model for emotional human-machine conversation generation using the encoder-decoder baseline.
|
{
"cite_N": [
"@cite_24",
"@cite_14"
],
"mid": [
"2781061695",
"2136036867"
],
"abstract": [
"With the rise in popularity of artificial intelligence, the technology of verbal communication between man and machine has received an increasing amount of attention, but generating a good conversation remains a difficult task. The key factor in human-machine conversation is whether the machine can give good responses that are appropriate not only at the content level (relevant and grammatical) but also at the emotion level (consistent emotional expression). In our paper, we propose a new model based on long short-term memory, which is used to achieve an encoder-decoder framework, and we address the emotional factor of conversation generation by changing the model’s input using a series of input transformations: a sequence without an emotional category, a sequence with an emotional category for the input sentence, and a sequence with an emotional category for the output responses. We perform a comparison between our work and related work and find that we can obtain slightly better results with respect to emotion consistency. Although in terms of content coherence our result is lower than those of related work, in the present stage of research, our method can generally generate emotional responses in order to control and improve the user’s emotion. Our experiment shows that through the introduction of emotional intelligence, our model can generate responses appropriate not only in content but also in emotion.",
"Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation."
]
}
|
1906.12188
|
2954234862
|
Generating textual descriptions for images has been an attractive problem for the computer vision and natural language processing researchers in recent years. Dozens of models based on deep learning have been proposed to solve this problem. The existing approaches are based on neural encoder-decoder structures equipped with the attention mechanism. These methods strive to train decoders to minimize the log likelihood of the next word in a sentence given the previous ones, which results in the sparsity of the output space. In this work, we propose a new approach to train decoders to regress the word embedding of the next word with respect to the previous ones instead of minimizing the log likelihood. The proposed method is able to learn and extract long-term information and can generate longer fine-grained captions without introducing any external memory cell. Furthermore, decoders trained by the proposed technique can take the importance of the generated words into consideration while generating captions. In addition, a novel semantic attention mechanism is proposed that guides attention points through the image, taking the meaning of the previously generated word into account. We evaluate the proposed approach with the MS-COCO dataset. The proposed model outperformed the state of the art models especially in generating longer captions. It achieved a CIDEr score equal to 125.0 and a BLEU-4 score equal to 50.5, while the best scores of the state of the art models are 117.1 and 48.0, respectively.
|
The image captioning task is one of the main application areas of end-to-end neural encoder-decoder based models. The authors of @cite_43 first proposed a model based on this framework for image caption generation, substituting the source sentence of machine translation with the input image. Thus, the encoder was changed so that it generates a feature vector given an image, while the remaining parts of the framework stayed unchanged.
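To illustrate the substitution, here is a hedged PyTorch sketch in which a tiny CNN replaces the text encoder and its feature vector initializes the caption decoder; the architecture is deliberately toy-sized and is not the model of @cite_43 .

import torch
import torch.nn as nn

class CNNEncoderCaptioner(nn.Module):
    # A toy CNN encoder produces one feature vector per image, which
    # initializes the hidden state of a GRU caption decoder.
    def __init__(self, feat_dim=512, vocab=1200, emb=64, hid=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim))
        self.init_h = nn.Linear(feat_dim, hid)
        self.emb = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, images, caption_ids):
        h0 = torch.tanh(self.init_h(self.cnn(images))).unsqueeze(0)
        dec, _ = self.rnn(self.emb(caption_ids), h0)
        return self.out(dec)  # next-word logits at every caption position

model = CNNEncoderCaptioner()
logits = model(torch.rand(2, 3, 64, 64), torch.randint(0, 1200, (2, 7)))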
|
{
"cite_N": [
"@cite_43"
],
"mid": [
"2950178297"
],
"abstract": [
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO."
]
}
|
1906.12188
|
2954234862
|
Generating textual descriptions for images has been an attractive problem for the computer vision and natural language processing researchers in recent years. Dozens of models based on deep learning have been proposed to solve this problem. The existing approaches are based on neural encoder-decoder structures equipped with the attention mechanism. These methods strive to train decoders to minimize the log likelihood of the next word in a sentence given the previous ones, which results in the sparsity of the output space. In this work, we propose a new approach to train decoders to regress the word embedding of the next word with respect to the previous ones instead of minimizing the log likelihood. The proposed method is able to learn and extract long-term information and can generate longer fine-grained captions without introducing any external memory cell. Furthermore, decoders trained by the proposed technique can take the importance of the generated words into consideration while generating captions. In addition, a novel semantic attention mechanism is proposed that guides attention points through the image, taking the meaning of the previously generated word into account. We evaluate the proposed approach with the MS-COCO dataset. The proposed model outperformed the state of the art models especially in generating longer captions. It achieved a CIDEr score equal to 125.0 and a BLEU-4 score equal to 50.5, while the best scores of the state of the art models are 117.1 and 48.0, respectively.
|
The encoder-decoder based image captioning baseline has been employed by many researchers to propose novel image description systems. The work in @cite_39 used a CNN with two novel bidirectional recurrent neural networks to model complicated linguistic patterns using both historical and future sentence context. The framework in @cite_35 used the model proposed by @cite_30 as the encoder and LSTMs as the decoder; in addition, a linear transformation layer was added at the input of the LSTMs for better training.
|
{
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_39"
],
"mid": [
"2183341477",
"1895577753",
"2339652278"
],
"abstract": [
"Convolutional networks are at the core of most state of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we are exploring ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21:2 top-1 and 5:6 top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3:5 top-5 error and 17:3 top-1 error on the validation set and 3:6 top-5 error on the official test set.",
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.",
"This work presents an end-to-end trainable deep bidirectional LSTM (Long-Short Term Memory) model for image captioning. Our model builds on a deep convolutional neural network (CNN) and two separate LSTM networks. It is capable of learning long term visual-language interactions by making use of history and future context information at high level semantic space. Two novel deep bidirectional variant models, in which we increase the depth of nonlinearity transition in different way, are proposed to learn hierarchical visual-language embeddings. Data augmentation techniques such as multi-crop, multi-scale and vertical mirror are proposed to prevent overfitting in training deep models. We visualize the evolution of bidirectional LSTM internal states over time and qualitatively analyze how our models \"translate\" image to sentence. Our proposed models are evaluated on caption generation and image-sentence retrieval tasks with three benchmark datasets: Flickr8K, Flickr30K and MSCOCO datasets. We demonstrate that bidirectional LSTM models achieve highly competitive performance to the state-of-the-art results on caption generation even without integrating additional mechanism (e.g. object detection, attention model etc.) and significantly outperform recent methods on retrieval task"
]
}
|
1906.12188
|
2954234862
|
Generating textual descriptions for images has been an attractive problem for the computer vision and natural language processing researchers in recent years. Dozens of models based on deep learning have been proposed to solve this problem. The existing approaches are based on neural encoder-decoder structures equipped with the attention mechanism. These methods strive to train decoders to minimize the log likelihood of the next word in a sentence given the previous ones, which results in the sparsity of the output space. In this work, we propose a new approach to train decoders to regress the word embedding of the next word with respect to the previous ones instead of minimizing the log likelihood. The proposed method is able to learn and extract long-term information and can generate longer fine-grained captions without introducing any external memory cell. Furthermore, decoders trained by the proposed technique can take the importance of the generated words into consideration while generating captions. In addition, a novel semantic attention mechanism is proposed that guides attention points through the image, taking the meaning of the previously generated word into account. We evaluate the proposed approach with the MS-COCO dataset. The proposed model outperformed the state of the art models especially in generating longer captions. It achieved a CIDEr score equal to 125.0 and a BLEU-4 score equal to 50.5, while the best scores of the state of the art models are 117.1 and 48.0, respectively.
|
More sophisticated models based on the encoder-decoder framework have also been proposed. @cite_42 proposed a method based on the encoder-decoder framework, called Reference-based Long Short-Term Memory (R-LSTM), aiming to lead the model to generate a more descriptive sentence for a given image by introducing reference information. In this work, words are weighted according to their correlation with the image during training, so the importance of different words is considered while generating captions.
|
{
"cite_N": [
"@cite_42"
],
"mid": [
"2885822952"
],
"abstract": [
"Image captioning, which aims to automatically generate a sentence description for an image, has attracted much research attention in cognitive computing. The task is rather challenging, since it requires cognitively combining the techniques from both computer vision and natural language processing domains. Existing CNN-RNN framework-based methods suffer from two main problems: in the training phase, all the words of captions are treated equally without considering the importance of different words; in the caption generation phase, the semantic objects or scenes might be misrecognized. In our paper, we propose a method based on the encoder-decoder framework, named Reference based Long Short Term Memory (R-LSTM), aiming to lead the model to generate a more descriptive sentence for the given image by introducing reference information. Specifically, we assign different weights to the words according to the correlation between words and images during the training phase. We additionally maximize the consensus score between the captions generated by the captioning model and the reference information from the neighboring images of the target image, which can reduce the misrecognition problem. We have conducted extensive experiments and comparisons on the benchmark datasets MS COCO and Flickr30k. The results show that the proposed approach can outperform the state-of-the-art approaches on all metrics, especially achieving a 10.37 improvement in terms of CIDEr on MS COCO. By analyzing the quality of the generated captions, we come to a conclusion that through the introduction of reference information, our model can learn the key information of images and generate more trivial and relevant words for images."
]
}
|
1906.12188
|
2954234862
|
Generating textual descriptions for images has been an attractive problem for the computer vision and natural language processing researchers in recent years. Dozens of models based on deep learning have been proposed to solve this problem. The existing approaches are based on neural encoder-decoder structures equipped with the attention mechanism. These methods strive to train decoders to minimize the log likelihood of the next word in a sentence given the previous ones, which results in the sparsity of the output space. In this work, we propose a new approach to train decoders to regress the word embedding of the next word with respect to the previous ones instead of minimizing the log likelihood. The proposed method is able to learn and extract long-term information and can generate longer fine-grained captions without introducing any external memory cell. Furthermore, decoders trained by the proposed technique can take the importance of the generated words into consideration while generating captions. In addition, a novel semantic attention mechanism is proposed that guides attention points through the image, taking the meaning of the previously generated word into account. We evaluate the proposed approach with the MS-COCO dataset. The proposed model outperformed the state of the art models especially in generating longer captions. It achieved a CIDEr score equal to 125.0 and a BLEU-4 score equal to 50.5, while the best scores of the state of the art models are 117.1 and 48.0, respectively.
|
The first idea of using visual attention was proposed by @cite_10 to cope with the problem of fixed-length context vectors extracted by encoders. In this work, in order to decrease the overhead of applying convolutional networks at each step, a sequence of feature vectors was extracted from different image regions. In the first step, a convolutional network was applied to all selected image regions and the extracted feature vectors, called annotation vectors, were collected. At each step of text generation, a new context vector was computed from the annotation vectors to help predict the next word in the sentence.
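The per-step computation can be sketched as follows: score each annotation vector against the current decoder state, normalize the scores with a softmax, and take the weighted sum as the context vector. The PyTorch sketch below is generic soft attention with illustrative dimensions, rather than the exact formulation of @cite_10 .

import torch
import torch.nn as nn

class SoftAttention(nn.Module):
    # Score each annotation vector a_i against the decoder state h_t,
    # softmax the scores into weights alpha, and return the weighted sum
    # as the context vector z_t.
    def __init__(self, annot_dim, state_dim, attn_dim=64):
        super().__init__()
        self.wa = nn.Linear(annot_dim, attn_dim)
        self.wh = nn.Linear(state_dim, attn_dim)
        self.v = nn.Linear(attn_dim, 1)

    def forward(self, annotations, h_t):
        # annotations: (B, L, annot_dim); h_t: (B, state_dim)
        e = self.v(torch.tanh(self.wa(annotations) + self.wh(h_t).unsqueeze(1)))
        alpha = torch.softmax(e, dim=1)           # (B, L, 1) region weights
        z_t = (alpha * annotations).sum(dim=1)    # (B, annot_dim) context
        return z_t, alpha.squeeze(-1)

attn = SoftAttention(annot_dim=512, state_dim=256)
z_t, alpha = attn(torch.randn(2, 196, 512), torch.randn(2, 256))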
|
{
"cite_N": [
"@cite_10"
],
"mid": [
"2147527908"
],
"abstract": [
"Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so."
]
}
|
1906.12188
|
2954234862
|
Generating textual descriptions for images has been an attractive problem for the computer vision and natural language processing researchers in recent years. Dozens of models based on deep learning have been proposed to solve this problem. The existing approaches are based on neural encoder-decoder structures equipped with the attention mechanism. These methods strive to train decoders to minimize the log likelihood of the next word in a sentence given the previous ones, which results in the sparsity of the output space. In this work, we propose a new approach to train decoders to regress the word embedding of the next word with respect to the previous ones instead of minimizing the log likelihood. The proposed method is able to learn and extract long-term information and can generate longer fine-grained captions without introducing any external memory cell. Furthermore, decoders trained by the proposed technique can take the importance of the generated words into consideration while generating captions. In addition, a novel semantic attention mechanism is proposed that guides attention points through the image, taking the meaning of the previously generated word into account. We evaluate the proposed approach with the MS-COCO dataset. The proposed model outperformed the state of the art models especially in generating longer captions. It achieved a CIDEr score equal to 125.0 and a BLEU-4 score equal to 50.5, while the best scores of the state of the art models are 117.1 and 48.0, respectively.
|
The attention mechanism was then brought to machine translation by @cite_2 . Furthermore, @cite_31 employed the attention mechanism for multiple object recognition. Finally, the well-formed encoder-decoder empowered by attention, with two different mechanisms called soft attention and hard attention, was proposed by @cite_43 .
|
{
"cite_N": [
"@cite_43",
"@cite_31",
"@cite_2"
],
"mid": [
"2950178297",
"1484210532",
"2133564696"
],
"abstract": [
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.",
"We present an attention-based model for recognizing multiple objects in images. The proposed model is a deep recurrent neural network trained with reinforcement learning to attend to the most relevant regions of the input image. We show that the model learns to both localize and recognize multiple objects despite being given only class labels during training. We evaluate the model on the challenging task of transcribing house number sequences from Google Street View images and show that it is both more accurate than the state-of-the-art convolutional networks and uses fewer parameters and less computation.",
"Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition."
]
}
|
1906.12188
|
2954234862
|
Generating textual descriptions for images has been an attractive problem for the computer vision and natural language processing researchers in recent years. Dozens of models based on deep learning have been proposed to solve this problem. The existing approaches are based on neural encoder-decoder structures equipped with the attention mechanism. These methods strive to train decoders to minimize the log likelihood of the next word in a sentence given the previous ones, which results in the sparsity of the output space. In this work, we propose a new approach to train decoders to regress the word embedding of the next word with respect to the previous ones instead of minimizing the log likelihood. The proposed method is able to learn and extract long-term information and can generate longer fine-grained captions without introducing any external memory cell. Furthermore, decoders trained by the proposed technique can take the importance of the generated words into consideration while generating captions. In addition, a novel semantic attention mechanism is proposed that guides attention points through the image, taking the meaning of the previously generated word into account. We evaluate the proposed approach with the MS-COCO dataset. The proposed model outperformed the state of the art models especially in generating longer captions. It achieved a CIDEr score equal to 125.0 and a BLEU-4 score equal to 50.5, while the best scores of the state of the art models are 117.1 and 48.0, respectively.
|
Advances made through attention-based techniques encouraged researchers to focus on this kind of model, and a large number of studies have tried to improve the simple attention model in image captioning. @cite_7 employed an encoder-decoder method along with a memory slot to generate personalized captions for social-media images for specific users. @cite_8 used saliency map estimators to strengthen the attention mechanism. @cite_1 used the attention mechanism in visual question answering to find the best image regions to attend to while generating the answer; in this work, attention layers were structured as a stack, which improved the model's performance. Furthermore, @cite_21 proposed an encoder-decoder framework for video description using the attention mechanism.
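For the stacked-attention idea of @cite_1 , a drastically simplified sketch is given below: each hop attends over the image regions and refines the query, so later hops can focus more precisely. The dot-product scoring and the hop count are simplifying assumptions, not the exact SAN formulation.

import torch

def attend(query, regions):
    # One soft-attention hop: dot-product scores, softmax weights, context.
    scores = torch.einsum('bd,bld->bl', query, regions)
    alpha = torch.softmax(scores, dim=1)
    return torch.einsum('bl,bld->bd', alpha, regions)

def stacked_attention(question_vec, regions, hops=2):
    # Each hop refines the query with the attended visual context, letting
    # later hops concentrate on more relevant regions (query and region
    # vectors are assumed to share one dimensionality).
    q = question_vec
    for _ in range(hops):
        q = q + attend(q, regions)
    return q

refined = stacked_attention(torch.randn(2, 512), torch.randn(2, 196, 512))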
|
{
"cite_N": [
"@cite_21",
"@cite_1",
"@cite_7",
"@cite_8"
],
"mid": [
"2951183276",
"2963954913",
"2953002754",
"2751076261"
],
"abstract": [
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.",
"This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.",
"We address personalization issues of image captioning, which have not been discussed yet in previous research. For a query image, we aim to generate a descriptive sentence, accounting for prior knowledge such as the user's active vocabularies in previous documents. As applications of personalized image captioning, we tackle two post automation tasks: hashtag prediction and post generation, on our newly collected Instagram dataset, consisting of 1.1M posts from 6.3K users. We propose a novel captioning model named Context Sequence Memory Network (CSMN). Its unique updates over previous memory network models include (i) exploiting memory as a repository for multiple types of context information, (ii) appending previously generated words into memory to capture long-term information without suffering from the vanishing gradient problem, and (iii) adopting CNN memory structure to jointly represent nearby ordered memory slots for better context understanding. With quantitative evaluation and user studies via Amazon Mechanical Turk, we show the effectiveness of the three novel features of CSMN and its performance enhancement for personalized image captioning over state-of-the-art captioning models.",
"Image and video captioning are important tasks in visual data analytics, as they concern the capability of describing visual content in natural language. They are the pillars of query answering systems, improve indexing and search and allow a natural form of human-machine interaction. Even though promising deep learning strategies are becoming popular, the heterogeneity of large image archives makes this task still far from being solved. In this paper we explore how visual saliency prediction can support image captioning. Recently, some forms of unsupervised machine attention mechanisms have been spreading, but the role of human attention prediction has never been examined extensively for captioning. We propose a machine attention model driven by saliency prediction to provide captions in images, which can be exploited for many services on cloud and on multimedia data. Experimental evaluations are conducted on the SALICON dataset, which provides groundtruths for both saliency and captioning, and on the large Microsoft COCO dataset, the most widely used for image captioning."
]
}
|
1906.12061
|
2949822226
|
The security of Deep Reinforcement Learning (Deep RL) algorithms deployed in real-life applications is of primary concern. In particular, the robustness of RL agents in cyber-physical systems against adversarial attacks is especially vital, since the cost of a malevolent intrusion can be extremely high. Studies have shown that Deep Neural Networks (DNNs), which form the core decision-making unit in most modern RL algorithms, are easily subverted by adversarial attacks. Hence, it is imperative that RL agents deployed in real-life applications have the capability to detect and mitigate adversarial attacks in an online fashion. An example of such a framework is the Meta-Learned Advantage Hierarchy (MLAH) agent, which utilizes a meta-learning framework to learn policies robustly online. Since the mechanisms of this framework are still not fully explored, we conducted multiple experiments to better understand its capabilities and limitations. Our results show that the MLAH agent exhibits interesting coping behaviors to maintain a nominal reward when subjected to different adversarial attacks. Additionally, the framework exhibits a hierarchical coping capability, based on the adaptability of the master policy and the sub-policies themselves. From empirical results, we also observed that as the interval between adversarial attacks increases, the MLAH agent can maintain a higher distribution of rewards, though at the cost of higher instability.
|
Multiple studies have shown that RL agents are easily susceptible to adversarial attacks. @cite_18 showed that by extending the FGSM framework to RL agents, the agents can be tricked into behaving sub-optimally. @cite_17 experimented with the transferability of attacks on DQN agents and showed that a properly crafted attack can easily be transferred to another agent with a different model while retaining similar effectiveness. More sophisticated adversarial techniques have been proposed by @cite_15 . In their experiments, the authors suggest that rather than perturbing the observation of the RL agent repeatedly, it is sufficient to attack the agent at strategic time points, namely when the relative preference of the optimal action over the least optimal action is higher than a certain threshold. The authors also proposed another adversarial strategy, called the enchanting attack, in which a series of perturbations is crafted such that the succession of adversarial states leads the agent to a specific target adversarial state.
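The strategic-timing criterion can be sketched in a few lines; the threshold value and the use of raw policy probabilities as the preference measure are illustrative assumptions.

import numpy as np

def should_attack(action_probs, threshold=0.8):
    # Attack only when the agent strongly prefers its best action over its
    # worst one, i.e. when a perturbation is most likely to change behaviour.
    preference_gap = action_probs.max() - action_probs.min()
    return preference_gap > threshold

probs = np.array([0.05, 0.02, 0.90, 0.03])   # hypothetical policy output
print(should_attack(probs))                   # True: a strategic time to attack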
|
{
"cite_N": [
"@cite_18",
"@cite_15",
"@cite_17"
],
"mid": [
"2949103145",
"2896893468",
"2962755762"
],
"abstract": [
"Machine learning classifiers are known to be vulnerable to inputs maliciously constructed by adversaries to force misclassification. Such adversarial examples have been extensively studied in the context of computer vision applications. In this work, we show adversarial attacks are also effective when targeting neural network policies in reinforcement learning. Specifically, we show existing adversarial example crafting techniques can be used to significantly degrade test-time performance of trained policies. Our threat model considers adversaries capable of introducing small perturbations to the raw input of the policy. We characterize the degree of vulnerability across tasks and training algorithms, for a subclass of adversarial-example attacks in white-box and black-box settings. Regardless of the learned task or training algorithm, we observe a significant drop in performance, even with small adversarial perturbations that do not interfere with human perception. Videos are available at this http URL.",
"Sepsis is the third leading cause of death worldwide and the main cause of mortality in hospitals1–3, but the best treatment strategy remains uncertain. In particular, evidence suggests that current practices in the administration of intravenous fluids and vasopressors are suboptimal and likely induce harm in a proportion of patients1,4–6. To tackle this sequential decision-making problem, we developed a reinforcement learning agent, the Artificial Intelligence (AI) Clinician, which extracted implicit knowledge from an amount of patient data that exceeds by many-fold the life-time experience of human clinicians and learned optimal treatment by analyzing a myriad of (mostly suboptimal) treatment decisions. We demonstrate that the value of the AI Clinician’s selected treatment is on average reliably higher than human clinicians. In a large validation cohort independent of the training data, mortality was lowest in patients for whom clinicians’ actual doses matched the AI decisions. Our model provides individualized and clinically interpretable treatment decisions for sepsis that could improve patient outcomes.",
"Deep learning classifiers are known to be inherently vulnerable to manipulation by intentionally perturbed inputs, named adversarial examples. In this work, we establish that reinforcement learning techniques based on Deep Q-Networks (DQNs) are also vulnerable to adversarial input perturbations, and verify the transferability of adversarial examples across different DQN models. Furthermore, we present a novel class of attacks based on this vulnerability that enable policy manipulation and induction in the learning process of DQNs. We propose an attack mechanism that exploits the transferability of adversarial examples to implement policy induction attacks on DQNs, and demonstrate its efficacy and impact through experimental study of a game-learning scenario."
]
}
|
1906.12061
|
2949822226
|
The security of Deep Reinforcement Learning (Deep RL) algorithms deployed in real-life applications is of primary concern. In particular, the robustness of RL agents in cyber-physical systems against adversarial attacks is especially vital, since the cost of a malevolent intrusion can be extremely high. Studies have shown that Deep Neural Networks (DNNs), which form the core decision-making unit in most modern RL algorithms, are easily subverted by adversarial attacks. Hence, it is imperative that RL agents deployed in real-life applications have the capability to detect and mitigate adversarial attacks in an online fashion. An example of such a framework is the Meta-Learned Advantage Hierarchy (MLAH) agent, which utilizes a meta-learning framework to learn policies robustly online. Since the mechanisms of this framework are still not fully explored, we conducted multiple experiments to better understand its capabilities and limitations. Our results show that the MLAH agent exhibits interesting coping behaviors to maintain a nominal reward when subjected to different adversarial attacks. Additionally, the framework exhibits a hierarchical coping capability, based on the adaptability of the master policy and the sub-policies themselves. From empirical results, we also observed that as the interval between adversarial attacks increases, the MLAH agent can maintain a higher distribution of rewards, though at the cost of higher instability.
|
Furthermore, @cite_0 proposed an algorithm that detects the presence of adversaries by observing the advantages of sub-policies within a hierarchical framework. With the proposed algorithm, the results suggest that the learned bias of the RL agent is greatly reduced under adversarial conditions and that a robust policy can be learned in the presence of unknown adversaries. Leveraging this existing framework, we further explore its viability as a defense mechanism.
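A deliberately simplified sketch of the detection signal follows; the window, threshold, and the return-minus-value advantage estimator are illustrative assumptions, not the actual MLAH master agent, which is considerably more elaborate.

import numpy as np

def detect_adversary(rewards, values, threshold=-1.0):
    # Advantage estimate of the nominal sub-policy over a window of steps:
    # observed return minus the critic's predicted value. Under attack, the
    # perturbed states make the nominal policy underperform its own value
    # estimates, driving the advantage strongly negative.
    advantages = np.asarray(rewards) - np.asarray(values)
    return advantages.mean() < threshold

# Nominal conditions: returns roughly match value predictions.
print(detect_adversary([1.0, 0.9, 1.1], [1.0, 1.0, 1.0]))     # False
# Attacked conditions: returns collapse below predictions.
print(detect_adversary([-0.5, -1.2, -0.8], [1.0, 1.0, 1.0]))  # True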
|
{
"cite_N": [
"@cite_0"
],
"mid": [
"2883567023"
],
"abstract": [
"The growing prospect of deep reinforcement learning (DRL) being used in cyber-physical systems has raised concerns around safety and robustness of autonomous agents. Recent work on generating adversarial attacks have shown that it is computationally feasible for a bad actor to fool a DRL policy into behaving sub optimally. Although certain adversarial attacks with specific attack models have been addressed, most studies are only interested in off-line optimization in the data space (e.g., example fitting, distillation). This paper introduces a Meta-Learned Advantage Hierarchy (MLAH) framework that is attack model-agnostic and more suited to reinforcement learning, via handling the attacks in the decision space (as opposed to data space) and directly mitigating learned bias introduced by the adversary. In MLAH, we learn separate sub-policies (nominal and adversarial) in an online manner, as guided by a supervisory master agent that detects the presence of the adversary by leveraging the advantage function for the sub-policies. We demonstrate that the proposed algorithm enables policy learning with significantly lower bias as compared to the state-of-the-art policy learning approaches even in the presence of heavy state information attacks. We present algorithm analysis and simulation results using popular OpenAI Gym environments."
]
}
|
1906.12010
|
2955882125
|
We show how a multi-agent simulator can support two important but distinct methods for assessing a trading strategy: Market Replay and Interactive Agent-Based Simulation (IABS). Our solution is important because each method offers strengths and weaknesses that expose or conceal flaws in the subject strategy. A key weakness of Market Replay is that the simulated market does not substantially adapt to or respond to the presence of the experimental strategy. IABS methods provide an artificial market for the experimental strategy using a population of background trading agents. Because the background agents attend to market conditions and current price as part of their strategy, the overall market is responsive to the presence of the experimental strategy. Even so, IABS methods have their own weaknesses, primarily that it is unclear if the market environment they provide is realistic. We describe our approach in detail, and illustrate its use in an example application: The evaluation of market impact for various size orders.
|
In our studies discussed below, we populate our IABS with hundreds of simple agents, referred to as "Zero Intelligence" or ZI agents. The term ZI agent was coined by @cite_4 to describe a family of automated market participants that submit random bid and ask orders. In this seminal work, two types of agents were considered: ZI-U (unconstrained) agents, which place orders entirely at random within fixed extents, and ZI-C (constrained) agents, which are prohibited from placing orders that result in an immediate loss. These ZI agents were initially used to demonstrate that the allocative efficiency of a market arises from its structure and not from the particular strategy or intelligence of its participants, i.e., individual strategies are subsumed by the market as a whole.
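A minimal ZI-C order generator in the spirit of @cite_4: prices are random, but constrained so a trader never bids above its private value or asks below its cost. The price range and uniform draw are assumptions for illustration.

import random

def zi_c_order(is_buyer, limit, price_range=(1, 200)):
    # ZI-C: draw a random price, but never one that would lose money --
    # buyers stay at or below their private value, sellers at or above cost.
    lo, hi = price_range
    if is_buyer:
        return random.randint(lo, min(limit, hi))   # bid <= private value
    return random.randint(max(limit, lo), hi)       # ask >= private cost

random.seed(0)
print(zi_c_order(is_buyer=True, limit=120))   # a bid in [1, 120]
print(zi_c_order(is_buyer=False, limit=80))   # an ask in [80, 200]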
|
{
"cite_N": [
"@cite_4"
],
"mid": [
"2018567859"
],
"abstract": [
"We report market experiments in which human traders are replaced by \"zero-intelligence\" programs that submit random bids and offers. Imposing a budget constraint (i.e., not permitting traders to sell below their costs or buy above their values) is sufficient to raise the allocative efficiency of these auctions close to 100 percent. Allocative efficiency of a double auction derives largely from its structure, independent of traders' motivation, intelligence, or learning. Adam Smith's invisible hand may be more powerful than some may have thought; it can generate aggregate rationality not only from individual rationality but also from individual irrationality."
]
}
|
1906.12010
|
2955882125
|
We show how a multi-agent simulator can support two important but distinct methods for assessing a trading strategy: Market Replay and Interactive Agent-Based Simulation (IABS). Our solution is important because each method offers strengths and weaknesses that expose or conceal flaws in the subject strategy. A key weakness of Market Replay is that the simulated market does not substantially adapt to or respond to the presence of the experimental strategy. IABS methods provide an artificial market for the experimental strategy using a population of background trading agents. Because the background agents attend to market conditions and current price as part of their strategy, the overall market is responsive to the presence of the experimental strategy. Even so, IABS methods have their own weaknesses, primarily that it is unclear if the market environment they provide is realistic. We describe our approach in detail, and illustrate its use in an example application: The evaluation of market impact for various size orders.
|
Modern market simulations often use some form of ZI agent as a "background" agent to produce a reasonable baseline market microstructure into which experimental agents can be injected. For example, Wang and Wellman's investigation of spoofing agents @cite_22 uses a modified ZI agent with a Bayesian fundamental value belief based on noisy observations of an oracular value series, a private valuation per agent per unit, and a "strategic parameter" @math (eta) that controls the agent's willingness to accept less than its desired surplus in exchange for immediate, guaranteed execution.
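One plausible reading of how eta might govern order placement, shown for a buyer: take the standing quote whenever it delivers at least eta times the requested surplus, otherwise post a limit order and wait. Function names and numbers are illustrative, not Wang and Wellman's exact rule.

def take_or_quote(valuation, best_ask, desired_surplus, eta=0.8):
    # A buyer's surplus from lifting the best ask right now:
    immediate_surplus = valuation - best_ask
    if immediate_surplus >= eta * desired_surplus:
        return ("take", best_ask)                  # accept less surplus, execute now
    return ("quote", valuation - desired_surplus)  # else post a limit bid and wait

print(take_or_quote(valuation=105, best_ask=100, desired_surplus=6))   # ('take', 100)
print(take_or_quote(valuation=105, best_ask=100, desired_surplus=10))  # ('quote', 95)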
|
{
"cite_N": [
"@cite_22"
],
"mid": [
"2620687831"
],
"abstract": [
"We present an agent-based model of manipulating prices in financial markets through spoofing: submitting spurious orders to mislead traders who observe the order book. Built around the standard limit-order mechanism, our model captures a complex market environment with combined private and common values, the latter represented by noisy observations upon a dynamic fundamental time series. We consider background agents following two types of trading strategies: zero intelligence (ZI) that ignores the order book and heuristic belief learning (HBL) that exploits the order book to predict price outcomes. By employing an empirical game-theoretic analysis to derive approximate strategic equilibria, we demonstrate the effectiveness of HBL and the usefulness of order book information in a range of non-spoofing environments. We further show that a market with HBL traders is spoofable, in that a spoofer can qualitatively manipulate prices towards its desired direction. After re-equilibrating games with spoofing, we find spoofing generally hurts market surplus and decreases the proportion of HBL. However, HBL's persistence in most environments with spoofing indicates a consistently spoofable market. Our model provides a way to quantify the effect of spoofing on trading behavior and efficiency, and thus measures the profitability and cost of an important form of market manipulation."
]
}
|
1906.12010
|
2955882125
|
We show how a multi-agent simulator can support two important but distinct methods for assessing a trading strategy: Market Replay and Interactive Agent-Based Simulation (IABS). Our solution is important because each method offers strengths and weaknesses that expose or conceal flaws in the subject strategy. A key weakness of Market Replay is that the simulated market does not substantially adapt to or respond to the presence of the experimental strategy. IABS methods provide an artificial market for the experimental strategy using a population of background trading agents. Because the background agents attend to market conditions and current price as part of their strategy, the overall market is responsive to the presence of the experimental strategy. Even so, IABS methods have their own weaknesses, primarily that it is unclear if the market environment they provide is realistic. We describe our approach in detail, and illustrate its use in an example application: The evaluation of market impact for various size orders.
|
In this section, we use the ABIDES simulation framework @cite_14 .
|
{
"cite_N": [
"@cite_14"
],
"mid": [
"2941903151"
],
"abstract": [
"We introduce ABIDES, an Agent-Based Interactive Discrete Event Simulation environment. ABIDES is designed from the ground up to support AI agent research in market applications. While simulations are certainly available within trading firms for their own internal use, there are no broadly available high-fidelity market simulation environments. We hope that the availability of such a platform will facilitate AI research in this important area. ABIDES currently enables the simulation of tens of thousands of trading agents interacting with an exchange agent to facilitate transactions. It supports configurable pairwise network latencies between each individual agent as well as the exchange. Our simulator's message-based design is modeled after NASDAQ's published equity trading protocols ITCH and OUCH. We introduce the design of the simulator and illustrate its use and configuration with sample code, validating the environment with example trading scenarios. The utility of ABIDES is illustrated through experiments to develop a market impact model. We close with discussion of future experimental problems it can be used to explore, such as the development of ML-based trading algorithms."
]
}
|
1811.11985
|
2902422922
|
This paper presents a novel semantic change detection scheme that requires only weak supervision. A straightforward approach to this task is to train a semantic change detection network directly on a large-scale dataset in an end-to-end manner. However, a dedicated dataset for this new task, which is usually labor-intensive and time-consuming to build, becomes indispensable. To avoid this problem, we propose to train this kind of network from existing datasets by dividing the task into change detection and semantic extraction. Furthermore, differences in camera viewpoint, for example between images of the same scene captured by a vehicle-mounted camera at different times, usually pose a challenge to the change detection task. To address this challenge, we propose a new siamese network structure that introduces a correlation layer. In addition, we create a publicly available dataset for semantic change detection to evaluate the proposed method. Experimental results verify both the robustness of the proposed networks to viewpoint differences in change detection and their effectiveness for semantic change detection.
|
* Change Detection Change detection methods are classified into several categories depending on the type of target scene change and the available information. Change detection in the 2D (image) domain is the most standard approach, especially for surveillance and satellite cameras @cite_49 @cite_34 @cite_40 @cite_26 , which are accurately aligned. A typical approach models the appearance of the scene from a set of images captured at different times, against which a newly captured query image is compared to detect changes @cite_1 . Scene models are usually built from images taken from the same viewpoint, so as to detect target changes while accounting for irrelevant appearance changes such as differences in illumination conditions.
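The fixed-viewpoint formulation in miniature: fit a per-pixel Gaussian background model from past frames and flag query pixels that deviate by more than k standard deviations. Frame counts, image size, and the 3-sigma threshold are purely illustrative.

import numpy as np

rng = np.random.default_rng(2)
history = rng.normal(100, 5, size=(20, 64, 64))   # 20 aligned past frames
query = history.mean(axis=0).copy()
query[10:20, 10:20] += 40                          # synthetic scene change

mu = history.mean(axis=0)                          # per-pixel appearance model
sigma = history.std(axis=0) + 1e-6
change_mask = np.abs(query - mu) / sigma > 3.0     # k-sigma change test
print(change_mask.sum())                           # roughly the 100 changed pixels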
|
{
"cite_N": [
"@cite_26",
"@cite_1",
"@cite_40",
"@cite_49",
"@cite_34"
],
"mid": [
"2170140722",
"2786907594",
"2098079560",
"",
"1962952378"
],
"abstract": [
"Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. This paper presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling. We also discuss important preprocessing methods, approaches to enforcing the consistency of the change mask, and principles for evaluating and comparing the performance of change detection algorithms. It is hoped that our classification of algorithms into a relatively small number of categories will provide useful guidance to the algorithm designer.",
"In this paper, we propose a robust change detection method for intelligent visual surveillance. This method, named M4CD, includes three major steps. Firstly, a sample-based background model that integrates color and texture cues is built and updated over time. Secondly, multiple heterogeneous features (including brightness variation, chromaticity variation, and texture variation) are extracted by comparing the input frame with the background model, and a multi-source learning strategy is designed to online estimate the probability distributions for both foreground and background. The three features are approximately conditionally independent, making multi-source learning feasible. Pixel-wise foreground posteriors are then estimated with Bayes rule. Finally, the Markov random field (MRF) optimization and heuristic post-processing techniques are used sequentially to improve accuracy. In particular, a two-layer MRF model is constructed to represent pixel-based and superpixel-based contextual constraints compactly. Experimental results on the CDnet dataset indicate that M4CD is robust under complex environments and ranks among the top methods.",
"This paper examines the problem of detecting changes in a 3-d scene from a sequence of images, taken by cameras with arbitrary but known pose. No prior knowledge of the state of normal appearance and geometry of object surfaces is assumed, and abnormal changes can occur in any image of the sequence. To the authors' knowledge, this paper is the first to address the change detection problem in such a general framework. Existing change detection algorithms that exploit multiple image viewpoints typically can detect only motion changes or assume a planar world geometry which cannot cope effectively with appearance changes due to occlusion and un-modeled 3-d scene geometry (ego-motion parallax). The approach presented here can manage the complications of unknown and sometimes changing world surfaces by maintaining a 3-d voxel-based model, where probability distributions for surface occupancy and image appearance are stored in each voxel. The probability distributions at each voxel are continuously updated as new images are received. The key question of convergence of this joint estimation problem is answered by a formal proof based on realistic assumptions about the nature of real world scenes. A series of experiments are presented that evaluate change detection accuracy under laboratory-controlled conditions as well as aerial reconnaissance scenarios.-",
"",
"Many applications require detecting structural changes in a scene over a period of time. Comparing intensity values of successive images is not effective as such changes don't necessarily reflect actual changes at a site but might be caused by changes in the view point, illumination and seasons. We take the approach of comparing a 3-D model of the site, prepared from previous images, with new images to infer significant changes. This task is difficult as the images and the models have very different levels of abstract representations. Our approach consists of several steps: registering a site model to a new image, model validation to confirm the presence of model objects in the image; structural change detection seeks to resolve matching problems and indicate possibly changed structures; and finally updating models to reflect the changes. Our system is able to detect missing (or mis-modeled) buildings, changes in model dimensions, and new buildings under some conditions."
]
}
|
1811.11985
|
2902422922
|
This paper presents a novel semantic change detection scheme that requires only weak supervision. A straightforward approach to this task is to train a semantic change detection network directly on a large-scale dataset in an end-to-end manner. However, a dedicated dataset for this new task, which is usually labor-intensive and time-consuming to build, becomes indispensable. To avoid this problem, we propose to train this kind of network from existing datasets by dividing the task into change detection and semantic extraction. Furthermore, differences in camera viewpoint, for example between images of the same scene captured by a vehicle-mounted camera at different times, usually pose a challenge to the change detection task. To address this challenge, we propose a new siamese network structure that introduces a correlation layer. In addition, we create a publicly available dataset for semantic change detection to evaluate the proposed method. Experimental results verify both the robustness of the proposed networks to viewpoint differences in change detection and their effectiveness for semantic change detection.
|
In recent years, significant effort has been devoted to change detection using machine learning, especially deep neural networks (DNNs) @cite_21 @cite_27 @cite_2 @cite_39 @cite_9 . There are mainly two types of formulations, "patch similarity estimation" and "pixel-wise segmentation", which can be converted into each other. Patch similarity estimation has been studied not only for change detection but also for feature, stereo, and image matching @cite_48 @cite_37 @cite_44 @cite_11 @cite_47 . The experiments of @cite_11 showed that one-stream networks, which take the images from different times concatenated along the channel dimension as input, outperform two-stream networks such as siamese networks @cite_15 , and that multi-scale inputs improve estimation accuracy. Pixel-wise change detection has been further studied in the context of anomaly detection, background subtraction, and moving object detection @cite_43 @cite_2 @cite_38 .
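A toy contrast between the two input formulations compared by @cite_11 — layer sizes and depths here are arbitrary assumptions, far shallower than the cited architectures.

import torch
import torch.nn as nn

# One-stream: the two RGB images are concatenated along the channel axis,
# so the very first convolution can compare them pixel by pixel.
one_stream = nn.Sequential(
    nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 1),            # change score per location
)

# Two-stream (siamese): each image is encoded separately with shared
# weights, and the features are compared afterwards.
encoder = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())

t0 = torch.randn(1, 3, 64, 64)      # image at time t0
t1 = torch.randn(1, 3, 64, 64)      # image at time t1

score_one = one_stream(torch.cat([t0, t1], dim=1))
f0, f1 = encoder(t0), encoder(t1)
score_two = (f0 - f1).abs().sum(dim=1, keepdim=True)
print(score_one.shape, score_two.shape)   # both (1, 1, 64, 64)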
|
{
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_15",
"@cite_48",
"@cite_9",
"@cite_21",
"@cite_39",
"@cite_44",
"@cite_43",
"@cite_27",
"@cite_2",
"@cite_47",
"@cite_11"
],
"mid": [
"",
"1946093182",
"",
"2214868166",
"2315313982",
"2415309533",
"2317688867",
"1869500417",
"2776256943",
"",
"2617840581",
"2963502507",
"2949213045"
],
"abstract": [
"",
"The recent availability of geo-tagged images and rich geospatial data has inspired a number of algorithms for image based geolocalization. Most approaches predict the location of a query image by matching to ground-level images with known locations (e.g., street-view data). However, most of the Earth does not have ground-level reference photos available. Fortunately, more complete coverage is provided by oblique aerial or “bird's eye” imagery. In this work, we localize a ground-level query image by matching it to a reference database of aerial imagery. We use publicly available data to build a dataset of 78K aligned crossview image pairs. The primary challenge for this task is that traditional computer vision approaches cannot handle the wide baseline and appearance variation of these cross-view pairs. We use our dataset to learn a feature representation in which matching views are near one another and mismatched views are far apart. Our proposed approach, Where-CNN, is inspired by deep learning success in face verification and achieves significant improvements over traditional hand-crafted features and existing deep features learned from other large-scale databases. We show the effectiveness of Where-CNN in finding matches between street view and aerial view imagery and demonstrate the ability of our learned features to generalize to novel locations.",
"",
"This paper presents a data-driven matching cost for stereo matching. A novel deep visual correspondence embedding model is trained via Convolutional Neural Network on a large set of stereo images with ground truth disparities. This deep embedding model leverages appearance data to learn visual similarity relationships between corresponding image patches, and explicitly maps intensity values into an embedding feature space to measure pixel dissimilarities. Experimental results on KITTI and Middlebury data sets demonstrate the effectiveness of our model. First, we prove that the new measure of pixel dissimilarity outperforms traditional matching costs. Furthermore, when integrated with a global stereo framework, our method ranks top 3 among all two-frame algorithms on the KITTI benchmark. Finally, cross-validation results show that our model is able to make correct predictions for unseen data which are outside of its labeled training set.",
"We describe a system for the detection of changes in multiple views of a tunnel surface. From data gathered by a robotic inspection rig, we use a structure-from-motion pipeline to build panoramas of the surface and register images from different time instances. Reliably detecting changes such as hairline cracks, water ingress and other surface damage between the registered images is a challenging problem: achieving the best possible performance for a given set of data requires sub-pixel precision and careful modelling of the noise sources. The task is further complicated by factors such as unavoidable registration error and changes in image sensors, capture settings and lighting. Our contribution is a novel approach to change detection using a two-channel convolutional neural network. The network accepts pairs of approximately registered image patches taken at different times and classifies them to detect anomalous changes. To train the network, we take advantage of synthetically generated training examples and the homogeneity of the tunnel surfaces to eliminate most of the manual labelling effort. We evaluate our method on field data gathered from a live tunnel over several months, demonstrating it to outperform existing approaches from recent literature and industrial practice.",
"We propose a system for performing structural change detection in street-view videos captured by a vehicle-mounted monocular camera over time. Our approach is motivated by the need for more frequent and efficient updates in the large-scale maps used in autonomous vehicle navigation. Our method chains a multi-sensor fusion SLAM and fast dense 3D reconstruction pipeline, which provide coarsely registered image pairs to a deep Deconvolutional Network (DN) for pixel-wise change detection. We investigate two DN architectures for change detection, the first one is based on the idea of stacking contraction and expansion blocks while the second one is based on the idea of Fully Convolutional Networks. To train and evaluate our networks we introduce a new urban change detection dataset which is an order of magnitude larger than existing datasets and contains challenging changes due to seasonal and lighting variations. Our method outperforms existing literature on this dataset, which we make available to the community, and an existing panoramic change detection dataset, demonstrating its wide applicability.",
"",
"Deep learning has revolutionalized image-level tasks such as classification, but patch-level tasks, such as correspondence, still rely on hand-crafted features, e.g. SIFT. In this paper we use Convolutional Neural Networks (CNNs) to learn discriminant patch representations and in particular train a Siamese network with pairs of (non-)corresponding patches. We deal with the large number of potential pairs with the combination of a stochastic sampling of the training set and an aggressive mining strategy biased towards patches that are hard to classify. By using the L2 distance during both training and testing we develop 128-D descriptors whose euclidean distances reflect patch similarity, and which can be used as a drop-in replacement for any task involving SIFT. We demonstrate consistent performance gains over the state of the art, and generalize well against scaling and rotation, perspective transformation, non-rigid deformation, and illumination changes. Our descriptors are efficient to compute and amenable to modern GPUs, and are publicly available.",
"The complementary nature of color and depth synchronized information acquired by low cost RGBD sensors poses new challenges and design opportunities in several applications and research areas. Here, we focus on background subtraction for moving object detection, which is the building block for many computer vision applications, being the first relevant step for subsequent recognition, classification, and activity analysis tasks. The aim of this paper is to describe a novel benchmarking framework that we set up and made publicly available in order to evaluate and compare scene background modeling methods for moving object detection on RGBD videos. The proposed framework involves the largest RGBD video dataset ever made for this specific purpose. The 33 videos span seven categories, selected to include diverse scene background modeling challenges for moving object detection. Seven evaluation metrics, chosen among the most widely used, are adopted to evaluate the results against a wide set of pixel-wise ground truths. Moreover, we present a preliminary analysis of results, devoted to assess to what extent the various background modeling challenges pose troubles to background subtraction methods exploiting color and depth information.",
"",
"Conventional change detection methods require a large number of images to learn background models or depend on tedious pixel-level labeling by humans. In this paper, we present a weakly supervised approach that needs only image-level labels to simultaneously detect and localize changes in a pair of images. To this end, we employ a deep neural network with DAG topology to learn patterns of change from image-level labeled training data. On top of the initial CNN activations, we define a CRF model to incorporate the local differences and context with the dense connections between individual pixels. We apply a constrained mean-field algorithm to estimate the pixel-level labels, and use the estimated labels to update the parameters of the CNN in an iterative EM framework. This enables imposing global constraints on the observed foreground probability mass function. Our evaluations on four benchmark datasets demonstrate superior detection and localization performance.",
"We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.",
"In this paper we show how to learn directly from image data (i.e., without resorting to manually-designed features) a general similarity function for comparing image patches, which is a task of fundamental importance for many computer vision problems. To encode such a function, we opt for a CNN-based model that is trained to account for a wide variety of changes in image appearance. To that end, we explore and study multiple neural network architectures, which are specifically adapted to this task. We show that such an approach can significantly outperform the state-of-the-art on several problems and benchmark datasets."
]
}
|
1811.11985
|
2902422922
|
This paper presents a novel semantic change detection scheme that requires only weak supervision. A straightforward approach to this task is to train a semantic change detection network directly on a large-scale dataset in an end-to-end manner. However, a dedicated dataset for this new task, which is usually labor-intensive and time-consuming to build, becomes indispensable. To avoid this problem, we propose to train this kind of network from existing datasets by dividing the task into change detection and semantic extraction. Furthermore, differences in camera viewpoint, for example between images of the same scene captured by a vehicle-mounted camera at different times, usually pose a challenge to the change detection task. To address this challenge, we propose a new siamese network structure that introduces a correlation layer. In addition, we create a publicly available dataset for semantic change detection to evaluate the proposed method. Experimental results verify both the robustness of the proposed networks to viewpoint differences in change detection and their effectiveness for semantic change detection.
|
Recently, several change detection methods using vehicular imagery have been proposed to update city models for autonomous driving @cite_21 @cite_39 @cite_5 . @cite_39 proposed a change detection method that computes differences between feature maps extracted from the input images by a CNN such as VGG @cite_19 , trained on large-scale image recognition datasets, and refines the coarse detection results using superpixel segmentation @cite_30 . The work by @cite_21 tackles the same problem of viewpoint changes between different times in an end-to-end manner, using depth maps estimated from multi-view images with a CNN. For the single-view setting, a network based on dense optical flow has also been proposed @cite_5 .
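In the spirit of the feature-differencing approach of @cite_39 : compare deep features of the two images and upsample the distance map. The encoder below is a stand-in for pretrained VGG convolutional layers, the threshold is arbitrary, and the superpixel refinement step is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for a pretrained recognition network (e.g. VGG conv layers).
features = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
)

img_t0 = torch.randn(1, 3, 128, 128)
img_t1 = torch.randn(1, 3, 128, 128)

f0, f1 = features(img_t0), features(img_t1)
dist = (f0 - f1).pow(2).sum(dim=1, keepdim=True).sqrt()  # per-location L2 distance
coarse = F.interpolate(dist, size=(128, 128), mode="bilinear", align_corners=False)
change = coarse > coarse.mean()   # crude threshold; @cite_39 instead refines the
print(change.float().mean())      # coarse map with superpixel segmentation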
|
{
"cite_N": [
"@cite_30",
"@cite_21",
"@cite_39",
"@cite_19",
"@cite_5"
],
"mid": [
"1999478155",
"2415309533",
"2317688867",
"1686810756",
"2774991885"
],
"abstract": [
"This paper addresses the problem of segmenting an image into regions. We define a predicate for measuring the evidence for a boundary between two regions using a graph-based representation of the image. We then develop an efficient segmentation algorithm based on this predicate, and show that although this algorithm makes greedy decisions it produces segmentations that satisfy global properties. We apply the algorithm to image segmentation using two different kinds of local neighborhoods in constructing the graph, and illustrate the results with both real and synthetic images. The algorithm runs in time nearly linear in the number of graph edges and is also fast in practice. An important characteristic of the method is its ability to preserve detail in low-variability image regions while ignoring detail in high-variability regions.",
"We propose a system for performing structural change detection in street-view videos captured by a vehicle-mounted monocular camera over time. Our approach is motivated by the need for more frequent and efficient updates in the large-scale maps used in autonomous vehicle navigation. Our method chains a multi-sensor fusion SLAM and fast dense 3D reconstruction pipeline, which provide coarsely registered image pairs to a deep Deconvolutional Network (DN) for pixel-wise change detection. We investigate two DN architectures for change detection, the first one is based on the idea of stacking contraction and expansion blocks while the second one is based on the idea of Fully Convolutional Networks. To train and evaluate our networks we introduce a new urban change detection dataset which is an order of magnitude larger than existing datasets and contains challenging changes due to seasonal and lighting variations. Our method outperforms existing literature on this dataset, which we make available to the community, and an existing panoramic change detection dataset, demonstrating its wide applicability.",
"",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"This paper presents a novel method for detecting scene changes from a pair of images with a difference of camera viewpoints using a dense optical flow based change detection network. In the case that camera poses of input images are fixed or known, such as with surveillance and satellite cameras, the pixel correspondence between the images captured at different times can be known. Hence, it is possible to comparatively accurately detect scene changes between the images by modeling the appearance of the scene. On the other hand, in case of cameras mounted on a moving object, such as ground and aerial vehicles, we must consider the spatial correspondence between the images captured at different times. However, it can be difficult to accurately estimate the camera pose or 3D model of a scene, owing to the scene changes or lack of imagery. To solve this problem, we propose a change detection convolutional neural network utilizing dense optical flow between input images to improve the robustness to the difference between camera viewpoints. Our evaluation based on the panoramic change detection dataset shows that the proposed method outperforms state-of-the-art change detection algorithms."
]
}
|
1811.11985
|
2902422922
|
This paper presents a novel semantic change detection scheme that requires only weak supervision. A straightforward approach to this task is to train a semantic change detection network directly on a large-scale dataset in an end-to-end manner. However, a dedicated dataset for this new task, which is usually labor-intensive and time-consuming to build, becomes indispensable. To avoid this problem, we propose to train this kind of network from existing datasets by dividing the task into change detection and semantic extraction. Furthermore, differences in camera viewpoint, for example between images of the same scene captured by a vehicle-mounted camera at different times, usually pose a challenge to the change detection task. To address this challenge, we propose a new siamese network structure that introduces a correlation layer. In addition, we create a publicly available dataset for semantic change detection to evaluate the proposed method. Experimental results verify both the robustness of the proposed networks to viewpoint differences in change detection and their effectiveness for semantic change detection.
|
* Semantic Change Detection There are few studies on semantic change detection, because most change detection studies specify a target domain, such as moving objects or forests, and do not explicitly recognize the semantic classes of changes. The work by @cite_25 does not consider the problem of detecting changes and estimating which of the input images contains the changed objects. @cite_45 @cite_42 detected land surface changes between satellite images. For land surface change detection in satellite images, unlike scene change detection, it is not necessary to estimate which of the input images contains the changed objects, because the change region is the same in both images. For scene change detection, however, this estimation is necessary, because scene objects can appear, disappear, and move.
|
{
"cite_N": [
"@cite_45",
"@cite_42",
"@cite_25"
],
"mid": [
"2891248708",
"2897489431",
"2343427588"
],
"abstract": [
"This paper presents three fully convolutional neural network architectures which perform change detection using a pair of coregistered images. Most notably, we propose two Siamese extensions of fully convolutional networks which use heuristics about the current problem to achieve the best results in our tests on two open change detection datasets, using both RGB and multispectral images. We show that our system is able to learn from scratch using annotated change detection images. Our architectures achieve better performance than previously proposed methods, while being at least 500 times faster than related systems. This work is a step towards efficient processing of data from large scale Earth observation systems such as Copernicus or Landsat.",
"Change detection is one of the main problems in remote sensing, and is essential to the accurate processing and understanding of the large scale Earth observation data available through programs such as Sentinel and Landsat. Most of the recently proposed change detection methods bring deep learning to this context, but openly available change detection datasets are still very scarce, which limits the methods that can be proposed and tested. In this paper we present the first large scale high resolution semantic change detection (HRSCD) dataset, which enables the usage of deep learning methods for semantic change detection. The dataset contains coregistered RGB image pairs, pixel-wise change information and land cover information. We then propose several methods using fully convolutional neural networks to perform semantic change detection. Most notably, we present a network architecture that performs change detection and land cover mapping simultaneously, while using the predicted land cover information to help to predict changes. We also describe a sequential training scheme that allows this network to be trained without setting a hyperparameter that balances different loss functions and achieves the best overall results.",
"Change detection is the study of detecting changes between two different images of a scene taken at different times. By the detected change areas, however, a human cannot understand how different the two images. Therefore, a semantic understanding is required in the change detection research such as disaster investigation. The paper proposes the concept of semantic change detection, which involves intuitively inserting semantic meaning into detected change areas. We mainly focus on the novel semantic segmentation in addition to a conventional change detection approach. In order to solve this problem and obtain a high-level of performance, we propose an improvement to the hypercolumns representation, hereafter known as hypermaps, which effectively uses convolutional maps obtained from convolutional neural networks (CNNs). We also employ multi-scale feature representation captured by different image patches. We applied our method to the TSUNAMI Panoramic Change Detection dataset, and re-annotated the changed areas of the dataset via semantic classes. The results show that our multi-scale hypermaps provided outstanding performance on the re-annotated TSUNAMI dataset."
]
}
|
1811.12083
|
2902027972
|
Probabilistic argumentation allows reasoning about argumentation problems in a way that is well-founded by probability theory. However, in practice, this approach can be severely limited by the fact that probabilities are defined by adding an exponential number of terms. We show that this exponential blowup can be avoided in an interesting fragment of epistemic probabilistic argumentation and that some computational problems that have been considered intractable can be solved in polynomial time. We give efficient convex programming formulations for these problems and explore how far our fragment can be extended without losing tractability.
|
@cite_36 recently introduced a very general probabilistic argumentation framework that generalizes many ideas previously considered in the literature. The authors consider probability functions over subsets of defeasible theories or over subgraphs; the latter approach can be seen as a generalization of the former that abstracts from the structure of arguments. The authors discuss probabilistic labellings, which should not be confused with the probability labellings considered here. Roughly speaking, in @cite_36 , a probabilistic labelling frame corresponds to a probability function over subsets of possible classical labellings over an argumentation framework. These probabilistic labelling frames can then be used to assign probabilities to arguments. In this sense, a probabilistic labelling considered in @cite_36 induces a probability labelling as considered here. However, the focus of @cite_36 is on conceptual questions, and computational problems are not discussed.
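In symbols, and under assumed notation (P, L, and the label "in" are ours, not necessarily those of @cite_36 ): given a probability function P over the classical labellings L of a framework, the induced probability of an argument A being labelled in would be

P_{\mathrm{in}}(A) \;=\; \sum_{L \,:\, L(A) = \mathrm{in}} P(L),

and analogously for out and undec. Computing such marginals naively requires summing over exponentially many labellings, which is exactly the blowup discussed above.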
|
{
"cite_N": [
"@cite_36"
],
"mid": [
"2741916207"
],
"abstract": [
"The combination of argumentation and probability paves the way to new accounts of qualitative and quantitative uncertainty, thereby offering new theoretical and applicative opportunities. Due to a variety of interests, probabilistic argumentation is approached in the literature with different frameworks, pertaining to structured and abstract argumentation, and with respect to diverse types of uncertainty, in particular the uncertainty on the credibility of the premises, the uncertainty about which arguments to consider, and the uncertainty on the acceptance status of arguments or statements. Towards a general framework for probabilistic argumentation, we investigate a labelling-oriented framework encompassing a basic setting for rule-based argumentation and its (semi-) abstract account, along with diverse types of uncertainty. Our framework provides a systematic treatment of various kinds of uncertainty and of their relationships and allows us to back or question assertions from the literature."
]
}
|
1811.12114
|
2950914466
|
We address the multi-satellite scheduling problem with limited observation capacities that arises from the need to observe a set of targets on the Earth's surface using imaging resources installed on a set of satellites. We define and analyze the conflict indicators of all available visible time windows of missions, as well as the feasible time intervals of resources. The problem is then formulated as a mixed integer linear programming model, in which constraints are derived from a careful analysis of the interdependency between feasible time intervals that are eligible for observations. We apply the proposed model to several problem instances that reflect real-world situations. The computational results verify that our approach is effective for obtaining optimal solutions, or solutions of very good quality.
|
Given the complexity of the problem, a large portion of previous work is concerned with single-satellite scheduling and addresses efficiency by providing optimal solutions and upper bounds. A common set of benchmark instances (S5-DPSP) for the SPOT5 satellite scheduling problem was proposed by @cite_15 . Based on this data, a weighted acyclic digraph model was formulated by @cite_14 and solved with a label-setting shortest path algorithm. Alternatively, formulations as generalized knapsack problems can be solved with a tabu search algorithm or a genetic algorithm. Two 0-1 linear programming models are considered by @cite_7 . Based on valid inequalities arising from node packing and 3-regular independence system polyhedra, a strengthened formulation for SPOT5 daily photograph scheduling is presented by @cite_12 . However, the benchmark instances are provided without considering the constraints imposed by the limited observation time of each target.
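The shared skeleton of these 0-1 formulations can be sketched as a conflict-constrained selection model; the weights w_i, binary selection variables x_i, and pairwise conflicts are simplifying assumptions, and the real formulations add stereo-pair, data-flow, and recording-capacity constraints:

\max \sum_{i} w_i x_i
\quad \text{s.t.} \quad
x_i + x_j \le 1 \ \ \text{for each conflicting pair } (i, j),
\qquad x_i \in \{0, 1\},

where x_i selects candidate photograph i and w_i is its weight; capacity constraints of the knapsack form \sum_i c_i x_i \le C bound recording memory and instantaneous data flow.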
|
{
"cite_N": [
"@cite_15",
"@cite_14",
"@cite_12",
"@cite_7"
],
"mid": [
"1786712251",
"2050724929",
"2058114259",
"1975845493"
],
"abstract": [
"The daily management of an earth observation satellite is a challenging combinatorial optimization problem. This problem can be roughly stated as follows: given (1) a set of candidate images for the next day, each one associated with a weight reflecting its importance, (2) a set of imperative constraints expressing physical limitations (no overlapping images, sufficient transition times, bounded instantaneous data flow and recording capacity), select a subset of candidates which meets all the constraints and maximizes the sum of the weights of the selected candidates. It can be easily cast in variants of the CSP, ILP or SAT frameworks. As a benchmark, we propose to the CONSTRAINTS community a set of instances, which have been produced from a simulator of the order book of the future satellite SPOT5. The fact that only some of them have been optimally solved should make them very attractive.",
"We consider a satellite following orbits around the earth in order to take shots corresponding to images requested by various customers. The daily operations of such a satellite consist of defining a feasible and satisfactory shot sequence. This problem involves both combinatorial and multiple criteria difficulties. Indeed, the number of feasible shot sequences grows significantly with the number of images asked for, and the evaluation of a shot sequence is based on several conflicting criteria. We propose to formulate this problem as the selection of a multiple criteria path in a graph without circuit. Our approach for solving this problem involves two stages: generation of efficient paths and selection of a satisfactory path using a multiple criteria interactive procedure.",
"Earth observation satellites, such as the SPOT 5, take photographs of the earth according to consumers' demands. Obtaining a good schedule for the photographs is a combinatorial optimization problem known in the literature as the daily photograph scheduling problem (DPSP). The DPSP consists of selecting a subset of photographs, from a set of candidates, to different cameras, maximizing a profit function and satisfying a large number of constraints. Commercial solvers, with standard integer programming formulations, are not able to solve some DPSP real instances available in the literature. In this paper we present a strengthened formulation for the DPSP, based on valid inequalities arising in node packing and 3-regular independence system polyhedra. This formulation was able, with a commercial solver, to solve to optimality all those instances in a short computation time.",
"In this paper, we compare several 0-1 linear programs for solving the satellite mission planning problem. We prove that one of them presents a smaller integrality gap. Our explanation is based on stable set polytope formulations for perfect graphs."
]
}
|
1811.12114
|
2950914466
|
We address the multi-satellite scheduling problem with limited observation capacities that arises from the need to observe a set of targets on the Earth's surface using imaging resources installed on a set of satellites. We define and analyze the conflict indicators of all available visible time windows of missions, as well as the feasible time intervals of resources. The problem is then formulated as a mixed integer linear programming model, in which constraints are derived from a careful analysis of the interdependency between feasible time intervals that are eligible for observations. We apply the proposed model to several different problem instances that reflect real-world situations. The computational results verify that our approach is effective for obtaining optimum solutions or solutions of very good quality.
|
@cite_13 propose a greedy algorithm and a genetic algorithm based on the assumption that there is only one resource and one observation window for every mission. A single-satellite, single-orbit scheduling problem has been addressed with a tabu search heuristic, an adaptive meta-heuristic @cite_8 , and a 0-1 linear programming model. Another 0-1 model, based on preprocessing the observation segments, is discussed by @cite_10 . The problem of maximizing the total amount of downloaded data is addressed with a mixed-integer programming model and an iterative algorithm. Several publications also treat single-satellite scheduling as a machine scheduling problem with operating-time-window constraints, which is then solved heuristically. By considering the setup time between two consecutive observations, @cite_2 introduce the selecting and scheduling problem for an agile Earth observation satellite. @cite_9 take the limited time window and transition time constraints into account.
|
{
"cite_N": [
"@cite_13",
"@cite_8",
"@cite_9",
"@cite_2",
"@cite_10"
],
"mid": [
"2148027993",
"2605409691",
"2185236731",
"2110599800",
"2040890276"
],
"abstract": [
"This paper describes three approaches to assigning tasks to earth observing satellites EOS. A fast and simple priority dispatch method is described and shown to produce acceptable schedules most of the time. A look ahead algorithm is then introduced that outperforms the dispatcher by about 12 with only a small increase in run time. These algorithms set the stage for the introduction of a genetic algorithm that uses job permutations as the population. The genetic approach presented here is novel in that it uses two additional binary variables, one to allow the dispatcher to occasionally skip a job in the queue and another to allow the dispatcher to occasionally allocate the worst position to the job. These variables are included in the recombination step in a natural way. The resulting schedules improve on the look ahead by as much as 15 at times and 3 on average. We define and use the \"window-constrained packing\" problem to model the bare bones of the EOS scheduling problem.",
"Abstract Agile satellites belong to the new generation of satellites with three degrees of freedom for acquiring images on the Earth. As a result, they have longer visible time windows for the requested targets. An image shot can be conducted at any time in the window if and only if the time left is sufficient for the fulfillment of the imaging process. For an agile satellite, a different observation time means a different image angle, thus defining a different transition time from its neighboring tasks. Therefore, the setup time for each imaging process depends on the selection of its observation start time, making the problem a time-dependent scheduling problem. To solve it, we develop a metaheuristic, called adaptive large neighborhood search (ALNS), thus creating a conflict-free observational timeline. ALNS is a local search framework in which a number of simple operators compete to modify the current solution. In our ALNS implementation, we define six removal operators and three insertion operators. At each iteration, a pair of operators is selected to destroy the current solution and generate a new solution with a large collection of variables modified. Time slacks are introduced to confine the propagation of the time-dependent constraint of transition time. Computational experiments show that the ALNS metaheuristic performs effectively, fulfilling more tasks with a good robustness.",
"Earth Observation Satellite (EOS) scheduling is an important oversubscribed constraint optimization problem. Permutation-based scheduling methods have recently been shown to be effective on these problems. However, the new agile EOS satellites present additional scheduling complexity because they allow image acquisition over a window of possible observation times. Constraint propagation algorithms have been successfully applied in traditional local search methods for these problems. In this paper, we describe a synthesis of permutation-based search and constraint propagation for agile EOS scheduling. Our approach incorporates the advantages of both techniques. We obtain the large neighbourhood behaviour of permutation search for oversubscribed resource scheduling problems. As well, we exploit the power of constraint propagation to retain as much flexibility as possible while building the schedule. We investigate different local optimization algorithms (including hill-climbing, simulated annealing and squeaky wheel optimization) coupled with constraint propagation over image acquisition time windows. We compare our method to recent permutation-based methods for non-agile EOS scheduling which rely upon a greedy scheduler for assigning image acquisition times. Experiments are performed on synthetic EOS data sets using both uniform random image targets and actual urban image target sets. We measure both schedule quality and solution degradation as new image requests are added dynamically to the problem. Our results suggest that permutation-based search coupled with constraint propagation works very well for agile EOS scheduling.",
"Abstract This article concerns the problem of managing the new generation of Agile Earth Observing Satellites (AEOS). This kind of satellites is presently studied by the French Centre National d'Etudes Spatiales (PLEIADES project). The mission of an Earth Observing Satellite is to acquire images of specified areas on the Earth surface, in response to observation requests from customers. Whereas non-agile satellites such as SPOT5 have only one degree of freedom for acquiring images, the new generation satellites have three, giving opportunities for a more efficient use of the satellite imaging capabilities. Counterwise to this advantage, the selection and scheduling of observations becomes significantly more difficult, due to the larger search space for potential solutions. Hence, selecting and scheduling observations of agile satellites is a highly combinatorial problem. This article sets out the overall problem and analyses its difficulties. Then it presents different methods which have been investigated in order to solve a simplified version of the complete problem: a greedy algorithm, a dynamic programming algorithm, a constraint programming approach and a local search method.",
"This paper studies an image collection planning problem for a Korean satellite, KOMPSAT-2 (KOrea Multi-Purpose SATellite-2). KOMPSAT-2 has the mission goal of maximizing image acquisition in time and quality requested by customers and operates under several complicating conditions. One of the characteristics in KOMPSAT-2 is its strip mode operation, in which segments of continuous-observation areas with known sizes are captured one at a time. In this paper, we regard the segment as a group of adjoining geographical square regions (scenes), whose size must also be determined. Thus, the problem involves the determination of proper segment lengths as well as an image collection schedule. We present a binary integer programming model for this problem in a multi-orbit long-term planning environment and provide a heuristic solution approach based on the Lagrangian relaxation and subgradient methods. We also present the results of our computational experiment based on randomly generated data."
]
}
|
1811.12114
|
2950914466
|
We address the multi-satellite scheduling problem with limited observation capacities that arises from the need to observe a set of targets on the Earth's surface using imaging resources installed on a set of satellites. We define and analyze the conflict indicators of all available visible time windows of missions, as well as the feasible time intervals of resources. The problem is then formulated as a mixed integer linear programming model, in which constraints are derived from a careful analysis of the interdependency between feasible time intervals that are eligible for observations. We apply the proposed model to several different problem instances that reflect real-world situations. The computational results verify that our approach is effective for obtaining optimum solutions or solutions of very good quality.
|
In comparison to the single-satellite scheduling problem, the use of multiple satellites gives more flexibility and is thus more challenging. @cite_6 @cite_11 @cite_4 use graph representations to formulate the problem, for which dynamic programming and ant colony optimization algorithms are proposed to produce near-optimal solutions. Simple sequential missions with conflicts can easily be represented as graphs. However, if the problem involves multiple satellites, the visibility fields of different resources may overlap. Furthermore, several targets may be in the field of view of the same resource simultaneously, and a target may be observed by more than one resource at the same time. The visible time windows thus overlap heavily during the scheduling period, which makes the combinatorial character of the problem more prominent and ultimately renders a uniform model of the problem difficult (a minimal MILP sketch in the spirit of the abstract is given below).
|
{
"cite_N": [
"@cite_4",
"@cite_6",
"@cite_11"
],
"mid": [
"2186460581",
"2087326379",
"2082026440"
],
"abstract": [
"The development of appropriate project management techniques for Research and Development (R&D) projects has received significant academic and practical attention over the past few decades. Project managers typically face the problem of allocating resources and scheduling activities, for which the underlying combinatorial problem is NP-hard. The inherent uncertainty in many R&D environments increases the complexity of the problem.",
"Satellite observation scheduling plays a significant role in improving the efficiency of satellite observation systems. Although extensive scheduling algorithms have been proposed for the satellite observation scheduling problem (SOSP), the task clustering strategy has not been taken into account up to now. This paper presents a novel two-phase based scheduling method with the consideration of task clustering for solving SOSP. This method comprises two phases: a task clustering phase and a task scheduling phase. In the task clustering phase, we construct a task clustering graph model and use an improved minimum clique partition algorithm to obtain cluster-tasks. In the task scheduling phase, based on overall tasks and obtained cluster-tasks, we construct an acyclic directed graph model and utilize a hybrid ant colony optimization coming with a mechanism of local search, called ACO-LS, to produce optimal or near optimal schedules. Extensive experimental simulations demonstrate the efficiency of the proposed scheduling method.",
"In this paper, we develop models and algorithms for solving the single-satellite, multi-ground station communication scheduling problem, with the objective of maximizing the total amount of data downloaded from space. With the growing number of small satellites gathering large quantities of data in space and seeking to download this data to a capacity-constrained ground station network, effective scheduling is critical to mission success. Our goal in this research is to develop tools that yield high-quality schedules in a timely fashion while accurately modeling on-board satellite energy and data dynamics as well as realistic constraints of the space environment and ground network. We formulate an under-constrained mixed integer program (MIP) to model the problem. We then introduce an iterative algorithm that progressively tightens the constraints of this model to obtain a feasible and thus optimal solution. Computational experiments are conducted on diverse real-world data sets to demonstrate tractability and solution quality. Additional experiments on a broad test bed of contrived problem instances are used to test the boundaries of tractability for applying this approach to other problem domains. Our computational results suggest that our approach is viable for real-world instances, as well as providing a strong foundation for more complex problems with multiple satellites and stochastic conditions."
]
}
|
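To make the mixed integer linear programming formulation referred to above concrete, here is a minimal sketch in LaTeX. The notation is an assumption of ours rather than the authors' exact model: the binary variable x_{ij} selects visible time window j in W_i for target i in T, w_i is the target's weight, and the conflict set C stands in for the interdependency analysis of feasible time intervals described in the abstract.

% A minimal time-window observation MILP (illustrative notation, not the paper's model).
\begin{align}
  \max\quad & \sum_{i \in T} w_i \sum_{j \in W_i} x_{ij} && \text{total weight of observed targets} \\
  \text{s.t.}\quad & \sum_{j \in W_i} x_{ij} \le 1 \quad \forall i \in T && \text{each target observed at most once} \\
  & x_{ij} + x_{kl} \le 1 \quad \forall ((i,j),(k,l)) \in C && \text{conflicting windows are mutually exclusive} \\
  & x_{ij} \in \{0,1\} && \text{binary window selection}
\end{align}

A model of this shape is solvable by off-the-shelf MILP solvers on moderate instances; judging from the abstract, the substance of the cited work lies in how the conflict set and the feasible time intervals are derived.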
1811.12182
|
2902603613
|
The widespread use of mobile devices has facilitated the emergence of many new applications and services. Among them are location-based services (LBS) that provide services based on the user's location. Several techniques have been presented to enable LBS even in indoor environments, where the Global Positioning System (GPS) has low localization accuracy. These methods use environment measurements (like Channel State Information (CSI) or Received Signal Strength (RSS)) for user localization. In this paper, we use CSI and a novel deep learning algorithm to design a robust and efficient system for indoor localization. More precisely, we use a supervised autoencoder (SAE) to model the environment using the data collected during the training phase. Then, during the testing phase, we use the trained model and estimate the coordinates of the unknown point by checking different possible labels. Unlike previous fingerprinting approaches, in this work we do not store the CSI/RSS fingerprints; instead, we model the environment with only a single SAE. The performance of the proposed scheme is then evaluated in two indoor environments and compared with that of similar approaches.
|
Fingerprinting-based approaches usually require a training phase to survey the floor plan and a testing phase to search for the best-matched fingerprints for location estimation @cite_16 . These fingerprints can be built with WiFi @cite_8 , RFID @cite_3 and Bluetooth technologies. Owing to the availability of WiFi APs in most environments, WiFi-based schemes are more common than the others, and most previous WiFi-based work relied on RSSI. RADAR @cite_6 is the first deterministic WiFi-based system; it builds RSS fingerprints using one or more APs and applies K-Nearest Neighbors (KNN) for position estimation @cite_16 (a minimal sketch of this idea is given below). Horus @cite_8 is another RSS-based scheme, in which the RSS from an AP is modeled as a random variable over time and space @cite_16 . It identifies different causes of wireless channel variation and uses probabilistic techniques to achieve its high accuracy.
|
{
"cite_N": [
"@cite_16",
"@cite_6",
"@cite_3",
"@cite_8"
],
"mid": [
"",
"2170102584",
"2163993204",
"2051376734"
],
"abstract": [
"",
"The proliferation of mobile computing devices and local-area wireless networks has fostered a growing interest in location-aware systems and services. In this paper we present RADAR, a radio-frequency (RF)-based system for locating and tracking users inside buildings. RADAR operates by recording and processing signal strength information at multiple base stations positioned to provide overlapping coverage in the area of interest. It combines empirical measurements with signal propagation modeling to determine user location and thereby enable location-aware services and applications. We present experimental results that demonstrate the ability of RADAR to estimate user location with a high degree of accuracy.",
"Growing convergence among mobile computing devices and embedded technology sparks the development and deployment of \"context-aware\" applications, where location is the most essential context. We present LANDMARC, a location sensing prototype system that uses Radio Frequency Identification (RFID) technology for locating objects inside buildings. The major advantage of LANDMARC is that it improves the overall accuracy of locating objects by utilizing the concept of reference tags. Based on experimental analysis, we demonstrate that active RFID is a viable and cost-effective candidate for indoor location sensing. Although RFID is not designed for indoor location sensing, we point out three major features that should be added to make RFID technologies competitive in this new and growing market.",
"We present the design and implementation of the Horus WLAN location determination system. The design of the Horus system aims at satisfying two goals: high accuracy and low computational requirements. The Horus system identifies different causes for the wireless channel variations and addresses them to achieve its high accuracy. It uses location-clustering techniques to reduce the computational requirements of the algorithm. The lightweight Horus algorithm helps in supporting a larger number of users by running the algorithm at the clients.We discuss the different components of the Horus system and its implementation under two different operating systems and evaluate the performance of the Horus system on two testbeds. Our results show that the Horus system achieves its goal. It has an error of less than 0.6 meter on the average and its computational requirements are more than an order of magnitude better than other WLAN location determination systems. Moreover, the techniques developed in the context of the Horus system are general and can be applied to other WLAN location determination systems to enhance their accuracy. We also report lessons learned from experimenting with the Horus system and provide directions for future work."
]
}
|
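As a concrete illustration of the RSS-fingerprinting-plus-KNN idea attributed to RADAR in the record above, here is a minimal Python sketch. The Euclidean distance metric, the value of k and the centroid estimate are illustrative assumptions, not the published system.

import numpy as np

def knn_localize(rss_query, fingerprints, locations, k=3):
    # Distance from the query RSS vector to every surveyed fingerprint.
    dists = np.linalg.norm(fingerprints - rss_query, axis=1)
    # Indices of the k closest fingerprints from the training survey.
    nearest = np.argsort(dists)[:k]
    # Estimate the position as the centroid of their known coordinates.
    return locations[nearest].mean(axis=0)

# Toy usage: 4 surveyed points, 3 access points (all values made up).
fingerprints = np.array([[-40.0, -70.0, -60.0], [-45.0, -65.0, -62.0],
                         [-70.0, -40.0, -55.0], [-72.0, -42.0, -50.0]])
locations = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 0.0], [5.0, 1.0]])
print(knn_localize(np.array([-44.0, -68.0, -61.0]), fingerprints, locations, k=2))

Probabilistic schemes such as Horus replace the distance computation with a per-AP signal-strength distribution learned at each surveyed point.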
1811.12047
|
2947107456
|
Computer vision is difficult, partly because the desired mathematical function connecting input and output data is often complex, fuzzy and thus hard to learn. Coarse-to-fine (C2F) learning is a promising direction, but it remains unclear how it can be applied to a wide range of vision problems. This paper presents a generalized C2F framework by making two technical contributions. First, we provide a unified way of C2F propagation, in which the coarse prediction (a class vector, a detected box, a segmentation mask, etc.) is encoded into a dense (pixel-level) matrix and concatenated to the original input, so that the fine model takes the same design as the coarse model but sees additional information. Second, we present a progressive training strategy which starts with feeding the ground-truth instead of the coarse output into the fine model, and gradually increases the fraction of coarse output, so that at the end of training the fine model is ready for testing. We also relate our approach to curriculum learning by showing that data difficulty keeps increasing during the training process. We apply our framework to three vision tasks including image classification, object localization and semantic segmentation, and demonstrate consistent accuracy gain compared to the baseline training strategy.
|
Deep learning @cite_7 , in particular deep convolutional neural networks, has been dominating the field of computer vision. The fundamental idea is to build a hierarchical structure to learn complicated visual patterns from a large-scale database @cite_14 . As the number of network layers increases from tens @cite_0 @cite_12 @cite_5 to hundreds @cite_8 @cite_40 , the network's representation ability becomes stronger, but training these networks becomes more and more challenging. Various techniques have been proposed to improve numerical stability @cite_10 @cite_43 and alleviate over-fitting @cite_42 , but the transferability from training data to testing data remains unsatisfactory. It has been pointed out that this issue is mainly caused by the excessive complexity of deep networks, which allows the limited amount of training data to be interpreted in unexpected ways @cite_36 . There exist two types of solutions, namely curriculum learning and coarse-to-fine learning (a sketch of the paper's C2F propagation is given below).
|
{
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_36",
"@cite_42",
"@cite_0",
"@cite_43",
"@cite_40",
"@cite_5",
"@cite_10",
"@cite_12"
],
"mid": [
"2108598243",
"",
"2949650786",
"2432717477",
"2095705004",
"",
"2949117887",
"",
"2950179405",
"",
"1686810756"
],
"abstract": [
"The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.",
"",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6 to 93.2 and from 88.0 to 93.8 on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.",
"Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different \"thinned\" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.",
"",
"Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters.",
"",
"We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.",
"",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision."
]
}
|
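The abstract of this record describes a unified C2F propagation in which the coarse prediction is encoded into a dense pixel-level matrix and concatenated to the original input. Below is a minimal Python sketch of that interface; broadcasting a class vector uniformly over all pixels is our guess at one admissible dense encoding, not necessarily the paper's.

import numpy as np

def c2f_input(image, coarse_pred):
    # image: (H, W, C); coarse_pred: either a per-pixel map of shape
    # (H, W, K), e.g. a segmentation probability map, or a length-K vector.
    h, w, _ = image.shape
    if coarse_pred.ndim == 1:
        # Encode a class vector densely by tiling it over every pixel.
        dense = np.broadcast_to(coarse_pred, (h, w, coarse_pred.shape[0]))
    else:
        dense = coarse_pred
    # The fine model sees the original input plus the encoded prediction,
    # so it can keep the same architecture as the coarse model.
    return np.concatenate([image, dense], axis=-1)

# Toy usage: a 4x4 RGB input with a 10-way coarse class distribution.
x = np.random.rand(4, 4, 3)
p = np.full(10, 0.1)
print(c2f_input(x, p).shape)  # -> (4, 4, 13)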
1811.12047
|
2947107456
|
Computer vision is difficult, partly because the desired mathematical function connecting input and output data is often complex, fuzzy and thus hard to learn. Coarse-to-fine (C2F) learning is a promising direction, but it remains unclear how it can be applied to a wide range of vision problems. This paper presents a generalized C2F framework by making two technical contributions. First, we provide a unified way of C2F propagation, in which the coarse prediction (a class vector, a detected box, a segmentation mask, etc.) is encoded into a dense (pixel-level) matrix and concatenated to the original input, so that the fine model takes the same design as the coarse model but sees additional information. Second, we present a progressive training strategy which starts with feeding the ground-truth instead of the coarse output into the fine model, and gradually increases the fraction of coarse output, so that at the end of training the fine model is ready for testing. We also relate our approach to curriculum learning by showing that data difficulty keeps increasing during the training process. We apply our framework to three vision tasks including image classification, object localization and semantic segmentation, and demonstrate consistent accuracy gain compared to the baseline training strategy.
|
The basic idea of curriculum learning @cite_23 is to gradually increase the difficulty of the training data, so that the model can be optimized in a faster and/or more stable manner. The idea was first brought up by analogy with how humans are taught a concept and was verified to be effective for computer algorithms as well @cite_13 . It was later applied to a wide range of learning tasks, including visual recognition @cite_38 @cite_11 and generation @cite_45 , natural language processing @cite_6 @cite_16 and reinforcement learning @cite_44 @cite_31 . Curriculum learning was theoretically shown to be a good choice in transfer learning @cite_26 , multi-task learning @cite_29 and sequential learning @cite_19 scenarios, and there have been discussions on the principles of designing curricula for better performance @cite_35 . A similar idea (gradually increasing training difficulty) is also adopted in online hard example mining @cite_32 @cite_21 , although the latter often starts with a regular data distribution that is gradually shifted towards difficult training data. The major drawback of curriculum learning lies in the need to evaluate the difficulty of training data, which is not easy in general (a minimal curriculum sketch is given below). This paper provides a framework to bypass this problem.
|
{
"cite_N": [
"@cite_38",
"@cite_35",
"@cite_26",
"@cite_29",
"@cite_21",
"@cite_32",
"@cite_6",
"@cite_44",
"@cite_19",
"@cite_45",
"@cite_23",
"@cite_31",
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"2751360571",
"",
"2785355717",
"2952897246",
"2795832139",
"",
"2410983263",
"2204302769",
"2950304420",
"2883739162",
"",
"2751516180",
"2176263492",
"2166493072",
"2886327376"
],
"abstract": [
"Visual attributes, from simple objects (e.g., backpacks, hats) to soft-biometrics (e.g., gender, height, clothing) have proven to be a powerful representational approach for many applications such as image description and human identification. In this paper, we introduce a novel method to combine the advantages of both multi-task and curriculum learning in a visual attribute classification framework. Individual tasks are grouped based on their correlation so that two groups of strongly and weakly correlated tasks are formed. The two groups of tasks are learned in a curriculum learning setup by transferring the acquired knowledge from the strongly to the weakly correlated. The learning process within each group though, is performed in a multi-task classification setup. The proposed method learns better and converges faster than learning all the tasks in a typical multi-task learning paradigm. We demonstrate the effectiveness of our approach on the publicly available, SoBiR, VIPeR and PETA datasets and report state-of-the-art results across the board.",
"",
"Our first contribution in this paper is a theoretical investigation of curriculum learning in the context of stochastic gradient descent when optimizing the least squares loss function. We prove that the rate of convergence of an ideal curriculum learning method in monotonically increasing with the difficulty of the examples, and that this increase in convergence rate is monotonically decreasing as training proceeds. In our second contribution we analyze curriculum learning in the context of training a CNN for image classification. Here one crucial problem is the means to achieve a curriculum. We describe a method which infers the curriculum by way of transfer learning from another network, pre-trained on a different task. While this approach can only approximate the ideal curriculum, we observe empirically similar behavior to the one predicted by the theory, namely, a significant boost in convergence speed at the beginning of training. When the task is made more difficult, improvement in generalization performance is observed. Finally, curriculum learning exhibits robustness against unfavorable conditions such as strong regularization.",
"Sharing information between multiple tasks enables algorithms to achieve good generalization performance even from small amounts of training data. However, in a realistic scenario of multi-task learning not all tasks are equally related to each other, hence it could be advantageous to transfer information only between the most related tasks. In this work we propose an approach that processes multiple tasks in a sequence with sharing between subsequent tasks instead of solving all tasks jointly. Subsequently, we address the question of curriculum learning of tasks, i.e. finding the best order of tasks to be learned. Our approach is based on a generalization bound criterion for choosing the task order that optimizes the average expected classification performance over all tasks. Our experimental results show that learning multiple related tasks sequentially can be more effective than learning them jointly, the order in which tasks are being solved affects the overall performance, and that our model is able to automatically discover the favourable order of tasks.",
"State-of-the-art techniques of artificial intelligence, in particular deep learning, are mostly data-driven. However, collecting and manually labeling a large scale dataset is both difficult and expensive. A promising alternative is to introduce synthesized training data, so that the dataset size can be significantly enlarged with little human labor. But, this raises an important problem in active vision: given an infinite data space, how to effectively sample a finite subset to train a visual classifier? This paper presents an approach for learning from synthesized data effectively. The motivation is straightforward -- increasing the probability of seeing difficult training data. We introduce a module named SampleAhead to formulate the learning process into an online communication between a classifier and a sampler , and update them iteratively. In each round, we adjust the sampling distribution according to the classification results, and train the classifier using the data sampled from the updated distribution. Experiments are performed by introducing synthesized images rendered from ShapeNet models to assist PASCAL3D+ classification. Our approach enjoys higher classification accuracy, especially in the scenario of a limited number of training samples. This demonstrates its efficiency in exploring the infinite data space.",
"",
"Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be shortsighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity, length as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.",
"The Neural Turing Machine (NTM) is more expressive than all previously considered models because of its external memory. It can be viewed as a broader effort to use abstract external Interfaces and to learn a parametric model that interacts with them. The capabilities of a model can be extended by providing it with proper Interfaces that interact with the world. These external Interfaces include memory, a database, a search engine, or a piece of software such as a theorem verifier. Some of these Interfaces are provided by the developers of the model. However, many important existing Interfaces, such as databases and search engines, are discrete. We examine feasibility of learning models to interact with discrete Interfaces. We investigate the following discrete Interfaces: a memory Tape, an input Tape, and an output Tape. We use a Reinforcement Learning algorithm to train a neural network that interacts with such Interfaces to solve simple algorithmic tasks. Our Interfaces are expressive enough to make our model Turing complete.",
"Recurrent Neural Networks can be trained to produce sequences of tokens given some input, as exemplified by recent results in machine translation and image captioning. The current approach to training them consists of maximizing the likelihood of each token in the sequence given the current (recurrent) state and the previous token. At inference, the unknown previous token is then replaced by a token generated by the model itself. This discrepancy between training and inference can yield errors that can accumulate quickly along the generated sequence. We propose a curriculum learning strategy to gently change the training process from a fully guided scheme using the true previous token, towards a less guided scheme which mostly uses the generated token instead. Experiments on several sequence prediction tasks show that this approach yields significant improvements. Moreover, it was used successfully in our winning entry to the MSCOCO image captioning challenge, 2015.",
"In this paper we introduce Curriculum GANs, a curriculum learning strategy for training Generative Adversarial Networks that increases the strength of the discriminator over the course of training, thereby making the learning task progressively more difficult for the generator. We demonstrate that this strategy is key to obtaining state-of-the-art results in image generation. We also show evidence that this strategy may be broadly applicable to improving GAN training in other data modalities.",
"",
"In this paper, we propose a novel framework for training vision-based agent for First-Person Shooter (FPS) Game, in particular Doom. Our framework combines the state-of-the-art reinforcement learning approach (Asynchronous Advantage Actor-Critic (A3C) model) with curriculum learning. Our model is simple in design and only uses game states from the AI side, rather than using opponents' information. On a known map, our agent won 10 out of the 11 attended games and the champion of Track1 in ViZDoom AI Competition 2016 by a large margin, 35 higher score than the second place.",
"Many natural language processing applications use language models to generate text. These models are typically trained to predict the next word in a sequence, given the previous words and some context such as an image. However, at test time the model is expected to generate the entire sequence from scratch. This discrepancy makes generation brittle, as errors may accumulate along the way. We address this issue by proposing a novel sequence level training algorithm that directly optimizes the metric used at test time, such as BLEU or ROUGE. On three different tasks, our approach outperforms several strong baselines for greedy generation. The method is also competitive when these baselines employ beam search, while being several times faster.",
"We study the empirical strategies that humans follow as they teach a target concept with a simple 1D threshold to a robot.1 Previous studies of computational teaching, particularly the teaching dimension model and the curriculum learning principle, offer contradictory predictions on what optimal strategy the teacher should follow in this teaching task. We show through behavioral studies that humans employ three distinct teaching strategies, one of which is consistent with the curriculum learning principle, and propose a novel theoretical framework as a potential explanation for this strategy. This framework, which assumes a teaching goal of minimizing the learner's expected generalization error at each iteration, extends the standard teaching dimension model and offers a theoretical justification for curriculum learning.",
"In this work, we exploit the task of joint classification and weakly supervised localization of thoracic diseases from chest radiographs, with only image-level disease labels coupled with disease severity-level (DSL) information of a subset. A convolutional neural network (CNN) based attention-guided curriculum learning (AGCL) framework is presented, which leverages the severity-level attributes mined from radiology reports. Images in order of difficulty (grouped by different severity-levels) are fed to CNN to boost the learning gradually. In addition, highly confident samples (measured by classification probabilities) and their corresponding class-conditional heatmaps (generated by the CNN) are extracted and further fed into the AGCL framework to guide the learning of more distinctive convolutional features in the next iteration. A two-path network architecture is designed to regress the heatmaps from selected seed samples in addition to the original classification task. The joint learning scheme can improve the classification and localization performance along with more seed samples for the next iteration. We demonstrate the effectiveness of this iterative refinement framework via extensive experimental evaluations on the publicly available ChestXray14 dataset. AGCL achieves over 5.7 (averaged over 14 diseases) increase in classification AUC and 7 11 increases in Recall Precision for the localization task compared to the state of the art."
]
}
|
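To ground the principle of gradually increasing training difficulty discussed above, here is a minimal Python sketch of a curriculum sampler. The linear stage schedule, the uniform sampling and the precomputed difficulty scores are illustrative assumptions; as the record notes, obtaining reliable difficulty scores is the hard part in practice.

import numpy as np

def curriculum_batches(n_samples, difficulty, n_stages=4, batch_size=32, seed=0):
    # Sort sample indices from easiest to hardest.
    order = np.argsort(difficulty)
    rng = np.random.default_rng(seed)
    for stage in range(1, n_stages + 1):
        # Grow the eligible pool: easiest fraction first, full set at the end.
        pool = order[:max(batch_size, n_samples * stage // n_stages)]
        # Draw one batch per stage (a real training loop would draw many).
        yield stage, rng.choice(pool, size=batch_size)

# Toy usage: 100 samples with random difficulty scores.
scores = np.random.rand(100)
for stage, batch_idx in curriculum_batches(100, scores):
    print(stage, sorted(batch_idx)[:5])

Sampling with replacement is the NumPy default and is acceptable for a sketch; a production sampler would typically shuffle the pool instead.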
1811.12047
|
2947107456
|
Computer vision is difficult, partly because the desired mathematical function connecting input and output data is often complex, fuzzy and thus hard to learn. Coarse-to-fine (C2F) learning is a promising direction, but it remains unclear how it can be applied to a wide range of vision problems. This paper presents a generalized C2F framework by making two technical contributions. First, we provide a unified way of C2F propagation, in which the coarse prediction (a class vector, a detected box, a segmentation mask, etc.) is encoded into a dense (pixel-level) matrix and concatenated to the original input, so that the fine model takes the same design as the coarse model but sees additional information. Second, we present a progressive training strategy which starts with feeding the ground-truth instead of the coarse output into the fine model, and gradually increases the fraction of coarse output, so that at the end of training the fine model is ready for testing. We also relate our approach to curriculum learning by showing that data difficulty keeps increasing during the training process. We apply our framework to three vision tasks including image classification, object localization and semantic segmentation, and demonstrate consistent accuracy gain compared to the baseline training strategy.
|
Another idea, named coarse-to-fine learning, is based on the observation that a vision model can rethink its prediction to amend errors @cite_46 . Researchers have designed several approaches for refining visual recognition in an iterative manner; these approaches can be explained using auto-context @cite_3 or formulated into a fixed-point model @cite_41 . Examples include coarse-to-fine models for image classification @cite_9 , object detection @cite_1 , semantic segmentation @cite_28 , pose estimation @cite_39 , and image captioning @cite_37 . It was verified that joint optimization over the coarse and fine stages boosts performance @cite_27 , which raises the issue of communication between the two stages during training: we would like to feed the coarse-stage output to the fine-stage input, but as long as the coarse model is not yet well optimized, doing so can destabilize optimization (a sketch of the progressive ground-truth-to-coarse mixing is given below). The method proposed in this paper largely alleviates this issue.
|
{
"cite_N": [
"@cite_37",
"@cite_41",
"@cite_28",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_39",
"@cite_27",
"@cite_46"
],
"mid": [
"",
"2134304618",
"2618237340",
"",
"2560920277",
"2122006243",
"2255781698",
"2949385043",
"2221625691"
],
"abstract": [
"",
"In this paper, we propose a simple but effective solution to the structured labeling problem: a fixed-point model. Recently, layered models with sequential classifiers regressors have gained an increasing amount of interests for structural prediction. Here, we design an algorithm with a new perspective on layered models; we aim to find a fixed-point function with the structured labels being both the output and the input. Our approach alleviates the burden in learning multiple different classifiers in different layers. We devise a training strategy for our method and provide justifications for the fixed-point function to be a contraction mapping. The learned function captures rich contextual information and is easy to train and test. On several widely used benchmark datasets, the proposed method observes significant improvement in both performance and efficiency over many state-of-the-art algorithms.",
"Deep neural networks have been widely adopted for automatic organ segmentation from abdominal CT scans. However, the segmentation accuracy of some small organs (e.g., the pancreas) is sometimes below satisfaction, arguably because deep networks are easily disrupted by the complex and variable background regions which occupies a large fraction of the input volume. In this paper, we formulate this problem into a fixed-point model which uses a predicted segmentation mask to shrink the input region. This is motivated by the fact that a smaller input region often leads to more accurate segmentation. In the training process, we use the ground-truth annotation to generate accurate input regions and optimize network weights. On the testing stage, we fix the network parameters and update the segmentation results in an iterative manner. We evaluate our approach on the NIH pancreas segmentation dataset, and outperform the state-of-the-art by more than (4 ), measured by the average Dice-Sorensen Coefficient (DSC). In addition, we report (62.43 ) DSC in the worst case, which guarantees the reliability of our approach in clinical applications.",
"",
"The number of mitoses per tissue area gives an important aggressiveness indication of the invasive breast carcinoma. However, automatic mitosis detection in histology images remains a challenging problem. Traditional methods either employ hand-crafted features to discriminate mitoses from other cells or construct a pixel-wise classifier to label every pixel in a sliding window way. While the former suffers from the large shape variation of mitoses and the existence of many mimics with similar appearance, the slow speed of the later prohibits its use in clinical practice. In order to overcome these shortcomings, we propose a fast and accurate method to detect mitosis by designing a novel deep cascaded convolutional neural network, which is composed of two components. First, by leveraging the fully convolutional neural network, we propose a coarse retrieval model to identify and locate the candidates of mitosis while preserving a high sensitivity. Based on these candidates, a fine discrimination model utilizing knowledge transferred from cross-domain is developed to further single out mitoses from hard mimics. Our approach outperformed other methods by a large margin in 2014 ICPR MITOS-ATYPIA challenge in terms of detection accuracy. When compared with the state-of-the-art methods on the 2012 ICPR MITOSIS data (a smaller and less challenging dataset), our method achieved comparable or better results with a roughly 60 times faster speed.",
"The notion of using context information for solving high-level vision problems has been increasingly realized in the field. However, how to learn an effective and efficient context model, together with the image appearance, remains mostly unknown. The current literature using Markov random fields (MRFs) and conditional random fields (CRFs) often involves specific algorithm design, in which the modeling and computing stages are studied in isolation. In this paper, we propose an auto-context algorithm. Given a set of training images and their corresponding label maps, we first learn a classifier on local image patches. The discriminative probability (or classification confidence) maps by the learned classifier are then used as context information, in addition to the original image patches, to train a new classifier. The algorithm then iterates to approach the ground truth. Auto-context learns an integrated low-level and context model, and is very general and easy to implement. Under nearly the identical parameter setting in the training, we apply the algorithm on three challenging vision applications: object segmentation, human body configuration, and scene region labeling. It typically takes about 30 70 seconds to run the algorithm in testing. Moreover, the scope of the proposed algorithm goes beyond high-level vision. It has the potential to be used for a wide variety of problems of multi-variate labeling.",
"Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.",
"We aim at segmenting small organs (e.g., the pancreas) from abdominal CT scans. As the target often occupies a relatively small region in the input image, deep neural networks can be easily confused by the complex and variable background. To alleviate this, researchers proposed a coarse-to-fine approach, which used prediction from the first (coarse) stage to indicate a smaller input region for the second (fine) stage. Despite its effectiveness, this algorithm dealt with two stages individually, which lacked optimizing a global energy function, and limited its ability to incorporate multi-stage visual cues. Missing contextual information led to unsatisfying convergence in iterations, and that the fine stage sometimes produced even lower segmentation accuracy than the coarse stage. This paper presents a Recurrent Saliency Transformation Network. The key innovation is a saliency transformation module, which repeatedly converts the segmentation probability map from the previous iteration as spatial weights and applies these weights to the current iteration. This brings us two-fold benefits. In training, it allows joint optimization over the deep networks dealing with different input scales. In testing, it propagates multi-stage visual information throughout iterations to improve segmentation accuracy. Experiments in the NIH pancreas segmentation dataset demonstrate the state-of-the-art accuracy, which outperforms the previous best by an average of over 2 . Much higher accuracies are also reported on several small organs in a larger dataset collected by ourselves. In addition, our approach enjoys better convergence properties, making it more efficient and reliable in practice.",
"While feedforward deep convolutional neural networks (CNNs) have been a great success in computer vision, it is important to note that the human visual cortex generally contains more feedback than feedforward connections. In this paper, we will briefly introduce the background of feedbacks in the human visual cortex, which motivates us to develop a computational feedback mechanism in deep neural networks. In addition to the feedforward inference in traditional neural networks, a feedback loop is introduced to infer the activation status of hidden layer neurons according to the \"goal\" of the network, e.g., high-level semantic labels. We analogize this mechanism as \"Look and Think Twice.\" The feedback networks help better visualize and understand how deep neural networks work, and capture visual attention on expected objects, even in images with cluttered background and multiple objects. Experiments on ImageNet dataset demonstrate its effectiveness in solving tasks such as image classification and object localization."
]
}
|
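Finally, the progressive training strategy in the C2F abstract (start the fine model on ground truth, then raise the fraction of coarse output until test-time conditions are reached) can be sketched as a scheduled sampler. The per-sample coin flip and the linear ramp are our assumptions about how the fraction is realized; the paper may schedule or mix differently.

import numpy as np

def fine_stage_input(ground_truth, coarse_output, step, total_steps, rng):
    # The probability of using the coarse output ramps linearly from 0 to 1,
    # so by the end of training the fine model sees test-time inputs only.
    p_coarse = min(1.0, step / total_steps)
    return coarse_output if rng.random() < p_coarse else ground_truth

# Toy usage over a short "training run" with dummy masks.
rng = np.random.default_rng(0)
gt, coarse = np.ones((4, 4)), np.zeros((4, 4))
for t in range(5):
    src = fine_stage_input(gt, coarse, t, 4, rng)
    print(t, "coarse" if src is coarse else "ground-truth")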