aid | mid | abstract | related_work | ref_abstract |
---|---|---|---|---|
1812.06162 | 2903697572 | In an increasing number of domains it has been demonstrated that deep learning models can be trained using relatively large batch sizes without sacrificing data efficiency. However the limits of this massive data parallelism seem to differ from domain to domain, ranging from batches of tens of thousands in ImageNet to batches of millions in RL agents that play the game Dota 2. To our knowledge there is limited conceptual understanding of why these limits to batch size differ or how we might choose the correct batch size in a new domain. In this paper, we demonstrate that a simple and easy-to-measure statistic called the gradient noise scale predicts the largest useful batch size across many domains and applications, including a number of supervised learning datasets (MNIST, SVHN, CIFAR-10, ImageNet, Billion Word), reinforcement learning domains (Atari and Dota), and even generative model training (autoencoders on SVHN). We find that the noise scale increases as the loss decreases over a training run and depends on the model size primarily through improved model performance. Our empirically-motivated theory also describes the tradeoff between compute-efficiency and time-efficiency, and provides a rough model of the benefits of adaptive batch-size training. | There has been a variety of work studying the Neural Network loss landscape and using it to draw conclusions about optimal training. Local properties of the loss landscape are not necessarily a good guide to overall optimal training @cite_47 . The loss tends to be fairly smooth when interpolating between the start and end of training @cite_53 . But noise may be useful early in training @cite_4 @cite_20 , perhaps because it leads to minima that generalize better @cite_27 . | {
"cite_N": [
"@cite_4",
"@cite_53",
"@cite_27",
"@cite_47",
"@cite_20"
],
"mid": [
"2263490141",
"1850240193",
"",
"2963842222",
"2786899637"
],
"abstract": [
"Deep feedforward and recurrent networks have achieved impressive results in many perception and language processing applications. Recently, more complex architectures such as Neural Turing Machines and Memory Networks have been proposed for tasks including question answering and general computation, creating a new set of optimization challenges. In this paper, we explore the low-overhead and easy-to-implement optimization technique of adding annealed Gaussian noise to the gradient, which we find surprisingly effective when training these very deep architectures. Unlike classical weight noise, gradient noise injection is complementary to advanced stochastic optimization algorithms such as Adam and AdaGrad. The technique not only helps to avoid overfitting, but also can result in lower training loss. We see consistent improvements in performance across an array of complex models, including state-of-the-art deep networks for question answering and algorithm learning. We observe that this optimization strategy allows a fully-connected 20-layer deep network to escape a bad initialization with standard stochastic gradient descent. We encourage further application of this technique to additional modern neural architectures.",
"Training neural networks involves solving large-scale non-convex optimization problems. This task has long been believed to be extremely difficult, with fear of local minima and other obstacles motivating a variety of schemes to improve optimization, such as unsupervised pretraining. However, modern neural networks are able to achieve negligible training error on complex tasks, using only direct training with stochastic gradient descent. We introduce a simple analysis technique to look for evidence that such networks are overcoming local optima. We find that, in fact, on a straight path from initialization to solution, a variety of state of the art neural networks never encounter any significant obstacles.",
"",
"Careful tuning of the learning rate, or even schedules thereof, can be crucial to effective neural net training. There has been much recent interest in gradient-based meta-optimization, where one tunes hyperparameters, or even learns an optimizer, in order to minimize the expected loss when the training procedure is unrolled. But because the training procedure must be unrolled thousands of times, the meta-objective must be defined with an orders-of-magnitude shorter time horizon than is typical for neural net training. We show that such short-horizon meta-objectives cause a serious bias towards small step sizes, an effect we term short-horizon bias. We introduce a toy problem, a noisy quadratic cost function, on which we analyze short-horizon bias by deriving and comparing the optimal schedules for short and long time horizons. We then run meta-optimization experiments (both offline and online) on standard benchmark datasets, showing that meta-optimization chooses too small a learning rate by multiple orders of magnitude, even when run with a moderately long time horizon (100 steps) typical of work in the area. We believe short-horizon bias is a fundamental problem that needs to be addressed if meta-optimization is to scale to practical neural net training regimes.",
"It has been experimentally observed that distributed implementations of mini-batch stochastic gradient descent (SGD) algorithms exhibit speedup saturation and decaying generalization ability beyond a particular batch-size. In this work, we present an analysis hinting that high similarity between concurrently processed gradients may be a cause of this performance degradation. We introduce the notion of gradient diversity that measures the dissimilarity between concurrent gradient updates, and show its key role in the performance of mini-batch SGD. We prove that on problems with high gradient diversity, mini-batch SGD is amenable to better speedups, while maintaining the generalization performance of serial (one sample) SGD. We further establish lower bounds on convergence where mini-batch SGD slows down beyond a particular batch-size, solely due to the lack of gradient diversity. We provide experimental evidence indicating the key role of gradient diversity in distributed learning, and discuss how heuristics like dropout, Langevin dynamics, and quantization can improve it."
]
} |
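The row above describes the gradient noise scale as a simple, easy-to-measure statistic that predicts the largest useful batch size. As an illustrative sketch only (the paper's exact estimator may differ), the snippet below shows one standard way to estimate such a noise scale from gradient norms measured at two batch sizes; all function and variable names here are assumptions.

```python
import numpy as np

def estimate_noise_scale(grad_small, grad_big, b_small, b_big):
    """Estimate a gradient noise scale from gradients at two batch sizes.

    grad_small: flattened gradient averaged over a batch of size b_small
    grad_big:   flattened gradient averaged over a batch of size b_big
    Returns an estimate of tr(Sigma) / |G|^2, where Sigma is the per-example
    gradient covariance and G the true gradient. (Hypothetical helper; the
    paper's exact estimator may differ.)
    """
    g_small_sq = float(np.dot(grad_small, grad_small))
    g_big_sq = float(np.dot(grad_big, grad_big))
    # Uses E[|g_B|^2] = |G|^2 + tr(Sigma)/B for a batch of size B,
    # solved from the two measurements.
    g_true_sq = (b_big * g_big_sq - b_small * g_small_sq) / (b_big - b_small)
    trace_sigma = (g_small_sq - g_big_sq) / (1.0 / b_small - 1.0 / b_big)
    return trace_sigma / max(g_true_sq, 1e-12)

# Toy usage with synthetic gradients standing in for real model gradients.
rng = np.random.default_rng(0)
true_grad = rng.normal(size=1000)
def noisy_grad(batch):  # mimic an average of `batch` noisy per-example gradients
    return true_grad + rng.normal(size=1000) / np.sqrt(batch)
print(estimate_noise_scale(noisy_grad(32), noisy_grad(512), 32, 512))
```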
1812.06369 | 2904025731 | As the success of deep learning reaches more grounds, one would like to also envision the potential limits of deep learning. This paper gives a first set of results proving that certain deep learning algorithms fail at learning certain efficiently learnable functions. The results put forward a notion of cross-predictability that characterizes when such failures take place. Parity functions provide an extreme example with a cross-predictability that decays exponentially, while a mere super-polynomial decay of the cross-predictability is shown to be sufficient to obtain failures. Examples in community detection and arithmetic learning are also discussed. Recall that it is known that the class of neural networks (NNs) with polynomial network size can express any function that can be implemented in polynomial time, and that their sample complexity scales polynomially with the network size. The challenge is with the optimization error (the ERM is NP-hard), and the success behind deep learning is to train deep NNs with descent algorithms. The failures shown in this paper apply to training poly-size NNs on function distributions of low cross-predictability with a descent algorithm that is either run with limited memory per sample or that is initialized and run with enough randomness. We further claim that such types of constraints are necessary to obtain failures, in that exact SGD with careful non-random initialization can be shown to learn parities. The cross-predictability in our results plays a similar role the statistical dimension in statistical query (SQ) algorithms, with distinctions explained in the paper. The proof techniques are based on exhibiting algorithmic constraints that imply a statistical indistinguishability between the algorithm's output on the test model v.s. a null model, using information measures to bound the total variation distance. | The difficulty of learning parities with NNs is not new. The parity was already known to be hard based on the early works on the perceptron @cite_32 , see also @cite_14 @cite_25 . | {
"cite_N": [
"@cite_14",
"@cite_25",
"@cite_32"
],
"mid": [
"2017290750",
"1509849361",
"2086789740"
],
"abstract": [
"Proving lower bounds on the amount of resources needed to compute specific functions is one of the most active branches of theoretical computer science. Significant progress has been made recently in proving lower bounds in two restricted models of Boolean circuits. One is the model of small depth circuits, and in this book Johan Torkel Hastad has developed very powerful techniques for proving exponential lower bounds on the size of small depth circuits' computing functions.The techniques described in \"Computational Limitations for Small Depth Circuits\" can be used to demonstrate almost optimal lower bounds on the size of small depth circuits computing several different functions, such as parity and majority. The main tool used in the proof of the lower bounds is a lemma, stating that any AND of small fanout OR gates can be converted into an OR of small fanout AND gates with high probability when random values are substituted for the variables.Hastad also applies this tool to relativized complexity, and discusses in great detail the computation of parity and majority in small depth circuits.Contents: Introduction. Small Depth Circuits. Outline of Lower Bound Proofs. Main Lemma. Lower Bounds for Small Depth Circuits. Functions Requiring Depth k to Have Small Circuits. Applications to Relativized Complexity. How Well Can We Compute Parity in Small Depth? Is Majority Harder than Parity? Conclusions.John Hastad is a postdoctoral fellow in the Department of Mathematics at MIT C\"omputational Limitations of Small Depth Circuits\" is a winner of the 1986 ACM Doctoral Dissertation Award.",
"The 1980's saw rapid and exciting development of techniques for proving lower bounds in circuit complexity. This pace has slowed recently, and there has even been work indicating that quite different proof techniques must be employed to advance beyond the current frontier of circuit lower bounds. Although this has engendered pessimism in some quarters, there have in fact been many positive developments in the past few years showing that significant progress is possible on many fronts. This paper is a (necessarily incomplete) survey of the state of circuit complexity as we await the dawn of the new millennium.",
"Cambridge, Mass.: MIT Press, 1972. 2nd. ed. The book's aim is to seek general results from the close study of abstract version of devices known as perceptrons"
]
} |
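The row above concerns the hardness of learning parity functions and a notion of cross-predictability that decays exponentially for parities. As a small numerical illustration (using one natural reading of cross-predictability as the expected squared correlation between two functions drawn from the class; the names below are assumptions), this sketch checks that two distinct parities are exactly uncorrelated over uniform inputs, which is why a randomly drawn parity is almost never predictive of another.

```python
import itertools
import random

def parity(subset, x):
    """Parity (+/-1 valued) of the bits of x indexed by `subset`."""
    return 1 - 2 * (sum(x[i] for i in subset) % 2)

n = 10
rng = random.Random(0)
S = frozenset(rng.sample(range(n), 4))
T = frozenset(rng.sample(range(n), 4))

# Exact correlation E_x[f_S(x) * f_T(x)] over the uniform hypercube:
corr = sum(parity(S, x) * parity(T, x)
           for x in itertools.product([0, 1], repeat=n)) / 2 ** n
print(S == T, corr)  # correlation is 1 if S == T and exactly 0 otherwise
```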
1812.06369 | 2904025731 | As the success of deep learning reaches more grounds, one would like to also envision the potential limits of deep learning. This paper gives a first set of results proving that certain deep learning algorithms fail at learning certain efficiently learnable functions. The results put forward a notion of cross-predictability that characterizes when such failures take place. Parity functions provide an extreme example with a cross-predictability that decays exponentially, while a mere super-polynomial decay of the cross-predictability is shown to be sufficient to obtain failures. Examples in community detection and arithmetic learning are also discussed. Recall that it is known that the class of neural networks (NNs) with polynomial network size can express any function that can be implemented in polynomial time, and that their sample complexity scales polynomially with the network size. The challenge is with the optimization error (the ERM is NP-hard), and the success behind deep learning is to train deep NNs with descent algorithms. The failures shown in this paper apply to training poly-size NNs on function distributions of low cross-predictability with a descent algorithm that is either run with limited memory per sample or that is initialized and run with enough randomness. We further claim that such types of constraints are necessary to obtain failures, in that exact SGD with careful non-random initialization can be shown to learn parities. The cross-predictability in our results plays a similar role the statistical dimension in statistical query (SQ) algorithms, with distinctions explained in the paper. The proof techniques are based on exhibiting algorithmic constraints that imply a statistical indistinguishability between the algorithm's output on the test model v.s. a null model, using information measures to bound the total variation distance. | In @cite_29 , it is shown that one needs either quadratic memory or an exponential number of samples in order to learn parities, settling a conjecture from @cite_1 . This gives a first non-trivial lower bound on the number of samples needed for a learning problem and a first complete negative result in this context. Applications to bounded-storage cryptography are also given in @cite_29 . Other works have extended the results of @cite_29 ; in particular @cite_7 applies to k-sparse sources, @cite_40 to other functions than parities, and @cite_17 exploits properties of two-source extractors The cross-predictability has also similarity with notions of almost orthogonal matrices used in @math -extractors for two independent sources @cite_5 @cite_17 , although it does not appear to be exactly the same @cite_6 . to obtain comparable memory v.s. sample complexity trade-offs, with similar results obtained in @cite_39 . | {
"cite_N": [
"@cite_7",
"@cite_29",
"@cite_1",
"@cite_6",
"@cite_39",
"@cite_40",
"@cite_5",
"@cite_17"
],
"mid": [
"2626222921",
"",
"2465504090",
"",
"2745049847",
"2769279287",
"2151303208",
"2962750290"
],
"abstract": [
"We define a concept class ℱ to be time-space hard (or memory-samples hard) if any learning algorithm for ℱ requires either a memory of size super-linear in n or a number of samples super-polynomial in n, where n is the length of one sample. A recent work shows that the class of all parity functions is time-space hard [Raz, FOCS'16]. Building on [Raz, FOCS'16], we show that the class of all sparse parities of Hamming weight â is time-space hard, as long as l ≥ ω(logn loglogn). Consequently, linear-size DNF Formulas, linear-size Decision Trees and logarithmic-size Juntas are all time-space hard. Our result is more general and provides time-space lower bounds for learning any concept class of parity functions. We give applications of our results in the field of bounded-storage cryptography. For example, for every ωlogn) ≤ k ≤ n, we obtain an encryption scheme that requires a private key of length k, and time complexity of n per encryption decryption of each bit, and is provably and unconditionally secure as long as the attacker uses at most o(nk) memory bits and the scheme is used at most 2o(k) times. Previously, this was known only for k=n [Raz, FOCS'16].",
"",
"If a concept class can be represented with a certain amount of memory, can it be efficiently learned with the same amount of memory? What concepts can be efficiently learned by algorithms that extract only a few bits of information from each example? We introduce a formal framework for studying these questions, and investigate the relationship between the fundamental resources of memory or communication and the sample complexity of the learning task. We relate our memory-bounded and communication-bounded learning models to the well-studied statistical query model. This connection can be leveraged to obtain both upper and lower bounds: we show strong lower bounds on learning parity functions with bounded communication, as well as the first upper bounds on solving generic sparse linear regression problems with limited memory.",
"",
"We develop an extension of recently developed methods for obtaining time-space tradeoff lower bounds for problems of learning from random test samples to handle the situation where the space of tests is signficantly smaller than the space of inputs, a class of learning problems that is not handled by prior work. This extension is based on a measure of how matrices amplify the 2-norms of probability distributions that is more refined than the 2-norms of these matrices. As applications that follow from our new technique, we show that any algorithm that learns @math -variate homogeneous polynomial functions of degree at most @math over @math from evaluations on randomly chosen inputs either requires space @math or @math time where @math is the dimension of the space of such functions. These bounds are asymptotically optimal since they match the tradeoffs achieved by natural learning algorithms for the problems.",
"We prove a general time-space lower bound that applies for a large class of learning problems and shows that for every problem in that class, any learning algorithm requires either a memory of quadratic size or an exponential number of samples. As a special case, this gives a new proof for the time-space lower bound for parity learning [R16]. Our result is stated in terms of the norm of the matrix that corresponds to the learning problem. Let X, A be two finite sets. Let M: A × X -1,1 be a matrix. The matrix M corresponds to the following learning problem: An unknown element x ∊ X was chosen uniformly at random. A learner tries to learn x from a stream of samples, (a_1, b_1), (a_2, b_2)..., where for every i, a_i ∊ A is chosen uniformly at random and b_i = M(a_i,x). Let be the largest singular value of M and note that always ≤ |A|^ 1 2 ⋅ |X|^ 1 2 . We show that if ≤ |A|^ 1 2 ⋅ |X|^ 1 2 - ≥ilon, then any learning algorithm for the corresponding learning problem requires either a memory of size quadratic in ≥ilon n or number of samples exponential in ≥ilon n, where n = |X|.As a special case, this gives a new proof for the memorysamples lower bound for parity learning [14].",
"A new model for weak random physical sources is presented. The new model strictly generalizes previous models (e.g., the Santha and Vazirani model [27]). The sources considered output strings according to probability distributions in which no single string is too probable.The new model provides a fruitful viewpoint on problems studied previously such as: • Extracting almost-perfect bits from sources of weak randomness. The question of possibility as well as the question of efficiency of such extraction schemes are addressed. • Probabilistic communication complexity. It is shown that most functions have linear communication complexity in a very strong probabilistic sense. • Robustness of BPP with respect to sources of weak randomness (generalizing a result of Vazirani and Vazirani [32], [33]).",
"A matrix M: A × X → −1,1 corresponds to the following learning problem: An unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples, (a1, b1), (a2, b2) …, where for every i, ai ∈ A is chosen uniformly at random and bi = M(ai,x). Assume that k, l, r are such that any submatrix of M of at least 2−k · |A| rows and at least 2−l · |X| columns, has a bias of at most 2−r. We show that any learning algorithm for the learning problem corresponding to M requires either a memory of size at least Ω(k · l ), or at least 2Ω(r) samples. The result holds even if the learner has an exponentially small success probability (of 2−Ω(r)). In particular, this shows that for a large class of learning problems, any learning algorithm requires either a memory of size at least Ω((log|X|) · (log|A|)) or an exponential number of samples, achieving a tight Ω((log|X|) · (log|A|)) lower bound on the size of the memory, rather than a bound of Ω(min (log|X|)2,(log|A|)2 ) obtained in previous works by Raz [FOCS’17] and Moshkovitz and Moshkovitz [ITCS’18]. Moreover, our result implies all previous memory-samples lower bounds, as well as a number of new applications. Our proof builds on the work of Raz [FOCS’17] that gave a general technique for proving memory samples lower bounds."
]
} |
1812.06298 | 2905364877 | We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvements. We study RPL in six challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. For initial controllers, we consider both hand-designed policies and model-predictive controllers with known or learned transition models. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently. Video and code at https: k-r-allen.github.io residual-policy-learning . | There has been a substantial body of work on improving the data efficiency of deep RL by combining model-free and model-based approaches. These methods often first learn a dynamics model and then use this dynamics model to simulate experience @cite_6 @cite_23 @cite_8 or compute gradients for model-free updates @cite_5 @cite_7 . Another set of approaches uses the learned dynamics model (or inverse dynamics model) to perform trajectory optimization or model-predictive control @cite_0 @cite_27 . Further work uses such model-based methods to guide a model-free learner in a DAGGER-style imitation strategy @cite_16 . More recent work has shown an equivalence between model-free and model-based RL with goal-conditioned value functions @cite_13 , and used this to improve model-free RL data efficiency. RPL can be seen as an extension of this line of work, as it provides a new means for combining the benefits of model-based and model-free RL. We show in experiments that the model-based method proposed by can be improved upon with RPL. However, RPL is also more general; it can be used to improve upon arbitrary policies, including but not limited to model-based ones. | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_6",
"@cite_0",
"@cite_27",
"@cite_23",
"@cite_5",
"@cite_16",
"@cite_13"
],
"mid": [
"",
"",
"1491843047",
"2805762288",
"2416477367",
"",
"2964006217",
"2743381431",
"2964036701"
],
"abstract": [
"",
"",
"This paper extends previous work with Dyna, a class of architectures for intelligent systems based on approximating dynamic programming methods. Dyna architectures integrate trial-and-error (reinforcement) learning and execution-time planning into a single process operating alternately on the world and on a learned model of the world. In this paper, I present and show results for two Dyna architectures. The Dyna-PI architecture is based on dynamic programming's policy iteration method and can be related to existing AI ideas such as evaluation functions and universal plans (reactive systems). Using a navigation task, results are shown for a simple Dyna-PI system that simultaneously learns by trial and error, learns a world model, and plans optimal routes using the evolving world model. The Dyna-Q architecture is based on Watkins's Q-learning, a new kind of reinforcement learning. Dyna-Q uses a less familiar set of data structures than does Dyna-PI, but is arguably simpler to implement and use. We show that Dyna-Q architectures are easy to adapt for use in changing environments.",
"Model-based reinforcement learning (RL) algorithms can attain excellent sample efficiency, but often lag behind the best model-free algorithms in terms of asymptotic performance. This is especially true with high-capacity parametric function approximators, such as deep networks. In this paper, we study how to bridge this gap, by employing uncertainty-aware dynamics models. We propose a new algorithm called probabilistic ensembles with trajectory sampling (PETS) that combines uncertainty-aware deep network dynamics models with sampling-based uncertainty propagation. Our comparison to state-of-the-art model-based and model-free deep RL algorithms shows that our approach matches the asymptotic performance of model-free algorithms on several challenging benchmark tasks, while requiring significantly fewer samples (e.g., 8 and 125 times fewer samples than Soft Actor Critic and Proximal Policy Optimization respectively on the half-cheetah task).",
"We present an automatic method for interactive control of physical humanoid robots based on high-level tasks that does not require manual specification of motion trajectories or specially-designed control policies. The method is based on the combination of a model-based policy that is trained off-line in simulation and sends high-level commands to a model-free controller that executes these commands on the physical robot. This low-level controller simultaneously learns and adapts a local model of dynamics on-line and computes optimal controls under the learned model. The high-level policy is trained using a combination of trajectory optimization and neural network learning, while considering physical limitations such as limited sensors and communication delays. The entire system runs in real-time on the robot's computer and uses only on-board sensors. We demonstrate successful policy execution on a range of tasks such as leaning, hand reaching, and robust balancing behaviors atop a tilting base on the physical robot and in simulation.",
"",
"We present a unified framework for learning continuous control policies using backpropagation. It supports stochastic control by treating stochasticity in the Bellman equation as a deterministic function of exogenous noise. The product is a spectrum of general policy gradient algorithms that range from model-free methods with value functions to model-based methods without value functions. We use learned models but only require observations from the environment instead of observations from model-predicted trajectories, minimizing the impact of compounded model errors. We apply these algorithms first to a toy stochastic control problem and then to several physics-based control problems in simulation. One of these variants, SVG(1), shows the effectiveness of learning models, value functions, and policies simultaneously in continuous domains.",
"Model-free deep reinforcement learning algorithms have been shown to be capable of learning a wide range of robotic skills, but typically require a very large number of samples to achieve good performance. Model-based algorithms, in principle, can provide for much more efficient learning, but have proven difficult to extend to expressive, high-capacity models such as deep neural networks. In this work, we demonstrate that medium-sized neural network models can in fact be combined with model predictive control (MPC) to achieve excellent sample complexity in a model-based reinforcement learning algorithm, producing stable and plausible gaits to accomplish various complex locomotion tasks. We also propose using deep neural network dynamics models to initialize a model-free learner, in order to combine the sample efficiency of model-based approaches with the high task-specific performance of model-free methods. We empirically demonstrate on MuJoCo locomotion tasks that our pure model-based approach trained on just random action data can follow arbitrary trajectories with excellent sample efficiency, and that our hybrid algorithm can accelerate model-free learning on high-speed benchmark tasks, achieving sample efficiency gains of 3-5x on swimmer, cheetah, hopper, and ant agents. Videos can be found at this https URL",
"Model-free reinforcement learning (RL) has been proven to be a powerful, general tool for learning complex behaviors. However, its sample efficiency is often impractically large for solving challenging real-world problems, even for off-policy algorithms such as Q-learning. A limiting factor in classic model-free RL is that the learning signal consists only of scalar rewards, ignoring much of the rich information contained in state transition tuples. Model-based RL uses this information, by training a predictive model, but often does not achieve the same asymptotic performance as model-free RL due to model bias. We introduce temporal difference models (TDMs), a family of goal-conditioned value functions that can be trained with model-free learning and used for model-based control. TDMs combine the benefits of model-free and model-based RL: they leverage the rich information in state transitions to learn very efficiently, while still attaining asymptotic performance that exceeds that of direct model-based RL methods. Our experimental results show that, on a range of continuous control tasks, TDMs provide a substantial improvement in efficiency compared to state-of-the-art model-based and model-free methods."
]
} |
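The abstract in the row above describes Residual Policy Learning as learning an additive correction on top of an imperfect initial controller. The sketch below is a minimal, framework-agnostic illustration of that wrapping idea, not the paper's implementation; the class and parameter names are assumptions, and the residual is a placeholder linear map standing in for a learned network.

```python
import numpy as np

class ResidualPolicy:
    """Wrap a fixed initial controller with a learned additive correction."""

    def __init__(self, initial_controller, action_dim, obs_dim, scale=0.1):
        self.initial_controller = initial_controller  # any callable obs -> action
        # Stand-in for a learned function approximator (e.g. a small MLP).
        self.W = np.zeros((action_dim, obs_dim))
        self.scale = scale

    def residual(self, obs):
        return self.scale * self.W @ obs

    def act(self, obs, low=-1.0, high=1.0):
        # Final action = hand-designed/MPC action + learned residual.
        action = self.initial_controller(obs) + self.residual(obs)
        return np.clip(action, low, high)

# Toy usage: a proportional controller as the imperfect initial policy.
initial = lambda obs: -0.5 * obs[:2]
policy = ResidualPolicy(initial, action_dim=2, obs_dim=4)
print(policy.act(np.array([0.3, -0.2, 0.0, 0.1])))
```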
1812.06298 | 2905364877 | We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvements. We study RPL in six challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. For initial controllers, we consider both hand-designed policies and model-predictive controllers with known or learned transition models. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently. Video and code at https: k-r-allen.github.io residual-policy-learning . | From robotics, many methods exist for learning different aspects of the perception, control, execution pipeline. Focusing on control specifically, Bayesian optimization approaches are popular for learning controllers based on Gaussian process models of objective functions to be optimized @cite_2 @cite_21 @cite_29 @cite_3 @cite_11 . Learning an accurate dynamics model is another central focus for robotics (termed system identification), and has been approached using analytic gradients @cite_28 @cite_19 , finite differences @cite_26 or Bayesian Optimization @cite_20 . In contrast, RPL does not presuppose which aspect of the controller needs correction. This is particularly valuable in partially observable settings, where it is unclear how to learn a good dynamics model or design a better objective function. | {
"cite_N": [
"@cite_26",
"@cite_28",
"@cite_29",
"@cite_21",
"@cite_3",
"@cite_19",
"@cite_2",
"@cite_20",
"@cite_11"
],
"mid": [
"2209762605",
"1749494163",
"",
"",
"2110029398",
"",
"2963903510",
"2018705428",
""
],
"abstract": [
"Successful model based control relies heavily on proper system identification and accurate state estimation. We present a framework for solving these problems in the context of robotic control applications. We are particularly interested in robotic manipulation tasks, which are especially hard due to the non-linear nature of contact phenomena. We developed a solution that solves both the problems of estimation and system identification jointly. We show that these two problems are difficult to solve separately in the presence of discontinuous phenomena such as contacts. The problem is posed as a joint optimization across both trajectory and model parameters and solved via Newton's method. We present several challenges we encountered while modeling contacts and performing state estimation and propose solutions within the MuJoCo physics engine. We present experimental results performed on our manipulation system consisting of 3-DOF Phantom Haptic Devices, turned into finger manipulators. Cross-validation between different datasets, as well as leave-one-out cross-validation show that our method is robust and is able to accurately explain sensory data.",
"This paper points out the flaws in using the extended Kalman filter (EKE) and introduces an improvement, the unscented Kalman filter (UKF), proposed by Julier and Uhlman (1997). A central and vital operation performed in the Kalman filter is the propagation of a Gaussian random variable (GRV) through the system dynamics. In the EKF the state distribution is approximated by a GRV, which is then propagated analytically through the first-order linearization of the nonlinear system. This can introduce large errors in the true posterior mean and covariance of the transformed GRV, which may lead to sub-optimal performance and sometimes divergence of the filter. The UKF addresses this problem by using a deterministic sampling approach. The state distribution is again approximated by a GRV, but is now represented using a minimal set of carefully chosen sample points. These sample points completely capture the true mean and covariance of the GRV, and when propagated through the true nonlinear system, captures the posterior mean and covariance accurately to the 3rd order (Taylor series expansion) for any nonlinearity. The EKF in contrast, only achieves first-order accuracy. Remarkably, the computational complexity of the UKF is the same order as that of the EKF. Julier and Uhlman demonstrated the substantial performance gains of the UKF in the context of state-estimation for nonlinear control. Machine learning problems were not considered. We extend the use of the UKF to a broader class of nonlinear estimation problems, including nonlinear system identification, training of neural networks, and dual estimation problems. In this paper, the algorithms are further developed and illustrated with a number of additional examples.",
"",
"",
"Several categories of optimization problems suffer from expensive objective function evaluation, driving the need for smart selection of subsequent experiments. One such category of problems involves physical robotic systems, which often require significant time, effort, and monetary expenditure in order to run tests. To assist in the selection of the next experiment, there has been a focus on the idea of response surfaces in recent years. These surfaces interpolate the existing data and provide a measure of confidence in their error, serving as a low-fidelity surrogate function that can be used to more intelligently choose the next experiment. In this paper, we robustly implement a previous algorithm based on the response surface methodology with an expected improvement criteria. We apply this technique to optimize open-loop gait parameters for snake robots, and demonstrate improved locomotive capabilities.",
"",
"The objective of this work is to augment the basic abilities of a robot by learning to use new sensorimotor primitives to enable the solution of complex long-horizon problems. Solving long-horizon problems in complex domains requires flexible generative planning that can combine primitive abilities in novel combinations to solve problems as they arise in the world. In order to plan to combine primitive actions, we must have models of the preconditions and effects of those actions: under what circumstances will executing this primitive achieve some particular effect in the world? We use, and develop novel improvements on, state-of-the-art methods for active learning and sampling. We use Gaussian process methods for learning the conditions of operator effectiveness from small numbers of expensive training examples collected by experimentation on a robot. We develop adaptive sampling methods for generating diverse elements of continuous sets (such as robot configurations and object poses) during planning for solving a new task, so that planning is as efficient as possible. We demonstrate these methods in an integrated system, combining newly learned models with an efficient continuous-space robot task and motion planner to learn to solve long horizon problems more efficiently than was previously possible.",
"Autonomous learning has been a promising direction in control and robotics for more than a decade since data-driven learning allows to reduce the amount of engineering knowledge, which is otherwise required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems, such as robots, where many interactions can be impractical and time consuming. To address this problem, current learning approaches typically require task-specific knowledge in form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this paper, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the art RL our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.",
""
]
} |
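The related_work in the row above mentions Bayesian optimization of controller parameters using Gaussian process models of an expensive objective. The following is a generic sketch of that pattern rather than any cited paper's method; it assumes scikit-learn and SciPy are available, and rollout_cost is a hypothetical stand-in for running the robot with a candidate gain.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def rollout_cost(gain):
    """Stand-in for an expensive experiment: cost of running a controller."""
    return (gain - 1.7) ** 2 + 0.05 * rng.normal()

X = rng.uniform(0.0, 4.0, size=(3, 1))          # initial controller gains
y = np.array([rollout_cost(x[0]) for x in X])   # measured costs

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
candidates = np.linspace(0.0, 4.0, 200).reshape(-1, 1)

for _ in range(10):
    gp.fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    best = y.min()
    # Expected improvement (minimization form) as the acquisition function.
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = candidates[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, rollout_cost(x_next[0]))

print("best gain found:", X[np.argmin(y)][0])
```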
1812.05869 | 2904445255 | The problem of non-rigid point set registration is a key problem for many computer vision tasks. In many cases the nature of the data or capabilities of the point detection algorithms can give us some prior information on point sets distribution. In non-rigid case this information is able to drastically improve registration results by limiting number of possible solutions. In this paper we explore use of prior information about point sets clustering, such information can be obtained with preliminary segmentation. We extend existing probabilistic framework for fitting two level Gaussian mixture model and derive closed form solution for maximization step of the EM algorithm. This enables us to improve method accuracy with almost no performance loss. We evaluate our approach and compare the Cluster Coherent Point Drift with other existing non-rigid point set registration methods and show it's advantages for digital medicine tasks, especially for heart template model personalization using patient's medical data. | Extended Coherent Point Drift @cite_8 integrates prior knowledge into registration algorithm. It requires sparse set of points correspondences between point sets to be defined. These correspondences are defined as pairs @math where @math . Correspondence priors are modeled as a product of particular independent density functions @math where @math and @math is priors' degree of reliability. Correspondence priors are incorporated into GMM @math Modified negative log-likelihood will be written as @math Upper bound @math will be defined as @math where @math defined as @math @math matrix can be pre-calculated. For non-rigid case upper-bound function @math will be written as The following linear system is solved during maximization step of the EM algorithm @math | {
"cite_N": [
"@cite_8"
],
"mid": [
"2407185988"
],
"abstract": [
"The problem of dense point set registration, given a sparse set of prior correspondences, often arises in computer vision tasks. Unlike in the rigid case, integrating prior knowledge into a registration algorithm is especially demanding in the non-rigid case due to the high variability of motion and deformation. In this paper we present the Extended Coherent Point Drift registration algorithm. It enables, on the one hand, to couple correspondence priors into the dense registration procedure in a closed form and, on the other hand, to process large point sets in reasonable time through adopting an optimal coarse-to-fine strategy. Combined with a suitable keypoint extractor during the preprocessing step, our method allows for non-rigid registrations with increased accuracy for point sets with structured outliers. We demonstrate advantages of our approach against other non-rigid point set registration methods in synthetic and real-world scenarios."
]
} |
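The related_work in the row above walks through an EM formulation with correspondence priors, but the concrete equations are elided as @math placeholders. Without reconstructing those specific formulas, the sketch below shows the standard CPD-style E-step (posterior correspondence probabilities with a uniform outlier component) that this family of GMM-based registration methods builds on; the function name and the outlier weight w are assumptions.

```python
import numpy as np

def estep_responsibilities(X, Y, sigma2, w=0.1):
    """Posterior probabilities P[m, n] that target point X[n] was generated
    by GMM centroid Y[m], with a uniform outlier component of weight w.
    X: (N, D) target points, Y: (M, D) moving/template points."""
    N, D = X.shape
    M = Y.shape[0]
    # Squared distances between every centroid and every target point.
    d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)   # (M, N)
    gauss = np.exp(-d2 / (2.0 * sigma2))
    # Constant from the uniform outlier distribution (standard CPD form).
    c = (2.0 * np.pi * sigma2) ** (D / 2.0) * w / (1.0 - w) * M / N
    return gauss / (gauss.sum(axis=0, keepdims=True) + c)

# Toy usage: two small 2-D point sets.
rng = np.random.default_rng(0)
Y = rng.normal(size=(5, 2))
X = Y + 0.05 * rng.normal(size=(5, 2))
P = estep_responsibilities(X, Y, sigma2=0.01)
print(P.shape, P.sum(axis=0))  # each column sums to <= 1; the rest is outlier mass
```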
1812.05964 | 2869766455 | Applications for deep learning and big data analytics have compute and memory requirements that exceed the limits of a single GPU. However, effectively scaling out an application to multiple GPUs is challenging due to the complexities of communication between the GPUs, particularly for collective communication with irregular message sizes. In this work, we provide a performance evaluation of the Allgatherv routine on multi-GPU systems, focusing on GPU network topology and the communication library used. We present results from the OSU-micro benchmark as well as conduct a case study for sparse tensor factorization, one application that uses Allgatherv with highly irregular message sizes. We extend our existing tensor factorization tool to run on systems with different node counts and varying number of GPUs per node. We then evaluate the communication performance of our tool when using traditional MPI, CUDA-aware MVAPICH and NCCL across a suite of real-world data sets on three different systems: a 16-node cluster with one GPU per node, NVIDIA's DGX-1 with 8 GPUs and Cray's CS-Storm with 16 GPUs. Our results show that irregularity in the tensor data sets produce trends that contradict those in the OSU micro-benchmark, as well as trends that are absent from the benchmark. | There have been several efforts to evaluate and improve the performance of GPU communication. @cite_25 extended the popular OSU Micro-Benchmarks (OMB) suite to evaluate the performance of the MVAPICH and OpenMPI CUDA-aware libraries for both point-to-point and collective communication routines. However, to the best of our knowledge, the irregular collectives are not evaluated with different sized messages per rank in the OMB suite, which does not align with the focus of this work. Träff @cite_11 benchmarks an MPI Allgatherv implementation across a set of different message size distributions, but the evaluation was restricted to host-based communication. Of particular relevance to our work, @cite_3 have compared the performance of the broadcast collective in NCCL and an extended version of MVAPICH-GDR for deep learning workloads. At the time of their work, the current version of NCCL did not support inter-node communication, so that aspect of the study was omitted. Furthermore, their study consisted of an evaluation of NCCL and MVAPICH-GDR for regular workloads with respect to message sizes, as well as a focus on deep learning applications, which differs from our work. | {
"cite_N": [
"@cite_25",
"@cite_3",
"@cite_11"
],
"mid": [
"1637731592",
"2740001873",
"1554378292"
],
"abstract": [
"General-Purpose Graphics Processing Units (GPGPUs) are becoming a common component of modern supercomputing systems. Many MPI applications are being modified to take advantage of the superior compute potential offered by GPUs. To facilitate this process, many MPI libraries are being extended to support MPI communication from GPU device memory. However, there is lack of a standardized benchmark suite that helps users evaluate common communication models on GPU clusters and do a fair comparison for different MPI libraries. In this paper, we extend the widely used OSU Micro-Benchmarks (OMB) suite with benchmarks that evaluate performance of point-point, multi-pair and collective MPI communication for different GPU cluster configurations. Benefits of the proposed benchmarks for MVAPICH2 and OpenMPI libraries are illustrated.",
"Traditionally, MPI runtimes have been designed for clusters with a large number of nodes. However, with the advent of MPI+CUDA applications and dense multi-GPU systems, it has become important to design efficient communication schemes. This coupled with new application workloads brought forward by Deep Learning frameworks like Caffe and Microsoft CNTK pose additional design constraints due to very large message communication of GPU buffers during the training phase. In this context, special-purpose libraries like NCCL have been proposed. In this paper, we propose a pipelined chain (ring) design for the MPI_Bcast collective operation along with an enhanced collective tuning framework in MVAPICH2-GDR that enables efficient intra- internode multi-GPU communication. We present an in-depth performance landscape for the proposed MPI_Bcast schemes along with a comparative analysis of NCCL Broadcast and NCCL-based MPI_Bcast. The proposed designs for MVAPICH2-GDR enable up to 14X and 16.6X improvement, compared to NCCL-based solutions, for intra- and internode broadcast latency, respectively. In addition, the proposed designs provide up to 7 improvement over NCCL-based solutions for data parallel training of the VGG network on 128 GPUs using Microsoft CNTK. The proposed solutions outperform the recently introduced NCCL2 library for small and medium message sizes and offer comparable better performance for very large message sizes.",
"We present and evaluate a new, simple, pipelined algorithm for large, irregularall-gather problems, useful for the implementation of the MPI_Allgatherv collective operation of MPI. The algorithm can be viewed as an adaptation of a linear ring algorithm for regular all-gather problems for single-ported, clustered multiprocessors to the irregular problem. Compared to the standard ring algorithm, whose performance is dominated by the largest data size broadcast by a process (times the number of processes), the performance of the new algorithm depends only on the total amount of data over all processes. The new algorithm has been implemented within different MPI libraries. Benchmark results on NEC SX-8, Linux clusters with InfiniBand and Gigabit Ethernet, Blue Gene P, and SiCortex systems show huge performance gains in accordance with the expected behavior."
]
} |
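The row above benchmarks Allgatherv with irregular per-rank message sizes. Below is a minimal sketch of that communication pattern using mpi4py (assumed to be installed); the skewed per-rank counts are arbitrary and only mimic the irregularity discussed in the row, and this is not the paper's benchmark code.

```python
# Run with e.g.: mpirun -n 4 python allgatherv_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Highly irregular per-rank message sizes (e.g. rows owned by each rank).
counts = [(r + 1) ** 2 * 1000 for r in range(size)]
displs = np.insert(np.cumsum(counts), 0, 0)[:-1]

sendbuf = np.full(counts[rank], rank, dtype=np.float64)
recvbuf = np.empty(sum(counts), dtype=np.float64)

t0 = MPI.Wtime()
comm.Allgatherv(sendbuf, [recvbuf, counts, displs, MPI.DOUBLE])
t1 = MPI.Wtime()

if rank == 0:
    print(f"Allgatherv of {recvbuf.nbytes / 1e6:.1f} MB took {t1 - t0:.6f} s")
```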
1812.05964 | 2869766455 | Applications for deep learning and big data analytics have compute and memory requirements that exceed the limits of a single GPU. However, effectively scaling out an application to multiple GPUs is challenging due to the complexities of communication between the GPUs, particularly for collective communication with irregular message sizes. In this work, we provide a performance evaluation of the Allgatherv routine on multi-GPU systems, focusing on GPU network topology and the communication library used. We present results from the OSU-micro benchmark as well as conduct a case study for sparse tensor factorization, one application that uses Allgatherv with highly irregular message sizes. We extend our existing tensor factorization tool to run on systems with different node counts and varying number of GPUs per node. We then evaluate the communication performance of our tool when using traditional MPI, CUDA-aware MVAPICH and NCCL across a suite of real-world data sets on three different systems: a 16-node cluster with one GPU per node, NVIDIA's DGX-1 with 8 GPUs and Cray's CS-Storm with 16 GPUs. Our results show that irregularity in the tensor data sets produce trends that contradict those in the OSU micro-benchmark, as well as trends that are absent from the benchmark. | In regards to tensor factorization, designing high performance implementations for CP-ALS, as well as measuring their performance, is an active area of research @cite_15 . There have been efforts to perform tensor factorization on both shared and distributed memory systems @cite_23 @cite_10 @cite_20 , as well as on GPUs @cite_14 @cite_16 . However, to the best of our knowledge, ReFacTo is the only current implementation of CP-ALS that runs on multiple GPUs in a distributed fashion and is able to utilize GPU communication hardware and software. | {
"cite_N": [
"@cite_14",
"@cite_23",
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_20"
],
"mid": [
"2560878429",
"1511885491",
"2768445292",
"2616934551",
"",
"2079487069"
],
"abstract": [
"This paper presents the optimized design and implementation of sparse tensor-times-dense matrix multiply (SpTTM) for CPU and GPU platforms. This primitive is a critical bottleneck in data analysis and mining applications based on tensor methods, such as the Tucker decomposition. We first design and implement sequential SpTTM to avoid explicit data transformations between a tensor and a matrix, which is the conventional approach. We further optimize SpTTM on multicore CPU and GPU systems by parallelizing, avoiding locks, and exploiting data locality. Our sequential SpTTM is up to 3.5× faster than the SpTTM from Tensor Toolbox and 1.5× over that from Cyclops Tensor Framework. Our parallel algorithms show 4.1× speedup on multicore Intel Core i7 and 18.8× speedup on NVIDIA K40c GPU over our sequential SpTTM respectively.",
"Multi-dimensional arrays, or tensors, are increasingly found in fields such as signal processing and recommender systems. Real-world tensors can be enormous in size and often very sparse. There is a need for efficient, high-performance tools capable of processing the massive sparse tensors of today and the future. This paper introduces SPLATT, a C library with shared-memory parallelism for three-mode tensors. SPLATT contains algorithmic improvements over competing state of the art tools for sparse tensor factorization. SPLATT has a fast, parallel method of multiplying a matricide tensor by a Khatri-Rao product, which is a key kernel in tensor factorization methods. SPLATT uses a novel data structure that exploits the sparsity patterns of tensors. This data structure has a small memory footprint similar to competing methods and allows for the computational improvements featured in our work. We also present a method of finding cache-friendly reordering and utilizing them with a novel form of cache tiling. To our knowledge, this is the first work to investigate reordering and cache tiling in this context. SPLATT averages almost 30x speedup compared to our baseline when using 16 threads and reaches over 80x speedup on NELL-2.",
"Abstract Tensor decomposition, the higher-order analogue to singular value decomposition, has emerged as a useful tool for finding relationships in large, sparse, multidimensional data. As this technique matures and is applied to increasingly larger data sets, the need for high performance implementations becomes critical. A better understanding of the performance characteristics of tensor decomposition on large and sparse tensors can help drive the development of such implementations. In this work, we perform an objective empirical evaluation of three state of the art parallel tools that implement the Canonical Decomposition Parallel Factorization tensor decomposition algorithm using alternating least squares fitting (CP-ALS): SPLATT, DFacTo, and ENSIGN. We conduct performance studies across a variety of data sets and evaluate the tools with respect to total memory required, processor stall cycles, execution time, data distribution, and communication patterns. Furthermore, we investigate the performance of the implementations on tensors with up to 6 dimensions and when executing high rank decompositions. We find that tensor data structure layout and distribution choices can result in differences as large as 14.6x with respect to memory usage and 39.17x with respect to execution time. We provide an outline of a distributed heterogeneous CP-ALS implementation that addresses the performance issues we observe.",
"Sparse tensors appear in many large-scale applications with multidimensional and sparse data. While multidimensional sparse data often need to be processed on manycore processors, attempts to develop highly-optimized GPU-based implementations of sparse tensor operations are rare. The irregular computation patterns and sparsity structures as well as the large memory footprints of sparse tensor operations make such implementations challenging. We leverage the fact that sparse tensor operations share similar computation patterns to propose a unified tensor representation called F-COO. Combined with GPU-specific optimizations, F-COO provides highly-optimized implementations of sparse tensor computations on GPUs. The performance of the proposed unified approach is demonstrated for tensor-based kernels such as the Sparse Matricized Tensor-Times-Khatri-Rao Product (SpMTTKRP) and the Sparse Tensor-Times-Matrix Multiply (SpTTM) and is used in tensor decomposition algorithms. Compared to state-of-the-art work we improve the performance of SpTTM and SpMTTKRP up to 3.7 and 30.6 times respectively on NVIDIA Titan-X GPUs. We implement a CANDECOMP PARAFAC (CP) decomposition and achieve up to 14.9 times speedup using the unified method over state-of-the-art libraries on NVIDIA Titan-X GPUs.",
"",
"We investigate an efficient parallelization of the most common iterative sparse tensor decomposition algorithms on distributed memory systems. A key operation in each iteration of these algorithms is the matricized tensor times Khatri-Rao product (MTTKRP). This operation amounts to element-wise vector multiplication and reduction depending on the sparsity of the tensor. We investigate a fine and a coarse-grain task definition for this operation, and propose hypergraph partitioning-based methods for these task definitions to achieve the load balance as well as reduce the communication requirements. We also design a distributed memory sparse tensor library, HyperTensor, which implements a well-known algorithm for the CANDECOMP- PARAFAC (CP) tensor decomposition using the task definitions and the associated partitioning methods. We use this library to test the proposed implementation of MTTKRP in CP decomposition context, and report scalability results up to 1024 MPI ranks. We observed up to 194 fold speedups using 512 MPI processes on a well-known real world data, and significantly better performance results with respect to a state of the art implementation."
]
} |
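The rows above focus on CP-ALS, whose dominant kernel is the matricized tensor times Khatri-Rao product (MTTKRP). The sketch below illustrates a mode-0 MTTKRP for a sparse 3-mode tensor stored in COO form, purely to show the kernel the cited tools optimize; the function name and COO layout are assumptions, and no claims are made about matching any particular tool's implementation.

```python
import numpy as np

def mttkrp_mode0(coords, vals, B, C, dim0):
    """MTTKRP along mode 0 of a sparse 3-mode tensor.

    coords: (nnz, 3) integer indices (i, j, k) of nonzeros
    vals:   (nnz,) nonzero values
    B, C:   factor matrices for modes 1 and 2, each of shape (dim, rank)
    Returns an array of shape (dim0, rank).
    """
    i, j, k = coords[:, 0], coords[:, 1], coords[:, 2]
    rank = B.shape[1]
    out = np.zeros((dim0, rank))
    # Each nonzero contributes vals * (B[j] * C[k]) to row i of the output.
    np.add.at(out, i, vals[:, None] * B[j] * C[k])
    return out

# Toy usage on a random sparse 20 x 30 x 40 tensor with rank-5 factors.
rng = np.random.default_rng(0)
nnz, rank = 200, 5
coords = np.column_stack([rng.integers(0, d, nnz) for d in (20, 30, 40)])
vals = rng.normal(size=nnz)
B, C = rng.normal(size=(30, rank)), rng.normal(size=(40, rank))
print(mttkrp_mode0(coords, vals, B, C, dim0=20).shape)
```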
1812.05920 | 2903799412 | Deep neural networks can learn complex and abstract representations, that are progressively obtained by combining simpler ones. A recent trend in speech and speaker recognition consists in discovering these representations starting from raw audio samples directly. Differently from standard hand-crafted features such as MFCCs or FBANK, the raw waveform can potentially help neural networks discover better and more customized representations. The high-dimensional raw inputs, however, can make training significantly more challenging. This paper summarizes our recent efforts to develop a neural architecture that efficiently processes speech from audio waveforms. In particular, we propose SincNet, a novel Convolutional Neural Network (CNN) that encourages the first layer to discover meaningful filters by exploiting parametrized sinc functions. In contrast to standard CNNs, which learn all the elements of each filter, only low and high cutoff frequencies of band-pass filters are directly learned from data. This inductive bias offers a very compact way to derive a customized front-end, that only depends on some parameters with a clear physical meaning. Our experiments, conducted on both speaker and speech recognition, show that the proposed architecture converges faster, performs better, and is more computationally efficient than standard CNNs. | Several works have recently explored the use of low-level speech representations to process audio and speech with CNNs. Most prior attempts exploit magnitude spectrogram features @cite_17 @cite_5 @cite_6 @cite_31 @cite_27 @cite_16 . Although spectrograms retain more information than standard hand-crafted features, their design still requires careful tuning of some crucial hyper-parameters, such as the duration, overlap, and typology of the frame window, as well as the number of frequency bins. For this reason, a more recent trend is to directly learn from raw waveforms, thus completely avoiding any feature extraction step. This approach has shown promising in speech @cite_29 @cite_7 @cite_4 @cite_34 @cite_12 , including emotion tasks @cite_18 , speaker recognition @cite_32 , and spoofing detection @cite_28 . Similar to SincNet, some previous works have proposed to add constraints on the CNN filters, for instance forcing them to work on specific bands @cite_31 @cite_27 . | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_28",
"@cite_29",
"@cite_32",
"@cite_6",
"@cite_34",
"@cite_27",
"@cite_5",
"@cite_31",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"2399733683",
"1542280630",
"2398826216",
"2592641653",
"1666984270",
"2770454110",
"",
"",
"2587891104",
"2746742816",
"1969851134",
"2658929981",
"2408093180",
"2802973008"
],
"abstract": [
"The automatic recognition of spontaneous emotions from speech is a challenging task. On the one hand, acoustic features need to be robust enough to capture the emotional content for various styles of speaking, and while on the other, machine learning algorithms need to be insensitive to outliers while being able to model the context. Whereas the latter has been tackled by the use of Long Short-Term Memory (LSTM) networks, the former is still under very active investigations, even though more than a decade of research has provided a large set of acoustic descriptors. In this paper, we propose a solution to the problem of ‘context-aware’ emotional relevant feature extraction, by combining Convolutional Neural Networks (CNNs) with LSTM networks, in order to automatically learn the best representation of the speech signal directly from the raw time representation. In this novel work on the so-called end-to-end speech emotion recognition, we show that the use of the proposed topology significantly outperforms the traditional approaches based on signal processing techniques for the prediction of spontaneous and natural emotions on the RECOLA database.",
"Standard deep neural network-based acoustic models for automatic speech recognition (ASR) rely on hand-engineered input features, typically log-mel filterbank magnitudes. In this paper, we describe a convolutional neural network - deep neural network (CNN-DNN) acoustic model which takes raw multichannel waveforms as input, i.e. without any preceding feature extraction, and learns a similar feature representation through supervised training. By operating directly in the time domain, the network is able to take advantage of the signal's fine time structure that is discarded when computing filterbank magnitude features. This structure is especially useful when analyzing multichannel inputs, where timing differences between input channels can be used to localize a signal in space. The first convolutional layer of the proposed model naturally learns a filterbank that is selective in both frequency and direction of arrival, i.e. a bank of bandpass beamformers with an auditory-like frequency scale. When trained on data corrupted with noise coming from different spatial locations, the network learns to filter them out by steering nulls in the directions corresponding to the noise sources. Experiments on a simulated multichannel dataset show that the proposed acoustic model outperforms a DNN that uses log-mel filterbank magnitude features under noisy and reverberant conditions.",
"Learning an acoustic model directly from the raw waveform has been an active area of research. However, waveformbased models have not yet matched the performance of logmel trained neural networks. We will show that raw waveform features match the performance of log-mel filterbank energies when used with a state-of-the-art CLDNN acoustic model trained on over 2,000 hours of speech. Specifically, we will show the benefit of the CLDNN, namely the time convolution layer in reducing temporal variations, the frequency convolution layer for preserving locality and reducing frequency variations, as well as the LSTM layers for temporal modeling. In addition, by stacking raw waveform features with log-mel features, we achieve a 3 relative reduction in word error rate.",
"Albeit recent progress in speaker verification generates powerful models, malicious attacks in the form of spoofed speech, are generally not coped with. Recent results in ASVSpoof2015 and BTAS2016 challenges indicate that spoof-aware features are a possible solution to this problem. Most successful methods in both challenges focus on spoof-aware features, rather than focusing on a powerful classifier. In this paper we present a novel raw waveform based deep model for spoofing detection, which jointly acts as a feature extractor and classifier, thus allowing it to directly classify speech signals. This approach can be considered as an end-to-end classifier, which removes the need for any pre- or post-processing on the data, making training and evaluation a streamlined process, consuming less time than other neural-network based approaches. The experiments on the BTAS2016 dataset show that the system performance is significantly improved by the proposed raw waveform convolutional long short term neural network (CLDNN), from the previous best published 1.26 half total error rate (HTER) to the current 0.82 HTER. Moreover it shows that the proposed system also performs well under the unknown (RE-PH2-PH3,RE-LPPH2-PH3) conditions.",
"Abstract Automaticspeechrecognitionsystemstypicallymodeltherela-tionship between the acoustic speech signal and the phones intwo separate steps: feature extraction and classier training. Inourrecentworks, wehaveshownthat, intheframeworkofcon-volutionalneuralnetworks(CNN),therelationshipbetweentheraw speech signal and the phones can be directly modeled andASR systems competitive to standard approach can be built. Inthis paper, we rst analyze and show that, between the rst twoconvolutional layers, the CNN learns (in parts) and models thephone-specic spectral envelope information of 2-4 ms speech.Given that we show that the CNN-based approach yields ASRtrends similar to standard short-term spectral based ASR sys-tem under mismatched (noisy) conditions, with the CNN-basedapproach being more robust.Index Terms: automatic speech recognition, convolutionalneural networks, raw signal, robust speech recognition. 1. Introduction State-of-the-art automatic speech recognition (ASR) systemstypically model the relationship between the acoustic speechsignal and the phones in two separate steps, which are op-timized in an independent manner [1]. In a rst step, thespeech signal is transformed into features, usually composed ofa dimensionality reduction phase and an information selectionphase, based on the task-specic knowledge of the phenomena.These two phases have been carefully hand-crafted, leading tostate-of-the-art features such as Mel frequency cepstral coef-cients(MFCCs)orperceptuallinearpredictioncepstralfeatures(PLPs). In a second step, the likelihood of subword units suchas, phonemes is estimated using generative models or discrimi-native models.In recent years, in the hybrid HMM ANN framework [1],there has been growing interests in using intermediate rep-resentations instead of conventional features, such as cepstral-based features, as input for neural networks-based systems.ANNs with deep learning architectures, more precisely, deepneural networks (DNNs) [2, 3], which can yield better systemthan a single hidden layer MLP have been proposed to addressvarious aspects of acoustic modeling. More specically, useof context-dependent phonemes [4, 5]; use of spectral featuresas opposed to cepstral features [6, 7]; CNN-based system withMel lter bank energies as input [8, 9, 10]; combination of dif-ferent features [11], to name a few. Features learning from therawspeechsignalusingneuralnetworks-basedsystemshasalsobeen investigated in [12]. In all these approaches, the features",
"Speaker verification systems traditionally extract and model cepstral features or filter bank energies from the speech signal. In this paper, inspired by the success of neural network-based approaches to model directly raw speech signal for applications such as speech recognition, emotion recognition and anti-spoofing, we propose a speaker verification approach where speaker discriminative information is directly learned from the speech signal by: (a) first training a CNN-based speaker identification system that takes as input raw speech signal and learns to classify on speakers (unknown to the speaker verification system); and then (b) building a speaker detector for each speaker in the speaker verification system by replacing the output layer of the speaker identification system by two outputs (genuine, impostor), and adapting the system in a discriminative manner with enrollment speech of the speaker and impostor speech data. Our investigations on the Voxforge database shows that this approach can yield systems competitive to state-of-the-art systems. An analysis of the filters in the first convolution layer shows that the filters give emphasis to information in low frequency regions (below 1000 Hz) and implicitly learn to model fundamental frequency information in the speech signal for speaker discrimination.",
"",
"",
"With the development of speech synthesis techniques, automatic speaker verification systems face the serious challenge of spoofing attack. In order to improve the reliability of speaker verification systems, we develop a new filter bank-based cepstral feature, deep neural network (DNN) filter bank cepstral coefficients, to distinguish between natural and spoofed speech. The DNN filter bank is automatically generated by training a filter bank neural network (FBNN) using natural and synthetic speech. By adding restrictions on the training rules, the learned weight matrix of FBNN is band limited and sorted by frequency, similar to the normal filter bank. Unlike the manually designed filter bank, the learned filter bank has different filter shapes in different channels, which can capture the differences between natural and synthetic speech more effectively. The experimental results on the ASVspoof 2015 database show that the Gaussian mixture model maximum-likelihood classifier trained by the new feature performs better than the state-of-the-art linear frequency triangle filter bank cepstral coefficients-based classifier, especially on detecting unknown attacks.",
"",
"Mel-filter banks are commonly used in speech recognition, as they are motivated from theory related to speech production and perception. While features derived from mel-filter banks are quite popular, we argue that this filter bank is not really an appropriate choice as it is not learned for the objective at hand, i.e. speech recognition. In this paper, we explore replacing the filter bank with a filter bank layer that is learned jointly with the rest of a deep neural network. Thus, the filter bank is learned to minimize cross-entropy, which is more closely tied to the speech recognition objective. On a 50-hour English Broadcast News task, we show that we can achieve a 5 relative improvement in word error rate (WER) using the filter bank learning approach, compared to having a fixed set of filters.",
"Deep neural networks (DNN) have achieved significant success in the field of speech recognition. One of the main advantages of the DNN is automatic feature extraction without human intervention. Therefore, we incorporate a pseudo-filterbank layer to the bottom of DNN and train the whole filterbank layer and the following networks jointly, while most systems take pre-defined mel-scale filterbanks as acoustic features to DNN. In the experiment, we use Gaussian functions instead of triangular mel-scale filterbanks. This technique enables a filterbank layer to maintain the functionality of frequency domain smoothing. The proposed method provides an 8.0 relative improvement in clean condition on ASJ+JNAS corpus and a 2.7 relative improvement on noise-corrupted ASJ+JNAS corpus compared with traditional fully-connected DNN. Experimental results show that the frame-level transformation of filterbank layer constrains flexibility and promotes learning efficiency in acoustic modeling.",
"",
"The effectiveness of introducing deep neural networks into conventional speaker recognition pipelines has been broadly shown to benefit system performance. A novel text-independent speaker verification (SV) framework based on the triplet loss and a very deep convolutional neural network architecture (i.e., Inception-Resnet-v1) are investigated in this study, where a fixed-length speaker discriminative embedding is learned from sparse speech features and utilized as a feature representation for the SV tasks. A concise description of the neural network based speaker discriminative training with triplet loss is presented. An Euclidean distance similarity metric is applied in both network training and SV testing, which ensures the SV system to follow an end-to-end fashion. By replacing the final max average pooling layer with a spatial pyramid pooling layer in the Inception-Resnet-v1 architecture, the fixed-length input constraint is relaxed and an obvious performance gain is achieved compared with the fixed-length input speaker embedding system. For datasets with more severe training test condition mismatches, the probabilistic linear discriminant analysis (PLDA) back end is further introduced to replace the distance based scoring for the proposed speaker embedding system. Thus, we reconstruct the SV task with a neural network based front-end speaker embedding system and a PLDA that provides channel and noise variabilities compensation in the back end. Extensive experiments are conducted to provide useful hints that lead to a better testing performance. Comparison with the state-of-the-art SV frameworks on three public datasets (i.e., a prompt speech corpus, a conversational speech Switchboard corpus, and NIST SRE10 10 s–10 s condition) justifies the effectiveness of our proposed speaker embedding system."
]
} |
1812.05920 | 2903799412 | Deep neural networks can learn complex and abstract representations, that are progressively obtained by combining simpler ones. A recent trend in speech and speaker recognition consists in discovering these representations starting from raw audio samples directly. Differently from standard hand-crafted features such as MFCCs or FBANK, the raw waveform can potentially help neural networks discover better and more customized representations. The high-dimensional raw inputs, however, can make training significantly more challenging. This paper summarizes our recent efforts to develop a neural architecture that efficiently processes speech from audio waveforms. In particular, we propose SincNet, a novel Convolutional Neural Network (CNN) that encourages the first layer to discover meaningful filters by exploiting parametrized sinc functions. In contrast to standard CNNs, which learn all the elements of each filter, only low and high cutoff frequencies of band-pass filters are directly learned from data. This inductive bias offers a very compact way to derive a customized front-end, that only depends on some parameters with a clear physical meaning. Our experiments, conducted on both speaker and speech recognition, show that the proposed architecture converges faster, performs better, and is more computationally efficient than standard CNNs. | Differently from the proposed approach, the latter works operate on spectrogram features and still learn all the L elements of the CNN filters. An idea related to the proposed method has been recently explored in @cite_16 , where a set of parameterized Gaussian filters are employed. This approach operates on the spectrogram domain, while SincNet directly considers the raw time domain waveform. Similarly to our work, in @cite_26 the convolutional filters are initialized with a predefined filter shape. However, rather than focusing on cut-off frequencies only, all the basic taps of the FIR filters are still learned. This work extends our previous studies on the SincNet @cite_3 . To the best of our knowledge, this paper is the first that shows the effectiveness of this architecture in a speech recognition application. | {
"cite_N": [
"@cite_16",
"@cite_3",
"@cite_26"
],
"mid": [
"2658929981",
"2964052309",
"2962901777"
],
"abstract": [
"Deep neural networks (DNN) have achieved significant success in the field of speech recognition. One of the main advantages of the DNN is automatic feature extraction without human intervention. Therefore, we incorporate a pseudo-filterbank layer to the bottom of DNN and train the whole filterbank layer and the following networks jointly, while most systems take pre-defined mel-scale filterbanks as acoustic features to DNN. In the experiment, we use Gaussian functions instead of triangular mel-scale filterbanks. This technique enables a filterbank layer to maintain the functionality of frequency domain smoothing. The proposed method provides an 8.0 relative improvement in clean condition on ASJ+JNAS corpus and a 2.7 relative improvement on noise-corrupted ASJ+JNAS corpus compared with traditional fully-connected DNN. Experimental results show that the frame-level transformation of filterbank layer constrains flexibility and promotes learning efficiency in acoustic modeling.",
"Deep learning is progressively gaining popularity as a viable alternative to i-vectors for speaker recognition. Promising results have been recently obtained with Convolutional Neural Networks (CNNs) when fed by raw speech samples directly. Rather than employing standard hand-crafted features, the latter CNNs learn low-level speech representations from waveforms, potentially allowing the network to better capture important narrow-band speaker characteristics such as pitch and formants. Proper design of the neural network is crucial to achieve this goal.This paper proposes a novel CNN architecture, called SincNet, that encourages the first convolutional layer to discover more meaningful filters. SincNet is based on parametrized sinc functions, which implement band-pass filters. In contrast to standard CNNs, that learn all elements of each filter, only low and high cutoff frequencies are directly learned from data with the proposed method. This offers a very compact and efficient way to derive a customized filter bank specifically tuned for the desired application.Our experiments, conducted on both speaker identification and speaker verification tasks, show that the proposed architecture converges faster and performs better than a standard CNN on raw waveforms.",
"We train a bank of complex filters that operates on the raw waveform and is fed into a convolutional neural network for end-to-end phone recognition. These time-domain filterbanks (TD-filterbanks) are initialized as an approximation of mel-filterbanks, and then fine-tuned jointly with the remaining convolutional architecture. We perform phone recognition experiments on TIMIT and show that for several architectures, models trained on TD- filterbanks consistently outperform their counterparts trained on comparable mel-filterbanks. We get our best performance by learning all front-end steps, from pre-emphasis up to averaging. Finally, we observe that the filters at convergence have an asymmetric impulse response, and that some of them remain almost analytic."
]
} |
1812.05850 | 2904340362 | Abstract Semantic segmentation (i.e. image parsing) aims to annotate each image pixel with its corresponding semantic class label. Spatially consistent labeling of the image requires an accurate description and modeling of the local contextual information. Segmentation result is typically improved by Markov Random Field (MRF) optimization on the initial labels. However this improvement is limited by the accuracy of initial result and how the contextual neighborhood is defined. In this paper, we develop generalized and flexible contextual models for segmentation neighborhoods in order to improve parsing accuracy. Instead of using a fixed segmentation and neighborhood definition, we explore various contextual models for fusion of complementary information available in alternative segmentations of the same image. In other words, we propose a novel MRF framework that describes and optimizes the contextual dependencies between multiple segmentations. Simulation results on two common datasets demonstrate significant improvement in parsing accuracy over the baseline approaches. | Context in image parsing is typically introduced in the form of MRF or CRF models that describe the local and/or global dependencies among object labels and scene content. Several CNN-based parsing methods adopt CRFs as a post-processing step to refine their outputs ( @cite_40 @cite_4 ). @cite_41 employs a fully connected CRF among pixels to capture both local and global context. These methods require separate training steps for learning the CNN and CRF. Recurrent neural networks (RNNs) are also used to model context among pixels/objects ( @cite_33 @cite_35 ), hence introducing context information into the neural network architecture. @cite_15 shows how to formulate the CRF model as an RNN; in this manner the CRF can be combined with any CNN-based parser for end-to-end training of the whole network. | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_33",
"@cite_41",
"@cite_40",
"@cite_15"
],
"mid": [
"",
"",
"1909234690",
"2412782625",
"2950672966",
"2124592697"
],
"abstract": [
"",
"",
"This paper addresses the problem of pixel-level segmentation and classification of scene images with an entirely learning-based approach using Long Short Term Memory (LSTM) recurrent neural networks, which are commonly used for sequence classification. We investigate two-dimensional (2D) LSTM networks for natural scene images taking into account the complex spatial dependencies of labels. Prior methods generally have required separate classification and image segmentation stages and or pre- and post-processing. In our approach, classification, segmentation, and context integration are all carried out by 2D LSTM networks, allowing texture and spatial model parameters to be learned within a single model. The networks efficiently capture local and global contextual information over raw RGB values and adapt well for complex scene images. Our approach, which has a much lower computational complexity than prior methods, achieved state-of-the-art performance over the Stanford Background and the SIFT Flow datasets. In fact, if no pre- or post-processing is applied, LSTM networks outperform other state-of-the-art approaches. Hence, only with a single-core Central Processing Unit (CPU), the running time of our approach is equivalent or better than the compared state-of-the-art approaches which use a Graphics Processing Unit (GPU). Finally, our networks' ability to visualize feature maps from each layer supports the hypothesis that LSTM networks are overall suited for image processing tasks.",
"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.",
"Recognizing materials in real-world images is a challenging task. Real-world materials have rich surface texture, geometry, lighting conditions, and clutter, which combine to make the problem particularly difficult. In this paper, we introduce a new, large-scale, open dataset of materials in the wild, the Materials in Context Database (MINC), and combine this dataset with deep learning to achieve material recognition and segmentation of images in the wild. MINC is an order of magnitude larger than previous material databases, while being more diverse and well-sampled across its 23 categories. Using MINC, we train convolutional neural networks (CNNs) for two tasks: classifying materials from patches, and simultaneous material recognition and segmentation in full images. For patch-based classification on MINC we found that the best performing CNN architectures can achieve 85.2 mean class accuracy. We convert these trained CNN classifiers into an efficient fully convolutional framework combined with a fully connected conditional random field (CRF) to predict the material at every pixel in an image, achieving 73.1 mean class accuracy. Our experiments demonstrate that having a large, well-sampled dataset such as MINC is crucial for real-world material recognition and segmentation.",
"Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark."
]
} |
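Illustrative note on the record above: it surveys MRF and CRF models that refine an initial labeling by enforcing spatial consistency. The sketch below uses a much simpler stand-in for that general idea, iterated conditional modes (ICM) with a Potts pairwise prior over per-pixel class costs; it is not the paper's model or the mean-field CRF inference cited there, and the cost convention, weight, and toy data are assumptions.

```python
import numpy as np

def icm_label_smoothing(unary, beta=0.5, n_iters=5):
    """Refine per-pixel labels with a Potts pairwise prior via ICM.

    unary: (H, W, C) per-pixel class costs (e.g. negative log-probabilities).
    beta:  penalty weight for disagreeing 4-connected neighbours.
    """
    H, W, C = unary.shape
    labels = unary.argmin(axis=2)                    # initial labeling from unaries only
    for _ in range(n_iters):
        for y in range(H):
            for x in range(W):
                neigh = []                           # labels of the 4-connected neighbours
                if y > 0:
                    neigh.append(labels[y - 1, x])
                if y < H - 1:
                    neigh.append(labels[y + 1, x])
                if x > 0:
                    neigh.append(labels[y, x - 1])
                if x < W - 1:
                    neigh.append(labels[y, x + 1])
                neigh = np.asarray(neigh)
                # Local energy per class: unary cost plus Potts disagreement penalty
                costs = unary[y, x] + beta * np.array([(neigh != c).sum() for c in range(C)])
                labels[y, x] = costs.argmin()
    return labels

# Toy usage: noisy two-class costs on a 20x20 grid, left half favours class 0
rng = np.random.default_rng(0)
scores = rng.random((20, 20, 2))
scores[:, :10, 0] -= 0.3
scores[:, 10:, 1] -= 0.3
print(icm_label_smoothing(scores))
```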
1812.05850 | 2904340362 | Abstract Semantic segmentation (i.e. image parsing) aims to annotate each image pixel with its corresponding semantic class label. Spatially consistent labeling of the image requires an accurate description and modeling of the local contextual information. Segmentation result is typically improved by Markov Random Field (MRF) optimization on the initial labels. However this improvement is limited by the accuracy of initial result and how the contextual neighborhood is defined. In this paper, we develop generalized and flexible contextual models for segmentation neighborhoods in order to improve parsing accuracy. Instead of using a fixed segmentation and neighborhood definition, we explore various contextual models for fusion of complementary information available in alternative segmentations of the same image. In other words, we propose a novel MRF framework that describes and optimizes the contextual dependencies between multiple segmentations. Simulation results on two common datasets demonstrate significant improvement in parsing accuracy over the baseline approaches. | To circumvent the shortcomings of previous models, our generalized MRF model defines a flexible framework to combine information coming from multiple segmentations and parsing methods. The closest work to our proposal is Associative Hierarchical Random Fields (AHRF) of @cite_3 . AHRF provides a hierarchical MRF model for multiple segmentations at different scales. AHRF is introduced as a generalization of different MRF models defined over pixels, superpixels or a hierarchy of segmentations (such as @cite_22 ). While AHRF defines a strict hierarchy between pixels, segments and super-segments, our model allows for combination of different segmentations without any fixed parent-child or coarse-fine scale relationship in between. In addition, we investigate the fusion of decisions from different (superpixel-based and CNN-based) parsers, while @cite_3 does not explain how to extend AHRF to incorporate several different classifiers. | {
"cite_N": [
"@cite_22",
"@cite_3"
],
"mid": [
"1576235636",
"2113940248"
],
"abstract": [
"The joint tasks of object recognition and object segmentation from a single image are complex in their requirement of not only correct classification, but also deciding exactly which pixels belong to the object. Exploring all possible pixel subsets is prohibitively expensive, leading to recent approaches which use unsupervised image segmentation to reduce the size of the configuration space. Image segmentation, however, is known to be unstable, strongly affected by small image perturbations, feature choices, or different segmentation algorithms. This instability has led to advocacy for using multiple segmentations of an image. In this paper, we explore the question of how to best integrate the information from multiple bottom-up segmentations of an image to improve object recognition robustness. By integrating the image partition hypotheses in an intuitive combined top-down and bottom-up recognition approach, we improve object and feature support. We further explore possible extensions of our method and whether they provide improved performance. Results are presented on the MSRC 21-class data set and the Pascal VOC2007 object segmentation challenge.",
"This paper makes two contributions: the first is the proposal of a new model—The associative hierarchical random field (AHRF), and a novel algorithm for its optimization; the second is the application of this model to the problem of semantic segmentation. Most methods for semantic segmentation are formulated as a labeling problem for variables that might correspond to either pixels or segments such as super-pixels. It is well known that the generation of super pixel segmentations is not unique. This has motivated many researchers to use multiple super pixel segmentations for problems such as semantic segmentation or single view reconstruction. These super-pixels have not yet been combined in a principled manner, this is a difficult problem, as they may overlap, or be nested in such a way that the segmentations form a segmentation tree. Our new hierarchical random field model allows information from all of the multiple segmentations to contribute to a global energy. MAP inference in this model can be performed efficiently using powerful graph cut based move making algorithms. Our framework generalizes much of the previous work based on pixels or segments, and the resulting labelings can be viewed both as a detailed segmentation at the pixel level, or at the other extreme, as a segment selector that pieces together a solution like a jigsaw, selecting the best segments from different segmentations as pieces. We evaluate its performance on some of the most challenging data sets for object class segmentation, and show that this ability to perform inference using multiple overlapping segmentations leads to state-of-the-art results."
]
} |
1812.05850 | 2904340362 | Abstract Semantic segmentation (i.e. image parsing) aims to annotate each image pixel with its corresponding semantic class label. Spatially consistent labeling of the image requires an accurate description and modeling of the local contextual information. Segmentation result is typically improved by Markov Random Field (MRF) optimization on the initial labels. However this improvement is limited by the accuracy of initial result and how the contextual neighborhood is defined. In this paper, we develop generalized and flexible contextual models for segmentation neighborhoods in order to improve parsing accuracy. Instead of using a fixed segmentation and neighborhood definition, we explore various contextual models for fusion of complementary information available in alternative segmentations of the same image. In other words, we propose a novel MRF framework that describes and optimizes the contextual dependencies between multiple segmentations. Simulation results on two common datasets demonstrate significant improvement in parsing accuracy over the baseline approaches. | The main novelty of this paper is the fusion of multiple parsing methods within the MRF formalism. @cite_7 also labels segments by late fusion of SVM classifiers over multiple segmentations; however, fusion is simply performed by taking the mean, max, or multiplication of classifier probabilities in intersecting regions and label smoothing by relaxation labeling is treated as a post-processing step on the fused result. Methods such as @cite_31 , @cite_11 , @cite_37 define hierarchical MRF models over multiple segmentations but do not consider segmentations and class scores coming from alternative methods. In these approaches, since the segmentations and their unary potentials at different levels of the hierarchy are not independently generated, there will be no significant complementary information for fusion over the hierarchical MRF. As a result, gains in labeling accuracy are limited. On the other hand, our MRF framework allows for the fusion of independent segmentations and class likelihoods coming from very different classifiers. | {
"cite_N": [
"@cite_31",
"@cite_37",
"@cite_7",
"@cite_11"
],
"mid": [
"2301304385",
"2737171127",
"2075695045",
"2137881638"
],
"abstract": [
"Abstract We propose a novel label inference approach for segmenting natural images into perceptually meaningful regions. Each pixel is assigned a serial label indicating its category using a Markov Random Field (MRF) model. To this end, we introduce a framework for latent semantic inference of serial labels, called LSI, by integrating local pixel, global region, and scale information of an natural image into a MRF-inspired model. The key difference from traditional MRF based image segmentation methods is that we infer semantic segments in the label space instead of the pixel space. We first design a serial label formation algorithm named Color and Location Density Clustering (CLDC) to capture the local pixel information. Then we propose a label merging strategy to combine global cues of labels in the Cross-Region potential to grasp the contextual information within an image. In addition, to align with the structure of segmentation, a hierarchical label alignment mechanism is designed to formulate the Cross-Scale potential by utilizing the scale information to catch the hierarchy of image at different scales for final segmentation optimization. We evaluate the performance of the proposed approach on the Berkeley Segmentation Dataset and preferable results are achieved.",
"Automatic image annotation and image segmentation are two prominent research fields of Computer Vision, that are getting higher attention these days to accomplish image analysis and scene understanding. In this work, we present an annotation algorithm based on a hierarchical image partition, that makes use of Markov Random Fields (MRFs) to model spatial and hierarchical relations among regions in the image. In this way, we can capture local, global and contextual information. Also, we combine the processes of annotation and segmentation in an iterative way so that each process benefits from the other. Furthermore, we investigate the selection of the starting segmentation level for the hierarchical annotation process, to show its relevance for the final results. We experimentally validate our approach in three well-known datasets: CorelA, Stanford Background and MSRC-21 datasets. In these datasets, we achieved better or comparable results to other state-of-the-art algorithms, improving our base classifier ...",
"In this paper we study the problem of the detection of semantic objects from known categories in images. Unlike existing techniques which operate at the pixel or at a patch level for recognition, we propose to rely on the categorization of image segments. Recent work has highlighted that image segments provide a sound support for visual object class recognition. In this work, we use image segments as primitives to extract robust features and train detection models for a predefined set of categories. Several segmentation algorithms are benchmarked and their performances for segment recognition are compared. We then propose two methods for enhancing the segments classification, one based on the fusion of the classification results obtained with the different segmentations, the other one based on the optimization of the global labelling by correcting local ambiguities between neighbor segments. We use as a benchmark the Microsoft MSRC-21 image database and show that our method competes with the current state-of-the-art.",
"In this paper we propose an approach to holistic scene understanding that reasons jointly about regions, location, class and spatial extent of objects, presence of a class in the image, as well as the scene type. Learning and inference in our model are efficient as we reason at the segment level, and introduce auxiliary variables that allow us to decompose the inherent high-order potentials into pairwise potentials between a few variables with small number of states (at most the number of classes). Inference is done via a convergent message-passing algorithm, which, unlike graph-cuts inference, has no submodularity restrictions and does not require potential specific moves. We believe this is very important, as it allows us to encode our ideas and prior knowledge about the problem without the need to change the inference engine every time we introduce a new potential. Our approach outperforms the state-of-the-art on the MSRC-21 benchmark, while being much faster. Importantly, our holistic model is able to improve performance in all tasks."
]
} |
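Illustrative note on the record above: it contrasts the proposed MRF fusion with simpler late-fusion baselines that combine classifier probabilities from intersecting regions of multiple segmentations. The sketch below shows one such baseline step, averaging per-segment class probabilities at the pixel level; it is only an illustration of the baseline idea, not the proposed framework, and the shapes and toy values are assumptions.

```python
import numpy as np

def fuse_segmentations(segmentations, segment_probs, n_classes):
    """Average per-segment class probabilities from several segmentations.

    segmentations: list of (H, W) integer maps assigning each pixel to a segment id.
    segment_probs: list of (n_segments_i, n_classes) arrays, one row per segment id.
    Returns per-pixel fused probabilities of shape (H, W, n_classes).
    """
    H, W = segmentations[0].shape
    fused = np.zeros((H, W, n_classes))
    for seg_map, probs in zip(segmentations, segment_probs):
        fused += probs[seg_map]              # broadcast each segment's scores to its pixels
    return fused / len(segmentations)        # mean fusion over the available segmentations

# Toy usage: two alternative 4x4 segmentations with 2 segments each and 3 classes
seg_a = np.array([[0, 0, 1, 1]] * 4)
seg_b = np.array([[0] * 4, [0] * 4, [1] * 4, [1] * 4])
probs_a = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
probs_b = np.array([[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])
fused = fuse_segmentations([seg_a, seg_b], [probs_a, probs_b], n_classes=3)
print(fused.argmax(axis=2))
```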
1812.05961 | 2963941012 | In big-data analytics, using tensor decomposition to extract patterns from large, sparse multivariate data is a popular technique. Many challenges exist for designing parallel, high-performance tensor decomposition algorithms due to irregular data accesses and the growing size of tensors that are processed. There have been many efforts at implementing shared-memory algorithms for tensor decomposition, most of which have focused on the traditional C/C++ with OpenMP framework. However, Chapel is becoming an increasingly popular programming language due to its expressiveness and simplicity for writing scalable parallel programs. In this work, we port a state-of-the-art C/OpenMP parallel sparse tensor decomposition tool, SPLATT, to Chapel. We present a performance study that investigates bottlenecks in our Chapel code and discusses approaches for improving its performance. Also, we discuss features in Chapel that would have been beneficial to our porting effort. We demonstrate that our Chapel code is competitive with the C/OpenMP code for both runtime and scalability, achieving 83%-96% performance of the original code and near-linear scalability up to 32 cores. | Designing and implementing sparse parallel tensor decomposition algorithms has been a rich research area in recent years. There are several shared-memory-based tensor decomposition implementations using novel approaches to performing MTTKRP in a scalable and efficient manner @cite_6 @cite_10 @cite_12 . However, to the best of our knowledge, the work presented in this paper is the first to implement parallel sparse tensor decomposition in a high-productivity programming language such as Chapel. | {
"cite_N": [
"@cite_10",
"@cite_12",
"@cite_6"
],
"mid": [
"2724545582",
"2766967332",
"2245094585"
],
"abstract": [
"HPC systems are increasingly used for data intensive computations which exhibit irregular memory accesses, non-uniform work distributions, large memory footprints, and high memory bandwidth demands. To address these challenging demands, HPC systems are turning to many-core architectures that feature a large number of energy-efficient cores backed by high-bandwidth memory. These features are exemplified in Intel's recent Knights Landing many-core processor (KNL), which typically has 68 cores and 16GB of on-package multi-channel DRAM (MCDRAM). This work investigates how the novel architectural features offered by KNL can be used in the context of decomposing sparse, unstructured tensors using the canonical polyadic decomposition (CPD). The CPD is used extensively to analyze large multi-way datasets arising in various areas including precision healthcare, cybersecurity, and e-commerce. Towards this end, we (i) develop problem decompositions for the CPD which are amenable to hundreds of concurrent threads while maintaining load balance and low synchronization costs; and (ii) explore the utilization of architectural features such as MCDRAM. Using one KNL processor, our algorithm achieves up to 1.8x speedup over a dual socket Intel Xeon system with 44 cores.",
"Tensor decompositions are a powerful technique for enabling comprehensive and complete analysis of real-world data. Data analysis through tensor decompositions involves intensive computations over large-scale irregular sparse data. Optimizing the execution of such data intensive computations is key to reducing the time-to-solution (or response time) in real-world data analysis applications. As high-performance computing (HPC) systems are increasingly used for data analysis applications, it is becoming increasingly important to optimize sparse tensor computations and execute them efficiently on modern and advanced HPC systems. In addition to utilizing the large processing capability of HPC systems, it is crucial to improve memory performance (memory usage, communication, synchronization, memory reuse, and data locality) in HPC systems. In this paper, we present multiple optimizations that are targeted towards faster and memory-efficient execution of large-scale tensor analysis on HPC systems. We demonstrate that our techniques achieve reduction in memory usage and execution time of tensor decomposition methods when they are applied on multiple datasets of varied size and structure from different application domains. We achieve up to 11× reduction in memory usage and up to 7× improvement in performance. More importantly, we enable the application of large tensor decompositions on some important datasets on a multi-core system that would not have been feasible without our optimization.",
"The Canonical Polyadic Decomposition (CPD) of tensors is a powerful tool for analyzing multi-way data and is used extensively to analyze very large and extremely sparse datasets. The bottleneck of computing the CPD is multiplying a sparse tensor by several dense matrices. Algorithms for tensor-matrix products fall into two classes. The first class saves floating point operations by storing a compressed tensor for each dimension of the data. These methods are fast but suffer high memory costs. The second class uses a single uncompressed tensor at the cost of additional floating point operations. In this work, we bridge the gap between the two approaches and introduce the compressed sparse fiber (CSF) a data structure for sparse tensors along with a novel parallel algorithm for tensor-matrix multiplication. CSF offers similar operation reductions as existing compressed methods while using only a single tensor structure. We validate our contributions with experiments comparing against state-of-the-art methods on a diverse set of datasets. Our work uses 58 less memory than the state-of-the-art while achieving 81 of the parallel performance on 16 threads."
]
} |
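Illustrative note on the record above: the MTTKRP kernel it refers to reduces, for a third-order sparse tensor, to an elementwise product of factor-matrix rows accumulated per output row. The sketch below spells out mode-0 MTTKRP for a COO tensor in NumPy as a didactic reference; it is not SPLATT's CSF-based or Chapel implementation, and all names, shapes, and toy values are assumptions.

```python
import numpy as np

def mttkrp_mode0(coords, vals, B, C, dim0):
    """Mode-0 MTTKRP for a sparse third-order tensor in COO form.

    coords: (nnz, 3) integer indices (i, j, k); vals: (nnz,) nonzero values.
    B, C:   factor matrices for modes 1 and 2, of shapes (dim1, rank) and (dim2, rank).
    Returns M of shape (dim0, rank) with M[i, r] = sum over nonzeros of X[i,j,k]*B[j,r]*C[k,r].
    """
    rank = B.shape[1]
    M = np.zeros((dim0, rank))
    for (i, j, k), v in zip(coords, vals):
        M[i] += v * (B[j] * C[k])            # Hadamard product of rows, then accumulate
    return M

# Toy usage: a 3x4x5 tensor with 6 nonzeros and rank-2 factor matrices
rng = np.random.default_rng(1)
coords = np.array([[0, 1, 2], [0, 3, 4], [1, 0, 0], [1, 2, 3], [2, 1, 1], [2, 3, 2]])
vals = rng.random(6)
B, C = rng.random((4, 2)), rng.random((5, 2))
print(mttkrp_mode0(coords, vals, B, C, dim0=3))
```

Parallel implementations such as the ones cited above differ mainly in how the nonzeros are grouped and how the accumulation into M is kept race-free; the loop body itself stays essentially this simple.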
1812.05961 | 2963941012 | In big-data analytics, using tensor decomposition to extract patterns from large, sparse multivariate data is a popular technique. Many challenges exist for designing parallel, high-performance tensor decomposition algorithms due to irregular data accesses and the growing size of tensors that are processed. There have been many efforts at implementing shared-memory algorithms for tensor decomposition, most of which have focused on the traditional C/C++ with OpenMP framework. However, Chapel is becoming an increasingly popular programming language due to its expressiveness and simplicity for writing scalable parallel programs. In this work, we port a state-of-the-art C/OpenMP parallel sparse tensor decomposition tool, SPLATT, to Chapel. We present a performance study that investigates bottlenecks in our Chapel code and discusses approaches for improving its performance. Also, we discuss features in Chapel that would have been beneficial to our porting effort. We demonstrate that our Chapel code is competitive with the C/OpenMP code for both runtime and scalability, achieving 83%-96% performance of the original code and near-linear scalability up to 32 cores. | There has been a significant effort to evaluate and analyze the performance of Chapel programs for both single- and multi-node environments. Johnson and Hollingsworth ported and optimized several C/OpenMP-based benchmarks to single-node Chapel, including LULESH, MiniMD, and CLOMP @cite_8 . Haque and Richards implemented an optimized multi-node version of CoMD in Chapel and identified key limitations of Chapel with regard to scope-based code locality @cite_4 . Our work, while similar, differs from these efforts in that SPLATT is a sparse application from a different problem domain. It is also a full application with several components, ranging from file I/O and sorting to custom sparse data structures and parallel algorithms, rather than a benchmark or proxy application. | {
"cite_N": [
"@cite_4",
"@cite_8"
],
"mid": [
"2566316545",
"2481619518"
],
"abstract": [
"Chapel supports distributed computing with an underlying PGAS memory address space. While it provides abstractions for writing simple and elegant distributed code, the type system currently lacks a notion of locality i.e. a description of an object's access behavior in relation to its actual location. This often necessitates programmer intervention to avoid redundant non-local data access. Moreover, due to insufficient locality information the compiler ends up using “wide” pointers—that can point to non-local data—for objects referenced in an otherwise completely local manner, adding to the runtime overhead.In this work we describe CoMD-Chapel, our distributed Chapel implementation of the CoMD benchmark. We demonstrate that optimizing data access through replication and localization is crucial for achieving performance comparable to the reference implementation. We discuss limitations of existing scope-based locality optimizations and argue instead for a more general (and robust) type-based approach. Lastly, we also evaluate code performance and scaling characteristics. The fully optimized version of CoMD-Chapel can perform to within 62 –87 of the reference implementation.",
"This paper investigates how Chapel performance compares with other parallel frameworks. We provide specific examples of how programmers may improve their single-node (single-locale) Chapel programs to improve performance. We also identify some changes that would be possible to the language to make it easier to get these performance gains. Specifically, we compare the intranode performance of Chapel programs with OpenMP in C C++ by conducting case studies profiling the LULESH, MiniMD, SSCA#2, and CLOMP benchmarks. Our optimization techniques demonstrate improved runtime performance of Chapel benchmarks by factors of 3x, 5.3x, 6.3x, and 4.8x respectively and outperformed their OpenMP counterparts by factors of 2x for LULESH, 1.6x for SSCA#2, and 4.8x for CLOMP."
]
} |
1812.05961 | 2963941012 | In big-data analytics, using tensor decomposition to extract patterns from large, sparse multivariate data is a popular technique. Many challenges exist for designing parallel, high-performance tensor decomposition algorithms due to irregular data accesses and the growing size of tensors that are processed. There have been many efforts at implementing shared-memory algorithms for tensor decomposition, most of which have focused on the traditional C/C++ with OpenMP framework. However, Chapel is becoming an increasingly popular programming language due to its expressiveness and simplicity for writing scalable parallel programs. In this work, we port a state-of-the-art C/OpenMP parallel sparse tensor decomposition tool, SPLATT, to Chapel. We present a performance study that investigates bottlenecks in our Chapel code and discusses approaches for improving its performance. Also, we discuss features in Chapel that would have been beneficial to our porting effort. We demonstrate that our Chapel code is competitive with the C/OpenMP code for both runtime and scalability, achieving 83%-96% performance of the original code and near-linear scalability up to 32 cores. | There has also been work on developing techniques to more effectively measure the performance of Chapel programs, where a data-centric view of performance data is studied as opposed to more traditional code-centric views @cite_13 . In our work, we employed code-centric profiling to identify performance bottlenecks via gprof and source-code level timers. In the future, we would be interested in applying such data-centric techniques to improve our code. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2728705232"
],
"abstract": [
"Chapel is an emerging PGAS (Partitioned Global Address Space) language whose design goal is to make parallel programming more productive and generally accessible. To date, the implementation effort has focused primarily on correctness over performance. We present a performance measurement technique for Chapel and the idea is also applicable to other PGAS models. The unique feature of our tool is that it associates the performance statistics not to the code regions (functions), but to the variables (including the heap allocated, static, and local variables) in the source code. Unlike code-centric methods, this data-centric analysis capability exposes new optimization opportunities that are useful in resolving data locality problems. This paper introduces our idea and implementations of the approach with three benchmarks. We also include a case study optimizing benchmarks based on the information from our tool. The optimized versions improved the performance by a factor of 1.4x for LULESH, 2.3x for MiniMD, and 2.1x for CLOMP with simple modifications to the source code."
]
} |
1812.05506 | 2905111361 | The transfer of reinforcement learning (RL) techniques into real-world applications is challenged by safety requirements in the presence of physical limitations. Most RL methods, in particular the most popular algorithms, do not support explicit consideration of state and input constraints. In this paper, we address this problem for nonlinear systems with continuous state and input spaces by introducing a predictive safety filter, which is able to turn a constrained dynamical system into an unconstrained safe system, to which any RL algorithm can be applied 'out-of-the-box'. The predictive safety filter receives the proposed learning input and decides, based on the current system state, if it can be safely applied to the real system, or if it has to be modified otherwise. Safety is thereby established by a continuously updated safety policy, which is based on a model predictive control formulation using a data-driven system model and considering state- and input-dependent uncertainties. | There is a growing awareness of safety questions in artificial intelligence @cite_5 , where reinforcement learning technologies have been proposed, see e.g. for an overview. , e.g., provide safety in expectation based on a trust-region approach with respect to the policy gradient. Other approaches are based on Bayesian optimization in order to carefully tune parametric policies @cite_26 also with respect to worst-case scenarios @cite_19 @cite_11 @cite_1 . | {
"cite_N": [
"@cite_26",
"@cite_1",
"@cite_19",
"@cite_5",
"@cite_11"
],
"mid": [
"2143346970",
"2964201544",
"2209113413",
"2462906003",
"1018941212"
],
"abstract": [
"This paper introduces a learning-based robust control algorithm that provides robust stability and performance guarantees during learning. The approach uses Gaussian process (GP) regression based on data gathered during operation to update an initial model of the system and to gradually decrease the uncertainty related to this model. Embedding this data-based update scheme in a robust control framework guarantees stability during the learning process. Traditional robust control approaches have not considered online adaptation of the model and its uncertainty before. As a result, their controllers do not improve performance during operation. Typical machine learning algorithms that have achieved similar high-performance behavior by adapting the model and controller online do not provide the guarantees presented in this paper. In particular, this paper considers a stabilization task, linearizes the nonlinear, GP-based model around a desired operating point, and solves a convex optimization problem to obtain a linear robust controller. The resulting performance improvements due to the learning-based controller are demonstrated in experiments on a quadrotor vehicle.",
"Recent successes in reinforcement learning have lead to the development of complex controllers for realworld robots. As these robots are deployed in safety-critical applications and interact with humans, it becomes critical to ensure safety in order to avoid causing harm. A first step in this direction is to test the controllers in simulation. To be able to do this, we need to capture what we mean by safety and then efficiently search the space of all behaviors to see if they are safe. In this paper, we present an active-testing framework based on Bayesian Optimization. We specify safety constraints using logic and exploit structure in the problem in order to test the system for adversarial counter examples that violate the safety specifications. These specifications are defined as complex boolean combinations of smooth functions on the trajectories and, unlike reward functions in reinforcement learning, are expressive and impose hard constraints on the system. In our framework, we exploit regularity assumptions on individual functions in form of a Gaussian Process (GP) prior. We combine these into a coherent optimization framework using problem structure. The resulting algorithm is able to provably verify complex safety specifications or alternatively find counter examples. Experimental results show that the proposed method is able to find adversarial examples quickly.",
"Robotic systems typically have numerous parameters, e.g. the choice of planning algorithm, real-valued parameters of motion and vision modules, and control parameters. We consider the problem of optimizing these parameters for best worst-case performance over a range of environments. To this end we first propose to evaluate system parameters by adversarially optimizing over environment parameters to find particularly hard environments. This is then nested in a game-theoretic minimax optimization setting, where an outerloop aims to find best worst-case system parameters. For both optimization levels we use Bayesian global optimization (GP-UCB) which provides the necessary confidence bounds to handle the stochasticity of the performance. We compare our method (Nested Minimax) with an existing relaxation method we adapted to become applicable in our setting. By construction our approach provides more robustness to performance stochasticity. We demonstrate the method for planning algorithm selection on a pick'n'place application and for control parameter optimization on a triple inverted pendulum for robustness to adversarial perturbations.",
"Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (\"avoiding side effects\" and \"avoiding reward hacking\"), an objective function that is too expensive to evaluate frequently (\"scalable supervision\"), or undesirable behavior during the learning process (\"safe exploration\" and \"distributional shift\"). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking applications of AI.",
"Worst-case design is important whenever robustness to adverse environmental conditions must be ensured regardless of their probability. It leads to minimax optimization, which is most often treated assuming that prior knowledge makes the worst environmental conditions obvious, or that a closed-form expression for the performance index is available. This paper considers the important situation where none of these assumptions is true and where the performance index must be evaluated via costly numerical simulations. Strategies to limit the number of these evaluations are then of paramount importance. One such strategy is proposed here, which further improves the performance of an algorithm recently presented that combines a relaxation procedure for minimax search with the well-known Kriging-based EGO algorithm. Expected Improvement is computed in the minimax optimization context, which allows to further reduce the number of costly evaluations of the performance index. The interest of the approach is demonstrated on test cases and a simple engineering problem from the literature, which facilitates comparison with alternative approaches."
]
} |
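The four abstracts above share a common computational core: a Gaussian-process surrogate of an expensive performance or safety measure is queried where failures are most likely, using confidence bounds (GP-UCB) or expected improvement. The sketch below illustrates that core in its simplest form; it is not code from any of the cited papers, and the simulator stub `run_simulation`, the RBF kernel and the bound multiplier are placeholder assumptions.

```python
# Illustrative sketch only: GP surrogate + lower-confidence-bound search over
# environment parameters for settings where a controller performs worst.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def run_simulation(env_params):
    # Hypothetical stand-in for a costly simulator rollout returning a score.
    return float(np.sin(3 * env_params[0]) + 0.1 * np.random.randn())

rng = np.random.default_rng(0)
candidates = rng.uniform(-1.0, 1.0, size=(500, 1))   # candidate environments
X = rng.uniform(-1.0, 1.0, size=(5, 1))              # initial random probes
y = np.array([run_simulation(x) for x in X])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-2)
for _ in range(20):
    gp.fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    lcb = mean - 2.0 * std                    # optimistic for the adversary
    x_next = candidates[np.argmin(lcb)]       # probe the most suspicious setting
    X = np.vstack([X, x_next])
    y = np.append(y, run_simulation(x_next))

print("most adversarial environment found:", X[np.argmin(y)], "score:", y.min())
```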
1812.05506 | 2905111361 | The transfer of reinforcement learning (RL) techniques into real-world applications is challenged by safety requirements in the presence of physical limitations. Most RL methods, in particular the most popular algorithms, do not support explicit consideration of state and input constraints. In this paper, we address this problem for nonlinear systems with continuous state and input spaces by introducing a predictive safety filter, which is able to turn a constrained dynamical system into an unconstrained safe system, to which any RL algorithm can be applied 'out-of-the-box'. The predictive safety filter receives the proposed learning input and decides, based on the current system state, if it can be safely applied to the real system, or if it has to be modified otherwise. Safety is thereby established by a continuously updated safety policy, which is based on a model predictive control formulation using a data-driven system model and considering state and input dependent uncertainties. | Originating from robust model predictive control, several extensions based on model predictive control techniques to safe learning-based methods have been proposed, e.g. in , as well as various extensions towards consideration of machine learning based model estimation techniques @cite_9 @cite_14 @cite_6 @cite_16 @cite_12 @cite_8 , also in an adaptive manner @cite_7 @cite_23 . Note that in the robotics literature similar concepts exist, which are often also referred to as funneling, see e.g. and references therein, as well as so-called LQR-trees @cite_21 . | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_21",
"@cite_6",
"@cite_23",
"@cite_16",
"@cite_12"
],
"mid": [
"2619551236",
"1500462030",
"2900806034",
"2765311358",
"2100538121",
"2700407638",
"2887531520",
"",
"2791704483"
],
"abstract": [
"Gaussian process (GP) regression has been widely used in supervised machine learning due to its flexibility and inherent ability to describe uncertainty in function estimation. In the context of control, it is seeing increasing use for modeling of nonlinear dynamical systems from data, as it allows the direct assessment of residual model uncertainty. We present a model predictive control (MPC) approach that integrates a nominal system with an additive nonlinear part of the dynamics modeled as a GP. Approximation techniques for propagating the state distribution are reviewed and we describe a principled way of formulating the chance constrained MPC problem, which takes into account residual uncertainties provided by the GP model to enable cautious control. Using additional approximations for efficient computation, we finally demonstrate the approach in a simulation example, as well as in a hardware implementation for autonomous racing of remote controlled race cars, highlighting improvements with regard to both performance and safety over a nominal controller.",
"A novel adaptive output feedback control technique for uncertain linear systems is proposed, able to cope with input and output constraints and measurement noise. At each time step, the collected input-output data are exploited to refine the set of models that are consistent with the available information on the system. Then, the control input is computed according to a receding horizon strategy, which guarantees recursive constraint satisfaction for all the admissible models, hence also for the actual plant. The technique relies only on the solution of linear and quadratic programs. The effectiveness of the approach is illustrated in a numerical example.",
"Abstract A robust model predictive control (RMPC) approach for linear systems with bounded state-dependent uncertainties is proposed. Such uncertainties can arise from unmodeled non-linearities or external disturbances, for example. By explicitly considering the state dependency of the uncertainty sets in the RMPC approach, it is shown how closed-loop performance can be improved over existing approaches that consider worst-case uncertainty. Being able to handle state-dependent uncertainties is particularly relevant in learning-based MPC where the system model is learned from data and confidence in the model typically varies over the state space. The efficacy of the proposed approach for learning-based RMPC is illustrated with a numerical example, where uncertainty sets are obtained from data using Gaussian Process regression.",
"Spanish MINECO Grant PRX15-00300 and projects DPI2013-48243-C2-2-R and DPI2016-76493-C3-1-R. UK Engineering and Physical Research Council, grant no.EP J012300 1.",
"Advances in the direct computation of Lyapunov functions using convex optimization make it possible to efficiently evaluate regions of attraction for smooth non-linear systems. Here we present a feedback motion-planning algorithm which uses rigorously computed stability regions to build a sparse tree of LQR-stabilized trajectories. The region of attraction of this non-linear feedback policy âprobabilistically coversâ the entire controllable subset of state space, verifying that all initial conditions that are capable of reaching the goal will reach the goal. We numerically investigate the properties of this systematic non-linear feedback design algorithm on simple non-linear systems, prove the property of probabilistic coverage, and discuss extensions and implementation details of the basic algorithm.",
"Trial-and-error based reinforcement learning (RL) has seen rapid advancements in recent times, especially with the advent of deep neural networks. However, the majority of autonomous RL algorithms require a large number of interactions with the environment. A large number of interactions may be impractical in many real-world applications, such as robotics, and many practical systems have to obey limitations in the form of state space or control constraints. To reduce the number of system interactions while simultaneously handling constraints, we propose a model-based RL framework based on probabilistic Model Predictive Control (MPC). In particular, we propose to learn a probabilistic transition model using Gaussian Processes (GPs) to incorporate model uncertainty into long-term predictions, thereby, reducing the impact of model errors. We then use MPC to find a control sequence that minimises the expected long-term cost. We provide theoretical guarantees for first-order optimality in the GP-based transition models with deterministic approximate inference for long-term planning. We demonstrate that our approach does not only achieve state-of-the-art data efficiency, but also is a principled way for RL in constrained environments.",
"In this paper, we present a novel constraint tightening approach for nonlinear robust model predictive control (MPC). This approach uses a simple constructive constraint tightening based on growing tubes. Contrary to other approaches, we require no complex offline computations to obtain a stabilizing control law. Instead, we consider the notion of incremental stabilizability and design tubes based on an estimate of the achievable exponential decay rate. In addition, we show how this tightening can be used as an ad-hoc modification to improve the robustness of MPC without terminal constraints. We study the system theoretic properties of the resulting closed-loop system, including bounds on the region of attraction and the minimal robust positively invariant (RPI) set. Within an MPC framework without terminal constraints, the proposed constraint tightening leads to a nonlinear robust controller without complex design procedures, which makes it appealing for practical applications.",
"",
"Reinforcement learning has been successfully used to solve difficult tasks in complex unknown environments. However, these methods typically do not provide any safety guarantees during the learning process. This is particularly problematic, since reinforcement learning agent actively explore their environment. This prevents their use in safety-critical, real-world applications. In this paper, we present a learning-based model predictive control scheme that provides high-probability safety guarantees throughout the learning process. Based on a reliable statistical model, we construct provably accurate confidence intervals on predicted trajectories. Unlike previous approaches, we allow for input-dependent uncertainties. Based on these reliable predictions, we guarantee that trajectories satisfy safety constraints. Moreover, we use a terminal set constraint to recursively guarantee the existence of safe control actions at every iteration. We evaluate the resulting algorithm to safely explore the dynamics of an inverted pendulum and to solve a reinforcement learning task on a cart-pole system with safety constraints."
]
} |
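Several of the reference abstracts above (the GP-MPC ones in particular) rest on the same modelling step: fit a Gaussian process to the residual between a nominal model and observed transitions, so that predictions come with a state- and input-dependent uncertainty estimate. The following is a minimal sketch of that step on an invented scalar system; the nominal dynamics, kernel and data are assumptions for illustration only.

```python
# Minimal sketch (toy system, assumed names): learn the residual of a nominal
# model with a GP, so predictions carry a state/input-dependent uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

A, B = 0.9, 0.5                               # nominal (known) scalar dynamics

def true_step(x, u):
    # "Real" system with an unknown nonlinearity and process noise.
    return A * x + B * u + 0.2 * np.sin(2 * x) + 0.01 * np.random.randn()

rng = np.random.default_rng(1)
xs = rng.uniform(-2, 2, 200)
us = rng.uniform(-1, 1, 200)
next_xs = np.array([true_step(x, u) for x, u in zip(xs, us)])
residuals = next_xs - (A * xs + B * us)       # what the nominal model misses

gp = GaussianProcessRegressor(RBF(0.5) + WhiteKernel(1e-4))
gp.fit(np.column_stack([xs, us]), residuals)

def predict_next(x, u):
    """Nominal prediction corrected by the learned residual, plus a std."""
    mu, std = gp.predict(np.array([[x, u]]), return_std=True)
    return A * x + B * u + mu[0], std[0]

print(predict_next(0.3, 0.1))   # near the data: small predictive std
print(predict_next(5.0, 0.0))   # far from the data: larger predictive std
```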
1812.05506 | 2905111361 | The transfer of reinforcement learning (RL) techniques into real-world applications is challenged by safety requirements in the presence of physical limitations. Most RL methods, in particular the most popular algorithms, do not support explicit consideration of state and input constraints. In this paper, we address this problem for nonlinear systems with continuous state and input spaces by introducing a predictive safety filter, which is able to turn a constrained dynamical system into an unconstrained safe system, to which any RL algorithm can be applied 'out-of-the-box'. The predictive safety filter receives the proposed learning input and decides, based on the current system state, if it can be safely applied to the real system, or if it has to be modified otherwise. Safety is thereby established by a continuously updated safety policy, which is based on a model predictive control formulation using a data-driven system model and considering state and input dependent uncertainties. | While some of these approaches have been shown to work well in practice @cite_20 @cite_18 , they typically either lack rigorous theoretical safety guarantees, tend to be overly conservative by relying on Lipschitz-based estimates in the prediction of the uncertain system evolution, or are restricted to a very specific class of systems. | {
"cite_N": [
"@cite_18",
"@cite_20"
],
"mid": [
"2803275621",
"2007726556"
],
"abstract": [
"This paper presents an adaptive high performance control method for autonomous miniature race cars. Racing dynamics are notoriously hard to model from first principles, which is addressed by means of a cautious nonlinear model predictive control (NMPC) approach that learns to improve its dynamics model from data and safely increases racing performance. The approach makes use of a Gaussian Process (GP) and takes residual model uncertainty into account through a chance constrained formulation. We present a sparse GP approximation with dynamically adjusting inducing inputs, enabling a real-time implementable controller. The formulation is demonstrated in simulations, which show significant improvement with respect to both lap time and constraint satisfaction compared to an NMPC without model learning.",
"In this paper, we present details of the real time implementation onboard a quadrotor helicopter of learning-based model predictive control (LBMPC). LBMPC rigorously combines statistical learning with control engineering, while providing levels of guarantees about safety, robustness, and convergence. Experimental results show that LBMPC can learn physically based updates to an initial model, and how as a result LBMPC improves transient response performance. We demonstrate robustness to mis-learning. Finally, we show the use of LBMPC in an integrated robotic task demonstration—The quadrotor is used to catch a ball thrown with an a priori unknown trajectory."
]
} |
1812.05506 | 2905111361 | The transfer of reinforcement learning (RL) techniques into real-world applications is challenged by safety requirements in the presence of physical limitations. Most RL methods, in particular the most popular algorithms, do not support explicit consideration of state and input constraints. In this paper, we address this problem for nonlinear systems with continuous state and input spaces by introducing a predictive safety filter, which is able to turn a constrained dynamical system into an unconstrained safe system, to which any RL algorithm can be applied 'out-of-the-box'. The predictive safety filter receives the proposed learning input and decides, based on the current system state, if it can be safely applied to the real system, or if it has to be modified otherwise. Safety is thereby established by a continuously updated safety policy, which is based on a model predictive control formulation using a data-driven system model and considering state and input dependent uncertainties. | Using Bayesian model estimates from data, certification techniques were proposed @cite_25 @cite_2 that validate the resulting closed-loop system. The techniques share similar limitations with safe policy approaches, namely that they are tailored to a specific task. In order to decouple safety from a specific task, the concept of a safety framework has been introduced @cite_15 , which consists of a model-based computation of a safe set of system states and a corresponding safe control policy, which is entitled to override a potentially unsafe RL algorithm in order to ensure invariance with respect to the safe set of system states. This concept was exploited in several papers that provide methods to compute the safe set as well as the corresponding safe policy @cite_17 @cite_22 @cite_13 @cite_0 , and also builds the foundation of the presented safety filter scheme. | {
"cite_N": [
"@cite_22",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_25",
"@cite_17"
],
"mid": [
"2772882203",
"2793750335",
"2618318883",
"2127192854",
"2766730210",
"2973836089",
"2611136914"
],
"abstract": [
"The control of complex systems faces a trade-off between high performance and safety guarantees, which in particular restricts the application of learning-based methods to safety-critical systems. A recently proposed framework to address this issue is the use of a safety controller, which guarantees to keep the system within a safe region of the state space. This paper introduces efficient techniques for the synthesis of a safe set and control law, which offer improved scalability properties by relying on approximations based on convex optimization problems. The first proposed method requires only an approximate linear system model and Lipschitz continuity of the unknown nonlinear dynamics. The second method extends the results by showing how a Gaussian process prior on the unknown system dynamics can be used in order to reduce conservatism of the resulting safe set. We demonstrate the results with numerical examples, including an autonomous convoy of vehicles.",
"While it has been repeatedly shown that learning-based controllers can provide superior performance, they often lack of safety guarantees. This paper aims at addressing this problem by introducing a model predictive safety certification (MPSC) scheme for polytopic linear systems with additive disturbances. The scheme verifies safety of a proposed learning-based input and modifies it as little as necessary in order to keep the system within a given set of constraints. Safety is thereby related to the existence of a model predictive controller (MPC) providing a feasible trajectory towards a safe target set. A robust MPC formulation accounts for the fact that the model is generally uncertain in the context of learning, which allows proving constraint satisfaction at all times under the proposed MPSC strategy. The MPSC scheme can be used in order to expand any potentially conservative set of safe states for learning and we prove an iterative technique for enlarging the safe set. Finally, a practical data-based design procedure for MPSC is proposed using scenario optimization.",
"Reinforcement learning is a powerful paradigm for learning optimal policies from experimental data. However, to find optimal policies, most reinforcement learning algorithms explore all possible actions, which may be harmful for real-world systems. As a consequence, learning algorithms are rarely applied on safety-critical systems in the real world. In this paper, we present a learning algorithm that explicitly considers safety, defined in terms of stability guarantees. Specifically, we extend control-theoretic results on Lyapunov stability verification and show how to use statistical models of the dynamics to obtain high-performance control policies with provable stability certificates. Moreover, under additional regularity assumptions in terms of a Gaussian process prior, we prove that one can effectively and safely collect data in order to learn about the dynamics and thus both improve control performance and expand the safe region of the state space. In our experiments, we show how the resulting algorithm can safely optimize a neural network policy on a simulated inverted pendulum, without the pendulum ever falling down.",
"For some time now machine learning methods have been widely used in perception for autonomous robots. While there have been many results describing the performance of machine learning techniques with regards to their accuracy or convergence rates, relatively little work has been done on developing theoretical performance guarantees about their stability and robustness. As a result, many machine learning techniques are still limited to being used in situations where safety and robustness are not critical for success. One way to overcome this difficulty is by using reachability analysis, which can be used to compute regions of the state space, known as reachable sets, from which the system can be guaranteed to remain safe over some time horizon regardless of the disturbances. In this paper we show how reachability analysis can be combined with machine learning in a scenario in which an aerial robot is attempting to learn the dynamics of a ground vehicle using a camera with a limited field of view. The resulting simulation data shows that by combining these two paradigms, one can create robotic systems that feature the best qualities of each, namely high performance and guaranteed safety.",
"Abstract Learning in interacting dynamical systems can lead to instabilities and violations of critical safety constraints, which is limiting its application to constrained system networks. This paper introduces two safety frameworks that can be applied together with any learning method for ensuring constraint satisfaction in a network of uncertain systems, which are coupled in the dynamics and in the state constraints. The proposed techniques make use of a safe set to modify control inputs that may compromise system safety, while accepting safe inputs from the learning procedure. Two different safe sets for distributed systems are proposed by extending recent results for structured invariant sets. The sets differ in their dynamical allocation to local sets and provide different trade-offs between required communication and achieved set size. The proposed algorithms are proven to keep the system in the safe set at all times and their effectiveness and behavior is illustrated in a numerical example.",
"Control theory can provide useful insights into the properties of controlled, dynamic systems. One important property of nonlinear systems is the region of attraction (ROA), a safe subset of the state space in which a given controller renders an equilibrium point asymptotically stable. The ROA is typically estimated based on a model of the system. However, since models are only an approximation of the real world, the resulting estimated safe region can contain states outside the ROA of the real system. This is not acceptable in safety-critical applications. In this paper, we consider an approach that learns the ROA from experiments on a real system, without ever leaving the true ROA and, thus, without risking safety-critical failures. Based on regularity assumptions on the model errors in terms of a Gaussian process prior, we use an underlying Lyapunov function in order to determine a region in which an equilibrium point is asymptotically stable with high probability. Moreover, we provide an algorithm to actively and safely explore the state space in order to expand the ROA estimate. We demonstrate the effectiveness of this method in simulation.",
"The proven efficacy of learning-based control schemes strongly motivates their application to robotic systems operating in the physical world. However, guaranteeing correct operation during the learning process is currently an unresolved issue, which is of vital importance in safety-critical systems. We propose a general safety framework based on Hamilton-Jacobi reachability methods that can work in conjunction with an arbitrary learning algorithm. The method exploits approximate knowledge of the system dynamics to guarantee constraint satisfaction while minimally interfering with the learning process. We further introduce a Bayesian mechanism that refines the safety analysis as the system acquires new evidence, reducing initial conservativeness when appropriate while strengthening guarantees through real-time validation. The result is a least-restrictive, safety-preserving control law that intervenes only when (a) the computed safety guarantees require it, or (b) confidence in the computed guarantees decays in light of new observations. We prove theoretical safety guarantees combining probabilistic and worst-case analysis and demonstrate the proposed framework experimentally on a quadrotor vehicle. Even though safety analysis is based on a simple point-mass model, the quadrotor successfully arrives at a suitable controller by policy-gradient reinforcement learning without ever crashing, and safely retracts away from a strong external disturbance introduced during flight."
]
} |
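The safety-framework idea summarised above, a safe set together with a safe policy that overrides the learning input whenever it would leave that set, can be sketched generically as below. The one-step check, the interval safe set and the fallback controller are deliberate simplifications that stand in for the MPC-based and reachability-based constructions of the cited works.

```python
# Schematic safety filter (toy placeholders, not the cited constructions):
# accept the learning input only if the predicted successor state stays in
# the safe set, otherwise fall back to a backup policy.
import numpy as np

X_SAFE = (-1.0, 1.0)                          # safe interval for a scalar state

def model_step(x, u):                         # assumed (approximate) model
    return 0.9 * x + 0.5 * u

def backup_policy(x):                         # simple stabilizing fallback
    return float(np.clip(-1.8 * x, -1.0, 1.0))

def safety_filter(x, u_learning):
    """Pass the learning input through only if it is certified safe."""
    x_next = model_step(x, u_learning)
    if X_SAFE[0] <= x_next <= X_SAFE[1]:
        return u_learning                     # accepted unchanged
    return backup_policy(x)                   # overridden by the safe policy

rng = np.random.default_rng(2)
x, trajectory = 0.8, [0.8]
for _ in range(10):
    u = safety_filter(x, rng.uniform(-1, 1))  # random stand-in for an RL agent
    x = model_step(x, u) + 0.02 * rng.standard_normal()
    trajectory.append(round(x, 2))
print(trajectory)                             # remains (approximately) in [-1, 1]
```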
1812.05718 | 2963538197 | Network models have been increasingly used in the past years to support summarization and analysis of narratives, such as famous TV series, books and news. Inspired by social network analysis, most of these models focus on the characters at play. The network model well captures all characters interactions, giving a broad picture of the narration’s content. A few works went beyond by introducing additional semantic elements, always captured in a single layer network. In contrast, we introduce in this work a multilayer network model to capture more elements of the narration of a movie from its script: people, locations, and other semantic elements. This model enables new measures and insights on movies. We demonstrate this model on two very popular movies. | Yeung @cite_18 proposed a scene transition graph and an analysis method for movie browsing and navigation. Each node in the scene transition graph denotes a cluster of shots. There is a link between two nodes @math and @math if a shot represented by node @math immediately precedes any shots represented by node @math . This approach builds a network with interactions between shots and then analyzes it in order to extract the story units of scenes. This model uses only a hierarchical clustering of shots with visual primitives for browsing, but it cannot retrieve movie story elements such as actors, scenes, and dialogues. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2119261219"
],
"abstract": [
"Content based browsing and navigation in digital video collections have been centered on sequential and linear presentation of images. To facilitate such applications, nonlinear and non sequential access into video documents is essential, especially with long programs. For many programs, this can be achieved by identifying underlying story structures which are reflected both by visual content and temporal organization of composing elements. A new framework of video analysis and associated techniques are proposed to automatically parse long programs, to extract story structures and identify story units. The proposed analysis and representation contribute to the extraction of scenes and story units, each representing a distinct locale or event, that cannot be achieved by shot boundary detection alone. Analysis is performed on MPEG compressed video and without a prior models. The result is a compact representation that serves as a summary of the story and allows hierarchical organization of video documents."
]
} |
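As a concrete reading of the scene-transition-graph construction described in the paragraph above, the snippet below builds such a graph from an invented, temporally ordered sequence of shot-cluster labels and extracts candidate story units as strongly connected components.

```python
# Toy scene-transition graph: nodes are shot clusters, a directed edge i -> j
# is added whenever a shot of cluster i immediately precedes one of cluster j.
import networkx as nx

shot_clusters = ["A", "A", "B", "A", "B", "C", "C", "B", "C", "D"]

G = nx.DiGraph()
for prev, curr in zip(shot_clusters, shot_clusters[1:]):
    if prev != curr:
        w = G.edges[prev, curr]["weight"] + 1 if G.has_edge(prev, curr) else 1
        G.add_edge(prev, curr, weight=w)

print(sorted(G.edges(data="weight")))
# Strongly connected components are candidate "story units" (here A, B, C).
print(list(nx.strongly_connected_components(G)))
```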
1812.05718 | 2963538197 | Network models have been increasingly used in the past years to support summarization and analysis of narratives, such as famous TV series, books and news. Inspired by social network analysis, most of these models focus on the characters at play. The network model well captures all characters interactions, giving a broad picture of the narration’s content. A few works went beyond by introducing additional semantic elements, always captured in a single layer network. In contrast, we introduce in this work a multilayer network model to capture more elements of the narration of a movie from its script: people, locations, and other semantic elements. This model enables new measures and insights on movies. We demonstrate this model on two very popular movies. | Jung @cite_10 proposed a narrative structure graph to summarize a movie. The graph is composed of scene nodes which are narrative elements with character interactions, and connections between scenes decided by editorial relations. Using only scenes to construct a narrative structure graph for movie summarization is not sufficient. Indeed, story elements such as major characters and their interactions cannot be retrieved from this graph. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2059213912"
],
"abstract": [
"TV program review services, especially drama review services, are one of the most popular video on demand services on the Web. In this paper, we propose a novel video abstraction model for a review service of story-oriented video such as dramas. In a drama review service, viewers want to understand the story in a short time and service providers want to provide video abstracts at minimum cost. The proposed model enables the automatic creation of a video abstract that still allows viewers to understand the overall story of the source video. Also, the model has a flexible structure so that the duration of an abstract can be adjusted depending on the requirements given by viewers. We get clues for human understanding of a story from scenario writing rules and editorial techniques which are popularly used in the process of video producing. We have implemented the proposed model and successfully applied it to several TV dramas."
]
} |
1812.05718 | 2963538197 | Network models have been increasingly used in the past years to support summarization and analysis of narratives, such as famous TV series, books and news. Inspired by social network analysis, most of these models focus on the characters at play. The network model well captures all characters interactions, giving a broad picture of the narration’s content. A few works went beyond by introducing additional semantic elements, always captured in a single layer network. In contrast, we introduce in this work a multilayer network model to capture more elements of the narration of a movie from its script: people, locations, and other semantic elements. This model enables new measures and insights on movies. We demonstrate this model on two very popular movies. | Tan @cite_20 proposed an analysis of the character networks in two science fiction television series. These networks are constructed based on the scene co-occurrence between characters to indicate the presence of a connection. Global network topological measures such as the average path length, graph density, network diameter, and average degree are computed and found to be similar between the two series. Furthermore, various node centrality scores are computed and used to reflect on the interplay between the central characters and the overall narrative. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2002774075"
],
"abstract": [
"This work is an analysis of the character networks in two science fiction television series: Stargate and Star Trek. These networks are constructed on the basis of scene co-occurrence between characters to indicate the presence of a connection. Global network structure measures such as the average path length, graph density, network diameter, average degree, median degree, maximum degree, and average clustering coefficient are computed as well as individual node centrality scores. The two fictional networks constructed are found to be quite similar in structure which is astonishing given that Stargate only ran for 18 years in comparison to the 48 years for Star Trek."
]
} |
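The global measures listed above (path length, density, diameter, degree, clustering) are one-liners in networkx once a character co-occurrence graph exists; the tiny graph here is invented solely to show the calls and does not reproduce the series analysed by Tan.

```python
# Invented character co-occurrence graph and the global measures named above.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("Ann", "Ben"), ("Ann", "Cal"), ("Ben", "Cal"),
                  ("Cal", "Dee"), ("Dee", "Eve")])

print("density:", nx.density(G))
print("diameter:", nx.diameter(G))                       # requires connectivity
print("avg clustering:", nx.average_clustering(G))
print("avg path length:", nx.average_shortest_path_length(G))
print("avg degree:", sum(d for _, d in G.degree()) / G.number_of_nodes())
```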
1812.05718 | 2963538197 | Network models have been increasingly used in the past years to support summarization and analysis of narratives, such as famous TV series, books and news. Inspired by social network analysis, most of these models focus on the characters at play. The network model well captures all characters interactions, giving a broad picture of the narration’s content. A few works went beyond by introducing additional semantic elements, always captured in a single layer network. In contrast, we introduce in this work a multilayer network model to capture more elements of the narration of a movie from its script: people, locations, and other semantic elements. This model enables new measures and insights on movies. We demonstrate this model on two very popular movies. | Some studies have been conducted to apply social network analysis (SNA) for movie story analysis. RoleNet @cite_24 is an SNA-based approach that was proposed to analyze movie stories. It can automatically identify leading roles and corresponding communities by investigating the social interactions between characters using a weighted graph where nodes represent characters, and edges represent co-appearance relationships, i.e., two characters appearing in the same scene. Edge weight represents the number of co-appearances of two characters in the same scene. However, using only scenes to model the movie story is not enough: some scenes may be very long, and others are short. Using an additional source such as the dialog would be more adaptable than relying on this assumption. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2119190392"
],
"abstract": [
"With the idea of social network analysis, we propose a novel way to analyze movie videos from the perspective of social relationships rather than audiovisual features. To appropriately describe role's relationships in movies, we devise a method to quantify relations and construct role's social networks, called RoleNet. Based on RoleNet, we are able to perform semantic analysis that goes beyond conventional feature-based approaches. In this work, social relations between roles are used to be the context information of video scenes, and leading roles and the corresponding communities can be automatically determined. The results of community identification provide new alternatives in media management and browsing. Moreover, by describing video scenes with role's context, social-relation-based story segmentation method is developed to pave a new way for this widely-studied topic. Experimental results show the effectiveness of leading role determination and community identification. We also demonstrate that the social-based story segmentation approach works much better than the conventional tempo-based method. Finally, we give extensive discussions and state that the proposed ideas provide insights into context-based video analysis."
]
} |
1812.05718 | 2963538197 | Network models have been increasingly used in the past years to support summarization and analysis of narratives, such as famous TV series, books and news. Inspired by social network analysis, most of these models focus on the characters at play. The network model well captures all characters interactions, giving a broad picture of the narration’s content. A few works went beyond by introducing additional semantic elements, always captured in a single layer network. In contrast, we introduce in this work a multilayer network model to capture more elements of the narration of a movie from its script: people, locations, and other semantic elements. This model enables new measures and insights on movies. We demonstrate this model on two very popular movies. | Character-net @cite_6 is another interesting work that proposes a story-based movie analysis method via social network analysis. While RoleNet uses co-appearance as the relationship between characters, Character-net utilizes dialog. Edges are weighted by the quantity of dialogue exchanged by the characters. Once the weighted network is built, characters are classified according to their degree centrality value as major, minor, or extra roles. Finally, the classification result is used to detect the movie sequences through clique clustering, and major and minor role clustering. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2045807247"
],
"abstract": [
"There have been various approaches to analyzing movie stories using social networks. Social network analysis is an effective means to extract semantic information from movies. Movie analysis through social relationships among characters can support various types of information retrieval better than audio-visual feature analysis. The relationships among characters form the main structure of the story. Therefore, through social network analysis among characters, movie story information such as the major roles and the corresponding communities can be determined. Progression of most movie stories is done by characters, and the scriptwriter or director narrates the story and relationships among characters using character dialogs. A dialog has a direction and time that supplies information. Therefore, the dialog is better for constructing social networks of characters than the co-appearance. Additionally, through social networks using the dialog, we can extract accurate movie stories such as classification of major, minor or extra roles, community clustering, and sequence detection. To achieve this, we propose a Character-net that can represent the relationships between characters using dialogs, and a method that can extract the sequences via clustering communities composed of characters. Our experiments show that our proposed method can efficiently detect sequences."
]
} |
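The RoleNet/Character-net pipeline summarised in the last two paragraphs, weighting edges by co-appearance or exchanged dialogue and then binning characters by weighted degree, can be illustrated as follows. The scene list and the major/minor/extra thresholds are invented for the example and are not taken from either paper.

```python
# Invented scene list; edges count co-appearances and characters are binned
# into major/minor/extra roles by weighted degree (thresholds are assumptions).
import networkx as nx
from itertools import combinations

scenes = [
    {"Neo", "Trinity", "Morpheus"},
    {"Neo", "Morpheus"},
    {"Neo", "Smith"},
    {"Trinity", "Neo"},
    {"Oracle", "Neo"},
    {"Cypher", "Smith"},
]

G = nx.Graph()
for scene in scenes:
    for a, b in combinations(sorted(scene), 2):
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

strength = dict(G.degree(weight="weight"))     # weighted degree per character
top = max(strength.values())
for name, s in sorted(strength.items(), key=lambda kv: -kv[1]):
    role = "major" if s >= 0.5 * top else "minor" if s >= 0.2 * top else "extra"
    print(name, s, "->", role)
```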
1812.05637 | 2953048415 | Video action recognition, a critical problem in video understanding, has been gaining increasing attention. To identify actions induced by complex object-object interactions, we need to consider not only spatial relations among objects in a single frame, but also temporal relations among different or the same objects across multiple frames. However, existing approaches that model video representations and non-local features are either incapable of explicitly modeling relations at the object-object level or unable to handle streaming videos. In this paper, we propose a novel dynamic hidden graph module to model complex object-object interactions in videos, of which two instantiations are considered: a visual graph that captures appearance motion changes among objects and a location graph that captures relative spatiotemporal position changes among objects. Additionally, the proposed graph module allows us to process streaming videos, setting it apart from existing methods. Experimental results on benchmark datasets, Something-Something and ActivityNet, show the competitive performance of our method. | Similar to @cite_6 , we devise a dynamic graph module based on region proposals in video frames and utilize the graph structure to model relations between interactive objects across multiple frames. However, distinct from previous works, our work builds graphs in both spatial and temporal domains dynamically at each time step. We also add an explicit message-passing process to propagate interactions among objects. Our model can classify actions in an incremental manner, which endows our model with the ability to process partially observed videos, i.e., video streams. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2951656361"
],
"abstract": [
"How do humans recognize the action \"opening a book\" ? We argue that there are two important cues: modeling temporal shape dynamics and modeling functional relationships between humans and objects. In this paper, we propose to represent videos as space-time region graphs which capture these two important cues. Our graph nodes are defined by the object region proposals from different frames in a long range video. These nodes are connected by two types of relations: (i) similarity relations capturing the long range dependencies between correlated objects and (ii) spatial-temporal relations capturing the interactions between nearby objects. We perform reasoning on this graph representation via Graph Convolutional Networks. We achieve state-of-the-art results on both Charades and Something-Something datasets. Especially for Charades, we obtain a huge 4.4 gain when our model is applied in complex environments."
]
} |
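A single message-passing step of the kind referred to above can be written in a few lines of NumPy: build a similarity-based adjacency over object-proposal features, then let every node aggregate a weighted combination of the others. The shapes and the update rule are simplified assumptions, not the exact module proposed in the paper.

```python
# One simplified message-passing step over object-proposal features (NumPy).
import numpy as np

rng = np.random.default_rng(3)
N, D = 6, 16                          # 6 object proposals, 16-dim embeddings
H = rng.standard_normal((N, D))       # node features
W = rng.standard_normal((D, D)) / np.sqrt(D)   # "learnable" transform, fixed here

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

A = softmax(H @ H.T)                  # similarity-based, row-normalised adjacency
messages = A @ H                      # every node pools features from all others
H_next = np.maximum(0.0, messages @ W)         # ReLU(aggregate x transform)

print(A.shape, H_next.shape)          # (6, 6) (6, 16)
```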
1812.05645 | 2945773792 | Fourier methods have a long and proven track record as an excellent tool in data processing. We propose to integrate Fourier methods into complex recurrent neural network architectures and show accuracy improvements on prediction tasks as well as computational load reductions. We predict synthetic data drawn from synthetic-equations as well as real world power load data. | In machine learning, Fourier analysis is mostly associated with signal-processing heavy domains such as speech recognition @cite_1 , medical imaging @cite_21 and audio-processing @cite_25 @cite_12 . Recently, a comparison @cite_11 of the time versus frequency domain for audio event recognition with neural networks showed the discriminative gains of processing sound in the frequency domain. | {
"cite_N": [
"@cite_21",
"@cite_1",
"@cite_25",
"@cite_12",
"@cite_11"
],
"mid": [
"2962996460",
"2327501763",
"2059652044",
"2963493667",
"2963041956"
],
"abstract": [
"The task of MRI fingerprinting is to identify tissue parameters from complex-valued MRI signals. The prevalent approach is dictionary based, where a test MRI signal is compared to stored MRI signals with known tissue parameters and the most similar signals and tissue parameters retrieved. Such an approach does not scale with the number of parameters and is rather slow when the tissue parameter space is large. Our first novel contribution is to use deep learning as an efficient nonlinear inverse mapping approach. We generate synthetic (tissue, MRI) data from an MRI simulator, and use them to train a deep net to map the MRI signal to the tissue parameters directly. Our second novel contribution is to develop a complex-valued neural network with new cardioid activation functions. Our results demonstrate that complex-valued neural nets could be much more accurate than real-valued neural nets at complex-valued MRI fingerprinting.",
"We present Listen, Attend and Spell (LAS), a neural speech recognizer that transcribes speech utterances directly to characters without pronunciation models, HMMs or other components of traditional speech recognizers. In LAS, the neural network architecture subsumes the acoustic, pronunciation and language models making it not only an end-to-end trained system but an end-to-end model. In contrast to DNN-HMM, CTC and most other models, LAS makes no independence assumptions about the probability distribution of the output character sequences given the acoustic sequence. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits each character conditioned on all previous characters, and the entire acoustic sequence. On a Google voice search task, LAS achieves a WER of 14.1 without a dictionary or an external language model and 10.3 with language model rescoring over the top 32 beams. In comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0 on the same set.",
"Content-based music information retrieval tasks have traditionally been solved using engineered features and shallow processing architectures. In recent years, there has been increasing interest in using feature learning and deep architectures instead, thus reducing the required engineering effort and the need for prior knowledge. However, this new approach typically still relies on mid-level representations of music audio, e.g. spectrograms, instead of raw audio signals. In this paper, we investigate whether it is possible to apply feature learning directly to raw audio signals. We train convolutional neural networks using both approaches and compare their performance on an automatic tagging task. Although they do not outperform a spectrogram-based approach, the networks are able to autonomously discover frequency decompositions from raw audio, as well as phase-and translation-invariant feature representations.",
"This paper introduces a new large-scale music dataset, MusicNet, to serve as a source of supervision and evaluation of machine learning methods for music research. MusicNet consists of hundreds of freely-licensed classical music recordings by 10 composers, written for 11 instruments, together with instrument note annotations resulting in over 1 million temporal labels on 34 hours of chamber music performances under various studio and microphone conditions. @PARASPLIT The paper defines a multi-label classification task to predict notes in musical recordings, along with an evaluation protocol, and benchmarks several machine learning architectures for this task: i) learning from spectrogram features; ii) end-to-end learning with a neural net; iii) end-to-end learning with a convolutional neural net. These experiments show that end-to-end models trained for note prediction learn frequency selective filters as a low-level representation of audio.",
"Recognizing acoustic events is an intricate problem for a machine and an emerging field of research. Deep neural networks achieve convincing results and are currently the state-of-the-art approach for many tasks. One advantage is their implicit feature learning, opposite to an explicit feature extraction of the input signal. In this work, we analyzed whether more discriminative features can be learned from either the time-domain or the frequency-domain representation of the audio signal. For this purpose, we trained multiple deep networks with different architectures on the Freiburg-106 and ESC-10 datasets. Our results show that feature learning from the frequency domain is superior to the time domain. Moreover, additionally using convolution and pooling layers, to explore local structures of the audio signal, significantly improves the recognition performance and achieves state-of-the-art results."
]
} |
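The frequency-domain representation referred to above usually amounts to framing the waveform and keeping the rFFT magnitude per frame. The sketch below does this for a synthetic sine-plus-noise signal with arbitrary frame and hop sizes; it is not the preprocessing pipeline of any particular cited system.

```python
# Generic magnitude-spectrogram features from a synthetic waveform.
import numpy as np

fs = 16000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 440 * t) + 0.05 * np.random.randn(fs)

frame, hop = 512, 256
frames = np.stack([signal[i:i + frame] * np.hanning(frame)
                   for i in range(0, len(signal) - frame, hop)])
spectrogram = np.abs(np.fft.rfft(frames, axis=1))    # shape: (n_frames, 257)

print(spectrogram.shape)                    # features a classifier could consume
print(spectrogram.mean(axis=0).argmax())    # peak bin ~ 440 / (fs / frame) ~ 14
```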
1812.05645 | 2945773792 | Fourier methods have a long and proven track record as an excellent tool in data processing. We propose to integrate Fourier methods into complex recurrent neural network architectures and show accuracy improvements on prediction tasks as well as computational load reductions. We predict synthetic data drawn from synthetic-equations as well as real world power load data. | Within the deep learning community, the discrete Fourier transform has long been touted as a computationally efficient alternative to convolution, since convolution in time and space is equivalent to multiplication in the frequency domain. Such gains are especially relevant for convolutional neural networks in 2D @cite_0 @cite_36 @cite_27 @cite_9 and even more so in 3D @cite_16 . In addition, Fourier-based pooling as an alternative to max-pooling @cite_28 @cite_26 has been explored for CNNs. | {
"cite_N": [
"@cite_26",
"@cite_36",
"@cite_28",
"@cite_9",
"@cite_0",
"@cite_27",
"@cite_16"
],
"mid": [
"2896786907",
"1922123711",
"",
"2777685882",
"2613634265",
"2963367891",
"2767177281"
],
"abstract": [
"We propose a novel discrete Fourier transform-based pooling layer for convolutional neural networks. The DFT magnitude pooling replaces the traditional max average pooling layer between the convolution and fully-connected layers to retain translation invariance and shape preserving (aware of shape difference) properties based on the shift theorem of the Fourier transform. Thanks to the ability to handle image misalignment while keeping important structural information in the pooling stage, the DFT magnitude pooling improves the classification accuracy significantly. In addition, we propose the DFT+ method for ensemble networks using the middle convolution layer outputs. The proposed methods are extensively evaluated on various classification tasks using the ImageNet, CUB 2010-2011, MIT Indoors, Caltech 101, FMD and DTD datasets. The AlexNet, VGG-VD 16, Inception-v3, and ResNet are used as the base networks, upon which DFT and DFT+ methods are implemented. Experimental results show that the proposed methods improve the classification performance in all networks and datasets.",
"Convolutional networks are one of the most widely employed architectures in computer vision and machine learning. In order to leverage their ability to learn complex functions, large amounts of data are required for training. Training a large convolutional network to produce state-of-the-art results can take weeks, even when using modern GPUs. Producing labels using a trained network can also be costly when dealing with web-scale datasets. In this work, we present a simple algorithm which accelerates training and inference by a significant factor, and can yield improvements of over an order of magnitude compared to existing state-of-the-art implementations. This is done by computing convolutions as pointwise products in the Fourier domain while reusing the same transformed feature map many times. The algorithm is implemented on a GPU architecture and addresses a number of related challenges.",
"",
"The Fourier domain is used in computer vision and machine learning as image analysis tasks in the Fourier domain are analogous to spatial domain methods but are achieved using different operations. Convolutional Neural Networks (CNNs) use machine learning to achieve state-of-the-art results with respect to many computer vision tasks. One of the main limiting aspects of CNNs is the computational cost of updating a large number of convolution parameters. Further, in the spatial domain, larger images take exponentially longer than smaller image to train on CNNs due to the operations involved in convolution methods. Consequently, CNNs are often not a viable solution for large image computer vision tasks. In this paper a Fourier Convolution Neural Network (FCNN) is proposed whereby training is conducted entirely within the Fourier domain. The advantage offered is that there is a significant speed up in training time without loss of effectiveness. Using the proposed approach larger images can therefore be processed within viable computation time. The FCNN is fully described and evaluated. The evaluation was conducted using the benchmark Cifar10 and MNIST datasets, and a bespoke fundus retina image dataset. The results demonstrate that convolution in the Fourier domain gives a significant speed up without adversely affecting accuracy. For simplicity the proposed FCNN concept is presented in the context of a basic CNN architecture, however, the FCNN concept has the potential to improve the speed of any neural network system involving convolution.",
"One long-term goal of machine learning research is to produce methods that are applicable to highly complex tasks, such as perception (vision, audition), reasoning, intelligent control, and other artificially intelligent behaviors. We argue that in order to progress toward this goal, the Machine Learning community must endeavor to discover algorithms that can learn highly complex functions, with minimal need for prior knowledge, and with minimal human intervention. We present mathematical and empirical evidence suggesting that many popular approaches to non-parametric learning, particularly kernel methods, are fundamentally limited in their ability to learn complex high-dimensional functions. Our analysis focuses on two problems. First, kernel machines are shallow architectures, in which one large layer of simple template matchers is followed by a single layer of trainable coefficients. We argue that shallow architectures can be very inefficient in terms of required number of computational elements and examples. Second, we analyze a limitation of kernel machines with a local kernel, linked to the curse of dimensionality, that applies to supervised, unsupervised (manifold learning) and semi-supervised kernel machines. Using empirical results on invariant image recognition tasks, kernel methods are compared with deep architectures, in which lower-level features or concepts are progressively combined into more abstract and higher-level representations. We argue that deep architectures have the potential to generalize in non-local ways, i.e., beyond immediate neighbors, and that this is crucial in order to make progress on the kind of complex tasks required for artificial intelligence.",
"Abstract: We examine the performance profile of Convolutional Neural Network training on the current generation of NVIDIA Graphics Processing Units. We introduce two new Fast Fourier Transform convolution implementations: one based on NVIDIA's cuFFT library, and another based on a Facebook authored FFT implementation, fbfft, that provides significant speedups over cuFFT (over 1.5x) for whole CNNs. Both of these convolution implementations are available in open source, and are faster than NVIDIA's cuDNN implementation for many common convolutional layers (up to 23.5x for some synthetic kernel configurations). We discuss different performance regimes of convolutions, comparing areas where straightforward time domain convolutions outperform Fourier frequency domain convolutions. Details on algorithmic applications of NVIDIA GPU hardware specifics in the implementation of fbfft are also provided.",
"Three-dimensional convolution neural networks (3D CNN) have achieved great success in many computer vision applications, such as video analysis, medical image classification, and human action recognition. However, the efficiency of this model suffers from great computational intensity. In this work, we reduce the algorithmic complexity of 3D CNN to accelerate this model with Winograd’s minimal algorithm. We benchmark a net model on GPU platform, resulting in a speed-up by a factor of 1.2 ( ) compared with cuDNN, which is commonly used in many current machine learning frameworks."
]
} |
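The identity that makes Fourier-domain convolution attractive, circular convolution in the signal domain equals element-wise multiplication of DFTs, is easy to verify numerically, as the check below shows. (CNN libraries additionally pad inputs to turn circular into linear convolution; that detail is omitted here.)

```python
# Numerical check: circular convolution == inverse FFT of the product of FFTs.
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(64)
k = rng.standard_normal(64)

direct = np.array([sum(x[(n - m) % 64] * k[m] for m in range(64))
                   for n in range(64)])
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)).real

print(np.allclose(direct, via_fft))   # True
```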
1812.05645 | 2945773792 | Fourier methods have a long and proven track record as an excellent tool in data processing. We propose to integrate Fourier methods into complex recurrent neural network architectures and show accuracy improvements on prediction tasks as well as computational load reductions. We predict synthetic data drawn from synthetic-equations as well as real world power load data. | Generally, in both audio processing and speech recognition, only the magnitude of a frequency domain signal is processed, while the phase gets discarded @cite_1 @cite_12 . One likely reason for this is the fact that many machine learning methods and toolboxes are not designed to natively handle complex data. However, recent works in complex-valued networks @cite_29 @cite_5 @cite_6 @cite_3 now make it possible to process data fully in the frequency domain. The complex CNN presented in @cite_6 explores complex convolutions and applies them to the Fourier spectrum of music data, whereas in @cite_3 , a complex gated RNN was applied to similar data. | {
"cite_N": [
"@cite_29",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_5",
"@cite_12"
],
"mid": [
"2963042606",
"2327501763",
"2962913459",
"",
"",
"2963493667"
],
"abstract": [
"Recurrent neural networks (RNNs) are notoriously difficult to train. When the eigenvalues of the hidden to hidden weight matrix deviate from absolute value 1, optimization becomes difficult due to the well studied issue of vanishing and exploding gradients, especially when trying to learn long-term dependencies. To circumvent this problem, we propose a new architecture that learns a unitary weight matrix, with eigenvalues of absolute value exactly 1. The challenge we address is that of parametrizing unitary matrices in a way that does not require expensive computations (such as eigendecomposition) after each weight update. We construct an expressive unitary weight matrix by composing several structured matrices that act as building blocks with parameters to be learned. Optimization with this parameterization becomes feasible only when considering hidden states in the complex domain. We demonstrate the potential of this architecture by achieving state of the art results in several hard tasks involving very longterm dependencies.",
"We present Listen, Attend and Spell (LAS), a neural speech recognizer that transcribes speech utterances directly to characters without pronunciation models, HMMs or other components of traditional speech recognizers. In LAS, the neural network architecture subsumes the acoustic, pronunciation and language models making it not only an end-to-end trained system but an end-to-end model. In contrast to DNN-HMM, CTC and most other models, LAS makes no independence assumptions about the probability distribution of the output character sequences given the acoustic sequence. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits each character conditioned on all previous characters, and the entire acoustic sequence. On a Google voice search task, LAS achieves a WER of 14.1 without a dictionary or an external language model and 10.3 with language model rescoring over the top 32 beams. In comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0 on the same set.",
"At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks. More precisely, we rely on complex convolutions and present algorithms for complex batch-normalization, complex weight initialization strategies for complex-valued neural nets and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their real-valued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset and on Speech spectrum prediction using TIMIT. We achieve state-of-the-art performance on these audio-related tasks.",
"",
"",
"This paper introduces a new large-scale music dataset, MusicNet, to serve as a source of supervision and evaluation of machine learning methods for music research. MusicNet consists of hundreds of freely-licensed classical music recordings by 10 composers, written for 11 instruments, together with instrument note annotations resulting in over 1 million temporal labels on 34 hours of chamber music performances under various studio and microphone conditions. @PARASPLIT The paper defines a multi-label classification task to predict notes in musical recordings, along with an evaluation protocol, and benchmarks several machine learning architectures for this task: i) learning from spectrogram features; ii) end-to-end learning with a neural net; iii) end-to-end learning with a convolutional neural net. These experiments show that end-to-end models trained for note prediction learn frequency selective filters as a low-level representation of audio."
]
} |
1907.01949 | 2954982059 | The accurate estimation of predictive uncertainty carries importance in medical scenarios such as lung nodule segmentation. Unfortunately, most existing works on predictive uncertainty do not return calibrated uncertainty estimates, which could be used in practice. In this work we exploit multi-grader annotation variability as a source of 'groundtruth' aleatoric uncertainty, which can be treated as a target in a supervised learning problem. We combine this groundtruth uncertainty with a Probabilistic U-Net and test on the LIDC-IDRI lung nodule CT dataset and MICCAI2012 prostate MRI dataset. We find that we are able to improve predictive uncertainty estimates. We also find that we can improve sample accuracy and sample diversity. | The predictive uncertainty can be decomposed into two parts. By the law of total variance, we can write predictive variances as a sum of these two independent components: where we have used the notation @math and @math for the expectation and variance operators. We have labeled the two right-hand terms as aleatoric and epistemic uncertainty. The aleatoric term measures the average of the output variance @math , under all settings of the variables @math . If @math were a delta peak, we would expect this term not to vanish and thus it is associated with aleatoric (data) uncertainty @cite_4 . The epistemic term measures fluctuations in the mean prediction. These fluctuations exist because of uncertainty in the approximate posterior @math . If @math were a delta peak, then this term would vanish to zero, and thus we associate it with epistemic (model) uncertainty @cite_4 @cite_17 . | {
"cite_N": [
"@cite_4",
"@cite_17"
],
"mid": [
"2610571781",
"2600383743"
],
"abstract": [
"In this work, we investigate the value of uncertainty modelling in 3D super-resolution with convolutional neural networks (CNNs). Deep learning has shown success in a plethora of medical image transformation problems, such as super-resolution (SR) and image synthesis. However, the highly ill-posed nature of such problems results in inevitable ambiguity in the learning of networks. We propose to account for intrinsic uncertainty through a per-patch heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference in the form of variational dropout. We show that the combined benefits of both lead to the state-of-the-art performance SR of diffusion MR brain images in terms of errors compared to ground truth. We further show that the reduced error scores produce tangible benefits in downstream tractography. In addition, the probabilistic nature of the methods naturally confers a mechanism to quantify uncertainty over the super-resolved output. We demonstrate through experiments on both healthy and pathological brains the potential utility of such an uncertainty measure in the risk assessment of the super-resolved images for subsequent clinical use.",
"There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model - uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks."
]
} |
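The display equation elided from the related-work field of the record above is, in its standard law-of-total-variance form, a split of the predictive variance; this is a sketch assuming the usual notation (with q(theta) the approximate posterior over model parameters), not copied from the paper:
\[
% Predictive variance split into an aleatoric part (mean of the output variance)
% and an epistemic part (variance of the mean prediction).
\operatorname{Var}[y \mid x] \;=\;
\underbrace{\mathbb{E}_{q(\theta)}\!\big[\operatorname{Var}[y \mid x, \theta]\big]}_{\text{aleatoric}}
\;+\;
\underbrace{\operatorname{Var}_{q(\theta)}\!\big[\mathbb{E}[y \mid x, \theta]\big]}_{\text{epistemic}}
\]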
1907.01949 | 2954982059 | The accurate estimation of predictive uncertainty carries importance in medical scenarios such as lung nodule segmentation. Unfortunately, most existing works on predictive uncertainty do not return calibrated uncertainty estimates, which could be used in practice. In this work we exploit multi-grader annotation variability as a source of 'groundtruth' aleatoric uncertainty, which can be treated as a target in a supervised learning problem. We combine this groundtruth uncertainty with a Probabilistic U-Net and test on the LIDC-IDRI lung nodule CT dataset and MICCAI2012 prostate MRI dataset. We find that we are able to improve predictive uncertainty estimates. We also find that we can improve sample accuracy and sample diversity. | Current techniques for estimating aleatoric and epistemic uncertainty follow a similar line. In Tanno et al. @cite_4 the authors treat MRI super-resolution as a regression problem. They build a CNN directly outputting @math and @math . They model epistemic uncertainty using variational dropout @cite_7 . Bragman et al. @cite_6 build on this technique, applying it to radiotherapy-treatment planning and multi-task learning. Concurrently with @cite_4 , Kendall and Gal proposed a similar method using Monte Carlo (MC) dropout instead of variational dropout @cite_12 . They also proposed a method which would work for classification, where they predict a mean and variance in the logit-space just before a sigmoid. Jungo et al. @cite_19 estimate epistemic uncertainty in the context of postoperative brain tumor cavity segmentation using MC dropout @cite_12 . In @cite_18 Ayhan and Berens treat the data augmentation process as part of the approximate posterior @math . They claim this is aleatoric uncertainty, but from their method it appears they really compute epistemic uncertainty. None of these works quantitatively evaluates the quality of the epistemic and aleatoric uncertainties. In this work, we show that the aleatoric uncertainty can indeed be measured. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_6",
"@cite_19",
"@cite_12"
],
"mid": [
"2910489404",
"2610571781",
"1826234144",
"",
"2806861199",
"2964059111"
],
"abstract": [
"",
"In this work, we investigate the value of uncertainty modelling in 3D super-resolution with convolutional neural networks (CNNs). Deep learning has shown success in a plethora of medical image transformation problems, such as super-resolution (SR) and image synthesis. However, the highly ill-posed nature of such problems results in inevitable ambiguity in the learning of networks. We propose to account for intrinsic uncertainty through a per-patch heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference in the form of variational dropout. We show that the combined benefits of both lead to the state-of-the-art performance SR of diffusion MR brain images in terms of errors compared to ground truth. We further show that the reduced error scores produce tangible benefits in downstream tractography. In addition, the probabilistic nature of the methods naturally confers a mechanism to quantify uncertainty over the super-resolved output. We demonstrate through experiments on both healthy and pathological brains the potential utility of such an uncertainty measure in the risk assessment of the super-resolved images for subsequent clinical use.",
"We investigate a local reparameterizaton technique for greatly reducing the variance of stochastic gradients for variational Bayesian inference (SGVB) of a posterior over model parameters, while retaining parallelizability. This local reparameterization translates uncertainty about global parameters into local noise that is independent across datapoints in the minibatch. Such parameterizations can be trivially parallelized and have variance that is inversely proportional to the mini-batch size, generally leading to much faster convergence. Additionally, we explore a connection with dropout: Gaussian dropout objectives correspond to SGVB with local reparameterization, a scale-invariant prior and proportionally fixed posterior variance. Our method allows inference of more flexibly parameterized posteriors; specifically, we propose variational dropout, a generalization of Gaussian dropout where the dropout rates are learned, often leading to better models. The method is demonstrated through several experiments.",
"",
"Uncertainty estimates of modern neuronal networks provide additional information next to the computed predictions and are thus expected to improve the understanding of the underlying model. Reliable uncertainties are particularly interesting for safety-critical computer-assisted applications in medicine, e.g., neurosurgical interventions and radiotherapy planning. We propose an uncertainty-driven sanity check for the identification of segmentation results that need particular expert review. Our method uses a fully-convolutional neural network and computes uncertainty estimates by the principle of Monte Carlo dropout. We evaluate the performance of the proposed method on a clinical dataset with 30 postoperative brain tumor images. The method can segment the highly inhomogeneous resection cavities accurately (Dice coefficients 0.792 @math 0.154). Furthermore, the proposed sanity check is able to detect the worst segmentation and three out of the four outliers. The results highlight the potential of using the additional information from the model's parameter uncertainty to validate the segmentation performance of a deep learning model.",
"Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs - extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and nonlinearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning."
]
} |
1907.01949 | 2954982059 | The accurate estimation of predictive uncertainty carries importance in medical scenarios such as lung nodule segmentation. Unfortunately, most existing works on predictive uncertainty do not return calibrated uncertainty estimates, which could be used in practice. In this work we exploit multi-grader annotation variability as a source of 'groundtruth' aleatoric uncertainty, which can be treated as a target in a supervised learning problem. We combine this groundtruth uncertainty with a Probabilistic U-Net and test on the LIDC-IDRI lung nodule CT dataset and MICCAI2012 prostate MRI dataset. We find that we are able to improve predictive uncertainty estimates. We also find that we can improve sample accuracy and sample diversity. | In the Probabilistic U-Net @cite_1 , the approximate posterior distribution is given the form @math , where we have set @math . The hidden variables are thus activations @math dependent on the training data. A (conditional) prior over @math is given by a @math . To train this setup, the authors employ a variant of the ELBO with a @math -weight on the KL-penalty. Again, @math represents the variational parameters to be optimized. Since at test time we do not have access to @math , we use the prior network and Monte Carlo sample in @math . The specific form of the likelihood @math can be found in the original paper @cite_1 . This method is known to produce very diverse samples, from which we could estimate aleatoric uncertainty. In this paper, we endow the Probabilistic U-Net with a mechanism to estimate epistemic uncertainty and extend this method yet further, such that the aleatoric uncertainty estimates are automatically calibrated to the training set. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2963632741"
],
"abstract": [
"Many real-world vision problems suffer from inherent ambiguities. In clinical applications for example, it might not be clear from a CT scan alone which particular region is cancer tissue. Therefore a group of graders typically produces a set of diverse but plausible segmentations. We consider the task of learning a distribution over segmentations given an input. To this end we propose a generative segmentation model based on a combination of a U-Net with a conditional variational autoencoder that is capable of efficiently producing an unlimited number of plausible hypotheses. We show on a lung abnormalities segmentation task and on a Cityscapes segmentation task that our model reproduces the possible segmentation variants as well as the frequencies with which they occur, doing so significantly better than published approaches. These models could have a high impact in real-world applications, such as being used as clinical decision-making algorithms accounting for multiple plausible semantic segmentation hypotheses to provide possible diagnoses and recommend further actions to resolve the present ambiguities."
]
} |
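The training objective described above as "a variant of the ELBO with a @math -weight on the KL-penalty" is elided in this record; the following is a hedged sketch of its standard form for the Probabilistic U-Net (cf. @cite_1 in the record above), with psi the variational parameters, q the posterior network and p the conditional prior network:
\[
% beta-weighted evidence lower bound; beta > 0 rescales the KL penalty.
\mathcal{L}(\psi) \;=\;
\mathbb{E}_{z \sim q_{\psi}(z \mid x, y)}\!\big[\log p_{\psi}(y \mid x, z)\big]
\;-\; \beta \, \mathrm{KL}\!\big(q_{\psi}(z \mid x, y) \,\big\|\, p_{\psi}(z \mid x)\big)
\]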
1907.01988 | 2954931147 | We investigate trade-offs in static and dynamic evaluation of hierarchical queries with arbitrary free variables. In the static setting, the trade-off is between the time to partially compute the query result and the delay needed to enumerate its tuples. In the dynamic setting, we additionally consider the time needed to update the query result in the presence of single-tuple inserts and deletes to the input database. Our approach observes the degree of values in the database and uses different computation and maintenance strategies for high-degree and low-degree values. For the latter it partially computes the result, while for the former it computes enough information to allow for on-the-fly enumeration. The main result of this work defines the preprocessing time, the update time, and the enumeration delay as functions of the light heavy threshold and of the factorization width of the hierarchical query. By conveniently choosing this threshold, our approach can recover a number of prior results when restricted to hierarchical queries. | Static Evaluation. Prior seminal work exhibits a dependency between the space and enumeration delay for conjunctive queries with access patterns @cite_8 . It constructs a succinct representation of the query result that allows for enumeration of tuples over some variables under value bindings for all other variables. It does not support enumeration for queries with free variables, as addressed in our work. Example is stated as an open problem in their work. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2757542261"
],
"abstract": [
"Relational queries, and in particular join queries, often generate large output results when executed over a huge dataset. In such cases, it is often infeasible to store the whole materialized output if we plan to reuse it further down a data processing pipeline. Motivated by this problem, we study the construction of space-efficient compressed representations of the output of conjunctive queries, with the goal of supporting the efficient access of the intermediate compressed result for a given access pattern. In particular, we initiate the study of an important tradeoff: minimizing the space necessary to store the compressed result, versus minimizing the answer time and delay for an access request over the result. Our main contribution is a novel parameterized data structure, which can be tuned to trade off space for answer time. The tradeoff allows us to control the space requirement of the data structure precisely, and depends both on the structure of the query and the access pattern. We show how we can use the data structure in conjunction with query decomposition techniques, in order to efficiently represent the outputs for several classes of conjunctive queries."
]
} |
1907.01988 | 2954931147 | We investigate trade-offs in static and dynamic evaluation of hierarchical queries with arbitrary free variables. In the static setting, the trade-off is between the time to partially compute the query result and the delay needed to enumerate its tuples. In the dynamic setting, we additionally consider the time needed to update the query result in the presence of single-tuple inserts and deletes to the input database. Our approach observes the degree of values in the database and uses different computation and maintenance strategies for high-degree and low-degree values. For the latter it partially computes the result, while for the former it computes enough information to allow for on-the-fly enumeration. The main result of this work defines the preprocessing time, the update time, and the enumeration delay as functions of the light heavy threshold and of the factorization width of the hierarchical query. By conveniently choosing this threshold, our approach can recover a number of prior results when restricted to hierarchical queries. | Acyclicity itself is necessary for having constant delay enumeration: A conjunctive query admits constant delay enumeration after linear-time preprocessing if and only if it is free-connex acyclic @cite_24 . This is based on a stronger hypothesis that the existence of a triangle in a hypergraph of @math vertices cannot be tested in time @math and that for any @math , the presence of a @math -dimensional tetrahedron cannot be tested in linear time. An in-depth pre-2015 overview on constant-delay enumeration is provided by Segoufin @cite_29 . | {
"cite_N": [
"@cite_24",
"@cite_29"
],
"mid": [
"2624791742",
"2234276446"
],
"abstract": [
"Beyond deciding satisfiability problems, we are interested in the exhaustive generation of their solutions, that is, enumeration. We first question the relevance of the enumeration problem in the very classical setting of propositional logic. The dichotomy of Creignou and Hebrard already proves the equivalence between the polynomial classes for non-trivial decision and those for enumeration. We give optimal enumeration algorithms for each of these classes, which generalize any non-trivial decision algorithm, suggesting that enumeration is the relevant problem in this setting. Next, we complete and simplify dichotomy results that establish a close link between the tractability of a conjunctive query and a notion of hypergraph acyclicity. We then prove, by means of a new algorithm, similar results for the dual class of conjunctive queries. Finally, by generalizing the classical combinatorial result of Brouwer and Kolen, we unify all of these results in the form of a dichotomy for the enumeration of so-called signed conjunctive queries, which establishes a strong link between the tractability of enumeration and the tractability of decision.",
"We survey some of the recent results about enumerating the answers to queries over a database. We focus on the case where the enumeration is performed with a constant delay between any two consecutive solutions, after a linear time preprocessing. This cannot be always achieved. It requires restricting either the class of queries or the class of databases. We consider conjunctive queries and describe several scenarios when this is possible."
]
} |
1907.01988 | 2954931147 | We investigate trade-offs in static and dynamic evaluation of hierarchical queries with arbitrary free variables. In the static setting, the trade-off is between the time to partially compute the query result and the delay needed to enumerate its tuples. In the dynamic setting, we additionally consider the time needed to update the query result in the presence of single-tuple inserts and deletes to the input database. Our approach observes the degree of values in the database and uses different computation and maintenance strategies for high-degree and low-degree values. For the latter it partially computes the result, while for the former it computes enough information to allow for on-the-fly enumeration. The main result of this work defines the preprocessing time, the update time, and the enumeration delay as functions of the light heavy threshold and of the factorization width of the hierarchical query. By conveniently choosing this threshold, our approach can recover a number of prior results when restricted to hierarchical queries. | Our approach generalizes the original union algorithm for enumerating distinct tuples in the union of two relations (Proposition 8 @cite_31 ) to the enumeration from the union of factorized data structures. Such structures are trees whose inner nodes are Cartesian products and possibly overlapping unions and whose leaves are generalized multiset relations (sets of tuples with multiplicities). Our approach also needs to compute the multiplicities of the distinct tuples in the factorized data structures. Prior works (Theorem 4 in @cite_20 and Lemma 4.5 in @cite_1 ) used the union algorithm for enumerating distinct values in a union of sets with delay linear in the number of sets. | {
"cite_N": [
"@cite_31",
"@cite_1",
"@cite_20"
],
"mid": [
"2240520130",
"2758730793",
""
],
"abstract": [
"We consider query problems defined by first order formulas of the form F(x,T) with free first order and second order variables and study the data complexity of enumerating results of such queries. By considering the number of alternations in the quantifier prefixes of formulas, we show that such query problems either admit a constant delay or a polynomial delay enumeration algorithm or are hard to enumerate. We also exhibit syntactically defined fragments inside the hard cases that still admit good enumeration algorithms and discuss the case of some restricted classes of database structures as inputs.",
"We investigate the query evaluation problem for fixed queries over fully dynamic databases where tuples can be inserted or deleted. The task is to design a dynamic data structure that can immediately report the new result of a fixed query after every database update. We consider unions of conjunctive queries (UCQs) and focus on the query evaluation tasks testing (decide whether an input tuple belongs to the query result), enumeration (enumerate, without repetition, all tuples in the query result), and counting (output the number of tuples in the query result). We identify three increasingly restrictive classes of UCQs which we call t-hierarchical, q-hierarchical, and exhaustively q-hierarchical UCQs. Our main results provide the following dichotomies: If the query's homomorphic core is t-hierarchical (q-hierarchical, exhaustively q-hierarchical), then the testing (enumeration, counting) problem can be solved with constant update time and constant testing time (delay, counting time). Otherwise, it cannot be solved with sublinear update time and sublinear testing time (delay, counting time), unless the OV-conjecture and or the OMv-conjecture fails. We also study the complexity of query evaluation in the dynamic setting in the presence of integrity constraints, and we obtain according dichotomy results for the special case of small domain constraints (i.e., constraints which state that all values in a particular column of a relation belong to a fixed domain of constant size).",
""
]
} |
1907.01988 | 2954931147 | We investigate trade-offs in static and dynamic evaluation of hierarchical queries with arbitrary free variables. In the static setting, the trade-off is between the time to partially compute the query result and the delay needed to enumerate its tuples. In the dynamic setting, we additionally consider the time needed to update the query result in the presence of single-tuple inserts and deletes to the input database. Our approach observes the degree of values in the database and uses different computation and maintenance strategies for high-degree and low-degree values. For the latter it partially computes the result, while for the former it computes enough information to allow for on-the-fly enumeration. The main result of this work defines the preprocessing time, the update time, and the enumeration delay as functions of the light heavy threshold and of the factorization width of the hierarchical query. By conveniently choosing this threshold, our approach can recover a number of prior results when restricted to hierarchical queries. | Dynamic evaluation. The @math -hierarchical queries are the conjunctive queries that admit linear-time preprocessing and constant-time update and delay @cite_14 @cite_21 . If a conjunctive query without repeating relation symbols is not @math -hierarchical, there is no @math such that the result of the query can be enumerated with arbitrary preprocessing time, and @math delay and update time, unless the Online Matrix Vector Multiplication conjecture fails. Likewise, if a conjunctive query is not @math -hierarchical, the size of its result cannot be maintained with arbitrary preprocessing time and @math update time for any @math , unless the Online Matrix Vector Multiplication conjecture and the Orthogonal Vectors conjecture fail. The latter lower bound holds even when we allow the update time to be amortized. The upper bound complexities for maintaining @math -hierarchical queries are carried over to unions of @math -hierarchical queries @cite_1 , @math -hierarchical queries with small domain constraints, and first-order queries with modulo-counting quantifiers on bounded degree databases @cite_26 . Similar lower bounds conditioned on the Online Matrix Vector Multiplication and the Orthogonal Vectors conjectures hold for unions of @math -hierarchical queries and @math -hierarchical queries with small domain constraints. | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_21",
"@cite_26"
],
"mid": [
"2592257482",
"2758730793",
"",
"2953047175"
],
"abstract": [
"We consider the task of enumerating and counting answers to k-ary conjunctive queries against relational databases that may be updated by inserting or deleting tuples. We exhibit a new notion of q-hierarchical conjunctive queries and show that these can be maintained efficiently in the following sense. During a linear time pre-processing phase, we can build a data structure that enables constant delay enumeration of the query results; and when the database is updated, we can update the data structure and restart the enumeration phase within constant time. For the special case of self-join free conjunctive queries we obtain a dichotomy: if a query is not q-hierarchical, then query enumeration with sublinear *) delay and sublinear update time (and arbitrary preprocessing time) is impossible. For answering Boolean conjunctive queries and for the more general problem of counting the number of solutions of k-ary queries we obtain complete dichotomies: if the query's homomorphic core is q-hierarchical, then the size of the query result can be computed in linear time and maintained with constant update time. Otherwise, the size of the query result cannot be maintained with sublinear update time. All our lower bounds rely on the OMv-conjecture, a conjecture on the hardness of online matrix-vector multiplication that has recently emerged in the field of fine-grained complexity to characterise the hardness of dynamic problems. The lower bound for the counting problem additionally relies on the orthogonal vectors conjecture, which in turn is implied by the strong exponential time hypothesis. *) By sublinear we mean O(n^(1-e)) for some e > 0, where n is the size of the active domain of the current database.",
"We investigate the query evaluation problem for fixed queries over fully dynamic databases where tuples can be inserted or deleted. The task is to design a dynamic data structure that can immediately report the new result of a fixed query after every database update. We consider unions of conjunctive queries (UCQs) and focus on the query evaluation tasks testing (decide whether an input tuple belongs to the query result), enumeration (enumerate, without repetition, all tuples in the query result), and counting (output the number of tuples in the query result). We identify three increasingly restrictive classes of UCQs which we call t-hierarchical, q-hierarchical, and exhaustively q-hierarchical UCQs. Our main results provide the following dichotomies: If the query's homomorphic core is t-hierarchical (q-hierarchical, exhaustively q-hierarchical), then the testing (enumeration, counting) problem can be solved with constant update time and constant testing time (delay, counting time). Otherwise, it cannot be solved with sublinear update time and sublinear testing time (delay, counting time), unless the OV-conjecture and or the OMv-conjecture fails. We also study the complexity of query evaluation in the dynamic setting in the presence of integrity constraints, and we obtain according dichotomy results for the special case of small domain constraints (i.e., constraints which state that all values in a particular column of a relation belong to a fixed domain of constant size).",
"",
"We investigate the query evaluation problem for fixed queries over fully dynamic databases, where tuples can be inserted or deleted. The task is to design a dynamic algorithm that immediately reports the new result of a fixed query after every database update. We consider queries in first-order logic (FO) and its extension with modulo-counting quantifiers (FO+MOD), and show that they can be efficiently evaluated under updates, provided that the dynamic database does not exceed a certain degree bound. In particular, we construct a data structure that allows to answer a Boolean FO+MOD query and to compute the size of the result of a non-Boolean query within constant time after every database update. Furthermore, after every update we are able to immediately enumerate the new query result with constant delay between the output tuples. The time needed to build the data structure is linear in the size of the database. Our results extend earlier work on the evaluation of first-order queries on static databases of bounded degree and rely on an effective Hanf normal form for FO+MOD recently obtained by Heimberg, Kuske, and Schweikardt (LICS 2016)."
]
} |
1907.01988 | 2954931147 | We investigate trade-offs in static and dynamic evaluation of hierarchical queries with arbitrary free variables. In the static setting, the trade-off is between the time to partially compute the query result and the delay needed to enumerate its tuples. In the dynamic setting, we additionally consider the time needed to update the query result in the presence of single-tuple inserts and deletes to the input database. Our approach observes the degree of values in the database and uses different computation and maintenance strategies for high-degree and low-degree values. For the latter it partially computes the result, while for the former it computes enough information to allow for on-the-fly enumeration. The main result of this work defines the preprocessing time, the update time, and the enumeration delay as functions of the light heavy threshold and of the factorization width of the hierarchical query. By conveniently choosing this threshold, our approach can recover a number of prior results when restricted to hierarchical queries. | The work closest in spirit to ours characterizes the dynamic space for counting triangles @cite_35 . Our approach furthers the adaptive maintenance techniques presented in that work. First, since our approach considers a class of queries and not only a single query, it employs a less trivial light heavy partitioning scheme, where the same relation may be subject to partition on different tuples of variables and where the overall number of cases is reduced by considering the all-light case and the at-least-one-heavy case whenever a partition is made. Second, it uses view trees to capture the query evaluation and maintenance strategy, which is reminiscent of F-IVM @cite_23 . Third, a key challenge in our work is to achieve sublinear delay for the enumeration problem. The triangle counting query from prior work has one result tuple (a scalar) and trivial enumeration. | {
"cite_N": [
"@cite_35",
"@cite_23"
],
"mid": [
"2795570227",
"2798659310"
],
"abstract": [
"We consider the problem of incrementally maintaining the triangle count query under single- tuple updates to the input relations. We introduce an approach that exhibits a space-time tradeoff such that the space-time product is quadratic in the size of the input database and the update time can be as low as the square root of this size. This lowest update time is worst-case optimal conditioned on the Online Matrix-Vector Multiplication conjecture. The classical and factorized incremental view maintenance approaches are recovered as special cases of our approach within the space-time tradeoff. In particular, they require linear- time update maintenance, which is suboptimal. Our approach also recovers the worst-case optimal time complexity for computing the triangle count in the non-incremental setting.",
"We introduce F-IVM, a unified incremental view maintenance (IVM) approach for a variety of tasks, including gradient computation for learning linear regression models over joins, matrix chain multiplication, and factorized evaluation of conjunctive queries. F-IVM is a higher-order IVM algorithm that reduces the maintenance of the given task to the maintenance of a hierarchy of increasingly simpler views. The views are functions mapping keys, which are tuples of input data values, to payloads, which are elements from a task-specific ring. Whereas the computation over the keys is the same for all tasks, the computation over the payloads depends on the task. F-IVM achieves efficiency by factorizing the computation of the keys, payloads, and updates. We implemented F-IVM as an extension of DBToaster. We show in a range of scenarios that it can outperform classical first-order IVM, DBToaster's fully recursive higher-order IVM, and plain recomputation by orders of magnitude while using less memory."
]
} |
1907.01839 | 2953591274 | Depth cameras, typically in RGB-D configurations, are common devices in mobile robotic platforms given their appealing features: high frequency and resolution, low price and power requirements, among others. These sensors may come with significant, non-linear errors in the depth measurements that jeopardize robot tasks, like free-space detection, environment reconstruction or visual robot-human interaction. This paper presents a method to calibrate such systematic errors with the help of a second, more precise range sensor, in our case a radial laser scanner. In contrast to what it may seem at first, this does not mean a serious limitation in practice since these two sensors are often mounted jointly in many mobile robotic platforms, as they complement well each other. Moreover, the laser scanner can be used just for the calibration process and get rid of it after that. The main contributions of the paper are: i) the calibration is formulated from a probabilistic perspective through a Maximum Likelihood Estimation problem, and ii) the proposed method can be easily executed automatically by mobile robotic platforms. To validate the proposed approach we evaluated for both, local distortion of 3D planar reconstructions and global shifts in the measurements, obtaining considerably more accurate results. A C++ open-source implementation of the presented method has been released for the benefit of the community. | Early works in depth error calibration aimed to calibrate distortions along with the extrinsic parameters with respect to an RGB camera. For example, the authors in @cite_13 considered the calibration of an RGB-D camera pair resorting to a linear depth distortion function, while Herrera et al. @cite_3 tackled the calibration of two colour cameras and a depth one. In the latter case the disparity distortion was modelled as a per-pixel offset with exponential decay governed by two global parameters. Both approaches employ planar surfaces for depth compensation, a tendency that still holds in recent works. An example of this is the work by Basso et al. @cite_17 , which proposed a calibration method based on the observation of a planar pattern with a regular camera, while the extrinsic calibration is more a "side effect". | {
"cite_N": [
"@cite_13",
"@cite_3",
"@cite_17"
],
"mid": [
"2097107088",
"2137504719",
"2582728348"
],
"abstract": [
"Commodity depth cameras have created many interesting new applications in the research community recently. These applications often require the calibration information between the color and the depth cameras. Traditional checkerboard based calibration schemes fail to work well for the depth camera, since its corner features cannot be reliably detected in the depth image. In this paper, we present a maximum likelihood solution for the joint depth and color calibration based on two principles. First, in the depth image, points on the checker-board shall be co-planar, and the plane is known from color camera calibration. Second, additional point correspondences between the depth and color images may be manually specified or automatically established to help improve calibration accuracy. Uncertainty in depth values has been taken into account systematically. The proposed algorithm is reliable and accurate, as demonstrated by extensive experimental results on simulated and real-world examples.",
"We present an algorithm that simultaneously calibrates two color cameras, a depth camera, and the relative pose between them. The method is designed to have three key features: accurate, practical, and applicable to a wide range of sensors. The method requires only a planar surface to be imaged from various poses. The calibration does not use depth discontinuities in the depth image, which makes it flexible and robust to noise. We apply this calibration to a Kinect device and present a new depth distortion model for the depth sensor. We perform experiments that show an improved accuracy with respect to the manufacturer's calibration.",
"Color-depth cameras (RGB-D cameras) have become the primary sensors in most robotics systems, from service robotics to industrial robotics applications. Typical consumer-grade RGB-D cameras are provided with a coarse intrinsic and extrinsic calibration that generally does not meet the accuracy requirements needed by many robotics applications [e.g., highly accurate three-dimensional (3-D) environment reconstruction and mapping, high precision object recognition, localization, etc.]. In this paper, we propose a human-friendly, reliable, and accurate calibration framework that enables to easily estimate both the intrinsic and extrinsic parameters of a general color-depth sensor couple. Our approach is based on a novel two components error model. This model unifies the error sources of RGB-D pairs based on different technologies, such as structured-light 3-D cameras and time-of-flight cameras. Our method provides some important advantages compared to other state-of-the-art systems: It is general (i.e., well suited for different types of sensors), based on an easy and stable calibration protocol, provides a greater calibration accuracy, and has been implemented within the robot operating system robotics framework. We report detailed experimental validations and performance comparisons to support our statements."
]
} |
1907.01976 | 2955187146 | We consider a basic resource allocation game, where the players' strategy spaces are subsets of @math and cost utility functions are parameterized by some common vector @math and, otherwise, only depend on the own strategy choice. A strategy of a player can be interpreted as a vector of resource consumption and a joint strategy profile naturally leads to an aggregate consumption vector. We assume that resources can be priced, that is, the game is augmented by a price vector @math and players have quasi-linear overall costs utilities meaning that in addition to the original costs utilities, a player needs to pay the corresponding price per consumed unit. We investigate the following question: for which aggregated consumption vectors @math can we find prices @math that induce an equilibrium realizing the targeted consumption profile? For answering this question, we develop a duality-based framework and derive a characterization of the existence of such @math and @math . We show that our characterization can help to unify parts of three largely independent streams in the literature -- tolls in transportation systems, Walrasian market equilibria and congestion control in communication networks. Besides reproving existing results we establish novel existence results by drawing connections to polyhedral combinatorics and discrete convexity. | Our first main result (Theorem ) relies on a decomposition property of the Lagrangian (for separable problems) and the use of Lagrange multipliers for pricing the resources. This approach is not new and has been developed before, see for instance Bertsekas and Gallager @cite_16 and Palomar and Chiang @cite_44 . In particular, motivated by the dual-decomposition of the convex programming formulation of the bandwidth allocation problem of @cite_56 , Palomar and Chiang @cite_44 described how the Lagrangian of a general separable optimization problem \[ \max \sum_{i \in N} U_i(x_i) \;\; \text{s.t.} \;\; x_i \in X_i, \; i \in N, \; \sum_{i \in N} h_i(x_i) \le u \] can be decomposed into @math independent problems. The difference between this model and ours is the parameterization of the cost utility functions @math with respect to the capacity vector @math . This degree of freedom is a strict generalization and allows one to model dependencies of targeted capacity vectors with respect to the intrinsic cost utilities - a prime example appears in nonatomic congestion games, where the cost function of an agent depends on the aggregated load vector. Moreover, this dependency allows to model with respect to allocations which are not possible in the model of Palomar and Chiang @cite_44 . | {
"cite_N": [
"@cite_44",
"@cite_16",
"@cite_56"
],
"mid": [
"2162986857",
"",
"2159715570"
],
"abstract": [
"A systematic understanding of the decomposability structures in network utility maximization is key to both resource allocation and functionality allocation. It helps us obtain the most appropriate distributed algorithm for a given network resource allocation problem, and quantifies the comparison across architectural alternatives of modularized network design. Decomposition theory naturally provides the mathematical language to build an analytic foundation for the design of modularized and distributed control of networks. In this tutorial paper, we first review the basics of convexity, Lagrange duality, distributed subgradient method, Jacobi and Gauss-Seidel iterations, and implication of different time scales of variable updates. Then, we introduce primal, dual, indirect, partial, and hierarchical decompositions, focusing on network utility maximization problem formulations and the meanings of primal and dual decompositions in terms of network architectures. Finally, we present recent examples on: systematic search for alternative decompositions; decoupling techniques for coupled objective functions; and decoupling techniques for coupled constraint sets that are not readily decomposable",
"",
"This paper analyses the stability and fairness of two classes of rate control algorithm for communication networks. The algorithms provide natural generalisations to large-scale networks of simple additive increase multiplicative decrease schemes, and are shown to be stable about a system optimum characterised by a proportional fairness criterion. Stability is established by showing that, with an appropriate formulation of the overall optimisation problem, the network's implicit objective function provides a Lyapunov function for the dynamical system defined by the rate control algorithm. The network's optimisation problem may be cast in primal or dual form: this leads naturally to two classes of algorithm, which may be interpreted in terms of either congestion indication feedback signals or explicit rates based on shadow prices. Both classes of algorithm may be generalised to include routing control, and provide natural implementations of proportionally fair pricing."
]
} |
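The decomposition property invoked in the record above can be spelled out as follows; this is a hedged sketch of the standard dual decomposition for the separable problem quoted there (with prices \lambda \ge 0 attached to the coupling constraint), not the paper's own derivation:
\[
% Lagrangian of the coupled problem separates across players.
L(x, \lambda) \;=\; \sum_{i \in N} U_i(x_i) + \lambda^{\top}\Big(u - \sum_{i \in N} h_i(x_i)\Big)
\;=\; \lambda^{\top} u + \sum_{i \in N} \Big(U_i(x_i) - \lambda^{\top} h_i(x_i)\Big),
\]
\[
% Dual function: each player i solves an independent priced subproblem.
g(\lambda) \;=\; \lambda^{\top} u + \sum_{i \in N} \max_{x_i \in X_i} \Big(U_i(x_i) - \lambda^{\top} h_i(x_i)\Big).
\]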
1907.01976 | 2955187146 | We consider a basic resource allocation game, where the players' strategy spaces are subsets of @math and cost utility functions are parameterized by some common vector @math and, otherwise, only depend on the own strategy choice. A strategy of a player can be interpreted as a vector of resource consumption and a joint strategy profile naturally leads to an aggregate consumption vector. We assume that resources can be priced, that is, the game is augmented by a price vector @math and players have quasi-linear overall costs utilities meaning that in addition to the original costs utilities, a player needs to pay the corresponding price per consumed unit. We investigate the following question: for which aggregated consumption vectors @math can we find prices @math that induce an equilibrium realizing the targeted consumption profile? For answering this question, we develop a duality-based framework and derive a characterization of the existence of such @math and @math . We show that our characterization can help to unify parts of three largely independent streams in the literature -- tolls in transportation systems, Walrasian market equilibria and congestion control in communication networks. Besides reproving existing results we establish novel existence results by drawing connections to polyhedral combinatorics and discrete convexity. | A large body of work in the area of transportation networks is concerned with congestion toll pricing, see for example Knight @cite_12 , @cite_13 , Smith @cite_8 , and @cite_9 . @cite_13 showed that for the Wardrop model with homogeneous users, charging the difference between the marginal cost and the real cost in the socially optimal solution (marginal cost pricing) leads to an equilibrium flow which is optimal. @cite_75 considered the case of heterogeneous users, that is, users value latency relative to monetary cost differently. For single-commodity networks, the authors showed the existence of tolls that induce an optimal flow as a Nash flow. Yang and Huang @cite_55 , @cite_0 and Karakostas and Kolliopoulos @cite_61 proved that there are tolls inducing an optimal flow for heterogeneous users even in general networks - all proofs are based on linear programming duality. Swamy @cite_25 and Yang and Zhang @cite_26 proved the existence of optimal tolls for the atomic splittable model using convex programming duality. | {
"cite_N": [
"@cite_61",
"@cite_26",
"@cite_75",
"@cite_8",
"@cite_9",
"@cite_55",
"@cite_0",
"@cite_13",
"@cite_25",
"@cite_12"
],
"mid": [
"1521705705",
"2064824188",
"2099481767",
"1996393333",
"1565111024",
"2129592590",
"2123733911",
"",
"2145512846",
"2051688498"
],
"abstract": [
"We examine how the selfish behavior of heterogeneous users in a network can be regulated through economic disincentives, i.e., through the introduction of appropriate taxation. One wants to impose taxes on the edges so that any traffic equilibrium reached by the selfish users who are conscious of both the travel latencies and the taxes will minimize the social cost, i.e., will minimize the total latency. We generalize previous results of Cole, Dodis and Roughgarden that held for a single origin-destination pair to the multicommodity setting. Our approach, which could be of independent interest, is based on the formulation of traffic equilibria as a nonlinear complementarity problem by Aashtiani and Magnanti (1981), We extend this formulation so that each of its solutions will give us a set of taxes that forces the network users to conform, at equilibrium, to a certain prescribed routing. We use the special nature of the prescribed minimum-latency flow in order to reduce the difficult nonlinear complementarity formulation to a pair of primal-dual linear programs. LP duality is then enough to derive our results.",
"The notions of user equilibrium (UE) and system optimum (SO) often allude to the literature together with the well-known principle of marginal-cost pricing in traffic network analyses. This pricing principle states that the UE flow pattern on a network can be driven to an SO in the sense of total travel cost minimization by charging a toll on each link equal to the difference between marginal social cost and marginal private cost. In reality, users do not always behave in a UE manner, typically when there exist oligopoly Cournot-Nash (CN) firms. Users in a CN firm cooperate among themselves to minimize total cost of the firm and compete against others. In the presence of such UE-CN mixed equilibrium behaviors, we are interested in whether an SO flow pattern remains attainable by meaningful link tolls. In this paper we show that in a network with both UE and CN users, applying the traditional marginal-cost pricing for a system optimum requires that link tolls be differentiated across user classes. Because users differ from one another in an unobservable way, it is impossible to introduce discriminatory tolling on a network in a mixed behaviour equilibrium. We then seek alternative meaningful tolls by establishing the existence of nonnegative anonymous link tolls to decentralize the SO into a UE-CN mixed behavior equilibrium with resort to a rigorous mathematical programming approach.",
"We study the negative consequences of selfish behavior in a congested network and economic means of influencing such behavior. We consider a model of selfish routing in which the latency experienced by network traffic on an edge of the network is a function of the edge congestion, and network users are assumed to selfishly route traffic on minimum-latency paths. The quality of a routing of traffic is measured by the sum of travel times (the total latency).It is well known that the outcome of selfish routing (a Nash equilibrium) does not minimize the total latency. An ancient strategy for improving the selfish solution is the principle of marginal cost pricing, which asserts that on each edge of the network, each network user on the edge should pay a tax offsetting the congestion effects caused by its presence. By pricing network edges according to this principle, the inefficiency of selfish routing can always be eradicated.This result, while fundamental, assumes a very strong homogeneity property: all network users are assumed to trade off time and money in an identical way. The guarantee also ignores both the algorithmic aspects of edge pricing and the unfortunate possibility that an efficient routing of traffic might only be achieved with exorbitant taxes. Motivated by these shortcomings, we extend this classical work on edge pricing in several different directions and prove the following results.We prove that the edges of a single-commodity network can always be priced so that an optimal routing of traffic arises as a Nash equilibrium, even for very general heterogeneous populations of network users.When there are only finitely many different types of network users and all edge latency functions are convex, we show how to compute such edge prices efficiently.We prove that an easy-to-check mathematical condition on the population of heterogeneous network users is both necessary and sufficient for the existence of edge prices that induce an optimal routing while requiring only moderate taxes.",
"The paper shows that if the cost and demand functions satisfy certain weak smoothness conditions then the marginal cost taxation of a transportation network is optimal in the usual local sense. Interactions between the cost of travel along a link and flow along other links and between the demand for travel along a route and flow along other routes are permitted.",
"Congestion toll pricing addresses the classic traffic assignment problem for which Wardrop enunciated two principles of traffic flow: user-optimal behavioral hypothesis and the notion and system-optimality. (See Florian and Hearn, 1995, for a recent review of the traffic assignment problem and Johnson and Mattson, 1992 for a recent volume of papers on road pricing.) The traditional objective of congestion pricing has been to determine link tolls which will cause the solution of the tolled user-optimal problem to be optimal for the untolled system problem (Arnott and Small, 1994). In most of the literature, the one choice given has been the vector of marginal social cost pricing tolls.",
"It is well known that in the standard traffic network equilibrium model with a single value of time (VOT) for all users, a so-called marginal-cost toll can drive a user equilibrium flow pattern to a system optimum. This result holds when either cost (money) or time units are used in expressing the objective function of the system optimum and the criterion for user equilibrium. This paper examines the multi-criteria or the cost-versus-time network equilibrium and system optimum problem in a network with a discrete set of VOTs for several user classes. Specifically, the following questions are investigated: Are the user-optimal flows dependent upon the unit (time or money) used in measuring the travel disutility in the presence of road pricing? Are there any uniform link tolls across all individuals (link tolls that are identical for all user classes) that can support a multi-class user equilibrium flow pattern as a system optimum when the system objective function is measured by either money or time units? What are the general properties of the valid toll set?",
"We prove the existence of tolls to induce multicommodity, heterogeneous network users that independently choose routes minimizing their own linear function of tolls versus latency to collectively form the traffic pattern of a minimum average latency flow. This generalizes both the previous known results of the existence of tolls for multicommodity, homogeneous users (, 1956) and for single commodity, heterogeneous users (, 2003). Unlike previous proofs for single commodity users in general graphs, our proof is constructive - it does not rely on a fixed point theorem - and results in a simple polynomial-sized linear program to compute tolls when the number of different types of users is bounded by a polynomial. We show that our proof gives a complete characterization of flows that are enforceable by tolls. In particular, tolls exist to induce any traffic pattern that is the result of minimizing an arbitrary function from R sup E(G) to the reals that is nondecreasing in each of its arguments. Thus, tolls exist to induce flows with minimum average weighted latency, minimum maximum latency, and other natural objectives. We give an exponential bound on tolls that is independent of the number of network users and the number of commodities. We use this to show that multicommodity tolls also exist when users are not from discrete classes, but instead define a general function that trades off latency versus toll preference. Finally, we show that our result extends to very general frameworks. In particular, we show that tolls exist to induce the Nash equilibrium of general nonatomic congestion games to be system optimal. In particular, tolls exist even when 1) latencies depend on user type; 2) latency functions are nonseparable functions of traffic on edges; 3) the latency of a set S is an arbitrary function of the latencies of the resources contained in S. Our exponential bound on size of tolls also holds in this case; and we give an example of a congestion game that shows this is tight; it requires tolls that are exponential in the size of the game.",
"",
"It is well known that in a network with arbitrary (convex) latency functions that are a function of edge traffic, the worst-case ratio, over all inputs, of the system delay caused due to selfish behavior versus the system delay of the optimal centralized solution may be unbounded even if the system consists of only two parallel links. This ratio is called the price of anarchy (PoA). In this paper, we investigate ways by which one can reduce the performance degradation due to selfish behavior. We investigate two primary methods (a) Stackelberg routing strategies, where a central authority, e.g., network manager, controls a fixed fraction of the flow, and can route this flow in any desired way so as to influence the flow of selfish users; and (b) network tolls, where tolls are imposed on the edges to modify the latencies of the edges, and thereby influence the induced Nash equilibrium. We obtain results demonstrating the effectiveness of both Stackelberg strategies and tolls in controlling the price of anarchy. For Stackelberg strategies, we obtain the first results for nonatomic routing in graphs more general than parallel-link graphs, and strengthen existing results for parallel-link graphs, (i) In series-parallel graphs, we show that Stackelberg routing reduces the PoA to a constant (depending on the fraction of flow controlled). (ii) For general graphs, we obtain latency-class specific bounds on the PoA with Stackelberg routing, which give a continuous trade-off between the fraction of flow controlled and the price of anarchy, (iii) In parallel-link graphs, we show that for any given class L of latency functions, Stackelberg routing reduces the PoA to at most α + (1 - α) · ρ(L), where α is the fraction of flow controlled and ρ(L) is the PoA of class L (when α = 0). For network tolls, motivated by the known strong results for nonatomic games, we consider the more general setting of atomic splittable routing games. We show that tolls inducing an optimal flow always exist, even for general asymmetric games with heterogeneous users, and can be computed efficiently by solving a convex program. Furthermore, we give a complete characterization of flows that can be induced via tolls. These are the first results on the effectiveness of tolls for atomic splittable games.",
"Arguments for social interference developed by Pigou and Graham illustrate common misinterpretations of the meaning of cost and its variation with output, 582. — I. The private owner of a natural opportunity secures maximum return from it by charging that rent which halts the application of investment at the point which is socially most advantageous, 584. — II. The notion of decreasing cost is a fallacy; competitive price fixation under decreasing cost or increasing returns an impossible situation, 592. — III. The law of comparative advantage in international trade is fundamentally sound, 599. — Importation a method of using resources to produce the imported good, and will be employed under competitive conditions only when more efficient than a direct method, 603. — The competitive system has important defects, but they lie outside the mechanical theory of exchange relations, 605."
]
} |
1907.01976 | 2955187146 | We consider a basic resource allocation game, where the players' strategy spaces are subsets of @math and cost/utility functions are parameterized by some common vector @math and, otherwise, only depend on their own strategy choice. A strategy of a player can be interpreted as a vector of resource consumption and a joint strategy profile naturally leads to an aggregate consumption vector. We assume that resources can be priced, that is, the game is augmented by a price vector @math and players have quasi-linear overall costs/utilities, meaning that in addition to the original costs/utilities, a player needs to pay the corresponding price per consumed unit. We investigate the following question: for which aggregated consumption vectors @math can we find prices @math that induce an equilibrium realizing the targeted consumption profile? For answering this question, we develop a duality-based framework and derive a characterization of the existence of such @math and @math . We show that our characterization can help to unify parts of three largely independent streams in the literature -- tolls in transportation systems, Walrasian market equilibria and congestion control in communication networks. Besides reproving existing results, we establish novel existence results by drawing connections to polyhedral combinatorics and discrete convexity. | For atomic (unsplittable) network congestion games, much less is known regarding the existence of tolls. @cite_62 studied the existence of tolls for singleton congestion games. Fotakis and Spirakis @cite_11 proved the existence of tolls inducing any acyclic integral flow for symmetric @math , @math network games with homogeneous players. @cite_46 further extended this result to heterogeneous players and networks with a common source but different sinks. @cite_22 transferred the idea of charging marginal cost tolls to congestion games and showed the existence of tolls enforcing the load vector of a socially optimal strategy distribution. | {
"cite_N": [
"@cite_46",
"@cite_62",
"@cite_22",
"@cite_11"
],
"mid": [
"1599430102",
"2070173191",
"2810102121",
"2007169922"
],
"abstract": [
"We consider network congestion games in which a finite number of non-cooperative users select paths. The aim is to mitigate the inefficiency caused by the selfish users by introducing taxes on the network edges. A tax vector is strongly (weakly)-optimal if all (at least one of) the equilibria in the resulting game minimize(s) the total latency. The issue of designing optimal tax vectors for selfish routing games has been studied extensively in the literature. We study for the first time taxation for networks with atomic users which have unsplittable traffic demands and are heterogeneous, i.e., have different sensitivities to taxes. On the positive side, we show the existence of weakly-optimal taxes for single-source network games. On the negative side, we show that the cases of homogeneous and heterogeneous users differ sharply as far as the existence of strongly-optimal taxes is concerned: there are parallel-link games with linear latencies and heterogeneous users that do not admit strongly-optimal taxes.",
"We study congestion games where players aim to access a set of resources. Each player has a set of possible strategies and each resource has a function associating the latency it incurs to the players using it. Players are non--cooperative and each wishes to follow a strategy that minimizes her own latency with no regard to the global optimum. Previous work has studied the impact of this selfish behavior on system performance. In this article, we study the question of how much the performance can be improved if players are forced to pay taxes for using resources. Our objective is to extend the original game so that selfish behavior does not deteriorate performance. We consider atomic congestion games with linear latency functions and present both negative and positive results. Our negative results show that optimal system performance cannot be achieved even in very simple games. On the positive side, we show that there are ways to assign taxes that can improve the performance of linear congestion games by forcing players to follow strategies where the total latency suffered is within a factor of 2 of the minimum possible; this result is shown to be tight. Furthermore, even in cases where in the absence of taxes the system behavior may be very poor, we show that the total disutility of players (latency plus taxes) is not much larger than the optimal total latency. Besides existential results, we show how to compute taxes in time polynomial in the size of the game by solving convex quadratic programs. Similar questions have been extensively studied in the model of non-atomic congestion games. To the best of our knowledge, this is the first study of the efficiency of taxes in atomic congestion games.",
"We consider multi-player repeated games involving a large number of players with large strategy spaces and enmeshed utility structures. In these ldquolarge-scalerdquo games, players are inherently faced with limitations in both their observational and computational capabilities. Accordingly, players in large-scale games need to make their decisions using algorithms that accommodate limitations in information gathering and processing. This disqualifies some of the well known decision making models such as ldquoFictitious Playrdquo (FP), in which each player must monitor the individual actions of every other player and must optimize over a high dimensional probability space. We will show that Joint Strategy Fictitious Play (JSFP), a close variant of FP, alleviates both the informational and computational burden of FP. Furthermore, we introduce JSFP with inertia, i.e., a probabilistic reluctance to change strategies, and establish the convergence to a pure Nash equilibrium in all generalized ordinal potential games in both cases of averaged or exponentially discounted historical data. We illustrate JSFP with inertia on the specific class of congestion games, a subset of generalized ordinal potential games. In particular, we illustrate the main results on a distributed traffic routing problem and derive tolling procedures that can lead to optimized total traffic congestion.",
"We investigate the existence of optimal tolls for atomic symmetric network congestion games with unsplittable traffic and arbitrary nondecreasing latency functions. We focus on pure Nash equilibria, and consider a natural toll mechanism, which we call cost-balancing tolls. A set of cost-balancing tolls turns every path with positive traffic on its edges into a minimum-cost path. Hence any given configuration is induced as a pure Nash equilibrium of the modified game with the corresponding cost-balancing tolls. We show how to compute in linear time a set of cost-balancing tolls for the optimal solution such that the total amount of tolls paid by any player in any pure Nash equilibrium of the modified game does not exceed the latency on the maximum-latency path in the optimal solution. Our main result is that for congestion games on series-parallel networks with strictly increasing latency functions, the optimal solution is induced as the unique pure Nash equilibrium of the game with the corresponding cost-..."
]
} |
1907.01976 | 2955187146 | We consider a basic resource allocation game, where the players' strategy spaces are subsets of @math and cost/utility functions are parameterized by some common vector @math and, otherwise, only depend on their own strategy choice. A strategy of a player can be interpreted as a vector of resource consumption and a joint strategy profile naturally leads to an aggregate consumption vector. We assume that resources can be priced, that is, the game is augmented by a price vector @math and players have quasi-linear overall costs/utilities, meaning that in addition to the original costs/utilities, a player needs to pay the corresponding price per consumed unit. We investigate the following question: for which aggregated consumption vectors @math can we find prices @math that induce an equilibrium realizing the targeted consumption profile? For answering this question, we develop a duality-based framework and derive a characterization of the existence of such @math and @math . We show that our characterization can help to unify parts of three largely independent streams in the literature -- tolls in transportation systems, Walrasian market equilibria and congestion control in communication networks. Besides reproving existing results, we establish novel existence results by drawing connections to polyhedral combinatorics and discrete convexity. | For multi-unit items, several recent papers studied the existence of Walrasian equilibria. @cite_45 investigated the existence of Walrasian equilibria in multi-unit auctions and identified general conditions on the demand sets and valuations related to discrete convexity. The conditions of Milgrom and Strulovici @cite_4 and Ausubel @cite_76 appear as special cases of those in @cite_45 . Baldwin and Klemperer @cite_20 explored a connection with tropical geometry and gave a necessary and sufficient condition for the existence of competitive equilibrium in product-mix auctions of indivisible goods. This result is also closely related to the work of Danilov, Koshevoy, and Murota @cite_45 ; see also Sun and Yang @cite_71 . Tran and Yu @cite_57 gave a new proof of the sufficiency condition of @cite_20 using a unimodularity theorem in integer programming. For a comparison of the above works, especially with respect to the role of discrete convexity, we refer to the excellent survey of Shioura and Tamura @cite_54 . @cite_74 @cite_64 showed that valuation classes (beyond GS valuations) based on graphical structures also imply the existence of Walrasian equilibria. Their proof uses integrality of optimal solutions of an associated linear min-cost flow formulation. | {
"cite_N": [
"@cite_64",
"@cite_4",
"@cite_54",
"@cite_57",
"@cite_45",
"@cite_71",
"@cite_74",
"@cite_76",
"@cite_20"
],
"mid": [
"2788862187",
"1973520166",
"2099063943",
"2966800818",
"2008980884",
"2159078604",
"2304647284",
"2150880868",
"2127265337"
],
"abstract": [
"We introduce feature valuations, a new class of valuations that compactly capture preferences of agents who value items based on the features they possess. Such preferences are relevant in many important practical settings, such as Internet advertising markets (where impressions have associated attributes), labor markets (where workers’ skills titles correspond to features), and long-term bond markets (where coupon payments can be associated with features). We focus on settings where features can be organized in a tree network, and features possessed by items are captured by paths on the tree. In such settings, under appropriate consistency conditions, we establish that Walrasian equilibria exist. In addition, we provide a computationally tractable price update algorithm that terminates with market-clearing prices that support an efficient outcome. The e-companion is available at https: doi.org 10.1287 mnsc.2017.2917. This paper was accepted by Yinyu Ye, optimization.",
"This paper identifies two notions of substitutes for auction and equilibrium analysis. Weak substitutes, the usual price-theory notion, guarantees monotonicity of Tatonnement processes and convergence of clock auctions to a pseudo-equilibrium, but only strong substitutes, which treats each unit traded as a distinct good with its own price, guarantees that every pseudo-equilibrium is a Walrasian equilibrium, that the Vickrey outcome is in the core, and that the \"law of aggregate demand\" is satisfied. When goods are divisible, weak substitutes along with concavity guarantees all of the above properties, except for the law of aggregate demand.",
"Efficient allocation of indivisible goods is an important problem in mathematical economics and operations research, where the concept of Walrasian equilibrium plays a fundamental role. As a sufficient condition for the existence of a Walrasian equilibrium, the concept of gross substitutes condition for valuation functions is introduced by Kelso and Crawford (1982). Since then, several variants of gross substitutes condition as well as a discrete concavity concept, called M ♮ -concavity, have been introduced to show the existence of an equilibrium in various models. In this paper, we survey the relationship among Kelso and Crawford's gross substitutes condition and its variants, and discuss the connection with M ♮ -concavity. We also review various characterizations and properties of these concepts.",
"In a recent and ongoing work, Baldwin and Klemperer explored a connection between tropical geometry and economics. They gave a sufficient condition for the existence of competitive equilibrium in product-mix auctions of indivisible goods. This result, which we call the Unimodularity Theorem, can also be traced back to the work of Danilov, Koshevoy, and Murota in discrete convex analysis. We give a new proof of the Unimodularity Theorem via the classical unimodularity theorem in integer programming. We give a unified treatment of these results via tropical geometry and formulate a new sufficient condition for competitive equilibrium when there are only two types of product. Generalizations of our theorem in higher dimensions are equivalent to various forms of the Oda conjecture in algebraic geometry.",
"Abstract We consider a production economy with many indivisible goods and one perfectly divisible good. The aim of the paper is to provide some light on the reasons for which equilibrium exists for such an economy. It turns out, that a main reason for the existence is that supplies and demands of indivisible goods should be sets of a class of discrete convexity. The class of generalized polymatroids provides one of the most interesting classes of discrete convexity.",
"We propose a new Walrasian tatonnement process called a double-track procedure for efficiently allocating multiple heterogeneous indivisible items in two distinct sets to many buyers who view items in the same set as substitutes but items across the two sets as complements. In each round of the process, a Walrasian auctioneer first announces the current prices for all items, buyers respond by reporting their demands at these prices, and then the auctioneer adjusts simultaneously the prices of items in one set upward but those of items in the other set downward. It is shown that this procedure converges globally to a Walrasian equilibrium in finitely many rounds. Copyright 2009 The Econometric Society.",
"We study pricing equilibria for graphical valuations, whichare a class of valuations that admit a compact representation. These valuations are associated with a value graph, whose nodes correspond to items, and edges encode (pairwise) complementarities substitutabilities between items. It is known that for graphical valuations a Walrasian equilibrium (a pricing equilibrium that relies on anonymous item prices) does not exist in general. On the other hand, a pricing equilibrium exists when the seller uses an agent-specific graphical pricing rule that involves prices for each item and markups discounts for pairs of items. We study the existence of pricing equilibria with simpler pricing rules which either (i) require anonymity (so that prices are identical for all agents) while allowing for pairwise markups discounts or (ii) involve offering prices only for items. We show that a pricing equilibrium with the latter pricing rule exists if and only if a Walrasian equilibrium exists, whereas the former pricing rule may guarantee the existence of a pricing equilibrium even for graphical valuations that do not admit a Walrasian equilibrium. Interestingly, by exploiting a novel connection between the existence of a pricing equilibrium and the partitioning polytope associated with the underlying graph, we also establish that for simple (series-parallel) value graphs, a pricing equilibrium with anonymous graphical pricing rule exists if and only if a Walrasian equilibrium exists. These equivalence results imply that simpler pricing rules (i) and (ii) do not guarantee the existence of a pricing equilibrium for all graphical valuations.",
"This article proposes a new dynamic design for auctioning multiple heterogeneous commodities. An auctioneer wishes to allocate K types of commodities among n bidders. The auctioneer announces a vector of current prices, bidders report quantities demanded at these prices, and the auctioneer adjusts the prices. Units are credited to bidders at the current prices as their opponents' demands decline, and the process continues until every commodity market clears. Bidders, rather than being assumed to behave as price-takers, are permitted to strategically exercise their market power. Nevertheless, the proposed auction yields Walrasian equilibrium prices and, as from a Vickrey-Clarke-Groves mechanism, an efficient allocation. (JEL D44)",
"We propose new techniques for understanding agents' valuations. Our classification into types\", incorporates existing definitions (substitutes, complements, substitutes\", etc.) and permits new ones. Our Unimodularity Theorem generalises previous results about when competitive equilibrium exists for any set of agents whose valuations are all of a type\" for indivisible goods. Contrary to popular belief, equilibrium is guaranteed for more classes of purely-complements, than of purely-substitutes, preferences. Our Intersection Count Theorem checks equilibrium existence for combinations of agents with specific valuations by counting the intersection points of geometric objects. Applications include matching and coalition-formation; and the Product-Mix Auction, introduced by the Bank of England in response to the financial crisis."
]
} |
1907.01976 | 2955187146 | We consider a basic resource allocation game, where the players' strategy spaces are subsets of @math and cost/utility functions are parameterized by some common vector @math and, otherwise, only depend on their own strategy choice. A strategy of a player can be interpreted as a vector of resource consumption and a joint strategy profile naturally leads to an aggregate consumption vector. We assume that resources can be priced, that is, the game is augmented by a price vector @math and players have quasi-linear overall costs/utilities, meaning that in addition to the original costs/utilities, a player needs to pay the corresponding price per consumed unit. We investigate the following question: for which aggregated consumption vectors @math can we find prices @math that induce an equilibrium realizing the targeted consumption profile? For answering this question, we develop a duality-based framework and derive a characterization of the existence of such @math and @math . We show that our characterization can help to unify parts of three largely independent streams in the literature -- tolls in transportation systems, Walrasian market equilibria and congestion control in communication networks. Besides reproving existing results, we establish novel existence results by drawing connections to polyhedral combinatorics and discrete convexity. | Our existence result for polymatroid environments differs from these previous works in the sense that we allow valuations to depend on the allocation of items to other players (negative externalities). Far fewer works allow for externalities in valuation functions; see, for instance, Zame and Noguchi @cite_38 . Models with positive (network-based) externalities have been considered by @cite_79 . @cite_15 considered a setting with weighted negative network-based externalities and unit-demand buyers. @cite_36 consider the problem of selling a base of a polymatroid. In their model, however, the prices are not anonymous (rather VCG) for several items of the same type. The same holds true for @cite_21 , who also consider polymatroids, even with budget constraints. @cite_60 proposed the notion of combinatorial Walrasian equilibria, where items can be packed a priori into bundles. This ensures the existence of equilibria with approximately optimal welfare guarantees. Roughgarden and Talgam-Cohen @cite_37 linked the existence of Walrasian equilibria to the computational complexity of the allocation and demand problems. | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_60",
"@cite_36",
"@cite_21",
"@cite_79",
"@cite_15"
],
"mid": [
"2152145060",
"2021146489",
"2270382553",
"2077339343",
"1454477978",
"2136041258",
"2109741490"
],
"abstract": [
"This paper presents a general model of a competitive market with consumption externalities, and establishes the existence of equilibrium in the model, under assumptions comparable to those in classical models. The model allows production and indivisible goods. Examples illustrate the generality and applicability of the results.",
"Understanding when equilibria are guaranteed to exist is a central theme in economic theory, seemingly unrelated to computation. This paper shows that the existence of pricing equilibria is inextricably connected to the computational complexity of related optimization problems: demand oracles, revenue-maximization, and welfare-maximization. This relationship implies, under suitable complexity assumptions, a host of impossibility results. We also suggest a complexity-theoretic explanation for the lack of useful extensions of the Walrasian equilibrium concept: such extensions seem to require the invention of novel polynomial-time algorithms for welfare-maximization.",
"We study a combinatorial market design problem, where a collection of indivisible objects is to be priced and sold to potential buyers subject to equilibrium constraints. The classic solution concept for such problems is Walrasian equilibrium (WE), which provides a simple and transparent pricing structure that achieves optimal social welfare. The main weakness of the WE notion is that it exists only in very restrictive cases. To overcome this limitation, we introduce the notion of a combinatorial Walrasian equilibium (CWE), a natural relaxation of WE. The difference between a CWE and a (noncombinatorial) WE is that the seller can package the items into indivisible bundles prior to sale, and the market does not necessarily clear. We show that every valuation profile admits a CWE that obtains at least half the optimal (unconstrained) social welfare. Moreover, we devise a polynomial time algorithm that, given an arbitrary allocation, computes a CWE that achieves at least half its welfare. Thus, the economic ...",
"Consider selling bundles of indivisible goods to buyers with concave utilities that are additively separable in money and goods. We propose an ascending auction for the case when the seller is constrained to sell bundles whose elements form a basis of a matroid. It extends easily to polymatroids. Applications include scheduling, allocation of homogeneous goods, and spatially distributed markets, among others. Our ascending auction induces buyers to bid truthfully and returns the economically efficient basis. Unlike other ascending auctions for this environment, ours runs in pseudopolynomial or polynomial time. Furthermore, we prove the impossibility of an ascending auction for nonmatroidal independence set-systems.",
"A central issue in applying auction theory in practice is the problem of dealing with budget-constrained agents. A desirable goal in practice is to design incentive compatible, individually rational, and Pareto optimal auctions while respecting the budget constraints. Achieving this goal is particularly challenging in the presence of nontrivial combinatorial constraints over the set of feasible allocations. Toward this goal and motivated by AdWords auctions, we present an auction for polymatroidal environments satisfying these properties. Our auction employs a novel clinching technique with a clean geometric description and only needs an oracle access to the submodular function defining the polymatroid. As a result, this auction not only simplifies and generalizes all previous results, it applies to several new applications including AdWords Auctions, bandwidth markets, and video on demand. In particular, our characterization of the AdWords auction as polymatroidal constraints might be of independent interest. This allows us to design the first mechanism for Ad Auctions taking into account simultaneously budgets, multiple keywords and multiple slots. We show that it is impossible to extend this result to generic polyhedral constraints. This also implies an impossibility result for multiunit auctions with decreasing marginal utilities in the presence of budget constraints.",
"We study the optimal pricing strategies of a monopolist selling a divisible good (service) to consumers who are embedded in a social network. A key feature of our model is that consumers experience a (positive) local network effect. In particular, each consumer's usage level depends directly on the usage of her neighbors in the social network structure. Thus, the monopolist's optimal pricing strategy may involve offering discounts to certain agents who have a central position in the underlying network. Our results can be summarized as follows. First, we consider a setting where the monopolist can offer individualized prices and derive a characterization of the optimal price for each consumer as a function of her network position. In particular, we show that it is optimal for the monopolist to charge each agent a price that consists of three components: (i) a nominal term that is independent of the network structure, (ii) a discount term proportional to the influence that this agent exerts over the rest of the social network (quantified by the agent's Bonacich centrality), and (iii) a markup term proportional to the influence that the network exerts on the agent. In the second part of the paper, we discuss the optimal strategy of a monopolist who can only choose a single uniform price for the good and derive an algorithm polynomial in the number of agents to compute such a price. Third, we assume that the monopolist can offer the good in two prices, full and discounted, and we study the problem of determining which set of consumers should be given the discount. We show that the problem is NP-hard; however, we provide an explicit characterization of the set of agents who should be offered the discounted price. Next, we describe an approximation algorithm for finding the optimal set of agents. We show that if the profit is nonnegative under any feasible price allocation, the algorithm guarantees at least 88 of the optimal profit. Finally, we highlight the value of network information by comparing the profits of a monopolist who does not take into account the network effects when choosing her pricing policy to those of a monopolist who uses this information optimally.",
"We consider the problem of a monopolist seller who wants to sell some items to a set of buyers. The buyers are strategic, unit-demand, and connected by a social network. Furthermore, the utility of a buyer is a decreasing function of the number of neighbors who do not own the item. In other words, they exhibit negative externalities, deriving utility from being unique in their purchases. In this model, any fixed setting of the price induces a sub-game on the buyers. We show that it is an exact potential game which admits multiple pure Nash Equilibria. A natural problem is to compute those pure Nash equilibria that raise the most and least revenue for the seller. These correspond respectively to the most optimistic and most pessimistic revenues that can be raised. We show that the revenues of both the best and worst equilibria are hard to approximate within sub-polynomial factors. Given this hardness, we consider a relaxed notion of pricing, where the price for the same item can vary within a constant factor for different buyers. We show a 4-approximation to the pessimistic revenue when the prices are relaxed by a factor of 4. The interesting aspect of this algorithm is that it uses a linear programming relaxation that only encodes part of the strategic behavior of the buyers in its constraints, and rounds this relaxation to obtain a starting configuration for performing relaxed Nash dynamics. Finally, for the maximum revenue Nash equilibrium, we show a 2-approximation for bipartite graphs (without price relaxation), and complement this result by showing that the problem is NP-Hard even on trees."
]
} |
1907.01976 | 2955187146 | We consider a basic resource allocation game, where the players' strategy spaces are subsets of @math and cost/utility functions are parameterized by some common vector @math and, otherwise, only depend on their own strategy choice. A strategy of a player can be interpreted as a vector of resource consumption and a joint strategy profile naturally leads to an aggregate consumption vector. We assume that resources can be priced, that is, the game is augmented by a price vector @math and players have quasi-linear overall costs/utilities, meaning that in addition to the original costs/utilities, a player needs to pay the corresponding price per consumed unit. We investigate the following question: for which aggregated consumption vectors @math can we find prices @math that induce an equilibrium realizing the targeted consumption profile? For answering this question, we develop a duality-based framework and derive a characterization of the existence of such @math and @math . We show that our characterization can help to unify parts of three largely independent streams in the literature -- tolls in transportation systems, Walrasian market equilibria and congestion control in communication networks. Besides reproving existing results, we establish novel existence results by drawing connections to polyhedral combinatorics and discrete convexity. | @cite_56 proposed to model congestion control by analyzing optimal solutions of a convex optimization problem, where an aggregated bandwidth utility is maximized subject to network capacity constraints. By dualizing the problem and then decomposing terms (as we do in this paper), it is shown that Lagrange multipliers correspond to equilibrium-enforcing congestion prices. For an overview of further related work in this area, we refer to the book by Srikant @cite_39 . Kelly and Vazirani @cite_78 drew connections between market equilibrium computation and the congestion control model of Kelly. @cite_24 also studied the convex programming formulation of Kelly and established connections to the Wardrop equilibrium model. The most obvious difference between these works and ours is that they assume convex strategy spaces and concave utility functions. Our framework allows adding integrality conditions or non-convexities to the model. | {
"cite_N": [
"@cite_24",
"@cite_39",
"@cite_78",
"@cite_56"
],
"mid": [
"2118007118",
"1572996156",
"60782194",
"2159715570"
],
"abstract": [
"In this paper we consider an integrated model for TCP IP protocols with multipath routing. The model combines a Network Utility Maximization for rate control based on end-to-end queuing delays, with a Markovian Traffic Equilibrium for routing based on total expected delays. We prove the existence of a unique equilibrium state which is characterized as the solution of an unconstrained strictly convex program. A distributed algorithm for solving this optimization problem is proposed, with a brief discussion of how it can be implemented by adapting the current Internet protocols.",
"Preface Introduction Resource Allocation Congestion Control: A Decentralized Solution Relationship to Current Internet Protocols Linear Analysis with Delay: The Single Link Case Linear Analysis with Delay: The Network Case Global Stability for a Single Link and Single Flow Stochastic Models and Their Deterministic Limits Connection-level Models Real-Time Sources and Distributed Admission Control Conclusions References Index",
"",
"This paper analyses the stability and fairness of two classes of rate control algorithm for communication networks. The algorithms provide natural generalisations to large-scale networks of simple additive increase multiplicative decrease schemes, and are shown to be stable about a system optimum characterised by a proportional fairness criterion. Stability is established by showing that, with an appropriate formulation of the overall optimisation problem, the network's implicit objective function provides a Lyapunov function for the dynamical system defined by the rate control algorithm. The network's optimisation problem may be cast in primal or dual form: this leads naturally to two classes of algorithm, which may be interpreted in terms of either congestion indication feedback signals or explicit rates based on shadow prices. Both classes of algorithm may be generalised to include routing control, and provide natural implementations of proportionally fair pricing."
]
} |
1907.01879 | 2955970513 | This work considers robot keypoint estimation on color images as a supervised machine learning task. We propose the use of probabilistically created renderings to overcome the lack of labeled real images. Rather than sampling from stationary distributions, our approach introduces a feedback mechanism that constantly adapts probability distributions according to current training progress. Initial results show that our approach achieves near-human-level accuracy on real images. Additionally, we demonstrate that feedback leads to fewer required training steps, while maintaining the same model quality on synthetic data sets. | In @cite_3 , a multi-task DCNN is proposed that learns to predict robot joint positions, robot type and segmentation masks from color input images. In contrast to this work, they train on pictures of real robots, collected using a calibrated color/depth sensor. The effort for creating these training samples is considerable, and the approach is limited to the depth range of the sensor used for data acquisition. The work most closely related to ours is that of @cite_6 , who propose a two-layer architecture that learns joint positions and instance segmentation masks from artificial color input images. We take up their idea of artificial data generation and formulate it in terms of a probabilistic model. This way, posterior inference can be exploited to accelerate model learning. | {
"cite_N": [
"@cite_6",
"@cite_3"
],
"mid": [
"2914991053",
"2914707534"
],
"abstract": [
"This paper considers the task of locating articulated poses of multiple robots in images. Our approach simultaneously infers the number of robots in a scene, identifies joint locations and estimates sparse depth maps around joint locations. The proposed method applies staged convolutional feature detectors to 2D image inputs and computes robot instance masks using a recurrent network architecture. In addition, regression maps of most likely joint locations in pixel coordinates together with depth information are computed. Compositing 3D robot joint kinematics is accomplished by applying masks to joint readout maps. Our end-to-end formulation is in contrast to previous work in which the composition of robot joints into kinematics is performed in a separate post-processing step. Despite the fact that our models are trained on artificial data, we demonstrate generalizability to real world images.",
"Collaborative robots are becoming more common on factory floors as well as regular environments, however, their safety still is not a fully solved issue. Collision detection does not always perform as expected and collision avoidance is still an active research area. Collision avoidance works well for fixed robot-camera setups, however, if they are shifted around, Eye-to-Hand calibration becomes invalid making it difficult to accurately run many of the existing collision avoidance algorithms. We approach the problem by presenting a stand-alone system capable of detecting the robot and estimating its position, including individual joints, by using a simple 2D colour image as an input, where no Eye-to-Hand calibration is needed. As an extension of previous work, a two-stage transfer learning approach is used to re-train a multi-objective convolutional neural network (CNN) to allow it to be used with heterogeneous robot arms. Our method is capable of detecting the robot in real-time and new robot types can be added by having significantly smaller training datasets compared to the requirements of a fully trained network. We present data collection approach, the structure of the multi-objective CNN, the two-stage transfer learning training and test results by using real robots from Universal Robots, Kuka, and Franka Emika. Eventually, we analyse possible application areas of our method together with the possible improvements."
]
} |
1907.01879 | 2955970513 | This work considers robot keypoint estimation on color images as a supervised machine learning task. We propose the use of probabilistically created renderings to overcome the lack of labeled real images. Rather than sampling from stationary distributions, our approach introduces a feedback mechanism that constantly adapts probability distributions according to current training progress. Initial results show that our approach achieves near-human-level accuracy on real images. Additionally, we demonstrate that feedback leads to fewer required training steps, while maintaining the same model quality on synthetic data sets. | From a data augmentation perspective, the family of Generative Adversarial Networks (GANs) @cite_4 provides a generic framework for artificial data generation from noisy input. However, application to our use case requires generated images to carry precise meta-information (e.g. joint positions). Such conditional GAN approaches @cite_2 are much harder to train and require a large amount of labeled input. Because we try to avoid tedious real data acquisition, we do not consider this approach for the remainder of this work. However, we do highlight the work of @cite_5 , who propose the use of GANs in combination with deterministic simulators to add the necessary levels of realism to images, while guaranteeing pixel-exact semantic context. | {
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_2"
],
"mid": [
"2940878519",
"2099471712",
"2963073614"
],
"abstract": [
"Deep Convolutional Neuronal Networks (DCNNs) are showing remarkable performance on many computer vision tasks. Due to their large parameter space, they require many labeled samples when trained in a supervised setting. The costs of annotating data manually can render the use of DCNNs infeasible. We present a novel framework called RenderGAN that can generate large amounts of realistic, labeled images by combining a 3D model and the Generative Adversarial Network framework. In our approach, image augmentations (e.g. lighting, background, and detail) are learned from unlabeled data such that the generated images are strikingly realistic while preserving the labels known from the 3D model. We apply the RenderGAN framework to generate images of barcode-like markers that are attached to honeybees. Training a DCNN on data generated by the RenderGAN yields considerably better performance than training it on various baselines.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Moreover, since the release of the pix2pix software associated with this paper, hundreds of twitter users have posted their own artistic experiments using our system. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without handengineering our loss functions either."
]
} |
1907.01847 | 2954431383 | We address the problem of spatio-temporal action detection in videos. Existing methods commonly either ignore temporal context in action recognition and localization, or lack the modelling of flexible shapes of action tubes. In this paper, we propose a two-stage action detector called Deformable Tube Network (DTN), which is composed of a Deformation Tube Proposal Network (DTPN) and a Deformable Tube Recognition Network (DTRN) similar to the Faster R-CNN architecture. In DTPN, a fast proposal linking algorithm (FTL) is introduced to connect region proposals across frames to generate multiple deformable action tube proposals. To perform action detection, we design a 3D convolution network with skip connections for tube classification and regression. Modelling action proposals as deformable tubes explicitly considers the shape of action tubes compared to 3D cuboids. Moreover, the 3D convolution based recognition network can learn temporal dynamics sufficiently for action detection. Our experimental results show that we significantly outperform the methods with 3D cuboids and obtain state-of-the-art results on both the UCF-Sports and AVA datasets. | We will introduce several previous works on action detection in this section. In addition, action detection is also closely related to object detection and action recognition. Much research is inspired by the methodologies of object detection @cite_41 @cite_15 @cite_46 @cite_8 @cite_40 and the advances in action recognition @cite_21 @cite_25 @cite_0 @cite_11 @cite_27 @cite_4 . Therefore, we will walk through these three directions. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_41",
"@cite_46",
"@cite_21",
"@cite_0",
"@cite_40",
"@cite_27",
"@cite_15",
"@cite_25",
"@cite_11"
],
"mid": [
"2963155035",
"2963287324",
"2168356304",
"2743473392",
"2074381234",
"2610718658",
"2884944276",
"2951183276",
"2613718673",
"2156303437",
"2764138706"
],
"abstract": [
"In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of the video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significantly gains in accuracy. Our empirical study leads to the design of a new spatiotemporal convolutional block \"R(2+1)D\" which produces CNNs that achieve results comparable or superior to the state-of-the-art on Sports-1M, Kinetics, UCF101, and HMDB51.",
"Most existing detection pipelines treat object proposals independently and predict bounding box locations and classification scores over them separately. However, the important semantic and spatial layout correlations among proposals are often ignored, which are actually useful for more accurate object detection. In this paper, we propose a new EM-like group recursive learning approach to iteratively refine object proposals by incorporating such context of surrounding proposals and provide an optimal spatial configuration of object detections. In addition, we propose to incorporate the weakly supervised object segmentation cues and region-based object detection into a multistage architecture in order to fully exploit the learned segmentation features for better object detection in an end-to-end way. The proposed architecture consists of three cascaded networks that, respectively, learn to perform weakly supervised object segmentation, object proposal generation, and recursive detection refinement. Combining the group recursive learning and the multistage architecture provides competitive mAPs of @math and @math on the PASCAL VOC2007 and VOC2012 datasets, respectively, which outperform many well-established baselines significantly.",
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.",
"The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL",
"Visual concept detection and action recognition are one of the most important tasks in content-based multimedia information retrieval (CBMIR) technology. It aims at annotating images using a vocabulary defined by a set of concepts of interest including scenes types (mountains, snow, etc.) or human actions (phoning, playing instrument). This paper describes our system in the ImageCLEF@ICPR10, Pascal VOC 08 Visual Concept Detection and Pascal VOC 10 Action Recognition Challenges. The proposed system ranked first in these large-scale tasks when evaluated independently by the organizers. The proposed system involves state-of-the-art local descriptor computation, vector quantization via clustering, structured scene or object representation via localized histograms of vector codes, similarity measure for kernel construction and classifier learning. The main novelty is the classifier-level and kernel-level fusion using Kernel Discriminant Analysis and Spectral Regression (SR-KDA) with RBF Chi-Squared kernels obtained from various image descriptors. The distinctiveness of the proposed method is also assessed experimentally using a video benchmark: the Mediamill Challenge along with benchmarks from ImageCLEF@ICPR10, Pascal VOC 10 and Pascal VOC 08. From the experimental results, it can be derived that the presented system consistently yields significant performance gains when compared with the state-of-the art methods. The other strong point is the introduction of SR-KDA in the classification stage where the time complexity scales linearly with respect to the number of concepts and the main computational complexity is independent of the number of categories.",
"Local features have been widely used in computer vision tasks, e.g., human action recognition, but it tends to be an extremely challenging task to deal with large-scale local features of high dimensionality with redundant information. In this paper, we propose a novel fully supervised local descriptor learning algorithm called discriminative embedding method based on the image-to-class distance (I2CDDE) to learn compact but highly discriminative local feature descriptors for more accurate and efficient action recognition. By leveraging the advantages of the I2C distance, the proposed I2CDDE incorporates class labels to enable fully supervised learning of local feature descriptors, which achieves highly discriminative but compact local descriptors. The objective of our I2CDDE is to minimize the I2C distances from samples to their corresponding classes while maximizing the I2C distances to the other classes in the low-dimensional space. To further improve the performance, we propose incorporating a manifold regularization based on the graph Laplacian into the objective function, which can enhance the smoothness of the embedding by extracting the local intrinsic geometrical structure. The proposed I2CDDE for the first time achieves fully supervised learning of local feature descriptors. It significantly improves the performance of I2C-based methods by increasing the discriminative ability of local features while greatly reducing the computational burden by dimensionality reduction to handle large-scale data. We apply the proposed I2CDDE algorithm to human action recognition on four widely used benchmark datasets. The results have shown that I2CDDE can significantly improve I2C-based classifiers and achieves state-of-the-art performance.",
"Compared to conventional saliency detection by handcrafted features, deep convolutional neural networks (CNNs) recently have been successfully applied to saliency detection field with superior performance on locating salient objects. However, due to repeated sub-sampling operations inside CNNs such as pooling and convolution, many CNN-based saliency models fail to maintain fine-grained spatial details and boundary structures of objects. To remedy this issue, this paper proposes a novel end-to-end deep learning-based refinement model named Refinet , which is based on fully convolutional network augmented with segmentation hypotheses. Intermediate saliency maps that are edge-aware are computed from segmentation-based pooling and then feed to a two-tier fully convolutional network for effective fusion and refinement, leading to more precise object details and boundaries. In addition, the resolution of feature maps in the proposed Refinet is carefully designed to guarantee sufficient boundary clarity of the refined saliency output. Compared to widely employed dense conditional random field, Refinet is able to enhance coarse saliency maps generated by existing models with more accurate spatial details, and its effectiveness is demonstrated by experimental results on seven benchmark datasets.",
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.",
"We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multitask learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.",
"Semantic parts have shown a powerful discriminative capacity for action recognition. However, many existing methods select parts according to predefined heuristic rules, which may cause the correlation among parts to be lost, or do not appropriately consider the cluttered candidate part space, which may result in weak generalizability of the resulting action labels. Therefore, better consideration of the correlation among parts and refinement of the candidate space will lead to a more discriminative action representation. This paper achieves improved performance by more elegantly addressing these two factors. First, considering the cluttered nature of the candidate space, we propose a recursive part elimination strategy for iterative refinement of the candidate parts. In each iteration, we eliminate the parts with the lowest weights, which are deemed to be noise. Second, we measure the discriminative capabilities of the candidates and select the top-ranked parts by applying a maximum margin model, which can alleviate overfitting while simultaneously improving generalizability and correlation extraction. Finally, using the selected parts, we extract mid-level features. We report experiments conducted on four datasets (KTH, Olympic Sports, UCF50, and HMDB51). The proposed method can achieve significant improvements compared with other recent methods, including a lower computational cost, a faster speed, and higher accuracy."
]
} |
1907.01847 | 2954431383 | We address the problem of spatio-temporal action detection in videos. Existing methods commonly either ignore temporal context in action recognition and localization, or lack the modelling of flexible shapes of action tubes. In this paper, we propose a two-stage action detector called Deformable Tube Network (DTN), which is composed of a Deformation Tube Proposal Network (DTPN) and a Deformable Tube Recognition Network (DTRN) similar to the Faster R-CNN architecture. In DTPN, a fast proposal linking algorithm (FTL) is introduced to connect region proposals across frames to generate multiple deformable action tube proposals. To perform action detection, we design a 3D convolution network with skip connections for tube classification and regression. Modelling action proposals as deformable tubes explicitly considers the shape of action tubes compared to 3D cuboids. Moreover, the 3D convolution based recognition network can learn temporal dynamics sufficiently for action detection. Our experimental results show that we significantly outperform the methods with 3D cuboids and obtain the state-of-the-art results on both UCF-Sports and AVA datasets. | Compared to action recognition, action detection requires accurate boundary regression. A natural way is to follow the standard sliding window strategy in object detection @cite_32 @cite_3 @cite_18 . The difference mainly lies in the feature selection and the method used to generate candidate action proposals. For example, Rohrbach et al. @cite_18 generated multiple candidate segments by sliding windows and performed recognition over dense trajectories and human pose features. Lan et al. @cite_48 made use of figure-centric visual word features to represent actions. | {
"cite_N": [
"@cite_48",
"@cite_18",
"@cite_32",
"@cite_3"
],
"mid": [
"",
"2156798932",
"2129666410",
"2097342496"
],
"abstract": [
"",
"Activity recognition has shown impressive progress in recent years. However, the challenges of detecting fine-grained activities and understanding how they are combined into composite activities have been largely overlooked. In this work we approach both tasks and present a dataset which provides detailed annotations to address them. The first challenge is to detect fine-grained activities, which are defined by low inter-class variability and are typically characterized by fine-grained body motions. We explore how human pose and hands can help to approach this challenge by comparing two pose-based and two hand-centric features with state-of-the-art holistic features. To attack the second challenge, recognizing composite activities, we leverage the fact that these activities are compositional and that the essential components of the activities can be obtained from textual descriptions or scripts. We show the benefits of our hand-centric approach for fine-grained activity classification and detection. For composite activity recognition we find that decomposition into attributes allows sharing information across composites and is essential to attack this hard task. Using script data we can recognize novel composites without having training data for them.",
"We address recognition and localization of human actions in realistic scenarios. In contrast to the previous work studying human actions in controlled settings, here we train and test algorithms on real movies with substantial variation of actions in terms of subject appearance, motion, surrounding scenes, viewing angles and spatio-temporal extents. We introduce a new annotated human action dataset and use it to evaluate several existing methods. We in particular focus on boosted space-time window classifiers and introduce \"keyframe priming\" that combines discriminative models of human motion and shape within an action. Keyframe priming is shown to significantly improve the performance of action detection. We present detection results for the action class \"drinking\" evaluated on two episodes of the movie \"Coffee and Cigarettes\".",
"In recent years, many research works have been carried out to recognize human actions from video clips. To learn an effective action classifier, most of the previous approaches rely on enough training labels. When being required to recognize the action in a different dataset, these approaches have to re-train the model using new labels. However, labeling video sequences is a very tedious and time-consuming task, especially when detailed spatial locations and time durations are required. In this paper, we propose an adaptive action detection approach which reduces the requirement of training labels and is able to handle the task of cross-dataset action detection with few or no extra training labels. Our approach combines model adaptation and action detection into a Maximum a Posterior (MAP) estimation framework, which explores the spatial-temporal coherence of actions and makes good use of the prior information which can be obtained without supervision. Our approach obtains state-of-the-art results on KTH action dataset using only 50 of the training labels in tradition approaches. Furthermore, we show that our approach is effective for the cross-dataset detection which adapts the model trained on KTH to two other challenging datasets1."
]
} |
1907.01847 | 2954431383 | We address the problem of spatio-temporal action detection in videos. Existing methods commonly either ignore temporal context in action recognition and localization, or lack the modelling of flexible shapes of action tubes. In this paper, we propose a two-stage action detector called Deformable Tube Network (DTN), which is composed of a Deformation Tube Proposal Network (DTPN) and a Deformable Tube Recognition Network (DTRN) similar to the Faster R-CNN architecture. In DTPN, a fast proposal linking algorithm (FTL) is introduced to connect region proposals across frames to generate multiple deformable action tube proposals. To perform action detection, we design a 3D convolution network with skip connections for tube classification and regression. Modelling action proposals as deformable tubes explicitly considers the shape of action tubes compared to 3D cuboids. Moreover, the 3D convolution based recognition network can learn temporal dynamics sufficiently for action detection. Our experimental results show that we significantly outperform the methods with 3D cuboids and obtain the state-of-the-art results on both UCF-Sports and AVA datasets. | As the performance of object detection improved, most recent approaches turned to linking frame-level object detections to form action tubes. Based on visual and motion cues from a two-stream network, Gkioxari and Malik @cite_39 classified region proposals generated by selective search, which are then linked into action tubes over time for temporal consistency. Weinzaepfel et al. @cite_17 proposed to track high-scoring proposals using a tracking-by-detection approach. Saha et al. @cite_34 fused appearance and motion detection boxes based on estimated action scores and their spatial overlaps with each other, and constructed spatio-temporal action tubes with a two-pass dynamic programming method. Peng and Schmid @cite_1 replaced selective search with a region proposal network and embedded a multi-region scheme into their two-stream classification network. Singh et al. @cite_29 introduced a real-time action localization method with an SSD object detector and an online linking algorithm. All these methods rely heavily on frame-level human detections. However, distinguishing actions based on single frames is difficult without considering temporal dynamics. | {
"cite_N": [
"@cite_29",
"@cite_1",
"@cite_39",
"@cite_34",
"@cite_17"
],
"mid": [
"2589264020",
"",
"1923332106",
"2484328966",
"1797109199"
],
"abstract": [
"We present a deep-learning framework for real-time multiple spatio-temporal (S T) action localisation, classification and early prediction. Current state-of-the-art approaches work offline and are too slow to be useful in real- world settings. To overcome their limitations we introduce two major developments. Firstly, we adopt real-time SSD (Single Shot MultiBox Detector) convolutional neural networks to regress and classify detection boxes in each video frame potentially containing an action of interest. Secondly, we design an original and efficient online algorithm to incrementally construct and label action tubes' from the SSD frame level detections. As a result, our system is not only capable of performing S T detection in real time, but can also perform early action prediction in an online fashion. We achieve new state-of-the-art results in both S T action localisation and early action prediction on the challenging UCF101-24 and J-HMDB-21 benchmarks, even when compared to the top offline competitors. To the best of our knowledge, ours is the first real-time (up to 40fps) system able to perform online S T action localisation and early action prediction on the untrimmed videos of UCF101-24.",
"",
"We address the problem of action detection in videos. Driven by the latest progress in object detection from 2D images, we build action models using rich feature hierarchies derived from shape and kinematic cues. We incorporate appearance and motion in two ways. First, starting from image region proposals we select those that are motion salient and thus are more likely to contain the action. This leads to a significant reduction in the number of regions being processed and allows for faster computations. Second, we extract spatio-temporal feature representations to build strong classifiers using Convolutional Neural Networks. We link our predictions to produce detections consistent in time, which we call action tubes. We show that our approach outperforms other techniques in the task of action detection.",
"In this work, we propose an approach to the spatiotemporal localisation (detection) and classification of multiple concurrent actions within temporally untrimmed videos. Our framework is composed of three stages. In stage 1, appearance and motion detection networks are employed to localise and score actions from colour images and optical flow. In stage 2, the appearance network detections are boosted by combining them with the motion detection scores, in proportion to their respective spatial overlap. In stage 3, sequences of detection boxes most likely to be associated with a single action instance, called action tubes, are constructed by solving two energy maximisation problems via dynamic programming. While in the first pass, action paths spanning the whole video are built by linking detection boxes over time using their class-specific scores and their spatial overlap, in the second pass, temporal trimming is performed by ensuring label consistency for all constituting detection boxes. We demonstrate the performance of our algorithm on the challenging UCF101, J-HMDB-21 and LIRIS-HARL datasets, achieving new state-of-the-art results across the board and significantly increasing detection speed at test time. We achieve a huge leap forward in action detection performance and report a 20 and 11 gain in mAP (mean average precision) on UCF-101 and J-HMDB-21 datasets respectively when compared to the state-of-the-art.",
"We propose an effective approach for spatio-temporal action localization in realistic videos. The approach first detects proposals at the frame-level and scores them with a combination of static and motion CNN features. It then tracks high-scoring proposals throughout the video using a tracking-by-detection approach. Our tracker relies simultaneously on instance-level and class-level detectors. The tracks are scored using a spatio-temporal motion histogram, a descriptor at the track level, in combination with the CNN features. Finally, we perform temporal localization of the action using a sliding-window approach at the track level. We present experimental results for spatio-temporal localization on the UCF-Sports, J-HMDB and UCF-101 action localization datasets, where our approach outperforms the state of the art with a margin of 15 , 7 and 12 respectively in mAP."
]
} |
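The record above surveys methods that chain per-frame detection boxes into action tubes using class scores and spatial overlap (e.g., the two-pass dynamic programming of Saha et al. and the online linking of Singh et al.). The snippet below is only an illustrative greedy sketch of that shared linking idea, not any cited author's actual algorithm; the input format (`detections[t]` as a list of `(box, score)` pairs per frame), the weight `lambda_overlap`, and the `min_iou` threshold are assumptions made for the example.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def link_tubes(detections, lambda_overlap=1.0, min_iou=0.1):
    """Greedily chain per-frame detections into action tubes.

    detections: list over frames; detections[t] is a list of (box, score).
    Returns a list of tubes, each a list of (frame_index, box, score).
    """
    tubes = [[(0, box, score)] for box, score in detections[0]]
    for t in range(1, len(detections)):
        for tube in tubes:
            _, last_box, _ = tube[-1]
            # Score each candidate by its confidence plus weighted overlap with
            # the last box of the tube, as linking-based detectors typically do.
            best, best_val = None, None
            for box, score in detections[t]:
                ov = iou(last_box, box)
                if ov < min_iou:
                    continue
                val = score + lambda_overlap * ov
                if best_val is None or val > best_val:
                    best, best_val = (t, box, score), val
            if best is not None:
                tube.append(best)
    return tubes
```

A complete linker would also start new tubes for unmatched detections, terminate tubes that find no continuation for several frames, and possibly refine the assignment with dynamic programming; the sketch keeps only the chaining step that the cited methods have in common.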
1907.01847 | 2954431383 | We address the problem of spatio-temporal action detection in videos. Existing methods commonly either ignore temporal context in action recognition and localization, or lack the modelling of flexible shapes of action tubes. In this paper, we propose a two-stage action detector called Deformable Tube Network (DTN), which is composed of a Deformation Tube Proposal Network (DTPN) and a Deformable Tube Recognition Network (DTRN) similar to the Faster R-CNN architecture. In DTPN, a fast proposal linking algorithm (FTL) is introduced to connect region proposals across frames to generate multiple deformable action tube proposals. To perform action detection, we design a 3D convolution network with skip connections for tube classification and regression. Modelling action proposals as deformable tubes explicitly considers the shape of action tubes compared to 3D cuboids. Moreover, the 3D convolution based recognition network can learn temporal dynamics sufficiently for action detection. Our experimental results show that we significantly outperform the methods with 3D cuboids and obtain the state-of-the-art results on both UCF-Sports and AVA datasets. | To further include temporal dynamics for action recognition and localization, Kalogeiton et al. @cite_37 came up with anchor cuboids to generate action proposals directly, which encode enough spatio-temporal information for action recognition. Hou et al. @cite_43 further generalized the Region-of-Interest (RoI) pooling layer to a 3D Tube-of-Interest (ToI) pooling layer. Although these approaches built on anchor cuboids offer an opportunity to integrate temporal dynamics for action recognition compared with those connecting frame-level human detections, action tubes are too deformable to be modelled by regular 3D volumes in practice. In contrast, we propose a novel detection network, which generates deformable action tube candidates with a fast linking algorithm and performs recognition and regression over these proposals. RTPR @cite_16 is probably the most similar to ours in spirit, as it also links region proposals into action tubes and performs action recognition with an LSTM. However, in their work, action localizations are regressed recurrently without considering global temporal information to enhance detections in each frame. Instead, we design a fully convolutional neural network to perform recognition and regression as a whole over the generated deformable action tubes. | {
"cite_N": [
"@cite_43",
"@cite_37",
"@cite_16"
],
"mid": [
"2962790054",
"2611596598",
"2895738954"
],
"abstract": [
"Deep learning has been demonstrated to achieve excellent results for image classification and object detection. However, the impact of deep learning on video analysis has been limited due to complexity of video data and lack of annotations. Previous convolutional neural networks (CNN) based video action detection approaches usually consist of two major steps: frame-level action proposal generation and association of proposals across frames. Also, most of these methods employ two-stream CNN framework to handle spatial and temporal feature separately. In this paper, we propose an end-to-end deep network called Tube Convolutional Neural Network (T-CNN) for action detection in videos. The proposed architecture is a unified deep network that is able to recognize and localize action based on 3D convolution features. A video is first divided into equal length clips and next for each clip a set of tube proposals are generated based on 3D Convolutional Network (ConvNet) features. Finally, the tube proposals of different clips are linked together employing network flow and spatio-temporal action detection is performed using these linked video proposals. Extensive experiments on several video datasets demonstrate the superior performance of T-CNN for classifying and localizing actions in both trimmed and untrimmed videos compared to state-of-the-arts.",
"Current state-of-the-art approaches for spatio-temporal action localization rely on detections at the frame level that are then linked or tracked across time. In this paper, we leverage the temporal continuity of videos instead of operating at the frame level. We propose the ACtion Tubelet detector (ACT-detector) that takes as input a sequence of frames and outputs tubelets, i.e., sequences of bounding boxes with associated scores. The same way state-of-the-art object detectors rely on anchor boxes, our ACT-detector is based on anchor cuboids. We build upon the SSD framework [19]. Convolutional features are extracted for each frame, while scores and regressions are based on the temporal stacking of these features, thus exploiting information from a sequence. Our experimental results show that leveraging sequences offrantes significantly improves detection performance over using individual frames. The gain of our tubelet detector can be explained by both more accurate scores and more precise localization. Our ACT-detector outperforms the state-of-the-art methods for frame-mAP and video-mAP on the J-HMDB [12] and UCF-101 [31] datasets, in particular at high overlap thresholds.",
"Detecting actions in videos is a challenging task as video is an information intensive media with complex variations. Existing approaches predominantly generate action proposals for each individual frame or fixed-length clip independently, while overlooking temporal context across them. Such temporal contextual relations are vital for action detection as an action is by nature a sequence of movements. This motivates us to leverage the localized action proposals in previous frames when determining action regions in the current one. Specifically, we present a novel deep architecture called Recurrent Tubelet Proposal and Recognition (RTPR) networks to incorporate temporal context for action detection. The proposed RTPR consists of two correlated networks, i.e., Recurrent Tubelet Proposal (RTP) networks and Recurrent Tubelet Recognition (RTR) networks. The RTP initializes action proposals of the start frame through a Region Proposal Network and then estimates the movements of proposals in next frame in a recurrent manner. The action proposals of different frames are linked to form the tubelet proposals. The RTR capitalizes on a multi-channel architecture, where in each channel, a tubelet proposal is fed into a CNN plus LSTM to recurrently recognize action in the tubelet. We conduct extensive experiments on four benchmark datasets and demonstrate superior results over state-of-the-art methods. More remarkably, we obtain mAP of 98.6 , 81.3 , 77.9 and 22.3 with gains of 2.9 , 4.3 , 0.7 and 3.9 over the best competitors on UCF-Sports, J-HMDB, UCF-101 and AVA, respectively."
]
} |
1907.02090 | 2954960065 | This paper investigates the application of machine learning (ML) techniques to enable intelligent systems to learn multi-party turn-taking models from dialogue logs. The specific ML task consists of determining who speaks next, after each utterance of a dialogue, given who has spoken and what was said in the previous utterances. With this goal, this paper presents comparisons of the accuracy of different ML techniques such as Maximum Likelihood Estimation (MLE), Support Vector Machines (SVM), and Convolutional Neural Networks (CNN) architectures, with and without utterance data. We present three corpora: the first with dialogues from an American TV situated comedy (chit-chat), the second with logs from a financial advice multi-bot system and the third with a corpus created from the Multi-Domain Wizard-of-Oz dataset (both are topic-oriented). The results show: (i) the size of the corpus has a very positive impact on the accuracy for the content-based deep learning approaches and those models perform best in the larger datasets; and (ii) if the dialogue dataset is small and topic-oriented (but with few topics), it is sufficient to use an agent-only MLE or SVM models, although slightly higher accuracies can be achieved with the use of the content of the utterances with a CNN model. | On the other hand, several ML-based end-to-end data-driven dialogue systems have been built and evaluated @cite_9 , including some which consider multi-party dialogues, albeit disentangling them into dyadic dialogues @cite_18 . Further studies have also been conducted in order to build participant social role models @cite_0 . | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_18"
],
"mid": [
"154079225",
"2810821963",
"2115615127"
],
"abstract": [
"In this paper, we describe our experience with collecting and creating an annotated corpus of multi-party online conversations in a chat-room environment. This effort is part of a larger project to develop computational models of social phenomena such as agenda control, influence, and leadership in on-line interactions. Such models will help capturing the dialogue dynamics that are essential for developing, among others, realistic human-machine dialogue systems, including autonomous virtual chat agents. In this paper we describe data collection method used and the characteristics of the initial dataset of English chat. We have devised a multi-tiered collection process in which the subjects start from simple, free-flowing conversations and progress towards more complex and structured interactions. In this paper, we report on the first two stages of this process, which were recently completed. The third, large-scale collection effort is currently being conducted. All English dialogue has been annotated at four levels: communication links, dialogue acts, local topics and meso-topics.",
"During the past decade, several areas of speech and language understanding have witnessed substantial breakthroughs from the use of data-driven models. In the area of dialogue systems, the trend is less obvious, and most practical systems are still built through significant engineering and expert knowledge. Nevertheless, several recent results suggest that data-driven approaches are feasible and quite promising. To facilitate research in this area, we have carried out a wide survey of publicly available datasets suitable for data-driven learning of dialogue systems. We discuss important characteristics of these datasets, how they can be used to learn diverse dialogue strategies, and their other potential uses. We also examine methods for transfer learning between datasets and the use of external knowledge. Finally, we discuss appropriate choice of evaluation metrics for the learning objective.",
"When multiple conversations occur simultaneously, a listener must decide which conversation each utterance is part of in order to interpret and respond to it appropriately. We refer to this task as disentanglement. We present a corpus of Internet Relay Chat (IRC) dialogue in which the various conversations have been manually disentangled, and evaluate annotator reliability. This is, to our knowledge, the first such corpus for internet chat. We propose a graph-theoretic model for disentanglement, using discourse-based features which have not been previously applied to this task. The model’s predicted disentanglements are highly correlated with manual annotations."
]
} |
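The agent-only MLE baseline mentioned in the record above amounts to estimating, from dialogue logs, how often each participant speaks after each other participant and predicting the most likely successor. A minimal bigram (previous-speaker) version is sketched below; the toy corpus, the speaker labels, and the add-alpha smoothing are illustrative assumptions, not details taken from the paper.

```python
from collections import Counter, defaultdict


def fit_next_speaker_mle(dialogues):
    """dialogues: iterable of speaker sequences, e.g. [["A", "B", "A"], ...].
    Returns transition counts[prev][next] for maximum likelihood estimation."""
    counts = defaultdict(Counter)
    for speakers in dialogues:
        for prev, nxt in zip(speakers, speakers[1:]):
            counts[prev][nxt] += 1
    return counts


def predict_next_speaker(counts, prev, alpha=1.0):
    """Predict the most likely next speaker after `prev`, with add-alpha smoothing."""
    vocab = set(counts) | {s for c in counts.values() for s in c}
    scores = {s: counts[prev][s] + alpha for s in vocab}
    return max(scores, key=scores.get)


# Example usage on a toy, made-up log.
logs = [["agent", "user", "agent", "broker_bot"], ["user", "agent", "user"]]
model = fit_next_speaker_mle(logs)
print(predict_next_speaker(model, "user"))  # prints the most frequent follower of "user"
```

A higher-order variant would condition on the last k speakers instead of only the previous one, which is closer to the history-based models compared against the content-based CNN models in the record.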
1907.01989 | 2955715866 | On-device inference of machine learning models for mobile phones is desirable due to its lower latency and increased privacy. Running such a compute-intensive task solely on the mobile CPU, however, can be difficult due to limited computing power, thermal constraints, and energy consumption. App developers and researchers have begun exploiting hardware accelerators to overcome these challenges. Recently, device manufacturers are adding neural processing units into high-end phones for on-device inference, but these account for only a small fraction of hand-held devices. In this paper, we present how we leverage the mobile GPU, a ubiquitous hardware accelerator on virtually every phone, to run inference of deep neural networks in real-time for both Android and iOS devices. By describing our architecture, we also discuss how to design networks that are mobile GPU-friendly. Our state-of-the-art mobile GPU inference engine is integrated into the open-source project TensorFlow Lite and publicly available at this https URL. | Neural network researchers have focused on optimizing their network architectures explicitly for processing on-device in various domains such as image classification @cite_7 @cite_10 , object localization @cite_13 , and image enhancements @cite_8 @cite_0 . Many of these techniques involve reducing the model size by re-designing the network architecture and adding pre- post-training quantization of weights. With these, one can achieve faster computation and smaller memory footprint, leading to reduced inference latency at the cost of slightly degraded model accuracy. MorphNet @cite_20 takes a unique path of reducing the number of floating point operations per second which is optimized during training of the model. Our work is complementary to these efforts and instead focuses on optimizing the inference engine that runs the neural network rather than the model or training. | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_10",
"@cite_0",
"@cite_13",
"@cite_20"
],
"mid": [
"2612445135",
"2607202125",
"2963163009",
"2895518292",
"2557728737",
"2964217527"
],
"abstract": [
"We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization.",
"Despite a rapid rise in the quality of built-in smartphone cameras, their physical limitations – small sensor size, compact lenses and the lack of specific hardware, – impede them to achieve the quality results of DSLR cameras. In this work we present an end-to-end deep learning approach that bridges this gap by translating ordinary photos into DSLR-quality images. We propose learning the translation function using a residual convolutional neural network that improves both color rendition and image sharpness. Since the standard mean squared loss is not well suited for measuring perceptual image quality, we introduce a composite perceptual error function that combines content, color and texture losses. The first two losses are defined analytically, while the texture loss is learned in an adversarial fashion. We also present DPED, a large-scale dataset that consists of real photos captured from three different phones and one high-end reflex camera. Our quantitative and qualitative assessments reveal that the enhanced image quality is comparable to that of DSLR-taken photos, while the methodology is generalized to any type of digital camera.",
"In this paper we describe a new mobile architecture, MobileNetV2, that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes. We also describe efficient ways of applying these mobile models to object detection in a novel framework we call SSDLite. Additionally, we demonstrate how to build mobile semantic segmentation models through a reduced form of DeepLabv3 which we call Mobile DeepLabv3. is based on an inverted residual structure where the shortcut connections are between the thin bottleneck layers. The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. Additionally, we find that it is important to remove non-linearities in the narrow layers in order to maintain representational power. We demonstrate that this improves performance and provide an intuition that led to this design. Finally, our approach allows decoupling of the input output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. We measure our performance on ImageNet [1] classification, COCO object detection [2], VOC image segmentation [3]. We evaluate the trade-offs between accuracy, and number of operations measured by multiply-adds (MAdd), as well as actual latency, and the number of parameters.",
"This paper reviews the first challenge on efficient perceptual image enhancement with the focus on deploying deep learning models on smartphones. The challenge consisted of two tracks. In the first one, participants were solving the classical image super-resolution problem with a bicubic downscaling factor of 4. The second track was aimed at real-world photo enhancement, and the goal was to map low-quality photos from the iPhone 3GS device to the same photos captured with a DSLR camera. The target metric used in this challenge combined the runtime, PSNR scores and solutions’ perceptual results measured in the user study. To ensure the efficiency of the submitted models, we additionally measured their runtime and memory requirements on Android smartphones. The proposed solutions significantly improved baseline results defining the state-of-the-art for image enhancement on smartphones.",
"The goal of this paper is to serve as a guide for selecting a detection architecture that achieves the right speed memory accuracy balance for a given application and platform. To this end, we investigate various ways to trade accuracy for speed and memory usage in modern convolutional object detection systems. A number of successful systems have been proposed in recent years, but apples-toapples comparisons are difficult due to different base feature extractors (e.g., VGG, Residual Networks), different default image resolutions, as well as different hardware and software platforms. We present a unified implementation of the Faster R-CNN [30], R-FCN [6] and SSD [25] systems, which we view as meta-architectures and trace out the speed accuracy trade-off curve created by using alternative feature extractors and varying other critical parameters such as image size within each of these meta-architectures. On one extreme end of this spectrum where speed and memory are critical, we present a detector that achieves real time speeds and can be deployed on a mobile device. On the opposite end in which accuracy is critical, we present a detector that achieves state-of-the-art performance measured on the COCO detection task.",
"We present MorphNet, an approach to automate the design of neural network structures. MorphNet iteratively shrinks and expands a network, shrinking via a resource-weighted sparsifying regularizer on activations and expanding via a uniform multiplicative factor on all layers. In contrast to previous approaches, our method is scalable to large networks, adaptable to specific resource constraints (e.g. the number of floating-point operations per inference), and capable of increasing the network's performance. When applied to standard network architectures on a wide variety of datasets, our approach discovers novel structures in each domain, obtaining higher performance while respecting the resource constraint."
]
} |
1907.01849 | 2955291764 | The diffusion strategy for distributed learning from streaming data employs local stochastic gradient updates along with exchange of iterates over neighborhoods. In Part I [2] of this work we established that agents cluster around a network centroid and proceeded to study the dynamics of this point. We established expected descent in non-convex environments in the large-gradient regime and introduced a short-term model to examine the dynamics over finite-time horizons. Using this model, we establish in this work that the diffusion strategy is able to escape from strict saddle-points in O(1 @math ) iterations; it is also able to return approximately second-order stationary points in a polynomial number of iterations. Relative to prior works on the polynomial escape from saddle-points, most of which focus on centralized perturbed or stochastic gradient descent, our approach requires less restrictive conditions on the gradient noise process. | Motivated by these considerations, in this work, we focus on implementations that employ stochastic gradient approximations and constant step-sizes. This is driven by the fact that computation of the exact gradients \( \nabla J_k(w) \) is generally infeasible in practice because (a) data may be streaming in, making it impossible to compute \( \mathbb{E}_{x_k} Q_k(w; x_k) \) in the absence of knowledge about the distribution of the data or (b) the data set, while available as a batch, may be so large that efficient computation of the full gradient is infeasible. As such, the exact gradient will need to be replaced by an approximate gradient, which ends up introducing in a natural manner some form of noise into the operation of the algorithm; this noise is the difference between the true gradient and its approximation. The gradient noise seeps into the operation of the algorithm continually and becomes coupled with the evolution of the iterates, resulting in perturbations that are neither identically nor independently distributed over time. For instance, the presence of the gradient noise process complicates the dynamics of the iterate evolution relative to the centralized recursions considered in @cite_24 . | {
"cite_N": [
"@cite_24"
],
"mid": [
"1697075315"
],
"abstract": [
"We analyze stochastic gradient descent for optimizing non-convex functions. In many cases for non-convex functions the goal is to find a reasonable local minimum, and the main concern is that gradient updates are trapped in saddle points. In this paper we identify strict saddle property for non-convex problem that allows for efficient optimization. Using this property we show that stochastic gradient descent converges to a local minimum in a polynomial number of iterations. To the best of our knowledge this is the first work that gives global convergence guarantees for stochastic gradient descent on non-convex functions with exponentially many local minima and saddle points. Our analysis can be applied to orthogonal tensor decomposition, which is widely used in learning a rich class of latent variable models. We propose a new optimization formulation for the tensor decomposition problem that has strict saddle property. As a result we get the first online algorithm for orthogonal tensor decomposition with global convergence guarantee."
]
} |
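The gradient noise discussed in the record above is the gap between the gradient approximation computed from streamed samples and the exact gradient of the risk. The toy quadratic problem below makes that gap explicit; the specific risk, data distributions, and step-size are assumptions chosen only so that the exact gradient is available in closed form, and this single-agent sketch is not the paper's distributed diffusion algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
w = np.zeros(5)
mu = 0.05            # constant step-size
noise_norms = []

# Toy quadratic risk J(w) = E (x^T w - y)^2 with x ~ N(0, I) and y = x^T w_true + v.
# Because E[x x^T] = I and v is zero-mean, the exact gradient is 2 (w - w_true).
for _ in range(2000):
    x = rng.normal(size=5)
    y = x @ w_true + 0.1 * rng.normal()
    stochastic_grad = 2.0 * x * (x @ w - y)        # approximation from one streamed sample
    exact_grad = 2.0 * (w - w_true)                # true gradient, known only in this toy case
    noise_norms.append(np.linalg.norm(stochastic_grad - exact_grad))  # the gradient noise
    w -= mu * stochastic_grad                      # stochastic-gradient update

print(np.linalg.norm(w - w_true))   # iterates end up in a small neighbourhood of the minimizer
print(np.mean(noise_norms))         # the noise does not vanish; it persists along the run
```

The second printout illustrates the point made in the record: the perturbation is injected continually and depends on the current iterate, which is why it cannot be treated as an i.i.d. disturbance.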
1907.01984 | 2954872192 | Recent work in decentralized, schedule-driven traffic control has demonstrated the ability to improve the efficiency of traffic flow in complex urban road networks. In this approach, a scheduling agent is associated with each intersection. Each agent senses the traffic approaching its intersection and in real-time constructs a schedule that minimizes the cumulative wait time of vehicles approaching the intersection over the current look-ahead horizon. In this paper, we propose a cooperative algorithm that utilizes both connected and autonomous vehicles (CAV) and schedule-driven traffic control to create better traffic flow in the city. The algorithm enables an intersection scheduling agent to adjust the arrival time of an approaching platoon through use of wireless communication to control the velocity of vehicles. The sequence of approaching platoons is thus shifted toward a new shape that has smaller cumulative delay. We demonstrate how this algorithm outperforms the original approach in a real-time traffic signal control problem. | Traditionally, there are three general approaches to control traffic signal: a) fixed timing; b) actuated; and c) adaptive. The earliest implementations are based on a fixed timing method optimized using historical traffic data offline. The later advancements have used actuated or adaptive signals. Then, if all cars are equipped with wireless communication technologies, e.g., Dedicated Short Range Communications (DSRC), to communicate with a centralized infrastructure, we can optimize the traffic flow by ordering the phases of traffic signal more efficiently. In @cite_15 @cite_2 information from equipped vehicles is used to determine demands and optimize the cycle length and green splits of a traffic signal once every cycle. In @cite_8 the presence of platoons is detected using V2I communication and a mixed integer non-linear program is solved to produce the optimal phasing sequence. However, this approach does not scale well for generating long horizon plans and does not deal with uncertainty of traffic states. | {
"cite_N": [
"@cite_8",
"@cite_15",
"@cite_2"
],
"mid": [
"2514358729",
"2161271108",
"1987437394"
],
"abstract": [
"A unified platoon-based mathematical formulation called PAMSCOD is presented to perform arterial (network) traffic signal control while considering multiple travel modes in a vehicle-to-infrastructure communications environment. First, a headway-based platoon recognition algorithm is developed to identify pseudo-platoons given probe vehicles’ online information. It is assumed that passenger vehicles constitute a significant majority of the vehicles in the network. This algorithm identifies existing queues and significant platoons approaching each intersection. Second, a mixed-integer linear program (MILP) is solved to determine future optimal signal plans based on the current traffic controller status, online platoon data and priority requests from special vehicles, such as transit buses. Deviating from the traditional common network cycle length, PAMSCOD aims to provide multi-modal dynamical progression (MDP) on the arterial based on the probe information. Microscopic simulation using VISSIM shows that PAMSCOD can easily handle two common traffic modes, transit buses and automobiles, and significantly reduce delays for both modes under both non-saturated and oversaturated traffic conditions as compared to traditional state-of-practice coordinated-actuated signal control with timings optimized by SYNCHRO.",
"A novel concept for a decentralized adaptive traffic signal control in urban networks using in future available vehicle to infratructure (V2I) communication data is presented. The phase-based strategy takes advantage of the improved detection data and optimizes each time interval of 5 seconds the phase sequence in order to reduce the total queue length within a forecast horizon of 20 seconds. For optimization the methods of dynamic programming and complete enumeration are used. The methods are embedded in the simulation environment of the microscopic traffic simulator AIMSUN NG. The market penetration level is the critical factor that impacts the quality of the new signal control. Hence, various penetration levels are modelled. For reference TRANSYT-7F is used.",
"The operation of traffic signals is currently limited by the data available from traditional point sensors. Point detectors can provide only limited vehicle information at a fixed location. The most advanced adaptive control strategies are often not implemented in the field because of their operational complexity and high-resolution detection requirements. However, a new initiative known as connected vehicles allows the wireless transmission of the positions, headings, and speeds of vehicles for use by the traffic controller. A new traffic control algorithm, the predictive microscopic simulation algorithm, which uses these new, more robust data, was developed. The decentralized, fully adaptive traffic control algorithm uses a rolling-horizon strategy in which the phasing is chosen to optimize an objective function over a 15-s period in the future. The objective function uses either delay only or a combination of delay, stops, and decelerations. To measure the objective function, the algorithm uses a micro..."
]
} |
1907.01984 | 2954872192 | Recent work in decentralized, schedule-driven traffic control has demonstrated the ability to improve the efficiency of traffic flow in complex urban road networks. In this approach, a scheduling agent is associated with each intersection. Each agent senses the traffic approaching its intersection and in real-time constructs a schedule that minimizes the cumulative wait time of vehicles approaching the intersection over the current look-ahead horizon. In this paper, we propose a cooperative algorithm that utilizes both connected and autonomous vehicles (CAV) and schedule-driven traffic control to create better traffic flow in the city. The algorithm enables an intersection scheduling agent to adjust the arrival time of an approaching platoon through use of wireless communication to control the velocity of vehicles. The sequence of approaching platoons is thus shifted toward a new shape that has smaller cumulative delay. We demonstrate how this algorithm outperforms the original approach in a real-time traffic signal control problem. | Taking advantage of autonomously controlled vehicles and of information from connected vehicles for intersection control has been investigated in several studies. The trajectory of fully autonomous vehicles can be manipulated to optimize an objective function @cite_5 @cite_0 @cite_12 @cite_10 . Those approaches can achieve either better safety or efficiency through interaction between intersections and vehicles. @cite_3 propose an extension of AIM to enable vehicles to apply motion planning for optimizing speed. | {
"cite_N": [
"@cite_3",
"@cite_0",
"@cite_5",
"@cite_10",
"@cite_12"
],
"mid": [
"1955667629",
"",
"2146363089",
"2884002829",
"2008082561"
],
"abstract": [
"The impressive results of the 2007 DARPA Urban Challenge showed that fully autonomous vehicles are technologically feasible with current intelligent vehicle hardware. It is natural to ask how current transportation infrastructure can be improved when most vehicles are driven autonomously in the future. Dresner and Stone proposed a new intersection control mechanism called Autonomous Intersection Management (AIM) and showed in simulation that intersection control can be made more efficient than the traditional control mechanisms such as traffic signals and stop signs. In this paper, we extend the study by examining the relationship between the precision of cars' motion controllers and the efficiency of the intersection controller. We propose a planning-based motion controller that can reduce the chance that autonomous vehicles stop before intersections, and show that this controller can increase the efficiency of the intersection control mechanism.",
"",
"Cooperative driving technology with intervehicle communication has attracted increasing attention recently. It aims to improve driving safety and efficiency using appropriate motion scheduling of all the encountered vehicles. Under cooperative driving control, the motion of individual vehicles could be conducted in a safe, deterministic, and smooth manner. This is particularly useful to heavy-duty vehicles since their acceleration deceleration capacity is relatively low. Specifically in this paper, cooperative driving at blind crossings (crossings without traffic lights) is studied. A concept of safety driving patterns is proposed to represent the collision-free movements of vehicles at crossings. The solution space of all allowable movement schedules is then described by a spanning tree in terms of safety driving patterns; four trajectory planning algorithms are formulated to determine the driving plans with least execution times using schedule trees. The group communication strategy for intervehicle networks is also analyzed. Finally, simulation studies have been conducted, and results demonstrate the potentiality and usefulness of the proposed algorithms for cooperative driving at blind crossings",
"This paper develops a real-time traffic signal optimization algorithm in the presence of connected and autonomous vehicles (CAVs). The proposed algorithm leverages information from connected vehicl...",
"Under the Connected Vehicles (CV) environment, it is possible to create a Cooperative Vehicle Intersection Control (CVIC) system that enables cooperation between vehicles and infrastructure for effective intersection operations and management when all vehicles are fully automated. Assuming such a CVIC environment, this paper proposed a CVIC algorithm that does not require a traffic signal. The CVIC algorithm was designed to manipulate individual vehicles' maneuvers so that vehicles can safely cross the intersection without colliding with other vehicles. By eliminating the potential overlaps of vehicular trajectories coming from all conflicting approaches at the intersection, the CVIC algorithm seeks a safe maneuver for every vehicle approaching the intersection and manipulates each of them. An additional algorithm was designed to deal with the system failure cases resulting from inevitable trajectory overlaps at the intersection and infeasible solutions. A simulation-based case study implemented on a hypothetical four-way single-lane approach intersection under varying congestion conditions showed that the CVIC algorithm significantly improved intersection performance compared with conventional actuated intersection control: 99 and 33 of stop delay and total travel time reductions, respectively, were achieved. In addition, the CVIC algorithm significantly improved air quality and energy savings: 44 reductions of CO2 and 44 savings of fuel consumption."
]
} |
1907.02003 | 2955063886 | Planning of radiotherapy involves accurate segmentation of a large number of organs at risk, i.e. organs for which irradiation doses should be minimized to avoid important side effects of the therapy. We propose a deep learning method for segmentation of organs at risk inside the brain region, from Magnetic Resonance (MR) images. Our system performs segmentation of eight structures: eye, lens, optic nerve, optic chiasm, pituitary gland, hippocampus, brainstem and brain. We propose an efficient algorithm to train neural networks for an end-to-end segmentation of multiple and non-exclusive classes, addressing problems related to computational costs and missing ground truth segmentations for a subset of classes. We enforce anatomical consistency of the result in a postprocessing step, in particular we introduce a graph-based algorithm for segmentation of the optic nerves, enforcing the connectivity between the eyes and the optic chiasm. We report cross-validated quantitative results on a database of 44 contrast-enhanced T1-weighted MRIs with provided segmentations of the considered organs at risk, which were originally used for radiotherapy planning. In addition, the segmentations produced by our model on an independent test set of 50 MRIs are evaluated by an experienced radiotherapist in order to qualitatively assess their accuracy. The mean distances between produced segmentations and the ground truth ranged from 0.1 mm to 0.7 mm across different organs. A vast majority (96%) of the produced segmentations were found acceptable for radiotherapy planning. | The network architecture used in our work is a modified version of 2D U-net @cite_49 . The choice of a 2D architecture rather than variants of 3D U-Net @cite_10 is motivated by the ability of 2D CNNs to capture a long-range spatial context without downsampling the image. This property is important in our problem as we segment several anatomical structures in large images, including very small structures such as the lens, the pituitary gland or the optic nerve. 2D CNNs were recently applied in @cite_28 for segmentation of head and neck organs in CT scans. | {
"cite_N": [
"@cite_28",
"@cite_10",
"@cite_49"
],
"mid": [
"2903432333",
"2951839332",
"1901129140"
],
"abstract": [
"This paper deals with segmentation of organs at risk (OAR) in head and neck area in CT images which is a crucial step for reliable intensity modulated radiotherapy treatment. We introduce a convolution neural network with encoder-decoder architecture and a new loss function, the batch soft Dice loss function, used to train the network. The resulting model produces segmentations of every OAR in the public MICCAI 2015 Head And Neck Auto-Segmentation Challenge dataset. Despite the heavy class imbalance in the data, we improve accuracy of current state-of-the-art methods by 0.33 mm in terms of average surface distance and by 0.11 in terms of Dice overlap coefficient on average.",
"This paper introduces a network for volumetric segmentation that learns from sparsely annotated volumetric images. We outline two attractive use cases of this method: (1) In a semi-automated setup, the user annotates some slices in the volume to be segmented. The network learns from these sparse annotations and provides a dense 3D segmentation. (2) In a fully-automated setup, we assume that a representative, sparsely annotated training set exists. Trained on this data set, the network densely segments new volumetric images. The proposed network extends the previous u-net architecture from by replacing all 2D operations with their 3D counterparts. The implementation performs on-the-fly elastic deformations for efficient data augmentation during training. It is trained end-to-end from scratch, i.e., no pre-trained network is required. We test the performance of the proposed method on a complex, highly variable 3D structure, the Xenopus kidney, and achieve good results for both use cases.",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http: lmb.informatik.uni-freiburg.de people ronneber u-net ."
]
} |
1907.02003 | 2955063886 | Planning of radiotherapy involves accurate segmentation of a large number of organs at risk, i.e. organs for which irradiation doses should be minimized to avoid important side effects of the therapy. We propose a deep learning method for segmentation of organs at risk inside the brain region, from Magnetic Resonance (MR) images. Our system performs segmentation of eight structures: eye, lens, optic nerve, optic chiasm, pituitary gland, hippocampus, brainstem and brain. We propose an efficient algorithm to train neural networks for an end-to-end segmentation of multiple and non-exclusive classes, addressing problems related to computational costs and missing ground truth segmentations for a subset of classes. We enforce anatomical consistency of the result in a postprocessing step, in particular we introduce a graph-based algorithm for segmentation of the optic nerves, enforcing the connectivity between the eyes and the optic chiasm. We report cross-validated quantitative results on a database of 44 contrast-enhanced T1-weighted MRIs with provided segmentations of the considered organs at risk, which were originally used for radiotherapy planning. In addition, the segmentations produced by our model on an independent test set of 50 MRIs are evaluated by an experienced radiotherapist in order to qualitatively assess their accuracy. The mean distances between produced segmentations and the ground truth ranged from 0.1 mm to 0.7 mm across different organs. A vast majority (96%) of the produced segmentations were found acceptable for radiotherapy planning. | Most of the proposed deep learning methods for segmentation of organs at risk were applied to CT scans in the context of head and neck cancers @cite_45 , i.e. cancers of the upper parts of the respiratory and digestive systems (mouth, larynx, throat). To the best of our knowledge, the only deep learning method for segmentation of organs at risk in MRIs of the brain is the one proposed in @cite_33 (MRI T1 and T2). | {
"cite_N": [
"@cite_45",
"@cite_33"
],
"mid": [
"2108836663",
"2900962800"
],
"abstract": [
"Summary Most head and neck cancers are squamous cell carcinomas that develop in the upper aerodigestive epithelium after exposure to carcinogens such as tobacco and alcohol. Human papillomavirus has also been strongly implicated as a causative agent in a subset of these cancers. The complex anatomy and vital physiological role of the tumour-involved structures dictate that the goals of treatment are not only to improve survival outcomes but also to preserve organ function. Major improvements have been accomplished in surgical techniques and radiotherapy delivery. Moreover, systemic therapy including chemotherapy and molecularly targeted agents—namely, the epidermal growth factor receptor inhibitors—has been successfully integrated into potentially curative treatment of locally advanced squamous-cell carcinoma of the head and neck. In deciding which treatment strategy would be suitable for an individual patient, important considerations include expected functional outcomes, ability to tolerate treatment, and comorbid illnesses. The collaboration of many specialties is the key for optimum assessment and decision making. We review the epidemiology, molecular pathogenesis, diagnosis and staging, and the latest multimodal management of squamous cell carcinoma of the head and neck.",
"Organ-at-risk (OAR) segmentation is a key step for radiotherapy treatment planning. Model-based segmentation (MBS) has been successfully used for the fully automatic segmentation of anatomical structures and it has proven to be robust to noise due to its incorporated shape prior knowledge. In this work, we investigate the advantages of combining neural networks with the prior anatomical shape knowledge of the model-based segmentation of organs-at-risk for brain radiotherapy (RT) on Magnetic Resonance Imaging (MRI). We train our boundary detectors using two different approaches: classic strong gradients as described in [4] and as a locally adaptive regression task, where for each triangle a convolutional neural network (CNN) was trained to estimate the distances between the mesh triangles and organ boundary, which were then combined into a single network, as described by [1]. We evaluate both methods using a 5-fold cross-validation on both T1w and T2w brain MRI data from sixteen primary and metastatic brain cancer patients (some post-surgical). Using CNN-based boundary detectors improved the results for all structures in both T1w and T2w data. The improvements were statistically significant ( (p<0.05 )) for all segmented structures in the T1w images and only for the auditory system in the T2w images."
]
} |
1907.02065 | 2954876025 | In recent years, the biggest advances in major Computer Vision tasks, such as object recognition, handwritten-digit identification, facial recognition, and many others, have all come through the use of Convolutional Neural Networks (CNNs). Similarly, in the domain of Natural Language Processing, Recurrent Neural Networks (RNNs), and Long Short Term Memory networks (LSTMs) in particular, have been crucial to some of the biggest breakthroughs in performance for tasks such as machine translation, part-of-speech tagging, sentiment analysis, and many others. These individual advances have greatly benefited tasks even at the intersection of NLP and Computer Vision, and inspired by this success, we studied some existing neural image captioning models that have proven to work well. In this work, we study some existing captioning models that provide near state-of-the-art performances, and try to enhance one such model. We also present a simple image captioning model that makes use of a CNN, an LSTM, and the beam search algorithm, and study its performance based on various qualitative and quantitative metrics. | ( @cite_1 ): This paper broke down what the authors believed were the necessary components for generating high quality image captions. The focus of this work was to generate annotated regions that were represented with an embedding that, once combined with an RNN, was able to generate a full sentence describing the image. The takeaway was the importance of good embeddings, as they inherently correlate with the quality of the sentence created. In our own models, we followed this example and focused on the features and captions related to the image as opposed to the image directly. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2951805548"
],
"abstract": [
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations."
]
} |
1907.02065 | 2954876025 | In recent years, the biggest advances in major Computer Vision tasks, such as object recognition, handwritten-digit identification, facial recognition, and many others, have all come through the use of Convolutional Neural Networks (CNNs). Similarly, in the domain of Natural Language Processing, Recurrent Neural Networks (RNNs), and Long Short Term Memory networks (LSTMs) in particular, have been crucial to some of the biggest breakthroughs in performance for tasks such as machine translation, part-of-speech tagging, sentiment analysis, and many others. These individual advances have greatly benefited tasks even at the intersection of NLP and Computer Vision, and inspired by this success, we studied some existing neural image captioning models that have proven to work well. In this work, we study some existing captioning models that provide near state-of-the-art performances, and try to enhance one such model. We also present a simple image captioning model that makes use of a CNN, an LSTM, and the beam search algorithm, and study its performance based on various qualitative and quantitative metrics. | ( @cite_3 ): This paper utilized feature, language, and attention inputs to build their model for captioning. Attention deconstructs the image into weighted sections that represent that section's supposed importance or relevance. Instead of weighing all features in the image equally, features that fall under regions with higher attention will be weighted higher in the caption generation, causing the caption to be more biased towards features found in areas where the attention was defined. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2950178297"
],
"abstract": [
"Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO."
]
} |
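The attention idea described in the preceding entry — re-weighting image region features by their relevance to the decoder state before generating the next word — can be sketched as follows. This is not the cited model's code; the weight matrices, shapes, and scoring function are illustrative assumptions.

```python
import numpy as np

def soft_attention(region_feats, hidden, W_r, W_h, v):
    """region_feats: (num_regions, d) CNN features; hidden: (h,) decoder state."""
    scores = np.tanh(region_feats @ W_r + hidden @ W_h) @ v   # one relevance score per region
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                                   # attention weights sum to 1
    context = weights @ region_feats                           # weighted sum of region features
    return context, weights
```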
1907.02065 | 2954876025 | In recent years, the biggest advances in major Computer Vision tasks, such as object recognition, handwritten-digit identification, facial recognition, and many others, have all come through the use of Convolutional Neural Networks (CNNs). Similarly, in the domain of Natural Language Processing, Recurrent Neural Networks (RNNs), and Long Short Term Memory networks (LSTMs) in particular, have been crucial to some of the biggest breakthroughs in performance for tasks such as machine translation, part-of-speech tagging, sentiment analysis, and many others. These individual advances have greatly benefited tasks even at the intersection of NLP and Computer Vision, and inspired by this success, we studied some existing neural image captioning models that have proven to work well. In this work, we study some existing captioning models that provide near state-of-the-art performances, and try to enhance one such model. We also present a simple image captioning model that makes use of a CNN, an LSTM, and the beam search algorithm, and study its performance based on various qualitative and quantitative metrics. | ( @cite_0 ): This paper built on the principles presented in Show, Attend, and Tell but produced a novel model that worked with the inputs in a different manner. This paper produced one of the best results in image captioning when compared to existing models and is the model we decided to focus on understanding and building upon. Top Down is also a relatively simple model by design that gains most of its power from the structure of the model. We decided to build upon this model in an attempt to create better or comparable results. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2951590222"
],
"abstract": [
"Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr SPICE BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge."
]
} |
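The captioning model summarized in this record decodes with beam search. A minimal sketch of that decoding loop is given below; `step(seq)` is a hypothetical callback returning per-token log-probabilities for the next position, and the beam width and length limit are arbitrary choices.

```python
import heapq

def beam_search(step, bos, eos, beam_size=3, max_len=20):
    beams = [(0.0, [bos])]                       # (log-probability, token sequence)
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            if seq[-1] == eos:                   # finished captions are carried over unchanged
                candidates.append((score, seq))
                continue
            for tok, logp in enumerate(step(seq)):
                candidates.append((score + logp, seq + [tok]))
        beams = heapq.nlargest(beam_size, candidates, key=lambda c: c[0])
        if all(seq[-1] == eos for _, seq in beams):
            break
    return max(beams, key=lambda c: c[0])[1]
```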
1907.02053 | 2955657029 | In this paper, we propose HyperFlowCutter, an algorithm for balanced hypergraph bipartitioning. It is based on minimum S-T hyperedge cuts and maximum flows. It computes a sequence of bipartitions that optimize cut size and balance in the Pareto sense, being able to trade one for the other. HyperFlowCutter builds on the FlowCutter algorithm for partitioning graphs. We propose additional features, such as handling disconnected hypergraphs, novel methods for obtaining starting S,T pairs as well as an approach to refine a given partition with HyperFlowCutter. Our main contribution is ReBaHFC, a new algorithm which obtains an initial partition with the fast multilevel hypergraph partitioner PaToH and then improves it using HyperFlowCutter as a refinement algorithm. ReBaHFC is able to significantly improve the solution quality of PaToH at little additional running time. The solution quality is only marginally worse than that of the best-performing hypergraph partitioners KaHyPar and hMETIS, while being one order of magnitude faster. Thus ReBaHFC offers a new time-quality trade-off in the current spectrum of hypergraph partitioners. For the special case of perfectly balanced bipartitioning, only the much slower plain HyperFlowCutter yields slightly better solutions than ReBaHFC, while only PaToH is faster than ReBaHFC. | Compared to graph partitioning, the performance of local vertex moving suffers from the presence of large hyperedges with vertices scattered over multiple blocks, since many moves have zero cut improvement. On coarse levels of the multilevel hierarchy, this problem is alleviated since hyperedges contain fewer vertices. A second remedy is flow-based refinement algorithms. For graphs, Sanders and Schulz @cite_17 extract a size-constrained corridor around the cut and compute a minimum cut within this corridor. If the cut is balanced, an improved solution has been found; otherwise the step is repeated with a smaller corridor. Heuer et al. @cite_21 extend their approach to hypergraphs by using Lawler networks @cite_26 . The Lawler network of a hypergraph is a flow network such that minimum @math - @math hyperedge cuts can be computed via max-flow. | {
"cite_N": [
"@cite_21",
"@cite_26",
"@cite_17"
],
"mid": [
"2962835892",
"2011646234",
"2130822890"
],
"abstract": [
"We present a refinement framework for multilevel hypergraph partitioning that uses max-flow computations on pairs of blocks to improve the solution quality of a k-way partition. The framework generalizes the flow-based improvement algorithm of KaFFPa from graphs to hypergraphs and is integrated into the hypergraph partitioner KaHyPar. By reducing the size of hypergraph flow networks, improving the flow model used in KaFFPa, and developing techniques to improve the running time of our algorithm, we obtain a partitioner that computes the best solutions for a wide range of benchmark hypergraphs from different application areas while still having a running time comparable to that of hMetis.",
"A hypergraph is a combinatorial structure with nodes and arcs, similar to an ordinary „linear” graph, except that arcs are incident to arbitrary subsets of nodes, instead of pairs of nodes. Cutsets of hypergraphs are defined in a natural way, and it is shown that optimal cutsets can be found by means of a network flow computation. The optimal cutset computation can be used to generate a family of subsets of nodes, which we call LS sets. Intuitively, an LS set is a subset of nodes that are more strongly connected to each other than to nodes in the complementary set. LS sets are useful for constructing optimal or near-optimal partitions of the nodes. A polynomial-bounded partitioning algorithm is presented, and various applications are suggested.",
"We present a multi-level graph partitioning algorithm using novel local improvement algorithms and global search strategies transferred from multigrid linear solvers. Local improvement algorithms are based on max-flow min-cut computations and more localized FM searches. By combining these techniques, we obtain an algorithm that is fast on the one hand and on the other hand is able to improve the best known partitioning results for many inputs. For example, in Walshaw's well known benchmark tables we achieve 317 improvements for the tables at 1 , 3 and 5 imbalance. Moreover, in 118 out of the 295 remaining cases we have been able to reproduce the best cut in this benchmark."
]
} |
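The Lawler network mentioned in this record can be built mechanically: each hyperedge is expanded into an in-node and an out-node joined by an arc whose capacity is the hyperedge weight, and pins attach with unbounded capacity, so a minimum s-t cut in the expansion equals a minimum s-t hyperedge cut. The sketch below uses networkx purely for illustration; the node-naming scheme is an assumption.

```python
import networkx as nx

def lawler_network(hyperedges, weights):
    g = nx.DiGraph()
    for i, pins in enumerate(hyperedges):
        e_in, e_out = ("e_in", i), ("e_out", i)
        g.add_edge(e_in, e_out, capacity=weights[i])             # cutting hyperedge i costs w(e_i)
        for v in pins:
            g.add_edge(("v", v), e_in, capacity=float("inf"))    # pins attach with unbounded capacity
            g.add_edge(e_out, ("v", v), capacity=float("inf"))
    return g

# Example: minimum s-t hyperedge cut of a tiny hypergraph between vertices 0 and 3.
# cut_value, _ = nx.minimum_cut(lawler_network([[0, 1, 2], [1, 3]], [1, 1]), ("v", 0), ("v", 3))
```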
1907.02053 | 2955657029 | In this paper, we propose HyperFlowCutter, an algorithm for balanced hypergraph bipartitioning. It is based on minimum S-T hyperedge cuts and maximum flows. It computes a sequence of bipartitions that optimize cut size and balance in the Pareto sense, being able to trade one for the other. HyperFlowCutter builds on the FlowCutter algorithm for partitioning graphs. We propose additional features, such as handling disconnected hypergraphs, novel methods for obtaining starting S,T pairs as well as an approach to refine a given partition with HyperFlowCutter. Our main contribution is ReBaHFC, a new algorithm which obtains an initial partition with the fast multilevel hypergraph partitioner PaToH and then improves it using HyperFlowCutter as a refinement algorithm. ReBaHFC is able to significantly improve the solution quality of PaToH at little additional running time. The solution quality is only marginally worse than that of the best-performing hypergraph partitioners KaHyPar and hMETIS, while being one order of magnitude faster. Thus ReBaHFC offers a new time-quality trade-off in the current spectrum of hypergraph partitioners. For the special case of perfectly balanced bipartitioning, only the much slower plain HyperFlowCutter yields slightly better solutions than ReBaHFC, while only PaToH is faster than ReBaHFC. | In their Flow-Balanced-Bipartition algorithm (FBB), Yang and Wong @cite_50 use incremental maximum flows on the Lawler network to compute @math -balanced hypergraph bipartitions. Liu and Wong @cite_30 enhance FBB with a heuristic, which is inspired by the correspondence between @math - @math minimum cuts and closed node sets due to Picard and Queyranne @cite_8 . It is similar to the heuristics used in the multilevel graph partitioning tool KaHIP @cite_17 and KaHyPar-MF @cite_21 as well as the piercing heuristics of FlowCutter @cite_48 and HyperFlowCutter. Li et al. @cite_54 propose a push-relabel algorithm, which operates directly on the hypergraph. Furthermore, they present heuristics rooted in VLSI design for choosing sets of initial seed vertices @math and @math as well as piercing vertices. The performance of their approach in contexts other than VLSI design remains unclear. | {
"cite_N": [
"@cite_30",
"@cite_8",
"@cite_48",
"@cite_54",
"@cite_21",
"@cite_50",
"@cite_17"
],
"mid": [
"2148215956",
"1994788283",
"2963809597",
"2163136138",
"2962835892",
"",
"2130822890"
],
"abstract": [
"Network flow is an excellent approach to finding min-cuts because of the celebrated max-flow min-cut theorem. For a long time, however, it was perceived as computationally expensive and deemed impractical for circuit partitioning. Recently, the algorithm FBB successfully applied network flow to two-way balanced partitioning. It for the first time demonstrated that network flow was a viable approach to circuit partitioning. In this paper, we present FBB-MW, which is an extension of FBB, to solve the problem of multiway partitioning with area and pin constraints. Experimental results show that FBB-MW outperforms previous approaches for multiple field programmable gate array partitioning. In particular, although FBB-MW does not employ logic replication and logic resynthesis, it still outperforms some other algorithms, which allow replication and resynthesis for optimization.",
"This paper presents a characterization of all minimum cuts, separating a source from a sink in a network. A binary relation is associated with any maximum flow in this network, and minimum cuts are identified with closures for this relation. As a consequence, finding all minimum cuts reduces to a straightforward enumeration. Applications of this results arise in sensitivity and parametric analyses of networks, the vertex packing and maximum closure problems, in unconstrained pseudo-boolean optimization and project selection, as well as in other areas of application of minimum cuts.",
"We introduce FlowCutter, a novel algorithm to compute a set of edge cuts or node separators that optimize cut size and balance in the Pareto sense. Our core algorithm heuristically solves the balanced connected st-edge-cut problem, where two given nodes s and t must be separated by removing edges to obtain two connected parts. Using the core algorithm as a subroutine, we build variants that compute node separators that are independent of s and t. From the computed Pareto set, we can identify cuts with a particularly good tradeoff between cut size and balance that can be used to compute contraction and minimum fill-in orders, which can be used in Customizable Contraction Hierarchies (CCHs), a speed-up technique for shortest-path computations. Our core algorithm runs in O(cmEm) time, where E is the set of edges and c is the size of the largest outputted cut. This makes it well suited for separating large graphs with small cuts, such as road graphs, which is the primary application motivating our research. For road graphs, we present an extensive experimental study demonstrating that FlowCutter outperforms the current state of the art in terms of both cut sizes and CCH performance. By evaluating FlowCutter on a standard graph partitioning benchmark, we further show that FlowCutter also finds small, balanced cuts on nonroad graphs. Another application is the computation of small tree decompositions. To evaluate the quality of our algorithm in this context, we entered the PACE 2016 challenge [13] and won first place in the corresponding sequential competition track. We can therefore conclude that our FlowCutter algorithm finds small, balanced cuts on a wide variety of graphs.",
"We propose a unified solution to both linear placement and partitioning. Our approach combines the well-known eigenvector optimization method with the recursive max-flow min-cut method. A linearized eigenvector method is proposed to improve the linear placement. A hypergraph maxflow algorithm is then adopted to efficiently find the max-flow min-cut. In our unified approach, the max-flow min-cut provides an optimal ordered partition subject to the given seeds and the eigenvector placement provides heuristic information for seed selection. Experimental results on MCNC benchmarks show that our approach is superior to other methods for both linear placement and partitioning problems. On average, our approach yields an improvement of 45.1 over eigenvector approach in terms of total wire length, and yields an improvement of 26.9 over PARABOLI[6] in terms of cut size.",
"We present a refinement framework for multilevel hypergraph partitioning that uses max-flow computations on pairs of blocks to improve the solution quality of a k-way partition. The framework generalizes the flow-based improvement algorithm of KaFFPa from graphs to hypergraphs and is integrated into the hypergraph partitioner KaHyPar. By reducing the size of hypergraph flow networks, improving the flow model used in KaFFPa, and developing techniques to improve the running time of our algorithm, we obtain a partitioner that computes the best solutions for a wide range of benchmark hypergraphs from different application areas while still having a running time comparable to that of hMetis.",
"",
"We present a multi-level graph partitioning algorithm using novel local improvement algorithms and global search strategies transferred from multigrid linear solvers. Local improvement algorithms are based on max-flow min-cut computations and more localized FM searches. By combining these techniques, we obtain an algorithm that is fast on the one hand and on the other hand is able to improve the best known partitioning results for many inputs. For example, in Walshaw's well known benchmark tables we achieve 317 improvements for the tables at 1 , 3 and 5 imbalance. Moreover, in 118 out of the 295 remaining cases we have been able to reproduce the best cut in this benchmark."
]
} |
1907.02053 | 2955657029 | In this paper, we propose HyperFlowCutter, an algorithm for balanced hypergraph bipartitioning. It is based on minimum S-T hyperedge cuts and maximum flows. It computes a sequence of bipartitions that optimize cut size and balance in the Pareto sense, being able to trade one for the other. HyperFlowCutter builds on the FlowCutter algorithm for partitioning graphs. We propose additional features, such as handling disconnected hypergraphs, novel methods for obtaining starting S,T pairs as well as an approach to refine a given partition with HyperFlowCutter. Our main contribution is ReBaHFC, a new algorithm which obtains an initial partition with the fast multilevel hypergraph partitioner PaToH and then improves it using HyperFlowCutter as a refinement algorithm. ReBaHFC is able to significantly improve the solution quality of PaToH at little additional running time. The solution quality is only marginally worse than that of the best-performing hypergraph partitioners KaHyPar and hMETIS, while being one order of magnitude faster. Thus ReBaHFC offers a new time-quality trade-off in the current spectrum of hypergraph partitioners. For the special case of perfectly balanced bipartitioning, only the much slower plain HyperFlowCutter yields slightly better solutions than ReBaHFC, while only PaToH is faster than ReBaHFC. | For perfectly balanced graph partitioning, diffusion-based methods have been successful @cite_35 . Furthermore Sanders and Schulz @cite_38 propose an algorithm based on detecting negative cycles, which is used on top of their evolutionary partitioner. Delling and Werneck @cite_5 provide an efficient implementation of an optimal branch-and-bound algorithm. Additionally there are metaheuristic approaches such as PROBE @cite_3 , as well as multilevel memetic algorithms due to Benlic and Hao @cite_44 @cite_39 @cite_6 . | {
"cite_N": [
"@cite_35",
"@cite_38",
"@cite_3",
"@cite_39",
"@cite_44",
"@cite_6",
"@cite_5"
],
"mid": [
"2041869086",
"",
"2117271622",
"2162482386",
"1978104777",
"2112882545",
"2117522609"
],
"abstract": [
"Graph partitioning requires the division of a graph's vertex set into k equally sized subsets s.t. some objective function is optimized. High-quality partitions are important for many applications, whose objective functions are often NP-hard to optimize. Most state-of-the-art graph partitioning libraries use a variant of the Kernighan-Lin (KL) heuristic within a multilevel framework. While these libraries are very fast, their solutions do not always meet all user requirements. Moreover, due to its sequential nature, KL is not easy to parallelize. Its use as a load balancer in parallel numerical applications therefore requires complicated adaptations. That is why we developed previously an inherently parallel algorithm, called Bubble-FOS C [H. Meyerhenke, B. Monien, S. Schamberger, Accelerating shape optimizing load balancing for parallel FEM simulations by algebraic multigrid, in: Proceedings of the 20th IEEE International Parallel and Distributed Processing Symposium, IPDPS'06, IEEE Computer Society, 2006, p. 57 (CD)], which optimizes partition shapes by a diffusive mechanism. However, it is too slow for practical use, despite its high solution quality. In this paper, besides proving that Bubble-FOS C converges towards a local optimum of a potential function, we develop a much faster method for the improvement of partitionings. This faster method called TruncCons is based on a different diffusive process, which is restricted to local areas of the graph and also contains a high degree of parallelism. By coupling TruncCons with Bubble-FOS C in a multilevel framework based on two different hierarchy construction methods, we obtain our new graph partitioning heuristic DibaP. Compared to Bubble-FOS C, DibaP shows a considerable acceleration, while retaining the positive properties of the slower algorithm. Experiments with popular benchmark graphs show that DibaP computes consistently better results than the state-of-the-art libraries METIS and JOSTLE. Moreover, with our new algorithm, we have improved the best known edge-cut values for a significant number of partitionings of six widely used benchmark graphs.",
"",
"A new heuristic algorithm, PROBE_BA, which is based on the recently introduced metaheuristic paradigm population- reinforced optimization-based exploration (PROBE), is proposed for solving the Graph Partitioning Problem. The \"exploration\" part of PROBE_BA is implemented by using the differential-greedy algorithm of Battiti and Bertossi and a modification of the Kernighan-Lin algorithm at the heart of Bui and Moon's genetic algorithm BFS _GBA. Experiments are used to investigate properties of PROBE and show that PROBE_BA compares favorably with other solution methods based on genetic algorithms, randomized reactive tabu search, or more specialized multilevel partitioning techniques. In addition, PROBE_BA finds new best cut values for 10 of the 34 instances in Walshaw's graph partitioning archive.",
"Graph partitioning is one of the most studied NP-complete problems. Given a graph G=(V, E) , the task is to partition the vertex set V into k disjoint subsets of about the same size, such that the number of edges with endpoints in different subsets is minimized. In this paper, we present a highly effective multilevel memetic algorithm, which integrates a new multiparent crossover operator and a powerful perturbation-based tabu search algorithm. The proposed crossover operator tends to preserve the backbone with respect to a certain number of parent individuals, i.e., the grouping of vertices which is common to all parent individuals. Extensive experimental studies on numerous benchmark instances from the graph partitioning archive show that the proposed approach, within a time limit ranging from several minutes to several hours, performs far better than any of the existing graph partitioning algorithms in terms of solution quality.",
"The balanced graph partitioning consists in dividing the vertices of an undirected graph into a given number of subsets of approximately equal size, such that the number of edges crossing the subsets is minimized. In this work, we present a multilevel memetic algorithm for this NP-hard problem that relies on a powerful grouping recombination operator and a dedicated local search procedure. The proposed operator tends to preserve the backbone with respect to a set of parent individuals, i.e. the grouping of vertices which is same throughout each parent individual. Although our approach requires significantly longer computing time compared to some current state-of-art graph partitioning algorithms such as SCOTCH, METIS, CHACO, JOSTLE, etc., it competes very favorably with these approaches in terms of solution quality. Moreover, it easily reaches or improves on the best partitions ever reported in the literature.",
"Graph partitioning is one of the fundamental NP-complete problems which is widely applied in many domains, such as VLSI design, image segmentation, data mining, etc. Given a graph G=(V,E), the balanced k-partitioning problem consists in partitioning the vertex set V into k disjoint subsets of about the same size, such that the number of cutting edges is minimized. In this paper, we present a multilevel algorithm for balanced partition, which integrates a powerful refinement procedure based on tabu search with periodic perturbations. Experimental evaluations on a wide collection of benchmark graphs show that the proposed approach not only competes very favorably with the two well-known partitioning packages METIS and CHACO, but also improves more than two thirds of the best balanced partitions ever reported in the literature.",
"We introduce new lower bounds for the minimum graph bisection problem. Within a branch-and-bound framework, they enable the solution of a wide variety of instances with tens of thousands of vertices to optimality. Our algorithm compares favorably with the best previous approaches, solving long-standing open instances in minutes."
]
} |
1812.05447 | 2905161248 | Currently, accurate detection of natural phenomena, such as red tide, that adversely affect wildlife and human, using satellite images has been increasingly utilized. However, red tide detection on satellite images still remains a very hard task due to unpredictable nature of red tide occurrence, extreme sparsity of red tide samples, difficulties in accurate groundtruthing, etc. In this paper, we aim to tackle both the data sparsity and groundtruthing issues by primarily addressing two challenges: i) significant lack of hard examples of non-red tide that can enhance detection performance and ii) extreme data imbalance between red tide and non-red tide examples. In the proposed work, we devise a 9-layer fully convolutional network jointly optimized with two plug-in modules tailored to overcoming the two challenges: i) a hard negative example generator (HNG) to supplement the hard negative (non-red tide) examples and ii) cascaded online hard example mining (cOHEM) to ease the data imbalance. Our proposed network jointly trained with HNG and cOHEM provides state-of-the-art red tide detection accuracy on GOCI satellite images. | CNNs Used for Detecting Natural Phenomena in the Marine Environment. Since CNNs were introduced and demonstrated promising performance in image classification, there have been several attempts to use them in the marine environment. CNNs have been effectively used for detection of coral reefs @cite_12 , classification of fish @cite_26 , detection of oil from shipwrecks @cite_9 , and so on. However, applying deep neural networks to detect objects-of-interest in the marine environment has been quite limited, mainly due to difficulties in acquiring large amounts of annotated data, unlike general object detection applications. In this paper, we devise a CNN training strategy coupled with an advanced network architecture tailored to red tide detection while minimizing human labeling efforts. | {
"cite_N": [
"@cite_9",
"@cite_26",
"@cite_12"
],
"mid": [
"1556540526",
"2782490047",
"2508077530"
],
"abstract": [
"We propose a local modelling approach using deep convolutional neural networks (CNNs) for fine-grained image classification. Recently, deep CNNs trained from large datasets have considerably improved the performance of object recognition. However, to date there has been limited work using these deep CNNs as local feature extractors. This partly stems from CNNs having internal representations which are high dimensional, thereby making such representations difficult to model using stochastic models. To overcome this issue, we propose to reduce the dimensionality of one of the internal fully connected layers, in conjunction with layer-restricted retraining to avoid retraining the entire network. The distribution of low-dimensional features obtained from the modified layer is then modelled using a Gaussian mixture model. Comparative experiments show that considerable performance improvements can be achieved on the challenging Fish and UEC FOOD-100 datasets.",
"Studying fish recognition has important realistic and theoretical significance to aquaculture and marine biology. Fish recognition is challenging problem because of distortion, overlap and occlusion of digital images. Previous researchers have done a lot of work on fish recognition, but the classification accuracy may be not high enough. Classification and recognition methods based on convolutional neural network (CNN) develop fast in recent years because of its higher accuracy and the support of GPU. In this paper, we design several architectures for convolutional neural network for the fish recognition. After performing a series of experiments, we can get the CNN architecture which has best performance and robustness.",
"Coral reefs exhibit significant within-class variations, complex between-class boundaries and inconsistent image clarity. This makes coral classification a challenging task. In this paper, we report the application of generic CNN representations combined with hand-crafted features for coral reef classification to take advantage of the complementary strengths of these representation types. We extract CNN based features from patches centred at labelled pixels at multiple scales. We use texture and color based hand-crafted features extracted from the same patches to complement the CNN features. Our proposed method achieves a classification accuracy that is higher than the state-of-art methods on the MLC benchmark dataset for corals."
]
} |
1812.05447 | 2905161248 | Currently, accurate detection of natural phenomena, such as red tide, that adversely affect wildlife and human, using satellite images has been increasingly utilized. However, red tide detection on satellite images still remains a very hard task due to unpredictable nature of red tide occurrence, extreme sparsity of red tide samples, difficulties in accurate groundtruthing, etc. In this paper, we aim to tackle both the data sparsity and groundtruthing issues by primarily addressing two challenges: i) significant lack of hard examples of non-red tide that can enhance detection performance and ii) extreme data imbalance between red tide and non-red tide examples. In the proposed work, we devise a 9-layer fully convolutional network jointly optimized with two plug-in modules tailored to overcoming the two challenges: i) a hard negative example generator (HNG) to supplement the hard negative (non-red tide) examples and ii) cascaded online hard example mining (cOHEM) to ease the data imbalance. Our proposed network jointly trained with HNG and cOHEM provides state-of-the-art red tide detection accuracy on GOCI satellite images. | Training Generator via Adversarial Learning. @cite_28 introduce a method to generate an adversarial image by adding perturbation so that it is misclassified by a CNN-based recognition approach. These perturbed images become adversarial images to the recognition approach. @cite_1 introduce two models: a generator that captures the data distribution and a discriminator that estimates the probability that a sample came from the training data rather than the generator. A generator and a discriminator are trained at the same time in a direction to interfere with each other. This is called an adversarial learning framework. | {
"cite_N": [
"@cite_28",
"@cite_1"
],
"mid": [
"2964153729",
"2099471712"
],
"abstract": [
"Abstract: Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.",
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."
]
} |
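The generator/discriminator setup described in this record can be sketched as a single adversarial training step. This is a generic GAN step in PyTorch, not the paper's HNG training code; the model definitions, optimizers, latent dimension, and the assumption that the discriminator outputs per-sample probabilities of shape (N, 1) are all illustrative.

```python
import torch
import torch.nn.functional as F

def adversarial_step(gen, disc, real, opt_g, opt_d, z_dim=64):
    z = torch.randn(real.size(0), z_dim)
    fake = gen(z)

    # Discriminator: push real samples toward 1 and generated samples toward 0.
    d_loss = (F.binary_cross_entropy(disc(real), torch.ones(real.size(0), 1))
              + F.binary_cross_entropy(disc(fake.detach()), torch.zeros(real.size(0), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label generated samples as real.
    g_loss = F.binary_cross_entropy(disc(fake), torch.ones(real.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```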
1812.05447 | 2905161248 | Currently, accurate detection of natural phenomena, such as red tide, that adversely affect wildlife and human, using satellite images has been increasingly utilized. However, red tide detection on satellite images still remains a very hard task due to unpredictable nature of red tide occurrence, extreme sparsity of red tide samples, difficulties in accurate groundtruthing, etc. In this paper, we aim to tackle both the data sparsity and groundtruthing issues by primarily addressing two challenges: i) significant lack of hard examples of non-red tide that can enhance detection performance and ii) extreme data imbalance between red tide and non-red tide examples. In the proposed work, we devise a 9-layer fully convolutional network jointly optimized with two plug-in modules tailored to overcoming the two challenges: i) a hard negative example generator (HNG) to supplement the hard negative (non-red tide) examples and ii) cascaded online hard example mining (cOHEM) to ease the data imbalance. Our proposed network jointly trained with HNG and cOHEM provides state-of-the-art red tide detection accuracy on GOCI satellite images. | @cite_11 devise an image generation approach based on CNN by adopting this adversarial learning framework. @cite_8 use the adversarial learning framework to train a network that creates artificial occlusion and deformation on images. The object detection network is trained against this adversary to improve performance. We also use adversarial learning to train our hard negative generation network. | {
"cite_N": [
"@cite_8",
"@cite_11"
],
"mid": [
"2952815469",
"2963684088"
],
"abstract": [
"How do we learn an object detector that is invariant to occlusions and deformations? Our current solution is to use a data-driven strategy -- collect large-scale datasets which have object instances under different conditions. The hope is that the final classifier can use these examples to learn invariances. But is it really possible to see all the occlusions in a dataset? We argue that like categories, occlusions and object deformations also follow a long-tail. Some occlusions and deformations are so rare that they hardly happen; yet we want to learn a model invariant to such occurrences. In this paper, we propose an alternative solution. We propose to learn an adversarial network that generates examples with occlusions and deformations. The goal of the adversary is to generate examples that are difficult for the object detector to classify. In our framework both the original detector and adversary are learned in a joint manner. Our experimental results indicate a 2.3 mAP boost on VOC07 and a 2.6 mAP boost on VOC2012 object detection challenge compared to the Fast-RCNN pipeline. We also release the code for this paper.",
"Abstract: In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations."
]
} |
1812.05450 | 2901284941 | This paper presents a hybrid method for the detection of distributed denial-of-service (DDoS) attacks that combines feature-based and volume-based detection. Our approach is based on an exponential moving average algorithm for decision-making, applied to both entropy and packet number time series. The approach has been tested by performing a controlled DDoS experiment in a real academic network. The network setup and test scenarios including both high-rate and low-rate attacks are described in the paper. The performance of the proposed method is compared to the performance of two methods that are already known in the literature. One is based on the counting of SYN packets and is used for detection of SYN flood attacks, while the other is based on a CUSUM algorithm applied to the entropy time series. The results show the advantage of our approach compared to methods that are based on either entropy or number of packets only. | Mathematical modeling of a DDoS attack that would result in a practical, usable model (used for the provision of resources, etc.), is still an open issue, as DDoS attacks are changing. There are several approaches and we will mention only some of them. In @cite_7 , the authors model the system under SYN flood DDoS attack as a two-dimensional queuing model with N servers, two arrival processes and two service times with different distributions. Both the arrival of regular request packets and the arrival of attack packets are modeled as Poisson processes, but with different arrival rates @math and @math . At most, N half-open connections are allowed at any one moment. A half-open connection for a regular request packet is held for a random time which is exponentially distributed. The two arrival processes are independent of each other and of the holding times for half-open connections. Based on these assumptions, DDoS is modeled as a two-dimensional embedded Markov chain. The authors give some security metrics for DDoS, such as a connection loss probability and buffer occupancy percentage of half-open connections for regular traffic. | {
"cite_N": [
"@cite_7"
],
"mid": [
"1996593602"
],
"abstract": [
"In most network security analysis, researchers mainly focus on qualitative studies on security schemes and possible attacks, and there are few papers on quantitative analysis in the current literature. In this paper, we propose one queueing model for the evaluation of the denial of service (DoS) attacks in computer networks. The network under DoS attacks is characterized by a two-dimensional embedded Markov chain model. With this model, we can develop a memory-efficient algorithm for finding the stationary probability distribution which can be used to find other interesting performance metrics such as the connection loss probability and buffer occupancy percentages of half-open connections for regular traffic and attack traffic. Different from previous works in the literature, this paper gives a more general analytical approach to the study of security measures of a computer network under DoS attacks. We hope that our approach opens a new avenue to the quantitative evaluation of more complicated security schemes in computer networks."
]
} |
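The queueing model summarized in this record lends itself to a quick Monte-Carlo check. The toy simulation below merges the two Poisson arrival streams, tracks N half-open connection slots with exponentially distributed holding times, and estimates the connection loss probability for regular requests. All rates are made up, and attack connections are simplified to use the same holding-time distribution as regular ones.

```python
import random

def regular_loss_probability(lam_regular=5.0, lam_attack=50.0, mu=1.0,
                             n_slots=100, horizon=10_000.0):
    t, release_times, lost, regular_arrivals = 0.0, [], 0, 0
    total_rate = lam_regular + lam_attack
    while t < horizon:
        t += random.expovariate(total_rate)                   # merged Poisson arrival stream
        release_times = [r for r in release_times if r > t]   # free expired half-open slots
        is_regular = random.random() < lam_regular / total_rate
        regular_arrivals += is_regular
        if len(release_times) >= n_slots:
            lost += is_regular                                 # buffer full: regular request dropped
        else:
            release_times.append(t + random.expovariate(mu))   # hold a slot for an Exp(mu) time
    return lost / max(regular_arrivals, 1)
```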
1812.05450 | 2901284941 | This paper presents a hybrid method for the detection of distributed denial-of-service (DDoS) attacks that combines feature-based and volume-based detection. Our approach is based on an exponential moving average algorithm for decision-making, applied to both entropy and packet number time series. The approach has been tested by performing a controlled DDoS experiment in a real academic network. The network setup and test scenarios including both high-rate and low-rate attacks are described in the paper. The performance of the proposed method is compared to the performance of two methods that are already known in the literature. One is based on the counting of SYN packets and is used for detection of SYN flood attacks, while the other is based on a CUSUM algorithm applied to the entropy time series. The results show the advantage of our approach compared to methods that are based on either entropy or number of packets only. | Our detector uses Shannon entropy, but other entropy formulas have also been used @cite_13 @cite_10 @cite_15 . | {
"cite_N": [
"@cite_10",
"@cite_15",
"@cite_13"
],
"mid": [
"2128165614",
"2141910941",
"1959274303"
],
"abstract": [
"Detection is a crucial step towards efficiently diagnosing network traffic anomalies within an autonomous system (AS). We propose the adoption of nonextensive entropy - a one-parameter generalization of Shannon entropy - to detect anomalies in network traffic within an AS. Experimental results show that our approach based on nonextensive entropy outperforms previous ones based on classical entropy while providing enhanced flexibility, which is enabled by the possibility of fine-tuning the sensitivity of the detection mechanism.",
"Data mining is an interdisciplinary subfield of computer science involving methods at the intersection of artificial intelligence, machine learning and statistics. One of the data mining tasks is anomaly detection which is the analysis of large quantities of data to identify items, events or observations which do not conform to an expected pattern. Anomaly detection is applicable in a variety of domains, e.g., fraud detection, fault detection, system health monitoring but this article focuses on application of anomaly detection in the field of network intrusion detection.The main goal of the article is to prove that an entropy-based approach is suitable to detect modern botnet-like malware based on anomalous patterns in network. This aim is achieved by realization of the following points: (i) preparation of a concept of original entropy-based network anomaly detection method, (ii) implementation of the method, (iii) preparation of original dataset, (iv) evaluation of the method.",
"In this paper, we present results of application of Tsallis entropy in detection of denial of service attacks. Two detectors, one based on Tsallis and the other one based on Shannon's entropy, have been applied in several attack simulations, and their properties have been compared. The simulated attack is Synchronize packet SYN flood. A simple packet distribution, that is, entropy of source addresses are considered. In both cases, cumulative sum control chart algorithm is used for change point detection. Properties of two detectors that are compared are detection delay and rate of true and false positives. The results show that Tsallis entropy-based detector can outperform with respect to false positive rate Shannon-based one but that requires careful tuning of Tsallis Q parameter that depends on characteristics of network traffic. The detection delay of two detectors is approximately the same. Copyright © 2015 John Wiley & Sons, Ltd."
]
} |
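A rough sketch of the detector family discussed in this record: the Shannon entropy of source addresses is computed per time window and fed to an exponential-moving-average based change detector. The smoothing factor, threshold rule, and variance tracking below are assumptions, not the paper's exact algorithm.

```python
import math
from collections import Counter

def shannon_entropy(source_addresses):
    counts = Counter(source_addresses)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

class EmaDetector:
    """Raise an alarm when a windowed statistic deviates from its moving average."""
    def __init__(self, alpha=0.1, k=3.0):
        self.alpha, self.k, self.mean, self.var = alpha, k, None, 0.0

    def update(self, x):
        if self.mean is None:               # first window only initializes the baseline
            self.mean = x
            return False
        alarm = abs(x - self.mean) > self.k * (self.var ** 0.5 + 1e-9)
        self.mean = (1 - self.alpha) * self.mean + self.alpha * x
        self.var = (1 - self.alpha) * self.var + self.alpha * (x - self.mean) ** 2
        return alarm
```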
1812.05199 | 2904701885 | In recent years, Recurrent Neural Networks (RNNs) based models have been applied to the Slot Filling problem of Spoken Language Understanding and achieved the state-of-the-art performances. In this paper, we investigate the effect of incorporating pre-trained language models into RNN based Slot Filling models. Our evaluation on the Airline Travel Information System (ATIS) data corpus shows that we can significantly reduce the size of labeled training data and achieve the same level of Slot Filling performance by incorporating extra word embedding and language model embedding layers pre-trained on unlabeled corpora. | However, RNN models usually need to be trained with a large amount of labeled data to achieve the expected performance. The work presented in this paper has been inspired by previous work on using pre-trained word embedding models with fine-tuning to improve the performance of deep learning based models (e.g., @cite_5 ), and by work that used a pre-trained language model to encode the surrounding context of each word and improved the NER task performance. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2399456070"
],
"abstract": [
"One of the key problems in spoken language understanding (SLU) is the task of slot filling. In light of the recent success of applying deep neural network technologies in domain detection and intent identification, we carried out an in-depth investigation on the use of recurrent neural networks for the more difficult task of slot filling involving sequence discrimination. In this work, we implemented and compared several important recurrent-neural-network architectures, including the Elman-type and Jordan-type recurrent networks and their variants. To make the results easy to reproduce and compare, we implemented these networks on the common Theano neural network toolkit, and evaluated them on the ATIS benchmark. We also compared our results to a conditional random fields (CRF) baseline. Our results show that on this task, both types of recurrent networks outperform the CRF baseline substantially, and a bi-directional Jordantype network that takes into account both past and future dependencies among slots works best, outperforming a CRFbased baseline by 14 in relative error reduction."
]
} |
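The idea in this record of initializing a slot-filling tagger from embeddings pre-trained on unlabeled corpora reduces to copying vectors into an embedding layer before supervised training. A minimal sketch follows; the `pretrained` word-to-vector mapping, vocabulary format, and dimensions are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def build_embedding_layer(vocab, pretrained, dim=100):
    """vocab: {word: index}; pretrained: hypothetical {word: np.ndarray} mapping."""
    matrix = np.random.normal(scale=0.1, size=(len(vocab), dim)).astype("float32")
    for word, idx in vocab.items():
        if word in pretrained:
            matrix[idx] = pretrained[word]             # copy the pre-trained vector
    layer = nn.Embedding(len(vocab), dim)
    layer.weight.data.copy_(torch.from_numpy(matrix))   # fine-tuned further during training
    return layer
```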
1812.05288 | 2968988576 | State-of-the-art named entity recognition (NER) systems have been improving continuously using neural architectures over the past several years. However, many tasks including NER require large sets of annotated data to achieve such performance. In particular, we focus on NER from clinical notes, which is one of the most fundamental and critical problems for medical text analysis. Our work centers on effectively adapting these neural architectures towards low-resource settings using parameter transfer methods. We complement a standard hierarchical NER model with a general transfer learning framework consisting of parameter sharing between the source and target tasks, and showcase scores significantly above the baseline architecture. These sharing schemes require an exponential search over tied parameter sets to generate an optimal configuration. To mitigate the problem of exhaustively searching for model optimization, we propose the Dynamic Transfer Networks (DTN), a gated architecture which learns the appropriate parameter sharing scheme between source and target datasets. DTN achieves the improvements of the optimized transfer learning framework with just a single training setting, effectively removing the need for exponential search. | NER models achieved their recent success with neural architectures. In 2016, several works @cite_7 @cite_30 @cite_16 proposed hierarchical sequence-to-sequence deep learning frameworks. The models employed RNN or CNN encoders, but generally utilized conditional random fields (CRF) as decoders. Many subsequent works have focused on fine-tuning for speed or parameter size, while keeping this model design at a high level. | {
"cite_N": [
"@cite_30",
"@cite_16",
"@cite_7"
],
"mid": [
"2963625095",
"2308486447",
"2296283641"
],
"abstract": [
"Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.",
"We present a deep hierarchical recurrent neural network for sequence tagging. Given a sequence of words, our model employs deep gated recurrent units on both character and word levels to encode morphology and context information, and applies a conditional random field layer to predict the tags. Our model is task independent, language independent, and feature engineering free. We further extend our model to multi-task and cross-lingual joint training by sharing the architecture and parameters. Our model achieves state-of-the-art results in multiple languages on several benchmark tasks including POS tagging, chunking, and NER. We also demonstrate that multi-task and cross-lingual joint training can improve the performance in various cases.",
"Comunicacio presentada a la 2016 Conference of the North American Chapter of the Association for Computational Linguistics, celebrada a San Diego (CA, EUA) els dies 12 a 17 de juny 2016."
]
} |
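The hierarchical tagger design described in this record (a character-level encoder feeding a word-level bidirectional RNN) can be sketched as below. The CRF decoder used by the cited works is replaced with a plain linear emission layer for brevity, and all dimensions and input shapes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalTagger(nn.Module):
    def __init__(self, n_chars, n_words, n_tags, c_dim=25, w_dim=100, hidden=200):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, c_dim)
        self.char_rnn = nn.LSTM(c_dim, c_dim, bidirectional=True, batch_first=True)
        self.word_emb = nn.Embedding(n_words, w_dim)
        self.word_rnn = nn.LSTM(w_dim + 2 * c_dim, hidden,
                                bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_tags)         # CRF decoder omitted for brevity

    def forward(self, words, chars):
        # words: (batch, seq_len); chars: (batch * seq_len, max_word_len)
        _, (h, _) = self.char_rnn(self.char_emb(chars))
        char_feat = torch.cat([h[0], h[1]], dim=-1)       # final fwd/bwd char states per word
        char_feat = char_feat.view(words.size(0), words.size(1), -1)
        x = torch.cat([self.word_emb(words), char_feat], dim=-1)
        h_seq, _ = self.word_rnn(x)
        return self.out(h_seq)                            # per-token emission scores
```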
1812.05288 | 2968988576 | State-of-the-art named entity recognition (NER) systems have been improving continuously using neural architectures over the past several years. However, many tasks including NER require large sets of annotated data to achieve such performance. In particular, we focus on NER from clinical notes, which is one of the most fundamental and critical problems for medical text analysis. Our work centers on effectively adapting these neural architectures towards low-resource settings using parameter transfer methods. We complement a standard hierarchical NER model with a general transfer learning framework consisting of parameter sharing between the source and target tasks, and showcase scores significantly above the baseline architecture. These sharing schemes require an exponential search over tied parameter sets to generate an optimal configuration. To mitigate the problem of exhaustively searching for model optimization, we propose the Dynamic Transfer Networks (DTN), a gated architecture which learns the appropriate parameter sharing scheme between source and target datasets. DTN achieves the improvements of the optimized transfer learning framework with just a single training setting, effectively removing the need for exponential search. | Directly sharing parameters has been widely used; however, transfer learning schemes have utilized a soft sharing paradigm as well, where model parameters or outputs are constrained to a similar space. The work most similar to ours uses two constraints to promote shared representations of overlapping output distributions, as well as latent representations. This work minimizes the parameter difference of the CRFs, which is derived as the Kullback-Leibler divergence upper bound minimization of the target task against the source across overlapping labels from both tasks. Additionally, they constrain the model to produce similar latent representations for tokens with the same tag. This work is also applied towards NER across several medical sub-domains. Using soft-sharing transfer learning for summarization, Guo, Pasunuru, and Bansal jointly train three generative models. Their work was also novel in not having the forked design, in that both the input and output layers were independent. The same authors used a similar architecture with more ablation on sharing for sentence simplification @cite_25 . | {
"cite_N": [
"@cite_25"
],
"mid": [
"2809283440"
],
"abstract": [
"Sentence simplification aims to improve readability and understandability, based on several operations such as splitting, deletion, and paraphrasing. However, a valid simplified sentence should also be logically entailed by its input sentence. In this work, we first present a strong pointer-copy mechanism based sequence-to-sequence sentence simplification model, and then improve its entailment and paraphrasing capabilities via multi-task learning with related auxiliary tasks of entailment and paraphrase generation. Moreover, we propose a novel 'multi-level' layered soft sharing approach where each auxiliary task shares different (higher versus lower) level layers of the sentence simplification model, depending on the task's semantic versus lexico-syntactic nature. We also introduce a novel multi-armed bandit based training approach that dynamically learns how to effectively switch across tasks during multi-task learning. Experiments on multiple popular datasets demonstrate that our model outperforms competitive simplification systems in SARI and FKGL automatic metrics, and human evaluation. Further, we present several ablation analyses on alternative layer sharing methods, soft versus hard sharing, dynamic multi-armed bandit sampling approaches, and our model's learned entailment and paraphrasing skills."
]
} |
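The soft-sharing idea described in the related-work paragraph of the record above — constraining target-task parameters to stay close to source-task parameters instead of tying them outright — can be illustrated with a small sketch. This is not the cited KL-based CRF constraint or the DTN gating mechanism; it is a generic L2 version in numpy, and the penalty weight `lam` and the toy dimensions are assumptions made only for illustration.

```python
import numpy as np

def soft_sharing_penalty(theta_src, theta_tgt, lam=0.1):
    """L2 version of a soft parameter-sharing constraint: target-task parameters
    are pulled toward the (frozen) source-task parameters instead of being
    hard-tied to them."""
    diff = theta_tgt - theta_src
    penalty = lam * np.sum(diff ** 2)
    grad_wrt_tgt = 2.0 * lam * diff          # added to the task-loss gradient
    return penalty, grad_wrt_tgt

# Toy usage: one gradient step on the target task under the soft constraint.
rng = np.random.default_rng(0)
theta_src = rng.normal(size=5)               # pretrained / source-task parameters (fixed)
theta_tgt = theta_src + rng.normal(scale=0.5, size=5)
task_grad = rng.normal(size=5)               # stand-in for the target task-loss gradient
penalty, pen_grad = soft_sharing_penalty(theta_src, theta_tgt)
theta_tgt -= 0.01 * (task_grad + pen_grad)   # joint update
print(f"sharing penalty before the step: {penalty:.4f}")
```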
1812.05159 | 2951013084 | Inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks as they train on single classification tasks. Our goal is to understand whether a related phenomenon occurs when data does not undergo a clear distributional shift. We define a 'forgetting event' to have occurred when an individual training example transitions from being classified correctly to incorrectly over the course of learning. Across several benchmark data sets, we find that: (i) certain examples are forgotten with high frequency, and some not at all; (ii) a data set's (un)forgettable examples generalize across neural architectures; and (iii) based on forgetting dynamics, a significant fraction of examples can be omitted from the training data set while still maintaining state-of-the-art generalization performance. | Curriculum learning is a paradigm that favors learning along a curriculum of examples of increasing difficulty . This general idea has found success in a variety of areas since its introduction . implemented their curriculum by treating examples with a small loss as easy. In our experiments, we empirically validate that unforgettable examples can be safely removed without compromising generalization. relate sample importance to the norm of its loss gradient with respect to the parameters of the network. learn a curriculum directly from data in order to minimize the task loss. also study the robustness of their method in the context of noisy examples. This relates to a rich literature on outlier detection and removal of examples with noisy labels . We will provide evidence that noisy examples rank higher in terms of the number of forgetting events. @cite_0 borrow influence functions from robust statistics to evaluate the impact of the training examples on a model's predictions. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2523060838"
],
"abstract": [
"The stochastic gradient descent (SGD) method and its variants are algorithms of choice for many Deep Learning tasks. These methods operate in a small-batch regime wherein a fraction of the training data, say @math - @math data points, is sampled to compute an approximation to the gradient. It has been observed in practice that when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize. We investigate the cause for this generalization drop in the large-batch regime and present numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions - and as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We discuss several strategies to attempt to help large-batch methods eliminate this generalization gap."
]
} |
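The forgetting-event statistic defined in the abstract of the record above (a transition from correctly to incorrectly classified over the course of training) is straightforward to compute from per-epoch predictions. The sketch below is an illustrative numpy version; the array shapes and variable names are assumptions rather than the authors' code.

```python
import numpy as np

def count_forgetting_events(pred_per_epoch, labels):
    """pred_per_epoch: (n_epochs, n_examples) predicted labels recorded after each epoch.
    Returns, per example, how often it went from correctly to incorrectly classified."""
    correct = (pred_per_epoch == labels[None, :])   # (n_epochs, n_examples) booleans
    forgot = correct[:-1] & ~correct[1:]            # correct at epoch t, wrong at t + 1
    return forgot.sum(axis=0)

# Toy example: 3 examples tracked over 4 epochs.
labels = np.array([0, 1, 2])
preds = np.array([[0, 1, 0],
                  [0, 0, 2],    # example 1 is forgotten here
                  [0, 1, 2],
                  [1, 1, 2]])   # example 0 is forgotten here
print(count_forgetting_events(preds, labels))       # -> [1 1 0]
```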
1812.05276 | 2904140919 | We present a novel 3D object detection framework, named IPOD, based on raw point cloud. It seeds object proposal for each point, which is the basic element. This paradigm provides us with high recall and high fidelity of information, leading to a suitable way to process point cloud data. We design an end-to-end trainable architecture, where features of all points within a proposal are extracted from the backbone network and achieve a proposal feature for final bounding inference. These features with both context information and precise point cloud coordinates yield improved performance. We conduct experiments on KITTI dataset, evaluating our performance in terms of 3D object detection, Bird's Eye View (BEV) detection and 2D object detection. Our method accomplishes new state-of-the-art, showing great advantage on the hard set. | There have been several approaches to tackle semantic segmentation on point clouds. In @cite_7 , a projection function converts LIDAR points to a UV map, which is then classified by 2D semantic segmentation @cite_7 @cite_9 @cite_4 at the pixel level. In @cite_13 @cite_18 , a multi-view based function produces the segmentation mask. The method fuses information from different views. Other solutions, such as @cite_5 @cite_2 @cite_6 @cite_8 @cite_24 , segment the point cloud from raw LIDAR data. They directly generate features on each point while keeping the original structural information. Specifically, a max-pooling method gathers the global feature; it is then concatenated with the local feature for processing. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_9",
"@cite_6",
"@cite_24",
"@cite_2",
"@cite_5",
"@cite_13"
],
"mid": [
"2594519801",
"2412782625",
"2766577666",
"2810641456",
"2952596663",
"",
"2963719584",
"2560609797",
"2624503621",
"2795014656"
],
"abstract": [
"A key requirement for leveraging supervised deep learning methods is the availability of large, labeled datasets. Unfortunately, in the context of RGB-D scene understanding, very little data is available – current datasets cover a small range of scene views and have limited semantic annotations. To address this issue, we introduce ScanNet, an RGB-D video dataset containing 2.5M views in 1513 scenes annotated with 3D camera poses, surface reconstructions, and semantic segmentations. To collect this data, we designed an easy-to-use and scalable RGB-D capture system that includes automated surface reconstruction and crowdsourced semantic annotation. We show that using this data helps achieve state-of-the-art performance on several 3D scene understanding tasks, including 3D object classification, semantic voxel labeling, and CAD model retrieval.",
"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.",
"In this paper, we address semantic segmentation of road-objects from 3D LiDAR point clouds. In particular, we wish to detect and categorize instances of interest, such as cars, pedestrians and cyclists. We formulate this problem as a point- wise classification problem, and propose an end-to-end pipeline called SqueezeSeg based on convolutional neural networks (CNN): the CNN takes a transformed LiDAR point cloud as input and directly outputs a point-wise label map, which is then refined by a conditional random field (CRF) implemented as a recurrent layer. Instance-level labels are then obtained by conventional clustering algorithms. Our CNN model is trained on LiDAR point clouds from the KITTI dataset, and our point-wise segmentation labels are derived from 3D bounding boxes from KITTI. To obtain extra training data, we built a LiDAR simulator into Grand Theft Auto V (GTA-V), a popular video game, to synthesize large amounts of realistic training data. Our experiments show that SqueezeSeg achieves high accuracy with astonishingly fast and stable runtime (8.7 ms per frame), highly desirable for autonomous driving applications. Furthermore, additionally training on synthesized data boosts validation accuracy on real-world data. Our source code and synthesized data will be open-sourced.",
"Recently, 3D understanding research sheds light on extracting features from point cloud directly, which requires effective shape pattern description of point clouds. Inspired by the outstanding 2D shape descriptor SIFT, we design a module called PointSIFT that encodes information of different orientations and is adaptive to scale of shape. Specifically, an orientation-encoding unit is designed to describe eight crucial orientations, and multi-scale representation is achieved by stacking several orientation-encoding units. PointSIFT module can be integrated into various PointNet-based architecture to improve the representation ability. Extensive experiments show our PointSIFT-based framework outperforms state-of-the-art method on standard benchmark datasets. The code and trained model will be published accompanied by this paper.",
"Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.",
"",
"This paper presents SO-Net, a permutation invariant architecture for deep learning with orderless point clouds. The SO-Net models the spatial distribution of point cloud by building a Self-Organizing Map (SOM). Based on the SOM, SO-Net performs hierarchical feature extraction on individual points and SOM nodes, and ultimately represents the input point cloud by a single feature vector. The receptive field of the network can be systematically adjusted by conducting point-to-node k nearest neighbor search. In recognition tasks such as point cloud reconstruction, classification, object part segmentation and shape retrieval, our proposed network demonstrates performance that is similar with or better than state-of-the-art approaches. In addition, the training speed is significantly faster than existing point cloud recognition networks because of the parallelizability and simplicity of the proposed architecture. Our code is available at the project website.1",
"Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption.",
"Few prior works study deep learning on point sets. PointNet by is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.",
"We present 3DMV, a novel method for 3D semantic scene segmentation of RGB-D scans in indoor environments using a joint 3D-multi-view prediction network. In contrast to existing methods that either use geometry or RGB data as input for this task, we combine both data modalities in a joint, end-to-end network architecture. Rather than simply projecting color data into a volumetric grid and operating solely in 3D – which would result in insufficient detail – we first extract feature maps from associated RGB images. These features are then mapped into the volumetric feature grid of a 3D network using a differentiable back-projection layer. Since our target is 3D scanning scenarios with possibly many frames, we use a multi-view pooling approach in order to handle a varying number of RGB input views. This learned combination of RGB and geometric features with our joint 2D-3D architecture achieves significantly better results than existing baselines. For instance, our final result on the ScanNet 3D segmentation benchmark increases from 52.8 to 75 accuracy compared to existing volumetric architectures."
]
} |
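The last sentence of the related-work paragraph in the record above (a max-pooled global feature concatenated back onto per-point local features, as in the PointNet family) can be sketched in a few lines. The feature dimensions are arbitrary and the per-point MLPs are omitted; this is an illustration, not any cited implementation.

```python
import numpy as np

def pointnet_style_features(local_feats):
    """local_feats: (n_points, d) per-point features.
    Max-pool over points to obtain a global descriptor, then concatenate it back
    onto every point for per-point prediction (e.g. semantic segmentation)."""
    global_feat = local_feats.max(axis=0)                     # (d,)
    tiled = np.broadcast_to(global_feat, local_feats.shape)   # (n_points, d)
    return np.concatenate([local_feats, tiled], axis=1)       # (n_points, 2 * d)

pts = np.random.rand(1024, 64)                # e.g. outputs of a shared per-point MLP
print(pointnet_style_features(pts).shape)     # (1024, 128)
```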
1812.05276 | 2904140919 | We present a novel 3D object detection framework, named IPOD, based on raw point cloud. It seeds object proposal for each point, which is the basic element. This paradigm provides us with high recall and high fidelity of information, leading to a suitable way to process point cloud data. We design an end-to-end trainable architecture, where features of all points within a proposal are extracted from the backbone network and achieve a proposal feature for final bounding inference. These features with both context information and precise point cloud coordinates yield improved performance. We conduct experiments on KITTI dataset, evaluating our performance in terms of 3D object detection, Bird's Eye View (BEV) detection and 2D object detection. Our method accomplishes new state-of-the-art, showing great advantage on the hard set. | @math Method: There are several LIDAR-data based 3D object detection frameworks using voxel-grid representation. In @cite_33 , each non-empty voxel is encoded with 6 statistical quantities computed from the points within it. A binary encoding is used in @cite_21 for each voxel grid. These methods rely on hand-crafted representations. VoxelNet @cite_22 instead stacks many VFE layers to generate a machine-learned representation for each voxel. | {
"cite_N": [
"@cite_22",
"@cite_21",
"@cite_33"
],
"mid": [
"2769571673",
"2951728436",
"2293349265"
],
"abstract": [
"Accurate detection of objects in 3D point clouds is a central problem in many applications, such as autonomous navigation, housekeeping robots, and augmented virtual reality. To interface a highly sparse LiDAR point cloud with a region proposal network (RPN), most existing efforts have focused on hand-crafted feature representations, for example, a bird's eye view projection. In this work, we remove the need of manual feature engineering for 3D point clouds and propose VoxelNet, a generic 3D detection network that unifies feature extraction and bounding box prediction into a single stage, end-to-end trainable deep network. Specifically, VoxelNet divides a point cloud into equally spaced 3D voxels and transforms a group of points within each voxel into a unified feature representation through the newly introduced voxel feature encoding (VFE) layer. In this way, the point cloud is encoded as a descriptive volumetric representation, which is then connected to a RPN to generate detections. Experiments on the KITTI car detection benchmark show that VoxelNet outperforms the state-of-the-art LiDAR based 3D detection methods by a large margin. Furthermore, our network learns an effective discriminative representation of objects with various geometries, leading to encouraging results in 3D detection of pedestrians and cyclists, based on only LiDAR.",
"2D fully convolutional network has been recently successfully applied to object detection from images. In this paper, we extend the fully convolutional network based detection techniques to 3D and apply it to point cloud data. The proposed approach is verified on the task of vehicle detection from lidar point cloud for autonomous driving. Experiments on the KITTI dataset shows a significant performance improvement over the previous point cloud based detection approaches.",
"This paper proposes an efficient and effective scheme to applying the sliding window approach popular in computer vision to 3D data. Specifically, the sparse nature of the problem is exploited via a voting scheme to enable a search through all putative object locations at any orientation. We prove that this voting scheme is mathematically equivalent to a convolution on a sparse feature grid and thus enables the processing, in full 3D, of any point cloud irrespective of the number of vantage points required to construct it. As such it is versatile enough to operate on data from popular 3D laser scanners such as a Velodyne as well as on 3D data obtained from increasingly popular push-broom configurations. Our approach is “embarrassingly parallelisable” and capable of processing a point cloud containing over 100K points at eight orientations in less than 0.5s. For the object classes car, pedestrian and bicyclist the resulting detector achieves best-in-class detection and timing performance relative to prior art on the KITTI dataset as well as compared to another existing 3D object detection approach."
]
} |
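As a rough illustration of the hand-crafted voxel encodings discussed in the record above, the snippet below buckets points into a voxel grid and attaches simple per-voxel statistics (point count, mean and standard deviation of coordinates). The specific six statistics of @cite_33 are not reproduced here; the voxel size and chosen statistics are assumptions for illustration only.

```python
import numpy as np
from collections import defaultdict

def voxelize(points, voxel_size=0.2):
    """points: (N, 3) LIDAR coordinates. Returns a dict mapping voxel index ->
    a small hand-crafted feature vector [point count, mean xyz, std xyz]."""
    buckets = defaultdict(list)
    for p in points:
        idx = tuple(np.floor(p / voxel_size).astype(int))
        buckets[idx].append(p)
    feats = {}
    for idx, pts in buckets.items():
        pts = np.stack(pts)
        feats[idx] = np.concatenate([[len(pts)], pts.mean(axis=0), pts.std(axis=0)])
    return feats

cloud = np.random.rand(5000, 3) * 10.0
voxels = voxelize(cloud)
print(len(voxels), "non-empty voxels; feature dim =", len(next(iter(voxels.values()))))
```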
1812.05276 | 2904140919 | We present a novel 3D object detection framework, named IPOD, based on raw point cloud. It seeds object proposal for each point, which is the basic element. This paradigm provides us with high recall and high fidelity of information, leading to a suitable way to process point cloud data. We design an end-to-end trainable architecture, where features of all points within a proposal are extracted from the backbone network and achieve a proposal feature for final bounding inference. These features with both context information and precise point cloud coordinates yield improved performance. We conduct experiments on KITTI dataset, evaluating our performance in terms of 3D object detection, Bird's Eye View (BEV) detection and 2D object detection. Our method accomplishes new state-of-the-art, showing great advantage on the hard set. | @math Method: MV3D @cite_1 projects the LIDAR point cloud to BEV and trains a Region Proposal Network (RPN) to generate positive proposals. Afterwards, it merges features from BEV, image view and front view in order to generate refined 3D bounding boxes. AVOD @cite_16 improves MV3D by fusing image and BEV features like @cite_31 . Unlike MV3D, which only merges features in the refinement phase, it also merges features from multiple views in the RPN phase to generate more accurate positive proposals. However, these methods still have limitations when detecting small objects such as pedestrians and cyclists, and they do not handle several cases that have multiple objects along the depth direction. | {
"cite_N": [
"@cite_31",
"@cite_16",
"@cite_1"
],
"mid": [
"2949533892",
"2774996270",
"2950952351"
],
"abstract": [
"Feature pyramids are a basic component in recognition systems for detecting objects at different scales. But recent deep learning object detectors have avoided pyramid representations, in part because they are compute and memory intensive. In this paper, we exploit the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost. A top-down architecture with lateral connections is developed for building high-level semantic feature maps at all scales. This architecture, called a Feature Pyramid Network (FPN), shows significant improvement as a generic feature extractor in several applications. Using FPN in a basic Faster R-CNN system, our method achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles, surpassing all existing single-model entries including those from the COCO 2016 challenge winners. In addition, our method can run at 5 FPS on a GPU and thus is a practical and accurate solution to multi-scale object detection. Code will be made publicly available.",
"We present AVOD, an Aggregate View Object Detection network for autonomous driving scenarios. The proposed neural network architecture uses LIDAR point clouds and RGB images to generate features that are shared by two subnetworks: a region proposal network (RPN) and a second stage detector network. The proposed RPN uses a novel architecture capable of performing multimodal feature fusion on high resolution feature maps to generate reliable 3D object proposals for multiple object classes in road scenes. Using these proposals, the second stage detection network performs accurate oriented 3D bounding box regression and category classification to predict the extents, orientation, and classification of objects in 3D space. Our proposed architecture is shown to produce state of the art results on the KITTI 3D object detection benchmark while running in real time with a low memory footprint, making it a suitable candidate for deployment on autonomous vehicles. Code is at: this https URL",
"This paper aims at high-accuracy 3D object detection in autonomous driving scenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LIDAR point cloud and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the bird's eye view representation of 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25 and 30 AP on the tasks of 3D localization and 3D detection. In addition, for 2D detection, our approach obtains 10.3 higher AP than the state-of-the-art on the hard data among the LIDAR-based methods."
]
} |
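The BEV projection underlying the MV3D/AVOD-style pipelines described above amounts to discretizing the ground-plane coordinates of the point cloud into a 2D grid. A minimal height-map version is sketched below; real systems use several channels (height slices, intensity, density) and different ranges, so the parameters here are illustrative only.

```python
import numpy as np

def point_cloud_to_bev(points, x_range=(0, 70), y_range=(-40, 40), res=0.1):
    """points: (N, 3) with columns x (forward), y (left), z (up).
    Returns a single-channel BEV height map holding the maximum z per cell."""
    h = round((x_range[1] - x_range[0]) / res)
    w = round((y_range[1] - y_range[0]) / res)
    bev = np.zeros((h, w), dtype=np.float32)
    xs = ((points[:, 0] - x_range[0]) / res).astype(int)
    ys = ((points[:, 1] - y_range[0]) / res).astype(int)
    keep = (xs >= 0) & (xs < h) & (ys >= 0) & (ys < w)
    for x, y, z in zip(xs[keep], ys[keep], points[keep, 2]):
        bev[x, y] = max(bev[x, y], z)          # keep the highest point in each cell
    return bev

cloud = np.random.rand(10000, 3) * [70, 80, 3] + [0, -40, 0]
print(point_cloud_to_bev(cloud).shape)         # (700, 800)
```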
1812.05276 | 2904140919 | We present a novel 3D object detection framework, named IPOD, based on raw point cloud. It seeds object proposal for each point, which is the basic element. This paradigm provides us with high recall and high fidelity of information, leading to a suitable way to process point cloud data. We design an end-to-end trainable architecture, where features of all points within a proposal are extracted from the backbone network and achieve a proposal feature for final bounding inference. These features with both context information and precise point cloud coordinates yield improved performance. We conduct experiments on KITTI dataset, evaluating our performance in terms of 3D object detection, Bird's Eye View (BEV) detection and 2D object detection. Our method accomplishes new state-of-the-art, showing great advantage on the hard set. | @math Method: F-PointNet @cite_20 is the first method to utilize raw point clouds to predict 3D objects. Initially, a 2D object detection module @cite_23 is applied to generate frustum proposals. Then it crops points and passes them into an instance segmentation module. Finally, it regresses 3D bounding boxes from the positive points output by the segmentation module. The final performance heavily relies on the detection results from the 2D object detector. In contrast, our design is general and effectively utilizes the strong representation power of point clouds. | {
"cite_N": [
"@cite_23",
"@cite_20"
],
"mid": [
"2579985080",
"2769205412"
],
"abstract": [
"The main contribution of this paper is an approach for introducing additional context into state-of-the-art general object detection. To achieve this we first combine a state-of-the-art classifier (Residual-101[14]) with a fast detection framework (SSD[18]). We then augment SSD+Residual-101 with deconvolution layers to introduce additional large-scale context in object detection and improve accuracy, especially for small objects, calling our resulting system DSSD for deconvolutional single shot detector. While these two contributions are easily described at a high-level, a naive implementation does not succeed. Instead we show that carefully adding additional stages of learned transformations, specifically a module for feed-forward connections in deconvolution and a new output module, enables this new approach and forms a potential way forward for further detection research. Results are shown on both PASCAL VOC and COCO detection. Our DSSD with @math input achieves 81.5 mAP on VOC2007 test, 80.0 mAP on VOC2012 test, and 33.2 mAP on COCO, outperforming a state-of-the-art method R-FCN[3] on each dataset.",
"In this work, we study 3D object detection from RGB-D data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. However, a key challenge of this approach is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). Instead of solely relying on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects. Benefited from learning directly in raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability."
]
} |
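The frustum-proposal step summarized in the record above can be approximated as: project each LIDAR point into the image and keep the points falling inside a 2D detection box. The sketch below assumes a single 3x4 projection matrix `P` from the point-cloud frame to pixels, standing in for the actual camera calibration chain, so it is only a simplified illustration of the idea.

```python
import numpy as np

def crop_frustum(points, P, box2d):
    """points: (N, 3) in the frame expected by P; P: (3, 4) projection matrix to
    pixels; box2d: (xmin, ymin, xmax, ymax) from a 2D detector.
    Returns the subset of points whose image projection falls inside the box."""
    homo = np.hstack([points, np.ones((len(points), 1))])     # (N, 4)
    uvw = homo @ P.T                                          # (N, 3)
    in_front = uvw[:, 2] > 1e-6
    uv = uvw[:, :2] / np.maximum(uvw[:, 2:3], 1e-6)           # perspective divide
    xmin, ymin, xmax, ymax = box2d
    inside = ((uv[:, 0] >= xmin) & (uv[:, 0] <= xmax)
              & (uv[:, 1] >= ymin) & (uv[:, 1] <= ymax) & in_front)
    return points[inside]

# Toy usage with a made-up projection matrix (a real pipeline uses calibration data).
P = np.array([[700., 0., 600., 0.], [0., 700., 200., 0.], [0., 0., 1., 0.]])
pts = np.random.rand(2000, 3) * [20, 10, 40] - [10, 5, 0]
print(crop_frustum(pts, P, (500, 100, 700, 300)).shape)
```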
1812.05313 | 2905233337 | Semi-Supervised Learning (SSL) has been proved to be an effective way to leverage both labeled and unlabeled data at the same time. Recent semi-supervised approaches focus on deep neural networks and have achieved promising results on several benchmarks: CIFAR10, CIFAR100 and SVHN. However, most of their experiments are based on models trained from scratch instead of pre-trained models. On the other hand, transfer learning has demonstrated its value when the target domain has limited labeled data. Here comes the intuitive question: is it possible to incorporate SSL when fine-tuning a pre-trained model? We comprehensively study how SSL methods starting from pretrained models perform under varying conditions, including training strategies, architecture choice and datasets. From this study, we obtain several interesting and useful observations. While practitioners have had an intuitive understanding of these observations, we do a comprehensive empirical analysis and demonstrate that: (1) the gains from SSL techniques over a fully-supervised baseline are smaller when trained from a pre-trained model than when trained from random initialization, (2) when the domain of the source data used to train the pre-trained model differs significantly from the domain of the target task, the gains from SSL are significantly higher and (3) some SSL methods are able to advance fully-supervised baselines (like Pseudo-Label). We hope our studies can deepen the understanding of SSL research and facilitate the process of developing more effective SSL methods to utilize pre-trained models. Code is now available at github. | Based on the network architecture presented in @cite_39 , Rasmus et al. @cite_14 proposed the ladder network, a model which is trained to simultaneously minimize the sum of supervised and unsupervised reconstruction cost functions by back-propagation. Laine and Aila @cite_19 then simplified the ladder network to the @math -model and introduced self-ensembling, a consensus prediction of the unknown labels using an exponential moving average of the outputs of the network-in-training on different epochs. Tarvainen and Valpola @cite_9 developed a method, named Mean Teacher, which keeps an exponential moving average of model weights instead of the self-ensembling mentioned above. | {
"cite_N": [
"@cite_19",
"@cite_9",
"@cite_14",
"@cite_39"
],
"mid": [
"2951970475",
"2592691248",
"2952229419",
"2147062276"
],
"abstract": [
"In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. We introduce self-ensembling, where we form a consensus prediction of the unknown labels using the outputs of the network-in-training on different epochs, and most importantly, under different regularization and input augmentation conditions. This ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training. Using our method, we set new records for two standard semi-supervised learning benchmarks, reducing the (non-augmented) classification error rate from 18.44 to 7.05 in SVHN with 500 labels and from 18.63 to 16.55 in CIFAR-10 with 4000 labels, and further to 5.12 and 12.16 by enabling the standard augmentations. We additionally obtain a clear improvement in CIFAR-100 classification accuracy by using random images from the Tiny Images dataset as unlabeled extra inputs during training. Finally, we demonstrate good tolerance to incorrect labels.",
"The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35 on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55 to 6.28 , and on ImageNet 2012 with 10 of the labels from 35.24 to 9.11 .",
"We combine supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pre-training. Our work builds on the Ladder network proposed by Valpola (2015), which we extend by combining the model with supervision. We show that the resulting model reaches state-of-the-art performance in semi-supervised MNIST and CIFAR-10 classification, in addition to permutation-invariant MNIST classification with all labels.",
"Abstract A network supporting deep unsupervised learning is presented. The network is an autoencoder with lateral shortcut connections from the encoder to the decoder at each level of the hierarchy. The lateral shortcut connections allow the higher levels of the hierarchy to focus on abstract invariant features. Whereas autoencoders are analogous to latent variable models with a single layer of stochastic variables, the proposed network is analogous to hierarchical latent variable models. Learning combines denoising autoencoder and denoising sources separation frameworks. Each layer of the network contributes to the cost function a term which measures the distance of the representations produced by the encoder and the decoder. Since training signals originate from all levels of the network, all layers can learn efficiently even in deep networks. The speedup offered by cost terms from higher levels of the hierarchy and the ability to learn invariant features are demonstrated in experiments."
]
} |
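Both self-ensembling and Mean Teacher, as summarized in the related-work paragraph of the record above, reduce to maintaining an exponential moving average — of per-example outputs in the former, of the model weights in the latter. A framework-agnostic sketch of the weight-EMA update plus a consistency loss is given below; the decay `alpha` and the dict-of-arrays stand-in for a model are assumptions.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """teacher, student: dicts mapping parameter names to numpy arrays.
    After each student optimizer step the teacher weights are moved toward the
    student weights: theta_teacher <- alpha * theta_teacher + (1 - alpha) * theta_student."""
    for name, w_s in student.items():
        teacher[name] = alpha * teacher[name] + (1.0 - alpha) * w_s

def consistency_loss(student_probs, teacher_probs):
    """Mean squared error between student and teacher class probabilities on
    unlabeled inputs (each side usually sees a different augmentation)."""
    return np.mean((student_probs - teacher_probs) ** 2)

# Toy usage
student = {"w": np.random.rand(3, 3)}
teacher = {k: v.copy() for k, v in student.items()}
student["w"] -= 0.1                            # pretend an optimizer step happened
ema_update(teacher, student)
print(consistency_loss(np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1])))
```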
1812.05313 | 2905233337 | Semi-Supervised Learning (SSL) has been proved to be an effective way to leverage both labeled and unlabeled data at the same time. Recent semi-supervised approaches focus on deep neural networks and have achieved promising results on several benchmarks: CIFAR10, CIFAR100 and SVHN. However, most of their experiments are based on models trained from scratch instead of pre-trained models. On the other hand, transfer learning has demonstrated its value when the target domain has limited labeled data. Here comes the intuitive question: is it possible to incorporate SSL when fine-tuning a pre-trained model? We comprehensively study how SSL methods starting from pretrained models perform under varying conditions, including training strategies, architecture choice and datasets. From this study, we obtain several interesting and useful observations. While practitioners have had an intuitive understanding of these observations, we do a comprehensive empirical analysis and demonstrate that: (1) the gains from SSL techniques over a fully-supervised baseline are smaller when trained from a pre-trained model than when trained from random initialization, (2) when the domain of the source data used to train the pre-trained model differs significantly from the domain of the target task, the gains from SSL are significantly higher and (3) some SSL methods are able to advance fully-supervised baselines (like Pseudo-Label). We hope our studies can deepen the understanding of SSL research and facilitate the process of developing more effective SSL methods to utilize pre-trained models. Code is now available at github. | Recently, Chen et al. @cite_42 proposed a method capable of exploiting the memory of a model and introduced a memory mechanism into the network training process. | {
"cite_N": [
"@cite_42"
],
"mid": [
"2895771689"
],
"abstract": [
"We consider the semi-supervised multi-class classification problem of learning from sparse labelled and abundant unlabelled training data. To address this problem, existing semi-supervised deep learning methods often rely on the up-to-date “network-in-training” to formulate the semi-supervised learning objective. This ignores both the discriminative feature representation and the model inference uncertainty revealed by the network in the preceding learning iterations, referred to as the memory of model learning. In this work, we propose a novel Memory-Assisted Deep Neural Network (MA-DNN) capable of exploiting the memory of model learning to enable semi-supervised learning. Specifically, we introduce a memory mechanism into the network training process as an assimilation-accommodation interaction between the network and an external memory module. Experiments demonstrate the advantages of the proposed MA-DNN model over the state-of-the-art semi-supervised deep learning methods on three image classification benchmark datasets: SVHN, CIFAR10, and CIFAR100."
]
} |
1812.05313 | 2905233337 | Semi-Supervised Learning (SSL) has been proved to be an effective way to leverage both labeled and unlabeled data at the same time. Recent semi-supervised approaches focus on deep neural networks and have achieved promising results on several benchmarks: CIFAR10, CIFAR100 and SVHN. However, most of their experiments are based on models trained from scratch instead of pre-trained models. On the other hand, transfer learning has demonstrated its value when the target domain has limited labeled data. Here comes the intuitive question: is it possible to incorporate SSL when fine-tuning a pre-trained model? We comprehensively study how SSL methods starting from pretrained models perform under varying conditions, including training strategies, architecture choice and datasets. From this study, we obtain several interesting and useful observations. While practitioners have had an intuitive understanding of these observations, we do a comprehensive empirical analysis and demonstrate that: (1) the gains from SSL techniques over a fully-supervised baseline are smaller when trained from a pre-trained model than when trained from random initialization, (2) when the domain of the source data used to train the pre-trained model differs significantly from the domain of the target task, the gains from SSL are significantly higher and (3) some SSL methods are able to advance fully-supervised baselines (like Pseudo-Label). We hope our studies can deepen the understanding of SSL research and facilitate the process of developing more effective SSL methods to utilize pre-trained models. Code is now available at github. | The objective of GAN training is to generate visually realistic images, which seems to make GANs a very suitable choice for SSL, as these images can be taken as additional training data. Springenberg @cite_34 presented a method for learning a discriminative classifier from unlabeled or partially labeled data, which can be interpreted as the first attempt to apply GANs to SSL. Salimans et al. @cite_20 improved the techniques for training GANs and showed how a discriminator that also predicts classes can be used for SSL. Dai et al. @cite_27 gave the definition of a preferred generator and derived a new formulation for improving previous feature matching GANs. Li et al. @cite_37 pointed out that a single discriminator only estimates the data without considering the labels and proposed Triple-GAN to address this problem. | {
"cite_N": [
"@cite_27",
"@cite_34",
"@cite_37",
"@cite_20"
],
"mid": [
"2619371851",
"2178768799",
"2596763562",
"2432004435"
],
"abstract": [
"Semi-supervised learning methods based on generative adversarial networks (GANs) obtained strong empirical results, but it is not clear 1) how the discriminator benefits from joint training with a generator, and 2) why good semi-supervised classification performance and a good generator cannot be obtained at the same time. Theoretically, we show that given the discriminator objective, good semisupervised learning indeed requires a bad generator, and propose the definition of a preferred generator. Empirically, we derive a novel formulation based on our analysis that substantially improves over feature matching GANs, obtaining state-of-the-art results on multiple benchmark datasets.",
"In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades-off mutual information between observed examples and their predicted categorical class distribution, against robustness of the classifier to an adversarial generative model. The resulting algorithm can either be interpreted as a natural generalization of the generative adversarial networks (GAN) framework or as an extension of the regularized information maximization (RIM) framework to robust classification against an optimal adversary. We empirically evaluate our method - which we dub categorical generative adversarial networks (or CatGAN) - on synthetic data as well as on challenging image classification tasks, demonstrating the robustness of the learned classifiers. We further qualitatively assess the fidelity of samples generated by the adversarial generator that is learned alongside the discriminative classifier, and identify links between the CatGAN objective and discriminative clustering algorithms (such as RIM).",
"Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL). However, existing GANs in SSL have two problems: (1) the generator and the discriminator (i.e. the classifier) may not be optimal at the same time; and (2) the generator cannot control the semantics of the generated samples. The problems essentially arise from the two-player formulation, where a single discriminator shares incompatible roles of identifying fake samples and predicting labels and it only estimates the data without considering the labels. To address the problems, we present triple generative adversarial net (Triple-GAN), which consists of three players---a generator, a discriminator and a classifier. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible utilities to ensure that the distributions characterized by the classifier and the generator both converge to the data distribution. Our results on various datasets demonstrate that Triple-GAN as a unified model can simultaneously (1) achieve the state-of-the-art classification results among deep generative models, and (2) disentangle the classes and styles of the input and transfer smoothly in the data space via interpolation in the latent space class-conditionally.",
"We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3 . We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes."
]
} |
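In the improved-GAN formulation of Salimans et al. referenced above, the discriminator is a (K+1)-way classifier whose extra class marks generated samples, and the unsupervised loss is built from the probability mass assigned to that class. The numpy sketch below only illustrates that loss decomposition; the batch sizes and logits are toy values and this is not the authors' implementation.

```python
import numpy as np

def log_softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def discriminator_losses(logits_labeled, labels, logits_unlabeled, logits_fake):
    """logits_*: (batch, K + 1) arrays where index K is the extra 'generated' class."""
    K = logits_labeled.shape[1] - 1
    lsm_lab = log_softmax(logits_labeled)
    supervised = -lsm_lab[np.arange(len(labels)), labels].mean()
    # Real unlabeled samples should receive little mass on the fake class...
    p_fake_unl = np.exp(log_softmax(logits_unlabeled))[:, K]
    # ...while generated samples should receive most of their mass there.
    p_fake_gen = np.exp(log_softmax(logits_fake))[:, K]
    unsupervised = (-np.log(1.0 - p_fake_unl + 1e-8).mean()
                    - np.log(p_fake_gen + 1e-8).mean())
    return supervised, unsupervised

sup, unsup = discriminator_losses(np.random.randn(4, 11), np.array([0, 3, 7, 9]),
                                  np.random.randn(8, 11), np.random.randn(8, 11))
print(sup, unsup)
```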
1812.05313 | 2905233337 | Semi-Supervised Learning (SSL) has been proved to be an effective way to leverage both labeled and unlabeled data at the same time. Recent semi-supervised approaches focus on deep neural networks and have achieved promising results on several benchmarks: CIFAR10, CIFAR100 and SVHN. However, most of their experiments are based on models trained from scratch instead of pre-trained models. On the other hand, transfer learning has demonstrated its value when the target domain has limited labeled data. Here comes the intuitive question: is it possible to incorporate SSL when fine-tuning a pre-trained model? We comprehensively study how SSL methods starting from pretrained models perform under varying conditions, including training strategies, architecture choice and datasets. From this study, we obtain several interesting and useful observations. While practitioners have had an intuitive understanding of these observations, we do a comprehensive empirical analysis and demonstrate that: (1) the gains from SSL techniques over a fully-supervised baseline are smaller when trained from a pre-trained model than when trained from random initialization, (2) when the domain of the source data used to train the pre-trained model differs significantly from the domain of the target task, the gains from SSL are significantly higher and (3) some SSL methods are able to advance fully-supervised baselines (like Pseudo-Label). We hope our studies can deepen the understanding of SSL research and facilitate the process of developing more effective SSL methods to utilize pre-trained models. Code is now available at github. | Co-Training, first proposed by Blum and Mitchell @cite_5 , utilizes the diversity between two classifiers and lets them label unlabeled data for each other. Zhou and Li @cite_6 presented Tri-Training, which uses bootstrap sampling to obtain three different training sets and generates three classifiers from these training sets respectively. For deep models, Chen @cite_4 built Tri-Net to combine tri-training with deep networks. | {
"cite_N": [
"@cite_5",
"@cite_4",
"@cite_6"
],
"mid": [
"2048679005",
"2808139377",
"2133556223"
],
"abstract": [
"We consider the problem of using a large unlabeled sample to boost performance of a learning algorit,hrn when only a small set of labeled examples is available. In particular, we consider a problem setting motivated by the task of learning to classify web pages, in which the description of each example can be partitioned into two distinct views. For example, the description of a web page can be partitioned into the words occurring on that page, and the words occurring in hyperlinks t,hat point to that page. We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment, a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm’s predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice. *This research was supported in part by the DARPA HPKB program under contract F30602-97-1-0215 and by NSF National Young investigator grant CCR-9357793. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. TO copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and or a fee. COLT 98 Madison WI USA Copyright ACM 1998 l-58113-057--0 98 7... 5.00 92 Tom Mitchell School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213-3891 mitchell+@cs.cmu.edu",
"Deep neural networks have witnessed great successes in various real applications, but it requires a large number of labeled data for training. In this paper, we propose tri-net, a deep neural network which is able to use massive unlabeled data to help learning with limited labeled data. We consider model initialization, diversity augmentation and pseudo-label editing simultaneously. In our work, we utilize output smearing to initialize modules, use fine-tuning on labeled data to augment diversity and eliminate unstable pseudo-labels to alleviate the influence of suspicious pseudolabeled data. Experiments show that our method achieves the best performance in comparison with state-of-the-art semi-supervised deep learning methods. In particular, it achieves 8:30 error rate on CIFAR- 10 by using only 4000 labeled examples.",
"In many practical data mining applications, such as Web page classification, unlabeled training examples are readily available, but labeled ones are fairly expensive to obtain. Therefore, semi-supervised learning algorithms such as co-training have attracted much attention. In this paper, a new co-training style semi-supervised learning algorithm, named tri-training, is proposed. This algorithm generates three classifiers from the original labeled example set. These classifiers are then refined using unlabeled examples in the tri-training process. In detail, in each round of tri-training, an unlabeled example is labeled for a classifier if the other two classifiers agree on the labeling, under certain conditions. Since tri-training neither requires the instance space to be described with sufficient and redundant views nor does it put any constraints on the supervised learning algorithm, its applicability is broader than that of previous co-training style algorithms. Experiments on UCI data sets and application to the Web page classification task indicate that tri-training can effectively exploit unlabeled data to enhance the learning performance."
]
} |
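The tri-training labeling rule described in the record above — an unlabeled example receives a pseudo-label for one classifier when the other two agree on it — is easy to state in code. The sketch below assumes three fitted classifiers exposing a scikit-learn-style `predict` method and omits the error-rate conditions of the original algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tri_training_targets(clf_a, clf_b, clf_c, X_unlabeled):
    """For each classifier, collect the unlabeled examples on which the other two
    classifiers agree, together with the agreed label (simplified rule)."""
    preds = [clf.predict(X_unlabeled) for clf in (clf_a, clf_b, clf_c)]
    new_sets = []
    for i in range(3):
        j, k = [m for m in range(3) if m != i]
        agree = preds[j] == preds[k]
        new_sets.append((X_unlabeled[agree], preds[j][agree]))
    return new_sets   # list of (X_extra, y_extra), one per classifier

# Toy usage with three differently configured classifiers.
X = np.random.rand(60, 4)
y = (X[:, 0] > 0.5).astype(int)
clfs = [DecisionTreeClassifier(max_depth=d).fit(X[:30], y[:30]) for d in (1, 2, 3)]
extra = tri_training_targets(*clfs, X[30:])
print([len(e[1]) for e in extra])
```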
1812.05313 | 2905233337 | Semi-Supervised Learning (SSL) has been proved to be an effective way to leverage both labeled and unlabeled data at the same time. Recent semi-supervised approaches focus on deep neural networks and have achieved promising results on several benchmarks: CIFAR10, CIFAR100 and SVHN. However, most of their experiments are based on models trained from scratch instead of pre-trained models. On the other hand, transfer learning has demonstrated its value when the target domain has limited labeled data. Here comes the intuitive question: is it possible to incorporate SSL when fine-tuning a pre-trained model? We comprehensively study how SSL methods starting from pretrained models perform under varying conditions, including training strategies, architecture choice and datasets. From this study, we obtain several interesting and useful observations. While practitioners have had an intuitive understanding of these observations, we do a comprehensive empirical analysis and demonstrate that: (1) the gains from SSL techniques over a fully-supervised baseline are smaller when trained from a pre-trained model than when trained from random initialization, (2) when the domain of the source data used to train the pre-trained model differs significantly from the domain of the target task, the gains from SSL are significantly higher and (3) some SSL methods are able to advance fully-supervised baselines (like Pseudo-Label). We hope our studies can deepen the understanding of SSL research and facilitate the process of developing more effective SSL methods to utilize pre-trained models. Code is now available at github. | In this paper, we choose to evaluate consistency-regularization-based, adversarial-training-based and entropy-based SSL methods. The reason why we exclude the GAN series is that pre-trained models are widely used in such methodologies. To keep pace with @cite_13 , we mainly perform experiments on the @math model, Mean Teacher, VAT and Pseudo-Label. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2794523151"
],
"abstract": [
"Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. SSL algorithms based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that these algorithms would face in real-world applications. After creating a unified reimplementation of various widely-used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that the performance of simple baselines which do not use unlabeled data is often underreported, that SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and that performance can degrade substantially when the unlabeled dataset contains out-of-class examples. To help guide SSL research towards real-world applicability, we make our unified reimplemention and evaluation platform publicly available."
]
} |
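Of the methods listed in the record above, Pseudo-Label is the simplest to write down: the model's own predictions on unlabeled data are converted into hard targets and added to the loss with a ramp-up weight. The sketch below additionally gates by a confidence threshold, which is a common implementation choice rather than the exact cited formulation; the threshold value is an assumption.

```python
import numpy as np

def pseudo_label_loss(probs_unlabeled, threshold=0.95):
    """probs_unlabeled: (batch, K) softmax outputs on unlabeled data.
    Keep predictions above the confidence threshold, take their argmax as a hard
    label, and return the resulting cross-entropy. In practice this term is
    scaled by a ramp-up weight and added to the supervised loss."""
    conf = probs_unlabeled.max(axis=1)
    mask = conf >= threshold
    if not mask.any():
        return 0.0
    picked = probs_unlabeled[mask]
    hard = picked.argmax(axis=1)
    return -np.mean(np.log(picked[np.arange(len(picked)), hard] + 1e-8))

probs = np.array([[0.97, 0.02, 0.01],
                  [0.40, 0.35, 0.25],
                  [0.05, 0.93, 0.02]])
print(pseudo_label_loss(probs, threshold=0.9))
```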
1812.05313 | 2905233337 | Semi-Supervised Learning (SSL) has been proved to be an effective way to leverage both labeled and unlabeled data at the same time. Recent semi-supervised approaches focus on deep neural networks and have achieved promising results on several benchmarks: CIFAR10, CIFAR100 and SVHN. However, most of their experiments are based on models trained from scratch instead of pre-trained models. On the other hand, transfer learning has demonstrated its value when the target domain has limited labeled data. Here comes the intuitive question: is it possible to incorporate SSL when fine-tuning a pre-trained model? We comprehensively study how SSL methods starting from pretrained models perform under varying conditions, including training strategies, architecture choice and datasets. From this study, we obtain several interesting and useful observations. While practitioners have had an intuitive understanding of these observations, we do a comprehensive empirical analysis and demonstrate that: (1) the gains from SSL techniques over a fully-supervised baseline are smaller when trained from a pre-trained model than when trained from random initialization, (2) when the domain of the source data used to train the pre-trained model differs significantly from the domain of the target task, the gains from SSL are significantly higher and (3) some SSL methods are able to advance fully-supervised baselines (like Pseudo-Label). We hope our studies can deepen the understanding of SSL research and facilitate the process of developing more effective SSL methods to utilize pre-trained models. Code is now available at github. | Yosinski et al. @cite_16 provided a thorough study of fine-tuning performance across different network layers and varying image classes. As for the reason why we choose to fine-tune all layers, we argue that this fits the setting of SSL: training numerous parameters with the help of unlabeled data. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2949667497"
],
"abstract": [
"Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset."
]
} |
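As a rough illustration of the fine-tuning choice discussed above, the following PyTorch sketch contrasts fine-tuning all layers of a pre-trained network with training only a new classification head. The backbone, class count and learning rates are placeholders chosen for illustration and are not taken from the paper.

```python
import torch.nn as nn
from torch.optim import SGD
from torchvision.models import resnet50

# Start from an ImageNet pre-trained backbone and replace the classifier head
# for a hypothetical 10-class target task.
model = resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 10)

# Option A: fine-tune all layers, the setting argued for above.
optimizer_all = SGD(model.parameters(), lr=1e-3, momentum=0.9)

# Option B: freeze the backbone and train only the new head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")
optimizer_head = SGD([p for p in model.parameters() if p.requires_grad],
                     lr=1e-2, momentum=0.9)
```

Either optimizer can then be plugged into an ordinary supervised training loop; the difference is only in which parameters receive gradient updates.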
1812.05313 | 2905233337 | Semi-Supervised Learning (SSL) has been proved to be an effective way to leverage both labeled and unlabeled data at the same time. Recent semi-supervised approaches focus on deep neural networks and have achieved promising results on several benchmarks: CIFAR10, CIFAR100 and SVHN. However, most of their experiments are based on models trained from scratch instead of pre-trained models. On the other hand, transfer learning has demonstrated its value when the target domain has limited labeled data. Here comes the intuitive question: is it possible to incorporate SSL when fine-tuning a pre-trained model? We comprehensively study how SSL methods starting from pre-trained models perform under varying conditions, including training strategies, architecture choice and datasets. From this study, we obtain several interesting and useful observations. While practitioners have had an intuitive understanding of these observations, we do a comprehensive empirical analysis and demonstrate that: (1) the gains from SSL techniques over a fully-supervised baseline are smaller when trained from a pre-trained model than when trained from random initialization, (2) when the domain of the source data used to train the pre-trained model differs significantly from the domain of the target task, the gains from SSL are significantly higher and (3) some SSL methods are able to advance fully-supervised baselines (like Pseudo-Label). We hope our studies can deepen the understanding of SSL research and facilitate the process of developing more effective SSL methods to utilize pre-trained models. Code is now available at github. | In the rest of this paper, we adhere to a similar idea proposed in @cite_16 except that we incorporate SSL into the fine-tuning process. Our work shares some similarities with a recent evaluation paper on SSL @cite_13 , in which the authors conducted a comprehensive study of the performance of SSL in real-world applications. But their experiments were mostly based on models trained from scratch and reported few results on fine-tuning a pre-trained model under various conditions (e.g., different datasets and model architectures). In this paper, we expand their analysis to the combination of fine-tuning and SSL. | {
"cite_N": [
"@cite_16",
"@cite_13"
],
"mid": [
"2949667497",
"2794523151"
],
"abstract": [
"Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset.",
"Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data when labels are limited or expensive to obtain. SSL algorithms based on deep neural networks have recently proven successful on standard benchmark tasks. However, we argue that these benchmarks fail to address many issues that these algorithms would face in real-world applications. After creating a unified reimplementation of various widely-used SSL techniques, we test them in a suite of experiments designed to address these issues. We find that the performance of simple baselines which do not use unlabeled data is often underreported, that SSL methods differ in sensitivity to the amount of labeled and unlabeled data, and that performance can degrade substantially when the unlabeled dataset contains out-of-class examples. To help guide SSL research towards real-world applicability, we make our unified reimplemention and evaluation platform publicly available."
]
} |
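To make the combination of fine-tuning and SSL concrete, below is a minimal sketch of a Pseudo-Label-style training step that mixes a supervised loss on labeled data with a loss on confidently pseudo-labeled unlabeled data. The confidence threshold and the unlabeled loss weight are illustrative assumptions, not values taken from the papers above.

```python
import torch
import torch.nn.functional as F

def pseudo_label_step(model, optimizer, labeled_batch, unlabeled_batch,
                      threshold=0.95, unlabeled_weight=1.0):
    """One fine-tuning step combining supervised and pseudo-label losses."""
    x_l, y_l = labeled_batch
    x_u = unlabeled_batch

    model.train()
    optimizer.zero_grad()

    # Supervised loss on the labeled batch.
    logits_l = model(x_l)
    loss = F.cross_entropy(logits_l, y_l)

    # Pseudo-labels: use the model's own confident predictions as targets.
    with torch.no_grad():
        probs_u = F.softmax(model(x_u), dim=1)
        conf, pseudo_y = probs_u.max(dim=1)
        mask = conf.ge(threshold)

    if mask.any():
        logits_u = model(x_u[mask])
        loss = loss + unlabeled_weight * F.cross_entropy(logits_u, pseudo_y[mask])

    loss.backward()
    optimizer.step()
    return loss.item()
```

The same step works whether the model is randomly initialized or fine-tuned from a pre-trained checkpoint, which is exactly the comparison the study above is concerned with.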
1812.05407 | 2949330231 | In the neural abstractive summarization field, conventional sequence-to-sequence based models often suffer from summarizing the wrong aspect of the document with respect to the main aspect. To tackle this problem, we propose the task of reader-aware abstractive summary generation, which utilizes the reader comments to help the model produce better summary about the main aspect. Unlike traditional abstractive summarization task, reader-aware summarization confronts two main challenges: (1) Comments are informal and noisy; (2) jointly modeling the news document and the reader comments is challenging. To tackle the above challenges, we design an adversarial learning model named reader-aware summary generator (RASG), which consists of four components: (1) a sequence-to-sequence based summary generator; (2) a reader attention module capturing the reader focused aspects; (3) a supervisor modeling the semantic gap between the generated summary and reader focused aspects; (4) a goal tracker producing the goal for each generation step. The supervisor and the goal tracker are used to guide the training of our framework in an adversarial manner. Extensive experiments are conducted on our large-scale real-world text summarization dataset, and the results show that RASG achieves the state-of-the-art performance in terms of both automatic metrics and human evaluations. The experimental results also demonstrate the effectiveness of each module in our framework. We release our large-scale dataset for further research. | To incorporate readers' comments into text summarization, reader-aware summarization has been proposed, and it mainly takes the form of extractive approaches. Graph-based methods have been used for the comment-oriented summarization task, such as @cite_7 @cite_15 , where three relations (topic, quotation, and mention) are identified by which comments can be linked to one another. Recently, Nguyen et al. published a small extractive sentence-comment dataset which cannot be used to train neural models due to its small size. Li et al. propose an unsupervised compressive multi-document summarization model using a sparse coding method. Following previous work, some models @cite_19 @cite_12 use variational auto-encoders to model the latent semantics of the original article and reader comments. Different from our abstractive summarization task, these related works are all based on extractive or compressive approaches. | {
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_12",
"@cite_7"
],
"mid": [
"",
"1985710361",
"2604777655",
"1982452440"
],
"abstract": [
"",
"Comments left by readers on Web documents contain valuable information that can be utilized in different information retrieval tasks including document search, visualization, and summarization. In this paper, we study the problem of comments-oriented document summarization and aim to summarize a Web document (e.g., a blog post) by considering not only its content, but also the comments left by its readers. We identify three relations (namely, topic, quotation, and mention) by which comments can be linked to one another, and model the relations in three graphs. The importance of each comment is then scored by: (i) graph-based method, where the three graphs are merged into a multi-relation graph; (ii) tensor-based method, where the three graphs are used to construct a 3rd-order tensor. To generate a comments-oriented summary, we extract sentences from the given Web document using either feature-biased approach or uniform-document approach. The former scores sentences to bias keywords derived from comments; while the latter scores sentences uniformly with comments. In our experiments using a set of blog posts with manually labeled sentences, our proposed summarization methods utilizing comments showed significant improvement over those not using comments. The methods using feature-biased sentence extraction approach were observed to outperform that using uniform-document approach.",
"We propose a new unsupervised sentence salience framework for Multi-Document Summarization (MDS), which can be divided into two components: latent semantic modeling and salience estimation. For latent semantic modeling, a neural generative model called Variational Auto-Encoders (VAEs) is employed to describe the observed sentences and the corresponding latent semantic representations. Neural variational inference is used for the posterior inference of the latent variables. For salience estimation, we propose an unsupervised data reconstruction framework, which jointly considers the reconstruction for latent semantic space and observed term vector space. Therefore, we can capture the salience of sentences from these two different and complementary vector spaces. Thereafter, the VAEs-based latent semantic model is integrated into the sentence salience estimation component in a unified fashion, and the whole framework can be trained jointly by back-propagation via multi-task learning. Experimental results on the benchmark datasets DUC and TAC show that our framework achieves better performance than the state-of-the-art models.",
"Much existing research on blogs focused on posts only, ignoring their comments. Our user study conducted on summarizing blog posts, however, showed that reading comments does change one's understanding about blog posts. In this research, we aim to extract representative sentences from a blog post that best represent the topics discussed among its comments. The proposed solution first derives representative words from comments and then selects sentences containing representative words. The representativeness of words is measured using ReQuT (i.e., Reader, Quotation, and Topic). Evaluated on human labeled sentences, ReQuT together with summation-based sentence selection showed promising results."
]
} |
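The reader attention module mentioned in the abstract above can be pictured, very roughly, as attention from document token states to comment token states. The sketch below is one possible dot-product formulation and is only an illustration; RASG's actual module is not specified in the text here, so treat every shape and name as an assumption.

```python
import torch
import torch.nn.functional as F

def reader_attention(doc_states, comment_states):
    """Toy dot-product attention of document token states over reader-comment
    token states, producing a comment-aware vector for each document token."""
    scores = doc_states @ comment_states.t()      # (doc_len, cmt_len)
    weights = F.softmax(scores, dim=-1)
    return weights @ comment_states               # (doc_len, dim)

doc = torch.randn(20, 128)       # 20 document tokens, 128-d hidden states
comments = torch.randn(50, 128)  # 50 comment tokens
print(reader_attention(doc, comments).shape)      # torch.Size([20, 128])
```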
1812.05477 | 2950558193 | The shape of an object is an important characteristic for many vision problems such as segmentation, detection and tracking. Being independent of appearance, it is possible to generalize to a large range of objects from only small amounts of data. However, shapes represented as silhouette images are challenging to model due to complicated likelihood functions leading to intractable posteriors. In this paper we present a generative model of shapes which provides a low dimensional latent encoding which importantly resides on a smooth manifold with respect to the silhouette images. The proposed model propagates uncertainty in a principled manner allowing it to learn from small amounts of data and providing predictions with associated uncertainty. We provide experiments that show how our proposed model provides favorable quantitative results compared with the state-of-the-art while simultaneously providing a representation that resides on a low-dimensional interpretable manifold. | Modelling of shape is important for many computer vision tasks. A complete review of the topic is beyond the scope of this paper; we refer the reader to the comprehensive work of Taylor et al. @cite_19 . In our work we focus on recent unsupervised statistical models that operate directly on the pixel domain. Interest in these models was revived by the Shape Boltzmann Machine (SBM) work of Eslami et al. @cite_23 , and they have been shown to be useful for a variety of vision applications @cite_8 @cite_7 @cite_12 . These deep models can also be readily extended into the 3D domain, e.g., by recent work on 3D ShapeNets @cite_22 . Detailed analysis of the DBN, GPLVM and SBM is provided in . | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_19",
"@cite_23",
"@cite_12"
],
"mid": [
"2951755740",
"",
"2138813262",
"",
"2075505763",
"2226771013"
],
"abstract": [
"3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the-state-of-the-arts in a variety of tasks.",
"",
"The Shape Boltzmann Machine (SBM) [1] has recently been introduced as a state-of-the-art model of foreground background object shape. We extend the SBM to account for the foreground object's parts. Our new model, the Multinomial SBM (MSBM), can capture both local and global statistics of part shapes accurately. We combine the MSBM with an appearance model to form a fully generative model of images of objects. Parts-based object segmentations are obtained simply by performing probabilistic inference in the model. We apply the model to two challenging datasets which exhibit significant shape and appearance variability, and find that it obtains results that are comparable to the state-of-the-art.",
"",
"A good model of object shape is essential in applications such as segmentation, detection, inpainting and graphics. For example, when performing segmentation, local constraints on the shapes can help where object boundaries are noisy or unclear, and global constraints can resolve ambiguities where background clutter looks similar to parts of the objects. In general, the stronger the model of shape, the more performance is improved. In this paper, we use a type of deep Boltzmann machine (Salakhutdinov and Hinton, International Conference on Artificial Intelligence and Statistics, 2009) that we call a Shape Boltzmann Machine (SBM) for the task of modeling foreground background (binary) and parts-based (categorical) shape images. We show that the SBM characterizes a strong model of shape, in that samples from the model look realistic and it can generalize to generate samples that differ from training examples. We find that the SBM learns distributions that are qualitatively and quantitatively better than existing models for this task.",
"In this work we address the task of segmenting an object into its parts, or semantic part segmentation. We start by adapting a state-of-the-art semantic segmentation system to this task, and show that a combination of a fully-convolutional Deep CNN system coupled with Dense CRF labelling provides excellent results for a broad range of object categories. Still, this approach remains agnostic to high-level constraints between object parts. We introduce such prior information by means of the Restricted Boltzmann Machine, adapted to our task and train our model in an discriminative fashion, as a hidden CRF, demonstrating that prior information can yield additional improvements. We also investigate the performance of our approach in the wild'', without information concerning the objects' bounding boxes, using an object detector to guide a multi-scale segmentation scheme. We evaluate the performance of our approach on the Penn-Fudan and LFW datasets for the tasks of pedestrian parsing and face labelling respectively. We show superior performance with respect to competitive methods that have been extensively engineered on these benchmarks, as well as realistic qualitative results on part segmentation, even for occluded or deformable objects. We also provide quantitative and extensive qualitative results on three classes from the PASCAL Parts dataset. Finally, we show that our multi-scale segmentation scheme can boost accuracy, recovering segmentations for finer parts."
]
} |
1812.05477 | 2950558193 | The shape of an object is an important characteristic for many vision problems such as segmentation, detection and tracking. Being independent of appearance, it is possible to generalize to a large range of objects from only small amounts of data. However, shapes represented as silhouette images are challenging to model due to complicated likelihood functions leading to intractable posteriors. In this paper we present a generative model of shapes which provides a low dimensional latent encoding which importantly resides on a smooth manifold with respect to the silhouette images. The proposed model propagates uncertainty in a principled manner allowing it to learn from small amounts of data and providing predictions with associated uncertainty. We provide experiments that show how our proposed model provides favorable quantitative results compared with the state-of-the-art while simultaneously providing a representation that resides on a low-dimensional interpretable manifold. | ShapeOdds The recent ShapeOdds work of Elhabian and Whitaker @cite_14 confers state-of-the-art performance and captures many of the desired properties including a generative probabilistic model that propagates uncertainty. The approach taken is quite different to ours as they specify a detailed probabilistic model including a Gaussian Markov Random Field (MRF) with individual Bernoulli random variables for the pixel lattice. In contrast, our model is more flexible, we allow the network to learn the structure from the data directly but ensure that we still maintain uncertainty quantification throughout. We would also argue that the specific form of the low dimensional manifold we generate is desirable with its guaranteed smoothness that makes the latent space readily interpretable. This provides the tradeoff between the two models. We expect the ShapeOdds model to perform very well at generalisation due to the inclusion of the MRF prior. In contrast, our model will be more data dependent in this respect (weaker prior assumptions on the nature of images), however, it provides a generative space that is highly interpretable and easy to work with. We identify that a topic for further work would be to combine our smooth priors with the likelihood model of ShapeOdds. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2736311557"
],
"abstract": [
"Shape models provide a compact parameterization of a class of shapes, and have been shown to be important to a variety of vision problems, including object detection, tracking, and image segmentation. Learning generative shape models from grid-structured representations, aka silhouettes, is usually hindered by (1) data likelihoods with intractable marginals and posteriors, (2) high-dimensional shape spaces with limited training samples (and the associated risk of overfitting), and (3) estimation of hyperparameters relating to model complexity that often entails computationally expensive grid searches. In this paper, we propose a Bayesian treatment that relies on direct probabilistic formulation for learning generative shape models in the silhouettes space. We propose a variational approach for learning a latent variable model in which we make use of, and extend, recent works on variational bounds of logistic-Gaussian integrals to circumvent intractable marginals and posteriors. Spatial coherency and sparsity priors are also incorporated to lend stability to the optimization problem by regularizing the solution space while avoiding overfitting in this high-dimensional, low-sample-size scenario. We deploy a type-II maximum likelihood estimate of the model hyperparameters to avoid grid searches. We demonstrate that the proposed model generates realistic samples, generalizes to unseen examples, and is able to handle missing regions and or background clutter, while comparing favorably with recent, neural-network-based approaches."
]
} |
1812.05477 | 2950558193 | The shape of an object is an important characteristic for many vision problems such as segmentation, detection and tracking. Being independent of appearance, it is possible to generalize to a large range of objects from only small amounts of data. However, shapes represented as silhouette images are challenging to model due to complicated likelihood functions leading to intractable posteriors. In this paper we present a generative model of shapes which provides a low dimensional latent encoding which importantly resides on a smooth manifold with respect to the silhouette images. The proposed model propagates uncertainty in a principled manner allowing it to learn from small amounts of data and providing predictions with associated uncertainty. We provide experiments that show how our proposed model provides favorable quantitative results compared with the state-of-the-art while simultaneously providing a representation that resides on a low-dimensional interpretable manifold. | GPLVM Representations A possible workaround to the problem of non-Gaussian likelihoods is to perform a deterministic transformation to a domain where the data is approximately Gaussian. This has been successful for domains where, for example, the shape can be represented in a new geometric form away from pixels, such as parametric curves @cite_11 @cite_25 . However, this is application dependent and not suitable for the arbitrary pixel-based silhouettes considered here. A common approach that retains the pixel grid is to transform it into a level-set problem via the distance transform, e.g., @cite_2 . This can improve results in some settings; however, the uncertainty is not correctly preserved and therefore not correctly captured in predictions. We denote this model GPLVMDT in our comparisons. | {
"cite_N": [
"@cite_2",
"@cite_25",
"@cite_11"
],
"mid": [
"",
"2084262241",
"2069728322"
],
"abstract": [
"",
"We propose a novel nonlinear, probabilistic and variational method for adding shape information to level set-based segmentation and tracking. Unlike previous work, we represent shapes with elliptic Fourier descriptors and learn their lower dimensional latent space using Gaussian Process Latent Variable Models. Segmentation is done by a nonlinear minimisation of an image-driven energy function in the learned latent space. We combine it with a 2D pose recovery stage, yielding a single, one shot, optimisation of both shape and pose. We demonstrate the performance of our method, both qualitatively and quantitatively, with multiple images, video sequences and latent spaces, capturing both shape kinematics and object class variance.",
"The design and manipulation of typefaces and fonts is an area requiring substantial expertise; it can take many years of study to become a proficient typographer. At the same time, the use of typefaces is ubiquitous; there are many users who, while not experts, would like to be more involved in tweaking or changing existing fonts without suffering the learning curve of professional typography packages. Given the wealth of fonts that are available today, we would like to exploit the expertise used to produce these fonts, and to enable everyday users to create, explore, and edit fonts. To this end, we build a generative manifold of standard fonts. Every location on the manifold corresponds to a unique and novel typeface, and is obtained by learning a non-linear mapping that intelligently interpolates and extrapolates existing fonts. Using the manifold, we can smoothly interpolate and move between existing fonts. We can also use the manifold as a constraint that makes a variety of new applications possible. For instance, when editing a single character, we can update all the other glyphs in a font simultaneously to keep them compatible with our changes."
]
} |
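The distance-transform workaround behind the GPLVMDT baseline can be sketched in a few lines: a binary silhouette is converted into a signed distance map whose zero level set traces the boundary, giving an approximately continuous representation for a GPLVM to model. This is one common construction and is assumed here for illustration; it is not necessarily the exact variant used in @cite_2.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(silhouette):
    """Signed distance transform of a binary silhouette: positive inside,
    negative outside, so the zero level set traces the shape boundary."""
    inside = distance_transform_edt(silhouette)
    outside = distance_transform_edt(1 - silhouette)
    return inside - outside

# Example: a small square silhouette on a 9x9 grid.
mask = np.zeros((9, 9), dtype=np.uint8)
mask[3:6, 3:6] = 1
sdf = signed_distance(mask)
print(sdf[4, 4], sdf[0, 0])   # positive at the centre, negative in the corner
```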
1812.04945 | 2905306341 | Most existing semantic segmentation methods employ atrous convolution to enlarge the receptive field of filters, but neglect partial information. To tackle this issue, we firstly propose a novel Kronecker convolution which adopts Kronecker product to expand the standard convolutional kernel for taking into account the partial feature neglected by atrous convolutions. Therefore, it can capture partial information and enlarge the receptive field of filters simultaneously without introducing extra parameters. Secondly, we propose Tree-structured Feature Aggregation (TFA) module which follows a recursive rule to expand and forms a hierarchical structure. Thus, it can naturally learn representations of multi-scale objects and encode hierarchical contextual information in complex scenes. Finally, we design Tree-structured Kronecker Convolutional Networks (TKCN) which employs Kronecker convolution and TFA module. Extensive experiments on three datasets, PASCAL VOC 2012, PASCAL-Context and Cityscapes, verify the effectiveness of our proposed approach. We make the code and the trained model publicly available at this https URL. | KFC @cite_27 uses the Kronecker product to exploit the local structures within convolution and fully-connected layers, by replacing the large weight matrices with combinations of multiple Kronecker products of smaller matrices, which can approximate the weight matrices of the fully-connected layer. In contrast, we employ the Kronecker product to expand the standard convolutional kernel, enlarging the receptive field of filters and capturing partial information neglected by atrous convolutions. | {
"cite_N": [
"@cite_27"
],
"mid": [
"2201808379"
],
"abstract": [
"In this paper, we propose and study a technique to reduce the number of parameters and computation time in convolutional neural networks. We use Kronecker product to exploit the local structures within convolution and fully-connected layers, by replacing the large weight matrices by combinations of multiple Kronecker products of smaller matrices. Just as the Kronecker product is a generalization of the outer product from vectors to matrices, our method is a generalization of the low rank approximation method for convolution neural networks. We also introduce combinations of different shapes of Kronecker product to increase modeling capacity. Experiments on SVHN, scene text recognition and ImageNet dataset demonstrate that we can achieve @math speedup or @math parameter reduction with less than 1 drop in accuracy, showing the effectiveness and efficiency of our method. Moreover, the computation efficiency of Kronecker layer makes using larger feature map possible, which in turn enables us to outperform the previous state-of-the-art on both SVHN(digit recognition) and CASIA-HWDB (handwritten Chinese character recognition) datasets."
]
} |
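To make the Kronecker-product expansion concrete, the NumPy sketch below expands a 3x3 kernel with a small inner factor via np.kron. With a one-hot inner factor the result is exactly a dilated (atrous-like) kernel with holes between its taps, while a denser inner factor also covers the skipped cells without adding free parameters. The specific factors are illustrative only, not the paper's formulation.

```python
import numpy as np

k = np.arange(1, 10, dtype=float).reshape(3, 3)   # a standard 3x3 kernel

# One-hot inner factor: the expansion only spreads the taps apart,
# i.e. it behaves like an atrous kernel with holes between taps.
f_atrous = np.zeros((3, 3))
f_atrous[0, 0] = 1.0

# Denser inner factor: the expanded kernel also covers cells that the
# atrous kernel would skip, while still reusing the same 9 free weights.
f_dense = np.array([[1.0, 0.5, 0.0],
                    [0.5, 0.0, 0.0],
                    [0.0, 0.0, 0.0]])

print(np.kron(k, f_atrous).shape)   # (9, 9) expanded kernel
print(np.kron(k, f_dense))          # partial-information variant
```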
1812.04945 | 2905306341 | Most existing semantic segmentation methods employ atrous convolution to enlarge the receptive field of filters, but neglect partial information. To tackle this issue, we firstly propose a novel Kronecker convolution which adopts Kronecker product to expand the standard convolutional kernel for taking into account the partial feature neglected by atrous convolutions. Therefore, it can capture partial information and enlarge the receptive field of filters simultaneously without introducing extra parameters. Secondly, we propose Tree-structured Feature Aggregation (TFA) module which follows a recursive rule to expand and forms a hierarchical structure. Thus, it can naturally learn representations of multi-scale objects and encode hierarchical contextual information in complex scenes. Finally, we design Tree-structured Kronecker Convolutional Networks (TKCN) which employs Kronecker convolution and TFA module. Extensive experiments on three datasets, PASCAL VOC 2012, PASCAL-Context and Cityscapes, verify the effectiveness of our proposed approach. We make the code and the trained model publicly available at this https URL. | Semantic segmentation is a fundamental task in computer vision. Recently, approaches based on Deep Convolutional Neural Networks @cite_0 @cite_13 @cite_9 achieve remarkable progress in semantic segmentation task, such as DeconvNets @cite_8 , DeepLab @cite_34 and FCNs @cite_26 . FCNs transfer the networks of image classification for pixel-level labeling. DeconvNets employ multiple deconvolution layers to enlarge feature maps and generate whole-image predictions. DeepLab methods use atrous convolutions to enlarge the receptive fields so as to capture contextual information. Following these structures, many frameworks are proposed to further improve the accuracy of semantic segmentation. | {
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_9",
"@cite_0",
"@cite_34",
"@cite_13"
],
"mid": [
"2952632681",
"2952637581",
"2949650786",
"",
"2412782625",
"1686810756"
],
"abstract": [
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"We propose a novel semantic segmentation algorithm by learning a deconvolution network. We learn the network on top of the convolutional layers adopted from VGG 16-layer net. The deconvolution network is composed of deconvolution and unpooling layers, which identify pixel-wise class labels and predict segmentation masks. We apply the trained network to each proposal in an input image, and construct the final semantic segmentation map by combining the results from all proposals in a simple manner. The proposed algorithm mitigates the limitations of the existing methods based on fully convolutional networks by integrating deep deconvolution network and proposal-wise prediction; our segmentation method typically identifies detailed structures and handles objects in multiple scales naturally. Our network demonstrates outstanding performance in PASCAL VOC 2012 dataset, and we achieve the best accuracy (72.5 ) among the methods trained with no external data through ensemble with the fully convolutional network.",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"",
"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision."
]
} |
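The atrous (dilated) convolution referred to throughout this record can be reproduced directly with the dilation argument of a standard PyTorch convolution. The sketch below uses arbitrary channel sizes and input shape purely for illustration: the dilated 3x3 kernel keeps the same parameter count as the standard one while covering a 5x5 neighbourhood.

```python
import torch
import torch.nn as nn

# Two 3x3 convolutions with identical parameter counts: the dilated one
# covers a 5x5 neighbourhood but skips the pixels between its taps.
standard = nn.Conv2d(64, 64, kernel_size=3, padding=1)
atrous   = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)

x = torch.randn(1, 64, 32, 32)
print(standard(x).shape, atrous(x).shape)   # both keep the 32x32 resolution
```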
1812.04945 | 2905306341 | Most existing semantic segmentation methods employ atrous convolution to enlarge the receptive field of filters, but neglect partial information. To tackle this issue, we firstly propose a novel Kronecker convolution which adopts Kronecker product to expand the standard convolutional kernel for taking into account the partial feature neglected by atrous convolutions. Therefore, it can capture partial information and enlarge the receptive field of filters simultaneously without introducing extra parameters. Secondly, we propose Tree-structured Feature Aggregation (TFA) module which follows a recursive rule to expand and forms a hierarchical structure. Thus, it can naturally learn representations of multi-scale objects and encode hierarchical contextual information in complex scenes. Finally, we design Tree-structured Kronecker Convolutional Networks (TKCN) which employs Kronecker convolution and TFA module. Extensive experiments on three datasets, PASCAL VOC 2012, PASCAL-Context and Cityscapes, verify the effectiveness of our proposed approach. We make the code and the trained model publicly available at this https URL. | Since objects in scene images have various sizes, multi-scale feature fusion is widely used in semantic segmentation approaches for learning features of multiple scales. Some approaches aggregate features of multiple middle layers. The original FCNs @cite_26 utilize skip connections to perform late fusion. Hypercolumn @cite_1 merges features from middle layers to learn dense classification layers. RefineNet @cite_22 proposes to pool features with multiple window sizes and fuses them together with residual connections and learnable weights. Some methods obtain multi-scale features from inputs, such as utilizing a Laplacian pyramid @cite_36 , employing multi-scale inputs sequentially from coarse-to-fine @cite_4 , or simply resizing input images into multiple sizes @cite_33 . Some other approaches propose feature pyramid modules. DeepLab-v2 @cite_34 employs four parallel atrous convolutional layers of different rates to capture objects and context information of multiple scales. PSPNet @cite_38 performs spatial pooling at four grid scales. More recently, DFN @cite_17 proposes a Smooth Network for fusing feature maps across different stages, and CCL @cite_15 proposes a scheme of gated sum to selectively aggregate multi-scale features for each spatial position. Most multi-scale feature fusion methods are limited by preset scales or by their reliance on the inherent network structure. | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_4",
"@cite_22",
"@cite_33",
"@cite_36",
"@cite_1",
"@cite_15",
"@cite_34",
"@cite_17"
],
"mid": [
"2952596663",
"2952632681",
"2951277909",
"",
"2951732414",
"2022508996",
"1948751323",
"2798791840",
"2412782625",
"2799166040"
],
"abstract": [
"Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction tasks. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields new record of mIoU accuracy 85.4 on PASCAL VOC 2012 and accuracy 80.2 on Cityscapes.",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"Scene parsing is a technique that consist on giving a label to all pixels in an image according to the class they belong to. To ensure a good visual coherence and a high class accuracy, it is essential for a scene parser to capture image long range dependencies. In a feed-forward architecture, this can be simply achieved by considering a sufficiently large input context patch, around each pixel to be labeled. We propose an approach consisting of a recurrent convolutional neural network which allows us to consider a large input context, while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation methods, nor any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.",
"",
"State-of-the-art semantic image segmentation methods are mostly based on training deep convolutional neural networks (CNNs). In this work, we proffer to improve semantic segmentation with the use of contextual information. In particular, we explore patch-patch' context and patch-background' context in deep CNNs. We formulate deep structured models by combining CNNs and Conditional Random Fields (CRFs) for learning the patch-patch context between image regions. Specifically, we formulate CNN-based pairwise potential functions to capture semantic correlations between neighboring patches. Efficient piecewise training of the proposed deep structured model is then applied in order to avoid repeated expensive CRF inference during the course of back propagation. For capturing the patch-background context, we show that a network design with traditional multi-scale image inputs and sliding pyramid pooling is very effective for improving performance. We perform comprehensive evaluation of the proposed method. We achieve new state-of-the-art performance on a number of challenging semantic segmentation datasets including @math , @math - @math , @math , @math - @math , @math - @math , @math - @math , and @math datasets. Particularly, we report an intersection-over-union score of @math on the @math - @math dataset.",
"Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction.",
"Recognition algorithms based on convolutional networks (CNNs) typically use the output of the last layer as a feature representation. However, the information in this layer may be too coarse spatially to allow precise localization. On the contrary, earlier layers may be precise in localization but will not capture semantics. To get the best of both worlds, we define the hypercolumn at a pixel as the vector of activations of all CNN units above that pixel. Using hypercolumns as pixel descriptors, we show results on three fine-grained localization tasks: simultaneous detection and segmentation [22], where we improve state-of-the-art from 49.7 mean APr [22] to 60.0, keypoint localization, where we get a 3.3 point boost over [20], and part labeling, where we show a 6.6 point gain over a strong baseline.",
"Scene segmentation is a challenging task as it need label every pixel in the image. It is crucial to exploit discriminative context and aggregate multi-scale features to achieve better segmentation. In this paper, we first propose a novel context contrasted local feature that not only leverages the informative context but also spotlights the local information in contrast to the context. The proposed context contrasted local feature greatly improves the parsing performance, especially for inconspicuous objects and background stuff. Furthermore, we propose a scheme of gated sum to selectively aggregate multi-scale features for each spatial position. The gates in this scheme control the information flow of different scale features. Their values are generated from the testing image by the proposed network learnt from the training data so that they are adaptive not only to the training data, but also to the specific testing image. Without bells and whistles, the proposed approach achieves the state-of-the-arts consistently on the three popular scene segmentation datasets, Pascal Context, SUN-RGBD and COCO Stuff.",
"In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First , we highlight convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second , we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third , we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed “DeepLab” system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.",
"Most existing methods of semantic segmentation still suffer from two aspects of challenges: intra-class inconsistency and inter-class indistinction. To tackle these two problems, we propose a Discriminative Feature Network (DFN), which contains two sub-networks: Smooth Network and Border Network. Specifically, to handle the intra-class inconsistency problem, we specially design a Smooth Network with Channel Attention Block and global average pooling to select the more discriminative features. Furthermore, we propose a Border Network to make the bilateral features of boundary distinguishable with deep semantic boundary supervision. Based on our proposed DFN, we achieve state-of-the-art performance 86.2 mean IOU on PASCAL VOC 2012 and 80.3 mean IOU on Cityscapes dataset."
]
} |
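As a concrete illustration of the "parallel atrous convolutional layers of different rates" idea attributed to DeepLab-v2 above, here is a minimal ASPP-style module that sums the outputs of several dilated 3x3 branches. The 6/12/18/24 rates follow a commonly reported setting, but the module is a simplified sketch rather than the original implementation.

```python
import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    """Minimal ASPP-style block: parallel 3x3 atrous convolutions with
    different rates, summed to aggregate context at several scales."""
    def __init__(self, in_ch, out_ch, rates=(6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])

    def forward(self, x):
        return sum(branch(x) for branch in self.branches)

feat = torch.randn(1, 256, 33, 33)
print(SimpleASPP(256, 21)(feat).shape)      # torch.Size([1, 21, 33, 33])
```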
1812.04971 | 2905481056 | Various general-purpose distributed systems have been proposed to cope with high-diversity applications in the pipeline of Big Data analytics. Most of them provide simple yet effective primitives to simplify distributed programming. While the rigid primitives offer great ease of use to savvy programmers, they probably compromise efficiency in performance and flexibility in data representation and programming specifications, which are critical properties in real systems. In this paper, we discuss the limitations of coarse-grained primitives and aim to provide an alternative for users to have flexible control over distributed programs and operate globally shared data more efficiently. We develop STEP, a novel distributed framework based on in-memory key-value store. The key idea of STEP is to adapt multi-threading in a single machine to a distributed environment. STEP enables users to take fine-grained control over distributed threads and apply task-specific optimizations in a flexible manner. The underlying key-value store serves as distributed shared memory to keep globally shared data. To ensure ease-of-use, STEP offers plentiful effective interfaces in terms of distributed shared data manipulation, cluster management, distributed thread management and synchronization. We conduct extensive experimental studies to evaluate the performance of STEP using real data sets. The results show that STEP outperforms the state-of-the-art general-purpose distributed systems as well as a specialized ML platform in many real applications. | To achieve better performance, many efforts have been devoted to developing specialized systems for particular classes of applications such as graph analytics @cite_19 @cite_20 @cite_6 @cite_40 @cite_41 @cite_30 and machine learning tasks @cite_26 @cite_1 @cite_7 . Pregel @cite_19 follows the Bulk Synchronous Parallel (BSP) model and proposes a vertex-centric computation model which is more efficient than MapReduce-based frameworks in distributed graph processing. GraphLab @cite_20 provides asynchronous graph computation to obtain further performance improvements. Petuum @cite_26 emerges as a distributed platform for ML applications. It uses a parameter server to store intermediate results in the form of matrices. It also introduces Stale Synchronous Parallel (SSP) to trade off between fully synchronous and fully asynchronous modes for model training. Our work focuses on developing general-purpose distributed systems to cope with complex data analytics pipelines. STEP offers high flexibility to express different classes of applications effectively. We also experimentally show the high efficiency of STEP compared with the specialized systems. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_7",
"@cite_41",
"@cite_1",
"@cite_6",
"@cite_19",
"@cite_40",
"@cite_20"
],
"mid": [
"1969970763",
"2952508678",
"1982767656",
"",
"",
"78077100",
"2170616854",
"2160459668",
"2096544401"
],
"abstract": [
"GPS (for Graph Processing System) is a complete open-source system we developed for scalable, fault-tolerant, and easy-to-program execution of algorithms on extremely large graphs. This paper serves the dual role of describing the GPS system, and presenting techniques and experimental results for graph partitioning in distributed graph-processing systems like GPS. GPS is similar to Google's proprietary Pregel system, with three new features: (1) an extended API to make global computations more easily expressed and more efficient; (2) a dynamic repartitioning scheme that reassigns vertices to different workers during the computation, based on messaging patterns; and (3) an optimization that distributes adjacency lists of high-degree vertices across all compute nodes to improve performance. In addition to presenting the implementation of GPS and its novel features, we also present experimental results on the performance effects of both static and dynamic graph partitioning schemes, and we describe the compilation of a high-level domain-specific programming language to GPS, enabling easy expression of complex algorithms.",
"What is a systematic way to efficiently apply a wide spectrum of advanced ML programs to industrial scale problems, using Big Models (up to 100s of billions of parameters) on Big Data (up to terabytes or petabytes)? Modern parallelization strategies employ fine-grained operations and scheduling beyond the classic bulk-synchronous processing paradigm popularized by MapReduce, or even specialized graph-based execution that relies on graph representations of ML programs. The variety of approaches tends to pull systems and algorithms design in different directions, and it remains difficult to find a universal platform applicable to a wide range of ML programs at scale. We propose a general-purpose framework that systematically addresses data- and model-parallel challenges in large-scale ML, by observing that many ML programs are fundamentally optimization-centric and admit error-tolerant, iterative-convergent algorithmic solutions. This presents unique opportunities for an integrative system design, such as bounded-error network synchronization and dynamic scheduling based on ML program structure. We demonstrate the efficacy of these system designs versus well-known implementations of modern ML algorithms, allowing ML programs to run in much less time and at considerably larger model sizes, even on modestly-sized compute clusters.",
"Recently, deep learning techniques have enjoyed success in various multimedia applications, such as image classification and multi-modal data analysis. Two key factors behind deep learning's remarkable achievement are the immense computing power and the availability of massive training datasets, which enable us to train large models to capture complex regularities of the data. There are two challenges to overcome before deep learning can be widely adopted in multimedia and other applications. One is usability, namely the implementation of different models and training algorithms must be done by non-experts without much effort. The other is scalability, that is the deep learning system must be able to provision for a huge demand of computing resources for training large models with massive datasets. To address these two challenges, in this paper, we design a distributed deep learning platform called SINGA which has an intuitive programming model and good scalability. Our experience with developing and training deep learning models for real-life multimedia applications in SINGA shows that the platform is both usable and scalable.",
"",
"",
"Large-scale graph-structured computation is central to tasks ranging from targeted advertising to natural language processing and has led to the development of several graph-parallel abstractions including Pregel and GraphLab. However, the natural graphs commonly found in the real-world have highly skewed power-law degree distributions, which challenge the assumptions made by these abstractions, limiting performance and scalability. In this paper, we characterize the challenges of computation on natural graphs in the context of existing graph-parallel abstractions. We then introduce the PowerGraph abstraction which exploits the internal structure of graph programs to address these challenges. Leveraging the PowerGraph abstraction we introduce a new approach to distributed graph placement and representation that exploits the structure of power-law graphs. We provide a detailed analysis and experimental evaluation comparing PowerGraph to two popular graph-parallel systems. Finally, we describe three different implementation strategies for PowerGraph and discuss their relative merits with empirical evaluations on large-scale real-world problems demonstrating order of magnitude gains.",
"Many practical computing problems concern large graphs. Standard examples include the Web graph and various social networks. The scale of these graphs - in some cases billions of vertices, trillions of edges - poses challenges to their efficient processing. In this paper we present a computational model suitable for this task. Programs are expressed as a sequence of iterations, in each of which a vertex can receive messages sent in the previous iteration, send messages to other vertices, and modify its own state and that of its outgoing edges or mutate graph topology. This vertex-centric approach is flexible enough to express a broad set of algorithms. The model has been designed for efficient, scalable and fault-tolerant implementation on clusters of thousands of commodity computers, and its implied synchronicity makes reasoning about programs easier. Distribution-related details are hidden behind an abstract API. The result is a framework for processing large graphs that is expressive and easy to program.",
"Computations performed by graph algorithms are data driven, and require a high degree of random data access. Despite the great progresses made in disk technology, it still cannot provide the level of efficient random access required by graph computation. On the other hand, memory-based approaches usually do not scale due to the capacity limit of single machines. In this paper, we introduce Trinity, a general purpose graph engine over a distributed memory cloud. Through optimized memory management and network communication, Trinity supports fast graph exploration as well as efficient parallel computing. In particular, Trinity leverages graph access patterns in both online and offline computation to optimize memory and communication for best performance. These enable Trinity to support efficient online query processing and offline analytics on large graphs with just a few commodity machines. Furthermore, Trinity provides a high level specification language called TSL for users to declare data schema and communication protocols, which brings great ease-of-use for general purpose graph management and computing. Our experiments show Trinity's performance in both low latency graph queries as well as high throughput graph analytics on web-scale, billion-node graphs.",
"While high-level data parallel frameworks, like MapReduce, simplify the design and implementation of large-scale data processing systems, they do not naturally or efficiently support many important data mining and machine learning algorithms and can lead to inefficient learning systems. To help fill this critical void, we introduced the GraphLab abstraction which naturally expresses asynchronous, dynamic, graph-parallel computation while ensuring data consistency and achieving a high degree of parallel performance in the shared-memory setting. In this paper, we extend the GraphLab framework to the substantially more challenging distributed setting while preserving strong data consistency guarantees. We develop graph based extensions to pipelined locking and data versioning to reduce network congestion and mitigate the effect of network latency. We also introduce fault tolerance to the GraphLab abstraction using the classic Chandy-Lamport snapshot algorithm and demonstrate how it can be easily implemented by exploiting the GraphLab abstraction itself. Finally, we evaluate our distributed implementation of the GraphLab abstraction on a large Amazon EC2 deployment and show 1-2 orders of magnitude performance gains over Hadoop-based implementations."
]
} |
1812.04971 | 2905481056 | Various general-purpose distributed systems have been proposed to cope with high-diversity applications in the pipeline of Big Data analytics. Most of them provide simple yet effective primitives to simplify distributed programming. While the rigid primitives offer great ease of use to savvy programmers, they probably compromise efficiency in performance and flexibility in data representation and programming specifications, which are critical properties in real systems. In this paper, we discuss the limitations of coarse-grained primitives and aim to provide an alternative for users to have flexible control over distributed programs and operate globally shared data more efficiently. We develop STEP, a novel distributed framework based on in-memory key-value store. The key idea of STEP is to adapt multi-threading in a single machine to a distributed environment. STEP enables users to take fine-grained control over distributed threads and apply task-specific optimizations in a flexible manner. The underlying key-value store serves as distributed shared memory to keep globally shared data. To ensure ease-of-use, STEP offers plentiful effective interfaces in terms of distributed shared data manipulation, cluster management, distributed thread management and synchronization. We conduct extensive experimental studies to evaluate the performance of STEP using real data sets. The results show that STEP outperforms the state-of-the-art general-purpose distributed systems as well as a specialized ML platform in many real applications. | Prior DSM designs @cite_25 provided strong consistency such as sequential consistency, which incurs high communication cost for applications with frequent writes. Recent studies @cite_22 @cite_5 adopted the Partitioned Global Address Space (PGAS) model to exploit data locality, where each partition of the global address space is local to a node. Different from existing DSM solutions, STEP leverages distributed key-value stores @cite_42 @cite_36 @cite_35 @cite_37 to maintain globally shared data. Different key-value store implementations provide slightly different interfaces and functionalities. STEP decouples the specific key-value store implementation from shared memory management by introducing a DSM internal layer. We use memcached in our current implementation and can perform a light-weight switch to other key-value stores. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_22",
"@cite_36",
"@cite_42",
"@cite_5",
"@cite_25"
],
"mid": [
"982826035",
"2153704625",
"2090409324",
"2139391817",
"1734799737",
"2135875530",
"2044902313"
],
"abstract": [
"MICA is a scalable in-memory key-value store that handles 65.6 to 76.9 million key-value operations per second using a single general-purpose multi-core system. MICA is over 4-13.5x faster than current state-of-the-art systems, while providing consistently high throughput over a variety of mixed read and write workloads. MICA takes a holistic approach that encompasses all aspects of request handling, including parallel data access, network request handling, and data structure design, but makes unconventional choices in each of the three domains. First, MICA optimizes for multi-core architectures by enabling parallel access to partitioned data. Second, for efficient parallel data access, MICA maps client requests directly to specific CPU cores at the server NIC level by using client-supplied information and adopts a light-weight networking stack that bypasses the kernel. Finally, MICA's new data structures--circular logs, lossy concurrent hash indexes, and bulk chaining--handle both read-and write-intensive workloads at low overhead.",
"Reliability at massive scale is one of the biggest challenges we face at Amazon.com, one of the largest e-commerce operations in the world; even the slightest outage has significant financial consequences and impacts customer trust. The Amazon.com platform, which provides services for many web sites worldwide, is implemented on top of an infrastructure of tens of thousands of servers and network components located in many datacenters around the world. At this scale, small and large components fail continuously and the way persistent state is managed in the face of these failures drives the reliability and scalability of the software systems. This paper presents the design and implementation of Dynamo, a highly available key-value storage system that some of Amazon's core services use to provide an \"always-on\" experience. To achieve this level of availability, Dynamo sacrifices consistency under certain failure scenarios. It makes extensive use of object versioning and application-assisted conflict resolution in a manner that provides a novel interface for developers to use.",
"In this paper we consider productivity challenges for parallel programmers and explore ways that parallel language design might help improve end-user productivity. We offer a candidate list of desirable qualities for a parallel programming language, and describe how these qualities are addressed in the design of the Chapel language. In doing so, we provide an overview of Chapel's features and how they help address parallel productivity. We also survey current techniques for parallel programming and describe ways in which we consider them to fall short of our idealized productive programming model.",
"Distributed key-value stores are now a standard component of high-performance web services and cloud computing applications. While key-value stores offer significant performance and scalability advantages compared to traditional databases, they achieve these properties through a restricted API that limits object retrieval---an object can only be retrieved by the (primary and only) key under which it was inserted. This paper presents HyperDex, a novel distributed key-value store that provides a unique search primitive that enables queries on secondary attributes. The key insight behind HyperDex is the concept of hyperspace hashing in which objects with multiple attributes are mapped into a multidimensional hyperspace. This mapping leads to efficient implementations not only for retrieval by primary key, but also for partially-specified secondary attribute searches and range queries. A novel chaining protocol enables the system to achieve strong consistency, maintain availability and guarantee fault tolerance. An evaluation of the full system shows that HyperDex is 12-13x faster than Cassandra and MongoDB for finding partially specified objects. Additionally, HyperDex achieves 2-4x higher throughput for get put operations.",
"Speed up your database app with a simple, fast caching layer that uses your existing servers' spare memory.",
"We present Grappa, a modern take on software distributed shared memory (DSM) for in-memory data-intensive applications. Grappa enables users to program a cluster as if it were a single, large, non-uniform memory access (NUMA) machine. Performance scales up even for applications that have poor locality and input-dependent load distribution. Grappa addresses deficiencies of previous DSM systems by exploiting application parallelism, trading off latency for throughput. We evaluate Grappa with an in-memory MapReduce framework (10× faster than Spark [74]); a vertex-centric framework inspired by GraphLab (1.33× faster than native GraphLab [48]); and a relational query execution engine (12.5× faster than Shark [31]). All these frameworks required only 60-690 lines of Grappa code.",
"The memory coherence problem in designing and implementing a shared virtual memory on loosely coupled multiprocessors is studied in depth. Two classes of algorithms, centralized and distributed, for solving the problem are presented. A prototype shared virtual memory on an Apollo ring based on these algorithms has been implemented. Both theoretical and practical results show that the memory coherence problem can indeed be solved efficiently on a loosely coupled multiprocessor."
]
} |
1907.01478 | 2955261981 | Recently, with the prevalence of large-scale image dataset, the co-occurrence information among classes becomes rich, calling for a new way to exploit it to facilitate inference. In this paper, we propose Obj-GloVe, a generic scene-based contextual embedding for common visual objects, where we adopt the word embedding method GloVe to exploit the co-occurrence between entities. We train the embedding on pre-processed Open Images V4 dataset and provide extensive visualization and analysis by dimensionality reduction and projecting the vectors along a specific semantic axis, and showcasing the nearest neighbors of the most common objects. Furthermore, we reveal the potential applications of Obj-GloVe on object detection and text-to-image synthesis, then verify its effectiveness on these two applications respectively. | Word Embedding. Word embedding is a technique in NLP where words are mapped to vectors of real numbers. All word embedding methods can be classified into language-model (LM) based and count-based approaches. LM-based methods attempt to predict the next word from known words. The idea of word embedding became popular once embeddings were derived as a by-product of the Neural Network Language Model (NNLM), the first neural network language model, proposed by Bengio et al. @cite_31 . Mikolov et al. @cite_10 @cite_36 proposed Skip-Gram and CBOW, which compose Word2Vec, a well-known word embedding method. Different from LM-based methods, count-based embedding methods use statistics to learn a representation for each word. Deerwester et al. @cite_7 introduced Latent Semantic Analysis (LSA) and Singular Value Decomposition (SVD) applied to a term-document matrix, which can be used to build word embeddings. Lund and Burgess @cite_24 proposed Hyperspace Analogue to Language (HAL), using a context window around each word to obtain weighted word-word co-occurrence counts and build a co-occurrence matrix. GloVe, proposed by Pennington et al. @cite_23 , encodes semantic relationships between words as vector offsets in vector space, exploiting co-occurrence ratios instead of raw co-occurrence counts. GloVe is fast to train and outperforms Word2Vec in multiple NLP tasks. | {
"cite_N": [
"@cite_7",
"@cite_36",
"@cite_24",
"@cite_23",
"@cite_31",
"@cite_10"
],
"mid": [
"2147152072",
"2153579005",
"1981617416",
"2250539671",
"2132339004",
"1614298861"
],
"abstract": [
"A new method for automatic indexing and retrieval is described. The approach is to take advantage of implicit higher-order structure in the association of terms with documents (“semantic structure”) in order to improve the detection of relevant documents on the basis of terms found in queries. The particular technique used is singular-value decomposition, in which a large term by document matrix is decomposed into a set of ca. 100 orthogonal factors from which the original matrix can be approximated by linear combination. Documents are represented by ca. 100 item vectors of factor weights. Queries are represented as pseudo-document vectors formed from weighted combinations of terms, and documents with supra-threshold cosine values are returned. initial tests find this completely automatic method for retrieval to be promising.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"A procedure that processes a corpus of text and produces numeric vectors containing information about its meanings for each word is presented. This procedure is applied to a large corpus of natural language text taken from Usenet, and the resulting vectors are examined to determine what information is contained within them. These vectors provide the coordinates in a high-dimensional space in which word relationships can be analyzed. Analyses of both vector similarity and multidimensional scaling demonstrate that there is significant semantic information carried in the vectors. A comparison of vector similarity with human reaction times in a single-word priming experiment is presented. These vectors provide the basis for a representational model of semantic memory, hyperspace analogue to language (HAL).",
"Recent methods for learning vector space representations of words have succeeded in capturing fine-grained semantic and syntactic regularities using vector arithmetic, but the origin of these regularities has remained opaque. We analyze and make explicit the model properties needed for such regularities to emerge in word vectors. The result is a new global logbilinear regression model that combines the advantages of the two major model families in the literature: global matrix factorization and local context window methods. Our model efficiently leverages statistical information by training only on the nonzero elements in a word-word cooccurrence matrix, rather than on the entire sparse matrix or on individual context windows in a large corpus. The model produces a vector space with meaningful substructure, as evidenced by its performance of 75 on a recent word analogy task. It also outperforms related models on similarity tasks and named entity recognition.",
"A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
""
]
} |
1907.01478 | 2955261981 | Recently, with the prevalence of large-scale image dataset, the co-occurrence information among classes becomes rich, calling for a new way to exploit it to facilitate inference. In this paper, we propose Obj-GloVe, a generic scene-based contextual embedding for common visual objects, where we adopt the word embedding method GloVe to exploit the co-occurrence between entities. We train the embedding on pre-processed Open Images V4 dataset and provide extensive visualization and analysis by dimensionality reduction and projecting the vectors along a specific semantic axis, and showcasing the nearest neighbors of the most common objects. Furthermore, we reveal the potential applications of Obj-GloVe on object detection and text-to-image synthesis, then verify its effectiveness on these two applications respectively. | Object Relation in Object Detection. Early work used object relations as a post-processing step @cite_12 @cite_1 @cite_39 @cite_6 @cite_29 . In these works, the detections are re-scored by considering object relationships. For example, co-occurrence is used by DPM @cite_15 to refine prediction scores. Also, subsequent studies @cite_2 @cite_20 attempted to use more complex relation features, taking position and size into account. Very recently, a few studies verified that modeling relations between objects can also bring improvement to Deep Convolutional Neural Network (DCNN) based object detectors, which are considered to have already implicitly incorporated contextual information. Chen et al. @cite_27 proposed the Spatial Memory Network (SMN) for context reasoning in object detection to model instance-level context and successfully improved the performance of Faster-RCNN @cite_32 on the COCO dataset. Hu et al. @cite_30 proposed the Relation Network for object detection, using a neural network architecture to model relations between objects and forming an end-to-end object detector. Similarly, Liu et al. @cite_33 proposed a structure inference net using both instance-level relations and scene-level recognition for object detection augmentation. | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_29",
"@cite_1",
"@cite_32",
"@cite_6",
"@cite_39",
"@cite_27",
"@cite_2",
"@cite_15",
"@cite_20",
"@cite_12"
],
"mid": [
"2964080601",
"2799215407",
"2081293863",
"2037511607",
"2613718673",
"1999378860",
"2140435402",
"2951934049",
"2116510030",
"2168356304",
"2125215748",
"2141364309"
],
"abstract": [
"Although it is well believed for years that modeling relations between objects would help object recognition, there has not been evidence that the idea is working in the deep learning era. All state-of-the-art object detection systems still rely on recognizing object instances individually, without exploiting their relations during learning. This work proposes an object relation module. It processes a set of objects simultaneously through interaction between their appearance feature and geometry, thus allowing modeling of their relations. It is lightweight and in-place. It does not require additional supervision and is easy to embed in existing networks. It is shown effective on improving object recognition and duplicate removal steps in the modern object detection pipeline. It verifies the efficacy of modeling object relations in CNN based detection. It gives rise to the first fully end-to-end object detector.",
"Context is important for accurate visual recognition. In this work we propose an object detection algorithm that not only considers object visual appearance, but also makes use of two kinds of context including scene contextual information and object relationships within a single image. Therefore, object detection is regarded as both a cognition problem and a reasoning problem when leveraging these structured information. Specifically, this paper formulates object detection as a problem of graph structure inference, where given an image the objects are treated as nodes in a graph and relationships between the objects are modeled as edges in such graph. To this end, we present a so-called Structure Inference Network (SIN), a detector that incorporates into a typical detection framework (e.g. Faster R-CNN) with a graphical model which aims to infer object state. Comprehensive experiments on PASCAL VOC and MS COCO datasets indicate that scene context and object relationships truly improve the performance of object detection with more desirable and reasonable outputs.",
"In the task of visual object categorization, semantic context can play the very important role of reducing ambiguity in objects' visual appearance. In this work we propose to incorporate semantic object context as a post-processing step into any off-the-shelf object categorization model. Using a conditional random field (CRF) framework, our approach maximizes object label agreement according to contextual relevance. We compare two sources of context: one learned from training data and another queried from Google Sets. The overall performance of the proposed framework is evaluated on the PASCAL and MSRC datasets. Our findings conclude that incorporating context into object categorization greatly improves categorization accuracy.",
"In this paper, we investigate how to iteratively and mutually boost object classification and detection by taking the outputs from one task as the context of the other one. First, instead of intuitive feature and context concatenation or postprocessing with context, the so-called Contextualized Support Vector Machine (Context-SVM) is proposed, where the context takes the responsibility of dynamically adjusting the classification hyperplane, and thus the context-adaptive classifier is achieved. Then, an iterative training procedure is presented. In each step, Context-SVM, associated with the output context from one task (object classification or detection), is instantiated to boost the performance for the other task, whose augmented outputs are then further used to improve the former task by Context-SVM. The proposed solution is evaluated on the object classification and detection tasks of PASCAL Visual Object Challenge (VOC) 2007 and 2010, and achieves the state-of-the-art performance.",
"State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2 mAP) and 2012 (70.4 mAP) using 300 proposals per image. Code is available at https: github.com ShaoqingRen faster_rcnn.",
"Recognizing objects in images is an active area of research in computer vision. In the last two decades, there has been much progress and there are already object recognition systems operating in commercial products. However, most of the algorithms for detecting objects perform an exhaustive search across all locations and scales in the image comparing local image regions with an object model. That approach ignores the semantic structure of scenes and tries to solve the recognition problem by brute force. In the real world, objects tend to covary with other objects, providing a rich collection of contextual associations. These contextual associations can be used to reduce the search space by looking only in places in which the object is expected to be; this also increases performance, by rejecting patterns that look like the target but appear in unlikely places. Most modeling attempts so far have defined the context of an object in terms of other previously recognized objects. The drawback of this approach is that inferring the context becomes as difficult as detecting each object. An alternative view of context relies on using the entire scene information holistically. This approach is algorithmically attractive since it dispenses with the need for a prior step of individual object recognition. In this paper, we use a probabilistic framework for encoding the relationships between context and object properties and we show how an integrated system provides improved performance. We view this as a significant step toward general purpose machine vision systems.",
"The use of context is critical for scene understanding in computer vision, where the recognition of an object is driven by both local appearance and the object's relationship to other elements of the scene (context). Most current approaches rely on modeling the relationships between object categories as a source of context. In this paper we seek to move beyond categories to provide a richer appearance-based model of context. We present an exemplar-based model of objects and their relationships, the Visual Memex, that encodes both local appearance and 2D spatial context between object instances. We evaluate our model on Torralba's proposed Context Challenge against a baseline category-based system. Our experiments suggest that moving beyond categories for context modeling appears to be quite beneficial, and may be the critical missing ingredient in scene understanding systems.",
"Modeling instance-level context and object-object relationships is extremely challenging. It requires reasoning about bounding boxes of different classes, locations . Above all, instance-level spatial reasoning inherently requires modeling conditional distributions on previous detections. Unfortunately, our current object detection systems do not have any memory to remember what to condition on! The state-of-the-art object detectors still detect all object in parallel followed by non-maximal suppression (NMS). While memory has been used for tasks such as captioning, they mostly use image-level memory cells without capturing the spatial layout. On the other hand, modeling object-object relationships requires spatial reasoning -- not only do we need a memory to store the spatial layout, but also a effective reasoning module to extract spatial patterns. This paper presents a conceptually simple yet powerful solution -- Spatial Memory Network (SMN), to model the instance-level context efficiently and effectively. Our spatial memory essentially assembles object instances back into a pseudo \"image\" representation that is easy to be fed into another ConvNet for object-object context reasoning. This leads to a new sequential reasoning architecture where image and memory are processed in parallel to obtain detections which update the memory again. We show our SMN direction is promising as it provides 2.2 improvement over baseline Faster RCNN on the COCO dataset so far.",
"There has been a growing interest in exploiting contextual information in addition to local features to detect and localize multiple object categories in an image. A context model can rule out some unlikely combinations or locations of objects and guide detectors to produce a semantically coherent interpretation of a scene. However, the performance benefit of context models has been limited because most of the previous methods were tested on data sets with only a few object categories, in which most images contain one or two object categories. In this paper, we introduce a new data set with images that contain many instances of different object categories, and propose an efficient model that captures the contextual information among more than a hundred object categories using a tree structure. Our model incorporates global image features, dependencies between object categories, and outputs of local detectors into one probabilistic framework. We demonstrate that our context model improves object recognition performance and provides a coherent interpretation of a scene, which enables a reliable image querying system by multiple object categories. In addition, our model can be applied to scene understanding tasks that local detectors alone cannot solve, such as detecting objects out of context or querying for the most typical and the least typical scenes in a data set.",
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.",
"In this paper we study the role of context in existing state-of-the-art detection and segmentation approaches. Towards this goal, we label every pixel of PASCAL VOC 2010 detection challenge with a semantic category. We believe this data will provide plenty of challenges to the community, as it contains 520 additional classes for semantic segmentation and object detection. Our analysis shows that nearest neighbor based approaches perform poorly on semantic segmentation of contextual classes, showing the variability of PASCAL imagery. Furthermore, improvements of exist ing contextual models for detection is rather modest. In order to push forward the performance in this difficult scenario, we propose a novel deformable part-based model, which exploits both local context around each candidate detection as well as global context at the level of the scene. We show that this contextual reasoning significantly helps in detecting objects at all scales.",
"This paper presents an empirical evaluation of the role of context in a contemporary, challenging object detection task - the PASCAL VOC 2008. Previous experiments with context have mostly been done on home-grown datasets, often with non-standard baselines, making it difficult to isolate the contribution of contextual information. In this work, we present our analysis on a standard dataset, using top-performing local appearance detectors as baseline. We evaluate several different sources of context and ways to utilize it. While we employ many contextual cues that have been used before, we also propose a few novel ones including the use of geographic context and a new approach for using object spatial support."
]
} |
1907.01478 | 2955261981 | Recently, with the prevalence of large-scale image dataset, the co-occurrence information among classes becomes rich, calling for a new way to exploit it to facilitate inference. In this paper, we propose Obj-GloVe, a generic scene-based contextual embedding for common visual objects, where we adopt the word embedding method GloVe to exploit the co-occurrence between entities. We train the embedding on pre-processed Open Images V4 dataset and provide extensive visualization and analysis by dimensionality reduction and projecting the vectors along a specific semantic axis, and showcasing the nearest neighbors of the most common objects. Furthermore, we reveal the potential applications of Obj-GloVe on object detection and text-to-image synthesis, then verify its effectiveness on these two applications respectively. | Text-to-Image Synthesis. The text-to-image synthesis problem is split by Reed et al. @cite_13 into two sub-problems: learning a joint embedding between natural language and images, and training a deep convolutional generative adversarial network (GAN) to synthesise realistic images. Dong et al. @cite_5 used a pairwise ranking loss to project images and text into a joint embedding space. PPGN @cite_8 exploits a conditional network to constrain the synthetic images on a caption. StackGAN @cite_18 generates realistic images in two stages. DA-GAN @cite_4 "translates" each word into a region in an image. AttnGAN @cite_26 harnesses an attention mechanism to refine the local details of synthetic images. Current models achieve satisfying performance on datasets of a specific field (e.g., the CUB dataset @cite_11 ) but perform poorly on common-object datasets (e.g., COCO). | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_4",
"@cite_8",
"@cite_5",
"@cite_13",
"@cite_11"
],
"mid": [
"2964024144",
"2963966654",
"2799062770",
"",
"2738892144",
"2405756170",
""
],
"abstract": [
"Synthesizing high-quality images from text descriptions is a challenging problem in computer vision and has many practical applications. Samples generated by existing textto- image approaches can roughly reflect the meaning of the given descriptions, but they fail to contain necessary details and vivid object parts. In this paper, we propose Stacked Generative Adversarial Networks (StackGAN) to generate 256.256 photo-realistic images conditioned on text descriptions. We decompose the hard problem into more manageable sub-problems through a sketch-refinement process. The Stage-I GAN sketches the primitive shape and colors of the object based on the given text description, yielding Stage-I low-resolution images. The Stage-II GAN takes Stage-I results and text descriptions as inputs, and generates high-resolution images with photo-realistic details. It is able to rectify defects in Stage-I results and add compelling details with the refinement process. To improve the diversity of the synthesized images and stabilize the training of the conditional-GAN, we introduce a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold. Extensive experiments and comparisons with state-of-the-arts on benchmark datasets demonstrate that the proposed method achieves significant improvements on generating photo-realistic images conditioned on text descriptions.",
"In this paper, we propose an Attentional Generative Adversarial Network (AttnGAN) that allows attention-driven, multi-stage refinement for fine-grained text-to-image generation. With a novel attentional generative network, the AttnGAN can synthesize fine-grained details at different sub-regions of the image by paying attentions to the relevant words in the natural language description. In addition, a deep attentional multimodal similarity model is proposed to compute a fine-grained image-text matching loss for training the generator. The proposed AttnGAN significantly outperforms the previous state of the art, boosting the best reported inception score by 14.14 on the CUB dataset and 170.25 on the more challenging COCO dataset. A detailed analysis is also performed by visualizing the attention layers of the AttnGAN. It for the first time shows that the layered attentional GAN is able to automatically select the condition at the word level for generating different parts of the image.",
"Unsupervised image translation, which aims in translating two independent sets of images, is challenging in discovering the correct correspondences without paired data. Existing works build upon Generative Adversarial Networks (GANs) such that the distribution of the translated images are indistinguishable from the distribution of the target set. However, such set-level constraints cannot learn the instance-level correspondences (e.g. aligned semantic parts in object transfiguration task). This limitation often results in false positives (e.g. geometric or semantic artifacts), and further leads to mode collapse problem. To address the above issues, we propose a novel framework for instance-level image translation by Deep Attention GAN (DA-GAN). Such a design enables DA-GAN to decompose the task of translating samples from two sets into translating instances in a highly-structured latent space. Specifically, we jointly learn a deep attention encoder, and the instance-level correspondences could be consequently discovered through attending on the learned instances. Therefore, the constraints could be exploited on both set-level and instance-level. Comparisons against several state-of-the-arts demonstrate the superiority of our approach, and the broad application capability, e.g, pose morphing, data augmentation, etc., pushes the margin of domain translation problem.1",
"",
"In this paper, we propose a way of synthesizing realistic images directly with natural language description, which has many useful applications, e.g. intelligent image manipulation. We attempt to accomplish such synthesis: given a source image and a target text description, our model synthesizes images to meet two requirements: 1) being realistic while matching the target text description; 2) maintaining other image features that are irrelevant to the text description. The model should be able to disentangle the semantic information from the two modalities (image and text), and generate new images from the combined semantics. To achieve this, we proposed an end-to-end neural architecture that leverages adversarial learning to automatically learn implicit loss functions, which are optimized to fulfill the aforementioned two requirements. We have evaluated our model by conducting experiments on Caltech-200 bird dataset and Oxford-102 flower dataset, and have demonstrated that our model is capable of synthesizing realistic images that match the given descriptions, while still maintain other features of original images.",
"Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions.",
""
]
} |
1907.01591 | 2953962999 | Collaborative filtering based algorithms, including Recurrent Neural Networks (RNN), tend towards predicting a perpetuation of past observed behavior. In a recommendation context, this can lead to an overly narrow set of suggestions lacking in serendipity and inadvertently placing the user in what is known as a "filter bubble." In this paper, we grapple with the issue of the filter bubble in the context of a course recommendation system in production at a public university. Most universities in the United States encourage students to explore developing interests while simultaneously advising them to adhere to course taking norms which progress them towards graduation. These competing objectives, and the stakes involved for students, make this context a particularly meaningful one for investigating real-world recommendation strategies. We introduce a novel modification to the skip-gram model applied to nine years of historic course enrollment sequences to learn course vector representations used to diversify recommendations based on similarity to a student's specified favorite course. This model, which we call multifactor2vec, is intended to improve the semantics of the primary token embedding by also learning embeddings of potentially conflated factors of the token (e.g., instructor). Our offline testing found this model improved accuracy and recall on our course similarity and analogy validation sets over a standard skip-gram. Incorporating course catalog description text resulted in further improvements. We compare the performance of these models to the system's existing RNN-based recommendations with a user study of undergraduates (N = 70) rating six characteristics of their course recommendations. Results of the user study show a dramatic lack of novelty in RNN recommendations and depict the characteristic trade-offs that make serendipity difficult to achieve. | Prior work measured the "filter bubble" effect in terms of content diversity at the individual level and found that collaborative filtering-based recommender systems expose users to a slightly narrowing set of items over time. It has also been proposed that the recommender community should move beyond conventional accuracy metrics and their associated experimental methodologies. To counter the "filter bubble", one line of work used a collection of novel LDA-based algorithms inspired by principles of "serendipitous discovery" and injected serendipity, novelty, and diversity into music recommendations whilst limiting the impact on accuracy. Serendipity metrics that measure the uncertainty and relevance of a user's attitude towards items, in order to mitigate the over-specialization problem with surprising suggestions, have been combined with traditional collaborative filtering recommendation @cite_8 and content-based recommendation @cite_26 . Other work presented the Personal Innovator Degree (PID), which focuses on the dynamics and precedence of user preference to recommend items that match the latest preference of the target user to achieve serendipity. | {
"cite_N": [
"@cite_26",
"@cite_8"
],
"mid": [
"2146825440",
"2896873311"
],
"abstract": [
"We examine the case of over-specialization in recommender systems, which results from returning items that are too similar to those previously rated by the user. We propose Outside-The-Box (otb) recommendation, which takes some risk to help users make fresh discoveries, while maintaining high relevance. The proposed formalization relies on item regions and attempts to identify regions that are under-exposed to the user. We develop a recommendation algorithm which achieves a compromise between relevance and risk to find otb items. We evaluate this approach on the MovieLens data set and compare our otb recommendations against conventional recommendation strategies.",
"Most recommender algorithms are designed to suggest relevant items, but suggesting these items does not always result in user satisfaction. Therefore, the efforts in recommender systems recently shifted towards serendipity, but generating serendipitous recommendations is difficult due to the lack of training data. To the best of our knowledge, there are many large datasets containing relevance scores (relevance oriented) and only one publicly available dataset containing a relatively small number of serendipity scores (serendipity oriented). This limits the learning capabilities of serendipity oriented algorithms. Therefore, in the absence of any known deep learning algorithms for recommending serendipitous items and the lack of large serendipity oriented datasets, we introduce SerRec our novel transfer learning method to recommend serendipitous items. SerRec uses transfer learning to firstly train a deep neural network for relevance scores using a large dataset and then tunes it for serendipity scores using a smaller dataset. Our method shows benefits of transfer learning for recommending serendipitous items as well as performance gains over the state-of-the-art serendipity oriented algorithms"
]
} |
1907.01591 | 2953962999 | Collaborative filtering based algorithms, including Recurrent Neural Networks (RNN), tend towards predicting a perpetuation of past observed behavior. In a recommendation context, this can lead to an overly narrow set of suggestions lacking in serendipity and inadvertently placing the user in what is known as a "filter bubble." In this paper, we grapple with the issue of the filter bubble in the context of a course recommendation system in production at a public university. Most universities in the United States encourage students to explore developing interests while simultaneously advising them to adhere to course taking norms which progress them towards graduation. These competing objectives, and the stakes involved for students, make this context a particularly meaningful one for investigating real-world recommendation strategies. We introduce a novel modification to the skip-gram model applied to nine years of historic course enrollment sequences to learn course vector representations used to diversify recommendations based on similarity to a student's specified favorite course. This model, which we call multifactor2vec, is intended to improve the semantics of the primary token embedding by also learning embeddings of potentially conflated factors of the token (e.g., instructor). Our offline testing found this model improved accuracy and recall on our course similarity and analogy validation sets over a standard skip-gram. Incorporating course catalog description text resulted in further improvements. We compare the performance of these models to the system's existing RNN-based recommendations with a user study of undergraduates (N = 70) rating six characteristics of their course recommendations. Results of the user study show a dramatic lack of novelty in RNN recommendations and depict the characteristic trade-offs that make serendipity difficult to achieve. | Recommender systems in higher education contexts have recently focused on prediction of which courses a student will take or the grade they will receive if enrolled. At Stanford, a system called "CARTA" allows students to see grade distributions, course evaluations, and the most common courses taken before a course of interest @cite_2 . At UC Berkeley, our AskOski (https://askoski.berkeley.edu) recommender, named after the school's mascot, serves students next-semester course considerations based on their personal course enrollment history @cite_17 . Earlier systems included a focus on requirement satisfaction @cite_25 and career-based relevancy recommendation @cite_24 . No system has yet focused on serendipitous or novel course discovery. | {
"cite_N": [
"@cite_24",
"@cite_25",
"@cite_17",
"@cite_2"
],
"mid": [
"1997050658",
"2005653178",
"2907171927",
"2809138487"
],
"abstract": [
"User participation emerged as a critical issue for collaborative and social recommender systems as well as for a range of other systems based on the power of user community. A range of mechanisms to encourage user participation in social systems has been proposed over the last few years; however, the impact of these mechanisms on users behavior in recommender systems has not been studied sufficiently. This paper investigates the impact of encouraging user participation in the context of CourseAgent, a community-based course recommender system. The recommendation power of CourseAgent is based on course ratings provided by a community of students. To increase the number of course ratings, CourseAgent applies an incentive mechanism which turns user feedback into a self-beneficial activity. In this paper, we describe the design and implementation of our course recommendation system and its incentive mechanism. We also report a dual impact of this mechanism on user behavior discovered in two user studies.",
"We study the problem of making recommendations when the objects to be recommended must also satisfy constraints or requirements. In particular, we focus on course recommendations: the courses taken by a student must satisfy requirements (e.g., take two out of a set of five math courses) in order for the student to graduate. Our work is done in the context of the CourseRank system, used by students to plan their academic program at Stanford University. Our goal is to recommend to these students courses that not only help satisfy constraints, but that are also desirable (e.g., popular or taken by similar students). We develop increasingly expressive models for course requirements, and present a variety of schemes for both checking if the requirements are satisfied, and for making recommendations that take into account the requirements. We show that some types of requirements are inherently expensive to check, and we present exact, as well as heuristic techniques, for those cases. Although our work is specific to course requirements, it provides insights into the design of recommendation systems in the presence of complex constraints found in other applications.",
"The aggregate behaviors of users can collectively encode deep semantic information about the objects with which they interact. In this paper, we demonstrate novel ways in which the synthesis of these data can illuminate the terrain of users’ environment and support them in their decision making and wayfinding. A novel application of recurrent neural networks and skip-gram models, approaches popularized by their application to modeling language, are brought to bear on student university enrollment sequences to create vector representations of courses and map out traversals across them. We present demonstrations of how scrutability from these neural networks can be gained and how the combination of these techniques can be seen as an evolution of content tagging and a means for a recommender to balance user preferences inferred from data with those explicitly specified. From validation of the models to the development of a UI, we discuss additional requisite functionality informed by the results of a usability study leading to the ultimate deployment of the system at a university.",
"College students rely on increasingly data-rich environments when making learning-relevant decisions about the courses they take and their expected time commitments. However, we know little about how their exposure to such data may influence student course choice, effort regulation, and performance. We conducted a large-scale field experiment in which all the undergraduates at a large, selective university were randomized to an encouragement to use a course-planning web application that integrates information from official transcripts from the past fifteen years with detailed end-of-course evaluation surveys. We found that use of the platform lowered students' GPA by 0.28 standard deviations on average. In a subsequent field experiment, we varied access to information about course grades and time commitment on the platform and found that access to grade information in particular lowered students' overall GPA. Our exploratory analysis suggests these effects are not due to changes in the portfolio of courses that students choose, but rather by changes to their behavior within courses."
]
} |
1907.01713 | 2955819861 | In this paper, a novel deep reinforcement learning (DRL)-based method is proposed to navigate the robot team through unknown complex environments, where the geometric centroid of the robot team aims to reach the goal position while avoiding collisions and maintaining connectivity. Decentralized robot-level policies are derived using a mechanism of centralized learning and decentralized executing. The proposed method can derive end-to-end policies, which map raw lidar measurements into velocity control commands of robots without the necessity of constructing obstacle maps. Simulation and indoor real-world unmanned ground vehicles (UGVs) experimental results verify the effectiveness of the proposed method. | There exists extensive research on multi-robot navigation, which can be further categorized into rule-based and learning-based strategies. Rule-based approaches include the leader-follower scheme @cite_11 , artificial potential field (APF) @cite_17 , graph theory @cite_10 , consensus theory @cite_23 , model predictive control @cite_6 , virtual structure @cite_7 @cite_8 , etc. | {
"cite_N": [
"@cite_11",
"@cite_7",
"@cite_8",
"@cite_6",
"@cite_23",
"@cite_10",
"@cite_17"
],
"mid": [
"2110375737",
"2202063402",
"",
"2091756921",
"2105850748",
"2529941186",
"2166967570"
],
"abstract": [
"In this paper, control laws are designed to achieve desired flight formations for a group of unmanned (uninhabited) aerial vehicles (UAVs). It is proposed that the formation is led and managed by a leader UAV, which determines desired (for instance, safe and achievable) flight trajectories for a group of follower UAVs. Having the desired trajectories, control laws are designed to achieve flight formations according to one of the following scenarios: (1) Each UAV takes off toward its corresponding trajectory and locks on to it in finite time; the UAVs take off independently of each other and one at a time; (2) all UAVs take off simultaneously towards their corresponding trajectories and lock on to them at the same instance of time. Examples are presented to illustrate the efficacy of the designed control laws.",
"This paper presents a method for navigating a team of robots in formation in 2D and 3D environments with static and dynamic obstacles. The method is local and computes the optimal parameters for the formation within a neighborhood of the robots, allowing for reconfigurations, when required, by considering a set of target formations. The method consists of first computing the largest collision-free convex polytope in a neighborhood of the robots, followed by a constrained optimization via sequential convex programming where the optimal parameters for the formation are obtained. The robots navigate towards the target collision-free formation with individual local planners that account for their dynamics. The approach is efficient and scalable with the number of robots and performed well in simulations with a large team of quadrators and in experiments with two mobile manipulators carrying a rigid object.",
"",
"An approach for coordination and control of 3D heterogeneous formations of unmanned aerial and ground vehicles under hawk-eye-like relative localization is presented in this paper. The core of the method lies in the use of visual top-view feedback from flying robots for the stabilization of the entire group in a leader-follower formation. We formulate a novel model predictive control-based methodology for guiding the formation. The method is employed to solve the trajectory planning and control of a virtual leader into a desired target region. In addition, the method is used for keeping the following vehicles in the desired shape of the group. The approach is designed to ensure direct visibility between aerial and ground vehicles, which is crucial for the formation stabilization using the hawk-eye-like approach. The presented system is verified in numerous experiments inspired by search-and-rescue applications, where the formation acts as a searching phalanx. In addition, stability and convergence analyses are provided to explicitly determine the limitations of the method in real-world applications.",
"In this paper, we present a theoretical framework for design and analysis of distributed flocking algorithms. Two cases of flocking in free-space and presence of multiple obstacles are considered. We present three flocking algorithms: two for free-flocking and one for constrained flocking. A comprehensive analysis of the first two algorithms is provided. We demonstrate the first algorithm embodies all three rules of Reynolds. This is a formal approach to extraction of interaction rules that lead to the emergence of collective behavior. We show that the first algorithm generically leads to regular fragmentation, whereas the second and third algorithms both lead to flocking. A systematic method is provided for construction of cost functions (or collective potentials) for flocking. These collective potentials penalize deviation from a class of lattice-shape objects called spl alpha -lattices. We use a multi-species framework for construction of collective potentials that consist of flock-members, or spl alpha -agents, and virtual agents associated with spl alpha -agents called spl beta - and spl gamma -agents. We show that migration of flocks can be performed using a peer-to-peer network of agents, i.e., \"flocks need no leaders.\" A \"universal\" definition of flocking for particle systems with similarities to Lyapunov stability is given. Several simulation results are provided that demonstrate performing 2-D and 3-D flocking, split rejoin maneuver, and squeezing maneuver for hundreds of agents using the proposed algorithms.",
"Multi-robot cooperative navigation in real-world environments is essential in many applications, including surveillance and search-and-rescue missions. State-of-the-art methods for cooperative navigation are often tested in ideal laboratory conditions and not ready to be deployed in real-world environments, which are often cluttered with static and dynamic obstacles. In this work, we explore a graph-based framework to achieve control of real robot formations moving in a world cluttered with a variety of obstacles by introducing a new distributed algorithm for reconfiguring the formation shape. We systematically validate the reconfiguration algorithm using three real robots in scenarios of increasing complexity.",
"Potential function approaches to robot navigation provide an elegant paradigm for expressing multiple constraints and goals in mobile robot navigation problems. As an example, a simple reactive navigation strategy can be generated by combining repulsion from obstacles with attraction to a goal. Advantages of this approach can also be extended to multirobot teams. In this paper we present a new class of potential functions for multiple robots that enables homogeneous large-scale robot teams to arrange themselves in geometric formations while navigating to a goal location through an obstacle field. The approach is inspired by the way molecules \"snap\" into place as they form crystals; the robots are drawn to particular \"attachment sites\" positioned with respect to other robots. We refer to these potential functions as \"social potentials\" because they are constructed with respect to other agents. Initial results, generated in simulation, illustrate the viability of the approach."
]
} |
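The rule-based strategies cited in the record above (APF, social potentials, flocking) share one computational core: each robot sums an attractive term toward its goal with repulsive terms away from nearby obstacles or neighbors to obtain a velocity command. The sketch below illustrates that idea in Python; it is not code from any of the cited papers, and the gains, the influence radius, and the function name apf_velocity are assumptions made for this example.

```python
import numpy as np

def apf_velocity(pos, goal, obstacles, k_att=1.0, k_rep=0.5, influence=2.0, v_max=1.0):
    """Artificial-potential-field style velocity command for one robot.

    pos, goal: (2,) arrays; obstacles: (N, 2) array of obstacle positions.
    Attraction pulls the robot toward the goal; repulsion pushes it away
    from obstacles closer than `influence`. Gain values are illustrative only.
    """
    # Attractive term: proportional to the vector from the robot to the goal.
    v = k_att * (goal - pos)

    # Repulsive terms: inverse-distance push away from nearby obstacles.
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:
            v += k_rep * (1.0 / d - 1.0 / influence) * diff / d**3

    # Saturate to the robot's maximum speed.
    speed = np.linalg.norm(v)
    if speed > v_max:
        v = v / speed * v_max
    return v

# Example: one robot at the origin, goal at (5, 0), one obstacle in between.
print(apf_velocity(np.array([0.0, 0.0]), np.array([5.0, 0.0]),
                   np.array([[2.5, 0.2]])))
```

The DRL method in the record above can be read as replacing this hand-designed mapping with a learned policy that produces the velocity command directly from raw lidar measurements.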
1907.01589 | 2953878887 | Cryo-electron microscopy (cryo-EM), the subject of the 2017 Nobel Prize in Chemistry, is a technology for determining the 3-D structure of macromolecules from many noisy 2-D projections of instances of these macromolecules, whose orientations and positions are unknown. The molecular structures are not rigid objects, but flexible objects involved in dynamical processes. The different conformations are exhibited by different instances of the macromolecule observed in a cryo-EM experiment, each of which is recorded as a particle image. The range of conformations and the conformation of each particle are not known a priori; one of the great promises of cryo-EM is to map this conformation space. Remarkable progress has been made in determining rigid structures from homogeneous samples of molecules in spite of the unknown orientation of each particle image, and significant progress has been made in recovering a few distinct states from mixtures of rather distinct conformations, but more complex heterogeneous samples remain a major challenge. We introduce the "hyper-molecule" framework for modeling structures across different states of heterogeneous molecules, including continuums of states. The key idea behind this framework is representing heterogeneous macromolecules as high-dimensional objects, with the additional dimensions representing the conformation space. This idea is then refined to model properties such as localized heterogeneity. In addition, we introduce an algorithmic framework for recovering such maps of heterogeneous objects from experimental data using a Bayesian formulation of the problem and Markov chain Monte Carlo (MCMC) algorithms to address the computational challenges in recovering these high-dimensional hyper-molecules. We demonstrate these ideas in a prototype applied to synthetic data. | In addition to homogeneous reconstruction, many of the methods mentioned above also accommodate discrete heterogeneity through a 3-D classification framework, where each particle image is assigned to a separate 3-D reconstruction by maximizing a similarity measure. Expectation-maximization algorithms, such as RELION @cite_18 , generalize to discrete heterogeneity by estimating conditional joint distributions of orientations and discrete class assignments. While this approach has led to impressive results, it requires significant human intervention in a process of successive refinement of the datasets to achieve a more homogeneous sample, and structures that are not well represented in the data tend to be lost @cite_33 . | {
"cite_N": [
"@cite_18",
"@cite_33"
],
"mid": [
"2104234755",
"2257760214"
],
"abstract": [
"RELION, for REgularized LIkelihood OptimizatioN, is an open-source computer program for the refinement of macromolecular structures by single-particle analysis of electron cryo-microscopy (cryo-EM) data. Whereas alternative approaches often rely on user expertise for the tuning of parameters, RELION uses a Bayesian approach to infer parameters of a statistical model from the data. This paper describes developments that reduce the computational costs of the underlying maximum a posteriori (MAP) algorithm, as well as statistical considerations that yield new insights into the accuracy with which the relative orientations of individual particles may be determined. A so-called gold-standard Fourier shell correlation (FSC) procedure to prevent overfitting is also described. The resulting implementation yields high-quality reconstructions and reliable resolution estimates with minimal user intervention and at acceptable computational costs.",
"Single-particle reconstruction is the process by which 3D density maps are obtained from a set of low-dose cryo-EM images of individual macromolecules. This review considers the fundamental principles of this process and the steps in the overall workflow for single-particle image processing. Also considered are the limits that image signal-to-noise ratio places on resolution and the distinguishing of heterogeneous particle populations."
]
} |
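The 3-D classification described in the related-work passage above hinges on an expectation step that weighs every (class, orientation) pair for each particle image. Below is a minimal numpy sketch of that responsibility computation under an assumed white Gaussian noise model and a uniform prior over orientations; it illustrates the general idea only and is not RELION's implementation, and the array shapes and the helper name e_step_responsibilities are assumptions of this example.

```python
import numpy as np

def e_step_responsibilities(images, projections, sigma, class_prior):
    """Posterior over (class k, orientation r) for each particle image.

    images:      (N, D)    flattened particle images
    projections: (K, R, D) template projections of K class maps at R orientations
    sigma:       assumed std of the white Gaussian pixel noise
    class_prior: (K,)      prior class probabilities
    Returns responsibilities of shape (N, K, R) and class posteriors (N, K).
    """
    # Squared error between every image and every template projection.
    diff = images[:, None, None, :] - projections[None, :, :, :]
    sq_err = np.sum(diff ** 2, axis=-1)                      # (N, K, R)

    # Log-posterior up to a constant: Gaussian likelihood times class prior
    # (a uniform prior over orientations is assumed here).
    log_post = -0.5 * sq_err / sigma**2 + np.log(class_prior)[None, :, None]

    # Normalize with a stable softmax over all (class, orientation) pairs.
    log_post -= log_post.max(axis=(1, 2), keepdims=True)
    resp = np.exp(log_post)
    resp /= resp.sum(axis=(1, 2), keepdims=True)             # (N, K, R)

    return resp, resp.sum(axis=2)                            # class posteriors (N, K)

# Tiny example with 2 particles, 2 classes, 3 orientations on 16-pixel images.
rng = np.random.default_rng(1)
templates = rng.normal(size=(2, 3, 16))
imgs = templates[0, 1] + 0.1 * rng.normal(size=(2, 16))
resp, class_post = e_step_responsibilities(imgs, templates, sigma=0.1,
                                            class_prior=np.array([0.5, 0.5]))
print(class_post)  # both particles should favor class 0
```

RELION additionally marginalizes over in-plane translations and models the contrast transfer function; the sketch keeps only the joint weighting of class and orientation discussed above.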
1907.01589 | 2953878887 | Cryo-electron microscopy (cryo-EM), the subject of the 2017 Nobel Prize in Chemistry, is a technology for determining the 3-D structure of macromolecules from many noisy 2-D projections of instances of these macromolecules, whose orientations and positions are unknown. The molecular structures are not rigid objects, but flexible objects involved in dynamical processes. The different conformations are exhibited by different instances of the macromolecule observed in a cryo-EM experiment, each of which is recorded as a particle image. The range of conformations and the conformation of each particle are not known a priori; one of the great promises of cryo-EM is to map this conformation space. Remarkable progress has been made in determining rigid structures from homogeneous samples of molecules in spite of the unknown orientation of each particle image, and significant progress has been made in recovering a few distinct states from mixtures of rather distinct conformations, but more complex heterogeneous samples remain a major challenge. We introduce the "hyper-molecule" framework for modeling structures across different states of heterogeneous molecules, including continuums of states. The key idea behind this framework is representing heterogeneous macromolecules as high-dimensional objects, with the additional dimensions representing the conformation space. This idea is then refined to model properties such as localized heterogeneity. In addition, we introduce an algorithmic framework for recovering such maps of heterogeneous objects from experimental data using a Bayesian formulation of the problem and Markov chain Monte Carlo (MCMC) algorithms to address the computational challenges in recovering these high-dimensional hyper-molecules. We demonstrate these ideas in a prototype applied to synthetic data. | More recently, the RELION framework has been extended to include multi-body refinement @cite_24 (also see @cite_0 @cite_6 @cite_34 @cite_1 @cite_64 ). In this approach, the user selects different rigid 3-D models that are to be refined separately from the main, or consensus, model. Each sub-model is then refined separately, with its own viewing direction and translation, allowing it to move with respect to the consensus model in a rigid-body fashion. This method is limited to rigid-body variability in a few sub-volumes, and cannot handle non-rigid deformations or other types of variability. In particular, the structure found at the interface between the sub-models is likely to vary as their relative positions vary, and it is therefore lost in this method. | {
"cite_N": [
"@cite_64",
"@cite_1",
"@cite_6",
"@cite_24",
"@cite_0",
"@cite_34"
],
"mid": [
"2258912434",
"2120396539",
"2138390185",
"2951611668",
"2150790075",
"2072064506"
],
"abstract": [
"Electron cryomicroscopy can yield near-atomic resolution structures of highly ordered macromolecular complexes. Often however some subunits bind in a flexible manner, have different symmetry from the rest of the complex, or are present in sub-stoichiometric amounts, limiting the attainable resolution. Here we report a general method for the localized three-dimensional reconstruction of such subunits. After determining the particle orientations, local areas corresponding to the subunits can be extracted and treated as single particles. We demonstrate the method using three examples including a flexible assembly and complexes harbouring subunits with either partial occupancy or mismatched symmetry. Most notably, the method allows accurate fitting of the monomeric RNA-dependent RNA polymerase bound at the threefold axis of symmetry inside a viral capsid, revealing for the first time its exact orientation and interactions with the capsid proteins. Localized reconstruction is expected to provide novel biological insights in a range of challenging biological systems.",
"An enzyme called gamma-secretase cuts other proteins in cells into smaller pieces. Like most enzymes, gamma-secretase is expected to move through several different three-dimensional shapes to perform its role, and identifying these structures could help us to understand how the enzyme works. One of the proteins that is targeted by gamma-secretase is called amyloid precursor protein, and cutting this protein results in the formation of so-called amyloid-beta peptides. When gamma-secretase doesn't work properly, these amyloid-beta peptides can accumulate in the brain and large accumulations of these peptides have been observed in the brains of patients with Alzheimer's disease. Earlier in 2015, a group of researchers used a technique called cryo-electron microscopy (cryo-EM) to produce a three-dimensional model of gamma-secretase. This revealed that the active site of the enzyme, that is, the region that is used to cut the other proteins, is particularly flexible. Now, – including many of the researchers from the earlier work – studied this flexibility in more detail. For the experiments, gamma-secretase was exposed to an inhibitor molecule that stopped it from cutting other proteins. This meant that the structure of gamma-secretase became more rigid than normal, which made it possible to collect more detailed structural information using cryo-EM. also developed new methods for processing images to separate the images of individual enzyme molecules based on the different shapes they had adopted at the time. These methods make it possible to view a mixture of very similar enzyme structures that differ only in a small region of the protein (in this case the active site). In the future, it would be useful to repeat these imaging experiments using a range of different molecules that alter the activity of gamma-secretase. Furthermore, the new image processing methods developed by could be used to study flexibility in the shapes of other proteins.",
"Each year, malaria kills more than 600,000 people, mostly children younger than 5 years old. Humans who have been bitten by mosquitoes infected with malaria-causing parasites become ill as the parasites rapidly multiply in blood cells. Although there are several drugs that are currently used to treat malaria, the parasites are rapidly developing resistance to them, setting off an urgent hunt for new malaria drugs. Developing new antimalarial medications from scratch is likely to take decades—too long to combat the current public health threat posed by emerging strains of drug-resistant parasites. To speed up the process, scientists are investigating whether drugs developed for other illnesses may also act as therapies for malaria, either when used alone or in combination with existing malaria drugs. Certain antibiotics—including one called emetine—have already shown promise as antimalarial drugs. These antibiotics prevent the parasites from multiplying by interfering with the ribosome—the part of a cell that builds new proteins. However, humans become ill after taking emetine for long periods because it also blocks the production of human proteins. Tweaking emetine so that it acts only against the production of parasite proteins would make it a safer malaria treatment. To do this, scientists must first map the precise interactions between the drug and the ribosomes in parasites. have now used a technique called cryo-electron microscopy to examine the ribosome of the most virulent form of malaria parasite. This technique uses very cold temperatures to rapidly freeze molecules, allowing scientists to look at molecular-level details without distorting the structure of the molecule—a problem sometimes encountered in other techniques. The images of the parasitic ribosome taken by Wong, Bai, show that emetine binds to the end of the ribosome where the instructions for how to assemble amino acids into a protein are copied from strands of RNA. In addition, the images revealed features of the parasitic ribosome that are not found in the human form. Drug makers could exploit these features to improve emetine so that it more specifically targets the production of proteins by the parasite and is less toxic to humans.",
"Macromolecular complexes that exhibit continuous forms of structural flexibility pose a challenge for many existing tools in cryo-EM single-particle analysis. We describe a new tool, called multi-b ...",
"Mitochondria have specialized ribosomes that have diverged from their bacterial and cytoplasmic counterparts. We have solved the structure of the yeast mitoribosomal large subunit using single-particle cryo–electron microscopy. The resolution of 3.2 angstroms enabled a nearly complete atomic model to be built de novo and refined, including 39 proteins, 13 of which are unique to mitochondria, as well as expansion segments of mitoribosomal RNA. The structure reveals a new exit tunnel path and architecture, unique elements of the E site, and a putative membrane docking site.",
"N-ethylmaleimide-sensitive factor (NSF) and α soluble NSF attachment proteins (α-SNAPs) work together within a 20S particle to disassemble and recycle the SNAP receptor (SNARE) complex after intracellular membrane fusion. To understand the disassembly mechanism of the SNARE complex by NSF and α-SNAP, we performed single-particle cryo-electron microscopy analysis of 20S particles and determined the structure of the α-SNAP-SNARE assembly portion at a resolution of 7.35 A. The structure illustrates that four α-SNAPs wrap around the single left-handed SNARE helical bundle as a right-handed cylindrical assembly within a 20S particle. A conserved hydrophobic patch connecting helices 9 and 10 of each α-SNAP forms a chock protruding into the groove of the SNARE four-helix bundle. Biochemical studies proved that this structural element was critical for SNARE complex disassembly. Our study suggests how four α-SNAPs may coordinate with the NSF to tear the SNARE complex into individual proteins."
]
} |
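The hyper-molecule idea in the abstract above treats conformation as an extra coordinate of the reconstructed object, so a conventional 3-D map is recovered by fixing that coordinate. The toy sketch below makes this concrete with a low-order polynomial basis over the conformation coordinate; the basis choice, grid size, and class name HyperMolecule are assumptions of this example, not details from the paper.

```python
import numpy as np

class HyperMolecule:
    """Toy hyper-molecule: a 3-D map that varies with a conformation coordinate tau.

    The map is stored as coefficient volumes c_0..c_{M-1} of a polynomial basis
    in tau, so rho(x, tau) = sum_m c_m(x) * tau**m. Real implementations use
    richer bases and a Bayesian/MCMC fit; this class only illustrates evaluation.
    """

    def __init__(self, coeff_volumes):
        # coeff_volumes: (M, n, n, n) array of basis coefficient volumes.
        self.coeffs = np.asarray(coeff_volumes)

    def volume_at(self, tau):
        """Return the conventional 3-D volume at conformation value tau in [0, 1]."""
        powers = tau ** np.arange(self.coeffs.shape[0])      # (M,)
        return np.tensordot(powers, self.coeffs, axes=1)     # (n, n, n)

# Example: a 2-term hyper-molecule on an 8^3 grid whose density grows linearly with tau.
rng = np.random.default_rng(0)
base = rng.random((8, 8, 8))
hyper = HyperMolecule(np.stack([base, 0.5 * base]))
print(hyper.volume_at(0.0).sum(), hyper.volume_at(1.0).sum())
```

In the Bayesian formulation described in the abstract, each particle image would then carry latent orientation and conformation variables that an MCMC sampler explores jointly with these coefficient volumes.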