Columns: aid (string, 9-15 chars), mid (string, 7-10 chars), abstract (string, 78-2.56k chars), related_work (string, 92-1.77k chars), ref_abstract (dict).
1907.01589
2953878887
Cryo-electron microscopy (cryo-EM), the subject of the 2017 Nobel Prize in Chemistry, is a technology for determining the 3-D structure of macromolecules from many noisy 2-D projections of instances of these macromolecules, whose orientations and positions are unknown. The molecular structures are not rigid objects, but flexible objects involved in dynamical processes. The different conformations are exhibited by different instances of the macromolecule observed in a cryo-EM experiment, each of which is recorded as a particle image. The range of conformations and the conformation of each particle are not known a priori; one of the great promises of cryo-EM is to map this conformation space. Remarkable progress has been made in determining rigid structures from homogeneous samples of molecules in spite of the unknown orientation of each particle image, and significant progress has been made in recovering a few distinct states from mixtures of rather distinct conformations, but more complex heterogeneous samples remain a major challenge. We introduce the "hyper-molecule" framework for modeling structures across different states of heterogeneous molecules, including continuums of states. The key idea behind this framework is representing heterogeneous macromolecules as high-dimensional objects, with the additional dimensions representing the conformation space. This idea is then refined to model properties such as localized heterogeneity. In addition, we introduce an algorithmic framework for recovering such maps of heterogeneous objects from experimental data using a Bayesian formulation of the problem and Markov chain Monte Carlo (MCMC) algorithms to address the computational challenges in recovering these high-dimensional hyper-molecules. We demonstrate these ideas in a prototype applied to synthetic data.
The covariance estimation approach proposed in @cite_39 does not rely on a particular model for heterogeneity, be it discrete or continuous. Indeed, the authors present a method for characterizing continuous variability in synthetic data. However, the covariance approach assumes a linear model of variability and is therefore not well suited to continuous, and necessarily non-linear, variability. Furthermore, the limited resolution of the reconstruction precludes the study of heterogeneity at a higher level of detail. Another approach has been to study the normal modes of perturbation of a macromolecular structure @cite_45 @cite_13 .
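To make the limitation concrete: a covariance-based approach implicitly treats variability as approximately linear, in the sense of the standard principal-component picture sketched below (the notation is generic and is not taken from @cite_39),

\[
x \approx \mu + \sum_{k=1}^{r} z_k u_k, \qquad \Sigma = \mathbb{E}\big[(x - \mu)(x - \mu)^{\top}\big] \approx \sum_{k=1}^{r} \lambda_k u_k u_k^{\top},
\]

so the recoverable states lie near an r-dimensional affine subspace spanned by the top eigenvectors of the estimated low-rank covariance. A continuous conformational motion, for example a rotating domain, generally traces a curved manifold in the space of density maps, which such an affine model can only approximate coarsely.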
{ "cite_N": [ "@cite_45", "@cite_13", "@cite_39" ], "mid": [ "2127650328", "2160880244", "2963304184" ], "abstract": [ "A theory of elastic normal modes is described for the exploration of global distortions of biological structures and their assemblies based upon low-resolution image data. Structural information at low resolution, e.g. from density maps measured by cryogenic electron microscopy (cryo-EM), is used to construct discrete multi-resolution models for the electron density using the techniques of vector quantization. The elastic normal modes computed based on these discretized low-resolution models are found to compare well with the normal modes obtained at atomic resolution. The quality of the normal modes describing global displacements of the molecular system is found to depend on the resolution of the synthetic EM data and the extent of reductionism in the discretized representation. However, models that reproduce the functional rearrangements of our test set of molecules are achieved for realistic values of experimental resolution. Thus large conformational changes as occur during the functioning of biological macromolecules and assemblies can be elucidated directly from low-resolution structural data through the application of elastic normal mode theory and vector quantization.", "This article presents a method to study large-scale conformational changes by combining electron microscopy (EM) single-particle image analysis and normal mode analysis (NMA). It is referred to as HEMNMA, which stands for hybrid electron microscopy normal mode analysis. NMA of a reference structure (atomic-resolution structure or EM volume) is used to predict possible motions that are then confronted with EM images within an automatic iterative elastic 3D-to-2D alignment procedure to identify actual motions in the imaged samples. HEMNMA can be used to extensively analyze the conformational changes and may be used in combination with classic discrete procedures. The identified conformations allow modeling of deformation pathways compatible with the experimental data. HEMNMA was tested with synthetic and experimental data sets of E. coli 70S ribosome, DNA polymerase Pol a and B subunit complex of the eukaryotic primosome, and tomato bushy stunt virus.", "In cryo-electron microscopy, the three-dimensional (3D) electric potentials of an ensemble of molecules are projected along arbitrary viewing directions to yield noisy two-dimensional images. The volume maps representing these potentials typically exhibit a great deal of structural variability, which is described by their 3D covariance matrix. Typically, this covariance matrix is approximately low rank and can be used to cluster the volumes or estimate the intrinsic geometry of the conformation space. We formulate the estimation of this covariance matrix as a linear inverse problem, yielding a consistent least-squares estimator. For @math images of size @math -by- @math pixels, we propose an algorithm for calculating this covariance estimator with computational complexity @math , where the condition number @math is empirically in the range 10--200. Its efficiency relies on the observation that the normal equations are equivalent to a deconvolution problem in six dimensions...." ] }
1907.01766
2953466159
We study the problem of allocating divisible bads (chores) among multiple agents with additive utilities, when money transfers are not allowed. The competitive rule is known to be the best mechanism for goods with additive utilities and was recently extended to chores by (2017). For both goods and chores, the rule produces Pareto optimal and envy-free allocations. In the case of goods, the outcome of the competitive rule can be easily computed. Competitive allocations solve the Eisenberg-Gale convex program; hence the outcome is unique and can be approximately found by standard gradient methods. An exact algorithm that runs in polynomial time in the number of agents and goods was given by Orlin. In the case of chores, the competitive rule does not solve any convex optimization problem; instead, competitive allocations correspond to local minima, local maxima, and saddle points of the Nash Social Welfare on the Pareto frontier of the set of feasible utilities. The rule becomes multivalued and none of the standard methods can be applied to compute its outcome. In this paper, we show that all the outcomes of the competitive rule for chores can be computed in strongly polynomial time if either the number of agents or the number of chores is fixed. The approach is based on a combination of three ideas: all consumption graphs of Pareto optimal allocations can be listed in polynomial time; for a given consumption graph, a candidate for a competitive allocation can be constructed via explicit formula; and a given allocation can be checked for being competitive using a maximum flow computation as in (2002). Our algorithm immediately gives an approximately-fair allocation of indivisible chores by the rounding technique of Barman and Krishnamurthy (2018).
The problem of finding polynomial time algorithms for objects defined non-constructively has been a major research focus in the algorithmic game theory literature and beyond @cite_63 . Positive results were obtained for important special cases, such as computing Nash equilibria in zero-sum games and competitive equilibria in exchange economies with additive utilities, while negative (hardness) results were established for the corresponding problems in general-sum games and in economies with non-additive utilities.
{ "cite_N": [ "@cite_63" ], "mid": [ "2169359757" ], "abstract": [ "We define several new complexity classes of search problems, ''between'' the classes FP and FNP. These new classes are contained, along with factoring, and the class PLS, in the class TFNP of search problems in FNP that always have a witness. A problem in each of these new classes is defined in terms of an implicitly given, exponentially large graph. The existence of the solution sought is established via a simple graph-theoretic argument with an inefficiently constructive proof; for example, PLS can be thought of as corresponding to the lemma ''every dag has a sink.'' The new classes, are based on lemmata such as ''every graph has an even number of odd-degree nodes.'' They contain several important problems for which no polynomial time algorithm is presently known, including the computational versions of Sperner's lemma, Brouwer's fixpoint theorem, Chevalley's theorem, and the Borsuk-Ulam theorem, the linear complementarity problem for P-matrices, finding a mixed equilibrium in a non-zero sum game, finding a second Hamilton circuit in a Hamiltonian cubic graph, a second Hamiltonian decomposition in a quartic graph, and others. Some of these problems are shown to be complete." ] }
1907.01766
2953466159
In particular, the search for algorithms for computing competitive equilibria has brought a flurry of efficient algorithms for finding equilibria in diverse market scenarios (see, e.g., primal-dual type algorithms in @cite_58 @cite_66 , network flow type algorithms in @cite_75 @cite_50 @cite_72 , convex programming formulations for Fisher markets and their extensions such as Eisenberg-Gale markets @cite_0 @cite_34 @cite_30 @cite_44 , auction-based algorithms in @cite_35 ), as well as computational hardness results (see, e.g., @cite_18 @cite_14 @cite_42 @cite_45 ).
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_18", "@cite_14", "@cite_42", "@cite_34", "@cite_0", "@cite_44", "@cite_72", "@cite_45", "@cite_50", "@cite_58", "@cite_75", "@cite_66" ], "mid": [ "2009809490", "1511958950", "", "2097048499", "", "1754990977", "", "1985717133", "2004578381", "2626895635", "2072713008", "2102523945", "2135820054", "1540301562" ], "abstract": [ "Eisenberg and Gale (1959) gave a convex program for computing market equilibrium for Fisher's model for linear utility functions, and Eisenberg (1961) generalized this to concave homogeneous functions of degree one. We further generalize to:1. Homothetic, quasi-concave utilities. This also helps extend Eisenberg's result to concave homogeneous functions of arbitrary degree.2. We introduce the notion of a trading cone which enables us to compute market equilibrium in the presence of economies of scale in production provided differential pricing is allowed. Applications to network pricing are provided.", "Utility functions satisfying gross substitutability have been studied extensively in the economics literature [1,11,12] and recently, the importance of this property has been recognized in the design of combinatorial polynomial time market equilibrium algorithms [8]. This naturally raises the following question: is it possible to design a combinatorial polynomial time algorithm for this general class of utility functions? We partially answer this question by giving an algorithm for separable, differentiable, concave utility functions satisfying gross substitutes. Our algorithm uses the auction based approach of [10].", "", "We prove that the problem of computing an Arrow-Debreu market equilibrium is PPAD-complete even when all traders use additively separable, piecewise-linear and concave utility functions. In fact, our proof shows that this market-equilibrium problem does not have a fully polynomial-time approximation scheme, unless every problem in PPAD is solvable in polynomial time.", "", "We consider exchange economies where the traders' preferences are expressed in terms of the extensively used constant elasticity of substitution (CES) utility functions. We show that for any such economy it is possible to say in polynomial time whether an equilibrium exists. We then describe a convex formulation of the equilibrium conditions, which leads to polynomial time algorithms for a wide range of the parameter defining the CES utility functions. This range includes instances that do not satisfy weak gross substitutability. As a byproduct of our work, we prove the uniqueness of equilibrium in an interesting setting where such a result was not known. The range for which we do not obtain polynomial-time algorithms coincides with the range for which the economies admit multiple disconnected equilibria.", "", "We provide the first polynomial time exact algorithm for computing an Arrow-Debreu market equilibrium for the case of linear utilities. Our algorithm is based on solving a convex program using the ellipsoid algorithm and simultaneous diophantine approximation. As a side result, we prove that the set of assignments at equilibrium is convex and the equilibrium prices themselves are log-convex. Our convex program is explicit and intuitive, which allows maximizing a concave function over the set of equilibria. On the practical side, Ye developed an interior point algorithm [Lecture Notes in Comput. Sci. 3521, Springer, New York, 2005, pp. 3-5] to find an equilibrium based on our convex program. 
We also derive separate combinatorial characterizations of equilibrium for Arrow-Debreu and Fisher cases. Our convex program can be extended for many nonlinear utilities and production models. Our paper also makes a powerful theorem (Theorem 6.4.1 in [M. Grotschel, L. Lovasz, and A. Schrijver, Geometric Algorithms and Combinatorial Optimization, 2nd ed., Springer-Verlag, Berlin, Heidelberg, 1993]) even more powerful (in Theorems 12 and 13) in the area of geometric algorithms and combinatorial optimization. The main idea in this generalization is to allow ellipsoids to contain not the whole convex region but a part of it. This theorem is of independent interest.", "We consider a nonlinear extension of the generalized network flow model, with the flow leaving an arc being an increasing concave function of the flow entering it, as proposed by Truemper and Shigeno. We give a polynomial time combinatorial algorithm for solving corresponding flow maximization problems, finding an @math -approximate solution in @math arithmetic operations and value oracle queries, where @math and @math are upper bounds on simple parameters. This also gives a new algorithm for linear generalized flows, an efficient, purely scaling variant of the Fat-Path algorithm by Goldberg, Plotkin and Tardos, not using any cycle cancellations. We show that this general convex programming model serves as a common framework for several market equilibrium problems, including the linear Fisher market model and its various extensions. Our result immediately provides combinatorial algorithms for various extensions of these market models. This includes nonsymmetric Arrow-Debreu Nash bargaining, settling an open question by Vazirani [4].", "Our first result shows membership in PPAD for the problem of computing approximate equilibria for an Arrow-Debreu exchange market for piecewise-linear concave (PLC) utility functions. As a corollary we also obtain membership in PPAD for Leontief utility functions. This settles an open question of Vazirani and Yannakakis (2011). Next we show FIXP-hardness of computing equilibria in Arrow-Debreu exchange markets under Leontief utility functions, and Arrow-Debreu markets under linear utility functions and Leontief production sets, thereby settling these open questions of Vazirani and Yannakakis (2011). As corollaries, we obtain FIXP-hardness for PLC utilities and for Arrow-Debreu markets under linear utility functions and polyhedral production sets. In all cases, as required under FIXP, the set of instances mapped onto will admit equilibria, i.e., will be \"yes\" instances. If all instances are under consideration, then in all cases we prove that the problem of deciding if a given instance admits an equilibrium is ETR-complete, where ETR is the class Existential Theory of Reals. As a consequence of the results stated above, and the fact that membership in FIXP has been established for PLC utilities, the entire computational difficulty of Arrow-Debreu markets under PLC utility functions lies in the Leontief utility subcase. This is perhaps the most unexpected aspect of our result, since Leontief utilities are meant for the case that goods are perfect complements, whereas PLC utilities are very general, capturing not only the cases when goods are complements and substitutes, but also arbitrary combinations of these and much more. 
Finally, we give a polynomial time algorithm for finding an equilibrium in Arrow-Debreu exchange markets under Leontief utility functions provided the number of agents is a constant. This settles part of an open problem of Devanur and Kannan (2008).", "A well-studied nonlinear extension of the minimum-cost flow problem is to minimize the objective ∑_{ij∈E} C_ij(f_ij) over feasible flows f, where on every arc ij of the network, C_ij is a convex function. We give a strongly polynomial algorithm for finding an exact optimal solution for a broad class of such problems. The key characteristic of this class is that an optimal solution can be computed exactly provided its support. This includes separable convex quadratic objectives and also certain market equilibria problems: Fisher's market with linear and with spending constraint utilities. We thereby give the first strongly polynomial algorithms for separable quadratic minimum-cost flows and for Fisher's market with spending constraint utilities, settling open questions posed e.g. in [15] and in [35], respectively. The running time is O(m^4 log m) for quadratic costs, O(n^4 + n^2(m + n log n) log n) for Fisher's markets with linear utilities and O(m n^3 + m^2(m + n log n) log m) for spending constraint utilities.", "Although the study of market equilibria has occupied center stage within mathematical economics for over a century, polynomial time algorithms for such questions have so far evaded researchers. We provide the first such algorithm for the linear version of a problem defined by Irving Fisher in 1891. Our algorithm is modeled after Kuhn's (1995) primal-dual algorithm for bipartite matching.", "We give the first strongly polynomial time algorithm for computing an equilibrium for the linear utilities case of Fisher's market model. We consider a problem with a set B of buyers and a set G of divisible goods. Each buyer i starts with an initial integral allocation e_i of money. The integral utility for buyer i of good j is U_ij. We first develop a weakly polynomial time algorithm that runs in O(n^4 log U_max + n^3 e_max) time, where n = |B| + |G|. We further modify the algorithm so that it runs in O(n^4 log n) time. These algorithms improve upon the previous best running time of O(n^8 log U_max + n^7 log e_max), due to [5].", "In this paper we study efficient algorithms for computing equilibrium price in the Fisher model for a class of nonlinear concave utility functions, the logarithmic utility functions. We derive a duality relation between buyers and sellers under such utility functions, and use it to design a polynomial time algorithm for calculating equilibrium price, for the special case when either the number of sellers or the number of buyers is bounded by a constant." ] }
1907.01766
2953466159
The polynomial time algorithms in these works are designed for economies that satisfy implicit or explicit convexity assumptions. For example, in the case of Fisher markets, the competitive equilibrium solves the Eisenberg-Gale convex program @cite_67 for a large class of utilities, maximizing the Nash product (i.e., the geometric mean of the utilities weighted by the budgets of the agents). Moreover, the equilibrium is unique, robust @cite_10 (i.e., small errors in the observation of the market parameters do not change the competitive allocation by much), and admits polynomial time approximation algorithms based on gradient descent methods as well as exact algorithms (see, e.g., Chapters 5 and 6 in @cite_12 ). In contrast, in the case of chores, there is a multiplicity of equilibria and no robustness guarantee: the set of equilibria admits no continuous selections @cite_39 .
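To make the Eisenberg-Gale route concrete, here is a minimal sketch of the convex program for a linear Fisher market with divisible goods, written with the cvxpy modeling library; the toy numbers and variable names are illustrative assumptions and are not taken from the cited works.

import cvxpy as cp
import numpy as np

# Toy linear Fisher market: 2 agents, 3 divisible goods (illustrative numbers).
V = np.array([[3.0, 1.0, 2.0],   # V[i, j] = value of agent i for good j
              [1.0, 4.0, 1.0]])
b = np.array([1.0, 1.0])         # budgets (equal incomes, as in CEEI)

X = cp.Variable(V.shape, nonneg=True)           # fractional allocation X[i, j]
utilities = cp.sum(cp.multiply(V, X), axis=1)   # u_i = sum_j V[i, j] * X[i, j]
# Eisenberg-Gale objective: budget-weighted sum of log utilities,
# i.e. the (weighted) Nash product in logarithmic form.
objective = cp.Maximize(cp.sum(cp.multiply(b, cp.log(utilities))))
constraints = [cp.sum(X, axis=0) <= 1]          # unit supply of every good
problem = cp.Problem(objective, constraints)
problem.solve()

print("allocation:", X.value)
print("prices (duals of the supply constraints):", constraints[0].dual_value)

The dual variables of the supply constraints act as market-clearing prices; for chores, as discussed above, no analogous single convex program is available.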
{ "cite_N": [ "@cite_67", "@cite_10", "@cite_12", "@cite_39" ], "mid": [ "1980938853", "1562593567", "", "2586913291" ], "abstract": [ "Abstract : Under the pari-mutuel system of betting on horse races the final track's odds are in some sense a consensus of the 'subjective odds' of the individual bettors weighted by the amounts of their bets. The properties which this consensus must possess and prove that there always exists a unique set of odds having the required properties are formulated. (Author)", "Continuity of the mapping from initial endowments and utilities to equilibria is an essential property for a desirable model of an economy - without continuity, small errors in the observation of parameters of the economy may lead to entirely different predicted equilibria. We show that for the linear case of Fisher's market model, the (unique) vector of equilibrium prices, p = p(m, U) is a continuous function of the initial amounts of money held by the agents, m, and their utility functions, U. Furthermore, the correspondence X(m, U), giving the set of equilibrium allocations for any specified m and U, is upper hemicontinuous, but not lower hemicontinuous. However, for a fixed U, this correspondence is lower hemicontinuous in m.", "", "A mixed manna contains goods (that everyone likes), bads (that everyone dislikes), as well as items that are goods to some agents, but bads or satiated to others. If all items are goods and utility functions are homothetic, concave (and monotone), the Competitive Equilibrium with Equal Incomes maximizes the Nash product of utilities: hence it is welfarist (determined utility-wise by the feasible set of pro les), single-valued and easy to compute. We generalize the Gale-Eisenberg Theorem to a mixed manna. The Competitive division is still welfarist and related to the product of utilities or disutilities. If the zero utility pro le (before any manna) is Pareto dominated, the competitive pro le is unique and still maximizes the product of utilities. If the zero pro le is unfeasible, the competitive pro les are the critical points of the product of disutilities on the eciency frontier, and multiplicity is pervasive. In particular the task of dividing a mixed manna is either good news for everyone, or bad news for everyone. We re ne our results in the practically important case of linear preferences, where the axiomatic comparison between the division of goods and that of bads is especially sharp. When we divide goods and the manna improves, everyone weakly bene ts under the competitive rule; but no reasonable rule to divide bads can be similarly Resource Monotonic. Also, the much larger set of Non Envious and Ecient divisions of bads can be disconnected so that it will admit no continuous selection." ] }
1907.01766
2953466159
Dynamic processes in markets have also been studied, such as tatonnement (see, e.g., @cite_1 for a general class of markets containing Eisenberg-Gale markets), and proportional response dynamics in Fisher markets @cite_7 @cite_61 @cite_13 and production markets @cite_5 .
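For intuition, a minimal sketch of proportional response dynamics in a linear Fisher market, where each agent re-bids its budget on goods in proportion to the utility currently derived from each good; the instance and names are illustrative, and the update follows the generic rule for linear utilities rather than any specific cited variant.

import numpy as np

# Toy linear Fisher market (illustrative numbers).
V = np.array([[3.0, 1.0, 2.0],   # V[i, j] = value of agent i for good j
              [1.0, 4.0, 1.0]])
budgets = np.array([1.0, 1.0])
n_agents, n_goods = V.shape

# Start by splitting each budget uniformly over the goods.
bids = np.tile(budgets[:, None] / n_goods, (1, n_goods))

for _ in range(1000):
    prices = bids.sum(axis=0)            # p_j = sum_i bids[i, j]
    alloc = bids / prices                # x_ij = bids[i, j] / p_j
    utils = (V * alloc).sum(axis=1)      # u_i = sum_j V[i, j] * x_ij
    # Proportional response: re-bid the budget in proportion to the utility
    # currently derived from each good.
    bids = budgets[:, None] * (V * alloc) / utils[:, None]

prices = bids.sum(axis=0)
print("approximate equilibrium prices:", prices)
print("approximate allocation:", bids / prices)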
{ "cite_N": [ "@cite_61", "@cite_7", "@cite_1", "@cite_5", "@cite_13" ], "mid": [ "2154648819", "2109916679", "2150526819", "2788306682", "2090459604" ], "abstract": [ "Designing distributed algorithms that converge quickly to an equilibrium is one of the foremost research goals in algorithmic game theory, and convex programs have played a crucial role in the design of algorithms for Fisher markets. In this paper we shed new light on both aspects for Fisher markets with linear and spending constraint utilities. We show fast convergence of the Proportional Response dynamics recently introduced by Wu and Zhang. The convergence is obtained from a new perspective: we show that the Proportional Response dynamics is equivalent to a gradient descent algorithm (with respect to a Bregman divergence instead of euclidean distance) on a convex program that captures the equilibria for linear utilities. We further show that the convex program program easily extends to the case of spending constraint utilities, thus resolving an open question raised by Vazirani. This also gives a way to extend the Proportional Response dynamics to spending constraint utilties. We also prove a technical result that is interesting in its own right: that the gradient descent algorithm based on a Bregman divergence converges with rate O(1 t) under a condition that is weaker than having Lipschitz continuous gradient (which is the usual assumption in the optimization literature for obtaining the same rate).", "In this paper, we show that the proportional response dynamics, a utility based distributed dynamics, converges to the market equilibrium in the Fisher market with constant elasticity of substitution (CES) utility functions. By the proportional response dynamics, each buyer allocates his budget proportional to the utility he receives from each good in the previous time period. Unlike the tâtonnement process and its variants, the proportional response dynamics is a large step discrete dynamics, and the buyers do not solve any optimization problem at each step. In addition, the goods are always cleared and assigned to the buyers proportional to their bids at each step. Despite its simplicity, the dynamics converges fast for strictly concave CES utility functions, matching the best upperbound of computing the market equilibrium via solving a global convex optimization problem.", "Tatonnement is a simple and natural rule for updating prices in Exchange (Arrow-Debreu) markets. In this paper we define a class of markets for which tatonnement is equivalent to gradient descent. This is the class of markets for which there is a convex potential function whose gradient is always equal to the negative of the excess demand and we call it Convex Potential Function (CPF) markets. We show the following results. CPF markets contain the class of Eisenberg Gale (EG) markets, defined previously by Jain and Vazirani. The subclass of CPF markets for which the demand is a differentiable function contains exactly those markets whose demand function has a symmetric negative semi-definite Jacobian. We define a family of continuous versions of tatonnement based on gradient descent using a Bregman divergence. As we show, all processes in this family converge to an equilibrium for any CPF market. This is analogous to the classic result for markets satisfying the Weak Gross Substitutes property. 
A discrete version of tatonnement converges toward the equilibrium for the following markets of complementary goods; its convergence rate for these settings is analyzed using a common potential function. Fisher markets in which all buyers have Leontief utilities. The tatonnement process reduces the distance to the equilibrium, as measured by the potential function, to an ε fraction of its initial value in O(1/ε) rounds of price updates. Fisher markets in which all buyers have complementary CES utilities. Here, the distance to the equilibrium is reduced to an ε fraction of its initial value in O(log(1/ε)) rounds of price updates. This shows that tatonnement converges for the entire range of Fisher markets when buyers have complementary CES utilities, in contrast to prior work, which could analyze only the substitutes range, together with a small portion of the complementary range.", "We study a simple variant of the von Neumann model of an expanding economy, in which multiple producers produce goods according to their production function. The players trade their goods at the market and then use the bundles acquired as inputs for the production in the next round. We show that a simple decentralized dynamic, where players update their bids proportionally to how useful the investments were in the past round, leads to growth of the economy in the long term (whenever growth is possible) but also creates unbounded inequality, i.e. very rich and very poor players emerge. We analyze several other phenomena, such as how the relation of a player with others influences its development and the Gini index of the system.", "One of the main reasons of the recent success of peer to peer (P2P) file sharing systems such as BitTorrent is their built-in tit-for-tat mechanism. In this paper, we model the bandwidth allocation in a P2P system as an exchange economy and study a tit-for-tat dynamics, namely the proportional response dynamics, in this economy. In a proportional response dynamics each player distributes its good to its neighbors proportional to the utility it received from them in the last period. We show that this dynamics not only converges but converges to a market equilibrium, a standard economic characterization of efficient exchanges in a competitive market. In addition, for some classes of utility functions we consider, it converges much faster than the classical tâtonnement process and any existing algorithms for computing market equilibria. As a part of our proof we study the double normalization of a matrix, an operation that linearly scales the rows of a matrix so that each row sums to a prescribed positive number, followed by a similar scaling of the columns. We show that the iterative double normalization process of any non-negative matrix always converges. This complements the previous studies in matrix scaling that has focused on the convergence condition of the process when the row and column normalizations are considered as separate steps." ] }
1907.01766
2953466159
None of the methods mentioned above are applicable when the competitive equilibria form a disconnected set, that is, when the competitive rule becomes multivalued (as in the case of chores). This situation corresponds to constrained economies, such as when preferences are satiated or there are constraints on individual consumption (note that an economy with chores can be reduced to a constrained economy with goods; see @cite_39 ). In the economic literature it is known that for such economies the competitive correspondence may become multivalued (see, e.g., @cite_64 ), which was observed to be problematic from the point of view of finding competitive equilibria; for example, in @cite_72 polynomial time algorithms are not obtained precisely in those cases where economies admit multiple disconnected equilibria.
{ "cite_N": [ "@cite_72", "@cite_64", "@cite_39" ], "mid": [ "2004578381", "1600146083", "2586913291" ], "abstract": [ "We consider a nonlinear extension of the generalized network flow model, with the flow leaving an arc being an increasing concave function of the flow entering it, as proposed by Truemper and Shigeno. We give a polynomial time combinatorial algorithm for solving corresponding flow maximization problems, finding an @math -approximate solution in @math arithmetic operations and value oracle queries, where @math and @math are upper bounds on simple parameters. This also gives a new algorithm for linear generalized flows, an efficient, purely scaling variant of the Fat-Path algorithm by Goldberg, Plot kin and Tardos, not using any cycle cancellations. We show that this general convex programming model serves as a common framework for several market equilibrium problems, including the linear Fisher market model and its various extensions. Our result immediately provides combinatorial algorithms for various extensions of these market models. This includes nonsymmetric Arrow-Debreu Nash bargaining, settling an open question by Vazirani [4].", "For agents with identical homothetic preferences (but possibly different endowments), aggregate excess demand can be derived from maximization of a utility function of a representative agent whose endowment is the sum of the individual's endowments. Such an economy has a unique equilibrium. In this paper, a metric p is defined on the set P of preference relations representable by CES utility functions. It is then shown that there are agentswhose preference relations in P are arbitrarily close to one another in t he metric p, and there are endowments for these agents, such that the resulting exchange economy has a multiple Walrasian equilibria.", "A mixed manna contains goods (that everyone likes), bads (that everyone dislikes), as well as items that are goods to some agents, but bads or satiated to others. If all items are goods and utility functions are homothetic, concave (and monotone), the Competitive Equilibrium with Equal Incomes maximizes the Nash product of utilities: hence it is welfarist (determined utility-wise by the feasible set of pro les), single-valued and easy to compute. We generalize the Gale-Eisenberg Theorem to a mixed manna. The Competitive division is still welfarist and related to the product of utilities or disutilities. If the zero utility pro le (before any manna) is Pareto dominated, the competitive pro le is unique and still maximizes the product of utilities. If the zero pro le is unfeasible, the competitive pro les are the critical points of the product of disutilities on the eciency frontier, and multiplicity is pervasive. In particular the task of dividing a mixed manna is either good news for everyone, or bad news for everyone. We re ne our results in the practically important case of linear preferences, where the axiomatic comparison between the division of goods and that of bads is especially sharp. When we divide goods and the manna improves, everyone weakly bene ts under the competitive rule; but no reasonable rule to divide bads can be similarly Resource Monotonic. Also, the much larger set of Non Envious and Ecient divisions of bads can be disconnected so that it will admit no continuous selection." ] }
1907.01766
2953466159
There are very few examples of efficient algorithms for computing competitive equilibria for non-convex economies. In @cite_59 , a polynomial time algorithm is given for markets with covering constraints, where the utilities are satiated but the equilibria form a connected, yet non-convex set. The work of @cite_2 gives a polynomial time algorithm, based on the cell enumeration technique, for computing competitive equilibria when either the number of agents or the number of goods is fixed. The work of @cite_36 extends the approach of @cite_2 to the fair assignment problem of @cite_26 : there the utilities are piecewise-linear concave functions, but are neither separable nor monotone, and do not satisfy gross substitutability; their study also gives a polynomial time algorithm when either the number of agents or the number of goods is fixed.
{ "cite_N": [ "@cite_36", "@cite_26", "@cite_59", "@cite_2" ], "mid": [ "2604998889", "2008751404", "2606123014", "2095954749" ], "abstract": [ "Market equilibria of matching markets offer an intuitive and fair solution for matching problems without money with agents who have preferences over the items. Such a matching market can be viewed as a variation of Fisher market, albeit with rather peculiar preferences of agents. These preferences can be described by piece-wise linear concave (PLC) functions, which however, are not separable (due to each agent only asking for one item), are not monotone, and do not satisfy the gross substitute property-- increase in price of an item can result in increased demand for the item. Devanur and Kannan in FOCS 08 showed that market clearing prices can be found in polynomial time in markets with fixed number of items and general PLC preferences. They also consider Fischer markets with fixed number of agents (instead of fixed number of items), and give a polynomial time algorithm for this case if preferences are separable functions of the items, in addition to being PLC functions. Our main result is a polynomial time algorithm for finding market clearing prices in matching markets with fixed number of different agent preferences, despite that the utility corresponding to matching markets is not separable. We also give a simpler algorithm for the case of matching markets with fixed number of different items.", "In a variety of contexts, individuals must be allocated to positions with limited capacities. Legislators must be assigned to committees, college students to dormitories, and urban homesteaders to dwellings. (A general class of fair division problems would have the positions represent goods.) This paper examines the general problem of achieving efficient allocations when individuals' preferences are unknown and where (as with a growing number of nonmarket allocation schemes) there is no facilitating external medium of exchange such as money. An implicit market procedure is developed that elicits honest preferences, that assigns individuals efficiently, and that is adaptable to a variety of distributional objectives.", "We introduce a new class of combinatorial markets in which agents have covering constraints over resources required and are interested in delay minimization. Our market model is applicable to several settings including scheduling and communicating over a network. This model is quite different from the traditional models, to the extent that neither do the classical equilibrium existence results seem to apply to it nor do any of the efficient algorithmic techniques developed to compute equilibria. In particular, our model does not satisfy the condition of non-satiation, which is used critically to show the existence of equilibria in traditional market models and we observe that our set of equilibrium prices could be a connected, non-convex set. We give a proof of the existence of equilibria and a polynomial time algorithm for finding one, drawing heavily on techniques from LP duality and submodular minimization. Finally, we show that our model inherits many of the fairness properties of traditional equilibrium models as well as new models, such as CEEI.", "We consider markets in the classical Arrow-Debreu model. There are n agents and m goods. Each buyer has a concave utility function (of the bundle of goods he she buys) and an initial bundle. 
At an \"equilibrium\" set of prices for goods, if each individual buyer separately exchanges the initial bundle for an optimal bundle at the set prices, the market clears, i.e., all goods are exactly consumed. Classical theorems guarantee the existence of equilibria, but computing them has been the subject of much recent research. In the related area of Multi-Agent Games, much attention has been paid to the complexity as well as algorithms. While most general problems are hard, polynomial time algorithms have been developed for restricted classes of games, when one assumes the number of strategies is constant. For the Market Equilibrium problem, several important special cases of utility functions have been tackled. Here we begin a program for this problem similar to that for multi-agent games, where general utilities are considered. We begin by showing that if the utilities are separable piece-wise linear concave (PLC) functions, and the number of goods (or alternatively the number of buyers) is constant, then we can compute an exact equilibrium in polynomial time. Our technique for the constant number of goods is to decompose the space of price vectors into cells using certain hyperplanes, so that in each cell, each buyer's threshold marginal utility is known. Still, one needs to solve a linear optimization problem in each cell. We then show the main result - that for general (non-separable) PLC utilities, an exact equilibrium can be found in polynomial time provided the number of goods is constant. The starting point of the algorithm is a \"cell-decomposition\" of the space of price vectors using polynomial surfaces (instead of hyperplanes). We use results from computational algebraic geometry to bound the number of such cells. For solving the problem inside each cell, we introduce and use a novel LP-duality based method. We note that if the number of buyers and agents both can vary, the problem is PPAD hard even for the very special case of PLC utilities - namely Leontief utilities." ] }
1907.01766
2953466159
The literature on fair division of indivisible goods has studied several fairness notions, such as envy-freeness up to one good (EF1) @cite_25 , proportionality up to one good (Prop1), envy-freeness up to any good (EFX), max-min fair share @cite_40 , and (approximate) competitive equilibrium. Envy-freeness up to one good roughly means that no agent @math envies the bundle of another agent @math after the best item has been dropped from @math 's bundle. Proportionality up to one good is similarly defined. These two fairness notions can be miraculously obtained by maximizing the Nash social welfare, which also guarantees Pareto optimality @cite_33 . It is open whether or not EFX allocations always exist (see, e.g., @cite_22 ).
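As a concrete reading of EF1 for additive valuations, the following small check verifies, for every ordered pair of agents, that any envy disappears once the item the envious agent values most is removed from the other agent's bundle; the instance and function name are illustrative.

def is_ef1(valuations, bundles):
    # valuations[i][j]: additive value of agent i for item j.
    # bundles[i]: list of item indices held by agent i.
    n = len(bundles)
    for i in range(n):
        own = sum(valuations[i][g] for g in bundles[i])
        for j in range(n):
            if i == j:
                continue
            other = sum(valuations[i][g] for g in bundles[j])
            # Envy is allowed only if it vanishes after dropping from j's
            # bundle the item that i values most (EF1 with additive values).
            best_single = max((valuations[i][g] for g in bundles[j]), default=0)
            if own < other - best_single:
                return False
    return True

# Toy instance: 2 agents, 4 indivisible goods (illustrative numbers).
vals = [[5, 1, 3, 2],
        [2, 4, 1, 6]]
print(is_ef1(vals, [[0, 2], [1, 3]]))   # True for this allocation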
{ "cite_N": [ "@cite_40", "@cite_25", "@cite_33", "@cite_22" ], "mid": [ "2150409561", "2121240598", "", "2964250178" ], "abstract": [ "This paper proposes a new mechanism for combinatorial assignment—for example, assigning schedules of courses to students—based on an approximation to competitive equilibrium from equal incomes (CEEI) in which incomes are unequal but arbitrarily close together. The main technical result is an existence theorem for approximate CEEI. The mechanism is approximately efficient, satisfies two new criteria of outcome fairness, and is strategyproof in large markets. Its performance is explored on real data, and it is compared to alternatives from theory and practice: all other known mechanisms are either unfair ex post or manipulable even in large markets, and most are both manipulable and unfair.", "We study the problem of fairly allocating a set of indivisible goods to a set of people from an algorithmic perspective. fair division has been a central topic in the economic literature and several concepts of fairness have been suggested. The criterion that we focus on is envy-freeness. In our model, a monotone utility function is associated with every player specifying the value of each subset of the goods for the player. An allocation is envy-free if every player prefers her own share than the share of any other player. When the goods are divisible, envy-free allocations always exist. In the presence of indivisibilities, we show that there exist allocations in which the envy is bounded by the maximum marginal utility, and present a simple algorithm for computing such allocations. We then look at the optimization problem of finding an allocation with minimum possible envy. In the general case the problem is not solvable or approximable in polynomial time unless P = NP. We consider natural special cases (e.g.additive utilities) which are closely related to a class of job scheduling problems. Approximation algorithms as well as inapproximability results are obtained. Finally we investigate the problem of designing truthful mechanisms for producing allocations with bounded envy.", "", "The goal of fair division is to distribute resources among competing players in a \"fair\" way. Envy-freeness is the most extensively studied fairness notion in fair division. Envy-free allocations do not always exist with indivisible goods, motivating the study of relaxed versions of envy-freeness. We study the envy-freeness up to any good (EFX) property, which states that no player prefers the bundle of another player following the removal of any single good, and prove the first general results about this property. We use the leximin solution to show existence of EFX allocations in several contexts, sometimes in conjunction with Pareto optimality. For two players with valuations obeying a mild assumption, one of these results provides stronger guarantees than the currently deployed algorithm on Spliddit, a popular fair division website. Unfortunately, finding the leximin solution can require exponential time. We show that this is necessary by proving an exponential lower bound on the number of value queries needed to identify an EFX allocation, even for two players with identical valuations. We consider both additive and more general valuations, and our work suggests that there is a rich landscape of problems to explore in the fair division of indivisible goods with different classes of player valuations." ] }
1907.01766
2953466159
The max-min fair share is a fairness notion inspired by cake-cutting protocols; it requires that each agent gets a value at least as high as the one he can guarantee by first preparing @math bundles and letting the other players choose the best @math of these bundles. This optimization problem induces a max-min value @math for each player, and the question is whether there exists an allocation in which each agent has utility at least @math . While such allocations may not exist @cite_23 , approximations are possible; in particular, there always exists an allocation in which all agents get at least two thirds of their max-min value @cite_23 , and such an allocation can be computed in polynomial time @cite_17 .
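To make the notion concrete, the following brute-force sketch (written for this summary, not taken from the cited papers) computes a single agent's max-min share under additive valuations by enumerating every way of splitting the items into n bundles; this is only feasible for very small instances, which is unavoidable in general since computing the max-min value is computationally hard.

```python
from itertools import product

def maxmin_share(values, n):
    """Max-min share of one agent with additive item values `values` when the
    items are split into n bundles: the largest attainable value of the worst
    bundle over all partitions (brute force, n**len(values) assignments)."""
    best = float("-inf")
    for assignment in product(range(n), repeat=len(values)):
        bundles = [0] * n
        for item, b in enumerate(assignment):
            bundles[b] += values[item]
        best = max(best, min(bundles))
    return best

# One agent's additive values over 5 goods, to be split among 3 agents.
print(maxmin_share([7, 5, 4, 3, 1], n=3))  # -> 6, e.g. bundles {7}, {5,1}, {4,3}
```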
{ "cite_N": [ "@cite_23", "@cite_17" ], "mid": [ "2079854492", "1542025417" ], "abstract": [ "We consider the problem of fairly allocating indivisible goods, focusing on a recently-introduced notion of fairness called maximin share guarantee: Each player's value for his allocation should be at least as high as what he can guarantee by dividing the items into as many bundles as there are players and receiving his least desirable bundle. Assuming additive valuation functions, we show that such allocations may not exist, but allocations guaranteeing each player 2 3 of the above value always exist, and can be computed in polynomial time when the number of players is constant. These theoretical results have direct practical implications.", "We study the problem of computing maximin share allocations, a recently introduced fairness notion. Given a set of n agents and a set of goods, the maximin share of an agent is the best she can guarantee to herself, if she is allowed to partition the goods in any way she prefers, into n bundles, and then receive her least desirable bundle. The objective then is to find a partition, where each agent is guaranteed her maximin share. Such allocations do not always exist, hence we resort to approximation algorithms. Our main result is a 2 3-approximation that runs in polynomial time for any number of agents and goods. This improves upon the algorithm of Procaccia and Wang (2014), which is also a 2 3-approximation but runs in polynomial time only for a constant number of agents. To achieve this, we redesign certain parts of the algorithm in Procaccia and Wang (2014), exploiting the construction of carefully selected matchings in a bipartite graph representation of the problem. Furthermore, motivated by the apparent difficulty in establishing lower bounds, we undertake a probabilistic analysis. We prove that in randomly generated instances, maximin share allocations exist with high probability. This can be seen as a justification of previously reported experimental evidence. Finally, we provide further positive results for two special cases arising from previous works. The first is the intriguing case of three agents, where we provide an improved 7 8-approximation. The second case is when all item values belong to 0, 1, 2 , where we obtain an exact algorithm." ] }
1907.01766
2953466159
We study the problem of allocating divisible bads (chores) among multiple agents with additive utilities, when money transfers are not allowed. The competitive rule is known to be the best mechanism for goods with additive utilities and was recently extended to chores by (2017). For both goods and chores, the rule produces Pareto optimal and envy-free allocations. In the case of goods, the outcome of the competitive rule can be easily computed. Competitive allocations solve the Eisenberg-Gale convex program; hence the outcome is unique and can be approximately found by standard gradient methods. An exact algorithm that runs in polynomial time in the number of agents and goods was given by Orlin. In the case of chores, the competitive rule does not solve any convex optimization problem; instead, competitive allocations correspond to local minima, local maxima, and saddle points of the Nash Social Welfare on the Pareto frontier of the set of feasible utilities. The rule becomes multivalued and none of the standard methods can be applied to compute its outcome. In this paper, we show that all the outcomes of the competitive rule for chores can be computed in strongly polynomial time if either the number of agents or the number of chores is fixed. The approach is based on a combination of three ideas: all consumption graphs of Pareto optimal allocations can be listed in polynomial time; for a given consumption graph, a candidate for a competitive allocation can be constructed via explicit formula; and a given allocation can be checked for being competitive using a maximum flow computation as in (2002). Our algorithm immediately gives an approximately-fair allocation of indivisible chores by the rounding technique of Barman and Krishnamurthy (2018).
@cite_71 study the fair allocation of multiple indivisible chores using the max-min share solution concept, showing that such allocations do not always exist and that computing one (if it exists) is strongly NP-hard; these findings are complemented by a polynomial-time 2-approximation algorithm. @cite_9 consider the problem of fairly allocating a mixture of goods and chores and design several algorithms for finding fair (but not necessarily Pareto optimal) allocations in this setting. @cite_46 consider mechanisms robust to strategic manipulations.
{ "cite_N": [ "@cite_46", "@cite_9", "@cite_71" ], "mid": [ "2946422814", "2883410524", "2593819652" ], "abstract": [ "We initiate the work on fair and strategyproof allocation of indivisible chores. The fairness concept we consider in this paper is maxmin share (MMS) fairness. We consider three previously studied models of information elicited from the agents: the ordinal model, the cardinal model, and the public ranking model in which the ordinal preferences are publicly known. We present both positive and negative results on the level of MMS approximation that can be guaranteed if we require the algorithm to be strategyproof. Our results uncover some interesting contrasts between the approximation ratios achieved for chores versus goods.", "We consider the problem of fairly dividing a set of items. Much of the fair division literature assumes that the items are goods' i.e., they yield positive utility for the agents. There is also some work where the items are chores' that yield negative utility for the agents. In this paper, we consider a more general scenario where an agent may have negative or positive utility for each item. This framework captures, e.g., fair task assignment, where agents can have both positive and negative utilities for each task. We show that whereas some of the positive axiomatic and computational results extend to this more general setting, others do not. We present several new and efficient algorithms for finding fair allocations in this general setting. We also point out several gaps in the literature regarding the existence of allocations satisfying certain fairness and efficiency properties and further study the complexity of computing such allocations.", "We consider Max-min Share (MmS) fair allocations of indivisible chores (items with negative utilities). We show that allocation of chores and classical allocation of goods (items with positive utilities) have some fundamental connections but also differences which prevent a straightforward application of algorithms for goods in the chores setting and viceversa. We prove that an MmS allocation does not need to exist for chores and computing an MmS allocation - if it exists - is strongly NP-hard. In view of these non-existence and complexity results, we present a polynomial-time 2-approximation algorithm for MmS fairness for chores. We then introduce a new fairness concept called optimal MmS that represents the best possible allocation in terms of MmS that is guaranteed to exist. We use connections to parallel machine scheduling to give (1) a polynomial-time approximation scheme for computing an optimal MmS allocation when the number of agents is fixed and (2) an effective and efficient heuristic with an ex-post worst-case analysis." ] }
1907.01766
2953466159
We study the problem of allocating divisible bads (chores) among multiple agents with additive utilities, when money transfers are not allowed. The competitive rule is known to be the best mechanism for goods with additive utilities and was recently extended to chores by (2017). For both goods and chores, the rule produces Pareto optimal and envy-free allocations. In the case of goods, the outcome of the competitive rule can be easily computed. Competitive allocations solve the Eisenberg-Gale convex program; hence the outcome is unique and can be approximately found by standard gradient methods. An exact algorithm that runs in polynomial time in the number of agents and goods was given by Orlin. In the case of chores, the competitive rule does not solve any convex optimization problem; instead, competitive allocations correspond to local minima, local maxima, and saddle points of the Nash Social Welfare on the Pareto frontier of the set of feasible utilities. The rule becomes multivalued and none of the standard methods can be applied to compute its outcome. In this paper, we show that all the outcomes of the competitive rule for chores can be computed in strongly polynomial time if either the number of agents or the number of chores is fixed. The approach is based on a combination of three ideas: all consumption graphs of Pareto optimal allocations can be listed in polynomial time; for a given consumption graph, a candidate for a competitive allocation can be constructed via explicit formula; and a given allocation can be checked for being competitive using a maximum flow computation as in (2002). Our algorithm immediately gives an approximately-fair allocation of indivisible chores by the rounding technique of Barman and Krishnamurthy (2018).
Finally, the competitive rule and various relaxations of it (such as those obtained by removing the budget-clearing requirement, allowing item bundling, or using randomization) can also be used to allocate indivisible goods. These have been studied for various classes of utilities, from the point of view of both the existence of fair solutions and their computation, in @cite_40 @cite_55 @cite_51 @cite_15 @cite_74 @cite_16 @cite_3 . Closest to ours is the work of @cite_16 , which considers Fisher markets with indivisible goods and shows how to compute an allocation that is Prop1 and Pareto optimal in strongly polynomial time. We build on these results to obtain a theorem for chores.
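Since Prop1 (proportionality up to one item) plays a central role here, a small checker may help. The sketch below is ours and uses a common additive-cost definition for chores (each agent, after ignoring her single costliest assigned chore, should bear at most her proportional share); it may differ in minor details from the exact definition used in the works above.

```python
def is_prop1_chores(costs, allocation):
    """Check proportionality up to one chore (Prop1) under additive costs.

    costs[i][j]  : disutility (cost) of chore j for agent i
    allocation[i]: list of chore indices assigned to agent i
    Prop1 here: for every agent, dropping her single costliest assigned chore
    brings her remaining cost down to at most her proportional share.
    """
    n = len(costs)
    for i, bundle in enumerate(allocation):
        share = sum(costs[i]) / n                     # proportional share of agent i
        my_cost = sum(costs[i][j] for j in bundle)
        heaviest = max((costs[i][j] for j in bundle), default=0)
        if my_cost - heaviest > share + 1e-9:         # small numeric slack
            return False
    return True

# Two agents, three chores (rows: agents, columns: chores).
costs = [[4, 1, 3],
         [2, 2, 2]]
print(is_prop1_chores(costs, [[0], [1, 2]]))  # True
```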
{ "cite_N": [ "@cite_55", "@cite_3", "@cite_40", "@cite_74", "@cite_15", "@cite_16", "@cite_51" ], "mid": [ "2021361030", "2909121337", "2150409561", "2963668580", "2027473833", "2901391097", "2158373208" ], "abstract": [ "We study a combinatorial market design problem, where a collection of indivisible objects is to be priced and sold to potential buyers subject to equilibrium constraints. The classic solution concept for such problems is Walrasian Equilibrium (WE), which provides a simple and transparent pricing structure that achieves optimal social welfare. The main weakness of the WE notion is that it exists only in very restrictive cases. To overcome this limitation, we introduce the notion of a Combinatorial Walrasian equilibium (CWE), a natural relaxation of WE. The difference between a CWE and a (non-combinatorial) WE is that the seller can package the items into indivisible bundles prior to sale, and the market does not necessarily clear. We show that every valuation profile admits a CWE that obtains at least half of the optimal (unconstrained) social welfare. Moreover, we devise a poly-time algorithm that, given an arbitrary allocation X, computes a CWE that achieves at least half of the welfare of X. Thus, the economic problem of finding a CWE with high social welfare reduces to the algorithmic problem of social-welfare approximation. In addition, we show that every valuation profile admits a CWE that extracts a logarithmic fraction of the optimal welfare as revenue. Finally, these results are complemented by strong lower bounds when the seller is restricted to using item prices only, which motivates the use of bundles. The strength of our results derives partly from their generality --- our results hold for arbitrary valuations that may exhibit complex combinations of substitutes and complements.", "Two food banks catering to populations of different sizes with different needs must divide among themselves a donation of food items. What constitutes a \"fair\" allocation of the items among them? Competitive equilibrium from equal incomes (CEEI) is a classic solution to the problem of fair and efficient allocation of goods among equally entitled agents [Foley 1967, Varian 1974]. Every agent (foodbank) receives an equal endowment of artificial currency with which to \"purchase\" bundles of goods (food items). Prices for the goods are set high enough such that the agents can simultaneously get their favorite within-budget bundle, and low enough such that all goods are allocated (no waste). A CEEI satisfies mathematical notions of fairness like fair-share, and also has built-in transparency -- prices can be published so the agents can verify they're being treated equally. However, a CEEI is not guaranteed to exist when the items are indivisible. We study competitive equilibrium from generic incomes (CEGI), which is based on the idea of slightly perturbed endowments, and enjoys similar fairness, efficiency and transparency properties as CEEI. We show that when the two agents have almost equal endowments and additive preferences for the items, a CEGI always exists. We then consider agents who are a priori non-equal (like different-sized foodbanks); we formulate a new notion of fair allocation among non-equals satisfied by CEGI, and show existence in cases of interest (like when the agents have identical preferences). Experiments on simulated and Spliddit data (a popular fair division website) indicate more general existence. 
Our results open opportunities for future research on fairness through generic endowments, and on fair treatment of non-equals.", "This paper proposes a new mechanism for combinatorial assignment—for example, assigning schedules of courses to students—based on an approximation to competitive equilibrium from equal incomes (CEEI) in which incomes are unequal but arbitrarily close together. The main technical result is an existence theorem for approximate CEEI. The mechanism is approximately efficient, satisfies two new criteria of outcome fairness, and is strategyproof in large markets. Its performance is explored on real data, and it is compared to alternatives from theory and practice: all other known mechanisms are either unfair ex post or manipulable even in large markets, and most are both manipulable and unfair.", "Single minded agents have strict preferences, in which a bundle is acceptable only if it meets a certain demand. Such preferences arise naturally in scenarios such as allocating computational resources among users, where the goal is to fairly serve as many requests as possible. In this paper we study the fair division problem for such agents, which is complex due to discontinuity and complementarities of preferences. Our solution concept--the competitive allocation from equal incomes (CAEI)--is inspired from market equilibria and implements fair outcomes through a pricing mechanism. We study existence and computation of CAEI for multiple divisible goods, discrete goods, and cake cutting. Our solution is useful more generally, when the players have a target set of goods, and very small positive values for any bundle other than their target set.", "Competitive equilibrium from equal incomes (CEEI) is a well-known fair allocation mechanism with desirable fairness and efficiency properties; however, with indivisible resources, a CEEI may not exist [Foley 1967; Varian 1974; Thomson and Varian 1985]. It was shown in Budish [2011] that in the case of indivisible resources, there is always an allocation, called A-CEEI, that is approximately fair, approximately truthful, and approximately efficient for some favorable approximation parameters. A heuristic search that attempts to find this approximation is used in practice to assign business school students to courses. In this article, we show that finding the A-CEEI allocation guaranteed to exist by Budish’s theorem is PPAD-complete. We further show that finding an approximate equilibrium with better approximation guarantees is even harder: NP-complete.", "We study Fisher markets that admit equilibria wherein each good is integrally assigned to some agent. While strong existence and computational guarantees are known for equilibria of Fisher markets with additive valuations, such equilibria, in general, assign goods fractionally to agents. Hence, Fisher markets are not directly applicable in the context of indivisible goods. In this work we show that one can always bypass this hurdle and, up to a bounded change in agents' budgets, obtain markets that admit an integral equilibrium. We refer to such markets as pure markets and show that, for any given Fisher market (with additive valuations), one can efficiently compute a \"near-by,\" pure market with an accompanying integral equilibrium. Our work on pure markets leads to novel algorithmic results for fair division of indivisible goods. 
Prior work in discrete fair division has shown that, under additive valuations, there always exist allocations that simultaneously achieve the seemingly incompatible properties of fairness and efficiency; here fairness refers to envy-freeness up to one good (EF1) and efficiency corresponds to Pareto efficiency. However, polynomial-time algorithms are not known for finding such allocations. Considering relaxations of proportionality and EF1, respectively, as our notions of fairness, we show that fair and Pareto efficient allocations can be computed in strongly polynomial time.", "We consider the problem of allocating indivisible goods using the leading notion of fairness in economics: the competitive equilibrium from equal incomes. Focusing on two major classes of valuations, namely perfect substitutes and perfect complements, we establish the computational properties of algorithms operating in this framework. For the class of valuations with perfect complements, our algorithm yields a surprisingly succinct characterization of instances that admit a competitive equilibrium from equal incomes." ] }
1907.01577
2954004206
This paper proposes an SVM Enhanced Trajectory Planner for dynamic scenes, typically those encountered in on road settings. Frenet frame based trajectory generation is popular in the context of autonomous driving both in research and industry. We incorporate a safety based maximal margin criteria using a SVM layer that generates control points that are maximally separated from all dynamic obstacles in the scene. A kinematically consistent trajectory generator then computes a path through these waypoints. We showcase through simulations as well as real world experiments on a self driving car that the SVM enhanced planner provides for a larger offset with dynamic obstacles than the regular Frenet frame based trajectory generation. Thereby, the authors argue that such a formulation is inherently suited for navigation amongst pedestrians. We assume the availability of an intent or trajectory prediction module that predicts the future trajectories of all dynamic actors in the scene.
Motion planning in the presence of dynamic obstacles is a well-studied problem with various flavours. Model Predictive Control (MPC) formulations have been a popular theme @cite_5 and have been extended to uncertainty-aware formulations @cite_2 . Time-scaling formulations, which modulate velocities along highly non-linear curves without changing the originally planned trajectories, represent another paradigm @cite_0 , @cite_1 . In a reactive multi-robot setting, Reciprocal Velocity Obstacles and its variants have been popular @cite_15 , @cite_4 , especially for navigation amongst pedestrians @cite_8 . However, to the best of our knowledge, these approaches have not formulated their methods as ones that strive to maintain maximum-margin separation between the ego vehicle and the other moving obstacles in the scene, thereby providing for safety. Moreover, these approaches have not been demonstrated on a real car, although @cite_8 does show results on a ground robot moving amidst pedestrians.
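As a rough illustration of the maximum-margin idea (this is our own sketch, not the planner of the paper or of the cited works), one can fit a linear SVM between predicted obstacle positions and samples of the nominal reference path, and read off the maximal-margin line on which safer control points could be placed; scikit-learn is assumed, and positions are treated as plain 2-D points.

```python
import numpy as np
from sklearn.svm import SVC

# Predicted obstacle positions (assumed to come from a trajectory predictor)
obstacles = np.array([[4.0, 1.0], [5.0, 1.2], [6.0, 0.8]])
# Samples along the ego vehicle's nominal reference path
reference = np.array([[4.0, -1.0], [5.0, -1.0], [6.0, -1.0]])

X = np.vstack([obstacles, reference])
y = np.array([1] * len(obstacles) + [-1] * len(reference))

# Large C approximates a hard margin; w.x + b = 0 is the maximal-margin line.
clf = SVC(kernel="linear", C=1e6).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

def project_to_margin(p):
    """Project a point onto the separating hyperplane w.x + b = 0."""
    return p - (np.dot(w, p) + b) / np.dot(w, w) * w

# Candidate control points: nominal waypoints moved onto the maximal-margin line,
# i.e. as far from the predicted obstacles as from the reference samples.
control_points = np.array([project_to_margin(p) for p in reference])
print(control_points)
```

A kinematically consistent trajectory generator, as described in the abstract above, would then fit a smooth path through such control points; that step is not shown here.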
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_1", "@cite_0", "@cite_2", "@cite_5", "@cite_15" ], "mid": [ "", "2964136016", "2896594106", "2054149315", "2909324603", "2890092150", "2142943472" ], "abstract": [ "", "", "Conventionally, planning frameworks for autonomous vehicles consider large safety margins and pre- defined paths for performing the merge maneuvers. These considerations often increase the wait time at the intersec- tions leading to traffic disruption. In this paper, we present a motion planning framework for autonomous vehicles to perform merge maneuver in dense traffic. Our framework is divided into a two-layer structure, Lane Selection layer and Scale optimization layer. The Lane Selection layer computes the likelihood of collision along the lanes. This likelihood represents the collision risk associated with each lane and is used for lane selection. Subsequently, the Scale optimization layer solves the time scaled collision cone (TSCC) constraint re- actively for collision-free velocities. Our framework guarantees a collision-free merging even in dense traffic with minimum disruption. Furthermore, we show the simulation results in different merging scenarios to demonstrate the efficacy of our framework.", "Reactive Collision avoidance for non-holonomic robots is a challenging task because of the restrictions in the space of achievable velocities. The complexity increases further when multiple non-holonomic robots are operating in tight cluttered spaces. The present paper presents a framework specially carved out for such situations. But at the same time can be easily appended with any existing collision avoidance framework. At the crux of the methodology is the concept of non-linear time scaling which allows robots to reactively accelerate de-accelerate without altering the geometric path. The framework introduced is completely independent of the robot kinematics and dynamics. As such it can be applied to any ground or aerial robot. Through this concept the collision avoidance is framed as a problem of choosing appropriate scaling transformations. We present a “scaled” variant of the collision cone concept which automatically induces distributiveness among robots. The efficacy of the proposed work is demonstrated through simulations of both ground as well as UAVs.", "Safe autonomous navigation of microair vehicles in cluttered dynamic environments is challenging due to the uncertainties arising from robot localization, sensing, and motion disturbances. This letter presents a probabilistic collision avoidance method for navigation among other robots and moving obstacles, such as humans. The approach explicitly considers the collision probability between each robot and obstacle and formulates a chance constrained nonlinear model predictive control problem (CCNMPC). A tight bound for approximation of collision probability is developed, which makes the CCNMPC formulation tractable and solvable in real time. For multirobot coordination, we describe three approaches, one distributed without communication (constant velocity assumption), one distributed with communication (of previous plans), and one centralized (sequential planning). We evaluate the proposed method in experiments with two quadrotors sharing the space with two humans and verify the multirobot coordination strategy in simulation with up to sixteen quadrotors.", "When driving in urban environments, an autonomous vehicle must account for the interaction with other traffic participants. 
It must reason about their future behavior, how its actions affect their future behavior, and potentially consider multiple motion hypothesis. In this paper we introduce a method for joint behavior estimation and trajectory planning that models interaction and multi-policy decision-making. The method leverages Partially Observable Markov Decision Processes to estimate the behavior of other traffic participants given the planned trajectory for the ego-vehicle, and Receding-Horizon Control for generating safe trajectories for the ego-vehicle. To achieve safe navigation we introduce chance constraints over multiple motion policies in the receding-horizon planner. These constraints account for uncertainty over the behavior of other traffic participants. The method is capable of running in real-time and we show its performance and good scalability in simulated multi-vehicle intersection scenarios.", "In this paper, we propose a new concept - the \"Reciprocal Velocity Obstacle\"- for real-time multi-agent navigation. We consider the case in which each agent navigates independently without explicit communication with other agents. Our formulation is an extension of the Velocity Obstacle concept [3], which was introduced for navigation among (passively) moving obstacles. Our approach takes into account the reactive behavior of the other agents by implicitly assuming that the other agents make a similar collision-avoidance reasoning. We show that this method guarantees safe and oscillation- free motions for each of the agents. We apply our concept to navigation of hundreds of agents in densely populated environments containing both static and moving obstacles, and we show that real-time and scalable performance is achieved in such challenging scenarios." ] }
1907.01700
2955954498
Motivated by adjacency in perfect matching polytopes, we study the shortest reconfiguration problem of perfect matchings via alternating cycles. Namely, we want to find a shortest sequence of perfect matchings which transforms one given perfect matching to another given perfect matching such that the symmetric difference of each pair of consecutive perfect matchings is a single cycle. The problem is equivalent to the combinatorial shortest path problem in perfect matching polytopes. We prove that the problem is NP-hard even when a given graph is planar or bipartite, but it can be solved in polynomial time when the graph is outerplanar.
Other Configuration Spaces for Matchings. As mentioned, reconfiguration problems of matchings have already been studied under different models @cite_3 @cite_18 @cite_13 @cite_10 @cite_8 . These models adopt more elementary changes as the adjacency relation on the configuration space. Under such models the situation changes drastically: even the reachability of one matching from another is not guaranteed.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_8", "@cite_3", "@cite_10" ], "mid": [ "", "2145799305", "2942216670", "1997048861", "2754517646" ], "abstract": [ "", "We study problems of reconfigurability of independent sets in graphs. We consider three different models (token jumping, token sliding, and token addition and removal) and analyze relationships between them. We prove that independent set reconfigurability in perfect graphs (under any of the three models) generalizes the shortest path reconfigurability problem in general graphs and is therefore PSPACE-complete. On the positive side, we give polynomial results for even-hole-free graphs and P\"4-free graphs.", "We study the perfect matching reconfiguration problem: Given two perfect matchings of a graph, is there a sequence of flip operations that transforms one into the other? Here, a flip operation exchanges the edges in an alternating cycle of length four. We are interested in the complexity of this decision problem from the viewpoint of graph classes. We first prove that the problem is PSPACE-complete even for split graphs and for bipartite graphs of bounded bandwidth with maximum degree five. We then investigate polynomial-time solvable cases. Specifically, we prove that the problem is solvable in polynomial time for strongly orderable graphs (that include interval graphs and strongly chordal graphs), for outerplanar graphs, and for cographs (also known as @math -free graphs). Furthermore, for each yes-instance from these graph classes, we show that a linear number of flip operations is sufficient and we can exhibit a corresponding sequence of flip operations in polynomial time.", "Reconfiguration problems arise when we wish to find a step-by-step transformation between two feasible solutions of a problem such that all intermediate results are also feasible. We demonstrate that a host of reconfiguration problems derived from NP-complete problems are PSPACE-complete, while some are also NP-hard to approximate. In contrast, several reconfiguration versions of problems in P are solvable in polynomial time.", "Reconfiguration is concerned with relationships among solutions to a problem instance, where the reconfiguration of one solution to another is a sequence of steps such that each step produces an intermediate feasible solution. The solution space can be represented as a reconfiguration graph, where two vertices representing solutions are adjacent if one can be formed from the other in a single step. Work in the area encompasses both structural questions (Is the reconfiguration graph connected?) and algorithmic ones (How can one find the shortest sequence of steps between two solutions?) This survey discusses techniques, results, and future directions in the area." ] }
1907.01700
2955954498
Motivated by adjacency in perfect matching polytopes, we study the shortest reconfiguration problem of perfect matchings via alternating cycles. Namely, we want to find a shortest sequence of perfect matchings which transforms one given perfect matching to another given perfect matching such that the symmetric difference of each pair of consecutive perfect matchings is a single cycle. The problem is equivalent to the combinatorial shortest path problem in perfect matching polytopes. We prove that the problem is NP-hard even when a given graph is planar or bipartite, but it can be solved in polynomial time when the graph is outerplanar.
Matching reconfiguration was initiated by the work of @cite_3 . They proposed the token addition/removal model of reconfiguration, in which we are also given an integer @math , and the vertex set of the configuration space consists of the matchings of size at least @math . To be precise, their model is defined in a slightly different way, but it is essentially the same as this definition. Two matchings @math and @math are adjacent if and only if they differ in only one edge. @cite_3 proved that the reachability of two given matchings can be checked in polynomial time.
{ "cite_N": [ "@cite_3" ], "mid": [ "1997048861" ], "abstract": [ "Reconfiguration problems arise when we wish to find a step-by-step transformation between two feasible solutions of a problem such that all intermediate results are also feasible. We demonstrate that a host of reconfiguration problems derived from NP-complete problems are PSPACE-complete, while some are also NP-hard to approximate. In contrast, several reconfiguration versions of problems in P are solvable in polynomial time." ] }
1907.01700
2955954498
Motivated by adjacency in perfect matching polytopes, we study the shortest reconfiguration problem of perfect matchings via alternating cycles. Namely, we want to find a shortest sequence of perfect matchings which transforms one given perfect matching to another given perfect matching such that the symmetric difference of each pair of consecutive perfect matchings is a single cycle. The problem is equivalent to the combinatorial shortest path problem in perfect matching polytopes. We prove that the problem is NP-hard even when a given graph is planar or bipartite, but it can be solved in polynomial time when the graph is outerplanar.
Another model of reconfiguration is token jumping, introduced by Kamiński et al. @cite_18 . In the token jumping model, we are also given an integer @math , and the vertex set of the configuration space consists of the matchings of size exactly @math . Two matchings @math and @math are adjacent if and only if they differ in only two edges. Kamiński et al. proved that the token jumping model is equivalent to the token addition/removal model when @math ( @cite_18 , Theorem 1). Thus, using the result of @cite_3 , reachability can also be checked in polynomial time under the token jumping model ( @cite_18 , Corollary 2).
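For concreteness, the adjacency relations of the token addition/removal and token jumping models described above reduce to simple symmetric-difference tests on edge sets; the sketch below is ours, with matchings represented as sets of frozenset edges.

```python
def adjacent_token_addition_removal(M1, M2):
    """Token addition/removal: the matchings differ in exactly one edge."""
    return len(M1 ^ M2) == 1

def adjacent_token_jumping(M1, M2):
    """Token jumping: equal size, and exactly one edge is swapped for another."""
    return len(M1) == len(M2) and len(M1 ^ M2) == 2

e = lambda u, v: frozenset((u, v))
M1 = {e(1, 2), e(3, 4)}
M2 = {e(1, 2), e(3, 5)}
print(adjacent_token_addition_removal(M1, M2))  # False: two edges differ
print(adjacent_token_jumping(M1, M2))           # True: edge {3,4} jumps to {3,5}
```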
{ "cite_N": [ "@cite_18", "@cite_3" ], "mid": [ "2145799305", "1997048861" ], "abstract": [ "We study problems of reconfigurability of independent sets in graphs. We consider three different models (token jumping, token sliding, and token addition and removal) and analyze relationships between them. We prove that independent set reconfigurability in perfect graphs (under any of the three models) generalizes the shortest path reconfigurability problem in general graphs and is therefore PSPACE-complete. On the positive side, we give polynomial results for even-hole-free graphs and P\"4-free graphs.", "Reconfiguration problems arise when we wish to find a step-by-step transformation between two feasible solutions of a problem such that all intermediate results are also feasible. We demonstrate that a host of reconfiguration problems derived from NP-complete problems are PSPACE-complete, while some are also NP-hard to approximate. In contrast, several reconfiguration versions of problems in P are solvable in polynomial time." ] }
1907.01700
2955954498
Motivated by adjacency in perfect matching polytopes, we study the shortest reconfiguration problem of perfect matchings via alternating cycles. Namely, we want to find a shortest sequence of perfect matchings which transforms one given perfect matching to another given perfect matching such that the symmetric difference of each pair of consecutive perfect matchings is a single cycle. The problem is equivalent to the combinatorial shortest path problem in perfect matching polytopes. We prove that the problem is NP-hard even when a given graph is planar or bipartite, but it can be solved in polynomial time when the graph is outerplanar.
Recently, @cite_8 studied the reachability of two perfect matchings under a model close to ours, namely the alternating cycle model. In this model, two perfect matchings @math and @math are adjacent if and only if their symmetric difference @math is a cycle of length four. Reachability is then no longer guaranteed, and @cite_8 proved that the reachability problem is PSPACE-complete under this restricted model.
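The adjacency notion used in this paper (symmetric difference equal to a single alternating cycle) and the restricted notion of @cite_8 (that cycle has length four) are both easy to test. The following sketch is ours and assumes perfect matchings given as sets of frozenset edges.

```python
from collections import defaultdict

def single_alternating_cycle(M1, M2):
    """True if the symmetric difference of two perfect matchings is one cycle.
    (The restricted model of the cited work would additionally require the
    cycle to have length four, i.e. len(diff) == 4.)"""
    diff = M1 ^ M2
    if not diff:
        return False
    nbr = defaultdict(list)
    for edge in diff:
        u, v = tuple(edge)
        nbr[u].append(v)
        nbr[v].append(u)
    if any(len(vs) != 2 for vs in nbr.values()):
        return False                      # not a disjoint union of cycles
    # Walk one cycle; it must cover every vertex touched by the difference.
    start = next(iter(nbr))
    prev, cur, seen = None, start, {start}
    while True:
        nxt = nbr[cur][0] if nbr[cur][0] != prev else nbr[cur][1]
        if nxt == start:
            break
        seen.add(nxt)
        prev, cur = cur, nxt
    return len(seen) == len(nbr)

e = lambda u, v: frozenset((u, v))
M1 = {e(1, 2), e(3, 4), e(5, 6)}
M2 = {e(2, 3), e(4, 1), e(5, 6)}
print(single_alternating_cycle(M1, M2))  # True: the difference is the cycle 1-2-3-4
```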
{ "cite_N": [ "@cite_8" ], "mid": [ "2942216670" ], "abstract": [ "We study the perfect matching reconfiguration problem: Given two perfect matchings of a graph, is there a sequence of flip operations that transforms one into the other? Here, a flip operation exchanges the edges in an alternating cycle of length four. We are interested in the complexity of this decision problem from the viewpoint of graph classes. We first prove that the problem is PSPACE-complete even for split graphs and for bipartite graphs of bounded bandwidth with maximum degree five. We then investigate polynomial-time solvable cases. Specifically, we prove that the problem is solvable in polynomial time for strongly orderable graphs (that include interval graphs and strongly chordal graphs), for outerplanar graphs, and for cographs (also known as @math -free graphs). Furthermore, for each yes-instance from these graph classes, we show that a linear number of flip operations is sufficient and we can exhibit a corresponding sequence of flip operations in polynomial time." ] }
1907.01700
2955954498
Motivated by adjacency in perfect matching polytopes, we study the shortest reconfiguration problem of perfect matchings via alternating cycles. Namely, we want to find a shortest sequence of perfect matchings which transforms one given perfect matching to another given perfect matching such that the symmetric difference of each pair of consecutive perfect matchings is a single cycle. The problem is equivalent to the combinatorial shortest path problem in perfect matching polytopes. We prove that the problem is NP-hard even when a given graph is planar or bipartite, but it can be solved in polynomial time when the graph is outerplanar.
As mentioned before, matching reconfiguration has been studied by several authors @cite_3 @cite_18 @cite_13 @cite_10 @cite_8 . Extensions to @math -matchings have also been considered @cite_2 @cite_0 .
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_8", "@cite_3", "@cite_0", "@cite_2", "@cite_10" ], "mid": [ "", "2145799305", "2942216670", "1997048861", "2803094153", "", "2754517646" ], "abstract": [ "", "We study problems of reconfigurability of independent sets in graphs. We consider three different models (token jumping, token sliding, and token addition and removal) and analyze relationships between them. We prove that independent set reconfigurability in perfect graphs (under any of the three models) generalizes the shortest path reconfigurability problem in general graphs and is therefore PSPACE-complete. On the positive side, we give polynomial results for even-hole-free graphs and P\"4-free graphs.", "We study the perfect matching reconfiguration problem: Given two perfect matchings of a graph, is there a sequence of flip operations that transforms one into the other? Here, a flip operation exchanges the edges in an alternating cycle of length four. We are interested in the complexity of this decision problem from the viewpoint of graph classes. We first prove that the problem is PSPACE-complete even for split graphs and for bipartite graphs of bounded bandwidth with maximum degree five. We then investigate polynomial-time solvable cases. Specifically, we prove that the problem is solvable in polynomial time for strongly orderable graphs (that include interval graphs and strongly chordal graphs), for outerplanar graphs, and for cographs (also known as @math -free graphs). Furthermore, for each yes-instance from these graph classes, we show that a linear number of flip operations is sufficient and we can exhibit a corresponding sequence of flip operations in polynomial time.", "Reconfiguration problems arise when we wish to find a step-by-step transformation between two feasible solutions of a problem such that all intermediate results are also feasible. We demonstrate that a host of reconfiguration problems derived from NP-complete problems are PSPACE-complete, while some are also NP-hard to approximate. In contrast, several reconfiguration versions of problems in P are solvable in polynomial time.", "Consider a graph such that each vertex has a nonnegative integer capacity and each edge has a positive integer weight. Then, a b-matching in the graph is a multi-set of edges (represented by an integer vector on edges) such that the total number of edges incident to each vertex is at most the capacity of the vertex. In this paper, we study a reconfiguration variant for maximum-weight b-matchings: For two given maximum-weight b-matchings in a graph, we are asked to determine whether there exists a sequence of maximum-weight b-matchings in the graph between them, with subsequent b-matchings obtained by removing one edge and adding another. We show that this reconfiguration problem is solvable in polynomial time for instances with no integrality gap. Such instances include bipartite graphs with any capacity function on vertices, and 2-matchings in general graphs. Thus, our result implies that the reconfiguration problem for maximum-weight matchings can be solved in polynomial time for bipartite graphs.", "", "Reconfiguration is concerned with relationships among solutions to a problem instance, where the reconfiguration of one solution to another is a sequence of steps such that each step produces an intermediate feasible solution. 
The solution space can be represented as a reconfiguration graph, where two vertices representing solutions are adjacent if one can be formed from the other in a single step. Work in the area encompasses both structural questions (Is the reconfiguration graph connected?) and algorithmic ones (How can one find the shortest sequence of steps between two solutions?) This survey discusses techniques, results, and future directions in the area." ] }
1907.01700
2955954498
Motivated by adjacency in perfect matching polytopes, we study the shortest reconfiguration problem of perfect matchings via alternating cycles. Namely, we want to find a shortest sequence of perfect matchings which transforms one given perfect matching to another given perfect matching such that the symmetric difference of each pair of consecutive perfect matchings is a single cycle. The problem is equivalent to the combinatorial shortest path problem in perfect matching polytopes. We prove that the problem is NP-hard even when a given graph is planar or bipartite, but it can be solved in polynomial time when the graph is outerplanar.
Shortest reconfiguration has attracted considerable attention. Starting from early work on the @math -puzzle @cite_30 , there has been work on pancake sorting @cite_37 , on the flip distance between triangulations of point sets @cite_19 @cite_36 and of simple polygons @cite_11 , and on independent set reconfiguration @cite_6 , satisfiability reconfiguration @cite_17 , coloring reconfiguration @cite_33 , and token swapping problems @cite_16 @cite_14 @cite_23 @cite_20 @cite_27 @cite_22 . A tantalizing open problem is to determine the complexity of computing the rotation distance between two rooted binary trees (or, equivalently, the flip distance between two triangulations of a convex polygon, or the combinatorial shortest path in an associahedron).
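In principle, any of these shortest reconfiguration problems is a shortest path problem in an (exponentially large) configuration graph, so plain breadth-first search solves tiny instances; the hardness results above indicate that efficient exact algorithms are unlikely in general. As a toy illustration (ours), the sketch below computes the minimum number of prefix reversals for the pancake sorting problem mentioned above.

```python
from collections import deque

def pancake_distance(start, goal):
    """Minimum number of prefix reversals (flips) turning `start` into `goal`,
    found by BFS over the configuration graph of permutations."""
    start, goal = tuple(start), tuple(goal)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:
            return dist[state]
        for k in range(2, len(state) + 1):        # flip the top k pancakes
            nxt = state[:k][::-1] + state[k:]
            if nxt not in dist:
                dist[nxt] = dist[state] + 1
                queue.append(nxt)
    return None                                   # unreachable (cannot happen here)

print(pancake_distance([3, 1, 4, 2], [1, 2, 3, 4]))  # -> 4
```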
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_14", "@cite_33", "@cite_22", "@cite_36", "@cite_17", "@cite_6", "@cite_19", "@cite_27", "@cite_23", "@cite_16", "@cite_20", "@cite_11" ], "mid": [ "2149951504", "1846975517", "2962926282", "2568898402", "2962964818", "1987750368", "2757068865", "2258225879", "2110762325", "2912916233", "", "", "", "2163548699" ], "abstract": [ "The 8-puzzle and the 15-puzzle have been used for many years as a domain for testing heuristic search techniques. From experience it is known that these puzzles are \"difficult\" and therefore useful for testing search techniques. In this paper we give strong evidence that these puzzles are indeed good test problems. We extend the 8-puzzle and the 15-puzzle to a nxn board and show that finding a shortest solution for the extended puzzle is NP-hard and thus computationally infeasible. We also present an approximation algorithm for transforming boards that is guaranteed to use no more than c L (SP) moves, where L (SP) is the length of the shortest solution and c is a constant which is independent of the given boards and their size n.", "Pancake Flipping is the problem of sorting a stack of pancakes of different sizes (that is, a permutation), when the only allowed operation is to insert a spatula anywhere in the stack and to flip the pancakes above it (that is, to perform a prefix reversal). In the burnt variant, one side of each pancake is marked as burnt, and it is required to finish with all pancakes having the burnt side down. Computing the optimal scenario for any stack of pancakes and determining the worst-case stack for any stack size have been challenges for over more than three decades. Beyond being an intriguing combinatorial problem in itself, it also yields applications, e.g. in parallel computing and computational biology. In this paper, we show that the Pancake Flipping problem, in its original (unburnt) variant, is NP-hard, thus answering the long-standing question of its computational complexity.", "Given a graph G=(V,E) with V= 1,...,n , we place on every vertex a token T_1,...,T_n. A swap is an exchange of tokens on adjacent vertices. We consider the algorithmic question of finding a shortest sequence of swaps such that token T_i is on vertex i. We are able to achieve essentially matching upper and lower bounds, for exact algorithms and approximation algorithms. For exact algorithms, we rule out any 2^ o(n) algorithm under the ETH. This is matched with a simple 2^ O(n*log(n)) algorithm based on a breadth-first search in an auxiliary graph. We show one general 4-approximation and show APX-hardness. Thus, there is a small constant delta > 1 such that every polynomial time approximation algorithm has approximation factor at least delta. Our results also hold for a generalized version, where tokens and vertices are colored. In this generalized version each token must go to a vertex with the same color.", "The (k )-colouring reconfiguration problem asks whether, for a given graph (G ), two proper (k )-colourings ( ) and ( ) of (G ), and a positive integer ( ), there exists a sequence of at most ( ) proper (k )-colourings of (G ) which starts with ( ) and ends with ( ) and where successive colourings in the sequence differ on exactly one vertex of (G ). We give a complete picture of the parameterized complexity of the (k )-colouring reconfiguration problem for each fixed (k ) when parameterized by ( ). 
First we show that the (k )-colouring reconfiguration problem is polynomial-time solvable for (k=3 ), settling an open problem of Cereceda, van den Heuvel and Johnson. Then, for all (k 4 ), we show that the (k )-colouring reconfiguration problem, when parameterized by ( ), is fixed-parameter tractable (addressing a question of Mouawad, Nishimura, Raman, Simjour and Suzuki) but that it has no polynomial kernel unless the polynomial hierarchy collapses.", "", "In this work we consider triangulations of point sets in the Euclidean plane, i.e., maximal straight-line crossing-free graphs on a finite set of points. Given a triangulation of a point set, an edge flip is the operation of removing one edge and adding another one, such that the resulting graph is again a triangulation. Flips are a major way of locally transforming triangular meshes. We show that, given a point set S in the Euclidean plane and two triangulations T\"1 and T\"2 of S, it is an APX-hard problem to minimize the number of edge flips to transform T\"1 to T\"2.", "Given a Boolean formula and a satisfying assignment, a flip is an operation that changes the value of a variable in the assignment so that the resulting assignment remains satisfying. We study the problem of computing the shortest sequence of flips (if one exists) that transforms a given satisfying assignment @math to another satisfying assignment @math of an input Boolean formula. Earlier work characterized the complexity of deciding the existence of a sequence of flips between two given satisfying assignments using Schaefer's framework for classification of Boolean formulas. We build on it to provide a trichotomy for the complexity of finding the shortest sequence of flips and show that it is either in P, NP-complete, or PSPACE-complete. Our result adds to the growing set of complexity results known for shortest reconfiguration sequence problems by providing an example where the shortest sequence can be found in polynomial time even though the sequence flips variables that have the same value in both @math an...", "For given two independent sets ( I _b ) and ( I _r ) of a graph, the sliding token problem is to determine if there exists a sequence of independent sets which transforms ( I _b ) into ( I _r ) so that each independent set in the sequence results from the previous one by sliding exactly one token along an edge in the graph. The sliding token problem is one of the reconfiguration problems that attract the attention from the viewpoint of theoretical computer science. These problems tend to be PSPACE-complete in general, and some polynomial time algorithms are shown in restricted cases. Recently, the problems for finding a shortest reconfiguration sequence are investigated. For the 3SAT reconfiguration problem, a trichotomy for the complexity of finding the shortest sequence has been shown; it is in P, NP-complete, or PSPACE-complete in certain conditions. Even if it is polynomial time solvable to decide whether two instances are reconfigured with each other, it can be NP-complete to find a shortest sequence between them. We show nontrivial polynomial time algorithms for finding a shortest sequence between two independent sets for some graph classes. As far as the authors know, one of them is the first polynomial time algorithm for the shortest sliding token problem that requires detours of tokens.", "Given two triangulations of a convex polygon, computing the minimum number of flips required to transform one to the other is a long-standing open problem. 
It is not known whether the problem is in P or NP-complete. We prove that two natural generalizations of the problem are NP-complete, namely computing the minimum number of flips between two triangulations of (1) a polygon with holes; (2) a set of points in the plane.", "", "", "", "", "Let T be a triangulation of a simple polygon. A flip in T is the operation of replacing one diagonal of T by a different one such that the resulting graph is again a triangulation. The flip distance between two triangulations is the smallest number of flips required to transform one triangulation into the other. For the special case of convex polygons, the problem of determining the shortest flip distance between two triangulations is equivalent to determining the rotation distance between two binary trees, a central problem which is still open after over 25 years of intensive study. We show that computing the flip distance between two triangulations of a simple polygon is NP-hard. This complements a recent result that shows APX-hardness of determining the flip distance between two triangulations of a planar point set." ] }
1907.01700
2955954498
Motivated by adjacency in perfect matching polytopes, we study the shortest reconfiguration problem of perfect matchings via alternating cycles. Namely, we want to find a shortest sequence of perfect matchings which transforms one given perfect matching to another given perfect matching such that the symmetric difference of each pair of consecutive perfect matchings is a single cycle. The problem is equivalent to the combinatorial shortest path problem in perfect matching polytopes. We prove that the problem is NP-hard even when a given graph is planar or bipartite, but it can be solved in polynomial time when the graph is outerplanar.
The computational aspects of the combinatorial shortest path problem on convex polytopes have not been well investigated. It is known that the combinatorial diameter is hard to determine @cite_35 , even for fractional matching polytopes @cite_9 . In the literature, we find many papers on the adjacency of vertices of convex polytopes arising from combinatorial optimization problems @cite_15 @cite_12 @cite_26 @cite_5 . Among others, Papadimitriou @cite_24 proved that determining whether two given vertices are adjacent in a traveling salesman polytope is coNP-complete. This implies that computing the combinatorial shortest path between two vertices of a traveling salesman polytope is coNP-hard. However, to the best of the authors' knowledge, all known combinatorial polytopes with such adjacency hardness stem from NP-hard combinatorial optimization problems, and the associated polytopes have exponentially many facets. We also point out the work on a randomized algorithm to compute a combinatorial "short" path @cite_29 .
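For very small polytopes with a combinatorial characterization of adjacency, such distances can of course be computed by brute force. The sketch below (ours, for intuition only) does this for the Birkhoff polytope of 4 x 4 doubly stochastic matrices, i.e. the perfect matching polytope of K_{4,4}, using the classical fact that two permutation matrices are adjacent on this polytope exactly when the corresponding permutations differ by a single cycle; it reports the combinatorial diameter.

```python
from collections import deque
from itertools import permutations

def is_single_cycle(perm):
    """True if the permutation (tuple p, with p[i] the image of i) moves at
    least one point and all moved points lie on one cycle."""
    moved = [i for i in range(len(perm)) if perm[i] != i]
    if not moved:
        return False
    cur, length = perm[moved[0]], 1
    while cur != moved[0]:
        cur = perm[cur]
        length += 1
    return length == len(moved)

def adjacent(p, q):
    """Birkhoff-polytope adjacency: p and q differ by a single cycle."""
    q_inv = [0] * len(q)
    for i, qi in enumerate(q):
        q_inv[qi] = i
    return is_single_cycle(tuple(p[q_inv[i]] for i in range(len(p))))

vertices = list(permutations(range(4)))          # the 24 vertices of B_4

def eccentricity(src):
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in vertices:
            if v not in dist and adjacent(u, v):
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

print(max(eccentricity(v) for v in vertices))    # combinatorial diameter -> 2
```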
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_9", "@cite_29", "@cite_24", "@cite_5", "@cite_15", "@cite_12" ], "mid": [ "2010362899", "2059625366", "2963404973", "2132914290", "2000825158", "2077831283", "2017122944", "34983847" ], "abstract": [ "We show that it isDP-hard to determine the combinatorial diameter of a polytope specified by linear inequalities with integer data. Our result partially resolves a long-term open question.", "Abstract Let Qc,r be the integer hull of the intersection of the assignment polytope with a given hyper-plane H = x = (xij) ϵ Rn × n: ∑ni = 1 ∑nj = 1 cijxij = r . We show that the problem of checking whether two given extreme points of Qc,r are nonadjacent on Qc,r is solvable in O (n5) time if c = (cij) is a 0–1 matrix, and that it is NP-Complete if c is a general integer matrix.", "The (combinatorial) diameter of a polytope P ⊆ ⊆ R^d is the maximum value of a shortest path between a pair of vertices on the 1-skeleton of P, that is the graph where the nodes are given by the 0-dimensional faces of P, and the edges are given the 1-dimensional faces of P. The diameter of a polytope has been studied from many different perspectives, including a computational complexity point of view. In particular, [Frieze and Teng, 1994] showed that computing the diameter of a polytope is (weakly) NP-hard. In this paper, we show that the problem of computing the diameter is strongly NP-hard even for a polytope with a very simple structure: namely, the fractional matching polytope. We also show that computing a pair of vertices at maximum shortest path distance on the 1-skeleton of this polytope is an APX-hard problem. We prove these results by giving an exact characterization of the diameter of the fractional matching polytope, that is of independent interest.", "We show that the shadow vertex algorithm can be used to compute a short path between a given pair of vertices of a polytope @math along the edges of P, where A∈ℝm ×n. Both, the length of the path and the running time of the algorithm, are polynomial in m, n, and a parameter 1 δ that is a measure for the flatness of the vertices of P. For integer matrices A∈ℤm ×n we show a connection between δ and the largest absolute value Δ of any sub-determinant of A, yielding a bound of O(Δ4mn4) for the length of the computed path. This bound is expressed in the same parameter Δ as the recent non-constructive bound of O(Δ2n4 log(n Δ)) by [1]. For the special case of totally unimodular matrices, the length of the computed path simplifies to O(mn4), which significantly improves the previously best known constructive bound of O(m16n3 log3 (mn)) by Dyer and Frieze [7].", "We consider the problem of determining whether two traveling salesman tours correspond to non-adjacent vertices of the convex polytope associated with the traveling salesman problem. This problem is shown to be NP-Complete for both the symmetric and nonsymmetric traveling salesman problem. Several implications are discussed.", "To each finite set with at least two elements, there corresponds a partial order polytope. It is defined as the convex hull of the characteristic vectors of all partial orders which have that set as ground set. This 0 1-polytope contains the linear ordering polytope as a proper face. The present article deals with the facial structure of partial order polytopes. 
Our main results are: (i) a proof that the nonadjacency problem on partial order polytopes is NP-complete; (ii) a characterization of the polytopes that are affinely equivalent to a face of some partial order polytope.", "Abstract In this paper we show that adjacency on the 0–1 knapsack polytope can be determined by a very simple argument. Namely, let u and v be two feasible solutions to the 0–1 knapsack problem, then u and v are nonadjacent on the polytope of the convex hull of feasible solutions, if and only if, there exist two other feasible solutions w 1 and w 2 , such that 1 2w 1 + 1 2w 2 = 1 u + 1 2v . This observation allows us to prove that the question of determining whether two given feasible solutions are adjacent, is an NP-complete problem.", "In this paper, we discuss the adjacency structures of some classes of 0-1 polytopes including knapsack polytopes, set covering polytopes and 0-1 polytopes represented by complete sets of implicants. We show that for each class of 0-1 polytope, non-adjacency test problems are NP-complete. For equality constrained knapsack polytopes, we can solve adjacency test problems in pseudo polynomial time." ] }
1907.01457
2953980163
A common approach for knowledge-base entity search is to consider an entity as a document with multiple fields. Models that focus on matching query terms in different fields are popular choices for searching such entity representations. An instance of such a model is FSDM (Fielded Sequential Dependence Model). We propose to integrate field-level semantic features into FSDM. We use FSDM to retrieve a pool of documents, and then use semantic field-level features to re-rank those documents. We propose to represent queries as bags of terms as well as bags of entities, and eventually, use their dense vector representation to compute semantic features based on query document similarity. Our proposed re-ranking approach achieves significant improvement in entity retrieval on the DBpedia-Entity (v2) dataset over the existing FSDM model. Specifically, for all queries we achieve 2.5% and 1.2% significant improvement in NDCG@10 and NDCG@100, respectively.
@cite_10 and @cite_6 show that over 70% of web search queries are related to entities. Existing methods take advantage of the fact that entities have rich fielded information and propose a variety of fielded retrieval methods such as BM25F @cite_9 @cite_21 @cite_14 and FSDM @cite_17 . In FSDM, the different fields of an entity are categorized into five final fields: names, attributes, categories, related entity names, and similar entity names. FSDM incorporates term dependencies based on ordered and unordered n-grams. @cite_19 investigate a learning-to-rank model for entity search that incorporates different features such as the FSDM score, the BM25 score, etc.
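To make the fielded re-ranking idea above concrete, the following is a minimal sketch (not the cited authors' implementation) of one plausible field-level semantic feature: the cosine similarity between the centroid of the query term vectors and the centroid of a field's term vectors, interpolated with a first-stage FSDM score. The toy embedding table, the field names, and the mixing weight are illustrative assumptions.

    import numpy as np

    # Toy embedding table standing in for pre-trained word/entity vectors (assumption).
    EMB = {w: np.random.RandomState(abs(hash(w)) % 2**32).randn(50)
           for w in ["ben", "franklin", "benjamin", "inventor", "politician", "kite"]}

    def centroid(tokens):
        """Average the embeddings of the tokens that are in the vocabulary."""
        vecs = [EMB[t] for t in tokens if t in EMB]
        return np.mean(vecs, axis=0) if vecs else np.zeros(50)

    def cosine(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    def semantic_field_features(query_tokens, fields):
        """One query-field similarity feature per field (names, categories, ...)."""
        q = centroid(query_tokens)
        return {name: cosine(q, centroid(toks)) for name, toks in fields.items()}

    def rerank_score(fsdm_score, feats, weight=0.3):
        # Linear interpolation of the first-stage FSDM score with the mean
        # semantic feature; in practice the weights would be learned.
        return (1 - weight) * fsdm_score + weight * np.mean(list(feats.values()))

    fields = {"names": ["benjamin", "franklin"], "categories": ["inventor", "politician"]}
    feats = semantic_field_features(["ben", "franklin", "kite"], fields)
    print(rerank_score(fsdm_score=2.1, feats=feats))

The same pattern extends to a bag-of-entities query representation by swapping the token embeddings for entity embeddings.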
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_21", "@cite_6", "@cite_19", "@cite_10", "@cite_17" ], "mid": [ "", "2128878007", "", "1997189720", "2085030399", "2098700435", "2049534074" ], "abstract": [ "", "Information Retrieval (IR) approaches for semantic web search engines have become very populars in the last years. Popularization of different IR libraries, like Lucene, that allows IR implementations almost out-of-the-box have make easier IR integration in Semantic Web search engines. However, one of the most important features of Semantic Web documents is the structure, since this structure allow us to represent semantic in a machine readable format. In this paper we analyze the specific problems of structured IR and how to adapt weighting schemas for semantic document retrieval.", "", "Semantic Search refers to a loose set of concepts, challenges and techniques having to do with harnessing the information of the growing Web of Data (WoD) for Web search. Here we propose a formal model of one specific semantic search task: ad-hoc object retrieval. We show that this task provides a solid framework to study some of the semantic search problems currently tackled by commercial Web search engines. We connect this task to the traditional ad-hoc document retrieval and discuss appropriate evaluation metrics. Finally, we carry out a realistic evaluation of this task in the context of a Web search application.", "This paper describes a simple way of adapting the BM25 ranking formula to deal with structured documents. In the past it has been common to compute scores for the individual fields (e.g. title and body) independently and then combine these scores (typically linearly) to arrive at a final score for the document. We highlight how this approach can lead to poor performance by breaking the carefully constructed non-linear saturation of term frequency in the BM25 function. We propose a much more intuitive alternative which weights term frequencies before the non-linear term frequency saturation function is applied. In this scheme, a structured document with a title weight of two is mapped to an unstructured document with the title content repeated twice. This more verbose unstructured document is then ranked in the usual way. We demonstrate the advantages of this method with experiments on Reuters Vol1 and the TREC dotGov collection.", "This paper addresses the problem of Named Entity Recognition in Query (NERQ), which involves detection of the named entity in a given query and classification of the named entity into predefined classes. NERQ is potentially useful in many applications in web search. The paper proposes taking a probabilistic approach to the task using query log data and Latent Dirichlet Allocation. We consider contexts of a named entity (i.e., the remainders of the named entity in queries) as words of a document, and classes of the named entity as topics. The topic model is constructed by a novel and general learning method referred to as WS-LDA (Weakly Supervised Latent Dirichlet Allocation), which employs weakly supervised learning (rather than unsupervised learning) using partially labeled seed entities. Experimental results show that the proposed method based on WS-LDA can accurately perform NERQ, and outperform the baseline methods.", "Previously proposed approaches to ad-hoc entity retrieval in the Web of Data (ERWD) used multi-fielded representation of entities and relied on standard unigram bag-of-words retrieval models. 
Although retrieval models incorporating term dependencies have been shown to be significantly more effective than the unigram bag-of-words ones for ad hoc document retrieval, it is not known whether accounting for term dependencies can improve retrieval from the Web of Data. In this work, we propose a novel retrieval model that incorporates term dependencies into structured document retrieval and apply it to the task of ERWD. In the proposed model, the document field weights and the relative importance of unigrams and bigrams are optimized with respect to the target retrieval metric using a learning-to-rank method. Experiments on a publicly available benchmark indicate significant improvement of the accuracy of retrieval results by the proposed model over state-of-the-art retrieval models for ERWD." ] }
1907.01457
2953980163
A common approach for knowledge-base entity search is to consider an entity as a document with multiple fields. Models that focus on matching query terms in different fields are popular choices for searching such entity representations. An instance of such a model is FSDM (Fielded Sequential Dependence Model). We propose to integrate field-level semantic features into FSDM. We use FSDM to retrieve a pool of documents, and then use semantic field-level features to re-rank those documents. We propose to represent queries as bags of terms as well as bags of entities, and eventually to use their dense vector representations to compute semantic features based on query-document similarity. Our proposed re-ranking approach achieves a significant improvement in entity retrieval on the DBpedia-Entity (v2) dataset over the existing FSDM model. Specifically, over all queries we achieve significant improvements of 2.5% and 1.2% in NDCG@10 and NDCG@100, respectively.
There is substantial work in ad-hoc document retrieval that takes advantage of embeddings to improve retrieval effectiveness. Recently, @cite_5 described a method which represents documents and queries in both the text and the entity space, thus leveraging entity embeddings. However, such deep models need significant amounts of data to be effective. Since the dataset provided for this task is small, our model is more readily applicable.
{ "cite_N": [ "@cite_5" ], "mid": [ "2710956079" ], "abstract": [ "This paper presents a word-entity duet framework for utilizing knowledge bases in ad-hoc retrieval. In this work, the query and documents are modeled by word-based representations and entity-based representations. Ranking features are generated by the interactions between the two representations, incorporating information from the word space, the entity space, and the cross-space connections through the knowledge graph. To handle the uncertainties from the automatically constructed entity representations, an attention-based ranking model AttR-Duet is developed. With back-propagation from ranking labels, the model learns simultaneously how to demote noisy entities and how to rank documents with the word-entity duet. Evaluation results on TREC Web Track ad-hoc task demonstrate that all of the four-way interactions in the duet are useful, the attention mechanism successfully steers the model away from noisy entities, and together they significantly outperform both word-based and entity-based learning to rank systems." ] }
1907.01457
2953980163
A common approach for knowledge-base entity search is to consider an entity as a document with multiple fields. Models that focus on matching query terms in different fields are popular choices for searching such entity representations. An instance of such a model is FSDM (Fielded Sequential Dependence Model). We propose to integrate field-level semantic features into FSDM. We use FSDM to retrieve a pool of documents, and then use semantic field-level features to re-rank those documents. We propose to represent queries as bags of terms as well as bags of entities, and eventually to use their dense vector representations to compute semantic features based on query-document similarity. Our proposed re-ranking approach achieves a significant improvement in entity retrieval on the DBpedia-Entity (v2) dataset over the existing FSDM model. Specifically, over all queries we achieve significant improvements of 2.5% and 1.2% in NDCG@10 and NDCG@100, respectively.
Entity embeddings are also used in other tasks such as question answering @cite_13 , academic search @cite_1 , entity disambiguation @cite_15 , and knowledge graph completion @cite_11 @cite_18 . The TREC-CAR (Complex Answer Retrieval) task provides a large dataset built on a collection of knowledge articles from Wikipedia, which presents an opportunity for incorporating deep models into entity retrieval. Results on TREC-CAR show that RDF2Vec @cite_2 is not as effective as the BM25 model on the paragraph ranking task @cite_0 .
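As an illustration of the kinds of knowledge-graph embeddings referenced above (translation-based scoring as in the TransE/TransR family, and diagonal bilinear scoring), here is a minimal sketch on random vectors. The dimensionality, the toy entities, and the triples are made-up assumptions, not taken from any cited system.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 32

    # Toy embeddings for a handful of entities and one relation (assumption).
    entities = {e: rng.normal(size=dim) for e in ["paris", "france", "berlin", "germany"]}
    relations = {r: rng.normal(size=dim) for r in ["capital_of"]}

    def transe_score(h, r, t):
        # Translation-based score: a plausible triple should satisfy h + r ≈ t,
        # so a smaller distance (larger negated norm) means more plausible.
        return -np.linalg.norm(entities[h] + relations[r] - entities[t])

    def bilinear_score(h, r, t):
        # Diagonal bilinear score (DistMult-style): higher means more plausible.
        return float(entities[h] @ (relations[r] * entities[t]))

    for triple in [("paris", "capital_of", "france"), ("paris", "capital_of", "germany")]:
        print(triple, transe_score(*triple), bilinear_score(*triple))

With trained (rather than random) vectors, link prediction simply ranks candidate tails by one of these scores.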
{ "cite_N": [ "@cite_18", "@cite_1", "@cite_0", "@cite_2", "@cite_15", "@cite_13", "@cite_11" ], "mid": [ "2184957013", "2604165577", "2616330167", "2523679382", "2337969891", "1992712260", "2951077644" ], "abstract": [ "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to state-of-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https: github.com mrlyk423 relation_extraction.", "This paper introduces Explicit Semantic Ranking (ESR), a new ranking technique that leverages knowledge graph embedding. Analysis of the query log from our academic search engine, SemanticScholar.org, reveals that a major error source is its inability to understand the meaning of research concepts in queries. To addresses this challenge, ESR represents queries and documents in the entity space and ranks them based on their semantic connections from their knowledge graph embedding. Experiments demonstrate ESR's ability in improving Semantic Scholar's online production system, especially on hard queries where word-based ranking fails.", "Providing answers to complex information needs is a challenging task. The new TREC Complex Answer Retrieval (TREC CAR) track introduces a large-scale dataset where paragraphs are to be retrieved in response to outlines of Wikipedia articles representing complex information needs. We present early results from a variety of approaches -- from standard information retrieval methods (e.g., TF-IDF) to complex systems that adopt query expansion, knowledge bases and deep neural networks. The goal is to offer an overview of some promising approaches to tackle this problem.", "Linked Open Data has been recognized as a valuable source for background information in data mining. However, most data mining tools require features in propositional form, i.e., a vector of nominal or numerical features associated with an instance, while Linked Open Data sources are graphs by nature. In this paper, we present RDF2Vec, an approach that uses language modeling approaches for unsupervised feature extraction from sequences of words, and adapts them to RDF graphs. We generate sequences by leveraging local information from graph sub-structures, harvested by Weisfeiler-Lehman Subtree RDF Graph Kernels and graph walks, and learn latent numerical representations of entities in RDF graphs. 
Our evaluation shows that such vector representations outperform existing techniques for the propositionalization of RDF graphs on a variety of different predictive machine learning tasks, and that feature vector representations of general knowledge graphs such as DBpedia and Wikidata can be easily reused for different tasks.", "Entity disambiguation is the task of mapping ambiguous terms in natural-language text to its entities in a knowledge base. It finds its application in the extraction of structured data in RDF (Resource Description Framework) from textual documents, but equally so in facilitating artificial intelligence applications, such as Semantic Search, Reasoning and Question & Answering. We propose a new collective, graph-based disambiguation algorithm utilizing semantic entity and document embeddings for robust entity disambiguation. Robust thereby refers to the property of achieving better than state-of-the-art results over a wide range of very different data sets. Our approach is also able to abstain if no appropriate entity can be found for a specific surface form. Our evaluation shows, that our approach achieves significantly (>5 ) better results than all other publicly available disambiguation algorithms on 7 of 9 datasets without data set specific tuning. Moreover, we discuss the influence of the quality of the knowledge base on the disambiguation accuracy and indicate that our algorithm achieves better results than non-publicly available state-of-the-art algorithms.", "The availability of large amounts of open, distributed, and structured semantic data on the web has no precedent in the history of computer science. In recent years, there have been important advances in semantic search and question answering over RDF data. In particular, natural language interfaces to online semantic data have the advantage that they can exploit the expressive power of Semantic Web data models and query languages, while at the same time hiding their complexity from the user. However, despite the increasing interest in this area, there are no evaluations so far that systematically evaluate this kind of systems, in contrast to traditional question answering and search interfaces to document spaces. To address this gap, we have set up a series of evaluation challenges for question answering over linked data. The main goal of the challenge was to get insight into the strengths, capabilities, and current shortcomings of question answering systems as interfaces to query linked data sources, as well as benchmarking how these interaction paradigms can deal with the fact that the amount of RDF data available on the web is very large and heterogeneous with respect to the vocabularies and schemas used. Here, we report on the results from the first and second of such evaluation campaigns. We also discuss how the second evaluation addressed some of the issues and limitations which arose from the first one, as well as the open issues to be addressed in future competitions.", "We consider learning representations of entities and relations in KBs using the neural-embedding approach. We show that most existing models, including NTN (, 2013) and TransE (, 2013b), can be generalized under a unified learning framework, where entities are low-dimensional vectors learned from a neural network and relations are bilinear and or linear mapping functions. Under this framework, we compare a variety of embedding models on the link prediction task. 
We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2 vs. 54.7 by TransE on Freebase). Furthermore, we introduce a novel approach that utilizes the learned relation embeddings to mine logical rules such as \"BornInCity(a,b) and CityInCountry(b,c) => Nationality(a,c)\". We find that embeddings learned from the bilinear objective are particularly good at capturing relational semantics and that the composition of relations is characterized by matrix multiplication. More interestingly, we demonstrate that our embedding-based rule extraction approach successfully outperforms a state-of-the-art confidence-based rule mining approach in mining Horn rules that involve compositional reasoning." ] }
1812.04798
2905357333
We propose an approach for unsupervised adaptation of object detectors from label-rich to label-poor domains which can significantly reduce annotation costs associated with detection. Recently, approaches that align distributions of source and target images using an adversarial loss have been proven effective for adapting object classifiers. However, for object detection, fully matching the entire distributions of source and target images to each other at the global image level may fail, as domains could have distinct scene layouts and different combinations of objects. On the other hand, strong matching of local features such as texture and color makes sense, as it does not change category level semantics. This motivates us to propose a novel method for detector adaptation based on strong local alignment and weak global alignment. Our key contribution is the weak alignment model, which focuses the adversarial alignment loss on images that are globally similar and puts less emphasis on aligning images that are globally dissimilar. Additionally, we design the strong domain alignment model to only look at local receptive fields of the feature map. We empirically verify the effectiveness of our method on four datasets comprising both large and small domain shifts. Our code is available at this https URL
The problem of bridging the gap between domains has been investigated for various visual applications such as image classification and semantic segmentation @cite_14 @cite_13 @cite_1 @cite_18 .
{ "cite_N": [ "@cite_1", "@cite_18", "@cite_14", "@cite_13" ], "mid": [ "", "2795889831", "1722318740", "1565327149" ], "abstract": [ "", "Visual Domain Adaptation is a problem of immense importance in computer vision. Previous approaches showcase the inability of even deep neural networks to learn informative representations across domain shift. This problem is more severe for tasks where acquiring hand labeled data is extremely hard and tedious. In this work, we focus on adapting the representations learned by segmentation networks across synthetic and real domains. Contrary to previous approaches that use a simple adversarial objective or superpixel information to aid the process, we propose an approach based on Generative Adversarial Networks (GANs) that brings the embeddings closer in the learned feature space. To showcase the generality and scalability of our approach, we show that we can achieve state of the art results on two challenging scenarios of synthetic to real domain adaptation. Additional exploratory experiments show that our approach: (1) generalizes to unseen domains and (2) results in improved alignment of source and target distributions.", "Domain adaptation is an important emerging topic in computer vision. In this paper, we present one of the first studies of domain shift in the context of object recognition. We introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While we focus our evaluation on object recognition tasks, the transform-based adaptation technique we develop is general and could be applied to nonimage data. Another contribution is a new multi-domain object database, freely available for download. We experimentally demonstrate the ability of our method to improve recognition on categories with few or no target domain labels and moderate to large changes in the imaging conditions.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task." ] }
1812.04798
2905357333
We propose an approach for unsupervised adaptation of object detectors from label-rich to label-poor domains which can significantly reduce annotation costs associated with detection. Recently, approaches that align distributions of source and target images using an adversarial loss have been proven effective for adapting object classifiers. However, for object detection, fully matching the entire distributions of source and target images to each other at the global image level may fail, as domains could have distinct scene layouts and different combinations of objects. On the other hand, strong matching of local features such as texture and color makes sense, as it does not change category level semantics. This motivates us to propose a novel method for detector adaptation based on strong local alignment and weak global alignment. Our key contribution is the weak alignment model, which focuses the adversarial alignment loss on images that are globally similar and puts less emphasis on aligning images that are globally dissimilar. Additionally, we design the strong domain alignment model to only look at local receptive fields of the feature map. We empirically verify the effectiveness of our method on four datasets comprising both large and small domain shifts. Our code is available at this https URL
To solve the problem, a large number of methods utilize feature distribution matching between the training and testing domains. The basic idea is to measure some type of distance between the feature distributions of the different domains and train a feature extractor to minimize that distance. Various ways of measuring the distance have been proposed @cite_19 @cite_13 @cite_6 @cite_5 @cite_2 @cite_35 . Motivated by theoretical results @cite_41 @cite_37 , several approaches utilize a domain classifier @cite_19 @cite_13 @cite_6 to measure domain discrepancy. They train the domain classifier and the feature extractor in an adversarial way, as is done for training GANs @cite_17 . Such methods are designed to strictly align the feature distribution of the target with that of the source. In addition, Long et al. designed a loss function for the domain classifier that strictly matches features between domains @cite_31 for image classification.
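A common way to implement the adversarial alignment described above is a gradient reversal layer between the feature extractor and the domain classifier: the classifier minimizes the domain-classification loss while the extractor receives reversed gradients and therefore maximizes it. The sketch below is a generic PyTorch illustration, not the code of any cited paper; the network sizes and the reversal coefficient are assumptions.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; multiplies the gradient by -lambda backward."""
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    feature_extractor = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
    domain_classifier = nn.Sequential(nn.Linear(64, 1))          # source vs. target
    bce = nn.BCEWithLogitsLoss()

    source = torch.randn(8, 128)   # stand-in for source-domain inputs
    target = torch.randn(8, 128)   # stand-in for target-domain inputs

    feats = feature_extractor(torch.cat([source, target]))
    feats = GradReverse.apply(feats, 1.0)                        # reverse gradients here
    logits = domain_classifier(feats).squeeze(1)
    labels = torch.cat([torch.zeros(8), torch.ones(8)])          # 0 = source, 1 = target

    loss = bce(logits, labels)
    loss.backward()   # the extractor now gets gradients that confuse the domain classifier

Weak (image-level) versus strong (local, patch-level) alignment then differs mainly in where along the backbone this domain classifier is attached and how its loss is weighted.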
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_41", "@cite_6", "@cite_19", "@cite_2", "@cite_5", "@cite_31", "@cite_13", "@cite_17" ], "mid": [ "", "2104094955", "2131953535", "2593768305", "2963826681", "2279034837", "2159291411", "2795155917", "1565327149", "2099471712" ], "abstract": [ "", "Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. Often, however, we have plentiful labeled training data from a source domain but wish to learn a classifier which performs well on a target domain with a different distribution and little or no labeled training data. In this work we investigate two questions. First, under what conditions can a classifier trained from source data be expected to perform well on target data? Second, given a small amount of labeled target data, how should we combine it during training with the large amount of labeled source data to achieve the lowest target error at test time? We address the first question by bounding a classifier's target error in terms of its source error and the divergence between the two domains. We give a classifier-induced divergence measure that can be estimated from finite, unlabeled samples from the domains. Under the assumption that there exists some hypothesis that performs well in both domains, we show that this quantity together with the empirical source error characterize the target error of a source-trained classifier. We answer the second question by bounding the target error of a model which minimizes a convex combination of the empirical source and target errors. Previous theoretical work has considered minimizing just the source error, just the target error, or weighting instances from the two domains equally. We show how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class. The resulting bound generalizes the previously studied cases and is always at least as tight as a bound which considers minimizing only the target error or an equal weighting of source and target errors.", "Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. In many situations, though, we have labeled training data for a source domain, and we wish to learn a classifier which performs well on a target domain with a different distribution. Under what conditions can we adapt a classifier trained on the source domain for use in the target domain? Intuitively, a good feature representation is a crucial factor in the success of domain adaptation. We formalize this intuition theoretically with a generalization bound for domain adaption. Our theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model. It also points toward a promising new model for domain adaptation: one which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set.", "Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. 
They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.", "Top-performing deep architectures are trained on massive amounts of labeled data. In the absence of labeled data for a certain task, domain adaptation often provides an attractive option given that labeled data of similar nature but from a different domain (e.g. synthetic images) are available. Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amount of labeled data from the source domain and large amount of unlabeled data from the target domain (no labeled target-domain data is necessary). As the training progresses, the approach promotes the emergence of \"deep\" features that are (i) discriminative for the main learning task on the source domain and (ii) invariant with respect to the shift between the domains. We show that this adaptation behaviour can be achieved in almost any feed-forward model by augmenting it with few standard layers and a simple new gradient reversal layer. The resulting augmented architecture can be trained using standard back propagation. Overall, the approach can be implemented with little effort using any of the deep-learning packages. The method performs very well in a series of image classification experiments, achieving adaptation effect in the presence of big domain shifts and outperforming previous state-of-the-art on Office datasets.", "The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into deep network to explicitly learn the residual function with reference to the target classifier. 
We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state of the art methods on standard domain adaptation benchmarks.", "Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multikernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.", "Adversarial learning has been embedded into deep networks to learn transferable representations for domain adaptation. Existing adversarial domain adaptation methods may struggle to align different domains of multimode distributions that are native in classification problems. In this paper, we present conditional adversarial domain adaptation, a new framework that conditions the adversarial adaptation models on discriminative information conveyed in the classifier predictions. Conditional domain adversarial networks are proposed to enable discriminative adversarial adaptation of multimode domains. Experiments testify that the proposed approaches exceed the state-of-the-art results on three domain adaptation datasets.", "Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias on a standard benchmark. Fine-tuning deep models in a new domain can require a significant amount of data, which for many applications is simply not available. We propose a new CNN architecture which introduces an adaptation layer and an additional domain confusion loss, to learn a representation that is both semantically meaningful and domain invariant. We additionally show that a domain confusion metric can be used for model selection to determine the dimension of an adaptation layer and the best position for the layer in the CNN architecture. 
Our proposed adaptation method offers empirical performance which exceeds previously published results on a standard benchmark visual domain adaptation task.", "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples." ] }
1812.04571
2905114471
Most of the current state-of-the-art methods for tumor segmentation are based on machine learning models trained on manually segmented images. This type of training data is particularly costly, as manual delineation of tumors is not only time-consuming but also requires medical expertise. On the other hand, images with a provided global label (indicating presence or absence of a tumor) are less informative but can be obtained at a substantially lower cost. In this paper, we propose to use both types of training data (fully-annotated and weakly-annotated) to train a deep learning model for segmentation. The idea of our approach is to extend segmentation networks with an additional branch performing image-level classification. The model is jointly trained for segmentation and classification tasks in order to exploit information contained in weakly-annotated images while preventing the network from learning features that are irrelevant for the segmentation task. We evaluate our method on the challenging task of brain tumor segmentation in Magnetic Resonance images from the BRATS 2018 challenge. We show that the proposed approach provides a significant improvement in segmentation performance compared to standard supervised learning. The observed improvement is proportional to the ratio between weakly-annotated and fully-annotated images available for training.
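The joint architecture described in this abstract (a segmentation network extended with an image-level classification branch) can be mocked up generically as below. This is an illustrative sketch under assumed layer sizes and an assumed equal weighting of the two losses, not the authors' actual network; the segmentation loss is simply skipped for weakly-annotated images that come without a mask.

    import torch
    import torch.nn as nn

    class JointSegClassNet(nn.Module):
        """Shared encoder with a segmentation head and an image-level classification head."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                         nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
            self.seg_head = nn.Conv2d(16, 1, 1)                     # per-pixel tumor logit
            self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1),  # image-level logit
                                          nn.Flatten(), nn.Linear(16, 1))

        def forward(self, x):
            f = self.encoder(x)
            return self.seg_head(f), self.cls_head(f)

    net = JointSegClassNet()
    bce = nn.BCEWithLogitsLoss()

    x = torch.randn(2, 1, 64, 64)                 # one fully- and one weakly-annotated image
    mask = (torch.rand(1, 1, 64, 64) > 0.9).float()
    label = torch.tensor([[1.0], [0.0]])          # image-level tumor presence

    seg_logits, cls_logits = net(x)
    loss = bce(cls_logits, label)                 # classification loss on every image
    loss = loss + bce(seg_logits[:1], mask)       # segmentation loss only where a mask exists
    loss.backward()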
Pre-trained classification networks have also been used to detect objects by determining image subregions whose modification influences the global classification score of a class. In @cite_16 , the authors propose to compute the gradient of the classification score with respect to the pixel intensities and to threshold it in order to localize the object of interest. However, these partial derivatives provide only weak information for tumor segmentation, which requires a complex analysis of the spatial context. The method proposed in @cite_5 is based on replacing image subregions by their mean value in order to measure the drop in the classification score.
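For reference, the gradient-based saliency idea in @cite_16 amounts to a single backward pass through a trained classifier. The sketch below is a generic PyTorch illustration using a stand-in classifier and a random input; it is not the cited authors' code, and the threshold rule is an assumption.

    import torch
    import torch.nn as nn

    # Stand-in classifier; in practice this would be a trained network.
    classifier = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))

    image = torch.randn(1, 1, 64, 64, requires_grad=True)   # toy MR slice
    score = classifier(image)[0, 1]                          # score of the "tumor" class
    score.backward()

    # Saliency map: magnitude of the class-score gradient at each pixel,
    # thresholded to obtain a rough localization mask.
    saliency = image.grad.abs().squeeze()
    mask = saliency > saliency.mean() + 2 * saliency.std()
    print(mask.float().mean())    # fraction of pixels flagged as salient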
{ "cite_N": [ "@cite_5", "@cite_16" ], "mid": [ "2951505120", "2962851944" ], "abstract": [ "This paper introduces self-taught object localization, a novel approach that leverages deep convolutional networks trained for whole-image recognition to localize objects in images without additional human supervision, i.e., without using any ground-truth bounding boxes for training. The key idea is to analyze the change in the recognition scores when artificially masking out different regions of the image. The masking out of a region that includes the object typically causes a significant drop in recognition score. This idea is embedded into an agglomerative clustering technique that generates self-taught localization hypotheses. Our object localization scheme outperforms existing proposal methods in both precision and recall for small number of subwindow proposals (e.g., on ILSVRC-2012 it produces a relative gain of 23.4 over the state-of-the-art for top-1 hypothesis). Furthermore, our experiments show that the annotations automatically-generated by our method can be used to train object detectors yielding recognition results remarkably close to those obtained by training on manually-annotated bounding boxes.", "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13]." ] }
1812.04571
2905114471
Most of the current state-of-the-art methods for tumor segmentation are based on machine learning models trained on manually segmented images. This type of training data is particularly costly, as manual delineation of tumors is not only time-consuming but also requires medical expertise. On the other hand, images with a provided global label (indicating presence or absence of a tumor) are less informative but can be obtained at a substantially lower cost. In this paper, we propose to use both types of training data (fully-annotated and weakly-annotated) to train a deep learning model for segmentation. The idea of our approach is to extend segmentation networks with an additional branch performing image-level classification. The model is jointly trained for segmentation and classification tasks in order to exploit information contained in weakly-annotated images while preventing the network from learning features that are irrelevant for the segmentation task. We evaluate our method on the challenging task of brain tumor segmentation in Magnetic Resonance images from the BRATS 2018 challenge. We show that the proposed approach provides a significant improvement in segmentation performance compared to standard supervised learning. The observed improvement is proportional to the ratio between weakly-annotated and fully-annotated images available for training.
In standard semi-supervised learning @cite_3 for classification, the training data is composed of both labelled and unlabelled samples. Unlabelled samples can be used to encourage the model to satisfy certain properties relating labels to the feature space. Common properties include smoothness (points close in the feature space should be close in the target space), clustering (labels form clusters in the feature space) and low-density separation (decision boundaries should lie in low-density regions of the feature space). Semi-supervised learning based on these properties can be performed by graph-based methods such as the recent work of @cite_21 . The main idea of such methods is to propagate labels in a fully-connected graph whose nodes are the samples (labelled and unlabelled) and whose edges are weighted by similarities between samples. Graph-based semi-supervised methods are difficult to use for segmentation, in particular because they require computing similarity metrics between samples, whereas a single image is generally composed of millions of samples (pixels or voxels).
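The label-propagation step at the heart of such graph-based methods can be written in a few lines. The following is a minimal sketch on toy 2-D points, assuming an RBF affinity and the usual iterative update Y <- alpha*S*Y + (1-alpha)*Y0; the bandwidth, alpha, and the data are illustrative assumptions, not taken from the cited work.

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])  # two blobs
    Y0 = np.zeros((40, 2))
    Y0[0, 0] = 1          # one labelled sample from class 0
    Y0[20, 1] = 1         # one labelled sample from class 1

    # RBF affinity matrix and its symmetric normalization S = D^-1/2 W D^-1/2.
    dists = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    W = np.exp(-dists**2 / (2 * 0.5**2))
    np.fill_diagonal(W, 0)
    Dinv = np.diag(1.0 / np.sqrt(W.sum(1)))
    S = Dinv @ W @ Dinv

    # Iterative propagation: keep a fraction (1 - alpha) of the initial labels.
    alpha, Y = 0.9, Y0.copy()
    for _ in range(100):
        Y = alpha * S @ Y + (1 - alpha) * Y0

    pred = Y.argmax(1)
    print(pred[:20].mean(), pred[20:].mean())   # ~0.0 and ~1.0 if propagation worked

The quadratic cost of the affinity matrix in the number of samples is exactly why this recipe does not scale to per-pixel segmentation.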
{ "cite_N": [ "@cite_21", "@cite_3" ], "mid": [ "2804344481", "2964317695" ], "abstract": [ "We present a novel cost function for semi-supervised learning of neural networks that encourages compact clustering of the latent space to facilitate separation. The key idea is to dynamically create a graph over embeddings of labeled and unlabeled samples of a training batch to capture underlying structure in feature space, and use label propagation to estimate its high and low density regions. We then devise a cost function based on Markov chains on the graph that regularizes the latent space to form a single compact cluster per class, while avoiding to disturb existing clusters during optimization. We evaluate our approach on three benchmarks and compare to state-of-the art with promising results. Our approach combines the benefits of graph-based regularization with efficient, inductive inference, does not require modifications to a network architecture, and can thus be easily applied to existing networks to enable an effective use of unlabeled data.", "Abstract Machine learning (ML) algorithms have made a tremendous impact in the field of medical imaging. While medical imaging datasets have been growing in size, a challenge for supervised ML algorithms that is frequently mentioned is the lack of annotated data. As a result, various methods that can learn with less other types of supervision, have been proposed. We give an overview of semi-supervised, multiple instance, and transfer learning in medical imaging, both in diagnosis or segmentation tasks. We also discuss connections between these learning scenarios, and opportunities for future research. A dataset with the details of the surveyed papers is available via https: figshare.com articles Database_of_surveyed_literature_in_Not-so-supervised_a_survey_of_semi-supervised_multi-instance_and_transfer_learning_in_medical_image_analysis_ 7479416 ." ] }
1812.04571
2905114471
Most of the current state-of-the-art methods for tumor segmentation are based on machine learning models trained on manually segmented images. This type of training data is particularly costly, as manual delineation of tumors is not only time-consuming but also requires medical expertise. On the other hand, images with a provided global label (indicating presence or absence of a tumor) are less informative but can be obtained at a substantially lower cost. In this paper, we propose to use both types of training data (fully-annotated and weakly-annotated) to train a deep learning model for segmentation. The idea of our approach is to extend segmentation networks with an additional branch performing image-level classification. The model is jointly trained for segmentation and classification tasks in order to exploit information contained in weakly-annotated images while preventing the network from learning features that are irrelevant for the segmentation task. We evaluate our method on the challenging task of brain tumor segmentation in Magnetic Resonance images from the BRATS 2018 challenge. We show that the proposed approach provides a significant improvement in segmentation performance compared to standard supervised learning. The observed improvement is proportional to the ratio between weakly-annotated and fully-annotated images available for training.
Relatively few works have been proposed for semi-supervised learning in image segmentation. Some semi-supervised approaches are based on self-training, i.e., training a machine learning model on self-generated labels. Iterative algorithms similar to EM @cite_36 were proposed for natural images @cite_40 and medical images @cite_19 . Recently, @cite_18 proposed a method based on Generative Adversarial Networks @cite_14 in which the generator network performs image segmentation and the discriminator network tries to determine whether a segmentation corresponds to the ground truth or was produced by the generator. The discriminator network is used to produce confidence maps for self-training. Approaches based on self-training have the drawback of learning from uncertain labels (produced by the model itself), and training such models is difficult.
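To make the self-training idea concrete, here is a minimal, generic sketch of one round of pseudo-labelling for segmentation: a model trained on the labelled set predicts soft masks for unlabelled images, and only high-confidence pixels are kept as training targets. The tiny network, the confidence thresholds, the loss weighting, and the random tensors are illustrative assumptions, not the cited methods.

    import torch
    import torch.nn as nn

    # Tiny stand-in segmentation network (one output channel = tumor probability).
    model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 3, padding=1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss(reduction="none")

    labelled_x = torch.randn(4, 1, 32, 32)
    labelled_y = (torch.rand(4, 1, 32, 32) > 0.8).float()
    unlabelled_x = torch.randn(4, 1, 32, 32)

    for step in range(10):
        # Supervised loss on the labelled images.
        sup_loss = bce(model(labelled_x), labelled_y).mean()

        # Pseudo-labels: keep only pixels where the current model is confident.
        with torch.no_grad():
            probs = torch.sigmoid(model(unlabelled_x))
            pseudo = (probs > 0.5).float()
            confident = ((probs > 0.9) | (probs < 0.1)).float()

        unsup_loss = (bce(model(unlabelled_x), pseudo) * confident).sum() / confident.sum().clamp(min=1)
        loss = sup_loss + 0.5 * unsup_loss
        opt.zero_grad(); loss.backward(); opt.step()

The adversarial variant of @cite_18 essentially replaces the hand-set confidence threshold with a learned discriminator that decides which predicted regions are trustworthy.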
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_36", "@cite_19", "@cite_40" ], "mid": [ "2787241931", "", "2136573752", "2951021229", "2221898772" ], "abstract": [ "We propose a method for semi-supervised semantic segmentation using the adversarial network. While most existing discriminators are trained to classify input images as real or fake on the image level, we design a discriminator in a fully convolutional manner to differentiate the predicted probability maps from the ground truth segmentation distribution with the consideration of the spatial resolution. We show that the proposed discriminator can be used to improve the performance on semantic segmentation by coupling the adversarial loss with the standard cross entropy loss on the segmentation network. In addition, the fully convolutional discriminator enables the semi-supervised learning through discovering the trustworthy regions in prediction results of unlabeled images, providing additional supervisory signals. In contrast to existing methods that utilize weakly-labeled images, our method leverages unlabeled images without any annotation to enhance the segmentation model. Experimental results on both the PASCAL VOC 2012 dataset and the Cityscapes dataset demonstrate the effectiveness of our algorithm.", "", "The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limitation-no spatial information is taken into account. This causes the FM model to work only on well-defined images with low levels of noise; unfortunately, this is often not the the case due to artifacts such as partial volume effect and bias field distortion. Under these conditions, FM model-based methods produce unreliable results. Here, the authors propose a novel hidden Markov random field (HMRF) model, which is a stochastic process generated by a MRF whose state sequence cannot be observed directly but which can be indirectly estimated through observations. Mathematically, it can be shown that the FM model is a degenerate version of the HMRF model. The advantage of the HMRF model derives from the way in which the spatial information is encoded through the mutual influences of neighboring sites. Although MRF modeling has been employed in MR image segmentation by other researchers, most reported methods are limited to using MRF as a general prior in an FM model-based approach. To fit the HMRF model, an EM algorithm is used. The authors show that by incorporating both the HMRF model and the EM algorithm into a HMRF-EM framework, an accurate and robust segmentation can be achieved. More importantly, the HMRF-EM framework can easily be combined with other techniques. As an example, the authors show how the bias field correction algorithm of Guillemaud and Brady (1997) can be incorporated into this framework to achieve a three-dimensional fully automated approach for brain MR image segmentation.", "In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled with bounding box annotations. It extends the approach of the well-known GrabCut method to include machine learning by training a neural network classifier from bounding box annotations. 
We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare those to a naive approach to CNN training under weak supervision. We test its applicability to solve brain and lung segmentation problems on a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy.", "Deep convolutional neural networks (DCNNs) trained on a large number of images with strong pixel-level annotations have recently significantly pushed the state-of-art in semantic image segmentation. We study the more challenging problem of learning DCNNs for semantic image segmentation from either (1) weakly annotated training data such as bounding boxes or image-level labels or (2) a combination of few strongly labeled and many weakly labeled images, sourced from one or multiple datasets. We develop Expectation-Maximization (EM) methods for semantic image segmentation model training under these weakly supervised and semi-supervised settings. Extensive experimental evaluation shows that the proposed techniques can learn models delivering competitive results on the challenging PASCAL VOC 2012 image segmentation benchmark, while requiring significantly less annotation effort. We share source code implementing the proposed system at https://bitbucket.org/deeplab/deeplab-public." ] }
1812.04821
2949606544
Single Image Super Resolution (SISR) is a well-researched problem with broad commercial relevance. However, most of the SISR literature focuses on small-size images under 500px, whereas business needs can mandate the generation of very high resolution images. At Expedia Group, we were tasked with generating images of at least 2000px for display on the website, four times greater than the sizes typically reported in the literature. This requirement poses a challenge that state-of-the-art models, validated on small images, have not been proven to handle. In this paper, we investigate solutions to the problem of generating high-quality images for large-scale super resolution in a commercial setting. We find that training a generative adversarial network (GAN) with attention from scratch using a large-scale lodging image data set generates images with high PSNR and SSIM scores. We describe a novel attentional SISR model for large-scale images, A-SRGAN, that uses a Flexible Self Attention layer to enable processing of large-scale images. We also describe a distributed algorithm which speeds up training by around a factor of five.
More recently, deep learning approaches have produced favorable results in many computer vision tasks, including SR. There are many interesting deep learning frameworks for SR, but here we focus on three models of particular relevance to our work: SRGAN @cite_22 , SAGAN @cite_18 , and SNGAN @cite_12 .
{ "cite_N": [ "@cite_18", "@cite_22", "@cite_12" ], "mid": [ "2950893734", "2523714292", "2785678896" ], "abstract": [ "In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. 
We tested the efficacy of spectral normalization on CIFAR10, STL-10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques." ] }
1812.04821
2949606544
Single Image Super Resolution (SISR) is a well-researched problem with broad commercial relevance. However, most of the SISR literature focuses on small-size images under 500px, whereas business needs can mandate the generation of very high resolution images. At Expedia Group, we were tasked with generating images of at least 2000px for display on the website, four times greater than the sizes typically reported in the literature. This requirement poses a challenge that state-of-the-art models, validated on small images, have not been proven to handle. In this paper, we investigate solutions to the problem of generating high-quality images for large-scale super resolution in a commercial setting. We find that training a generative adversarial network (GAN) with attention from scratch using a large-scale lodging image data set generates images with high PSNR and SSIM scores. We describe a novel attentional SISR model for large-scale images, A-SRGAN, that uses a Flexible Self Attention layer to enable processing of large-scale images. We also describe a distributed algorithm which speeds up training by around a factor of five.
Super-Resolution through Generative Adversarial Networks (SRGAN) @cite_22 is a generative model that uses GANs @cite_10 to produce an estimate of the HR image given the LR image. The SRGAN architecture involves first training a ResNet-based model @cite_1 called SRResNet --- a deep residual network with convolutional layers and a pixel shuffle layer @cite_2 --- that minimizes the MSE between @math and @math . SRGAN uses SRResNet as the generator and takes an adversarial approach to estimating the probability distribution of the target dataset.
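The sub-pixel (pixel shuffle) upsampling mentioned above rearranges channels into spatial resolution, so the expensive convolutions run in LR space. Below is a minimal PyTorch sketch of such an upsampling block with a 4x overall scale factor; the channel counts are assumptions and this is not the full SRResNet.

    import torch
    import torch.nn as nn

    class UpsampleBlock(nn.Module):
        """Conv produces r*r times more channels; PixelShuffle turns them into r x r pixels."""
        def __init__(self, channels, r=2):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels * r * r, kernel_size=3, padding=1)
            self.shuffle = nn.PixelShuffle(r)
            self.act = nn.PReLU()

        def forward(self, x):
            return self.act(self.shuffle(self.conv(x)))

    # Two x2 blocks give the x4 upscaling used by SRGAN-style generators.
    upsampler = nn.Sequential(UpsampleBlock(64), UpsampleBlock(64),
                              nn.Conv2d(64, 3, 9, padding=4))

    lr_features = torch.randn(1, 64, 32, 32)   # features computed in LR space
    sr_image = upsampler(lr_features)
    print(sr_image.shape)                      # torch.Size([1, 3, 128, 128])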
{ "cite_N": [ "@cite_1", "@cite_10", "@cite_22", "@cite_2" ], "mid": [ "2949650786", "2577946330", "2523714292", "2476548250" ], "abstract": [ "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "This report summarizes the tutorial presented by the author at NIPS 2016 on generative adversarial networks (GANs). The tutorial describes: (1) Why generative modeling is a topic worth studying, (2) how generative models work, and how GANs compare to other generative models, (3) the details of how GANs work, (4) research frontiers in GANs, and (5) state-of-the-art image models that combine GANs with other methods. Finally, the tutorial contains three exercises for readers to complete, and the solutions to these exercises.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. 
An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods." ] }
1812.04821
2949606544
Single Image Super Resolution (SISR) is a well-researched problem with broad commercial relevance. However, most of the SISR literature focuses on small-size images under 500px, whereas business needs can mandate the generation of very high resolution images. At Expedia Group, we were tasked with generating images of at least 2000px for display on the website, four times greater than the sizes typically reported in the literature. This requirement poses a challenge that state-of-the-art models, validated on small images, have not been proven to handle. In this paper, we investigate solutions to the problem of generating high-quality images for large-scale super resolution in a commercial setting. We find that training a generative adversarial network (GAN) with attention from scratch using a large-scale lodging image data set generates images with high PSNR and SSIM scores. We describe a novel attentional SISR model for large-scale images, A-SRGAN, that uses a Flexible Self Attention layer to enable processing of large-scale images. We also describe a distributed algorithm which speeds up training by around a factor of five.
SRGAN encourages perceptually rich images and directly addresses deficiencies in texture quality exhibited by earlier models (SRCNN @cite_17 , ESPCNN @cite_2 , and SRResNet itself). The perceptual loss, the weighted sum of a content loss and an adversarial loss, is a crucial theoretical contribution of SRGAN used to recover fine texture content. There are two versions of the SRGAN content loss: (1) a straightforward pixel-wise MSE loss between @math and @math , similar to ESPCNN, and (2) the VGG loss, which is the Euclidean distance between the feature maps of @math and @math after passing them through the VGG19 network. The adversarial loss enhances the texture quality of the image, since the GAN drives the distribution of the generated image @math toward the true distribution @math , which lies on the natural image manifold, thus enhancing the visual appearance of the image.
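As a concrete illustration, here is a hedged sketch of such a combined objective: a content term (pixel-wise MSE, or an MSE between feature maps produced by an assumed frozen `vgg_features` extractor) plus a weighted adversarial term. The weighting factor and the helper name are illustrative, not taken from the paper.

```python
# Sketch of a perceptual loss: content term plus weighted adversarial term.
import torch
import torch.nn.functional as F

def perceptual_loss(sr, hr, disc_logits_on_sr, vgg_features=None, adv_weight=1e-3):
    if vgg_features is None:
        content = F.mse_loss(sr, hr)                              # pixel-wise MSE content loss
    else:
        content = F.mse_loss(vgg_features(sr), vgg_features(hr))  # VGG feature-space loss
    # Adversarial loss: push the generator to make the discriminator label SR as real.
    adversarial = F.binary_cross_entropy_with_logits(
        disc_logits_on_sr, torch.ones_like(disc_logits_on_sr))
    return content + adv_weight * adversarial
```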
{ "cite_N": [ "@cite_2", "@cite_17" ], "mid": [ "2476548250", "2525167219" ], "abstract": [ "Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.", "In this note, we want to focus on aspects related to two questions most people asked us at CVPR about the network we presented. Firstly, What is the relationship between our proposed layer and the deconvolution layer? And secondly, why are convolutions in low-resolution (LR) space a better choice? These are key questions we tried to answer in the paper, but we were not able to go into as much depth and clarity as we would have liked in the space allowance. To better answer these questions in this note, we first discuss the relationships between the deconvolution layer in the forms of the transposed convolution layer, the sub-pixel convolutional layer and our efficient sub-pixel convolutional layer. We will refer to our efficient sub-pixel convolutional layer as a convolutional layer in LR space to distinguish it from the common sub-pixel convolutional layer. We will then show that for a fixed computational budget and complexity, a network with convolutions exclusively in LR space has more representation power at the same speed than a network that first upsamples the input in high resolution space." ] }
1812.04821
2949606544
Single Image Super Resolution (SISR) is a well-researched problem with broad commercial relevance. However, most of the SISR literature focuses on small-size images under 500px, whereas business needs can mandate the generation of very high resolution images. At Expedia Group, we were tasked with generating images of at least 2000px for display on the website, four times greater than the sizes typically reported in the literature. This requirement poses a challenge that state-of-the-art models, validated on small images, have not been proven to handle. In this paper, we investigate solutions to the problem of generating high-quality images for large-scale super resolution in a commercial setting. We find that training a generative adversarial network (GAN) with attention from scratch using a large-scale lodging image data set generates images with high PSNR and SSIM scores. We describe a novel attentional SISR model for large-scale images, A-SRGAN, that uses a Flexible Self Attention layer to enable processing of large-scale images. We also describe a distributed algorithm which speeds up training by around a factor of five.
The SAGAN model @cite_18 introduces a self-attention layer for capturing long-range dependencies within a general GAN framework. This layer takes input from the previous convolution layer and outputs feature maps of the same size. The architecture is inspired by the query and key model in @cite_16 . The input feature maps are transformed into the spaces of a query @math and a key @math , where @math and @math . The attention map is then computed by applying a softmax over the pairwise similarities between query and key responses, where @math indicates the extent to which the model attends to the @math location when synthesizing the @math region. This attention map ( @math ) is applied to the feature maps, and a weighted skip connection is established so that the attention output is added on top of the feature maps of the previous layer.
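Below is a minimal sketch of such a self-attention layer, assuming the common 1x1-convolution projections into query, key, and value spaces and a learnable scalar for the weighted skip connection; the channel-reduction factor of 8 is an illustrative choice, not a value taken from the text.

```python
# Sketch of a SAGAN-style self-attention layer over 2D feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # skip weight, starts as identity

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w)           # (b, c/8, N)
        k = self.key(x).view(b, -1, h * w)             # (b, c/8, N)
        v = self.value(x).view(b, -1, h * w)           # (b, c,   N)
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)   # (b, N, N)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # weighted skip connection
```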
{ "cite_N": [ "@cite_18", "@cite_16" ], "mid": [ "2950893734", "2626778328" ], "abstract": [ "In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape.", "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data." ] }
1812.04821
2949606544
Single Image Super Resolution (SISR) is a well-researched problem with broad commercial relevance. However, most of the SISR literature focuses on small-size images under 500px, whereas business needs can mandate the generation of very high resolution images. At Expedia Group, we were tasked with generating images of at least 2000px for display on the website, four times greater than the sizes typically reported in the literature. This requirement poses a challenge that state-of-the-art models, validated on small images, have not been proven to handle. In this paper, we investigate solutions to the problem of generating high-quality images for large-scale super resolution in a commercial setting. We find that training a generative adversarial network (GAN) with attention from scratch using a large-scale lodging image data set generates images with high PSNR and SSIM scores. We describe a novel attentional SISR model for large-scale images, A-SRGAN, that uses a Flexible Self Attention layer to enable processing of large-scale images. We also describe a distributed algorithm which speeds up training by around a factor of five.
Weight normalization @cite_6 is a computationally cheap and efficient technique that has proved highly effective in many computer vision models. Spectral normalization is an extension of weight normalization in which the weight matrix of a hidden layer is normalized by its largest singular value, thereby satisfying a local 1-Lipschitz constraint @cite_12 . Empirically, this technique converges faster and to better solutions, particularly in GANs @cite_9 , as spectral normalization has been shown to alleviate the mode-collapse problem that GANs often face @cite_12 . The Spectral Normalization GAN (SNGAN) @cite_12 uses spectral normalization to constrain the Lipschitz constant of the discriminator network. Note that conditioning the weights of the generator network in addition to the discriminator has also proven successful @cite_18 . Hence, borrowing from these previous works, we apply spectral normalization to all convolutional and dense layers in both the generator and discriminator networks of A-SRGAN.
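A minimal sketch of this design choice is shown below, assuming PyTorch's built-in `spectral_norm` wrapper; the layer shapes are placeholders rather than the actual A-SRGAN architecture.

```python
# Sketch: wrapping conv/dense layers with spectral normalization.
import torch.nn as nn
from torch.nn.utils import spectral_norm

def sn_conv(in_ch, out_ch, k, **kwargs):
    # The wrapped layer's weight is rescaled by its largest singular value at
    # every forward pass, enforcing a local 1-Lipschitz constraint.
    return spectral_norm(nn.Conv2d(in_ch, out_ch, k, **kwargs))

def sn_linear(in_f, out_f):
    return spectral_norm(nn.Linear(in_f, out_f))

discriminator_head = nn.Sequential(
    sn_conv(3, 64, 3, padding=1), nn.LeakyReLU(0.2),
    sn_conv(64, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
)
```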
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_12", "@cite_6" ], "mid": [ "2724651715", "2950893734", "2785678896", "2284050935" ], "abstract": [ "Generative adversarial networks (GANs) are highly effective unsupervised learning frameworks that can generate very sharp data, even for data such as images with complex, highly multimodal distributions. However GANs are known to be very hard to train, suffering from problems such as mode collapse and disturbing visual artifacts. Batch normalization (BN) techniques have been introduced to address the training. Though BN accelerates the training in the beginning, our experiments show that the use of BN can be unstable and negatively impact the quality of the trained model. The evaluation of BN and numerous other recent schemes for improving GAN training is hindered by the lack of an effective objective quality measure for GAN models. To address these issues, we first introduce a weight normalization (WN) approach for GAN training that significantly improves the stability, efficiency and the quality of the generated samples. To allow a methodical evaluation, we introduce squared Euclidean reconstruction error on a test set as a new objective measure, to assess training performance in terms of speed, stability, and quality of generated samples. Our experiments with a standard DCGAN architecture on commonly used datasets (CelebA, LSUN bedroom, and CIFAR-10) indicate that training using WN is generally superior to BN for GANs, achieving 10 lower mean squared loss for reconstruction and significantly better qualitative results than BN. We further demonstrate the stability of WN on a 21-layer ResNet trained with the CelebA data set. The code for this paper is available at this https URL", "In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps. In SAGAN, details can be generated using cues from all feature locations. Moreover, the discriminator can check that highly detailed features in distant portions of the image are consistent with each other. Furthermore, recent work has shown that generator conditioning affects GAN performance. Leveraging this insight, we apply spectral normalization to the GAN generator and find that this improves training dynamics. The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset. Visualization of the attention layers shows that the generator leverages neighborhoods that correspond to object shapes rather than local regions of fixed shape.", "One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. 
We tested the efficacy of spectral normalization on CIFAR10, STL-10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques.", "We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning." ] }
1812.04351
2950762885
Most contemporary robots have depth sensors, and research on semantic segmentation with RGBD images has shown that depth images boost the accuracy of segmentation. Since it is time-consuming to annotate images with semantic labels per pixel, it would be ideal if we could avoid this laborious work by utilizing an existing dataset or a synthetic dataset which we can generate on our own. Robot motions are often tested in a synthetic environment, where multichannel (eg, RGB + depth + instance boundary) images plus their pixel-level semantic labels are available. However, models trained simply on synthetic images tend to demonstrate poor performance on real images. In order to address this, we propose two approaches that can efficiently exploit multichannel inputs combined with an unsupervised domain adaptation (UDA) algorithm. One is a fusion-based approach that uses depth images as inputs. The other is a multitask learning approach that uses depth images as outputs. We demonstrated that the segmentation results were improved by using a multitask learning approach with a post-process and created a benchmark for this task.
The above-mentioned approaches use all the different modalities as input, but there is also an approach that uses only RGB as input and the other modalities as output; this is the multitask learning approach. Multitask learning is a promising approach for efficiently and effectively addressing multiple mutually related recognition tasks, and it is known to outperform single-task methods. Kendall et al. worked on three tasks (semantic and instance segmentation, and depth estimation) @cite_33 , and Kuga et al. also worked on three tasks (RGB reconstruction, semantic segmentation, and depth estimation) @cite_36 .
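For illustration, the sketch below shows uncertainty-based task weighting in the spirit of @cite_33 , where each task loss is scaled by a learned precision and regularized by the corresponding log-variance; this is a common simplified implementation, not the authors' exact formulation.

```python
# Sketch of learned multitask loss weighting via homoscedastic uncertainty.
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    def __init__(self, n_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))  # s_i = log sigma_i^2

    def forward(self, task_losses):
        total = 0.0
        for loss, s in zip(task_losses, self.log_vars):
            total = total + torch.exp(-s) * loss + s  # precision-weighted loss + regularizer
        return total

# Usage: total = UncertaintyWeighting(3)([seg_loss, depth_loss, recon_loss])
```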
{ "cite_N": [ "@cite_36", "@cite_33" ], "mid": [ "2772409766", "2963677766" ], "abstract": [ "Multi-task learning is a promising approach for efficiently and effectively addressing multiple mutually related recognition tasks. Many scene understanding tasks such as semantic segmentation and depth prediction can be framed as cross-modal encoding decoding, and hence most of the prior work used multi-modal datasets for multi-task learning. However, the inter-modal commonalities, such as one across image, depth, and semantic labels, have not been fully exploited. We propose a multi-modal encoder-decoder networks to harness the multi-modal nature of multi-task scene recognition. In addition to the shared latent representation among encoder-decoder pairs, our model also has shared skip connections from different encoders. By combining these two representation sharing mechanisms, the proposed method efficiently learns a shared feature representation among all modalities in the training data. Experiments using two public datasets shows the advantage of our method over baseline methods that are based on encoder-decoder networks and multi-modal auto-encoders.", "Numerous deep learning applications benefit from multitask learning with multiple regression and classification objectives. In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task's loss. Tuning these weights by hand is a difficult and expensive process, making multi-task learning prohibitive in practice. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task. This allows us to simultaneously learn various quantities with different units or scales in both classification and regression settings. We demonstrate our model learning per-pixel depth regression, semantic and instance segmentation from a monocular input image. Perhaps surprisingly, we show our model can learn multi-task weightings and outperform separate models trained individually on each task." ] }
1812.04351
2950762885
Most contemporary robots have depth sensors, and research on semantic segmentation with RGBD images has shown that depth images boost the accuracy of segmentation. Since it is time-consuming to annotate images with semantic labels per pixel, it would be ideal if we could avoid this laborious work by utilizing an existing dataset or a synthetic dataset which we can generate on our own. Robot motions are often tested in a synthetic environment, where multichannel (eg, RGB + depth + instance boundary) images plus their pixel-level semantic labels are available. However, models trained simply on synthetic images tend to demonstrate poor performance on real images. In order to address this, we propose two approaches that can efficiently exploit multichannel inputs combined with an unsupervised domain adaptation (UDA) algorithm. One is a fusion-based approach that uses depth images as inputs. The other is a multitask learning approach that uses depth images as outputs. We demonstrated that the segmentation results were improved by using a multitask learning approach with a post-process and created a benchmark for this task.
There are other approaches that use geometric cues obtained from depth images @cite_31 @cite_0 , but in this research we focus only on the fusion-based and multitask learning approaches, which makes our model applicable not only to geometric modalities but also to other image modalities, such as thermal images.
{ "cite_N": [ "@cite_0", "@cite_31" ], "mid": [ "2777686015", "2776622059" ], "abstract": [ "Fully convolutional network (FCN) has been successfully applied in semantic segmentation of scenes represented with RGB images. Images augmented with depth channel provide more understanding of the geometric information of the scene in the image. The question is how to best exploit this additional information to improve the segmentation performance.,,In this paper, we present a neural network with multiple branches for segmenting RGB-D images. Our approach is to use the available depth to split the image into layers with common visual characteristic of objects scenes, or common “scene-resolution”. We introduce context-aware receptive field (CaRF) which provides a better control on the relevant contextual information of the learned features. Equipped with CaRF, each branch of the network semantically segments relevant similar scene-resolution, leading to a more focused domain which is easier to learn. Furthermore, our network is cascaded with features from one branch augmenting the features of adjacent branch. We show that such cascading of features enriches the contextual information of each branch and enhances the overall performance. The accuracy that our network achieves outperforms the stateof-the-art methods on two public datasets.", "RGBD semantic segmentation requires joint reasoning about 2D appearance and 3D geometric information. In this paper we propose a 3D graph neural network (3DGNN) that builds a k-nearest neighbor graph on top of 3D point cloud. Each node in the graph corresponds to a set of points and is associated with a hidden representation vector initialized with an appearance feature extracted by a unary CNN from 2D images. Relying on recurrent functions, every node dynamically updates its hidden representation based on the current status and incoming messages from its neighbors. This propagation model is unrolled for a certain number of time steps and the final per-node representation is used for predicting the semantic class of each pixel. We use back-propagation through time to train the model. Extensive experiments on NYUD2 and SUN-RGBD datasets demonstrate the effectiveness of our approach." ] }
1812.04405
2904751523
We explore the performance of latent variable models for conditional text generation in the context of neural machine translation (NMT). Similar to , we augment the encoder-decoder NMT paradigm by introducing a continuous latent variable to model features of the translation process. We extend this model with a co-attention mechanism motivated by in the inference network. Compared to the vision domain, latent variable models for text face additional challenges due to the discrete nature of language, namely posterior collapse. We experiment with different approaches to mitigate this issue. We show that our conditional variational model improves upon both discriminative attention-based translation and the variational baseline presented in Finally, we present some exploration of the learned latent space to illustrate what the latent variable is capable of capturing. This is the first reported conditional variational model for text that meaningfully utilizes the latent variable without weakening the translation model.
There has been substantial exploration on both the neural machine translation and variational autoencoder fronts. The attention mechanism introduced by @cite_10 has been extensively used with RNN encoder-decoder models @cite_2 to enhance their ability to deal with long source inputs.
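As a brief illustration, the following is a sketch of such an additive attention mechanism over encoder states, with illustrative dimensions; it follows the standard formulation rather than any particular implementation from the cited works.

```python
# Sketch of additive (Bahdanau-style) attention over RNN encoder states.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    def __init__(self, enc_dim, dec_dim, attn_dim=128):
        super().__init__()
        self.W_enc = nn.Linear(enc_dim, attn_dim, bias=False)
        self.W_dec = nn.Linear(dec_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, src_len, enc_dim); dec_state: (batch, dec_dim)
        scores = self.v(torch.tanh(self.W_enc(enc_states) + self.W_dec(dec_state).unsqueeze(1)))
        weights = F.softmax(scores.squeeze(-1), dim=-1)         # alignment weights (batch, src_len)
        context = torch.bmm(weights.unsqueeze(1), enc_states)   # (batch, 1, enc_dim)
        return context.squeeze(1), weights
```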
{ "cite_N": [ "@cite_10", "@cite_2" ], "mid": [ "2133564696", "2221711388" ], "abstract": [ "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "Natural language inference (NLI) is a fundamentally important task in natural language processing that has many applications. The recently released Stanford Natural Language Inference (SNLI) corpus has made it possible to develop and evaluate learning-centered methods such as deep neural networks for natural language inference (NLI). In this paper, we propose a special long short-term memory (LSTM) architecture for NLI. Our model builds on top of a recently proposed neural attention model for NLI but is based on a significantly different idea. Instead of deriving sentence embeddings for the premise and the hypothesis to be used for classification, our solution uses a match-LSTM to perform word-by-word matching of the hypothesis with the premise. This LSTM is able to place more emphasis on important word-level matching results. In particular, we observe that this LSTM remembers important mismatches that are critical for predicting the contradiction or the neutral relationship label. On the SNLI corpus, our model achieves an accuracy of 86.1 , outperforming the state of the art." ] }
1812.04405
2904751523
We explore the performance of latent variable models for conditional text generation in the context of neural machine translation (NMT). Similar to , we augment the encoder-decoder NMT paradigm by introducing a continuous latent variable to model features of the translation process. We extend this model with a co-attention mechanism motivated by in the inference network. Compared to the vision domain, latent variable models for text face additional challenges due to the discrete nature of language, namely posterior collapse. We experiment with different approaches to mitigate this issue. We show that our conditional variational model improves upon both discriminative attention-based translation and the variational baseline presented in Finally, we present some exploration of the learned latent space to illustrate what the latent variable is capable of capturing. This is the first reported conditional variational model for text that meaningfully utilizes the latent variable without weakening the translation model.
@cite_5 presents a basic RNN-based VAE generative model to explicitly model holistic properties of sentences. It analyzes the challenges of training variational models for text (primarily posterior collapse) and proposes two workarounds: (1) KL cost annealing and (2) masking parts of the source and target tokens with @math symbols in order to strengthen the inference network by weakening the decoder ("word dropout"). This model is primarily concerned with unconditional text generation and does not discuss conditional tasks.
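A minimal sketch of these two workarounds is shown below, assuming a linear annealing schedule and an unknown-token index for the replacement; the schedule length and dropout rate are illustrative.

```python
# Sketch of KL cost annealing and word dropout for training text VAEs.
import torch

def kl_anneal_weight(step, anneal_steps=10000):
    return min(1.0, step / anneal_steps)   # linearly anneal the KL weight from 0 to 1

def word_dropout(tokens, unk_index, rate=0.3):
    # tokens: (batch, seq_len) integer tensor of decoder input token ids
    mask = torch.rand_like(tokens, dtype=torch.float) < rate
    return tokens.masked_fill(mask, unk_index)

# Training objective at step t:  recon_loss + kl_anneal_weight(t) * kl_divergence
```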
{ "cite_N": [ "@cite_5" ], "mid": [ "2210838531" ], "abstract": [ "The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling." ] }
1812.04405
2904751523
We explore the performance of latent variable models for conditional text generation in the context of neural machine translation (NMT). Similar to , we augment the encoder-decoder NMT paradigm by introducing a continuous latent variable to model features of the translation process. We extend this model with a co-attention mechanism motivated by in the inference network. Compared to the vision domain, latent variable models for text face additional challenges due to the discrete nature of language, namely posterior collapse. We experiment with different approaches to mitigate this issue. We show that our conditional variational model improves upon both discriminative attention-based translation and the variational baseline presented in Finally, we present some exploration of the learned latent space to illustrate what the latent variable is capable of capturing. This is the first reported conditional variational model for text that meaningfully utilizes the latent variable without weakening the translation model.
@cite_6 introduces the basic setup for a conditional variational language model and applies it to the task of machine translation. It reports improvements over vanilla neural machine translation baselines on Chinese-English and English-German tasks.
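For concreteness, here is a sketch of the conditional ELBO that such models optimize, under the common assumption that both the approximate posterior q(z|x, y) and the prior p(z|x) are diagonal Gaussians with a closed-form KL divergence; the function and argument names are placeholders.

```python
# Sketch of a conditional ELBO with a closed-form Gaussian KL term.
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) )
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0,
        dim=-1,
    )

def conditional_elbo(log_p_y_given_xz, mu_q, logvar_q, mu_p, logvar_p):
    # log_p_y_given_xz: reconstruction log-likelihood summed over target tokens
    return log_p_y_given_xz - gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)
```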
{ "cite_N": [ "@cite_6" ], "mid": [ "2394571815" ], "abstract": [ "Models of neural machine translation are often from a discriminative family of encoderdecoders that learn a conditional distribution of a target sentence given a source sentence. In this paper, we propose a variational model to learn this conditional distribution for neural machine translation: a variational encoderdecoder model that can be trained end-to-end. Different from the vanilla encoder-decoder model that generates target translations from hidden representations of source sentences alone, the variational model introduces a continuous latent variable to explicitly model underlying semantics of source sentences and to guide the generation of target translations. In order to perform efficient posterior inference and large-scale training, we build a neural posterior approximator conditioned on both the source and the target sides, and equip it with a reparameterization technique to estimate the variational lower bound. Experiments on both Chinese-English and English- German translation tasks show that the proposed variational neural machine translation achieves significant improvements over the vanilla neural machine translation baselines." ] }
1812.04451
2904020365
In this work, we propose a novel framework named Coconditional Autoencoding Adversarial Networks (CocoAAN) for Chinese font learning, which jointly learns a generation network and two encoding networks of different feature domains using an adversarial process. The encoding networks map the glyph images into style and content features respectively via the pairwise substitution optimization strategy, and the generation network maps these two kinds of features to glyph samples. Together with a discriminative network conditioned on the extracted features, our framework succeeds in producing realistic-looking Chinese glyph images flexibly. Unlike previous models relying on the complex segmentation of Chinese components or strokes, our model can "parse" structures in an unsupervised way, through which the content feature representation of each character is captured. Experiments demonstrate our framework has a powerful generalization capacity to other unseen fonts and characters.
@cite_14 proposed a sequence-based method that uses Recurrent Neural Networks (RNNs) to exploit the temporal information of ordered handwritten strokes to generate skeletons of Chinese characters. Since the skeletons generated by this method contain only structural information and no style information, it cannot generalize to glyphs of different styles.
{ "cite_N": [ "@cite_14" ], "mid": [ "2474015039" ], "abstract": [ "Recent deep learning based approaches have achieved great success on handwriting recognition. Chinese characters are among the most widely adopted writing systems in the world. Previous research has mainly focused on recognizing handwritten Chinese characters. However, recognition is only one aspect for understanding a language, another challenging and interesting task is to teach a machine to automatically write (pictographic) Chinese characters. In this paper, we propose a framework by using the recurrent neural network (RNN) as both a discriminative model for recognizing Chinese characters and a generative model for drawing (generating) Chinese characters. To recognize Chinese characters, previous methods usually adopt the convolutional neural network (CNN) models which require transforming the online handwriting trajectory into image-like representations. Instead, our RNN based approach is an end-to-end system which directly deals with the sequential structure and does not require any domain-specific knowledge. With the RNN system (combining an LSTM and GRU), state-of-the-art performance can be achieved on the ICDAR-2013 competition database. Furthermore, under the RNN framework, a conditional generative model with character embedding is proposed for automatically drawing recognizable Chinese characters. The generated characters (in vector format) are human-readable and also can be recognized by the discriminative RNN model with high accuracy. Experimental results verify the effectiveness of using RNNs as both generative and discriminative models for the tasks of drawing and recognizing Chinese characters." ] }
1812.04451
2904020365
In this work, we propose a novel framework named Coconditional Autoencoding Adversarial Networks (CocoAAN) for Chinese font learning, which jointly learns a generation network and two encoding networks of different feature domains using an adversarial process. The encoding networks map the glyph images into style and content features respectively via the pairwise substitution optimization strategy, and the generation network maps these two kinds of features to glyph samples. Together with a discriminative network conditioned on the extracted features, our framework succeeds in producing realistic-looking Chinese glyph images flexibly. Unlike previous models relying on the complex segmentation of Chinese components or strokes, our model can "parse" structures in an unsupervised way, through which the content feature representation of each character is captured. Experiments demonstrate our framework has a powerful generalization capacity to other unseen fonts and characters.
zi2zi (https://github.com/kaonashi-tyc/zi2zi) and other related methods @cite_6 @cite_8 implement style transfer for Chinese characters based on pix2pix @cite_17 , which treats generation as a problem of mapping from the source domain to the target domain in a pairwise manner. In theory, since different glyphs of the same font share the same writing style, there should be a fixed mapping between the glyphs of any two uniformly designed fonts. However, this kind of method concentrates only on the mapping relationship within a specified font pair while ignoring the many other available font resources, which also leads to a lack of flexibility for learning new styles, as the networks must be retrained from scratch for every new font pair.
{ "cite_N": [ "@cite_17", "@cite_6", "@cite_8" ], "mid": [ "2552465644", "2784936144", "2739007166" ], "abstract": [ "We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.", "Handwriting of Chinese has long been an important skill in East Asia. However, automatic generation of handwritten Chinese characters poses a great challenge due to the large number of characters. Various machine learning techniques have been used to recognize Chinese characters, but few works have studied the handwritten Chinese character generation problem, especially with unpaired training data. In this work, we formulate the Chinese handwritten character generation as a problem that learns a mapping from an existing printed font to a personalized handwritten style. We further propose DenseNet CycleGAN to generate Chinese handwritten characters. Our method is applied not only to commonly used Chinese characters but also to calligraphy work with aesthetic values. Furthermore, we propose content accuracy and style discrepancy as the evaluation metrics to assess the quality of the handwritten characters generated. We then use our proposed metrics to evaluate the generated characters from CASIA dataset as well as our newly introduced Lanting calligraphy dataset.", "In this paper, we propose a new network architecture for Chinese typography transformation based on deep learning. The architecture consists of two sub-networks: (1)a fully convolutional network(FCN) aiming at transferring specified typography style to another in condition of preserving structure information; (2)an adversarial network aiming at generating more realistic strokes in some details. Unlike models proposed before 2012 relying on the complex segmentation of Chinese components or strokes, our model treats every Chinese character as an inseparable image, so pre-processing or post-preprocessing are abandoned. Besides, our model adopts end-to-end training without pre-trained used in other deep models. The experiments demonstrates that our model can synthesize realistic-looking target typography from any source typography both on printed style and handwriting style." ] }
1812.04451
2904020365
In this work, we propose a novel framework named Coconditional Autoencoding Adversarial Networks (CocoAAN) for Chinese font learning, which jointly learns a generation network and two encoding networks of different feature domains using an adversarial process. The encoding networks map the glyph images into style and content features respectively via the pairwise substitution optimization strategy, and the generation network maps these two kinds of features to glyph samples. Together with a discriminative network conditioned on the extracted features, our framework succeeds in producing realistic-looking Chinese glyph images flexibly. Unlike previous models relying on the complex segmentation of Chinese components or strokes, our model can "parse" structures in an unsupervised way, through which the content feature representation of each character is captured. Experiments demonstrate our framework has a powerful generalization capacity to other unseen fonts and characters.
@cite_7 invoked an intercross pairwise scheme to infer the common style feature, taking advantage of the implicit style co-sharing nature of different fonts. For the character content feature, however, they used a manual encoding method based on knowledge of how radicals assemble into each character, which can hardly be generalized to sophisticated structures, much like the stroke-based methods. Moreover, the generated samples are often blurry, which can be attributed to the drawback of its variational auto-encoder (VAE) mechanism, as it uses a pixel-wise loss as the reconstruction objective.
{ "cite_N": [ "@cite_7" ], "mid": [ "2785648555" ], "abstract": [ "Traditional methods in Chinese typography synthesis view characters as an assembly of radicals and strokes, but they rely on manual definition of the key points, which is still time-costing. Some recent work on computer vision proposes a brand new approach: to treat every Chinese character as an independent and inseparable image, so the pre-processing and post-processing of each character can be avoided. Then with a combination of a transfer network and a discriminating network, one typography can be well transferred to another. Despite the quite satisfying performance of the model, the training process requires to be supervised, which means in the training data each character in the source domain and the target domain needs to be perfectly paired. Sometimes the pairing is time-costing, and sometimes there is no perfect pairing, such as the pairing between traditional Chinese and simplified Chinese characters. In this paper, we proposed an unsupervised typography transfer method which doesn't need pairing." ] }
1812.04240
2903779395
Deep Convolution Neural Networks (CNN) have achieved significant performance on single image super-resolution (SR) recently. However, existing CNN-based methods use artificially synthetic low-resolution (LR) and high-resolution (HR) image pairs to train networks, which cannot handle real-world cases since the degradation from HR to LR is much more complex than manually designed. To solve this problem, we propose a real-world LR images guided bi-cycle network for single image super-resolution, in which the bidirectional structural consistency is exploited to train both the degradation and SR reconstruction networks in an unsupervised way. Specifically, we propose a degradation network to model the real-world degradation process from HR to LR via generative adversarial networks, and these generated realistic LR images paired with real-world HR images are exploited for training the SR reconstruction network, forming the first cycle. Then in the second reverse cycle, consistency of real-world LR images are exploited to further stabilize the training of SR reconstruction and degradation networks. Extensive experiments on both synthetic and real-world images demonstrate that the proposed algorithm performs favorably against state-of-the-art single image SR methods.
Early methods @cite_30 @cite_21 @cite_19 super-resolve images using interpolation-based techniques. However, it is difficult for them to reconstruct detailed textures in the super-resolved results. Dong et al. @cite_39 proposed a pioneering 3-layer CNN (SRCNN) for bicubically up-sampled image SR, which then brought about a series of CNN-based SISR methods with better effectiveness and higher efficiency. On one hand, more effective CNN architectures have been designed to improve SR performance, including very deep CNNs with residual learning @cite_9 , residual and dense blocks @cite_28 @cite_5 , recursive structures @cite_31 @cite_38 , and channel attention @cite_17 . On the other hand, separate research efforts have been devoted to speeding up computation by extracting deep features directly from the original LR image @cite_12 @cite_42 @cite_6 . Taking both effectiveness and efficiency into account, this speed-up strategy has subsequently been adopted in @cite_28 @cite_5 @cite_0 @cite_17 .
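As one concrete example of these architectural components, below is a sketch of a channel-attention block in the squeeze-and-excitation style used by channel-attention SR networks; the reduction factor is an illustrative choice rather than a value from any specific cited model.

```python
# Sketch of a channel-attention block: global pooling produces per-channel
# gates that rescale the feature maps.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze: (b, c, 1, 1)
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                   # excitation weights in (0, 1)
        )

    def forward(self, x):
        return x * self.gate(x)   # rescale each channel of the features
```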
{ "cite_N": [ "@cite_30", "@cite_38", "@cite_28", "@cite_9", "@cite_21", "@cite_42", "@cite_6", "@cite_39", "@cite_0", "@cite_19", "@cite_5", "@cite_31", "@cite_12", "@cite_17" ], "mid": [ "1910615622", "2747898905", "2523714292", "", "1967441049", "", "", "", "2964277374", "", "2735224642", "2949079773", "", "" ], "abstract": [ "We present a new method for digitally interpolating images to higher resolution. It consists of two phases: rendering and correction. The rendering phase is edge-directed. From the low resolution image data, we generate a high resolution edge map by first filtering with a rectangular center-on-surround-off filter and then performing piecewise linear interpolation between the zero crossings in the filter output. The rendering phase is based on bilinear interpolation modified to prevent interpolation across edges, as determined from the estimated high resolution edge map. During the correction phase, we modify the mesh values on which the rendering is based to account for the disparity between the true low resolution data, and that predicted by a sensor model operating on the high resolution output of the rendering phase. The overall process is repeated iteratively. We show experimental results which demonstrate the efficacy of our interpolation method.", "Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https: github.com tyshiwo DRRN_CVPR17.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. 
In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "", "This paper presents a novel edge orientation adaptive interpolation scheme for resolution enhancement of still images. In order to achieve ideal orientation adaptation, we propose to estimate the local covariance characteristics at low resolution but cleverly use them to direct the interpolation at high resolution based on the resolution invariant property of edge orientation. The orientation adaptive property guarantees the interpolation always go along the edge orientation but not across it. Our new interpolation scheme can generate images with dramatically higher visual quality than linear interpolation techniques while keeping the computational complexity still modest.", "", "", "", "Recent years have witnessed the unprecedented success of deep convolutional neural networks (CNNs) in single image super-resolution (SISR). However, existing CNN-based SISR methods mostly assume that a low-resolution (LR) image is bicubicly downsampled from a high-resolution (HR) image, thus inevitably giving rise to poor performance when the true degradation does not follow this assumption. Moreover, they lack scalability in learning a single model to nonblindly deal with multiple degradations. To address these issues, we propose a general framework with dimensionality stretching strategy that enables a single convolutional super-resolution network to take two key factors of the SISR degradation process, i.e., blur kernel and noise level, as input. Consequently, the super-resolver can handle multiple and even spatially variant degradations, which significantly improves the practicability. Extensive experimental results on synthetic and real LR images show that the proposed convolutional super-resolution network not only can produce favorable results on multiple degradations but also is computationally efficient, providing a highly effective and scalable solution to practical SISR applications.", "", "Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge.", "We propose an image super-resolution method (SR) using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). 
Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Albeit advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection. Our method outperforms previous methods by a large margin.", "", "" ] }
1812.04240
2903779395
Deep Convolution Neural Networks (CNN) have achieved significant performance on single image super-resolution (SR) recently. However, existing CNN-based methods use artificially synthetic low-resolution (LR) and high-resolution (HR) image pairs to train networks, which cannot handle real-world cases since the degradation from HR to LR is much more complex than manually designed. To solve this problem, we propose a real-world LR images guided bi-cycle network for single image super-resolution, in which the bidirectional structural consistency is exploited to train both the degradation and SR reconstruction networks in an unsupervised way. Specifically, we propose a degradation network to model the real-world degradation process from HR to LR via generative adversarial networks, and these generated realistic LR images paired with real-world HR images are exploited for training the SR reconstruction network, forming the first cycle. Then in the second reverse cycle, consistency of real-world LR images are exploited to further stabilize the training of SR reconstruction and degradation networks. Extensive experiments on both synthetic and real-world images demonstrate that the proposed algorithm performs favorably against state-of-the-art single image SR methods.
Recently, SRGAN @cite_28 and ESRGAN @cite_25 have introduced perceptual and adversarial losses into the reconstruction network. Spatial feature transform @cite_20 has been suggested to enhance texture details for photo-realistic SISR. Furthermore, CinCGAN @cite_4 resorts to unsupervised learning with unpaired data. These methods, however, are all tailored to a specific bicubic down-sampling and usually perform poorly on real-world LR images. Although SRMD @cite_0 can handle multiple down-samplers by taking degradation parameters as input, these parameters must be provided accurately, which limits its practical applications.
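To make the unpaired, cycle-based idea concrete, here is a hedged sketch of two L1 cycle-consistency terms that tie a degradation network and an SR reconstruction network together; it illustrates the general principle rather than the exact objective of CinCGAN or the bi-cycle network, and the function and argument names are placeholders (adversarial terms and loss weights are omitted).

```python
# Sketch of bidirectional cycle-consistency terms for unpaired real-world SR.
import torch.nn.functional as F

def bicycle_consistency_losses(hr_real, lr_real, G_sr, G_deg):
    # Cycle 1: real HR -> synthetic (realistic) LR -> reconstructed HR
    lr_fake = G_deg(hr_real)
    loss_hr_cycle = F.l1_loss(G_sr(lr_fake), hr_real)
    # Cycle 2: real LR -> super-resolved HR -> re-degraded LR
    sr_fake = G_sr(lr_real)
    loss_lr_cycle = F.l1_loss(G_deg(sr_fake), lr_real)
    return loss_hr_cycle, loss_lr_cycle
```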
{ "cite_N": [ "@cite_4", "@cite_28", "@cite_0", "@cite_25", "@cite_20" ], "mid": [ "2892111734", "2523714292", "2964277374", "2952773607", "2795824235" ], "abstract": [ "We consider the single image super-resolution problem in a more general case that the low- high-resolution pairs and the down-sampling process are unavailable. Different from traditional super-resolution formulation, the low-resolution input is further degraded by noises and blurring. This complicated setting makes supervised learning and accurate kernel estimation impossible. To solve this problem, we resort to unsupervised learning without paired data, inspired by the recent successful image-to-image translation applications. With generative adversarial networks (GAN) as the basic component, we propose a Cycle-in-Cycle network structure to tackle the problem within three steps. First, the noisy and blurry input is mapped to a noise-free low-resolution space. Then the intermediate image is up-sampled with a pre-trained deep model. Finally, we fine-tune the two modules in an end-to-end manner to get the high-resolution output. Experiments on NTIRE2018 datasets demonstrate that the proposed unsupervised method achieves comparable results as the state-of-the-art supervised models.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "Recent years have witnessed the unprecedented success of deep convolutional neural networks (CNNs) in single image super-resolution (SISR). However, existing CNN-based SISR methods mostly assume that a low-resolution (LR) image is bicubicly downsampled from a high-resolution (HR) image, thus inevitably giving rise to poor performance when the true degradation does not follow this assumption. Moreover, they lack scalability in learning a single model to nonblindly deal with multiple degradations. 
To address these issues, we propose a general framework with dimensionality stretching strategy that enables a single convolutional super-resolution network to take two key factors of the SISR degradation process, i.e., blur kernel and noise level, as input. Consequently, the super-resolver can handle multiple and even spatially variant degradations, which significantly improves the practicability. Extensive experimental results on synthetic and real LR images show that the proposed convolutional super-resolution network not only can produce favorable results on multiple degradations but also is computationally efficient, providing a highly effective and scalable solution to practical SISR applications.", "The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied with unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN - network architecture, adversarial loss and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR Challenge. The code is available at this https URL .", "Despite that convolutional neural networks (CNN) have recently demonstrated high-quality reconstruction for single-image super-resolution (SR), recovering natural and realistic texture remains a challenging problem. In this paper, we show that it is possible to recover textures faithful to semantic classes. In particular, we only need to modulate features of a few intermediate layers in a single network conditioned on semantic segmentation probability maps. This is made possible through a novel Spatial Feature Transform (SFT) layer that generates affine transformation parameters for spatial-wise feature modulation. SFT layers can be trained end-to-end together with the SR network using the same loss function. During testing, it accepts an input image of arbitrary size and generates a high-resolution image with just a single forward pass conditioned on the categorical priors. Our final results show that an SR network equipped with SFT can generate more realistic and visually pleasing textures in comparison to state-of-the-art SRGAN and EnhanceNet." ] }
1812.04240
2903779395
Deep Convolution Neural Networks (CNN) have achieved significant performance on single image super-resolution (SR) recently. However, existing CNN-based methods use artificially synthetic low-resolution (LR) and high-resolution (HR) image pairs to train networks, which cannot handle real-world cases since the degradation from HR to LR is much more complex than manually designed. To solve this problem, we propose a real-world LR images guided bi-cycle network for single image super-resolution, in which the bidirectional structural consistency is exploited to train both the degradation and SR reconstruction networks in an unsupervised way. Specifically, we propose a degradation network to model the real-world degradation process from HR to LR via generative adversarial networks, and these generated realistic LR images paired with real-world HR images are exploited for training the SR reconstruction network, forming the first cycle. Then in the second reverse cycle, consistency of real-world LR images are exploited to further stabilize the training of SR reconstruction and degradation networks. Extensive experiments on both synthetic and real-world images demonstrate that the proposed algorithm performs favorably against state-of-the-art single image SR methods.
Although diverse degradations arise in real SISR applications, blurring is one of the most important of them. Several works @cite_32 @cite_29 @cite_41 estimate blur kernels from LR images, considering both blurring and down-sampling in the degradation model. However, these methods rely on hand-crafted image priors and also struggle with diverse degradations. Recently, motivated by CycleGAN @cite_11 , several deep CNN-based methods have been suggested to learn blind SR from unpaired HR-LR images. Yuan et al. @cite_4 present a Cycle-in-Cycle network to learn SISR and degradation models jointly, but their degradation model is deterministic, which limits its ability to generate diverse, realistic LR images.
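For reference, these kernel-estimation methods assume the classical degradation model y = (x * k)↓s + n (blur, down-sample, add noise). Below is a minimal PyTorch sketch of synthesizing an LR image under this assumed model; the Gaussian-like kernel shape, scale factor, and noise level are illustrative choices only.

import torch
import torch.nn.functional as F

def degrade(hr, kernel, scale=4, noise_sigma=0.01):
    # hr: (N, C, H, W) HR image; kernel: (k, k) blur kernel (odd size assumed).
    c = hr.shape[1]
    k = kernel[None, None].repeat(c, 1, 1, 1)            # one kernel per channel
    blurred = F.conv2d(hr, k, padding=kernel.shape[-1] // 2, groups=c)
    lr = blurred[:, :, ::scale, ::scale]                 # direct down-sampling
    return lr + noise_sigma * torch.randn_like(lr)       # additive white noise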
{ "cite_N": [ "@cite_4", "@cite_41", "@cite_29", "@cite_32", "@cite_11" ], "mid": [ "2892111734", "2952773148", "", "2103824707", "2962793481" ], "abstract": [ "We consider the single image super-resolution problem in a more general case that the low- high-resolution pairs and the down-sampling process are unavailable. Different from traditional super-resolution formulation, the low-resolution input is further degraded by noises and blurring. This complicated setting makes supervised learning and accurate kernel estimation impossible. To solve this problem, we resort to unsupervised learning without paired data, inspired by the recent successful image-to-image translation applications. With generative adversarial networks (GAN) as the basic component, we propose a Cycle-in-Cycle network structure to tackle the problem within three steps. First, the noisy and blurry input is mapped to a noise-free low-resolution space. Then the intermediate image is up-sampled with a pre-trained deep model. Finally, we fine-tune the two modules in an end-to-end manner to get the high-resolution output. Experiments on NTIRE2018 datasets demonstrate that the proposed unsupervised method achieves comparable results as the state-of-the-art supervised models.", "This paper proposes a simple, accurate, and robust approach to single image nonparametric blind Super-Resolution (SR). This task is formulated as a functional to be minimized with respect to both an intermediate super-resolved image and a nonparametric blur-kernel. The proposed approach includes a convolution consistency constraint which uses a non-blind learning-based SR result to better guide the estimation process. Another key component is the unnatural bi-l0-l2-norm regularization imposed on the super-resolved, sharp image and the blur-kernel, which is shown to be quite beneficial for estimating the blur-kernel accurately. The numerical optimization is implemented by coupling the splitting augmented Lagrangian and the conjugate gradient (CG). Using the pre-estimated blur-kernel, we finally reconstruct the SR image by a very simple non-blind SR method that uses a natural image prior. The proposed approach is demonstrated to achieve better performance than the recent method by Michaeli and Irani [2] in both terms of the kernel estimation accuracy and image SR quality.", "", "In this paper, a novel method for learning based image super resolution (SR) is presented. The basic idea is to bridge the gap between a set of low resolution (LR) images and the corresponding high resolution (HR) image using both the SR reconstruction constraint and a patch based image synthesis constraint in a general probabilistic framework. We show that in this framework, the estimation of the LR image formation parameters is straightforward. The whole framework is implemented via an annealed Gibbs sampling method. Experiments on SR on both single image and image sequence input show that the proposed method provides an automatic and stable way to compute super-resolution and the achieved result is encouraging for both synthetic and real LR images.", "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. 
Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach." ] }
1812.04240
2903779395
Deep Convolution Neural Networks (CNN) have achieved significant performance on single image super-resolution (SR) recently. However, existing CNN-based methods use artificially synthetic low-resolution (LR) and high-resolution (HR) image pairs to train networks, which cannot handle real-world cases since the degradation from HR to LR is much more complex than manually designed. To solve this problem, we propose a real-world LR images guided bi-cycle network for single image super-resolution, in which the bidirectional structural consistency is exploited to train both the degradation and SR reconstruction networks in an unsupervised way. Specifically, we propose a degradation network to model the real-world degradation process from HR to LR via generative adversarial networks, and these generated realistic LR images paired with real-world HR images are exploited for training the SR reconstruction network, forming the first cycle. Then in the second reverse cycle, consistency of real-world LR images are exploited to further stabilize the training of SR reconstruction and degradation networks. Extensive experiments on both synthetic and real-world images demonstrate that the proposed algorithm performs favorably against state-of-the-art single image SR methods.
Closest to ours is the work of Bulat et al. @cite_22 , in which the authors learn a high-to-low GAN to degrade and down-sample HR images, and then employ the resulting LR-HR pairs to train a low-to-high GAN for blind SISR. Our method differs from @cite_22 in several important ways. First, our bi-cycle structure exploits both the structural consistency between LR and HR images and the relationship between reconstruction and degradation, which jointly stabilizes the training of the SR reconstruction and degradation networks. Second, since paired LR-HR images are unavailable in practice, our degradation model is trained in an unsupervised way, i.e., without using paired images. We introduce unpaired real-world LR images into the GAN model to generate realistic LR images, and also exploit them to jointly enhance the reconstruction and degradation models in a cycle.
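The sketch below illustrates, under assumed simplified notation (G_d: degradation network, G_r: SR reconstruction network), how the two cycles supply consistency losses; adversarial terms and loss weights are omitted for brevity, so this is an illustration of the idea rather than the full training objective.

import torch.nn.functional as F

def bicycle_consistency(G_d, G_r, hr_real, lr_real):
    # Cycle 1: degrade a real HR image, then super-resolve it back.
    lr_fake = G_d(hr_real)                     # realistic synthetic LR image
    loss_hr_cycle = F.l1_loss(G_r(lr_fake), hr_real)
    # Cycle 2: super-resolve a real-world LR image, then degrade it back.
    hr_fake = G_r(lr_real)
    loss_lr_cycle = F.l1_loss(G_d(hr_fake), lr_real)
    return loss_hr_cycle + loss_lr_cycle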
{ "cite_N": [ "@cite_22" ], "mid": [ "2883102461" ], "abstract": [ "This paper is on image and face super-resolution. The vast majority of prior work for this problem focus on how to increase the resolution of low-resolution images which are artificially generated by simple bilinear down-sampling (or in a few cases by blurring followed by down-sampling). We show that such methods fail to produce good results when applied to real-world low-resolution, low quality images. To circumvent this problem, we propose a two-stage process which firstly trains a High-to-Low Generative Adversarial Network (GAN) to learn how to degrade and downsample high-resolution images requiring, during training, only unpaired high and low-resolution images. Once this is achieved, the output of this network is used to train a Low-to-High GAN for image super-resolution using this time paired low- and high-resolution images. Our main result is that this network can be now used to effectively increase the quality of real-world low-resolution images. We have applied the proposed pipeline for the problem of face super-resolution where we report large improvement over baselines and prior work although the proposed method is potentially applicable to other object categories." ] }
1812.04418
2905467924
Identifying animals from a large group of possible individuals is very important for biodiversity monitoring and especially for collecting data on a small number of particularly interesting individuals, as these have to be identified first before this can be done. Identifying them can be a very time-consuming task. This is especially true, if the animals look very similar and have only a small number of distinctive features, like elephants do. In most cases the animals stay at one place only for a short period of time during which the animal needs to be identified for knowing whether it is important to collect new data on it. For this reason, a system supporting the researchers in identifying elephants to speed up this process would be of great benefit. In this paper, we present such a system for identifying elephants in the face of a large number of individuals with only few training images per individual. For that purpose, we combine object part localization, off-the-shelf CNN features, and support vector machine classification to provide field researches with proposals of possible individuals given new images of an elephant. The performance of our system is demonstrated on a dataset comprising a total of 2078 images of 276 individual elephants, where we achieve 56 top-1 test accuracy and 80 top-10 accuracy. To deal with occlusion, varying viewpoints, and different poses present in the dataset, we furthermore enable the analysts to provide the system with multiple images of the same elephant to be identified and aggregate confidence values generated by the classifier. With that, our system achieves a top-1 accuracy of 74 and a top-10 accuracy of 88 on the held-out test dataset.
In the context of humans, face identification is a very actively studied field, where breakthroughs have recently been achieved with deep learning systems trained end-to-end, e.g., FaceNet @cite_2 , VGG-Face @cite_10 , or DeepFace @cite_5 . However, such approaches usually require large numbers of annotated training images per class, which are often not available in wildlife monitoring scenarios.
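As a rough illustration of how such embedding-based systems are used for identification (matching a query against a gallery in the learned embedding space, as described for FaceNet), consider the sketch below; the embed network and the gallery structure are assumed placeholders.

import torch
import torch.nn.functional as F

def identify(embed, query_img, gallery_imgs, gallery_ids):
    q = F.normalize(embed(query_img[None]), dim=1)   # (1, D) query embedding
    g = F.normalize(embed(gallery_imgs), dim=1)      # (M, D) enrolled identities
    sims = (q @ g.t()).squeeze(0)                    # cosine similarity to each
    return gallery_ids[sims.argmax().item()]         # nearest identity wins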
{ "cite_N": [ "@cite_5", "@cite_10", "@cite_2" ], "mid": [ "2145287260", "2325939864", "2096733369" ], "abstract": [ "In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35 on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27 , closely approaching human-level performance.", "The goal of this paper is face recognition – from either a single photograph or from a set of faces tracked in a video. Recent progress in this area has been due to two factors: (i) end to end learning for the task using a convolutional neural network (CNN), and (ii) the availability of very large scale training datasets. We make two contributions: first, we show how a very large scale dataset (2.6M images, over 2.6K people) can be assembled by a combination of automation and human in the loop, and discuss the trade off between data purity and time; second, we traverse through the complexities of deep network training and face recognition to present methods and procedures to achieve comparable state of the art results on the standard LFW and YTF face benchmarks.", "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors." ] }
1812.04418
2905467924
Identifying animals from a large group of possible individuals is very important for biodiversity monitoring and especially for collecting data on a small number of particularly interesting individuals, as these have to be identified first before this can be done. Identifying them can be a very time-consuming task. This is especially true, if the animals look very similar and have only a small number of distinctive features, like elephants do. In most cases the animals stay at one place only for a short period of time during which the animal needs to be identified for knowing whether it is important to collect new data on it. For this reason, a system supporting the researchers in identifying elephants to speed up this process would be of great benefit. In this paper, we present such a system for identifying elephants in the face of a large number of individuals with only few training images per individual. For that purpose, we combine object part localization, off-the-shelf CNN features, and support vector machine classification to provide field researches with proposals of possible individuals given new images of an elephant. The performance of our system is demonstrated on a dataset comprising a total of 2078 images of 276 individual elephants, where we achieve 56 top-1 test accuracy and 80 top-10 accuracy. To deal with occlusion, varying viewpoints, and different poses present in the dataset, we furthermore enable the analysts to provide the system with multiple images of the same elephant to be identified and aggregate confidence values generated by the classifier. With that, our system achieves a top-1 accuracy of 74 and a top-10 accuracy of 88 on the held-out test dataset.
Brust et al. have recently taken this approach into the deep learning age for gorilla identification, using pre-trained convolutional neural networks (CNNs) for face detection and feature extraction @cite_11 . Their approach is, in principle, very similar to ours. However, we not only demonstrate that it is also suitable for identifying other species such as elephants, but also show that performance can be improved further by extracting features from earlier layers of the CNN rather than its last layer and by applying additional pooling. Moreover, we find that simple data augmentation such as flipping is useful for training the SVM classifier, and we show how to aggregate predictions obtained from multiple images of the same unknown individual to deal with occlusion and variations in pose and perspective.
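A condensed sketch of the resulting pipeline (intermediate-layer CNN features with spatial pooling, flip augmentation, an SVM classifier, and confidence aggregation over several images of the same animal) is given below; the feature-extraction helper forward_to, the layer name, and the SVM settings are illustrative assumptions rather than the exact configuration.

import numpy as np
import torch
from sklearn.svm import SVC

def pooled_features(backbone, imgs, layer="conv4"):
    # Assumed helper: run the pretrained CNN up to an intermediate layer,
    # then average-pool the feature map spatially.
    fmap = backbone.forward_to(imgs, layer)          # (N, C, H, W)
    return fmap.mean(dim=(2, 3)).cpu().numpy()       # (N, C)

def train_identifier(backbone, train_imgs, train_ids):
    # Horizontal flipping roughly doubles the few images per individual.
    imgs = torch.cat([train_imgs, torch.flip(train_imgs, dims=[3])])
    ids = np.concatenate([train_ids, train_ids])
    clf = SVC(kernel="linear", probability=True)
    clf.fit(pooled_features(backbone, imgs), ids)
    return clf

def identify(backbone, clf, query_imgs):
    # Average per-image class confidences over all images of the same animal.
    probs = clf.predict_proba(pooled_features(backbone, query_imgs))
    return clf.classes_[probs.mean(axis=0).argmax()]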
{ "cite_N": [ "@cite_11" ], "mid": [ "2768765894" ], "abstract": [ "In this paper we report on the context and evaluation of a system for an automatic interpretation of sightings of individual western lowland gorillas (Gorilla gorilla gorilla) as captured in facial field photography in the wild. This effort aligns with a growing need for effective and integrated monitoring approaches for assessing the status of biodiversity at high spatio-temporal scales. Manual field photography and the utilisation of autonomous camera traps have already transformed the way ecological surveys are conducted. In principle, many environments can now be monitored continuously, and with a higher spatio-temporal resolution than ever before. Yet, the manual effort required to process photographic data to derive relevant information delimits any large scale application of this methodology. The described system applies existing computer vision techniques including deep convolutional neural networks to cover the tasks of detection and localisation, as well as individual identification of gorillas in a practically relevant setup. We evaluate the approach on a relatively large and challenging data corpus of 12,765 field images of 147 individual gorillas with image-level labels (i.e. missing bounding boxes) photographed at Mbeli Bai at the Nouabal-Ndoki National Park, Republic of Congo. Results indicate a facial detection rate of 90.8 AP and an individual identification accuracy for ranking within the Top 5 set of 80.3 . We conclude that, whilst keeping the human in the loop is critical, this result is practically relevant as it exemplifies model transferability and has the potential to assist manual identification efforts. We argue further that there is significant need towards integrating computer vision deeper into ecological sampling methodologies and field practice to move the discipline forward and open up new research horizons." ] }
1812.04172
2904072229
Despite huge success in the image domain, modern detection models such as Faster R-CNN have not been used nearly as much for video analysis. This is arguably due to the fact that detection models are designed to operate on single frames and as a result do not have a mechanism for learning motion representations directly from video. We propose a learning procedure that allows detection models such as Faster R-CNN to learn motion features directly from the RGB video data while being optimized with respect to a pose estimation task. Given a pair of video frames---Frame A and Frame B---we force our model to predict human pose in Frame A using the features from Frame B. We do so by leveraging deformable convolutions across space and time. Our network learns to spatially sample features from Frame B in order to maximize pose detection accuracy in Frame A. This naturally encourages our network to learn motion offsets encoding the spatial correspondences between the two frames. We refer to these motion offsets as DiMoFs (Discriminative Motion Features). In our experiments we show that our training scheme helps learn effective motion cues, which can be used to estimate and localize salient human motion. Furthermore, we demonstrate that as a byproduct, our model also learns features that lead to improved pose detection in still-images, and better keypoint tracking. Finally, we show how to leverage our learned model for the tasks of spatiotemporal action localization and fine-grained action recognition.
Modern object detectors @cite_56 @cite_33 @cite_15 @cite_7 @cite_52 @cite_16 @cite_4 @cite_41 @cite_57 @cite_38 @cite_1 are built on deep CNNs @cite_8 @cite_44 @cite_3 . One of the earliest such systems was R-CNN @cite_16 , which operated in a two-stage pipeline, first extracting object proposals and then classifying each of them with a CNN. To reduce the computational cost, the RoI pooling operation was introduced in @cite_56 @cite_52 . Faster R-CNN @cite_7 then replaced external region proposal methods with a region proposal network, unifying proposal generation and detection in a single network. Several methods @cite_38 @cite_1 extended Faster R-CNN into systems that run in real time with little loss in accuracy. The recent Mask R-CNN @cite_33 introduced an extra branch that predicts a mask for each region of interest. Finally, Deformable CNNs @cite_40 leveraged deformable convolution to model object deformations more robustly. While these prior detection methods work well on images, they are not designed to exploit motion cues in video -- a shortcoming we aim to address.
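Because deformable convolution is central to the approach developed later in this work, a minimal usage sketch is included here. It relies on torchvision.ops.DeformConv2d (available in recent torchvision releases); the 3x3 kernel size and the small offset-prediction convolution are assumptions for illustration, not the authors' exact design.

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # Predict 2 offsets (dx, dy) for each of the k*k kernel positions.
        self.offset_pred = nn.Conv2d(in_ch, 2 * k * k, kernel_size=3, padding=1)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x):
        offsets = self.offset_pred(x)      # (N, 2*k*k, H, W)
        return self.deform(x, offsets)     # sample features at deformed locations

feats = torch.randn(1, 64, 32, 32)
out = DeformBlock(64, 128)(feats)          # -> (1, 128, 32, 32)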
{ "cite_N": [ "@cite_38", "@cite_4", "@cite_33", "@cite_7", "@cite_8", "@cite_41", "@cite_1", "@cite_52", "@cite_3", "@cite_56", "@cite_57", "@cite_44", "@cite_40", "@cite_15", "@cite_16" ], "mid": [ "", "2952771913", "", "2953106684", "", "2193145675", "2570343428", "", "2949650786", "2179352600", "2950800384", "1686810756", "2950477723", "2743473392", "2102605133" ], "abstract": [ "", "In this paper we study the problem of object detection for RGB-D images using semantically rich image and depth features. We propose a new geocentric embedding for depth images that encodes height above ground and angle with gravity for each pixel in addition to the horizontal disparity. We demonstrate that this geocentric embedding works better than using raw depth images for learning feature representations with convolutional neural networks. Our final object detection system achieves an average precision of 37.3 , which is a 56 relative improvement over existing methods. We then focus on the task of instance segmentation where we label pixels belonging to object instances found by our detector. For this task, we propose a decision forest approach that classifies pixels in the detection window as foreground or background using a family of unary and binary tests that query shape and geocentric pose features. Finally, we use the output from our object detectors in an existing superpixel classification framework for semantic scene segmentation and achieve a 24 relative improvement over current state-of-the-art for the object categories that we study. We believe advances such as those represented in this paper will facilitate the use of perception in fields like robotics.", "", "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "", "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. 
Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For (300 300 ) input, SSD achieves 74.3 mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for (512 512 ) input, SSD achieves 76.9 mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Code is available at https: github.com weiliu89 caffe tree ssd.", "We introduce YOLO9000, a state-of-the-art, real-time object detection system that can detect over 9000 object categories. First we propose various improvements to the YOLO detection method, both novel and drawn from prior work. The improved model, YOLOv2, is state-of-the-art on standard detection tasks like PASCAL VOC and COCO. Using a novel, multi-scale training method the same YOLOv2 model can run at varying sizes, offering an easy tradeoff between speed and accuracy. At 67 FPS, YOLOv2 gets 76.8 mAP on VOC 2007. At 40 FPS, YOLOv2 gets 78.6 mAP, outperforming state-of-the-art methods like Faster RCNN with ResNet and SSD while still running significantly faster. Finally we propose a method to jointly train on object detection and classification. Using this method we train YOLO9000 simultaneously on the COCO detection dataset and the ImageNet classification dataset. Our joint training allows YOLO9000 to predict detections for object classes that dont have labelled detection data. We validate our approach on the ImageNet detection task. YOLO9000 gets 19.7 mAP on the ImageNet detection validation set despite only having detection data for 44 of the 200 classes. On the 156 classes not in COCO, YOLO9000 gets 16.0 mAP. YOLO9000 predicts detections for more than 9000 different object categories, all in real-time.", "", "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. 
Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g. 224×224) input image. This requirement is “artificial” and may hurt the recognition accuracy for the images or sub-images of an arbitrary size scale. In this work, we equip the networks with a more principled pooling strategy, “spatial pyramid pooling”, to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size scale. By removing the fixed-size limitation, we can improve all CNN-based image classification methods in general. Our SPP-net achieves state-of-the-art accuracy on the datasets of ImageNet 2012, Pascal VOC 2007, and Caltech101.", "We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast Faster R-CNN that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image. To achieve this goal, we propose position-sensitive score maps to address a dilemma between translation-invariance in image classification and translation-variance in object detection. Our method can thus naturally adopt fully convolutional image classifier backbones, such as the latest Residual Networks (ResNets), for object detection. We show competitive results on the PASCAL VOC datasets (e.g., 83.6 mAP on the 2007 set) with the 101-layer ResNet. Meanwhile, our result is achieved at a test-time speed of 170ms per image, 2.5-20x faster than the Faster R-CNN counterpart. Code is made publicly available at: this https URL", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in its building modules. In this work, we introduce two new modules to enhance the transformation modeling capacity of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. 
Extensive experiments validate the effectiveness of our approach on sophisticated vision tasks of object detection and semantic segmentation. The code would be released.", "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: this https URL", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn." ] }
1812.04172
2904072229
Despite huge success in the image domain, modern detection models such as Faster R-CNN have not been used nearly as much for video analysis. This is arguably due to the fact that detection models are designed to operate on single frames and as a result do not have a mechanism for learning motion representations directly from video. We propose a learning procedure that allows detection models such as Faster R-CNN to learn motion features directly from the RGB video data while being optimized with respect to a pose estimation task. Given a pair of video frames---Frame A and Frame B---we force our model to predict human pose in Frame A using the features from Frame B. We do so by leveraging deformable convolutions across space and time. Our network learns to spatially sample features from Frame B in order to maximize pose detection accuracy in Frame A. This naturally encourages our network to learn motion offsets encoding the spatial correspondences between the two frames. We refer to these motion offsets as DiMoFs (Discriminative Motion Features). In our experiments we show that our training scheme helps learn effective motion cues, which can be used to estimate and localize salient human motion. Furthermore, we demonstrate that as a byproduct, our model also learns features that lead to improved pose detection in still-images, and better keypoint tracking. Finally, we show how to leverage our learned model for the tasks of spatiotemporal action localization and fine-grained action recognition.
Several recent methods proposed architectures capable of aligning features temporally for improved object detection in video @cite_30 @cite_50 @cite_6 . The method in @cite_6 proposes a spatial-temporal memory mechanism, whereas @cite_50 leverages spatiotemporal sampling for feature alignment. Furthermore, the work in @cite_30 uses an optical flow CNN to align the features across time.
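In simplified form, the flow-based alignment of @cite_30 warps the features of a neighbouring frame to the current frame by bilinear sampling along the flow field; the sketch below, under assumed tensor shapes, shows this warping step only.

import torch
import torch.nn.functional as F

def warp_features(feat_b, flow_b_to_a):
    # feat_b: (N, C, H, W) features of frame B; flow: (N, 2, H, W), in pixels.
    n, _, h, w = feat_b.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(feat_b.device)   # (2, H, W)
    coords = base[None] + flow_b_to_a               # where to sample in frame B
    # grid_sample expects coordinates in [-1, 1], shaped (N, H, W, 2).
    gx = 2 * coords[:, 0] / (w - 1) - 1
    gy = 2 * coords[:, 1] / (h - 1) - 1
    grid = torch.stack((gx, gy), dim=-1)
    return F.grid_sample(feat_b, grid, mode="bilinear", align_corners=True)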
{ "cite_N": [ "@cite_30", "@cite_6", "@cite_50" ], "mid": [ "2604445072", "2951641129", "2789755258" ], "abstract": [ "Extending state-of-the-art object detectors from image to video is challenging. The accuracy of detection suffers from degenerated object appearances in videos, e.g., motion blur, video defocus, rare poses, etc. Existing work attempts to exploit temporal information on box level, but such methods are not trained end-to-end. We present flow-guided feature aggregation, an accurate and end-to-end learning framework for video object detection. It leverages temporal coherence on feature level instead. It improves the per-frame features by aggregation of nearby features along the motion paths, and thus improves the video recognition accuracy. Our method significantly improves upon strong single-frame baselines in ImageNet VID, especially for more challenging fast moving objects. Our framework is principled, and on par with the best engineered systems winning the ImageNet VID challenges 2016, without additional bells-and-whistles. The proposed method, together with Deep Feature Flow, powered the winning entry of ImageNet VID challenges 2017. The code is available at this https URL.", "We introduce Spatial-Temporal Memory Networks for video object detection. At its core, a novel Spatial-Temporal Memory module (STMM) serves as the recurrent computation unit to model long-term temporal appearance and motion dynamics. The STMM's design enables full integration of pretrained backbone CNN weights, which we find to be critical for accurate detection. Furthermore, in order to tackle object motion in videos, we propose a novel MatchTrans module to align the spatial-temporal memory from frame to frame. Our method produces state-of-the-art results on the benchmark ImageNet VID dataset, and our ablative studies clearly demonstrate the contribution of our different design choices. We release our code and models at this http URL.", "We propose a Spatiotemporal Sampling Network (STSN) that uses deformable convolutions across time for object detection in videos. Our STSN performs object detection in a video frame by learning to spatially sample features from the adjacent frames. This naturally renders the approach robust to occlusion or motion blur in individual frames. Our framework does not require additional supervision, as it optimizes sampling locations directly with respect to object detection performance. Our STSN outperforms the state-of-the-art on the ImageNet VID dataset and compared to prior video object detection methods it uses a simpler design, and does not require optical flow data for training. We also show that after training STSN on videos, we can adapt it for object detection in images, by adding and training a single deformable convolutional layer on still-image data. This leads to improvements in accuracy compared to traditional object detection in images." ] }
1812.04172
2904072229
Despite huge success in the image domain, modern detection models such as Faster R-CNN have not been used nearly as much for video analysis. This is arguably due to the fact that detection models are designed to operate on single frames and as a result do not have a mechanism for learning motion representations directly from video. We propose a learning procedure that allows detection models such as Faster R-CNN to learn motion features directly from the RGB video data while being optimized with respect to a pose estimation task. Given a pair of video frames---Frame A and Frame B---we force our model to predict human pose in Frame A using the features from Frame B. We do so by leveraging deformable convolutions across space and time. Our network learns to spatially sample features from Frame B in order to maximize pose detection accuracy in Frame A. This naturally encourages our network to learn motion offsets encoding the spatial correspondences between the two frames. We refer to these motion offsets as DiMoFs (Discriminative Motion Features). In our experiments we show that our training scheme helps learn effective motion cues, which can be used to estimate and localize salient human motion. Furthermore, we demonstrate that as a byproduct, our model also learns features that lead to improved pose detection in still-images, and better keypoint tracking. Finally, we show how to leverage our learned model for the tasks of spatiotemporal action localization and fine-grained action recognition.
While the mechanisms in @cite_50 @cite_6 are useful for improving detection, it is not clear how to use them for motion cue extraction, which is our primary objective. Furthermore, models like @cite_30 are redundant in the sense that they compute flow for every single pixel, which is rarely necessary for higher-level video understanding tasks. Using an optical flow CNN also adds @math extra parameters to the model, which is costly.
{ "cite_N": [ "@cite_30", "@cite_6", "@cite_50" ], "mid": [ "2604445072", "2951641129", "2789755258" ], "abstract": [ "Extending state-of-the-art object detectors from image to video is challenging. The accuracy of detection suffers from degenerated object appearances in videos, e.g., motion blur, video defocus, rare poses, etc. Existing work attempts to exploit temporal information on box level, but such methods are not trained end-to-end. We present flow-guided feature aggregation, an accurate and end-to-end learning framework for video object detection. It leverages temporal coherence on feature level instead. It improves the per-frame features by aggregation of nearby features along the motion paths, and thus improves the video recognition accuracy. Our method significantly improves upon strong single-frame baselines in ImageNet VID, especially for more challenging fast moving objects. Our framework is principled, and on par with the best engineered systems winning the ImageNet VID challenges 2016, without additional bells-and-whistles. The proposed method, together with Deep Feature Flow, powered the winning entry of ImageNet VID challenges 2017. The code is available at this https URL.", "We introduce Spatial-Temporal Memory Networks for video object detection. At its core, a novel Spatial-Temporal Memory module (STMM) serves as the recurrent computation unit to model long-term temporal appearance and motion dynamics. The STMM's design enables full integration of pretrained backbone CNN weights, which we find to be critical for accurate detection. Furthermore, in order to tackle object motion in videos, we propose a novel MatchTrans module to align the spatial-temporal memory from frame to frame. Our method produces state-of-the-art results on the benchmark ImageNet VID dataset, and our ablative studies clearly demonstrate the contribution of our different design choices. We release our code and models at this http URL.", "We propose a Spatiotemporal Sampling Network (STSN) that uses deformable convolutions across time for object detection in videos. Our STSN performs object detection in a video frame by learning to spatially sample features from the adjacent frames. This naturally renders the approach robust to occlusion or motion blur in individual frames. Our framework does not require additional supervision, as it optimizes sampling locations directly with respect to object detection performance. Our STSN outperforms the state-of-the-art on the ImageNet VID dataset and compared to prior video object detection methods it uses a simpler design, and does not require optical flow data for training. We also show that after training STSN on videos, we can adapt it for object detection in images, by adding and training a single deformable convolutional layer on still-image data. This leads to improvements in accuracy compared to traditional object detection in images." ] }
1812.04172
2904072229
Despite huge success in the image domain, modern detection models such as Faster R-CNN have not been used nearly as much for video analysis. This is arguably due to the fact that detection models are designed to operate on single frames and as a result do not have a mechanism for learning motion representations directly from video. We propose a learning procedure that allows detection models such as Faster R-CNN to learn motion features directly from the RGB video data while being optimized with respect to a pose estimation task. Given a pair of video frames---Frame A and Frame B---we force our model to predict human pose in Frame A using the features from Frame B. We do so by leveraging deformable convolutions across space and time. Our network learns to spatially sample features from Frame B in order to maximize pose detection accuracy in Frame A. This naturally encourages our network to learn motion offsets encoding the spatial correspondences between the two frames. We refer to these motion offsets as DiMoFs (Discriminative Motion Features). In our experiments we show that our training scheme helps learn effective motion cues, which can be used to estimate and localize salient human motion. Furthermore, we demonstrate that as a byproduct, our model also learns features that lead to improved pose detection in still-images, and better keypoint tracking. Finally, we show how to leverage our learned model for the tasks of spatiotemporal action localization and fine-grained action recognition.
Recently, two-stream CNN architectures have become a popular choice for incorporating motion cues into modern CNNs @cite_5 @cite_2 @cite_26 @cite_37 @cite_55 @cite_29 @cite_25 @cite_0 . In these models, one stream learns appearance features from RGB data, whereas the other learns a motion representation from manually extracted optical flow inputs. The work in @cite_26 @cite_9 leverages two-stream architectures to learn more effective spatiotemporal representations. Recent methods @cite_48 @cite_24 @cite_25 have explored different backbone networks for action recognition tasks, and various techniques have investigated how to fuse the information from the two streams @cite_29 @cite_0 @cite_55 . However, two-stream CNNs are computationally costly and memory-intensive. Learning discriminative motion cues directly from RGB video data, instead of relying on manually extracted optical flow inputs, could alleviate this issue.
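A minimal sketch of two-stream late fusion (averaging class posteriors, one of several fusion strategies examined in @cite_29 ) is shown below; both stream networks are assumed placeholders.

import torch.nn.functional as F

def two_stream_predict(rgb_net, flow_net, rgb_frames, flow_stack):
    # Appearance stream sees RGB frames; motion stream sees a stack of
    # precomputed optical flow fields for the same clip.
    p_rgb = F.softmax(rgb_net(rgb_frames), dim=1)
    p_flow = F.softmax(flow_net(flow_stack), dim=1)
    return (p_rgb + p_flow) / 2            # late fusion of class posteriors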
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_48", "@cite_55", "@cite_29", "@cite_9", "@cite_0", "@cite_24", "@cite_2", "@cite_5", "@cite_25" ], "mid": [ "2559833261", "2462996230", "", "2752386593", "2342662179", "", "2746726611", "", "2952186347", "2519080876", "2952005526" ], "abstract": [ "We introduce the concept of dynamic image , a novel compact representation of videos useful for video analysis, particularly in combination with convolutional neural networks (CNNs). A dynamic image encodes temporal data such as RGB or optical flow videos by using the concept of ‘rank pooling’. The idea is to learn a ranking machine that captures the temporal evolution of the data and to use the parameters of the latter as a representation. We call the resulting representation dynamic image because it summarizes the video dynamics in addition to appearance. This powerful idea allows to convert any video to an image so that existing CNN models pre-trained with still images can be immediately extended to videos. We also present an efficient approximate rank pooling operator that runs two orders of magnitude faster than the standard ones with any loss in ranking performance and can be formulated as a CNN layer. To demonstrate the power of the representation, we introduce a novel four stream CNN architecture which can learn from RGB and optical flow frames as well as from their dynamic image representations. We show that the proposed network achieves state-of-the-art performance, 95.5 and 72.5 percent accuracy, in the UCF101 and HMDB51, respectively.", "We introduce the concept of dynamic image, a novel compact representation of videos useful for video analysis especially when convolutional neural networks (CNNs) are used. The dynamic image is based on the rank pooling concept and is obtained through the parameters of a ranking machine that encodes the temporal evolution of the frames of the video. Dynamic images are obtained by directly applying rank pooling on the raw image pixels of a video producing a single RGB image per video. This idea is simple but powerful as it enables the use of existing CNN models directly on video data with fine-tuning. We present an efficient and effective approximate rank pooling operator, speeding it up orders of magnitude compared to rank pooling. Our new approximate rank pooling CNN layer allows us to generalize dynamic images to dynamic feature maps and we demonstrate the power of our new representations on standard benchmarks in action recognition achieving state-of-the-art performance.", "", "We introduce a simple yet surprisingly powerful model to incorporate attention in action recognition and human object interaction tasks. Our proposed attention module can be trained with or without extra supervision, and gives a sizable boost in accuracy while keeping the network size and computational cost nearly the same. It leads to significant improvements over state of the art base architecture on three standard action recognition benchmarks across still images and videos, and establishes new state of the art on MPII dataset with 12.5 relative improvement. We also perform an extensive analysis of our attention module both empirically and analytically. In terms of the latter, we introduce a novel derivation of bottom-up and top-down attention as low-rank approximations of bilinear pooling methods (typically used for fine-grained classification). 
From this perspective, our attention formulation suggests a novel characterization of action recognition as a fine-grained recognition problem.", "Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters, (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy, finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.", "", "This paper presents a general ConvNet architecture for video action recognition based on multiplicative interactions of spacetime features. Our model combines the appearance and motion pathways of a two-stream architecture by motion gating and is trained end-to-end. We theoretically motivate multiplicative gating functions for residual networks and empirically study their effect on classification accuracy. To capture long-term dependencies we inject identity mapping kernels for learning temporal relationships. Our architecture is fully convolutional in spacetime and able to evaluate a video in a single forward pass. Empirical investigation reveals that our model produces state-of-the-art results on two standard action recognition datasets.", "", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "We propose a multi-region two-stream R-CNN model for action detection in realistic videos. 
We start from frame-level action detection based on faster R-CNN [1], and make three contributions: (1) we show that a motion region proposal network generates high-quality proposals , which are complementary to those of an appearance region proposal network; (2) we show that stacking optical flow over several frames significantly improves frame-level action detection; and (3) we embed a multi-region scheme in the faster R-CNN model, which adds complementary information on body parts. We then link frame-level detections with the Viterbi algorithm, and temporally localize an action with the maximum subarray method. Experimental results on the UCF-Sports, J-HMDB and UCF101 action detection datasets show that our approach outperforms the state of the art with a significant margin in both frame-mAP and video-mAP.", "Two-stream Convolutional Networks (ConvNets) have shown strong performance for human action recognition in videos. Recently, Residual Networks (ResNets) have arisen as a new technique to train extremely deep architectures. In this paper, we introduce spatiotemporal ResNets as a combination of these two approaches. Our novel architecture generalizes ResNets for the spatiotemporal domain by introducing residual connections in two ways. First, we inject residual connections between the appearance and motion pathways of a two-stream architecture to allow spatiotemporal interaction between the two streams. Second, we transform pretrained image ConvNets into spatiotemporal networks by equipping these with learnable convolutional filters that are initialized as temporal residual connections and operate on adjacent feature maps in time. This approach slowly increases the spatiotemporal receptive field as the depth of the model increases and naturally integrates image ConvNet design principles. The whole model is trained end-to-end to allow hierarchical learning of complex spatiotemporal features. We evaluate our novel spatiotemporal ResNet using two widely used action recognition benchmarks where it exceeds the previous state-of-the-art." ] }
1812.04172
2904072229
Despite huge success in the image domain, modern detection models such as Faster R-CNN have not been used nearly as much for video analysis. This is arguably due to the fact that detection models are designed to operate on single frames and as a result do not have a mechanism for learning motion representations directly from video. We propose a learning procedure that allows detection models such as Faster R-CNN to learn motion features directly from the RGB video data while being optimized with respect to a pose estimation task. Given a pair of video frames---Frame A and Frame B---we force our model to predict human pose in Frame A using the features from Frame B. We do so by leveraging deformable convolutions across space and time. Our network learns to spatially sample features from Frame B in order to maximize pose detection accuracy in Frame A. This naturally encourages our network to learn motion offsets encoding the spatial correspondences between the two frames. We refer to these motion offsets as DiMoFs (Discriminative Motion Features). In our experiments we show that our training scheme helps learn effective motion cues, which can be used to estimate and localize salient human motion. Furthermore, we demonstrate that as a byproduct, our model also learns features that lead to improved pose detection in still-images, and better keypoint tracking. Finally, we show how to leverage our learned model for the tasks of spatiotemporal action localization and fine-grained action recognition.
Currently, the most common approach for learning features from raw RGB videos is via 3D convolutional networks @cite_21 . Whereas the method in @cite_21 proposes a 3D network architecture for end-to-end feature learning, the more recent I3D method @cite_54 inflates all 2D filters to 3D, which allows reusing the features learned in the image domain. Additionally, there have been many recent attempts to make 3D CNNs more effective by replacing full 3D convolutions with separable 2D spatial and 1D temporal convolutions @cite_53 @cite_17 @cite_43 @cite_11 .
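To make the two ideas above concrete, here is a minimal PyTorch sketch (not taken from any of the cited papers) of I3D-style inflation of a pretrained 2D kernel into 3D, and of a "(2+1)D"-style block that replaces a full 3D convolution with a 2D spatial convolution followed by a 1D temporal one. All layer sizes, channel counts and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

def inflate_2d_kernel(w2d: torch.Tensor, t: int) -> torch.Tensor:
    """I3D-style inflation: repeat a 2D kernel (out, in, kH, kW) t times along a new
    temporal axis and rescale, so a temporally constant input gives the same response."""
    return w2d.unsqueeze(2).repeat(1, 1, t, 1, 1) / t

class SeparableST(nn.Module):
    """A '(2+1)D'-style block: 2D spatial conv followed by 1D temporal conv."""
    def __init__(self, c_in, c_out, k=3, t=3):
        super().__init__()
        self.spatial = nn.Conv3d(c_in, c_out, kernel_size=(1, k, k),
                                 padding=(0, k // 2, k // 2))
        self.temporal = nn.Conv3d(c_out, c_out, kernel_size=(t, 1, 1),
                                  padding=(t // 2, 0, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, time, H, W)
        return self.relu(self.temporal(self.relu(self.spatial(x))))

# Usage: inflate a 2D 3x3 kernel to 3x3x3 and run the separable block on a dummy clip.
w2d = torch.randn(64, 3, 3, 3)             # (out, in, kH, kW)
w3d = inflate_2d_kernel(w2d, t=3)          # (64, 3, 3, 3, 3)
video = torch.randn(2, 3, 8, 112, 112)     # (batch, C, T, H, W)
out = SeparableST(3, 64)(video)
print(w3d.shape, out.shape)
```

Note that published (2+1)D designs typically choose an intermediate channel count so the factorized block matches the parameter budget of the full 3D convolution; the sketch omits that detail for brevity.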
{ "cite_N": [ "@cite_11", "@cite_54", "@cite_21", "@cite_53", "@cite_43", "@cite_17" ], "mid": [ "2761659801", "2619082050", "1522734439", "", "2951583185", "2772114784" ], "abstract": [ "Convolutional Neural Networks (CNN) have been regarded as a powerful class of models for image recognition problems. Nevertheless, it is not trivial when utilizing a CNN for learning spatio-temporal video representation. A few studies have shown that performing 3D convolutions is a rewarding approach to capture both spatial and temporal dimensions in videos. However, the development of a very deep 3D CNN from scratch results in expensive computational cost and memory demand. A valid question is why not recycle off-the-shelf 2D networks for a 3D CNN. In this paper, we devise multiple variants of bottleneck building blocks in a residual learning framework by simulating @math convolutions with @math convolutional filters on spatial domain (equivalent to 2D CNN) plus @math convolutions to construct temporal connections on adjacent feature maps in time. Furthermore, we propose a new architecture, named Pseudo-3D Residual Net (P3D ResNet), that exploits all the variants of blocks but composes each in different placement of ResNet, following the philosophy that enhancing structural diversity with going deep could improve the power of neural networks. Our P3D ResNet achieves clear improvements on Sports-1M video classification dataset against 3D CNN and frame-based 2D CNN by 5.3 and 1.8 , respectively. We further examine the generalization performance of video representation produced by our pre-trained P3D ResNet on five different benchmarks and three different tasks, demonstrating superior performances over several state-of-the-art techniques.", "The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9 on HMDB-51 and 98.0 on UCF-101.", "We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. 
Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets, 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets, and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact: achieving 52.8 accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.", "", "We present an interpretation of Inception modules in convolutional neural networks as being an intermediate step in-between regular convolution and the depthwise separable convolution operation (a depthwise convolution followed by a pointwise convolution). In this light, a depthwise separable convolution can be understood as an Inception module with a maximally large number of towers. This observation leads us to propose a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions. We show that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset (which Inception V3 was designed for), and significantly outperforms Inception V3 on a larger image classification dataset comprising 350 million images and 17,000 classes. Since the Xception architecture has the same number of parameters as Inception V3, the performance gains are not due to increased capacity but rather to a more efficient use of model parameters.", "In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of the video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significantly advantages in accuracy. Our empirical study leads to the design of a new spatiotemporal convolutional block \"R(2+1)D\" which gives rise to CNNs that achieve results comparable or superior to the state-of-the-art on Sports-1M, Kinetics, UCF101 and HMDB51." ] }
1812.04128
2950878895
Robots are increasingly used to carry out critical missions in extreme environments that are hazardous for humans. This requires a high degree of operational autonomy under uncertain conditions, and poses new challenges for assuring the robot's safety and reliability. In this paper, we develop a framework for probabilistic model checking on a layered Markov model to verify the safety and reliability requirements of such robots, both at pre-mission stage and during runtime. Two novel estimators based on conservative Bayesian inference and imprecise probability model with sets of priors are introduced to learn the unknown transition parameters from operational data. We demonstrate our approach using data from a real-world deployment of unmanned underwater vehicles in extreme environments.
How autonomous systems should be verified is a new and challenging question that accompanies their increasing range of applications @cite_27 . Formal methods must be integrated in order to develop, verify and provide certification evidence for large-scale and complex autonomous systems such as robots @cite_9 .
{ "cite_N": [ "@cite_9", "@cite_27" ], "mid": [ "2807059453", "1993466996" ], "abstract": [ "Robotic systems are multi-dimensional entities, combining both hardware and software, that are heavily dependent on, and influenced by, interactions with the real world. They can be variously categorised as embedded, cyber-physical, real-time, hybrid, adaptive and even autonomous systems, with a typical robotic system being likely to contain all of these aspects. The techniques for developing and verifying each of these system varieties are often quite distinct. This, together with the sheer complexity of robotic systems, leads us to argue that diverse formal techniques must be integrated in order to develop, verify, and provide certification evidence for, robotic systems. Furthermore, we propose the fast evolving field of robotics as an ideal catalyst for the advancement of integrated formal methods research, helping to drive the field in new and exciting directions and shedding light on the development of large-scale, dynamic, complex systems.", "I l l u S t r a t I o n b y a l I C I a k u b I S t a a n D r I J b o r y S a S S o C I a t e S in This arTiCle we consider the question: How should autonomous systems be analyzed? in particular, we describe how the confluence of developments in two areas—autonomous systems architectures and formal verification for rational agents—can provide the basis for the formal verification of autonomous systems behaviors. We discuss an approach to this question that involves: 1. Modeling the behavior and describing the interface (input output) to an agent in charge of making decisions within the system; 2. Model checking the agent within an unrestricted environment representing the “real world” and those parts of the systems external to the agent, in order to establish some property, j; 3. Utilizing theorems or analysis of the environment, in the form of logical statements (where necessary), to derive properties of the larger system; and 4. if the agent is refined, modify (1), but if environmental properties are clarified, modify (3). Autonomous systems are now being deployed in safety, mission, or business critical scenarios, which means a thorough analysis of the choices the core software might make becomes crucial. But, should the analysis and verification of autonomous software be treated any differently than traditional software used in critical situations? Or is there something new going on here? Autonomous systems are systems that decide for themselves what to do and when to do it. Such systems might seem futuristic, but they are closer than we might think. Modern household, business, and industrial systems increasingly incorporate autonomy. There are many examples, all varying in the degree of autonomy used, from almost pure human control to fully autonomous activities with minimal human interaction. Application areas are broad, ranging from healthcare monitoring to autonomous vehicles. But what are the reasons for this increase in autonomy? Typically, autonomy is used in systems that: 1. must be deployed in remote environments where direct human control is infeasible; 2. must be deployed in hostile environments where it is dangerous for humans to be nearby, and so difficult for humans to assess the possibilities; 3. involve activity that is too lengthy Verifying autonomous systems Doi:10.1145 2494558" ] }
1812.04128
2950878895
Robots are increasingly used to carry out critical missions in extreme environments that are hazardous for humans. This requires a high degree of operational autonomy under uncertain conditions, and poses new challenges for assuring the robot's safety and reliability. In this paper, we develop a framework for probabilistic model checking on a layered Markov model to verify the safety and reliability requirements of such robots, both at pre-mission stage and during runtime. Two novel estimators based on conservative Bayesian inference and imprecise probability model with sets of priors are introduced to learn the unknown transition parameters from operational data. We demonstrate our approach using data from a real-world deployment of unmanned underwater vehicles in extreme environments.
Model checking is a widely used formal method for verifying robotic systems, owing to its relative simplicity and powerful automatic tools @cite_31 . For instance, in @cite_6 , a proof-of-concept approach is presented that generates certification evidence for autonomous unmanned aircraft based on both model checking and simulation. PMC, as a variant, explicitly models the inherent uncertainties of the formalised system. In @cite_4 @cite_37 , the complex and uncertain behaviours of robot swarms are analysed by PMC. In @cite_25 @cite_17 , PMC is used to verify the control policies of robots in partially unknown environments. In @cite_0 , the movements of adversaries in a hostile environment are modelled probabilistically. The reliability and performance of UUVs are guaranteed by PMC in @cite_1 .
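As a concrete illustration of the kind of query PMC answers (e.g., "what is the probability of eventually reaching a failure state?", written in PRISM-style notation as P=? [F "fail"]), the standalone NumPy sketch below computes absorption probabilities on a small made-up DTMC. The states and transition probabilities are invented for illustration and are not the layered Markov model of the paper; dedicated tools such as PRISM automate such computations for far richer models and logics.

```python
import numpy as np

# Toy 4-state DTMC for a robot mission (all numbers are made up):
# 0 = navigating, 1 = degraded, 2 = mission done (absorbing), 3 = failed (absorbing)
P = np.array([
    [0.80, 0.15, 0.04, 0.01],
    [0.30, 0.50, 0.10, 0.10],
    [0.00, 0.00, 1.00, 0.00],
    [0.00, 0.00, 0.00, 1.00],
])

transient, absorbing = [0, 1], [2, 3]
Q = P[np.ix_(transient, transient)]   # transitions among transient states
R = P[np.ix_(transient, absorbing)]   # transitions from transient to absorbing states

# Absorption probabilities: B[i, j] = P(end in absorbing state j | start in transient state i),
# obtained by solving (I - Q) B = R.
B = np.linalg.solve(np.eye(len(transient)) - Q, R)

print("P(mission done | navigating) =", B[0, 0])
print("P(failed       | navigating) =", B[0, 1])
```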
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_1", "@cite_6", "@cite_0", "@cite_31", "@cite_25", "@cite_17" ], "mid": [ "2499003444", "2002956917", "1998284497", "2011274429", "2963979817", "2811472820", "2594789366", "2744647809" ], "abstract": [ "Robot swarms are collections of simple robots cooperating without centralized control. Control algorithms for swarms are often inspired by decentralised problem-solving systems found in nature. In this paper we conduct a formal analysis of an algorithm inspired by the foraging behaviour of ants, where a swarm of flying vehicles searches for a target at some unknown location. We show how both exhaustive model checking and statistical model checking can be used to check properties that complement the results obtained through simulation, resulting in information that would facilitate the logistics of swarm deployment.", "An alternative to deploying a single robot of high complexity can be to utilise robot swarms comprising large numbers of identical, and much simpler, robots. Such swarms have been shown to be adaptable, fault-tolerant and widely applicable. However, designing individual robot algorithms to ensure effective and correct overall swarm behaviour is actually very difficult. While mechanisms for assessing the effectiveness of any swarm algorithm before deployment are essential, such mechanisms have traditionally involved either computational simulations of swarm behaviour, or experiments with robot swarms themselves. However, such simulations or experiments cannot, by their nature, analyse all possible swarm behaviours. In this paper, we will develop and apply the use of automated probabilistic formal verification techniques to robot swarms, involving an exhaustive mathematical analysis, in order to assess whether swarms will indeed behave as required. In particular we consider a foraging robot scenario to which we apply probabilistic model checking.", "Self-adaptive systems used in safety-critical and business-critical applications must continue to comply with strict non-functional requirements while evolving in order to adapt to changing workloads, environments, and goals. Runtime quantitative verification (RQV) has been proposed as an effective means of enhancing self-adaptive systems with this capability. However, RQV frequently fails to provide the fast response times and low computation overheads required by real-world self-adaptive systems. In this paper, we investigate how three techniques, namely caching, lookahead and nearly-optimal reconfiguration, and combinations thereof, can help address this limitation. Extensive experiments in a case study involving the RQV-driven self-adaptation of an unmanned underwater vehicle indicate that these techniques can lead to significant reductions in RQV response times and computation overheads.", "The use of unmanned aircraft for civil applications is expected to increase over the next decade, particularly in so-called dull, dirty, and dangerous missions. Unmanned aircraft will undoubtedly require some form of autonomy to ensure safe operations for all airspace users. However, to be used for civil applications, unmanned aircraft must gain regulatory approval in a process known as “certification”. This paper presents a proof-of-concept approach to the generation of certification evidence for autonomous unmanned aircraft based on a combination of formal verification and flight simulation. 
In particular, a class of autonomous systems controlled by rational agents is examined, and we give examples of 23 different properties, based on the rules of the air and notions of airmanship, which can be used in the formal model checking of rational agents controlling autonomous unmanned aircraft. Our techniques can be based on either 1) implicit models of the aircraft’s physical environment specified in terms of...", "Abstract In this paper we present an approach to control a vehicle in a hostile environment with static obstacles and moving adversaries. The vehicle is required to satisfy a mission objective expressed as a temporal logic specification over a set of properties satisfied at regions of a partitioned environment. We model the movements of adversaries in between regions of the environment as Poisson processes. Furthermore, we assume that the time it takes for the vehicle to traverse in between two facets of a region is exponentially distributed, and we obtain the rate of this exponential distribution from a simulator of the environment. We capture the motion of the vehicle and the vehicle updates of adversaries distributions as a Markov Decision Process. Using tools in Probabilistic Computational Tree Logic, we find a control strategy for the vehicle that maximizes the probability of accomplishing the mission objective. We demonstrate our approach with illustrative case studies.", "Robotic systems are complex and critical: they are inherently hybrid, combining both hardware and software; they typically exhibit both cyber-physical attributes and autonomous capabilities; and are required to be at least safe and often ethical. While for many engineered systems testing, either through real deployment or via simulation, is deemed sufficient the uniquely challenging elements of robotic systems, together with the crucial dependence on sophisticated software control and decision-making, requires a stronger form of verification. The increasing deployment of robotic systems in safety-critical scenarios exacerbates this still further and leads us towards the use of formal methods to ensure the correctness of, and provide sufficient evidence for the certification of, robotic systems. There have been many approaches that have used some variety of formal specification or formal verification in autonomous robotics, but there is no resource that collates this activity in to one place. This paper systematically surveys the state-of-the art in specification formalisms and tools for verifying robotic systems. Specifically, it describes the challenges arising from autonomy and software architectures, avoiding low-level hardware control and is subsequently identifies approaches for the specification and verification of robotic systems, while avoiding more general approaches.", "We present automated techniques for the verification and control of partially observable, probabilistic systems for both discrete and dense models of time. For the discrete-time case, we formally model these systems using partially observable Markov decision processes; for dense time, we propose an extension of probabilistic timed automata in which local states are partially visible to an observer or controller. We give probabilistic temporal logics that can express a range of quantitative properties of these models, relating to the probability of an event's occurrence or the expected value of a reward measure. 
We then propose techniques to either verify that such a property holds or synthesise a controller for the model which makes it true. Our approach is based on a grid-based abstraction of the uncountable belief space induced by partial observability and, for dense-time models, an integer discretisation of real-time behaviour. The former is necessarily approximate since the underlying problem is undecidable, however we show how both lower and upper bounds on numerical results can be generated. We illustrate the effectiveness of the approach by implementing it in the PRISM model checker and applying it to several case studies from the domains of task and network scheduling, computer security and planning.", "Reinforcement Learning is a well-known AI paradigm whereby control policies of autonomous agents can be synthesized in an incremental fashion with little or no knowledge about the properties of the environment. We are concerned with safety of agents whose policies are learned by reinforcement, i.e., we wish to bound the risk that, once learning is over, an agent damages either the environment or itself. We propose a general-purpose automated methodology to verify, i.e., establish risk bounds, and repair policies, i.e., fix policies to comply with stated risk bounds. Our approach is based on probabilistic model checking algorithms and tools, which provide theoretical and practical means to verify risk bounds and repair policies. Considering a taxonomy of potential repair approaches tested on an artificially-generated parametric domain, we show that our methodology is also more effective than comparable ones." ] }
1812.04128
2950878895
Robots are increasingly used to carry out critical missions in extreme environments that are hazardous for humans. This requires a high degree of operational autonomy under uncertain conditions, and poses new challenges for assuring the robot's safety and reliability. In this paper, we develop a framework for probabilistic model checking on a layered Markov model to verify the safety and reliability requirements of such robots, both at pre-mission stage and during runtime. Two novel estimators based on conservative Bayesian inference and imprecise probability model with sets of priors are introduced to learn the unknown transition parameters from operational data. We demonstrate our approach using data from a real-world deployment of unmanned underwater vehicles in extreme environments.
Although runtime PMC is effective for assuring the quality of service-based systems @cite_29 and self-adaptive systems @cite_12 @cite_28 , there is little research on runtime PMC for robots. In the UUV domain, the first application of runtime PMC is credited to @cite_1 . However, that work focuses on improving the scalability of runtime PMC through software engineering techniques; those techniques are also applicable to our work here, which instead focuses on developing new methods for learning model parameters.
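To illustrate the general idea of runtime PMC, the hypothetical Python loop below re-estimates one transition probability from a stream of operational observations and re-checks a reliability requirement after every observation. The monitoring interface, the requirement bound and the naive frequentist update are all assumptions made for illustration; they are not the estimators proposed in the paper.

```python
import numpy as np

REQ_BOUND = 0.05        # requirement: P(eventually fail) <= 0.05 (illustrative)

def prob_fail(p_fail_step):
    """P(eventually fail) for a toy 3-state chain:
    0 = operating, 1 = done (absorbing), 2 = failed (absorbing).
    Each step the robot finishes w.p. 0.1, fails w.p. p, otherwise keeps operating."""
    q_stay = 1.0 - 0.1 - p_fail_step
    return p_fail_step / (1.0 - q_stay)   # geometric-series closed form: p / (0.1 + p)

# Simulated per-step observations: 1 = failure event observed, 0 = not.
observations = [0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0]

fails, steps = 0, 0
for obs in observations:
    steps += 1
    fails += obs
    p_hat = fails / steps                 # naive frequentist point estimate
    risk = prob_fail(min(p_hat, 0.89))    # clamp so the row stays a valid distribution
    status = "OK" if risk <= REQ_BOUND else "VIOLATED"
    print(f"after {steps:2d} obs: p_hat={p_hat:.3f}  P(fail)={risk:.3f}  requirement {status}")
```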
{ "cite_N": [ "@cite_28", "@cite_29", "@cite_1", "@cite_12" ], "mid": [ "2116769849", "2146044140", "1998284497", "1974641445" ], "abstract": [ "An effective design of effective and efficient self-adaptive systems may rely on several existing approaches. Software models and model checking techniques at run time represent one of them since they support automatic reasoning about such changes, detect harmful configurations, and potentially enable appropriate (self-)reactions. However, traditional model checking techniques and tools may not be applied as they are at run time, since they hardly meet the constraints imposed by on-the-fly analysis, in terms of execution time and memory occupation. For this reason, efficient run-time model checking represents a crucial research challenge.", "Service-based systems that are dynamically composed at runtime to provide complex, adaptive functionality are currently one of the main development paradigms in software engineering. However, the Quality of Service (QoS) delivered by these systems remains an important concern, and needs to be managed in an equally adaptive and predictable way. To address this need, we introduce a novel, tool-supported framework for the development of adaptive service-based systems called QoSMOS (QoS Management and Optimization of Service-based systems). QoSMOS can be used to develop service-based systems that achieve their QoS requirements through dynamically adapting to changes in the system state, environment, and workload. QoSMOS service-based systems translate high-level QoS requirements specified by their administrators into probabilistic temporal logic formulae, which are then formally and automatically analyzed to identify and enforce optimal system configurations. The QoSMOS self-adaptation mechanism can handle reliability and performance-related QoS requirements, and can be integrated into newly developed solutions or legacy systems. The effectiveness and scalability of the approach are validated using simulations and a set of experiments based on an implementation of an adaptive service-based system for remote medical assistance.", "Self-adaptive systems used in safety-critical and business-critical applications must continue to comply with strict non-functional requirements while evolving in order to adapt to changing workloads, environments, and goals. Runtime quantitative verification (RQV) has been proposed as an effective means of enhancing self-adaptive systems with this capability. However, RQV frequently fails to provide the fast response times and low computation overheads required by real-world self-adaptive systems. In this paper, we investigate how three techniques, namely caching, lookahead and nearly-optimal reconfiguration, and combinations thereof, can help address this limitation. Extensive experiments in a case study involving the RQV-driven self-adaptation of an unmanned underwater vehicle indicate that these techniques can lead to significant reductions in RQV response times and computation overheads.", "Continually verify self-adaptation decisions taken by critical software in response to changes in the operating environment." ] }
1812.04128
2950878895
Robots are increasingly used to carry out critical missions in extreme environments that are hazardous for humans. This requires a high degree of operational autonomy under uncertain conditions, and poses new challenges for assuring the robot's safety and reliability. In this paper, we develop a framework for probabilistic model checking on a layered Markov model to verify the safety and reliability requirements of such robots, both at pre-mission stage and during runtime. Two novel estimators based on conservative Bayesian inference and imprecise probability model with sets of priors are introduced to learn the unknown transition parameters from operational data. We demonstrate our approach using data from a real-world deployment of unmanned underwater vehicles in extreme environments.
One of the first methods for learning the transition probabilities of a DTMC is presented in @cite_35 ; it was later retrofitted for CTMCs @cite_13 and extended with ageing factors on the collected data to accurately estimate time-varying transition probabilities @cite_26 . To reduce noise and provide smooth estimates, a lightweight adaptive filter is proposed in @cite_10 . Whilst the above-mentioned approaches yield point estimates, these can be affected by unquantified and potentially significant errors. The work in @cite_15 is the first to synthesise bounds for unknown transition parameters. However, it is based on the theory of simultaneous confidence intervals, which is fundamentally different from the Bayesian approach presented here, whose distinct advantage is the ability to embed various forms of prior knowledge.
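To make the contrast concrete, the sketch below compares a frequentist point estimate of a single DTMC transition probability with a standard conjugate Beta posterior, which yields both a point estimate and credible bounds and can encode prior knowledge through its hyperparameters. It is illustrative only (the counts and prior hyperparameters are invented) and is not the conservative Bayesian or imprecise-probability estimators introduced in the paper; it requires SciPy.

```python
from scipy.stats import beta

# Observed outgoing transitions from one DTMC state: k "failure" transitions out of n total.
k, n = 2, 50

# Frequentist point estimate (no uncertainty quantification).
p_mle = k / n

# Bayesian estimate: a Beta(a0, b0) prior encodes prior knowledge, e.g. the belief that
# failures are rare (a0=1, b0=19 corresponds to a prior mean of 0.05).
a0, b0 = 1.0, 19.0
a, b = a0 + k, b0 + (n - k)               # conjugate Beta posterior
p_mean = a / (a + b)
lo, hi = beta.ppf([0.025, 0.975], a, b)   # 95% credible interval

print(f"point estimate        : {p_mle:.4f}")
print(f"posterior mean        : {p_mean:.4f}")
print(f"95% credible interval : [{lo:.4f}, {hi:.4f}]")
```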
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_10", "@cite_15", "@cite_13" ], "mid": [ "2133859873", "", "2075465263", "2291637985", "2028622255" ], "abstract": [ "Models can help software engineers to reason about design-time decisions before implementing a system. This paper focuses on models that deal with non-functional properties, such as reliability and performance. To build such models, one must rely on numerical estimates of various parameters provided by domain experts or extracted by other similar systems. Unfortunately, estimates are seldom correct. In addition, in dynamic environments, the value of parameters may change over time. We discuss an approach that addresses these issues by keeping models alive at run time and feeding a Bayesian estimator with data collected from the running system, which produces updated parameters. The updated model provides an increasingly better representation of the system. By analyzing the updated model at run time, it is possible to detect or predict if a desired property is, or will be, violated by the running implementation. Requirement violations may trigger automatic reconfigurations or recovery actions aimed at guaranteeing the desired goals. We illustrate a working framework supporting our methodology and apply it to an example in which a Web service orchestrated composition is modeled through a Discrete Time Markov Chain. Numerical simulations show the effectiveness of the approach.", "", "Adaptive software systems are designed to cope with unpredictable and evolving usage behaviors and environmental conditions. For these systems reasoning mechanisms are needed to drive evolution, which are usually based on models capturing relevant aspects of the running software. The continuous update of these models in evolving environments requires efficient learning procedures, having low overhead and being robust to changes. Most of the available approaches achieve one of these goals at the price of the other. In this paper we propose a lightweight adaptive filter to accurately learn time-varying transition probabilities of discrete time Markov models, which provides robustness to noise and fast adaptation to changes with a very low overhead. A formal stability, unbiasedness and consistency assessment of the learning approach is provided, as well as an experimental comparison with state-of-the-art alternatives.", "Formal verification is used to establish the compliance of software and hardware systems with important classes of requirements. System compliance with functional requirements is frequently analyzed using techniques such as model checking, and theorem proving. In addition, a technique called quantitative verification supports the analysis of the reliability, performance, and other quality-of-service (QoS) properties of systems that exhibit stochastic behavior. In this paper, we extend the applicability of quantitative verification to the common scenario when the probabilities of transition between some or all states of the Markov models analyzed by the technique are unknown, but observations of these transitions are available. To this end, we introduce a theoretical framework, and a tool chain that establish confidence intervals for the QoS properties of a software system modelled as a Markov chain with uncertain transition probabilities. We use two case studies from different application domains to assess the effectiveness of the new quantitative verification technique. 
Our experiments show that disregarding the above source of uncertainty may significantly affect the accuracy of the verification results, leading to wrong decisions, and low-quality software systems.", "Modern software systems are increasingly requested to be adaptive to changes in the environment in which they are embedded. Moreover, adaptation often needs to be performed automatically, through self-managed reactions enacted by the application at run time. Off-line, human-driven changes should be requested only if self-adaptation cannot be achieved successfully. To support this kind of autonomic behavior, software systems must be empowered by a rich run-time support that can monitor the relevant phenomena of the surrounding environment to detect changes, analyze the data collected to understand the possible consequences of changes, reason about the ability of the application to continue to provide the required service, and finally react if an adaptation is needed. This paper focuses on non-functional requirements, which constitute an essential component of the quality that modern software systems need to exhibit. Although the proposed approach is quite general, it is mainly exemplified in the paper in the context of service-oriented systems, where the quality of service (QoS) is regulated by contractual obligations between the application provider and its clients. We analyze the case where an application, exported as a service, is built as a composition of other services. Non-functional requirements—such as reliability and performance—heavily depend on the environment in which the application is embedded. Thus changes in the environment may ultimately adversely affect QoS satisfaction. We illustrate an approach and support tools that enable a holistic view of the design and run-time management of adaptive software systems. The approach is based on formal (probabilistic) models that are used at design time to reason about dependability of the application in quantitative terms. Models continue to exist at run time to enable continuous verification and detection of changes that require adaptation." ] }
1812.04179
2946699102
Traditional color images only depict color intensities in red, green and blue channels, often making object trackers fail in challenging scenarios, e.g., background clutter and rapid changes of target appearance. Alternatively, material information of targets contained in a large amount of bands of hyperspectral images (HSI) is more robust to these challenging conditions. In this paper, we conduct a comprehensive study on how material information can be utilized to boost object tracking from three aspects: benchmark dataset, material feature representation and material based tracking. In terms of benchmark, we construct a dataset of fully-annotated videos which contain both hyperspectral and color sequences of the same scene. Material information is represented by spectral-spatial histogram of multidimensional gradient, which describes the 3D local spectral-spatial structure in an HSI, and abundances which encode the underlying material distribution. These two types of features are embedded into correlation filters, yielding material based tracking. Experimental results on the collected benchmark dataset show the potentials and advantages of material based object tracking.
The discriminative correlation filter (DCF) is widely used in object tracking due to its competitive performance and the computational efficiency enabled by the fast Fourier transform (FFT). A DCF learns its filter by minimizing the output sum of squared error (MOSSE) @cite_23 over all circular shifts of a training sample. Several efforts have been made to address the limitations of MOSSE. For example, Henriques et al. embedded kernel methods into the correlation filter to achieve a non-linear decision boundary @cite_55 without sacrificing computational efficiency. Improvements have also been made in feature representation to learn more discriminative filters, for example by extracting HOG @cite_42 , color names @cite_44 @cite_32 , and deep features learned by convolutional neural networks (CNNs) @cite_8 @cite_56 @cite_57 @cite_35 @cite_38 .
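For reference, the closed-form MOSSE solution can be written in a few lines of NumPy. The sketch below is a simplified, single-channel version trained on synthetic patches with a Gaussian target response; it omits the preprocessing, cosine windowing and online updates used in practice, and the regularisation constant and patch sizes are illustrative assumptions.

```python
import numpy as np

def gaussian_response(h, w, sigma=2.0):
    """Desired correlation output: a Gaussian peak centred on the target."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h / 2) ** 2 + (xs - w / 2) ** 2) / (2 * sigma ** 2))

def train_mosse(patches, g, lam=1e-2):
    """Closed-form MOSSE filter: H* = sum(G . conj(F)) / (sum(F . conj(F)) + lam)."""
    G = np.fft.fft2(g)
    num = np.zeros_like(G)
    den = np.zeros_like(G)
    for p in patches:
        F = np.fft.fft2(p)
        num += G * np.conj(F)
        den += F * np.conj(F)
    return num / (den + lam)

def correlate(H_conj, patch):
    """Apply the learned filter to a new patch; the response peak locates the target."""
    return np.real(np.fft.ifft2(H_conj * np.fft.fft2(patch)))

# Synthetic demo: random 64x64 "training patches" standing in for target crops.
rng = np.random.default_rng(0)
patches = [rng.standard_normal((64, 64)) for _ in range(8)]
g = gaussian_response(64, 64)
H_conj = train_mosse(patches, g)
response = correlate(H_conj, patches[0])
print("peak location:", np.unravel_index(response.argmax(), response.shape))
```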
{ "cite_N": [ "@cite_35", "@cite_38", "@cite_8", "@cite_55", "@cite_42", "@cite_32", "@cite_56", "@cite_44", "@cite_57", "@cite_23" ], "mid": [ "", "2673818281", "2118097920", "", "2161969291", "", "", "2044986361", "2211629196", "2421627342" ], "abstract": [ "", "In this paper, we analyze the spatial information of deep features, and propose two complementary regressions for robust visual tracking. First, we propose a kernelized ridge regression model wherein the kernel value is defined as the weighted sum of similarity scores of all pairs of patches between two samples. We show that this model can be formulated as a neural network and thus can be efficiently solved. Second, we propose a fully convolutional neural network with spatially regularized kernels, through which the filter kernel corresponding to each output channel is forced to focus on a specific region of the target. Distance transform pooling is further exploited to determine the effectiveness of each output channel of the convolution layer. The outputs from the kernelized ridge regression model and the fully convolutional neural network are combined to obtain the ultimate response. Experimental results on two benchmark datasets validate the effectiveness of the proposed method.", "In this paper, we study the challenging problem of tracking the trajectory of a moving object in a video with possibly very complex background. In contrast to most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning architectures, by putting more emphasis on the (unsupervised) feature learning problem. Specifically, by using auxiliary natural images, we train a stacked de-noising autoencoder offline to learn generic image features that are more robust against variations. This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural network which is constructed from the encoder part of the trained autoencoder as a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the moving object. Comparison with the state-of-the-art trackers on some challenging benchmark video sequences shows that our deep learning tracker is more accurate while maintaining low computational cost with real-time performance when our MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU).", "", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "", "", "Visual tracking is a challenging problem in computer vision. 
Most state-of-the-art visual trackers either rely on luminance information or use simple color representations for image description. Contrary to visual tracking, for object recognition and detection, sophisticated color features when combined with luminance have shown to provide excellent performance. Due to the complexity of the tracking problem, the desired color feature should be computationally efficient, and possess a certain amount of photometric invariance while maintaining high discriminative power. This paper investigates the contribution of color in a tracking-by-detection framework. Our results suggest that color attributes provides superior performance for visual tracking. We further propose an adaptive low-dimensional variant of color attributes. Both quantitative and attribute-based evaluations are performed on 41 challenging benchmark color sequences. The proposed approach improves the baseline intensity-based tracker by 24 in median distance precision. Furthermore, we show that our approach outperforms state-of-the-art tracking methods while running at more than 100 frames per second.", "We propose a new approach for general object tracking with fully convolutional neural network. Instead of treating convolutional neural network (CNN) as a black-box feature extractor, we conduct in-depth study on the properties of CNN features offline pre-trained on massive image data and classification task on ImageNet. The discoveries motivate the design of our tracking system. It is found that convolutional layers in different levels characterize the target from different perspectives. A top layer encodes more semantic features and serves as a category detector, while a lower layer carries more discriminative information and can better separate the target from distracters with similar appearance. Both layers are jointly used with a switch mechanism during tracking. It is also found that for a tracking target, only a subset of neurons are relevant. A feature map selection method is developed to remove noisy and irrelevant feature maps, which can reduce computation redundancy and improve tracking accuracy. Extensive evaluation on the widely used tracking benchmark [36] shows that the proposed tacker outperforms the state-of-the-art significantly.", "In this paper, we present a novel attention-modulated visual tracking algorithm that decomposes an object into multiple cognitive units, and trains multiple elementary trackers in order to modulate the distribution of attention according to various feature and kernel types. In the integration stage it recombines the units to memorize and recognize the target object effectively. With respect to the elementary trackers, we present a novel attentional feature-based correlation filter (AtCF) that focuses on distinctive attentional features. The effectiveness of the proposed algorithm is validated through experimental comparison with state-of-theart methods on widely-used tracking benchmark datasets." ] }
1812.04429
2951827950
Automated deception detection (ADD) from real-life videos is a challenging task. It specifically needs to address two problems: (1) Both face and body contain useful cues regarding whether a subject is deceptive. How to effectively fuse the two is thus key to the effectiveness of an ADD model. (2) Real-life deceptive samples are hard to collect; learning with limited training data thus challenges most deep learning based ADD models. In this work, both problems are addressed. Specifically, for face-body multimodal learning, a novel face-focused cross-stream network (FFCSN) is proposed. It differs significantly from the popular two-stream networks in that: (a) face detection is added into the spatial stream to capture the facial expressions explicitly, and (b) correlation learning is performed across the spatial and temporal streams for joint deep feature learning across both face and body. To address the training data scarcity problem, our FFCSN model is trained with both meta learning and adversarial learning. Extensive experiments show that our FFCSN model achieves state-of-the-art results. Further, the proposed FFCSN model as well as its robust training strategy are shown to be generally applicable to other human-centric video analysis tasks such as emotion recognition from user-generated videos.
Earlier works on video-based ADD are limited by their datasets, which contain only staged deceptive behaviors @cite_51 @cite_24 @cite_6 @cite_4 @cite_28 . Their usefulness for detecting real-life deception is thus in doubt. The shift towards deception detection with real-life data was first advocated in @cite_3 , where the identification of deception in statements issued by witnesses and defendants is targeted using a corpus collected from hearings in Italian courts (i.e., no visual data was available). In @cite_11 @cite_37 , a new multimodal deception dataset of real-life videos from court trials was first introduced, and a combination of features extracted from different modalities is used for deception detection. Thanks to this benchmark dataset, more advanced ADD methods @cite_22 @cite_27 @cite_23 have been developed to leverage multimodal features for detecting deception.
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_22", "@cite_28", "@cite_6", "@cite_3", "@cite_24", "@cite_27", "@cite_23", "@cite_51", "@cite_11" ], "mid": [ "2251550834", "2125936346", "2583437951", "2514266867", "", "2040849715", "149261206", "2564068158", "2773980160", "2164151453", "2163833659" ], "abstract": [ "Deception detection has been receiving an increasing amount of attention from the computational linguistics, speech, and multimodal processing communities. One of the major challenges encountered in this task is the availability of data, and most of the research work to date has been conducted on acted or artificially collected data. The generated deception models are thus lacking real-world evidence. In this paper, we explore the use of multimodal real-life data for the task of deception detection. We develop a new deception dataset consisting of videos from reallife scenarios, and build deception tools relying on verbal and nonverbal features. We achieve classification accuracies in the range of 77-82 when using a model that extracts and fuses features from the linguistic and visual modalities. We show that these results outperform the human capability of identifying deceit.", "We report on machine learning experiments to distinguish deceptive from nondeceptive speech in the Columbia-SRI-Colorado (CSC) corpus. Specifically, we propose a system combination approach using different models and features for deception detection. Scores from an SVM system based on prosodic lexical features are combined with scores from a Gaussian mixture model system based on acoustic features, resulting in improved accuracy over the individual systems. Finally, we compare results from the prosodic-only SVM system using features derived either from recognized words or from human transcriptions.", "We propose a data-driven method for automatic deception detection in real-life trial data using visual and verbal cues. Using OpenFace with facial action unit recognition, we analyze the movement of facial features of the witness when posed with questions and the acoustic patterns using OpenSmile. We then perform a lexical analysis on the spoken words, emphasizing the use of pauses and utterance breaks, feeding that to a Support Vector Machine to test deceit or truth prediction. We then try out a method to incorporate utterance-based fusion of visual and lexical analysis, using string based matching.", "", "", "Effective methods for evaluating the reliability of statements issued by witnesses and defendants in hearings would be an extremely valuable support to decision-making in court and other legal settings. In recent years, methods relying on stylometric techniques have proven most successful for this task; but few such methods have been tested with language collected in real-life situations of high-stakes deception, and therefore their usefulness outside lab conditions still has to be properly assessed. In this study we report the results obtained by using stylometric techniques to identify deceptive statements in a corpus of hearings collected in Italian courts. The defendants at these hearings were condemned for calumny or false testimony, so the falsity of (some of) their statements is fairly certain. In our experiments we replicated the methods used in previous studies but never before applied to high-stakes data, and tested new methods. We also considered the effect of a number of variables including in particular the homogeneity of the dataset. 
Our results suggest that accuracy at deception detection clearly above chance level can be obtained with real-life data as well.", "The current work sets out to enhance our knowledge of changes or lack of changes in the speech signal when people are being deceptive. In particular, the study attempted to investigate the appropriateness of using speech cues in detecting deception. Truthful, deceptive and control speech was elicited from five speakers during an interview setting. The data was subjected to acoustic analysis and results are presented on a range of speech parameters including fundamental frequency (f0), overall intensity and mean vowel formants F1, F2 and F3. A significant correlation could not be established for any of the acoustic features examined. Directions for future work are highlighted.", "Deception detection has received an increasing amount of attention in recent years, due to the significant growth of digital media, as well as increased ethical and security concerns. Earlier approaches to deception detection were mainly focused on law enforcement applications and relied on polygraph tests, which had proved to falsely accuse the innocent and free the guilty in multiple cases. In this paper, we explore a multimodal deception detection approach that relies on a novel data set of 149 multimodal recordings, and integrates multiple physiological, linguistic, and thermal features. We test the system on different domains, to measure its effectiveness and determine its limitations. We also perform feature analysis using a decision tree model, to gain insights into the features that are most effective in detecting deceit. Our experimental results indicate that our multimodal approach is a promising step toward creating a feasible, non-invasive, and fully automated deception detection system.", "We present a system for covert automated deception detection in real-life courtroom trial videos. We study the importance of different modalities like vision, audio and text for this task. On the vision side, our system uses classifiers trained on low level video features which predict human micro-expressions. We show that predictions of high-level micro-expressions can be used as features for deception prediction. Surprisingly, IDT (Improved Dense Trajectory) features which have been widely used for action recognition, are also very good at predicting deception in videos. We fuse the score of classifiers trained on IDT features and high-level micro-expressions to improve performance. MFCC (Mel-frequency Cepstral Coefficients) features from the audio domain also provide a significant boost in performance, while information from transcripts is not very beneficial for our system. Using various classifiers, our automated system obtains an AUC of 0.877 (10-fold cross-validation) when evaluated on subjects which were not part of the training set. Even though state-of-the-art methods use human annotations of micro-expressions for deception detection, our fully automated approach outperforms them by 5 . When combined with human annotations of micro-expressions, our AUC improves to 0.922. We also present results of a user-study to analyze how well do average humans perform on this task, what modalities they use for deception detection and how they perform if only one modality is accessible. 
Our project page can be found at this https URL .", "To date, studies of deceptive speech have largely been confined to descriptive studies and observations from subjects, researchers, or practitioners, with few empirical studies of the specific lexical or acoustic prosodic features which may characterize deceptive speech. We present results from a study seeking to distinguish deceptive from non-deceptive speech using machine learning techniques on features extracted from a large corpus of deceptive and non-deceptive speech. This corpus employs an interview paradigm that includes subject reports of truth vs. lie at multiple temporal scales. We present current results comparing the performance of acoustic prosodic, lexical, and speaker-dependent features and discuss future research directions.", "Hearings of witnesses and defendants play a crucial role when reaching court trial decisions. Given the high-stake nature of trial outcomes, implementing accurate and effective computational methods to evaluate the honesty of court testimonies can offer valuable support during the decision making process. In this paper, we address the identification of deception in real-life trial data. We introduce a novel dataset consisting of videos collected from public court trials. We explore the use of verbal and non-verbal modalities to build a multimodal deception detection system that aims to discriminate between truthful and deceptive statements provided by defendants and witnesses. We achieve classification accuracies in the range of 60-75 when using a model that extracts and fuses features from the linguistic and gesture modalities. In addition, we present a human deception detection study where we evaluate the human capability of detecting deception in trial hearings. The results show that our system outperforms the human capability of identifying deceit." ] }
1812.04429
2951827950
Automated deception detection (ADD) from real-life videos is a challenging task. It specifically needs to address two problems: (1) Both face and body contain useful cues regarding whether a subject is deceptive. How to effectively fuse the two is thus key to the effectiveness of an ADD model. (2) Real-life deceptive samples are hard to collect; learning with limited training data thus challenges most deep learning based ADD models. In this work, both problems are addressed. Specifically, for face-body multimodal learning, a novel face-focused cross-stream network (FFCSN) is proposed. It differs significantly from the popular two-stream networks in that: (a) face detection is added into the spatial stream to capture the facial expressions explicitly, and (b) correlation learning is performed across the spatial and temporal streams for joint deep feature learning across both face and body. To address the training data scarcity problem, our FFCSN model is trained with both meta learning and adversarial learning. Extensive experiments show that our FFCSN model achieves state-of-the-art results. Further, the proposed FFCSN model as well as its robust training strategy are shown to be generally applicable to other human-centric video analysis tasks such as emotion recognition from user-generated videos.
Our FFCSN model adopts a two-stream network architecture, with one stream for modeling RGB still frames and the other for optical flow extracted from consecutive frames. Such a two-stream architecture was originally proposed for action recognition in videos and has become popular for many human-centric video analysis tasks @cite_49 @cite_29 . Various improvements, such as the temporal segment network (TSN) @cite_10 and its variants @cite_13 @cite_25 , have been designed to capture long-range temporal structure and to learn the ConvNet models from limited training samples. Similarly, @cite_1 proposed to add faster R-CNN @cite_36 so that attention can be focused on objects detected in a video. Our FFCSN model differs from existing two-stream models in that: (1) face detection is added into the spatial stream subnet to capture facial expressions explicitly; and (2) correlation learning is performed across the spatial and temporal streams to cope with the temporal inconsistency between facial expressions and body motions for ADD.
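Purely for illustration, the PyTorch sketch below shows a generic two-stream classifier whose spatial branch consumes a pre-cropped face region while the temporal branch consumes stacked optical flow, fused late by concatenation. It is not the FFCSN architecture (which additionally performs cross-stream correlation learning and is trained with meta learning and adversarial learning); the ResNet-18 backbone, flow stack depth and tensor sizes are assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TwoStreamFaceBody(nn.Module):
    """Generic two-stream classifier: RGB face crop + stacked optical flow."""
    def __init__(self, num_classes=2, flow_channels=10):
        super().__init__()
        self.spatial = models.resnet18()        # face appearance stream (no pretrained weights for brevity)
        self.spatial.fc = nn.Identity()
        self.temporal = models.resnet18()       # body motion stream
        # Re-wire the first conv to accept stacked flow (e.g. 5 frame pairs -> 10 channels).
        self.temporal.conv1 = nn.Conv2d(flow_channels, 64, kernel_size=7,
                                        stride=2, padding=3, bias=False)
        self.temporal.fc = nn.Identity()
        self.classifier = nn.Linear(512 + 512, num_classes)  # late fusion by concatenation

    def forward(self, face_rgb, flow_stack):
        f_face = self.spatial(face_rgb)      # (B, 512)
        f_body = self.temporal(flow_stack)   # (B, 512)
        return self.classifier(torch.cat([f_face, f_body], dim=1))

model = TwoStreamFaceBody()
face = torch.randn(4, 3, 224, 224)     # face crops from a (hypothetical) face detector
flow = torch.randn(4, 10, 224, 224)    # 5 stacked flow fields (x and y components)
print(model(face, flow).shape)         # torch.Size([4, 2])
```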
{ "cite_N": [ "@cite_36", "@cite_10", "@cite_29", "@cite_1", "@cite_49", "@cite_13", "@cite_25" ], "mid": [ "2953106684", "2507009361", "2342662179", "", "2952186347", "2950870964", "" ], "abstract": [ "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features---using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.", "Deep convolutional networks have achieved great success for visual recognition in still images. However, for action recognition in videos, the advantage over traditional methods is not so evident. This paper aims to discover the principles to design effective ConvNet architectures for action recognition in videos and learn these models given limited training samples. Our first contribution is temporal segment network (TSN), a novel framework for video-based action recognition. which is based on the idea of long-range temporal structure modeling. It combines a sparse temporal sampling strategy and video-level supervision to enable efficient and effective learning using the whole action video. The other contribution is our study on a series of good practices in learning ConvNets on video data with the help of temporal segment network. Our approach obtains the state-the-of-art performance on the datasets of HMDB51 ( ( 69.4 , )) and UCF101 ( ( 94.2 , )). We also visualize the learned ConvNet models, which qualitatively demonstrates the effectiveness of temporal segment network and the proposed good practices (Models and code at https: github.com yjxiong temporal-segment-networks).", "Recent applications of Convolutional Neural Networks (ConvNets) for human action recognition in videos have proposed different solutions for incorporating the appearance and motion information. We study a number of ways of fusing ConvNet towers both spatially and temporally in order to best take advantage of this spatio-temporal information. 
We make the following findings: (i) that rather than fusing at the softmax layer, a spatial and temporal network can be fused at a convolution layer without loss of performance, but with a substantial saving in parameters, (ii) that it is better to fuse such networks spatially at the last convolutional layer than earlier, and that additionally fusing at the class prediction layer can boost accuracy, finally (iii) that pooling of abstract convolutional features over spatiotemporal neighbourhoods further boosts performance. Based on these studies we propose a new ConvNet architecture for spatiotemporal fusion of video snippets, and evaluate its performance on standard benchmarks where this architecture achieves state-of-the-art results.", "", "We investigate architectures of discriminatively trained deep Convolutional Networks (ConvNets) for action recognition in video. The challenge is to capture the complementary information on appearance from still frames and motion between frames. We also aim to generalise the best performing hand-crafted features within a data-driven learning framework. Our contribution is three-fold. First, we propose a two-stream ConvNet architecture which incorporates spatial and temporal networks. Second, we demonstrate that a ConvNet trained on multi-frame dense optical flow is able to achieve very good performance in spite of limited training data. Finally, we show that multi-task learning, applied to two different action classification datasets, can be used to increase the amount of training data and improve the performance on both. Our architecture is trained and evaluated on the standard video actions benchmarks of UCF-101 and HMDB-51, where it is competitive with the state of the art. It also exceeds by a large margin previous attempts to use deep nets for video classification.", "Temporal relational reasoning, the ability to link meaningful transformations of objects or entities over time, is a fundamental property of intelligent species. In this paper, we introduce an effective and interpretable network module, the Temporal Relation Network (TRN), designed to learn and reason about temporal dependencies between video frames at multiple time scales. We evaluate TRN-equipped networks on activity recognition tasks using three recent video datasets - Something-Something, Jester, and Charades - which fundamentally depend on temporal relational reasoning. Our results demonstrate that the proposed TRN gives convolutional neural networks a remarkable capacity to discover temporal relations in videos. Through only sparsely sampled video frames, TRN-equipped networks can accurately predict human-object interactions in the Something-Something dataset and identify various human gestures on the Jester dataset with very competitive performance. TRN-equipped networks also outperform two-stream networks and 3D convolution networks in recognizing daily activities in the Charades dataset. Further analyses show that the models learn intuitive and interpretable visual common sense knowledge in videos.", "" ] }
1812.04429
2951827950
Automated deception detection (ADD) from real-life videos is a challenging task. It must address two problems in particular: (1) both face and body contain useful cues about whether a subject is deceptive, so effectively fusing the two is key to an accurate ADD model; (2) real-life deceptive samples are hard to collect, so learning with limited training data challenges most deep-learning-based ADD models. In this work, both problems are addressed. Specifically, for face-body multimodal learning, a novel face-focused cross-stream network (FFCSN) is proposed. It differs significantly from the popular two-stream networks in that: (a) face detection is added into the spatial stream to capture facial expressions explicitly, and (b) correlation learning is performed across the spatial and temporal streams for joint deep feature learning over both face and body. To address the training data scarcity problem, our FFCSN model is trained with both meta learning and adversarial learning. Extensive experiments show that our FFCSN model achieves state-of-the-art results. Further, the proposed FFCSN model and its robust training strategy are shown to be generally applicable to other human-centric video analysis tasks such as emotion recognition from user-generated videos.
Deception detection is closely related to emotion recognition: deception can be considered a specific emotional state, albeit one that is much more subtle and harder to detect than emotions such as happiness and anger. Although emotion recognition from still face images has been well studied in previous works, emotion recognition from user-generated videos @cite_17 remains a challenging problem. In particular, because of the complicated and unstructured nature of user-generated videos and the sparsity of frames that express emotional content, it is often hard to understand the emotions conveyed in such videos. To address this challenging problem, multi-modal fusion and knowledge transfer approaches have been proposed in recent works @cite_40 @cite_2 @cite_42 @cite_5 . In this paper, we show that our FFCSN model can be easily extended to emotion recognition from user-generated videos, achieving state-of-the-art results.
{ "cite_N": [ "@cite_42", "@cite_40", "@cite_2", "@cite_5", "@cite_17" ], "mid": [ "2794257965", "1930223417", "2414501075", "2177696193", "2188687388" ], "abstract": [ "Recognition of emotions in user-generated videos has attracted increasing research attention. Most existing approaches are based on spatial features extracted from video frames. However, due to the broad affective gap between spatial features of images and high-level emotions, the performance of existing approaches is restricted. To bridge the affective gap, we propose recognizing emotions in user-generated videos with kernelized features. We reformulate the equation of the discrete Fourier transform as a linear kernel function and construct a polynomial kernel function based on the linear kernel. The polynomial kernel is applied to spatial features of video frames to generate kernelized features. Compared with spatial features, kernelized features show superior discriminative capability. Moreover, we are the first to apply the sparse representation method to reduce the impact of noise contained in videos; this method helps contribute to performance improvement. Extensive experiments are conducted on two challenging benchmark datasets, that is, VideoEmotion-8 and Ekman-6. The experimental results demonstrate that the proposed method achieves state-of-the-art performance.", "Social media has been a convenient platform for voicing opinions through posting messages, ranging from tweeting a short text to uploading a media file, or any combination of messages. Understanding the perceived emotions inherently underlying these user-generated contents (UGC) could bring light to emerging applications such as advertising and media analytics. Existing research efforts on affective computation are mostly dedicated to single media, either text captions or visual content. Few attempts for combined analysis of multiple media are made, despite that emotion can be viewed as an expression of multimodal experience. In this paper, we explore the learning of highly non-linear relationships that exist among low-level features across different modalities for emotion prediction. Using the deep Bolzmann machine (DBM), a joint density model over the space of multimodal inputs, including visual, auditory, and textual modalities, is developed. The model is trained directly using UGC data without any labeling efforts. While the model learns a joint representation over multimodal inputs, training samples in absence of certain modalities can also be leveraged. More importantly, the joint representation enables emotion-oriented cross-modal retrieval, for example, retrieval of videos using the text query “crazy cat”. The model does not restrict the types of input and output, and hence, in principle, emotion prediction and retrieval on any combinations of media are feasible. Extensive experiments on web videos and images show that the learnt joint representation could be very compact and be complementary to hand-crafted features, leading to performance improvement in both emotion classification and cross-modal retrieval.", "Despite growing research interest, emotion understanding for user-generated videos remains a challenging problem. Major obstacles include the diversity and complexity of video content, as well as the sparsity of expressed emotions. For the first time, we systematically study large-scale video emotion recognition by transferring deep feature encodings. 
In addition to the traditional, supervised recognition, we study the problem of zero-shot emotion recognition, where emotions in the test set are unseen during training. To cope with this task, we utilize knowledge transferred from auxiliary image and text corpora. A novel auxiliary Image Transfer Encoding (ITE) process is proposed to efficiently encode and generate video representation. We also thoroughly investigate different configurations of convolutional neural networks. Comprehensive experiments on multiple datasets demonstrate the effectiveness of our framework.", "Emotion is a key element in user-generated video. However, it is difficult to understand emotions conveyed in such videos due to the complex and unstructured nature of user-generated content and the sparsity of video frames expressing emotion. In this paper, for the first time, we propose a technique for transferring knowledge from heterogeneous external sources, including image and textual data, to facilitate three related tasks in understanding video emotion: emotion recognition, emotion attribution and emotion-oriented summarization. Specifically, our framework (1) learns a video encoding from an auxiliary emotional image dataset in order to improve supervised video emotion recognition, and (2) transfers knowledge from an auxiliary textual corpora for zero-shot recognition of emotion classes unseen during training. The proposed technique for knowledge transfer facilitates novel applications of emotion attribution and emotion-oriented summarization. A comprehensive set of experiments on multiple datasets demonstrate the effectiveness of our framework.", "User-generated video collections are expanding rapidly in recent years, and systems for automatic analysis of these collections are in high demands. While extensive research efforts have been devoted to recognizing semantics like \"birthday party\" and \"skiing\", little attempts have been made to understand the emotions carried by the videos, e.g., \"joy\" and \"sadness\". In this paper, we propose a comprehensive computational framework for predicting emotions in user-generated videos. We first introduce a rigorously designed dataset collected from popular video-sharing websites with manual annotations, which can serve as a valuable benchmark for future research. A large set of features are extracted from this dataset, ranging from popular low-level visual descriptors, audio features, to high-level semantic attributes. Results of a comprehensive set of experiments indicate that combining multiple types of features--such as the joint use of the audio and visual clues--is important, and attribute features such as those containing sentiment-level semantics are very effective." ] }
1812.04246
2949308370
Open-set classification is a problem of handling 'unknown' classes that are not contained in the training dataset, whereas traditional classifiers assume that only known classes appear in the test environment. Existing open-set classifiers rely on deep networks trained in a supervised manner on known classes in the training set; this causes specialization of learned representations to known classes and makes it hard to distinguish unknowns from knowns. In contrast, we train networks for joint classification and reconstruction of input data. This enhances the learned representation so as to preserve information useful for separating unknowns from knowns, as well as to discriminate classes of knowns. Our novel Classification-Reconstruction learning for Open-Set Recognition (CROSR) utilizes latent representations for reconstruction and enables robust unknown detection without harming the known-class classification accuracy. Extensive experiments reveal that the proposed method outperforms existing deep open-set classifiers on multiple standard datasets and is robust to diverse outliers.
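As a rough illustration of the joint classification-plus-reconstruction idea above, the following is a minimal sketch assuming PyTorch. It is my own simplified stand-in, not the CROSR architecture, and the unknown score that combines reconstruction error with the maximum softmax probability is an assumption made for illustration.

```python
# Hedged sketch: a network trained for both classification and reconstruction,
# whose reconstruction error / softmax confidence can flag unknown-class inputs.
import torch
import torch.nn as nn

class ClassifyAndReconstruct(nn.Module):
    def __init__(self, in_dim=784, latent=32, num_known=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent))
        self.classifier = nn.Linear(latent, num_known)
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.decoder(z), z

def unknown_score(model, x):
    # Larger reconstruction error and lower max softmax both hint at "unknown".
    logits, recon, _ = model(x)
    recon_err = ((recon - x) ** 2).mean(dim=1)
    max_prob = torch.softmax(logits, dim=1).max(dim=1).values
    return recon_err - max_prob  # higher means more likely an unknown class
```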
Some studies use networks trained in a supervised manner to detect anomalies that do not come from the training data distribution @cite_0 @cite_4 . However, their methods cannot simply be extended to open-set classifiers because they rely on input preprocessing, for example adversarial perturbation @cite_5 , an operation that may degrade known-class classification.
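For concreteness, below is a hedged sketch of the kind of input preprocessing referred to above: temperature scaling plus a small gradient-based input perturbation, in the spirit of the ODIN-style detectors. The hyperparameter values and the exact sign convention are assumptions, and this is not the cited implementation.

```python
# Sketch of temperature scaling + input perturbation for out-of-distribution
# scoring; T and epsilon below are illustrative values, not tuned settings.
import torch
import torch.nn.functional as F

def ood_score(model, x, temperature=1000.0, epsilon=0.0014):
    x = x.clone().requires_grad_(True)          # gradients are needed w.r.t. the input only
    logits = model(x) / temperature
    # Perturb the input in the direction that increases the max softmax score.
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    x_perturbed = x - epsilon * x.grad.sign()
    with torch.no_grad():
        probs = F.softmax(model(x_perturbed) / temperature, dim=1)
    return probs.max(dim=1).values               # low values suggest out-of-distribution
```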
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_4" ], "mid": [ "2531327146", "1945616565", "2963693742" ], "abstract": [ "We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.", "Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.", "We consider the problem of detecting out-of-distribution images in neural networks. We propose ODIN, a simple and effective method that does not require any change to a pre-trained neural network. Our method is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions of in- and out-of-distribution images, allowing for more effective detection. We show in a series of experiments that ODIN is compatible with diverse network architectures and datasets. It consistently outperforms the baseline approach by a large margin, establishing a new state-of-the-art performance on this task. For example, ODIN reduces the false positive rate from the baseline 34.7 to 4.3 on the DenseNet (applied to CIFAR-10 and Tiny-ImageNet) when the true positive rate is 95 ." ] }
1812.04427
2904422720
Zero-shot learning (ZSL) aims to recognize a set of unseen classes without any training images. The standard approach to ZSL requires a set of training images annotated with seen class labels and a semantic descriptor for seen/unseen classes (the attribute vector is the most widely used). Class label/attribute annotation is expensive; it thus severely limits the scalability of ZSL. In this paper, we define a new ZSL setting where only a few annotated images are collected from each seen class. This is clearly more challenging yet more realistic than the conventional ZSL setting. To overcome the resultant image-level attribute sparsity, we propose a novel inductive ZSL model termed sparse attribute propagation (SAP), which propagates attribute annotations to more unannotated images using sparse coding. This is followed by learning bidirectional projections between features and attributes for ZSL. An efficient solver is provided, together with a rigorous theoretical analysis of the algorithm. With our SAP, we show that a ZSL training dataset can be augmented by the abundant web images returned by an image search engine, to further improve model performance. Moreover, the general applicability of SAP is demonstrated by solving the social image annotation (SIA) problem. Extensive experiments show that our model achieves superior performance on both ZSL and SIA.
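The attribute propagation step described in this abstract can be pictured with a short sketch, here using scikit-learn's Lasso under the assumption that a sparse reconstruction of an unannotated image feature over the annotated features also transfers their attribute vectors. This is my own simplification, not the paper's actual objective or solver.

```python
# Illustrative simplification: propagate image-level attributes from a few
# annotated images to an unannotated image via sparse coding over features.
import numpy as np
from sklearn.linear_model import Lasso

def propagate_attributes(feats_annotated, attrs_annotated, feat_unannotated, alpha=0.05):
    """feats_annotated: (n, d) features with known attributes attrs_annotated: (n, k);
    feat_unannotated: (d,) feature of an image without attribute annotation."""
    # Express the unannotated feature as a sparse combination of annotated features.
    lasso = Lasso(alpha=alpha, positive=True, max_iter=5000)
    lasso.fit(feats_annotated.T, feat_unannotated)   # dictionary columns = annotated images
    coeffs = lasso.coef_                              # (n,), mostly zeros
    if coeffs.sum() <= 0:
        return attrs_annotated.mean(axis=0)           # fallback: uniform average
    # Propagate attributes with the same sparse weights.
    return (coeffs[:, None] * attrs_annotated).sum(axis=0) / coeffs.sum()
```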
A ZSL model typically exploits two types of human annotation for recognizing unseen classes without any training images: (1) the human-annotated class labels of training images from seen classes; (2) the human-defined semantic representations of seen/unseen classes. In particular, for attribute-based ZSL, annotation becomes even more expensive when image-level attribute annotations are collected. In the area of ZSL, much attention has been paid to reducing the cost of generating human-defined semantic representations: the semantic space can be formed using online textual documents @cite_46 @cite_28 , human gaze @cite_18 , or visual similes @cite_53 @cite_40 instead of attributes, which requires significantly less annotation. Different from these ZSL models, we focus on ZSL with less human annotation by defining a new ZSL setting in which only a few annotated images are collected from each seen class. Although our model is built on attributes in this paper, it can be easily generalized to other forms of semantic space @cite_46 @cite_18 @cite_40 @cite_28 to further reduce the annotation cost. To the best of our knowledge, we are the first to define this new ZSL setting with only a few annotated seen-class images.
{ "cite_N": [ "@cite_18", "@cite_28", "@cite_53", "@cite_40", "@cite_46" ], "mid": [ "2407797316", "2963955958", "2581186283", "2766468550", "" ], "abstract": [ "Zero-shot image classification using auxiliary information, such as attributes describing discriminative object properties, requires time-consuming annotation by domain experts. We instead propose a method that relies on human gaze as auxiliary information, exploiting that even non-expert users have a natural ability to judge class membership. We present a data collection paradigm that involves a discrimination task to increase the information content obtained from gaze data. Our method extracts discriminative descriptors from the data and learns a compatibility function between image and gaze using three novel gaze embeddings: Gaze Histograms (GH), Gaze Features with Grid (GFG) and Gaze Features with Sequence (GFS). We introduce two new gaze-annotated datasets for fine-grained image classification and show that human gaze data is indeed class discriminative, provides a competitive alternative to expert-annotated attributes, and outperforms other baselines for zero-shot image classification.", "Classifying a visual concept merely from its associated online textual source, such as a Wikipedia article, is an attractive research topic in zero-shot learning because it alleviates the burden of manually collecting semantic attributes. Recent work has pursued this approach by exploring various ways of connecting the visual and text domains. In this paper, we revisit this idea by going further to consider one important factor: the textual representation is usually too noisy for the zero-shot learning application. This observation motivates us to design a simple yet effective zero-shot learning method that is capable of suppressing noise in the text. Specifically, we propose an l2,1-norm based objective function which can simultaneously suppress the noisy signal in the text and learn a function to match the text document and visual features. We also develop an optimization algorithm to efficiently solve the resulting problem. By conducting experiments on two large datasets, we demonstrate that the proposed method significantly outperforms those competing methods which rely on online information sources but with no explicit noise suppression. Furthermore, we make an in-depth analysis of the proposed method and provide insight as to what kind of information in documents is useful for zero-shot learning.", "Learning visual attributes is an effective approach for zero-shot recognition. However, existing methods are restricted to learning explicitly nameable attributes and cannot tell which attributes are more important to the recognition task. In this paper, we propose a unified framework named Grouped Simile Ensemble (GSE). We claim our contributions as follows. 1) We propose to substitute explicit attribute annotation by similes, which are more natural expressions that can describe complex unseen classes. Similes do not involve extra concepts of attributes, i.e. only exemplars of seen classes are needed. We provide an efficient scenario to annotate similes for two benchmark datasets, AwA and aPY. 2) We propose a graph-cut-based class clustering algorithm to effectively discover implicit attributes from the similes. 3) Our GSE can automatically find the most effective simile groups to make the prediction. 
On both datasets, extensive experimental results manifest that our approach can significantly improve the performance over the state-of-the-art methods.", "Existing image classification systems often suffer from re-training models for novel unseen classes. Zero-shot learning (ZSL) aims to recognise these unseen classes directly using trained models with a further inference procedure. However, existing approaches highly rely on human-defined class-attribute associations to achieve the inference, which significantly increases the annotation cost. This paper aims to address ZSL on non-attribute tasks, i.e. only training images with labels are used as most of the supervised settings. Our main contributions are: 1) to circumvent expensive attributes, we propose to use semantic similes that directly indicate the unseen-to-seen associations; 2) a novel similarity-based representation is proposed to represent both visual images and semantic similes in a unified embedding space; 3) in order to reduce the annotation cost, we use only a few similes to infer a class-level prototype for each unseen class. On two popular benchmarks, AwA and aPY, extensive experiments manifest that our method can significantly improve the state-of-the-art results using only two similes for each unseen class. Furthermore, we revisit the Caltech 101 dataset without attributes. Our ZSL results can exceed that of previous supervised methods.", "" ] }
1812.04427
2904422720
Zero-shot learning (ZSL) aims to recognize a set of unseen classes without any training images. The standard approach to ZSL requires a set of training images annotated with seen class labels and a semantic descriptor for seen/unseen classes (the attribute vector is the most widely used). Class label/attribute annotation is expensive; it thus severely limits the scalability of ZSL. In this paper, we define a new ZSL setting where only a few annotated images are collected from each seen class. This is clearly more challenging yet more realistic than the conventional ZSL setting. To overcome the resultant image-level attribute sparsity, we propose a novel inductive ZSL model termed sparse attribute propagation (SAP), which propagates attribute annotations to more unannotated images using sparse coding. This is followed by learning bidirectional projections between features and attributes for ZSL. An efficient solver is provided, together with a rigorous theoretical analysis of the algorithm. With our SAP, we show that a ZSL training dataset can be augmented by the abundant web images returned by an image search engine, to further improve model performance. Moreover, the general applicability of SAP is demonstrated by solving the social image annotation (SIA) problem. Extensive experiments show that our model achieves superior performance on both ZSL and SIA.
In computer vision, web images have been widely used to promote the performance of existing recognition models, as in @cite_43 @cite_11 @cite_31 @cite_27 . However, less attention has been paid to exploiting web images for ZSL. Two exceptions are @cite_34 , where web images are utilized to augment the unseen-class data, and @cite_32 , where they are used to discover event composition knowledge for zero-shot event detection. In this work, although web images are also employed as external data, our model is quite different from @cite_34 in that we do not search for web images of the unseen classes, since doing so is against the zero-shot setting.
{ "cite_N": [ "@cite_32", "@cite_43", "@cite_27", "@cite_31", "@cite_34", "@cite_11" ], "mid": [ "2604924528", "2107250100", "2471581439", "2796418006", "2798913983", "2122084318" ], "abstract": [ "", "Most current image categorization methods require large collections of manually annotated training examples to learn accurate visual recognition models. The time-consuming human labeling effort effectively limits these approaches to recognition problems involving a small number of different object classes. In order to address this shortcoming, in recent years several authors have proposed to learn object classifiers from weakly-labeled Internet images, such as photos retrieved by keyword-based image search engines. While this strategy eliminates the need for human supervision, the recognition accuracies of these methods are considerably lower than those obtained with fully-supervised approaches, because of the noisy nature of the labels associated to Web data. In this paper we investigate and compare methods that learn image classifiers by combining very few manually annotated examples (e.g., 1-10 images per class) and a large number of weakly-labeled Web photos retrieved using keyword-based image search. We cast this as a domain adaptation problem: given a few strongly-labeled examples in a target domain (the manually annotated examples) and many source domain examples (the weakly-labeled Web photos), learn classifiers yielding small generalization error on the target domain. Our experiments demonstrate that, for the same number of strongly-labeled examples, our domain adaptation approach produces significant recognition rate improvements over the best published results (e.g., 65 better when using 5 labeled training examples per class) and that our classifiers are one order of magnitude faster to learn and to evaluate than the best competing method, despite our use of large weakly-labeled data sets.", "In this study, we present a weakly supervised approach that discovers the discriminative structures of sketch images, given pairs of sketch images and web images. In contrast to traditional approaches that use global appearance features or relay on keypoint features, our aim is to automatically learn the shared latent structures that exist between sketch images and real images, even when there are significant appearance differences across its relevant real images. To accomplish this, we propose a deep convolutional neural network, named SketchNet. We firstly develop a triplet composed of sketch, positive and negative real image as the input of our neural network. To discover the coherent visual structures between the sketch and its positive pairs, we introduce the softmax as the loss function. Then a ranking mechanism is introduced to make the positive pairs obtain a higher score comparing over negative ones to achieve robust representation. Finally, we formalize above-mentioned constrains into the unified objective function, and create an ensemble feature representation to describe the sketch images. Experiments on the TUBerlin sketch benchmark demonstrate the effectiveness of our model and show that deep feature representation brings substantial improvements over other state-of-the-art methods on sketch classification.", "Learning from web data is increasingly popular due to abundant free web resources. 
However, the performance gap between webly supervised learning and traditional supervised learning is still very large, due to the label noise of web data as well as the domain shift between web data and test data. To fill this gap, most existing methods propose to purify or augment web data using instance-level supervision, which generally requires heavy annotation. Instead, we propose to address the label noise and domain shift by using more accessible category-level supervision. In particular, we build our deep probabilistic framework upon variational autoencoder (VAE), in which classification network and VAE can jointly leverage category-level hybrid information. Then, we extend our method for domain adaptation followed by our low-rank refinement strategy. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our proposed method.", "Fine-grained image classification, which targets at distinguishing subtle distinctions among various subordinate categories, remains a very difficult task due to the high annotation cost of enormous fine-grained categories. To cope with the scarcity of well-labeled training images, existing works mainly follow two research directions: 1) utilize freely available web images without human annotation; 2) only annotate some fine-grained categories and transfer the knowledge to other fine-grained categories, which falls into the scope of zero-shot learning (ZSL). However, the above two directions have their own drawbacks. For the first direction, the labels of web images are very noisy and the data distribution between web images and test images are considerably different. For the second direction, the performance gap between ZSL and traditional supervised learning is still very large. The drawbacks of the above two directions motivate us to design a new framework which can jointly leverage both web data and auxiliary labeled categories to predict the test categories that are not associated with any well-labeled training images. Comprehensive experiments on three benchmark datasets demonstrate the effectiveness of our proposed framework.", "Recent work has demonstrated the effectiveness of domain adaptation methods for computer vision applications. In this work, we propose a new multiple source domain adaptation method called Domain Selection Machine (DSM) for event recognition in consumer videos by leveraging a large number of loosely labeled web images from different sources (e.g., Flickr.com and Photosig.com), in which there are no labeled consumer videos. Specifically, we first train a set of SVM classifiers (referred to as source classifiers) by using the SIFT features of web images from different source domains. We propose a new parametric target decision function to effectively integrate the static SIFT features from web images video keyframes and the spacetime (ST) features from consumer videos. In order to select the most relevant source domains, we further introduce a new data-dependent regularizer into the objective of Support Vector Regression (SVR) using the ∊-insensitive loss, which enforces the target classifier shares similar decision values on the unlabeled consumer videos with the selected source classifiers. Moreover, we develop an alternating optimization algorithm to iteratively solve the target decision function and a domain selection vector which indicates the most relevant source domains. 
Extensive experiments on three real-world datasets demonstrate the effectiveness of our proposed method DSM over the state-of-the-art by a performance gain up to 46.41 ." ] }
1812.04427
2904422720
Zero-shot learning (ZSL) aims to recognize a set of unseen classes without any training images. The standard approach to ZSL requires a set of training images annotated with seen class labels and a semantic descriptor for seen/unseen classes (the attribute vector is the most widely used). Class label/attribute annotation is expensive; it thus severely limits the scalability of ZSL. In this paper, we define a new ZSL setting where only a few annotated images are collected from each seen class. This is clearly more challenging yet more realistic than the conventional ZSL setting. To overcome the resultant image-level attribute sparsity, we propose a novel inductive ZSL model termed sparse attribute propagation (SAP), which propagates attribute annotations to more unannotated images using sparse coding. This is followed by learning bidirectional projections between features and attributes for ZSL. An efficient solver is provided, together with a rigorous theoretical analysis of the algorithm. With our SAP, we show that a ZSL training dataset can be augmented by the abundant web images returned by an image search engine, to further improve model performance. Moreover, the general applicability of SAP is demonstrated by solving the social image annotation (SIA) problem. Extensive experiments show that our model achieves superior performance on both ZSL and SIA.
A number of recent works on image annotation have attempted to improve performance by exploiting side information collected from social media websites, a task we call social image annotation (SIA) in this paper. The user-provided side information can be extracted from noisy tags @cite_45 @cite_50 and group labels @cite_21 . By forming the semantic space from the social tags, our algorithm, originally developed for ZSL, can also be generalized to SIA. Although label correlation is not considered in our model, it is shown to generally outperform the state-of-the-art alternatives @cite_9 @cite_35 @cite_21 @cite_50 that employ the well-known recurrent neural network (RNN) @cite_4 to model label correlation for SIA (see Table ).
{ "cite_N": [ "@cite_35", "@cite_4", "@cite_9", "@cite_21", "@cite_45", "@cite_50" ], "mid": [ "2963020325", "2122585011", "2963745697", "2963513598", "1908139891", "2549365021" ], "abstract": [ "Automatic image annotation has been an important research topic in facilitating large scale image management and retrieval. Existing methods focus on learning image-tag correlation or correlation between tags to improve annotation accuracy. However, most of these methods evaluate their performance using top-k retrieval performance, where k is fixed. Although such setting gives convenience for comparing different methods, it is not the natural way that humans annotate images. The number of annotated tags should depend on image contents. Inspired by the recent progress in machine translation and image captioning, we propose a novel Recurrent Image Annotator (RIA) model that forms image annotation task as a sequence generation problem so that RIA can natively predict the proper length of tags according to image contents. We evaluate the proposed model on various image annotation datasets. In addition to comparing our model with existing methods using the conventional top-k evaluation measures, we also provide our model as a high quality baseline for the arbitrary length image tagging task. Moreover, the results of our experiments show that the order of tags in training phase has a great impact on the final annotation performance.", "Recognizing lines of unconstrained handwritten text is a challenging task. The difficulty of segmenting cursive or overlapping characters, combined with the need to exploit surrounding context, has led to low recognition rates for even the best current recognizers. Most recent progress in the field has been made either through improved preprocessing or through advances in language modeling. Relatively little work has been done on the basic recognition algorithms. Indeed, most systems rely on the same hidden Markov models that have been used for decades in speech and handwriting recognition, despite their well-known shortcomings. This paper proposes an alternative approach based on a novel type of recurrent neural network, specifically designed for sequence labeling tasks where the data is hard to segment and contains long-range bidirectional interdependencies. In experiments on two large unconstrained handwriting databases, our approach achieves word recognition accuracies of 79.7 percent on online data and 74.1 percent on offline data, significantly outperforming a state-of-the-art HMM-based system. In addition, we demonstrate the network's robustness to lexicon size, measure the individual influence of its hidden layers, and analyze its use of context. Last, we provide an in-depth discussion of the differences between the network and HMMs, suggesting reasons for the network's superior performance.", "While deep convolutional neural networks (CNNs) have shown a great success in single-label image classification, it is important to note that real world images generally contain multiple labels, which could correspond to different objects, scenes, actions and attributes in an image. Traditional approaches to multi-label image classification learn independent classifiers for each category and employ ranking or thresholding on the classification results. These techniques, although working well, fail to explicitly exploit the label dependencies in an image. In this paper, we utilize recurrent neural networks (RNNs) to address this problem. 
Combined with CNNs, the proposed CNN-RNN framework learns a joint image-label embedding to characterize the semantic label dependency as well as the image-label relevance, and it can be trained end-to-end from scratch to integrate both information in a unified framework. Experimental results on public benchmark datasets demonstrate that the proposed architecture achieves better performance than the state-of-the-art multi-label classification models.", "Images of scenes have various objects as well as abundant attributes, and diverse levels of visual categorization are possible. A natural image could be assigned with finegrained labels that describe major components, coarsegrained labels that depict high level abstraction, or a set of labels that reveal attributes. Such categorization at different concept layers can be modeled with label graphs encoding label information. In this paper, we exploit this rich information with a state-of-art deep learning framework, and propose a generic structured model that leverages diverse label relations to improve image classification performance. Our approach employs a novel stacked label prediction neural network, capturing both inter-level and intra-level label semantics. We evaluate our method on benchmark image datasets, and empirical results illustrate the efficacy of our model.", "Some images that are difficult to recognize on their own may become more clear in the context of a neighborhood of related images with similar social-network metadata. We build on this intuition to improve multilabel image annotation. Our model uses image metadata nonparametrically to generate neighborhoods of related images using Jaccard similarities, then uses a deep neural network to blend visual information from the image and its neighbors. Prior work typically models image metadata parametrically, in contrast, our nonparametric treatment allows our model to perform well even when the vocabulary of metadata changes between training and testing. We perform comprehensive experiments on the NUS-WIDE dataset, where we show that our model outperforms state-of-the-art methods for multilabel image annotation even when our model is forced to generalize to new types of metadata.", "The CNN-RNN design pattern is increasingly widely applied in a variety of image annotation tasks including multi-label classification and captioning. Existing models use the weakly semantic CNN hidden layer or its transform as the image embedding that provides the interface between the CNN and RNN. This leaves the RNN overstretched with two jobs: predicting the visual concepts and modelling their correlations for generating structured annotation output. Importantly this makes the end-to-end training of the CNN and RNN slow and ineffective due to the difficulty of back propagating gradients through the RNN to train the CNN. We propose a simple modification to the design pattern that makes learning more effective and efficient. Specifically, we propose to use a semantically regularised embedding layer as the interface between the CNN and RNN. Regularising the interface can partially or completely decouple the learning problems, allowing each to be more effectively trained and jointly training much more efficient. Extensive experiments show that state-of-the art performance is achieved on multi-label classification as well as image captioning." ] }
1812.04103
2904245485
An important step in early brain development study is to perform automatic segmentation of infant brain magnetic resonance (MR) images into cerebrospinal fluid (CSF), gray matter (GM) and white matter (WM) regions. This task is especially challenging in the isointense stage (approximately 6-8 months of age) when GM and WM exhibit similar levels of intensities in MR images. Deep learning has shown its great promise in various image segmentation tasks. However, existing models do not have an efficient and effective way to aggregate global information. They also suffer from information loss during up-sampling operations. In this work, we address these problems by proposing a global aggregation block, which can be flexibly used for global information fusion. We build a novel model based on 3D U-Net to make fast and accurate voxel-wise dense prediction. We perform thorough experiments, and results indicate that our model outperforms previous best models significantly on 3D multimodality isointense infant brain MR image segmentation.
The U-Net @cite_42 architecture incorporates both local and global contextual information through its encoding-decoding process. In the past several years, many variants of U-Net have been developed and have achieved improved performance on biomedical image segmentation. For example, FusionNet @cite_27 , the residual deconvolutional network (RDN) @cite_1 and the residual symmetric U-Net @cite_2 addressed 2D electron microscopy image segmentation by building U-Net-based networks with additional short-range residual connections @cite_7 . In addition, U-Net was extended from 2D to 3D for volumetric biomedical images, leading to models like 3D U-Net @cite_11 , V-Net @cite_50 , and CC-3D-FCN @cite_9 . Meanwhile, DeepMedic @cite_3 explored another way to fuse local and global contextual information by removing the decoder and employing a dual-pathway architecture. However, without the decoder, the spatial size of the output is smaller than in U-Net-based models, which harms inference efficiency since more patches need to be processed. DeepMedic has been outperformed by U-Net-based models like CC-3D-FCN @cite_9 . In this work, we unify previous models and employ the 3D U-Net architecture with short-range residual connections as our basic framework.
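To make the basic pattern above concrete, here is a minimal sketch assuming PyTorch: 3D convolutions, short-range residual connections inside each block, and a long-range encoder-decoder skip. The channel counts and depth are illustrative; this is not the proposed model or any of the cited architectures.

```python
# Minimal 3D U-Net-style sketch with residual blocks; assumes even spatial sizes.
import torch
import torch.nn as nn

class ResBlock3d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1), nn.BatchNorm3d(channels), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.BatchNorm3d(channels))
    def forward(self, x):
        return torch.relu(x + self.body(x))  # short-range residual connection

class TinyUNet3d(nn.Module):
    def __init__(self, in_ch=2, out_ch=4, base=16):  # e.g. T1/T2 inputs, 4 tissue labels
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(in_ch, base, 3, padding=1), ResBlock3d(base))
        self.down = nn.Conv3d(base, 2 * base, 2, stride=2)         # encoder downsampling
        self.bottleneck = ResBlock3d(2 * base)
        self.up = nn.ConvTranspose3d(2 * base, base, 2, stride=2)  # decoder upsampling
        self.dec = nn.Sequential(ResBlock3d(2 * base), nn.Conv3d(2 * base, out_ch, 1))
    def forward(self, x):
        e = self.enc(x)
        b = self.up(self.bottleneck(self.down(e)))
        return self.dec(torch.cat([e, b], dim=1))   # long-range U-Net skip connection
```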
{ "cite_N": [ "@cite_7", "@cite_9", "@cite_42", "@cite_1", "@cite_3", "@cite_27", "@cite_50", "@cite_2", "@cite_11" ], "mid": [ "2949650786", "2791155853", "2952232639", "2521803624", "", "2582996697", "2432481613", "2621042378", "2464708700" ], "abstract": [ "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "Accurate segmentation of infant brain images into different regions of interest is one of the most important fundamental steps in studying early brain development. In the isointense phase (approximately 6–8 months of age), white matter and gray matter exhibit similar levels of intensities in magnetic resonance (MR) images, due to the ongoing myelination and maturation. This results in extremely low tissue contrast and thus makes tissue segmentation very challenging. Existing methods for tissue segmentation in this isointense phase usually employ patch-based sparse labeling on single modality. To address the challenge, we propose a novel 3-D multimodal fully convolutional network (FCN) architecture for segmentation of isointense phase brain MR images. Specifically, we extend the conventional FCN architectures from 2-D to 3-D, and, rather than directly using FCN, we intuitively integrate coarse (naturally high-resolution) and dense (highly semantic) feature maps to better model tiny tissue regions, in addition, we further propose a transformation module to better connect the aggregating layers; we also propose a fusion module to better serve the fusion of feature maps. We compare the performance of our approach with several baseline and state-of-the-art methods on two sets of isointense phase brain images. The comparison results show that our proposed 3-D multimodal FCN model outperforms all previous methods by a large margin in terms of segmentation accuracy. In addition, the proposed framework also achieves faster segmentation results compared to all other methods. 
Our experiments further demonstrate that: 1) carefully integrating coarse and dense feature maps can considerably improve the segmentation performance; 2) batch normalization can speed up the convergence of the networks, especially when hierarchical feature aggregations occur; and 3) integrating multimodal information can further boost the segmentation performance.", "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL .", "Accurate reconstruction of anatomical connections between neurons in the brain using electron microscopy (EM) images is considered to be the gold standard for circuit mapping. A key step in obtaining the reconstruction is the ability to automatically segment neurons with a precision close to human-level performance. Despite the recent technical advances in EM image segmentation, most of them rely on hand-crafted features to some extent that are specific to the data, limiting their ability to generalize. Here, we propose a simple yet powerful technique for EM image segmentation that is trained end-to-end and does not rely on prior knowledge of the data. Our proposed residual deconvolutional network consists of two information pathways that capture full-resolution features and contextual information, respectively. We showed that the proposed model is very effective in achieving the conflicting goals in dense output prediction; namely preserving full-resolution predictions and including sufficient contextual information. We applied our method to the ongoing open challenge of 3D neurite segmentation in EM images. Our method achieved one of the top results on this open challenge. We demonstrated the generality of our technique by evaluating it on the 2D neurite segmentation challenge dataset where consistently high performance was obtained. We thus expect our method to generalize well to other dense output prediction problems.", "", "Electron microscopic connectomics is an ambitious research direction with the goal of studying comprehensive brain connectivity maps by using high-throughput, nano-scale microscopy. One of the main challenges in connectomics research is developing scalable image analysis algorithms that require minimal user intervention. Recently, deep learning has drawn much attention in computer vision because of its exceptional performance in image classification tasks. For this reason, its application to connectomic analyses holds great promise, as well. 
In this paper, we introduce a novel deep neural network architecture, FusionNet, for the automatic segmentation of neuronal structures in connectomics data. FusionNet leverages the latest advances in machine learning, such as semantic segmentation and residual neural networks, with the novel introduction of summation-based skip connections to allow a much deeper network architecture for a more accurate segmentation. We demonstrate the performance of the proposed method by comparing it with state-of-the-art electron microscopy (EM) segmentation methods from the ISBI EM segmentation challenge. We also show the segmentation results on two different tasks including cell membrane and cell body segmentation and a statistical analysis of cell morphology.", "Convolutional Neural Networks (CNNs) have been recently employed to solve problems from both the computer vision and medical image analysis fields. Despite their popularity, most approaches are only able to process 2D images while most medical data used in clinical practice consists of 3D volumes. In this work we propose an approach to 3D image segmentation based on a volumetric, fully convolutional, neural network. Our CNN is trained end-to-end on MRI volumes depicting prostate, and learns to predict segmentation for the whole volume at once. We introduce a novel objective function, that we optimise during training, based on Dice coefficient. In this way we can deal with situations where there is a strong imbalance between the number of foreground and background voxels. To cope with the limited number of annotated volumes available for training, we augment the data applying random non-linear transformations and histogram matching. We show in our experimental evaluation that our approach achieves good performances on challenging test data while requiring only a fraction of the processing time needed by other previous methods.", "For the past decade, convolutional networks have been used for 3D reconstruction of neurons from electron microscopic (EM) brain images. Recent years have seen great improvements in accuracy, as evidenced by submissions to the SNEMI3D benchmark challenge. Here we report the first submission to surpass the estimate of human accuracy provided by the SNEMI3D leaderboard. A variant of 3D U-Net is trained on a primary task of predicting affinities between nearest neighbor voxels, and an auxiliary task of predicting long-range affinities. The training data is augmented by simulated image defects. The nearest neighbor affinities are used to create an oversegmentation, and then supervoxels are greedily agglomerated based on mean affinity. The resulting SNEMI3D score exceeds the estimate of human accuracy by a large margin. While one should be cautious about extrapolating from the SNEMI3D benchmark to real-world accuracy of large-scale neural circuit reconstruction, our result inspires optimism that the goal of full automation may be realizable in the future.", "This paper introduces a network for volumetric segmentation that learns from sparsely annotated volumetric images. We outline two attractive use cases of this method: (1) In a semi-automated setup, the user annotates some slices in the volume to be segmented. The network learns from these sparse annotations and provides a dense 3D segmentation. (2) In a fully-automated setup, we assume that a representative, sparsely annotated training set exists. Trained on this data set, the network densely segments new volumetric images. 
The proposed network extends the previous u-net architecture from by replacing all 2D operations with their 3D counterparts. The implementation performs on-the-fly elastic deformations for efficient data augmentation during training. It is trained end-to-end from scratch, i.e., no pre-trained network is required. We test the performance of the proposed method on a complex, highly variable 3D structure, the Xenopus kidney, and achieve good results for both use cases." ] }
1812.04093
2904585437
Recent progress in the field of robotic manipulation has generated interest in fully automatic object packing in warehouses. This paper proposes a formulation of the packing problem that is tailored to the automated warehousing domain. Besides minimizing waste space inside a container, the problem requires stability of the object pile during packing and the feasibility of the robot motion executing the placement plans. To address this problem, a set of constraints are formulated, and a constructive packing pipeline is proposed to solve for these constraints. The pipeline is able to pack geometrically complex, non-convex objects with stability while satisfying robot constraints. In particular, a new 3D positioning heuristic called Heightmap-Minimization heuristic is proposed, and heightmaps are used to speed up the search. Experimental evaluation of the method is conducted with a realistic physical simulator on a dataset of scanned real-world items, demonstrating stable and high-quality packing plans compared with other 3D packing methods.
Popular variations of the cutting and packing problem include the bin and strip packing problems, the knapsack problem, the container loading problem, and others. Most existing research on cutting and packing handles floating 2D and 3D rectilinear objects under non-overlapping constraints. In some settings, such problems can be formulated and solved to optimality using exact algorithms. One example of these state-of-the-art exact algorithms is the branch-and-bound solution to the 3D bin packing problem proposed by @cite_9 @cite_10 , whose work is further extended by many, including @cite_20 and @cite_6 . Exact algorithms, although capable of finding the optimal solution given unlimited time, tackle a strongly NP-hard problem @cite_26 and do not guarantee optimal results within a reasonable amount of time, especially when a large number of instances are involved @cite_10 . Therefore, heuristic and metaheuristic approaches have been developed over the years, such as the popular “Bottom-Left (BL)” heuristic @cite_5 and the Best-Fit-Decreasing heuristic @cite_23 .
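As a worked illustration of the heuristic family mentioned above, here is a small sketch of the Best-Fit-Decreasing rule in its classic one-dimensional form; the 3D variants discussed in the cited work add geometry and placement rules on top of this idea, and the example instance below is made up.

```python
# Best-Fit-Decreasing (1D illustration): sort items by size and place each one
# in the feasible open bin with the least remaining space.
def best_fit_decreasing(item_sizes, bin_capacity):
    bins = []         # remaining capacity of each open bin
    assignment = []   # (item_size, bin_index)
    for size in sorted(item_sizes, reverse=True):
        # Pick the open bin whose leftover space is smallest but still fits.
        candidates = [(cap, i) for i, cap in enumerate(bins) if cap >= size]
        if candidates:
            _, best = min(candidates)
            bins[best] -= size
        else:
            bins.append(bin_capacity - size)  # open a new bin
            best = len(bins) - 1
        assignment.append((size, best))
    return bins, assignment

# Example: items of sizes 5, 7, 5, 2, 4, 2 with bin capacity 10 end up in
# 3 bins: [7, 2], [5, 5], [4, 2], which matches the lower bound ceil(25 / 10).
print(best_fit_decreasing([5, 7, 5, 2, 4, 2], 10))
```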
{ "cite_N": [ "@cite_26", "@cite_9", "@cite_6", "@cite_23", "@cite_5", "@cite_10", "@cite_20" ], "mid": [ "2891212941", "2074821977", "2161926431", "2043862089", "2019318803", "2101057470", "2053901957" ], "abstract": [ "", "Given a set of rectangular pieces to be cut from an unlimited number of standardized stock pieces (bins), the Two-Dimensional Finite Bin Packing Problem is to determine the minimum number of stock pieces that provide all the pieces. The problem is NP-hard in the strong sense and finds many practical applications in the cutting and packing area. We analyze a well-known lower bound and determine its worst-case performance. We propose not lower bounds which are used within a branch-and-bound algorithm for the exact solution of the problem. Extensive computational testing on problem instances from the literature involving up to 120 pieces shows the effectiveness of the proposed approach.", "One of the main issues in addressing three-dimensional packing problems is finding an efficient and accurate definition of the points at which to place the items inside the bins, because the performance of exact and heuristic solution methods is actually strongly influenced by the choice of a placement rule. We introduce the extreme point concept and present a new extreme point-based rule for packing items inside a three-dimensional container. The extreme point rule is independent from the particular packing problem addressed and can handle additional constraints, such as fixing the position of the items. The new extreme point rule is also used to derive new constructive heuristics for the three-dimensional bin-packing problem. Extensive computational results show the effectiveness of the new heuristics compared to state-of-the-art results. Moreover, the same heuristics, when applied to the two-dimensional bin-packing problem, outperform those specifically designed for the problem.", "The following abstract problem models several practical problems in computer science and operations research: given a list L of real numbers between 0 and l, place the elements of L into a minimum number @math of “bins” so that no bin contains numbers whose sum exceeds l. Motivated by the likelihood that an excessive amount of computation will be required by any algorithm which actually determines an optimal placement, we examine the performance of a number of simple algorithms which obtain “good” placements. The first-fit algorithm places each number, in succession, into the first bin in which it fits. The best-fit algorithm places each number, in succession, into the most nearly full bin in which it fits. We show that neither the first-fit nor the best-fit algorithm will ever use more than @math bins. Furthermore, we outline a proof that, if L is in decreasing order, then neither algorithm will use more than @math bins. Examples are given to show that both upper bou...", "We consider problems of packing an arbitrary collection of rectangular pieces into an open-ended, rectangular bin so as to minimize the height achieved by any piece. This problem has numerous applications in operations research and studies of computer operation. We devise efficient approximation algorithms, study their limitations, and derive worst-case bounds on the performance of the packings they produce.", "The problem addressed in this paper is that of orthogonally packing a given set of rectangular-shaped items into the minimum number of three-dimensional rectangular bins. 
The problem is strongly NP-hard and extremely difficult to solve in practice. Lower bounds are discussed, and it is proved that the asymptotic worst-case performance ratio of the continuous lower bound is ?. An exact algorithm for filling a single bin is developed, leading to the definition of an exact branch-and-bound algorithm for the three-dimensional bin packing problem, which also incorporates original approximation algorithms. Extensive computational results, involving instances with up to 90 items, are presented: It is shown that many instances can be solved to optimality within a reasonable time limit.", "In the three-dimensional bin packing problem the task is to orthogonally pack a given set of rectangular items into a minimum number of three-dimensional rectangular bins. We give a characterization of the algorithm proposed by (2000) for the exact solution of the problem, showing that not all orthogonal packings can be generated by the proposed algorithm. The packings, however, have the property of being robot packings, which is relevant in practical settings. References to the modified algorithm, which solves the orthogonal as well as robot packable three-dimensional problem, are given." ] }
1812.04093
2904585437
Recent progress in the field of robotic manipulation has generated interest in fully automatic object packing in warehouses. This paper proposes a formulation of the packing problem that is tailored to the automated warehousing domain. Besides minimizing waste space inside a container, the problem requires stability of the object pile during packing and the feasibility of the robot motion executing the placement plans. To address this problem, a set of constraints are formulated, and a constructive packing pipeline is proposed to solve for these constraints. The pipeline is able to pack geometrically complex, non-convex objects with stability while satisfying robot constraints. In particular, a new 3D positioning heuristic called Heightmap-Minimization heuristic is proposed, and heightmaps are used to speed up the search. Experimental evaluation of the method is conducted with a realistic physical simulator on a dataset of scanned real-world items, demonstrating stable and high-quality packing plans compared with other 3D packing methods.
On the other hand, irregular shape packing, often referred to as nesting, is a more recent variant of the cutting and packing problem. With non-rectilinear geometries, the search space is infinite, and few guidelines are available to narrow it down to a finite set of candidate placements. Metaheuristics such as Simulated Annealing (SA) @cite_8 @cite_3 @cite_22 and Guided Local Search (GLS) @cite_25 @cite_7 @cite_15 @cite_1 @cite_19 are the most popular tools for solving nesting problems. These methods commonly start from an initial placement and iteratively improve it by moving pieces within a neighborhood while minimizing an objective function (e.g., the total overlap in the layout). In addition to metaheuristic methods, recent work has also proposed constructive positioning heuristics for 3D irregular objects, such as Deepest-Bottom-Left-Fill (DBLF), which places each item in the deepest, bottom-most, left-most position, and Maximum Touching Area (MTA), which places an item in the position that maximizes the total contact area between its faces and the faces of other items @cite_16 .
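To make the iterative-improvement idea concrete, here is a hypothetical Python sketch of a simulated-annealing loop over piece placements; total_overlap and random_neighbor stand in for problem-specific geometry routines (e.g., no-fit-polygon based overlap computation) that are not shown and would need to be supplied.

import math
import random

def anneal_placement(placement, total_overlap, random_neighbor,
                     t_start=1.0, t_end=1e-3, alpha=0.95, moves_per_temp=100):
    """Generic simulated annealing over piece placements.

    placement       : initial positions/rotations of all pieces
    total_overlap   : callable returning the overlap penalty of a placement
    random_neighbor : callable returning a perturbed copy of a placement
    """
    best = current = placement
    best_cost = current_cost = total_overlap(current)
    t = t_start
    while t > t_end and best_cost > 0:
        for _ in range(moves_per_temp):
            candidate = random_neighbor(current)
            cost = total_overlap(candidate)
            # always accept improvements; accept worse moves with Boltzmann probability
            if cost < current_cost or random.random() < math.exp((current_cost - cost) / t):
                current, current_cost = candidate, cost
                if cost < best_cost:
                    best, best_cost = candidate, cost
        t *= alpha  # geometric cooling schedule
    return best, best_cost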
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_8", "@cite_1", "@cite_3", "@cite_19", "@cite_15", "@cite_16", "@cite_25" ], "mid": [ "", "", "2049142849", "296030155", "", "2017565019", "", "1548434108", "2158804684" ], "abstract": [ "", "", "Simulated annealing (statistical cooling) is applied to bin packing problems. Different cooling strategies are compared empirically and for a particular 100 item problem a solution is given which is most likely the best known so far.", "This paper presents a tabu search and best-fit decreasing (BFD) algorithms to address a real-world steel cutting problem from a retail steel distributor. It consists of cutting large steel blocks in order to obtain smaller tailored blocks ordered by clients. The problem is addressed as a cutting & packing problem, formulated as a 3-dimensional residual bin packing problem for minimization of stock variation. The performance of the proposed approaches is compared to an heuristic and ant colony optimization (ACO) algorithms. The proposed algorithms were able to reduce the stock variation by up to 179 . The comparison of results between the tabu search and BFD algorithm shows that a multiple order joint analysis benefits the optimization of the addressed objective.", "", "This paper considers a new variant of the two-dimensional bin packing problem where each rectangle is assigned a due date and each bin has a fixed processing time. Hence the objective is not only to minimize the number of bins, but also to minimize the maximum lateness of the rectangles. This problem is motivated by the cutting of stock sheets and the potential increased efficiency that might be gained by drawing on a larger pool of demand pieces by mixing orders, while also aiming to ensure a certain level of customer service. We propose a genetic algorithm for searching the solution space, which uses a new placement heuristic for decoding the gene based on the best fit heuristic designed for the strip packing problems. The genetic algorithm employs an innovative crossover operator that considers several different children from each pair of parents. Further, the dual objective is optimized hierarchically with the primary objective periodically alternating between maximum lateness and number of bins. As a result, the approach produces several non-dominated solutions with different trade-offs. Two further approaches are implemented. One is based on a previous Unified Tabu Search, suitably modified to tackle this revised problem. The other is randomized descent and serves as a benchmark for comparing the results. Comprehensive computational results are presented, which show that the Unified Tabu Search still works well in minimizing the bins, but the genetic algorithm performs slightly better. When also considering maximum lateness, the genetic algorithm is considerably better.", "", "In this paper, we describe two heuristics for the Single Vehicle Loading Problem (SVLP), which can handle practical constraints that are frequently encountered in the freight transportation industry, such as the servicing order of clients; item fragility; and the stability of the goods. The two heuristics, Deepest-Bottom-Left-Fill and Maximum Touching Area, are 3D extensions of natural heuristics that have previously only been applied to 2D packing problems. 
We employ these heuristics as part of a two-phase tabu search algorithm for the Three-Dimensional Loading Capacitated Vehicle Routing Problem (3L-CVRP), where the task is to serve all customers using a homogeneous fleet of vehicles at minimum traveling cost. The resultant algorithm produces mostly superior solutions to existing approaches, and appears to scale better with problem size.", "The three-dimensional bin-packing problem is the problem of orthogonally packing a set of boxes into a minimum number of three-dimensional bins. In this paper we present a heuristic algorithm based on guided local search. Starting with an upper bound on the number of bins obtained by a greedy heuristic, the presented algorithm iteratively decreases the number of bins, each time searching for a feasible packing of the boxes. The process terminates when a given time limit has been reached or the upper bound matches a precomputed lower bound. The algorithm can also be applied to two-dimensional bin-packing problems by having a constant depth for all boxes and bins. Computational experiments are reported for two- and three-dimensional instances with up to 200 boxes, showing that the algorithm on average finds better solutions than do heuristics from the literature." ] }
1812.04093
2904585437
Recent progress in the field of robotic manipulation has generated interest in fully automatic object packing in warehouses. This paper proposes a formulation of the packing problem that is tailored to the automated warehousing domain. Besides minimizing waste space inside a container, the problem requires stability of the object pile during packing and the feasibility of the robot motion executing the placement plans. To address this problem, a set of constraints are formulated, and a constructive packing pipeline is proposed to solve for these constraints. The pipeline is able to pack geometrically complex, non-convex objects with stability while satisfying robot constraints. In particular, a new 3D positioning heuristic called Heightmap-Minimization heuristic is proposed, and heightmaps are used to speed up the search. Experimental evaluation of the method is conducted with a realistic physical simulator on a dataset of scanned real-world items, demonstrating stable and high-quality packing plans compared with other 3D packing methods.
We also know of one packing work that takes robot manipulation feasibility into account @cite_20 , in which a variant of the orthogonal 3D box packing scheme is proposed such that no previously packed box lies in front of, to the right of, or above the box currently being placed, in order to avoid collisions with a vacuum gripper. Although this placement rule prevents the robot from colliding with previously packed boxes when the boxes are much larger than the vacuum gripper, it cannot be generalized to other gripper geometries (e.g., a parallel-jaw gripper) and does not consider other aspects of robot feasibility, such as kinematic and graspability constraints.
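The cited placement rule can be written as a simple feasibility predicate on axis-aligned boxes. The sketch below is an interpretation, not the published algorithm: each box is given by its minimum corner (x, y, z) and size (w, d, h), with x pointing toward the bin opening ("front"), y to the right, and z up, and a packed box counts as blocking only if it overlaps the corresponding face projection.

def overlaps_1d(a0, a1, b0, b1):
    return a0 < b1 and b0 < a1

def robot_accessible(new_box, packed_boxes):
    """True if no already-packed box blocks the new box from the front,
    the right, or above (an interpretation of the access rule above)."""
    nx, ny, nz, nw, nd, nh = new_box
    for (px, py, pz, pw, pd, ph) in packed_boxes:
        xy = overlaps_1d(nx, nx + nw, px, px + pw) and overlaps_1d(ny, ny + nd, py, py + pd)
        xz = overlaps_1d(nx, nx + nw, px, px + pw) and overlaps_1d(nz, nz + nh, pz, pz + ph)
        yz = overlaps_1d(ny, ny + nd, py, py + pd) and overlaps_1d(nz, nz + nh, pz, pz + ph)
        above    = xy and pz >= nz + nh   # packed box sits over the new box
        in_front = yz and px >= nx + nw   # packed box blocks the front face
        to_right = xz and py >= ny + nd   # packed box blocks the right face
        if above or in_front or to_right:
            return False
    return True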
{ "cite_N": [ "@cite_20" ], "mid": [ "2053901957" ], "abstract": [ "In the three-dimensional bin packing problem the task is to orthogonally pack a given set of rectangular items into a minimum number of three-dimensional rectangular bins. We give a characterization of the algorithm proposed by (2000) for the exact solution of the problem, showing that not all orthogonal packings can be generated by the proposed algorithm. The packings, however, have the property of being robot packings, which is relevant in practical settings. References to the modified algorithm, which solves the orthogonal as well as robot packable three-dimensional problem, are given." ] }
1812.04042
2953354325
We propose a novel single-image super-resolution approach based on the geostatistical method of kriging. Kriging is a zero-bias minimum-variance estimator that performs spatial interpolation based on a weighted average of known observations. Rather than solving for the kriging weights via the traditional method of inverting covariance matrices, we propose a supervised form in which we learn a deep network to generate said weights. We combine the kriging weight generation and kriging process into a joint network that can be learned end-to-end. Our network achieves competitive super-resolution results as other state-of-the-art methods. In addition, since the super-resolution process follows a known statistical framework, we are able to estimate bias and variance, something which is rarely possible for other deep networks.
Prior to the use of deep learning, SISR approaches applied variants of dictionary learning @cite_8 @cite_31 @cite_28 @cite_20 @cite_23 . Patches are extracted from the low-resolution image and mapped to their corresponding high-resolution versions, which are then stitched together to increase the image resolution. Other learning-based approaches to increasing image resolution include @cite_6 @cite_4 @cite_11 . State-of-the-art SISR methods are deep-learning-based @cite_2 @cite_0 @cite_22 @cite_15 @cite_24 . The VDSR @cite_0 and DRCN @cite_22 approaches showed the benefits of working with image residuals for super-resolution. The DRRN approach @cite_24 generalizes VDSR and concludes that the deeper the network, the better the super-resolved image. We also use a residual network in our approach, but unlike the other deep SISR methods, we solve for a set of filter weights that perform the super-resolution rather than directly estimating the HR image itself.
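As a hedged illustration of the residual-learning idea mentioned above (predicting the difference between a bicubically upsampled low-resolution input and the high-resolution target rather than the HR image directly), a minimal VDSR-style PyTorch sketch could look as follows; the depth and channel widths are illustrative and not the published configurations.

import torch.nn as nn

class ResidualSR(nn.Module):
    """Minimal VDSR-style network: the input is a bicubically upsampled
    low-resolution image and the network predicts the residual to add."""

    def __init__(self, channels=1, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, upsampled_lr):
        residual = self.body(upsampled_lr)
        return upsampled_lr + residual  # global skip connection

# Training would minimize, e.g., an L1/L2 loss between the output and the HR image.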
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_8", "@cite_28", "@cite_15", "@cite_6", "@cite_0", "@cite_24", "@cite_23", "@cite_2", "@cite_31", "@cite_20", "@cite_11" ], "mid": [ "1992408872", "2949079773", "", "", "2580080206", "1584320927", "", "", "2117865218", "", "", "2057065563", "" ], "abstract": [ "Image super-resolution (SR) reconstruction is essentially an ill-posed problem, so it is important to design an effective prior. For this purpose, we propose a novel image SR method by learning both non-local and local regularization priors from a given low-resolution image. The non-local prior takes advantage of the redundancy of similar patches in natural images, while the local prior assumes that a target pixel can be estimated by a weighted average of its neighbors. Based on the above considerations, we utilize the non-local means filter to learn a non-local prior and the steering kernel regression to learn a local prior. By assembling the two complementary regularization terms, we propose a maximum a posteriori probability framework for SR recovery. Thorough experimental results suggest that the proposed SR method can reconstruct higher quality results both quantitatively and perceptually.", "We propose an image super-resolution method (SR) using a deeply-recursive convolutional network (DRCN). Our network has a very deep recursive layer (up to 16 recursions). Increasing recursion depth can improve performance without introducing new parameters for additional convolutions. Albeit advantages, learning a DRCN is very hard with a standard gradient descent method due to exploding vanishing gradients. To ease the difficulty of training, we propose two extensions: recursive-supervision and skip-connection. Our method outperforms previous methods by a large margin.", "", "", "Single image super-resolution is the task of inferring a high-resolution image from a single low-resolution input. Traditionally, the performance of algorithms for this task is measured using pixel-wise reconstruction measures such as peak signal-to-noise ratio (PSNR) which have been shown to correlate poorly with the human perception of image quality. As a result, algorithms minimizing these metrics tend to produce over-smoothed images that lack high-frequency textures and do not look natural despite yielding high PSNR values. We propose a novel application of automated texture synthesis in combination with a perceptual loss focusing on creating realistic textures rather than optimizing for a pixel-accurate reproduction of ground truth images during training. By using feed-forward fully convolutional neural networks in an adversarial training setting, we achieve a significant boost in image quality at high magnification ratios. Extensive experiments on a number of datasets show the effectiveness of our approach, yielding state-of-the-art results in both quantitative and qualitative benchmarks.", "In this paper we propose an image super-resolution algorithm using wavelet-domain hidden Markov tree (HMT) model. Wavelet-domain HMT models the dependencies of multiscale wavelet coefficients through the state probabilities of wavelet coefficients, whose distribution densities can be approximated by the Gaussian mixture. Because wavelet-domain HMT accurately characterizes the statistics of real-world images, we reasonably specify it as the prior distribution and then formulate the image super-resolution problem as a constrained optimization problem. 
And the cycle-spinning technique is used to suppress the artifacts that may exist in the reconstructed high-resolution images. Quantitative error analyses are provided and several experimental images are shown for subjective assessment.", "", "", "As a powerful statistical image modeling technique, sparse representation has been successfully used in various image restoration applications. The success of sparse representation owes to the development of the l1-norm optimization techniques and the fact that natural images are intrinsically sparse in some domains. The image restoration quality largely depends on whether the employed sparse domain can represent well the underlying image. Considering that the contents can vary significantly across different images or different patches in a single image, we propose to learn various sets of bases from a precollected dataset of example image patches, and then, for a given patch to be processed, one set of bases are adaptively selected to characterize the local sparse domain. We further introduce two adaptive regularization terms into the sparse representation framework. First, a set of autoregressive (AR) models are learned from the dataset of example image patches. The best fitted AR models to a given patch are adaptively selected to regularize the image local structures. Second, the image nonlocal self-similarity is introduced as another regularization term. In addition, the sparsity regularization parameter is adaptively estimated for better image restoration performance. Extensive experiments on image deblurring and super-resolution validate that by using adaptive sparse domain selection and adaptive regularization, the proposed method achieves much better results than many state-of-the-art algorithms in terms of both PSNR and visual perception.", "", "", "We address single image super-resolution using a statistical prediction model based on sparse representations of low- and high-resolution image patches. The suggested model allows us to avoid any invariance assumption, which is a common practice in sparsity-based approaches treating this task. Prediction of high resolution patches is obtained via MMSE estimation and the resulting scheme has the useful interpretation of a feedforward neural network. To further enhance performance, we suggest data clustering and cascading several levels of the basic algorithm. We suggest a training scheme for the resulting network and demonstrate the capabilities of our algorithm, showing its advantages over existing methods based on a low- and high-resolution dictionary pair, in terms of computational complexity, numerical criteria, and visual appearance. The suggested approach offers a desirable compromise between low computational complexity and reconstruction quality, when comparing it with state-of-the-art methods for single image super-resolution.", "" ] }
1812.04063
2905407235
Randomized trials and observational studies, more often than not, run over a certain period of time during which the treatment effect evolves. Many conventional methods for estimating treatment effects are limited to the i.i.d. setting and are not suited for inferring the time dynamics of the treatment effect. The time series encountered in these settings are highly informative but often nonstationary due to the changing effects of treatment. This increases the difficulty of the task, since stationarity, a common assumption in time series analysis, cannot be reasonably assumed. Another challenge is the heterogeneity of the treatment effect when the treatment affects units differently. The task of estimating heterogeneous treatment effects from nonstationary and, in particular, interventional time series is highly relevant but remains largely unexplored. We propose Causal Transfer, a method which fits state-space models to observational-interventional data in order to learn the effect of the treatment and how it evolves over time. Causal Transfer does not assume the data to be stationary and can be applied to randomized trials and observational studies in which treatment is confounded. Causal Transfer adjusts the effect for possible confounders and transfers the learned effect to other time series and, thereby, estimates various forms of treatment effects, such as the average treatment effect (ATE), the sample average treatment effect (SATE), or the conditional average treatment effect (CATE). By learning the time dynamics of the effect, Causal Transfer can also predict the treatment effect for unobserved future time points and determine the long-term consequences of treatment.
A related method which uses state-space models for causal effect estimation is Causal Impact @cite_12 . Causal Impact infers the counterfactual of a treated univariate time series, that is, its outcome under no intervention. For this purpose, it requires a control time series: a covariate which is predictive of the time series of interest but not itself affected by the treatment. During the pre-period, Causal Impact learns the relationship between the response and the control time series by fitting a dynamic regression model. Causal Impact assumes that this learned relationship does not change due to treatment. It can therefore predict the counterfactual of the treated time series over the treatment (or post-intervention) period from the model fitted to the pre-period. The treatment effect is then estimated by subtracting the predicted untreated time series from the observed treated time series.
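The following hypothetical Python sketch illustrates the counterfactual logic described above, with an ordinary least-squares regression standing in for the Bayesian structural time-series model that Causal Impact actually uses; it is a simplification for exposition only.

import numpy as np

def counterfactual_effect(y, x, intervention_start):
    """Estimate a pointwise effect by fitting y ~ a + b*x on the pre-period
    and predicting the untreated counterfactual over the post-period."""
    y_pre, x_pre = y[:intervention_start], x[:intervention_start]
    # least-squares fit of the pre-period relationship between response and control
    X_pre = np.column_stack([np.ones_like(x_pre), x_pre])
    coef, *_ = np.linalg.lstsq(X_pre, y_pre, rcond=None)
    # predicted counterfactual (no-treatment) response in the post-period
    y_hat_post = coef[0] + coef[1] * x[intervention_start:]
    # pointwise effect: observed treated series minus predicted untreated series
    return y[intervention_start:] - y_hat_post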
{ "cite_N": [ "@cite_12" ], "mid": [ "2078639378" ], "abstract": [ "An important problem in econometrics and marketing is to infer the causal impact that a designed market intervention has exerted on an outcome metric over time. In order to allocate a given budget optimally, for example, an advertiser must determine the incremental contributions that dierent advertising campaigns have made to web searches, product installs, or sales. This paper proposes to infer causal impact on the basis of a diusion-regressi on state-space model that predicts the counterfactual market response that would have occurred had no intervention taken place. In con- trast to classical dierence-in-dier ences schemes, state-space models make it possible to (i) infer the temporal evolution of attributable impact, (ii) incorporate empirical priors on the parameters in a fully Bayesian treatment, and (iii) exibly accommodate multiple sources of variation, including the time-varying inuence of contemporane- ous covariates, i.e., synthetic controls. Using a Markov chain Monte Carlo algorithm for posterior inference, we illustrate the statistical properties of our approach on synthetic data. We then demonstrate its practical utility by evaluating the eect of an online advertising campaign on search-related site visits. We discuss the strengths and limitations of our approach in improving the accuracy of causal at- tribution, power analyses, and principled budget allocation." ] }
1812.04063
2905407235
Randomized trials and observational studies, more often than not, run over a certain period of time during which the treatment effect evolves. Many conventional methods for estimating treatment effects are limited to the i.i.d. setting and are not suited for inferring the time dynamics of the treatment effect. The time series encountered in these settings are highly informative but often nonstationary due to the changing effects of treatment. This increases the difficulty of the task, since stationarity, a common assumption in time series analysis, cannot be reasonably assumed. Another challenge is the heterogeneity of the treatment effect when the treatment affects units differently. The task of estimating heterogeneous treatment effects from nonstationary and, in particular, interventional time series is highly relevant but remains largely unexplored. We propose Causal Transfer, a method which fits state-space models to observational-interventional data in order to learn the effect of the treatment and how it evolves over time. Causal Transfer does not assume the data to be stationary and can be applied to randomized trials and observational studies in which treatment is confounded. Causal Transfer adjusts the effect for possible confounders and transfers the learned effect to other time series and, thereby, estimates various forms of treatment effects, such as the average treatment effect (ATE), the sample average treatment effect (SATE), or the conditional average treatment effect (CATE). By learning the time dynamics of the effect, Causal Transfer can also predict the treatment effect for unobserved future time points and determine the long-term consequences of treatment.
Marginal integration @cite_17 is another related method for causal effect estimation. The main difference is that the regression function in Equation is nonparametric and estimated with kernel regression before the adjustment set is integrated out. Marginal integration can consistently estimate the ATE with the optimal one-dimensional nonparametric convergence rate @math for continuous treatment variables @cite_17 . The price to be paid for such a general result is that it requires strict stationarity for the estimation of the smooth regression function and is therefore restricted to observational time series. The theoretical guarantees hold for estimands which are functions of @math for some @math in the support of @math , such as the ATE. In principle, marginal integration can be extended to estimate sample average treatment effects or heterogeneous effects. The theoretical guarantees, however, may not carry over, as the estimation of the latter is severely exposed to the curse of dimensionality. Marginal integration is capable of predicting future effects, but only up to the maximum time distance present in the data and under the stationarity assumption. It does not support the estimation of prediction intervals.
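A hedged sketch of the marginal integration idea follows: estimate a nonparametric regression of the outcome on the treatment and the adjustment variables, then average the fitted values over the empirical distribution of the adjustment set. The Nadaraya-Watson estimator below is a generic stand-in for the kernel regression used in the cited work, and the bandwidth is a hypothetical tuning parameter.

import numpy as np

def nadaraya_watson(x_query, X, Y, bandwidth=1.0):
    """Nadaraya-Watson kernel regression estimate of E[Y | X = x_query]."""
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    return np.sum(w * Y) / (np.sum(w) + 1e-12)

def marginal_integration_effect(t_value, T, S, Y, bandwidth=1.0):
    """Average E[Y | T = t_value, S = s_i] over the observed adjustment values
    s_i, giving a plug-in estimate of the interventional mean at t_value."""
    S = np.asarray(S, dtype=float)
    if S.ndim == 1:
        S = S[:, None]          # treat a single adjustment variable as one column
    X = np.column_stack([T, S])
    fits = [nadaraya_watson(np.concatenate(([t_value], s)), X, Y, bandwidth)
            for s in S]
    return float(np.mean(fits))

# ATE between two treatment levels (illustrative):
# ate = marginal_integration_effect(t1, T, S, Y) - marginal_integration_effect(t0, T, S, Y)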
{ "cite_N": [ "@cite_17" ], "mid": [ "2963139802" ], "abstract": [ "Causal inference from observational data is an ambitious but highly relevant task, with diverse applications ranging from natural to social sciences. Within the scope of nonparametric time series, causal inference defined through interventions is largely unexplored, although time order simplifies the problem substantially. A marginal integration scheme is considered for inferring causal effects from observational time series data, MINT-T (marginal integration in time series), which is an adaptation for time series of a previously proposed method for the case of independent data. This approach for stationary stochastic processes is fully nonparametric and, assuming no instantaneous effects consistently recovers the total causal effect of a single intervention with optimal one-dimensional nonparametric convergence rate n−2 5 assuming regularity conditions and twice differentiability of a certain corresponding regression function. Therefore, MINT-T remains largely unaffected by the curse of dimensionality as long as smoothness conditions hold in higher dimensions and it is feasible for a large class of stationary time series, including nonlinear and multivariate processes. For the case with instantaneous effects, we provide a procedure which guards against false positive causal statements." ] }
1812.03955
2904870248
Model based predictions of future trajectories of a dynamical system often suffer from inaccuracies, forcing model based control algorithms to re-plan often, thus being computationally expensive, suboptimal and not reliable. In this work, we propose a model agnostic method for estimating the uncertainty of a model?s predictions based on reconstruction error, using it in control and exploration. As our experiments show, this uncertainty estimation can be used to improve control performance on a wide variety of environments by choosing predictions of which the model is confident. It can also be used for active learning to explore more efficiently the environment by planning for trajectories with high uncertainty, allowing faster model learning.
The work in @cite_2 shows how a neural network can learn a world model to control agents simulated in the MuJoCo physics engine through step-by-step re-planning with Model Predictive Control. Similarly, @cite_3 shows how model-based control can be improved by adding a value function approximator; this method is complementary to ours and could be integrated into our general architecture as future work. The work in @cite_7 adopts an uncertainty-aware model-based control technique to autonomously fly a drone near obstacles, using dropout to estimate uncertainty.
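A hedged sketch of the re-planning loop described above: random-shooting Model Predictive Control that rolls candidate action sequences through a learned dynamics model and executes the first action of the best sequence. The dynamics_model and cost callables are placeholders for learned or hand-specified components that are not shown here.

import numpy as np

def mpc_random_shooting(state, dynamics_model, cost, horizon=10,
                        n_candidates=500, action_dim=2, action_scale=1.0):
    """Return the first action of the lowest-cost sampled action sequence.

    dynamics_model(state, action) -> next_state   (learned forward model)
    cost(state, action)           -> scalar step cost
    """
    best_action, best_cost = None, np.inf
    for _ in range(n_candidates):
        actions = np.random.uniform(-action_scale, action_scale,
                                    size=(horizon, action_dim))
        s, total = state, 0.0
        for a in actions:
            total += cost(s, a)
            s = dynamics_model(s, a)   # roll the learned model forward
        if total < best_cost:
            best_cost, best_action = total, actions[0]
    return best_action  # the controller re-plans from the next observed state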
{ "cite_N": [ "@cite_7", "@cite_3", "@cite_2" ], "mid": [ "2586067474", "2898917980", "2743381431" ], "abstract": [ "Reinforcement learning can enable complex, adaptive behavior to be learned automatically for autonomous robotic platforms. However, practical deployment of reinforcement learning methods must contend with the fact that the training process itself can be unsafe for the robot. In this paper, we consider the specific case of a mobile robot learning to navigate an a priori unknown environment while avoiding collisions. In order to learn collision avoidance, the robot must experience collisions at training time. However, high-speed collisions, even at training time, could damage the robot. A successful learning method must therefore proceed cautiously, experiencing only low-speed collisions until it gains confidence. To this end, we present an uncertainty-aware model-based learning algorithm that estimates the probability of collision together with a statistical estimate of uncertainty. By formulating an uncertainty-dependent cost function, we show that the algorithm naturally chooses to proceed cautiously in unfamiliar environments, and increases the velocity of the robot in settings where it has high confidence. Our predictive model is based on bootstrapped neural networks using dropout, allowing it to process raw sensory inputs from high-bandwidth sensors such as cameras. Our experimental evaluation demonstrates that our method effectively minimizes dangerous collisions at training time in an obstacle avoidance task for a simulated and real-world quadrotor, and a real-world RC car. Videos of the experiments can be found at this https URL.", "We propose a plan online and learn offline (POLO) framework for the setting where an agent, with an internal model, needs to continually act and learn in the world. Our work builds on the synergistic relationship between local model-based control, global value function learning, and exploration. We study how local trajectory optimization can cope with approximation errors in the value function, and can stabilize and accelerate value function learning. Conversely, we also study how approximate value functions can help reduce the planning horizon and allow for better policies beyond local solutions. Finally, we also demonstrate how trajectory optimization can be used to perform temporally coordinated exploration in conjunction with estimating uncertainty in value function approximation. This exploration is critical for fast and stable learning of the value function. Combining these components enable solutions to complex simulated control tasks, like humanoid locomotion and dexterous in-hand manipulation, in the equivalent of a few minutes of experience in the real world.", "Model-free deep reinforcement learning algorithms have been shown to be capable of learning a wide range of robotic skills, but typically require a very large number of samples to achieve good performance. Model-based algorithms, in principle, can provide for much more efficient learning, but have proven difficult to extend to expressive, high-capacity models such as deep neural networks. In this work, we demonstrate that medium-sized neural network models can in fact be combined with model predictive control (MPC) to achieve excellent sample complexity in a model-based reinforcement learning algorithm, producing stable and plausible gaits to accomplish various complex locomotion tasks. 
We also propose using deep neural network dynamics models to initialize a model-free learner, in order to combine the sample efficiency of model-based approaches with the high task-specific performance of model-free methods. We empirically demonstrate on MuJoCo locomotion tasks that our pure model-based approach trained on just random action data can follow arbitrary trajectories with excellent sample efficiency, and that our hybrid algorithm can accelerate model-free learning on high-speed benchmark tasks, achieving sample efficiency gains of 3-5x on swimmer, cheetah, hopper, and ant agents. Videos can be found at this https URL" ] }
1812.03962
2905067712
We present a deep generative model that learns disentangled static and dynamic representations of data from unordered input. Our approach exploits regularities in sequential data that exist regardless of the order in which the data is viewed. The result of our factorized graphical model is a well-organized and coherent latent space for data dynamics. We demonstrate our method on several synthetic dynamic datasets and real video data featuring various facial expressions and head poses.
Unsupervised learning of disentangled representations can be related to modeling context or hierarchical structure in datasets. In particular, our approach invites comparison to the ``neural statistician'' of @cite_0 , whose context variable closely corresponds to our static encoding, although our model has a different dependence structure. On sequential data, @cite_5 propose a factorized hierarchical variational auto-encoder using a lookup table for different means, while @cite_8 condition a component of the factorized prior on the full ordered sequence. @cite_3 use an adversarial loss to factor the latent representation of a video frame into a stationary and a temporally varying component. @cite_7 introduce a GAN that produces video clips by sequentially decoding a sample vector consisting of two parts: a sample from a motion subspace and a sample from a content subspace. In video generation, other directions can also be explored by decomposing the learned representation into deterministic and stochastic components ( @cite_1 ).
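To illustrate the static/dynamic split discussed above, here is a hypothetical, heavily simplified PyTorch sketch in which a static code is obtained by pooling per-frame features over (unordered) time while each frame also receives its own dynamic code; the cited models differ substantially in their priors, training objectives, and architectures.

import torch.nn as nn

class StaticDynamicEncoder(nn.Module):
    """Split a sequence of frame features into one static code shared across
    the sequence and a dynamic code per frame."""

    def __init__(self, feat_dim=256, static_dim=32, dynamic_dim=32):
        super().__init__()
        self.static_head = nn.Linear(feat_dim, static_dim)
        self.dynamic_head = nn.Linear(feat_dim, dynamic_dim)

    def forward(self, frame_features):                # (batch, time, feat_dim)
        pooled = frame_features.mean(dim=1)           # order-invariant pooling over time
        z_static = self.static_head(pooled)           # one code per sequence
        z_dynamic = self.dynamic_head(frame_features) # one code per frame
        return z_static, z_dynamic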
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_1", "@cite_3", "@cite_0", "@cite_5" ], "mid": [ "2737548191", "2952161038", "2788033868", "", "2412589713", "2758785877" ], "abstract": [ "Visual signals in a video can be divided into content and motion. While content specifies which objects are in the video, motion describes their dynamics. Based on this prior, we propose the Motion and Content decomposed Generative Adversarial Network (MoCoGAN) framework for video generation. The proposed framework generates a video by mapping a sequence of random vectors to a sequence of video frames. Each random vector consists of a content part and a motion part. While the content part is kept fixed, the motion part is realized as a stochastic process. To learn motion and content decomposition in an unsupervised manner, we introduce a novel adversarial learning scheme utilizing both image and video discriminators. Extensive experimental results on several challenging datasets with qualitative and quantitative comparison to the state-of-the-art approaches, verify effectiveness of the proposed framework. In addition, we show that MoCoGAN allows one to generate videos with same content but different motion as well as videos with different content and same motion.", "We present a VAE architecture for encoding and generating high dimensional sequential data, such as video or audio. Our deep generative model learns a latent representation of the data which is split into a static and dynamic part, allowing us to approximately disentangle latent time-dependent features (dynamics) from features which are preserved over time (content). This architecture gives us partial control over generating content and dynamics by conditioning on either one of these sets of features. In our experiments on artificially generated cartoon video clips and voice recordings, we show that we can convert the content of a given sequence into another one by such content swapping. For audio, this allows us to convert a male speaker into a female speaker and vice versa, while for video we can separately manipulate shapes and dynamics. Furthermore, we give empirical evidence for the hypothesis that stochastic RNNs as latent state models are more efficient at compressing and generating long sequences than deterministic ones, which may be relevant for applications in video compression.", "Generating video frames that accurately predict future world states is challenging. Existing approaches either fail to capture the full distribution of outcomes, or yield blurry generations, or both. In this paper we introduce an unsupervised video generation model that learns a prior model of uncertainty in a given environment. Video frames are generated by drawing samples from this prior and combining them with a deterministic estimate of the future frame. The approach is simple and easily trained end-to-end on a variety of datasets. Sample generations are both varied and sharp, even many frames into the future, and compare favorably to those from existing approaches.", "", "An efficient learner is one who reuses what they already know to tackle a new problem. For a machine learner, this means understanding the similarities amongst datasets. In order to do this, one must take seriously the idea of working with datasets, rather than datapoints, as the key objects to model. Towards this goal, we demonstrate an extension of a variational autoencoder that can learn a method for computing representations, or statistics, of datasets in an unsupervised fashion. 
The network is trained to produce statistics that encapsulate a generative model for each dataset. Hence the network enables efficient learning from new datasets for both unsupervised and supervised tasks. We show that we are able to learn statistics that can be used for: clustering datasets, transferring generative models to new datasets, selecting representative samples of datasets and classifying previously unseen classes.", "We present a factorized hierarchical variational autoencoder, which learns disentangled and interpretable representations from sequential data without supervision. Specifically, we exploit the multi-scale nature of information in sequential data by formulating it explicitly within a factorized hierarchical graphical model that imposes sequence-dependent priors and sequence-independent priors to different sets of latent variables. The model is evaluated on two speech corpora to demonstrate, qualitatively, its ability to transform speakers or linguistic content by manipulating different sets of latent variables; and quantitatively, its ability to outperform an i-vector baseline for speaker verification and reduce the word error rate by as much as 35 in mismatched train test scenarios for automatic speech recognition tasks." ] }
1812.03966
2905089852
Internet of Things (IoT) has become a common paradigm for different domains such as health care, transportation infrastructure, smart home, smart shopping, and e-commerce. With its interoperable functionality, it is now possible to connect all domains of IoT together for providing competent services to the users. Because numerous IoT devices can connect and communicate at the same time, there can be events that trigger conflicting actions to an actuator or an environmental feature. However, there have been very few research efforts made to detect conflicting situation in IoT system using formal method. This paper provides a formal method approach, IoT Confict Checker (IoTC2), to ensure safety of controller and actuators' behavior with respect to conflicts. Any policy violation results in detection of the conflicts. We defined the safety policies for controller, actions, and triggering events and implemented the those with Prolog to prove the logical completeness and soundness. In addition to that, we have implemented the detection policies in Matlab Simulink Environment with its built-in Model Verification blocks. We created smart home environment in Simulink and showed how the conflicts affect actions and corresponding features. We have also experimented the scalability, efficiency, and accuracy of our method in the simulated environment.
Some preliminary work has been done on formal modeling and verification for smart homes @cite_16 @cite_14 , intelligent transportation systems @cite_12 , and the health Internet of Things @cite_2 . The works closest to ours in terms of conflict detection are DepSys @cite_9 and HomeOS @cite_10 . DepSys specifies and detects conflicts based on the functionalities of 35 smart apps used in smart homes. It detects conflicts only after they have occurred and resolves them by assigning priorities to the apps so that no two apps can access the same actuator. Our approach, on the other hand, detects conflicts as soon as an event is generated that may immediately cause an action, or a set of actions, resulting in a conflict. We leave the automated resolution of conflicts as future work.
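As a hedged illustration of event-triggered conflict detection of the kind described (not the Prolog/Simulink policies of the cited system), a simple Python check can flag rules that the same event activates when they issue contradictory commands to the same actuator; the rule format is hypothetical.

def detect_conflicts(event, rules):
    """Return pairs of rules triggered by the same event that command
    different actions on the same actuator.

    Each rule is a dict such as
        {"trigger": "temperature_high", "actuator": "window", "action": "open"}
    """
    triggered = [r for r in rules if r["trigger"] == event]
    conflicts = []
    for i, a in enumerate(triggered):
        for b in triggered[i + 1:]:
            if a["actuator"] == b["actuator"] and a["action"] != b["action"]:
                conflicts.append((a, b))
    return conflicts

# Example with two hypothetical rules reacting to the same event:
# rules = [{"trigger": "temperature_high", "actuator": "window", "action": "open"},
#          {"trigger": "temperature_high", "actuator": "window", "action": "close"}]
# detect_conflicts("temperature_high", rules)  # -> one conflicting pair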
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_2", "@cite_16", "@cite_10", "@cite_12" ], "mid": [ "1985393656", "1975812213", "2015246389", "", "1663109347", "1967899224" ], "abstract": [ "The design of ambient intelligence applications in critical systems requires rigorous software-engineering-oriented approaches. Drawing on practical experience, the authors propose a set of formal tools and a specification process for AmI design activities and artifacts.", "As sensor and actuator networks mature, they become a core utility of smart homes like electricity and water and enable the running of many CPS applications. Like other Cyber-Physical Systems (CPSs), when a number of applications share physical world entities, it raises many systems of systems interdependency problems. Such problems arise in the cyber part mainly because each application has assumptions on the physical world entities without knowing how other applications work. In this work, we propose DepSys, a utility sensing and actuation infrastructure for smart homes that provides comprehensive strategies to specify, detect, and resolve conflicts in a home setting. Based on real home data, we demonstrate the severity of conflicts when multiple CPSs are integrated and the significant ability of detecting and resolving such conflicts using DepSys.", "Abstract The rapid development of technologies towards Internet of Things (IoT), has led to new circumstances at all levels of the social environment. In healthcare in particular, the use of IoT concepts and technologies make diagnose and monitor more convenient for the physicians and patients. As mobile applications solutions are widely accepted because the easy to use, secure healthcare service is a new demand for mobile solutions. To protect the privacy and security for patients in the domain of healthcare towards IoT, a systematic mechanism is needed. This article proposes a novel security and privacy mechanism for Health Internet of Things (Health-IoT) to solve above problems. Health-IoT is promising for both traditional healthcare industry and the information and communication technologies (ICTs) industry. From the view of trustworthiness, interactive vector was proposed to communicate the end-devices and application brokers. The aim is to establish a trust IoT application market (IAM), feature of application in marketplace and behavior of applications on end-devices can be exchanged in mathematical value to establish the connection between market and users.", "", "Network devices for the home such as remotely controllable locks, lights, thermostats, cameras, and motion sensors are now readily available and inexpensive. In theory, this enables scenarios like remotely monitoring cameras from a smartphone or customizing climate control based on occupancy patterns. However, in practice today, such smarthome scenarios are limited to expert hobbyists and the rich because of the high overhead of managing and extending current technology. We present HomeOS, a platform that bridges this gap by presenting users and developers with a PC-like abstraction for technology in the home. It presents network devices as peripherals with abstract interfaces, enables cross-device tasks via applications written against these interfaces, and gives users a management interface designed for the home environment. HomeOS already has tens of applications and supports a wide range of devices. 
It has been running in 12 real homes for 4-8 months, and 42 students have built new applications and added support for additional devices independent of our efforts.", "Due to development and extension of internet of things, mobile-hierarchy architecture was proposed for querying a deployed wireless sensor network in an intelligent transportation system. Secure handshake among nodes becomes an important part of an intelligent transportation system. The mobile node verifies the legitimacy of an ordinary sensor node over an insecure communication channel. Attribute set or information as important handshake factors and negotiate each other privately in local side. In this paper, a secure attribute matching handshake scheme which extends fuzzy information for mobile-hierarchy city intelligent transportation system is proposed." ] }
1812.03892
2905089247
We present an open-source system for Micro-Aerial Vehicle autonomous navigation from vision-based sensing. Our system focuses on dense mapping, safe local planning, and global trajectory generation, especially when using narrow field of view sensors in very cluttered environments. In addition, details about other necessary parts of the system and special considerations for applications in real-world scenarios are presented. We focus our experiments on evaluating global planning, path smoothing, and local planning methods on real maps made on MAVs in realistic search and rescue and industrial inspection scenarios. We also perform thousands of simulations in cluttered synthetic environments, and finally validate the complete system in real-world experiments.
We will give a very abbreviated overview of related work, as more thorough discussions of all parts are available in our previous work @cite_20 @cite_8 @cite_10 @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_20", "@cite_8" ], "mid": [ "2962794880", "2963105445", "2564322318", "2607968634" ], "abstract": [ "Micro-Aerial Vehicles (MAVs)have the advantage of moving freely in 3D space. However, creating compact and sparse map representations that can be efficiently used for planning for such robots is still an open problem. In this paper, we take maps built from noisy sensor data and construct a sparse graph containing topological information that can be used for 3D planning. We use a Euclidean Signed Distance Field, extract a 3D Generalized Voronoi Diagram (GVD), and obtain a thin skeleton diagram representing the topological structure of the environment. We then convert this skeleton diagram into a sparse graph, which we show is resistant to noise and changes in resolution. We demonstrate global planning over this graph, and the orders of magnitude speed-up it offers over other common planning methods. We validate our planning algorithm in real maps built onboard an MAV, using RGB-D sensing.", "In order to enable microaerial vehicles (MAVs) to assist in complex, unknown, unstructured environments, they must be able to navigate with guaranteed safety, even when faced with a cluttered environment they have no prior knowledge of. While trajectory-optimization-based local planners have been shown to perform well in these cases, prior work either does not address how to deal with local minima in the optimization problem or solves it by using an optimistic global planner. We present a conservative trajectory-optimization-based local planner, coupled with a local exploration strategy that selects intermediate goals. We perform extensive simulations to show that this system performs better than the standard approach of using an optimistic global planner and also outperforms doing a single exploration step when the local planner is stuck. The method is validated through experiments in a variety of highly cluttered environments including a dense forest. These experiments show the complete system running in real time fully onboard an MAV, mapping and replanning at 4 Hz.", "Multirotor unmanned aerial vehicles (UAVs) are rapidly gaining popularity for many applications. However, safe operation in partially unknown, unstructured environments remains an open question. In this paper, we present a continuous-time trajectory optimization method for real-time collision avoidance on multirotor UAVs. We then propose a system where this motion planning method is used as a local replanner, that runs at a high rate to continuously recompute safe trajectories as the robot gains information about its environment. We validate our approach by comparing against existing methods and demonstrate the complete system avoiding obstacles on a multirotor UAV platform.", "Micro Aerial Vehicles (MAVs) that operate in unstructured, unexplored environments require fast and flexible local planning, which can replan when new parts of the map are explored. Trajectory optimization methods fulfill these needs, but require obstacle distance information, which can be given by Euclidean Signed Distance Fields (ESDFs). We propose a method to incrementally build ESDFs from Truncated Signed Distance Fields (TSDFs), a common implicit surface representation used in computer graphics and vision. TSDFs are fast to build and smooth out sensor noise over many observations, and are designed to produce surface meshes. 
We show that we can build TSDFs faster than Octomaps, and that it is more accurate to build ESDFs out of TSDFs than occupancy maps. Our complete system, called voxblox, is available as open source and runs in real-time on a single CPU core. We validate our approach on-board an MAV, by using our system with a trajectory optimization local planner, entirely on-board and in real-time." ] }
1812.03892
2905089247
We present an open-source system for Micro-Aerial Vehicle autonomous navigation from vision-based sensing. Our system focuses on dense mapping, safe local planning, and global trajectory generation, especially when using narrow field of view sensors in very cluttered environments. In addition, details about other necessary parts of the system and special considerations for applications in real-world scenarios are presented. We focus our experiments on evaluating global planning, path smoothing, and local planning methods on real maps made on MAVs in realistic search and rescue and industrial inspection scenarios. We also perform thousands of simulations in cluttered synthetic environments, and finally validate the complete system in real-world experiments.
We aim to show a complete system for mapping and planning on board an autonomous UAV using vision-based sensing. Lin et al. @cite_21 presented a similar complete system, spanning visual-inertial state estimation, local re-planning, and control. However, there are a few key differences between the proposed frameworks: ours focuses strongly on the map representation and on exploiting all the information within it, while theirs uses a standard occupancy map. More importantly, our planning is conservative, meaning we only traverse known free space, while theirs assumes unknown space is free. We must therefore take greater care about the contents of our map under this more restrictive assumption. We also offer an evaluation of global planning and path smoothing methods.
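A hedged sketch of what conservative planning means in practice: a path segment is accepted only if every sample along it lies in space the map has actually observed as free, with unknown space treated as an obstacle. The distance_at lookup is a hypothetical placeholder for an ESDF or occupancy-map query.

import numpy as np

def segment_is_safe(start, end, distance_at, robot_radius, step=0.1):
    """Conservatively check a straight-line segment against the map.

    distance_at(p) returns the observed obstacle distance at point p, or
    None if p lies in unknown (never observed) space. Unknown space is
    treated as occupied, which makes the check conservative."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    length = np.linalg.norm(end - start)
    n_samples = max(int(np.ceil(length / step)), 1) + 1
    for t in np.linspace(0.0, 1.0, n_samples):
        p = (1.0 - t) * start + t * end
        d = distance_at(p)
        if d is None or d < robot_radius:   # unknown or too close to an obstacle
            return False
    return True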
{ "cite_N": [ "@cite_21" ], "mid": [ "2732510496" ], "abstract": [ "Autonomous micro aerial vehicles (MAVs) have cost and mobility benefits, making them ideal robotic platforms for applications including aerial photography, surveillance, and search and rescue. As the platform scales down, MAVs become more capable of operating in confined environments, but it also introduces significant size and payload constraints. A monocular visual-inertial navigation system (VINS), consisting only of an inertial measurement unit (IMU) and a camera, becomes the most suitable sensor suite in this case, thanks to its light weight and small footprint. In fact, it is the minimum sensor suite allowing autonomous flight with sufficient environmental awareness. In this paper, we show that it is possible to achieve reliable online autonomous navigation using monocular VINS. Our system is built on a customized quadrotor testbed equipped with a fisheye camera, a low-cost IMU, and heterogeneous onboard computing resources. The backbone of our system is a highly accurate optimization-based monocular visual-inertial state estimator with online initialization and self-extrinsic calibration. An onboard GPU-based monocular dense mapping module that conditions on the estimated pose provides wide-angle situational awareness. Finally, an online trajectory planner that operates directly on the incrementally built three-dimensional map guarantees safe navigation through cluttered environments. Extensive experimental results are provided to validate individual system modules as well as the overall performance in both indoor and outdoor environments." ] }
1812.03892
2905089247
We present an open-source system for Micro-Aerial Vehicle autonomous navigation from vision-based sensing. Our system focuses on dense mapping, safe local planning, and global trajectory generation, especially when using narrow field of view sensors in very cluttered environments. In addition, details about other necessary parts of the system and special considerations for applications in real-world scenarios are presented. We focus our experiments on evaluating global planning, path smoothing, and local planning methods on real maps made on MAVs in realistic search and rescue and industrial inspection scenarios. We also perform thousands of simulations in cluttered synthetic environments, and finally validate the complete system in real-world experiments.
Mohta et al. @cite_23 also propose an autonomous system for fast UAV flight through cluttered environments. There are a few key differences with their work, especially in mapping and planning. They use a LIDAR as the main sensor, which gives a 360 @math field of view for collision detection and removes many of the issues with narrow field of view sensors that we attempt to address in this work. They also keep only a small local 3D map and use a global 2D map to escape local minima, whereas we use a full global 3D approach at comparable computation speeds. In terms of how the map is used, they attempt to break the world into overlapping convex free-space regions, an approach that grows in complexity and becomes increasingly limited as the environment becomes more cluttered, while we always plan directly in the map space. They also make no provisions for how drift will affect the map, other than keeping only a local 3D map.
{ "cite_N": [ "@cite_23" ], "mid": [ "2771926486" ], "abstract": [ "Author(s): Mohta, K.; Mulgaonkar, Y.; Watterson, M.; Liu, S.; Qu, C.; Makineni, A.; Saulnier, K.; Sun, K.; Zhu, A.; Delmerico, J.; Karydis, K.; Atanasov, N.; Loianno, G.; Scaramuzza, D.; Daniilidis, K.; Taylor, C. J.; Kumar, V." ] }
1812.03892
2905089247
We present an open-source system for Micro-Aerial Vehicle autonomous navigation from vision-based sensing. Our system focuses on dense mapping, safe local planning, and global trajectory generation, especially when using narrow field of view sensors in very cluttered environments. In addition, details about other necessary parts of the system and special considerations for applications in real-world scenarios are presented. We focus our experiments on evaluating global planning, path smoothing, and local planning methods on real maps made on MAVs in realistic search and rescue and industrial inspection scenarios. We also perform thousands of simulations in cluttered synthetic environments, and finally validate the complete system in real-world experiments.
Finally, the system we propose is conceptually similar to the original system in our previous work @cite_22 . The core differences are that we have improved every individual component, designed and evaluated a custom mapping system, and proposed a way to perform local re-planning as well (whereas the previous work performed only global planning). This makes the system proposed in this work much more robust and better able to deal with changes in the environment.
{ "cite_N": [ "@cite_22" ], "mid": [ "2214613866" ], "abstract": [ "In this work, we present an MAV system that is able to relocalize itself, create consistent maps and plan paths in full 3D in previously unknown environments. This is solely based on vision and IMU measurements with all components running onboard and in real-time. We use visual-inertial odometry to keep the MAV airborne safely locally, as well as for exploration of the environment based on high-level input by an operator. A globally consistent map is constructed in the background, which is then used to correct for drift of the visual odometry algorithm. This map serves as an input to our proposed global planner, which finds dynamic 3D paths to any previously visited place in the map, without the use of teach and repeat algorithms. In contrast to previous work, all components are executed onboard and in real-time without any prior knowledge of the environment." ] }
1907.01195
2955623766
This paper addresses the problem of building a speech recognition system attuned to the control of unmanned aerial vehicles (UAVs). Even though UAVs are becoming widespread, the task of creating voice interfaces for them is largely unaddressed. To this end, we introduce a multi-modal evaluation dataset for UAV control, consisting of spoken commands and associated images, which represent the visual context of what the UAV "sees" when the pilot utters the command. We provide baseline results and address two research directions: (i) how robust the language models are, given an incomplete list of commands at train time; (ii) how to incorporate visual information in the language model. We find that recurrent neural networks (RNNs) are a solution to both tasks: they can be successfully adapted using a small number of commands and they can be extended to use visual cues. Our results show that the image-based RNN outperforms its text-only counterpart even if the command-image training associations are automatically generated and inherently imperfect. The dataset and our code are available at this http URL.
The task of speech recognition for UAV control is relatively unexplored and the few published works on this topic @cite_25 @cite_34 @cite_8 focus on recognition of simple commands: the authors of @cite_25 predict a fixed set of nine commands using a classification pipeline based on audio features, such as energy and MFCC; the method in @cite_8 recognizes commands to navigate through menus, operations which were previously achieved through keyboard presses.
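To make the kind of pipeline used in @cite_25 concrete, here is a minimal sketch of MFCC-based command classification with an SVM. It is an illustrative reconstruction, not the authors' implementation: the file paths, sampling rate, feature summary (mean/std of MFCCs) and SVM hyper-parameters are all assumptions; only the nine-command vocabulary comes from the cited abstract.

```python
# Hypothetical sketch of a spoken-command classifier: MFCC features + SVM.
import numpy as np
import librosa
from sklearn.svm import SVC

COMMANDS = ["backward", "forward", "hold on", "landing", "move up",
            "move down", "take off", "turn left", "turn right"]

def mfcc_features(wav_path, sr=16000, n_mfcc=13):
    """Load a recording and summarize it by the mean and std of its MFCCs."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_command_svm(wav_paths, labels):
    """labels are indices into COMMANDS; returns a fitted classifier."""
    X = np.stack([mfcc_features(p) for p in wav_paths])
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X, labels)
    return clf

# Usage (paths and labels are placeholders):
# clf = train_command_svm(["cmd_001.wav", "cmd_002.wav"], [0, 3])
# print(COMMANDS[clf.predict([mfcc_features("new_cmd.wav")])[0]])
```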
{ "cite_N": [ "@cite_34", "@cite_25", "@cite_8" ], "mid": [ "", "2101771572", "2127060550" ], "abstract": [ "", "This project presents a speech-based control system for DRONE using Support Vector Machines (SVM). The set of controlling speeches consists of BACKWARD, FORWARD, HOLD ON, LANDING, MOVE UP, MOVE DOWN, TAKE OFF, TURN LEFT and TURN RIGHT are trained the SVM. The feature extraction of speech used in this study comprises of “fundamental frequency”, “Energy”, and Mel Frequency Cepstral Coefficient”. For performance evaluation, a set of features are used to test the SVM-based system developed by MATLAB. The results show that the average percentage of accuracy of the controlling speeches are 22.22, 46.67, 97.78 and 95.56 for fundamental frequency, energy, Mel frequency cepstral coefficient and all features, respectively. In addition, the interface of SVM-based system and DRONE is developed in practical use.", "Unmanned aerial vehicle (UAV) control stations feature multiple menu pages with systems accessed by keyboard presses. Use of speech-based input may enable operators to navigate through menus and select options more quickly. This experiment examined the utility of conventional manual input versus speech input for tasks performed by operators of a UAV control station simulator at two levels of mission difficulty. Pilots performed a continuous flight navigation control task while completing eight different data entry task types with each input modality. Results showed that speech input was significantly better than manual input in terms of task completion time, task accuracy, flight navigation measures, and pilot ratings. Across tasks, data entry time was reduced by approximately 40 with speech input. Additional research is warranted to confirm that this head-up, hands-free control is still beneficial in operational UAV control station auditory environments and does not conflict with intercom operations and..." ] }
1907.01195
2955623766
This paper addresses the problem of building a speech recognition system attuned to the control of unmanned aerial vehicles (UAVs). Even though UAVs are becoming widespread, the task of creating voice interfaces for them is largely unaddressed. To this end, we introduce a multi-modal evaluation dataset for UAV control, consisting of spoken commands and associated images, which represent the visual context of what the UAV "sees" when the pilot utters the command. We provide baseline results and address two research directions: (i) how robust the language models are, given an incomplete list of commands at train time; (ii) how to incorporate visual information in the language model. We find that recurrent neural networks (RNNs) are a solution to both tasks: they can be successfully adapted using a small number of commands and they can be extended to use visual cues. Our results show that the image-based RNN outperforms its text-only counterpart even if the command-image training associations are automatically generated and inherently imperfect. The dataset and our code are available at this http URL.
Vision-language systems are used in tasks such as image captioning @cite_17 @cite_18 or visual question answering @cite_15 @cite_1 . Many such systems model the language in the context of an image: they estimate the probability distribution over the next word given the preceding words and the visual context. The most common approach uses a recurrent neural network to model the distribution over the words and a convolutional neural network to extract visual features @cite_18 @cite_16 @cite_31 @cite_29 ; we use a similar architecture. Audio-visual systems target tasks such as image retrieval by speech @cite_27 @cite_3 @cite_10 , embedding learning @cite_3 @cite_21 , speech-prompted object localization @cite_10 or semantic keyword spotting @cite_9 . The typical approach exploits statistical correspondences and learns embeddings for the two modalities, utterances and images, to a common sub-space.
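As a concrete illustration of the common architecture (an RNN over words conditioned on CNN image features), the PyTorch sketch below predicts the next word given the preceding words and a precomputed image feature vector. The layer sizes and the choice to inject the image as the initial hidden state are illustrative assumptions, not a reproduction of any cited model.

```python
import torch
import torch.nn as nn

class ImageConditionedLM(nn.Module):
    """Next-word prediction conditioned on a precomputed CNN image feature."""
    def __init__(self, vocab_size, img_feat_dim=2048, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.img_proj = nn.Linear(img_feat_dim, hid_dim)  # image -> initial hidden state
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, word_ids, img_feat):
        # word_ids: (batch, seq_len) token indices; img_feat: (batch, img_feat_dim)
        h0 = torch.tanh(self.img_proj(img_feat)).unsqueeze(0)   # (1, batch, hid_dim)
        emb = self.embed(word_ids)                              # (batch, seq_len, emb_dim)
        hidden, _ = self.rnn(emb, h0)
        return self.out(hidden)                                 # logits over next words

# Usage with dummy data:
# lm = ImageConditionedLM(vocab_size=1000)
# logits = lm(torch.randint(0, 1000, (4, 7)), torch.randn(4, 2048))
```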
{ "cite_N": [ "@cite_18", "@cite_31", "@cite_29", "@cite_21", "@cite_1", "@cite_9", "@cite_3", "@cite_27", "@cite_15", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "2951805548", "1895577753", "2950178297", "2962862718", "2277195237", "2892818738", "2556930864", "385555557", "2950761309", "2159243025", "2796315435", "2171361956" ], "abstract": [ "We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.", "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.", "Inspired by recent work in machine translation and object detection, we introduce an attention based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k and MS COCO.", "In this paper, we present a model which takes as input a corpus of images with relevant spoken captions and finds a correspondence between the two modalities. We employ a pair of convolutional neural networks to model visual objects and speech signals at the word level, and tie the networks together with an embedding and alignment model which learns a joint semantic space over both modalities. 
We evaluate our model using image search and annotation tasks on the Flickr8k dataset, which we augmented by collecting a corpus of 40,000 spoken captions using Amazon Mechanical Turk.", "Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked \"What vehicle is the person riding?\", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) to answer correctly that \"the person is riding a horse-drawn carriage.\" In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 108K images where each image has an average of @math 35 objects, @math 26 attributes, and @math 21 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answer pairs.", "There is a growing interest in models that can learn from unlabelled speech paired with visual context. This setting is relevant for low-resource speech processing, robotics, and human language acquisition research. Here, we study how a visually grounded speech model, trained on images of scenes paired with spoken captions, captures aspects of semantics. We use an external image tagger to generate soft text labels from images, which serve as targets for a neural model that maps untranscribed speech to (semantic) keyword labels. We introduce a newly collected data set of human semantic relevance judgements and an associated task, semantic speech retrieval , where the goal is to search for spoken utterances that are semantically relevant to a given text query. Without seeing any text, the model trained on parallel speech and images achieves a precision of almost 60 on its top ten semantic retrievals. Compared to a supervised model trained on transcriptions, our model matches human judgements better by some measures, especially in retrieving non-verbatim semantic matches. We perform an extensive analysis of the model and its resulting representations.", "Humans learn to speak before they can read or write, so why can’t computers do the same? In this paper, we present a deep neural network model capable of rudimentary spoken language acquisition using untranscribed audio training data, whose only supervision comes in the form of contextually relevant visual images. We describe the collection of our data comprised of over 120,000 spoken audio captions for the Places image dataset and evaluate our model on an image search and annotation task. 
We also provide some visualizations which suggest that our model is learning to recognize meaningful words within the caption spectrograms.", "Previous research has shown that an efficient acoustic model can be trained from wordlevel annotations alone [1]. Here, we explore the possibility of learning both an acoustic model and a word image association from multi-modal co-occurrences between speech and pictures alone (a task known as cross-situational learning [2]). Our work is inspired by the observation that infants achieve spontaneously this kind of correspondence during their first year of life, e.g. with pictures of natural kinds like animals [3].", "We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing 0.25M images, 0.76M questions, and 10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (this http URL).", "In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel sentence descriptions to explain the content of images. It directly models the probability distribution of generating a word given previous words and the image. Image descriptions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on three benchmark datasets (IAPR TC-12, Flickr 8K, and Flickr 30K). Our model outperforms the state-of-the-art generative method. In addition, the m-RNN model can be applied to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval.", "In this paper, we explore neural network models that learn to associate segments of spoken audio captions with the semantically relevant portions of natural images that they refer to. We demonstrate that these audio-visual associative localizations emerge from network-internal representations learned as a by-product of training to perform an image-audio retrieval task. Our models operate directly on the image pixels and speech waveform, and do not rely on any conventional supervision in the form of labels, segmentations, or alignments between the modalities during training. 
We perform analysis using the Places 205 and ADE20k datasets demonstrating that our models implicitly learn semantically-coupled object and word detectors.", "We introduce two multimodal neural language models: models of natural language that can be conditioned on other modalities. An image-text multimodal neural language model can be used to retrieve images given complex sentence queries, retrieve phrase descriptions given image queries, as well as generate text conditioned on images. We show that in the case of image-text modelling we can jointly learn word representations and image features by training our models together with a convolutional network. Unlike many of the existing methods, our approach can generate sentence descriptions for images without the use of templates, structured prediction, and or syntactic trees. While we focus on imagetext modelling, our algorithms can be easily applied to other modalities such as audio." ] }
1907.01195
2955623766
This paper addresses the problem of building a speech recognition system attuned to the control of unmanned aerial vehicles (UAVs). Even though UAVs are becoming widespread, the task of creating voice interfaces for them is largely unaddressed. To this end, we introduce a multi-modal evaluation dataset for UAV control, consisting of spoken commands and associated images, which represent the visual context of what the UAV "sees" when the pilot utters the command. We provide baseline results and address two research directions: (i) how robust the language models are, given an incomplete list of commands at train time; (ii) how to incorporate visual information in the language model. We find that recurrent neural networks (RNNs) are a solution to both tasks: they can be successfully adapted using a small number of commands and they can be extended to use visual cues. Our results show that the image-based RNN outperforms its text-only counterpart even if the command-image training associations are automatically generated and inherently imperfect. The dataset and our code are available at this http URL.
The work of Sun et al. @cite_26 combines all three modalities and is the most similar to ours: they attempt to improve an ASR system with a language model that takes the context image as input. We differ from them in our architectural decisions and, more importantly, in assuming a scenario with small amounts of data. For this reason we have to rely on out-of-domain datasets for initialization and on semi-automatic methods to generate training data.
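The rescoring idea in @cite_26 can be illustrated with a simplified n-best (rather than lattice) variant: combine the recognizer's acoustic score with the score of an image-conditioned language model. The weighting scheme and function names below are assumptions made for illustration only.

```python
import numpy as np

def rescore_nbest(hypotheses, acoustic_scores, lm_scorer, image_feat, lm_weight=0.5):
    """Re-rank ASR hypotheses by combining acoustic and (image-conditioned) LM scores.

    hypotheses      : list of candidate transcriptions (strings)
    acoustic_scores : log-probabilities from the recognizer, one per hypothesis
    lm_scorer       : callable (text, image_feat) -> log-probability under the LM
    """
    combined = [a + lm_weight * lm_scorer(h, image_feat)
                for h, a in zip(hypotheses, acoustic_scores)]
    return hypotheses[int(np.argmax(combined))]
```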
{ "cite_N": [ "@cite_26" ], "mid": [ "2586850765" ], "abstract": [ "In this paper, we introduce a multimodal speech recognition scenario, in which an image provides contextual information for a spoken caption to be decoded. We investigate a lattice rescoring algorithm that integrates information from the image at two different points: the image is used to augment the language model with the most likely words, and to rescore the top hypotheses using a word-level RNN. This rescoring mechanism decreases the word error rate by 3 absolute percentage points, compared to a baseline speech recognizer operating with only the speech recording." ] }
1907.01298
2954149929
Robot Learning, from a control point of view, often involves continuous actions. In Reinforcement Learning, such actions are usually handled with actor-critic algorithms. They may build on Conservative Policy Iteration (e.g., Trust Region Policy Optimization, TRPO), on policy gradient (e.g., Reinforce), on entropy regularization (e.g., Soft Actor Critic, SAC), among others (e.g., Proximal Policy Optimization, PPO), but in all cases they can be seen as a form of soft policy iteration: they iterate policy evaluation followed by a soft policy improvement step. As so, they often are naturally on-policy. In this paper, we propose to combine (any kind of) soft greediness with Modified Policy Iteration (MPI). The proposed abstract framework applies repeatedly: (i) a partial policy evaluation step that allows off-policy learning and (ii) any soft greedy step. As a proof of concept, we instantiate this framework with the PPO soft greediness. Comparison to the original PPO shows that our algorithm is much more sample efficient. We also show that it is competitive with the state-of-art off-policy algorithm SAC.
As an off-policy deep actor-critic, MoPPO can also be related to approaches such as SAC @cite_4, DDPG @cite_3 or TD3 @cite_2. They share the same characteristics (off-policy, actor-critic), but they are derived from different principles. SAC is built upon entropy-regularized policy iteration, while DDPG and TD3 are based on the deterministic policy gradient theorem. The proposed MoSoPI framework is somewhat more general, as it allows considering any soft greedy step (and thus those of the aforementioned approaches). Notice that these approaches are made off-policy by (more or less implicitly) replacing the full policy evaluation by a single TD backup. This corresponds to setting @math in our framework (but learning and sample collection are entangled, contrary to our approach).
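The alternation of a partial policy evaluation step and a soft greedy step described in the abstract can be caricatured in the tabular setting. The sketch below uses expected Bellman backups and a softmax greedy step purely for illustration; the deep, sample-based, off-policy instantiation discussed in the paper is not reproduced here, and all names and hyper-parameters are assumptions.

```python
import numpy as np

def softmax(x, tau=1.0):
    z = (x - x.max(axis=-1, keepdims=True)) / tau
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_modified_policy_iteration(P, R, gamma=0.99, m=5, tau=0.1, iters=100):
    """Tabular caricature: m partial evaluation backups, then a soft greedy step.
    P has shape (S, A, S) (transition kernel), R has shape (S, A) (rewards)."""
    S, A = R.shape
    Q = np.zeros((S, A))
    pi = np.full((S, A), 1.0 / A)
    for _ in range(iters):
        for _ in range(m):                       # (i) partial policy evaluation
            V = (pi * Q).sum(axis=1)             # expected value under current policy
            Q = R + gamma * P @ V                # one expected Bellman backup
        pi = softmax(Q, tau)                     # (ii) soft (here: softmax) greedy step
    return Q, pi
```

With m=1 this reduces to the single-backup evaluation mentioned above, while larger m corresponds to a more complete policy evaluation between improvement steps.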
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_2" ], "mid": [ "2962902376", "2963864421", "2963923407" ], "abstract": [ "Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as either off-policy Q-learning, or on-policy policy gradient methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.", "Abstract: We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.", "In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested." ] }
1907.01343
2953889673
Transfer learning focuses on the reuse of supervised learning models in a new context. Prominent applications can be found in robotics, image processing or web mining. In these areas, learning scenarios change by nature, but often remain related and motivate the reuse of existing supervised models. While the majority of symmetric and asymmetric domain adaptation algorithms utilize all available source and target domain data, we show that domain adaptation requires only a substantial smaller subset. This makes it more suitable for real-world scenarios where target domain data is rare. The presented approach finds a target subspace representation for source and target data to address domain differences by orthogonal basis transfer. We employ Nystrom techniques and show the reliability of this approximation without a particular landmark matrix by applying post-transfer normalization. It is evaluated on typical domain adaptation tasks with standard benchmark data.
Transfer learning is the task of reusing information or trained models from one domain to help learn a target prediction function in a different domain of interest @cite_9 @cite_18 @cite_16. Deep transfer networks are excluded from our comparison because they are mainly designed for specific tasks or data types and differ considerably from each other in their architecture @cite_16.
{ "cite_N": [ "@cite_9", "@cite_16", "@cite_18" ], "mid": [ "2395579298", "2887280559", "2165698076" ], "abstract": [ "Machine learning and data mining techniques have been used in numerous real-world applications. An assumption of traditional machine learning methodologies is the training data and testing data are taken from the same domain, such that the input feature space and data distribution characteristics are the same. However, in some real-world machine learning scenarios, this assumption does not hold. There are cases where training data is expensive or difficult to collect. Therefore, there is a need to create high-performance learners trained with more easily obtained data from different domains. This methodology is referred to as transfer learning. This survey paper formally defines transfer learning, presents information on current solutions, and reviews applications applied to transfer learning. Lastly, there is information listed on software downloads for various transfer learning solutions and a discussion of possible future research work. The transfer learning solutions surveyed are independent of data size and can be applied to big data environments.", "As a new classification platform, deep learning has recently received increasing attention from researchers and has been successfully applied to many domains. In some domains, like bioinformatics and robotics, it is very difficult to construct a large-scale well-annotated dataset due to the expense of data acquisition and costly annotation, which limits its development. Transfer learning relaxes the hypothesis that the training data must be independent and identically distributed (i.i.d.) with the test data, which motivates us to use transfer learning to solve the problem of insufficient training data. This survey focuses on reviewing the current researches of transfer learning by using deep neural network and its applications. We defined deep transfer learning, category and review the recent research works based on the techniques used in deep transfer learning.", "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research." ] }
1907.01343
2953889673
Transfer learning focuses on the reuse of supervised learning models in a new context. Prominent applications can be found in robotics, image processing or web mining. In these areas, learning scenarios change by nature, but often remain related and motivate the reuse of existing supervised models. While the majority of symmetric and asymmetric domain adaptation algorithms utilize all available source and target domain data, we show that domain adaptation requires only a substantial smaller subset. This makes it more suitable for real-world scenarios where target domain data is rare. The presented approach finds a target subspace representation for source and target data to address domain differences by orthogonal basis transfer. We employ Nystrom techniques and show the reliability of this approximation without a particular landmark matrix by applying post-transfer normalization. It is evaluated on typical domain adaptation tasks with standard benchmark data.
All the considered methods have a complexity of approximately @math, where @math is the larger of the source and target sample sizes. These algorithms pursue transfer learning @cite_18, because some data must be available at training time. These transfer solutions cannot be used directly as predictors, but are instead wrappers around classification algorithms. The baseline classifier used throughout is the SVM.
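The "wrapper around a baseline classifier" pattern can be sketched generically as follows. This is not the algorithm of this paper nor of any cited method; the target-derived PCA subspace is just a stand-in for whatever adaptation transform a given method computes, and all names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def adapt_and_classify(X_source, y_source, X_target, n_components=20):
    """Illustrative wrapper pattern only: map both domains into a target-derived
    subspace, then train/apply the baseline SVM on the transformed data."""
    basis = PCA(n_components=n_components).fit(X_target)
    Xs = basis.transform(X_source)   # source data expressed in the target subspace
    Xt = basis.transform(X_target)
    clf = SVC(kernel="linear").fit(Xs, y_source)
    return clf.predict(Xt)           # predicted labels for the target domain
```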
{ "cite_N": [ "@cite_18" ], "mid": [ "2165698076" ], "abstract": [ "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research." ] }
1907.01343
2953889673
Transfer learning focuses on the reuse of supervised learning models in a new context. Prominent applications can be found in robotics, image processing or web mining. In these areas, learning scenarios change by nature, but often remain related and motivate the reuse of existing supervised models. While the majority of symmetric and asymmetric domain adaptation algorithms utilize all available source and target domain data, we show that domain adaptation requires only a substantial smaller subset. This makes it more suitable for real-world scenarios where target domain data is rare. The presented approach finds a target subspace representation for source and target data to address domain differences by orthogonal basis transfer. We employ Nystrom techniques and show the reliability of this approximation without a particular landmark matrix by applying post-transfer normalization. It is evaluated on typical domain adaptation tasks with standard benchmark data.
The computational complexity of calculating kernels or eigensystems scales with @math, where @math is the sample size @cite_0. Therefore, low-rank approximations and dimensionality reduction of data matrices are popular ways to speed up these computations. In this scope, though not limited to it, the Nyström approximation @cite_15 is a reliable technique to accelerate the eigendecomposition or approximation of general symmetric matrices @cite_8. It computes an approximate set of eigenvectors and eigenvalues based on a usually much smaller sample matrix @cite_8. The landmarks are typically picked at random, but more advanced sampling strategies can be used as well @cite_6. The approximation is exact if the sample size equals the rank of the original matrix and the rows of the sample matrix are linearly independent @cite_8.
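A minimal numpy sketch of the standard Nyström eigen-approximation (in the spirit of @cite_15, not the specific construction of this paper) is given below; the threshold for discarding near-zero eigenvalues is an assumption.

```python
import numpy as np

def nystrom_eigenapprox(K, landmark_idx):
    """Approximate the eigendecomposition of a symmetric PSD matrix K (n x n)
    from m landmark columns, m << n; landmark_idx selects the landmarks
    (random selection being the typical default mentioned above)."""
    C = K[:, landmark_idx]                        # (n, m) sampled columns
    W = K[np.ix_(landmark_idx, landmark_idx)]     # (m, m) intersection block
    evals_w, evecs_w = np.linalg.eigh(W)
    pos = evals_w > 1e-10                         # keep numerically positive eigenvalues
    evals_w, evecs_w = evals_w[pos], evecs_w[:, pos]
    n, m = K.shape[0], len(landmark_idx)
    evals = (n / m) * evals_w                           # rescaled eigenvalue estimates
    evecs = np.sqrt(m / n) * (C @ evecs_w) / evals_w    # extrapolated eigenvectors
    return evals, evecs

# K is then approximated by evecs @ np.diag(evals) @ evecs.T,
# which equals the usual C W^+ C^T Nystroem reconstruction.
```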
{ "cite_N": [ "@cite_0", "@cite_15", "@cite_6", "@cite_8" ], "mid": [ "2112545207", "", "2141566892", "2120157859" ], "abstract": [ "A major problem for kernel-based predictors (such as Support Vector Machines and Gaussian processes) is that the amount of computation required to find the solution scales as O(n3), where n is the number of training examples. We show that an approximation to the eigendecomposition of the Gram matrix can be computed by the Nystrom method (which is used for the numerical solution of eigenproblems). This is achieved by carrying out an eigendecomposition on a smaller system of size m < n, and then expanding the results back up to n dimensions. The computational complexity of a predictor using this approximation is O(m2n). We report experiments on the USPS and abalone data sets and show that we can set m ≪ n without any significant decrease in the accuracy of the solution.", "", "The Nystrom method is an efficient technique to generate low-rank matrix approximations and is used in several large-scale learning applications. A key aspect of this method is the procedure according to which columns are sampled from the original matrix. In this work, we explore the efficacy of a variety of fixed and adaptive sampling schemes. We also propose a family of ensemble-based sampling algorithms for the Nystrom method. We report results of extensive experiments that provide a detailed comparison of various fixed and adaptive sampling techniques, and demonstrate the performance improvement associated with the ensemble Nystrom method when used in conjunction with either fixed or adaptive sampling schemes. Corroborating these empirical findings, we present a theoretical analysis of the Nystrom method, providing novel error bounds guaranteeing a better convergence rate of the ensemble Nystrom method in comparison to the standard Nystrom method.", "Domain specific (dis-)similarity or proximity measures used e.g. in alignment algorithms of sequence data are popular to analyze complicated data objects and to cover domain specific data properties. Without an underlying vector space these data are given as pairwise (dis-)similarities only. The few available methods for such data focus widely on similarities and do not scale to large datasets. Kernel methods are very effective for metric similarity matrices, also at large scale, but costly transformations are necessary starting with non-metric (dis-) similarities. We propose an integrative combination of Nystrom approximation, potential double centering and eigenvalue correction to obtain valid kernel matrices at linear costs in the number of samples. By the proposed approach effective kernel approaches become accessible. Experiments with several larger (dis-)similarity datasets show that the proposed method achieves much better runtime performance than the standard strategy while keeping competitive model accuracy. The main contribution is an efficient and accurate technique, to convert (potentially non-metric) large scale dissimilarity matrices into approximated positive semi-definite kernel matrices at linear costs. 
HighlightsWe propose a linear time and memory efficient approach for converting low rank dissimilarity matrices to similarity matrices and vice versa.Our approach is applicable for proximities obtained from non-metric proximity measures (indefinite kernels, non-standard dissimilarity measures).The presented approach also comprises a generalization of Landmark MDS - the presented approach is in general more accurate and flexible than Landmark MDS.We provide an alternative derivation of the Nystrom approximation together with a convergence proof, also for indefinite kernels not given in the workshop paper as a core element of the approach." ] }
1907.01343
2953889673
Transfer learning focuses on the reuse of supervised learning models in a new context. Prominent applications can be found in robotics, image processing or web mining. In these areas, learning scenarios change by nature, but often remain related and motivate the reuse of existing supervised models. While the majority of symmetric and asymmetric domain adaptation algorithms utilize all available source and target domain data, we show that domain adaptation requires only a substantial smaller subset. This makes it more suitable for real-world scenarios where target domain data is rare. The presented approach finds a target subspace representation for source and target data to address domain differences by orthogonal basis transfer. We employ Nystrom techniques and show the reliability of this approximation without a particular landmark matrix by applying post-transfer normalization. It is evaluated on typical domain adaptation tasks with standard benchmark data.
The polar decomposition @cite_7 is a universal decomposition applicable to an arbitrary matrix and is defined as @math, where @math and @math, with @math denoting the singular values and @math and @math the left and right singular vectors, respectively. If @math is a square matrix, the decomposition is unique, @math is orthogonal and a rotation matrix, and @math is positive semi-definite and acts as the scaling factor of @math. The eigenvectors and the square roots of the eigenvalues of @math are then the singular vectors and singular values of @math, respectively. Note that the spectral theorem is incorporated here, which in this context means that for symmetric positive semi-definite matrices the EVD and the SVD coincide.
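A small numpy sketch of computing the polar decomposition of a square matrix from its SVD (a standard construction, not specific to the cited work): with A = W S V^T, the orthogonal factor is W V^T and the positive semi-definite factor is V S V^T.

```python
import numpy as np

def polar_decomposition(A):
    """Polar decomposition A = U_p @ P of a square matrix A via the SVD:
    U_p = W V^T (orthogonal/rotation factor), P = V S V^T (PSD scaling factor)."""
    W, s, Vt = np.linalg.svd(A)
    U_p = W @ Vt
    P = Vt.T @ np.diag(s) @ Vt
    return U_p, P

# Quick check on a random square matrix:
# A = np.random.randn(4, 4)
# U_p, P = polar_decomposition(A)
# assert np.allclose(U_p @ P, A) and np.allclose(U_p.T @ U_p, np.eye(4))
```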
{ "cite_N": [ "@cite_7" ], "mid": [ "2088307891" ], "abstract": [ "A quadratically convergent Newton method for computing the polar decomposition of a full-rank matrix is presented and analysed. Acceleration parameters are introduced so as to enhance the initial rate of convergence and it is shown how reliable estimates of the optimal parameters may be computed in practice.To add to the known best approximation property of the unitary polar factor, the Hermitian polar factor H of a nonsingular Hermitian matrix A is shown to be a good positive definite approximation to Aand @math is shown to be a best Hermitian positive semi-definite approximation to A. Perturbation bounds for the polar factors are derived.Applications of the polar decomposition to factor analysis, aerospace computations and optimisation are outlined; and a new method is derived for computing the square root of a symmetric positive definite matrix." ] }
1907.01343
2953889673
Transfer learning focuses on the reuse of supervised learning models in a new context. Prominent applications can be found in robotics, image processing or web mining. In these areas, learning scenarios change by nature, but often remain related and motivate the reuse of existing supervised models. While the majority of symmetric and asymmetric domain adaptation algorithms utilize all available source and target domain data, we show that domain adaptation requires only a substantial smaller subset. This makes it more suitable for real-world scenarios where target domain data is rare. The presented approach finds a target subspace representation for source and target data to address domain differences by orthogonal basis transfer. We employ Nystrom techniques and show the reliability of this approximation without a particular landmark matrix by applying post-transfer normalization. It is evaluated on typical domain adaptation tasks with standard benchmark data.
The Gershgorin theorem @cite_17 provides a geometric means to bound the eigenvalues of complex square matrices to so-called discs, and it applies to real square matrices as well. By evaluating these discs, it is possible to estimate the numerical range of the eigenvalues of @math.
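The discs are cheap to compute: each is centered at a diagonal entry with radius equal to the off-diagonal absolute row sum. The sketch below returns the discs and, for a real symmetric matrix, a crude interval containing all eigenvalues; it is a generic illustration of the theorem, not of this paper's use of it.

```python
import numpy as np

def gershgorin_discs(A):
    """Return (center, radius) for every Gershgorin disc of a square matrix A.
    Every eigenvalue of A lies in the union of these discs."""
    A = np.asarray(A)
    centers = np.diag(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(centers)  # off-diagonal row sums
    return list(zip(centers, radii))

def eigenvalue_range_bound(A):
    """Interval containing all eigenvalues of a real symmetric matrix A
    (symmetric matrices have real eigenvalues, so an interval bound is valid)."""
    discs = gershgorin_discs(A)
    lo = min(c - r for c, r in discs)
    hi = max(c + r for c, r in discs)
    return lo, hi
```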
{ "cite_N": [ "@cite_17" ], "mid": [ "1511572470" ], "abstract": [ "This book studies the original results, and their extensions, of the Russian mathematician, S.A. Gersgorin, who wrote a seminal paper in 1931, on how to easily obtain estimates of all n eigenvalues (characteristic values) of any given n-by-n complex matrix. Since the publication of this paper, there has been many newer results spawned by his paper, and this book will be the first which is devoted solely to this resulting area. As such, it will include the latest research results, such as Brauer ovals of Cassini and Brualdi lemniscates, and their comparisons. This book is dedicated to the late Olga Taussky-Todd and her husband, John Todd. It was Olga who brought to light Gersgorin's paper and its significance to the mathematical world. The level of this book requires only a modest background in linear algebra and analysis, and is therefore comprehensible to upper-level and graduate level students in mathematics." ] }
1907.01277
2954304925
Data-driven models for audio source separation such as U-Net or Wave-U-Net are usually models dedicated to and specifically trained for a single task, e.g. a particular instrument isolation. Training them for various tasks at once commonly results in worse performances than training them for a single specialized task. In this work, we introduce the Conditioned-U-Net (C-U-Net) which adds a control mechanism to the standard U-Net. The control mechanism allows us to train a unique and generic U-Net to perform the separation of various instruments. The C-U-Net decides the instrument to isolate according to a one-hot-encoding input vector. The input vector is embedded to obtain the parameters that control Feature-wise Linear Modulation (FiLM) layers. FiLM layers modify the U-Net feature maps in order to separate the desired instrument via affine transformations. The C-U-Net performs different instrument separations, all with a single model achieving the same performances as the dedicated ones at a lower cost.
We refer the reader to @cite_13 for an extensive overview of the different source separation techniques; here we review only the data-driven approaches, where neural networks have taken the lead. Although architectures such as RNNs @cite_18 or CNNs @cite_2 have been studied, the most successful ones use a deep U-Net architecture (also called U-Net). In @cite_19, the U-Net is applied to a spectrogram to separate the vocal and accompaniment components, training a specific model for each task. Since the output is a spectrogram, the audio signal has to be reconstructed, which potentially leads to artifacts. For this reason, Wave-U-Net applies the U-Net directly to the audio waveform @cite_14. The authors also adapt their model to isolate several sources at once by adding to the dedicated version as many outputs as there are sources to separate. However, this multi-instrument version performs worse than the dedicated one (for vocal isolation) and has to be retrained for different source combinations.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_19", "@cite_2", "@cite_13" ], "mid": [ "1790748249", "2805288670", "2774707525", "2587994092", "2796571515" ], "abstract": [ "Monaural source separation is important for many real world applications. It is challenging because, with only a single channel of information available, without any constraints, an infinite number of solutions are possible. In this paper, we explore joint optimization of masking functions and deep recurrent neural networks for monaural source separation tasks, including speech separation, singing voice separation, and speech denoising. The joint optimization of the deep recurrent neural networks with an extra masking layer enforces a reconstruction constraint. Moreover, we explore a discriminative criterion for training neural networks to further enhance the separation performance. We evaluate the proposed system on the TSP, MIR-1K, and TIMIT datasets for speech separation, singing voice separation, and speech denoising tasks, respectively. Our approaches achieve 2.30-4.98 dB SDR gain compared to NMF models in the speech separation task, 2.30-2.48 dB GNSDR gain and 4.32-5.42 dB GSIR gain compared to existing models in the singing voice separation task, and outperform NMF and DNN baselines in the speech denoising task.", "Models for audio source separation usually operate on the magnitude spectrum, which ignores phase information and makes separation performance dependant on hyper-parameters for the spectral front-end. Therefore, we investigate end-to-end source separation in the time-domain, which allows modelling phase information and avoids fixed spectral transformations. Due to high sampling rates for audio, employing a long temporal input context on the sample level is difficult, but required for high quality separation results because of long-range temporal correlations. In this context, we propose the Wave-U-Net, an adaptation of the U-Net to the one-dimensional time domain, which repeatedly resamples feature maps to compute and combine features at different time scales. We introduce further architectural improvements, including an output layer that enforces source additivity, an upsampling technique and a context-aware prediction framework to reduce output artifacts. Experiments for singing voice separation indicate that our architecture yields a performance comparable to a state-of-the-art spectrogram-based U-Net architecture, given the same data. Finally, we reveal a problem with outliers in the currently used SDR evaluation metrics and suggest reporting rank-based statistics to alleviate this problem.", "The decomposition of a music audio signal into its vocal and backing track components is analogous to image-to-image translation, where a mixed spectrogram is transformed into its constituent sources. We propose a novel application of the U-Net architecture — initially developed for medical imaging — for the task of source separation, given its proven capacity for recreating the fine, low-level detail required for high-quality audio reproduction. Through both quantitative evaluation and subjective assessment, experiments demonstrate that the proposed algorithm achieves state-of-the-art performance.", "In this paper we introduce a low-latency monaural source separation framework using a Convolutional Neural Network (CNN). We use a CNN to estimate time-frequency soft masks which are applied for source separation. 
We evaluate the performance of the neural network on a database comprising of musical mixtures of three instruments: voice, drums, bass as well as other instruments which vary from song to song. The proposed architecture is compared to a Multilayer Perceptron (MLP), achieving on-par results and a significant improvement in processing time. The algorithm was submitted to source separation evaluation campaigns to test efficiency, and achieved competitive results.", "Popular music is often composed of an accompaniment and a lead component, the latter typically consisting of vocals. Filtering such mixtures to extract one or both components has many applications, such as automatic karaoke and remixing. This particular case of source separation yields very specific challenges and opportunities, including the particular complexity of musical structures, but also relevant prior knowledge coming from acoustics, musicology or sound engineering. Due to both its importance in applications and its challenging difficulty, lead and accompaniment separation has been a popular topic in signal processing for decades. In this article, we provide a comprehensive review of this research topic, organizing the different approaches according to whether they are model-based or data-centered. For model-based methods, we organize them according to whether they concentrate on the lead signal, the accompaniment, or both. For data-centered approaches, we discuss the particular difficulty of obtaining data for learning lead separation systems, and then review recent approaches, notably those based on deep learning. Finally, we discuss the delicate problem of evaluating the quality of music separation through adequate metrics and present the results of the largest evaluation, to-date, of lead and accompaniment separation systems. In conjunction with the above, a comprehensive list of references is provided, along with relevant pointers to available implementations and repositories." ] }
1907.01277
2954304925
Data-driven models for audio source separation such as U-Net or Wave-U-Net are usually models dedicated to and specifically trained for a single task, e.g. a particular instrument isolation. Training them for various tasks at once commonly results in worse performances than training them for a single specialized task. In this work, we introduce the Conditioned-U-Net (C-U-Net) which adds a control mechanism to the standard U-Net. The control mechanism allows us to train a unique and generic U-Net to perform the separation of various instruments. The C-U-Net decides the instrument to isolate according to a one-hot-encoding input vector. The input vector is embedded to obtain the parameters that control Feature-wise Linear Modulation (FiLM) layers. FiLM layers modify the U-Net feature maps in order to separate the desired instrument via affine transformations. The C-U-Net performs different instrument separations, all with a single model achieving the same performances as the dedicated ones at a lower cost.
The closest work to ours is @cite_6. There, the authors propose to use multi-channel audio as input to a Variational Auto-Encoder (VAE) to separate four different speakers. The VAE is conditioned on the ID of the speaker to be separated, and the proposed method outperforms its baseline.
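Both the conditioned VAE of @cite_6 and the C-U-Net described in the abstract above condition the separator on the identity of the source to isolate. The PyTorch sketch below shows FiLM-style conditioning as described in the abstract (an embedding of the one-hot source vector produces per-channel scale and shift parameters applied to the feature maps); the layer sizes and the single-linear-layer controller are illustrative choices, not the exact C-U-Net design.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise Linear Modulation: scale and shift feature maps with
    parameters predicted from a condition vector (e.g. a one-hot source id)."""
    def __init__(self, cond_dim, n_channels):
        super().__init__()
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * n_channels)

    def forward(self, feats, cond):
        # feats: (batch, channels, freq, time) feature maps from the separator
        # cond : (batch, cond_dim) one-hot encoding of the source to isolate
        gamma, beta = self.to_gamma_beta(cond).chunk(2, dim=-1)
        gamma = gamma[:, :, None, None]          # broadcast over freq/time
        beta = beta[:, :, None, None]
        return gamma * feats + beta

# Usage with dummy tensors (4 possible sources, 16-channel feature maps):
# film = FiLM(cond_dim=4, n_channels=16)
# out = film(torch.randn(2, 16, 64, 128), torch.eye(4)[:2])
```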
{ "cite_N": [ "@cite_6" ], "mid": [ "2886577208" ], "abstract": [ "This paper proposes a multichannel source separation technique called the multichannel variational autoencoder (MVAE) method, which uses a conditional VAE (CVAE) to model and estimate the power spectrograms of the sources in a mixture. By training the CVAE using the spectrograms of training examples with source-class labels, we can use the trained decoder distribution as a universal generative model capable of generating spectrograms conditioned on a specified class label. By treating the latent space variables and the class label as the unknown parameters of this generative model, we can develop a convergence-guaranteed semi-blind source separation algorithm that consists of iteratively estimating the power spectrograms of the underlying sources as well as the separation matrices. In experimental evaluations, our MVAE produced better separation performance than a baseline method." ] }
1907.01099
2954438498
Many computational models were proposed to extract temporal patterns from clinical time series for each patient and among patient group for predictive healthcare. However, the common relations among patients (e.g., share the same doctor) were rarely considered. In this paper, we represent patients and clinicians relations by bipartite graphs addressing for example from whom a patient get a diagnosis. We then solve for the top eigenvectors of the graph Laplacian, and include the eigenvectors as latent representations of the similarity between patient-clinician pairs into a time-sensitive prediction model. We conducted experiments using real-world data to predict the initiation of first-line treatment for Chronic Lymphocytic Leukemia (CLL) patients. Results show that relational similarity can improve prediction over multiple baselines, for example a 5 incremental over long-short term memory baseline in terms of area under precision-recall curve.
Medical treatment prediction is a core research task in disease progression modeling, and many deep learning models have recently made rapid advances on this topic. In @cite_19, a two-level attention model was designed to detect influential past visits and significant clinical variables for better prediction accuracy and interpretability. In @cite_17, a graph-based attention model was proposed to extract hierarchical information from medical ontologies and improve RNN-based rare disease prediction. In @cite_1, a bi-directional RNN was designed to remember information from both past and future visits, using three attention mechanisms to measure the relationship between different visits for prediction. In @cite_2, an RNN architecture that dynamically matches temporal patterns was proposed to learn the similarity between two longitudinal patient record sequences for personalized prediction of Parkinson's Disease. Other similar approaches have also been proposed @cite_8 @cite_14.
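To make the recurring "RNN plus attention over visits" pattern concrete, here is a deliberately simplified single-level sketch in PyTorch: an RNN encodes the visit sequence and learned attention weights indicate which past visits drive the prediction. It is not a reproduction of any cited model (which use two-level, graph-based or bi-directional variants); sizes and the binary prediction head are assumptions.

```python
import torch
import torch.nn as nn

class VisitAttention(nn.Module):
    """Single-level visit attention: encode visits with a GRU, pool the hidden
    states with attention weights, and predict a binary outcome."""
    def __init__(self, input_dim, hid_dim=64):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hid_dim, batch_first=True)
        self.score = nn.Linear(hid_dim, 1)       # unnormalized importance per visit
        self.classify = nn.Linear(hid_dim, 1)

    def forward(self, visits):
        # visits: (batch, n_visits, input_dim) per-visit feature vectors
        states, _ = self.rnn(visits)                       # (batch, n_visits, hid)
        alpha = torch.softmax(self.score(states), dim=1)   # attention over visits
        context = (alpha * states).sum(dim=1)              # weighted visit summary
        return torch.sigmoid(self.classify(context)), alpha.squeeze(-1)

# risk, visit_weights = VisitAttention(input_dim=32)(torch.randn(8, 10, 32))
```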
{ "cite_N": [ "@cite_14", "@cite_8", "@cite_1", "@cite_19", "@cite_2", "@cite_17" ], "mid": [ "2914241418", "2896538705", "2690721124", "2963271116", "2623881437", "2557074642" ], "abstract": [ "Diagnosis prediction aims to predict the future health status of patients according to their historical visit records, which is an important yet challenging task in healthcare informatics. Existing diagnosis prediction approaches mainly employ recurrent neural networks (RNNs) with attention mechanisms to make predictions. However, these approaches ignore the importance of code descriptions, i.e., the medical definitions of diagnosis codes. We believe that taking diagnosis code descriptions into account can help the state-of-the-art models not only to learn meaningful code representations, but also to improve the predictive performance. Thus, in this paper, we propose a simple, but general diagnosis prediction framework, which includes two basic components: diagnosis code embedding and predictive model. To learn the interpretable code embeddings, we apply convolutional neural networks (CNNs) to model medical descriptions of diagnosis codes extracted from online medical websites. The learned medical embedding matrix is used to embed the input visits into vector representations, which are fed into the predictive models. Any existing diagnosis prediction approach (referred to as the base model) can be cast into the proposed framework as the predictive model (called the enhanced model). We conduct experiments on two real medical datasets: the MIMIC-III dataset and the Heart Failure claim dataset. Experimental results show that the enhanced diagnosis prediction approaches significantly improve the prediction performance.", "The goal of diagnosis prediction task is to predict the future health information of patients from their historical Electronic Healthcare Records (EHR). The most important and challenging problem of diagnosis prediction is to design an accurate, robust and interpretable predictive model. Existing work solves this problem by employing recurrent neural networks (RNNs) with attention mechanisms, but these approaches suffer from the data sufficiency problem. To obtain good performance with insufficient data, graph-based attention models are proposed. However, when the training data are sufficient, they do not offer any improvement in performance compared with ordinary attention-based models. To address these issues, we propose KAME, an end-to-end, accurate and robust model for predicting patients' future health information. KAME not only learns reasonable embeddings for nodes in the knowledge graph, but also exploits general knowledge to improve the prediction accuracy with the proposed knowledge attention mechanism. With the learned attention weights, KAME allows us to interpret the importance of each piece of knowledge in the graph. Experimental results on three real world datasets show that the proposed KAME significantly improves the prediction performance compared with the state-of-the-art approaches, guarantees the robustness with both sufficient and insufficient data, and learns interpretable disease representations.", "Predicting the future health information of patients from the historical Electronic Health Records (EHR) is a core research task in the development of personalized healthcare. Patient EHR data consist of sequences of visits over time, where each visit contains multiple medical codes, including diagnosis, medication, and procedure codes. 
The most important challenges for this task are to model the temporality and high dimensionality of sequential EHR data and to interpret the prediction results. Existing work solves this problem by employing recurrent neural networks (RNNs) to model EHR data and utilizing simple attention mechanism to interpret the results. However, RNN-based approaches suffer from the problem that the performance of RNNs drops when the length of sequences is large, and the relationships between subsequent visits are ignored by current RNN-based approaches. To address these issues, we propose Dipole, an end-to-end, simple and robust model for predicting patients' future health information. Dipole employs bidirectional recurrent neural networks to remember all the information of both the past visits and the future visits, and it introduces three attention mechanisms to measure the relationships of different visits for the prediction. With the attention mechanisms, Dipole can interpret the prediction results effectively. Dipole also allows us to interpret the learned medical code representations which are confirmed positively by medical experts. Experimental results on two real world EHR datasets show that the proposed Dipole can significantly improve the prediction accuracy compared with the state-of-the-art diagnosis prediction approaches and provide clinically meaningful interpretation.", "Accuracy and interpretability are two dominant features of successful predictive models. Typically, a choice must be made in favor of complex black box models such as recurrent neural networks (RNN) for accuracy versus less accurate but more interpretable traditional models such as logistic regression. This tradeoff poses challenges in medicine where both accuracy and interpretability are important. We addressed this challenge by developing the REverse Time AttentIoN model (RETAIN) for application to Electronic Health Records (EHR) data. RETAIN achieves high accuracy while remaining clinically interpretable and is based on a two-level neural attention model that detects influential past visits and significant clinical variables within those visits (e.g. key diagnoses). RETAIN mimics physician practice by attending the EHR data in a reverse time order so that recent clinical visits are likely to receive higher attention. RETAIN was tested on a large health system EHR dataset with 14 million visits completed by 263K patients over an 8 year period and demonstrated predictive accuracy and computational scalability comparable to state-of-the-art methods such as RNN, and ease of interpretability comparable to traditional models.", "", "Deep learning methods exhibit promising performance for predictive modeling in healthcare, but two important challenges remain: - Data insufficiency: Often in healthcare predictive modeling, the sample size is insufficient for deep learning methods to achieve satisfactory results. Interpretation: The representations learned by deep learning methods should align with medical knowledge. To address these challenges, we propose GRaph-based Attention Model (GRAM) that supplements electronic health records (EHR) with hierarchical information inherent to medical ontologies. Based on the data volume and the ontology structure, GRAM represents a medical concept as a combination of its ancestors in the ontology via an attention mechanism. We compared predictive performance (i.e. accuracy, data needs, interpretability) of GRAM to various methods including the recurrent neural network (RNN) in two sequential diagnoses prediction tasks and one heart failure prediction task. Compared to the basic RNN, GRAM achieved 10% higher accuracy for predicting diseases rarely observed in the training data and 3% improved area under the ROC curve for predicting heart failure using an order of magnitude less training data. Additionally, unlike other methods, the medical concept representations learned by GRAM are well aligned with the medical ontology. Finally, GRAM exhibits intuitive attention behaviors by adaptively generalizing to higher level concepts when facing data insufficiency at the lower level concepts." ] }
1907.01300
2954077879
The inability of naive users to formulate appropriate queries is a fundamental problem in web search engines. Therefore, assisting users to issue more effective queries is an important way to improve users' happiness. One effective approach is query reformulation, which generates new effective queries according to the current query issued by users. Previous research typically generates words and phrases related to the original query. Since the definition of query reformulation is quite general, it is difficult to develop a uniform term-based approach for this problem. This paper uses readily available data, in particular over one billion anchor phrases in the ClueWeb09 corpus, to learn an end-to-end encoder-decoder model that automatically generates effective queries. Following successful research on sequence-to-sequence models, we employ a character-level convolutional neural network with max-pooling as the encoder and an attention-based recurrent neural network as the decoder. The whole model is learned in an unsupervised, end-to-end manner. Experiments on TREC collections show that the reformulated queries automatically generated by the proposed solution can significantly improve retrieval performance.
Sequence-to-sequence models @cite_31 aim at building an end-to-end deep neural network that takes as input a sequence @math and returns a sequence @math , where @math and @math are source and target symbols respectively, while @math and @math are the lengths of the source and target sequences. This approach has delivered state-of-the-art performance in areas such as neural machine translation, both in industry @cite_4 @cite_41 and in academia @cite_14 @cite_0 @cite_1 @cite_32 .
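As a concrete reference for the generic encoder-decoder mapping described above, the following is a minimal PyTorch sketch with a GRU encoder, a GRU decoder and greedy decoding. It deliberately omits the character-level CNN encoder and the attention mechanism used in the paper; the vocabulary sizes, special-token ids and random input are invented for the example.

```python
# A minimal sequence-to-sequence sketch: GRU encoder + GRU decoder, greedy decoding.
# It illustrates the generic x_1..x_n -> y_1..y_m mapping only; no attention, no CNN.
import torch
import torch.nn as nn

PAD, BOS, EOS = 0, 1, 2          # made-up special token ids

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, dim=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim, padding_idx=PAD)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim, padding_idx=PAD)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.project = nn.Linear(dim, tgt_vocab)

    def forward(self, src, max_len=20):
        _, state = self.encoder(self.src_emb(src))       # final state summarizes the source
        token = torch.full((src.size(0), 1), BOS, dtype=torch.long)
        outputs = []
        for _ in range(max_len):                          # emit one target symbol at a time
            out, state = self.decoder(self.tgt_emb(token), state)
            token = self.project(out[:, -1]).argmax(-1, keepdim=True)
            outputs.append(token)
        return torch.cat(outputs, dim=1)

model = Seq2Seq(src_vocab=50, tgt_vocab=40)
src = torch.randint(3, 50, (2, 10))                      # two fake source sequences
print(model(src).shape)                                  # (2, 20) generated target ids
```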
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_41", "@cite_1", "@cite_32", "@cite_0", "@cite_31" ], "mid": [ "2964308564", "", "2522606127", "2157331557", "", "2964199361", "2949888546" ], "abstract": [ "Abstract: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.", "", "Machine transliteration is the process of automatically transforming the script of a word from a source language to a target language, while preserving pronunciation. Sequence to sequence learning has recently emerged as a new paradigm in supervised learning. In this paper a character-based encoder-decoder model has been proposed that consists of two Recurrent Neural Networks. The encoder is a Bidirectional recurrent neural network that encodes a sequence of symbols into a fixed-length vector representation, and the decoder generates the target sequence using an attention-based recurrent neural network. The encoder, the decoder and the attention mechanism are jointly trained to maximize the conditional probability of a target sequence given a source sequence. Our experiments on different datasets show that the proposed encoder-decoder model is able to achieve significantly higher transliteration quality over traditional statistical models.", "In this paper, we propose a novel neural network model called RNN Encoder‐ Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixedlength vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder‐Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.", "", "Neural machine translation is a relatively new approach to statistical machine translation based purely on neural networks. The neural machine translation models often consist of an encoder and a decoder. The encoder extracts a fixed-length representation from a variable-length input sentence, and the decoder generates a correct translation from this representation. 
In this paper, we focus on analyzing the properties of the neural machine translation using two models; RNN Encoder‐Decoder and a newly proposed gated recursive convolutional neural network. We show that the neural machine translation performs relatively well on short sentences without unknown words, but its performance degrades rapidly as the length of the sentence and the number of unknown words increase. Furthermore, we find that the proposed gated recursive convolutional network learns a grammatical structure of a sentence automatically.", "Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT'14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous best result on this task. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier." ] }
1907.01326
2954229381
We propose a general framework for the recommendation of possible customers (users) to advertisers (e.g., brands) based on the comparison between On-line Social Network profiles. In particular, we represent both user and brand profiles as trees where nodes correspond to categories and sub-categories in the associated On-line Social Network. When categories involve posts and comments, the comparison is based on word embedding, and this allows us to take into account the similarity between topics popular in the brand profile and user preferences. Results on real datasets show that our approach is successful in identifying the most suitable set of users to be used as the target for a given advertisement campaign.
The authors of @cite_7 use Differential Language Analysis (DLA) in order to find language features across millions of Facebook messages that distinguish demographic and psychological attributes. They show that their approach can yield additional insights (correlations between personality and behavior as manifest through language) and more information (as measured through predictive accuracy) than traditional a priori word-category approaches.
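A toy sketch of the open-vocabulary correlation idea is shown below: per-user relative word frequencies are correlated with a numeric trait score. The tiny corpus and scores are fabricated, and real DLA additionally controls for covariates and corrects for multiple comparisons.

```python
# Toy open-vocabulary analysis: correlate per-user relative word frequencies with a
# numeric trait score. Data is made up; real DLA is far larger and more careful.
from collections import Counter
from scipy.stats import pearsonr

users = [
    ("happy great fun great", 4.5),
    ("sick tired sick of everything", 1.8),
    ("fun trip great friends", 4.1),
    ("tired sick again", 2.0),
]
vocab = sorted({w for text, _ in users for w in text.split()})

rows, scores = [], []
for text, score in users:
    counts = Counter(text.split())
    total = sum(counts.values())
    rows.append([counts[w] / total for w in vocab])   # relative frequency per word
    scores.append(score)

for j, word in enumerate(vocab):
    freqs = [row[j] for row in rows]
    r, p = pearsonr(freqs, scores)                    # word-trait correlation
    print(f"{word:10s} r={r:+.2f} p={p:.2f}")
```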
{ "cite_N": [ "@cite_7" ], "mid": [ "2119595472" ], "abstract": [ "We analyzed 700 million words, phrases, and topic instances collected from the Facebook messages of 75,000 volunteers, who also took standard personality tests, and found striking variations in language with personality, gender, and age. In our open-vocabulary technique, the data itself drives a comprehensive exploration of language that distinguishes people, finding connections that are not captured with traditional closed-vocabulary word-category analyses. Our analyses shed new light on psychosocial processes yielding results that are face valid (e.g., subjects living in high elevations talk about the mountains), tie in with other research (e.g., neurotic people disproportionately use the phrase ‘sick of’ and the word ‘depressed’), suggest new hypotheses (e.g., an active life implies emotional stability), and give detailed insights (males use the possessive ‘my’ when mentioning their ‘wife’ or ‘girlfriend’ more often than females use ‘my’ with ‘husband’ or 'boyfriend’). To date, this represents the largest study, by an order of magnitude, of language and personality." ] }
1907.01326
2954229381
We propose a general framework for the recommendation of possible customers (users) to advertisers (e.g., brands) based on the comparison between On-line Social Network profiles. In particular, we represent both user and brand profiles as trees where nodes correspond to categories and sub-categories in the associated On-line Social Network. When categories involve posts and comments, the comparison is based on word embedding, and this allows us to take into account the similarity between topics popular in the brand profile and user preferences. Results on real datasets show that our approach is successful in identifying the most suitable set of users to be used as the target for a given advertisement campaign.
The framework proposed in @cite_3 relies on a semi-supervised topic model to construct a representation of an app's version as a set of latent topics from version metadata and textual descriptions. The authors discriminate the topics based on genre information and weight them on a per-user basis, in order to generate a version-sensitive ranked list of apps for a target user.
{ "cite_N": [ "@cite_3" ], "mid": [ "2036839989" ], "abstract": [ "Existing recommender systems usually model items as static -- unchanging in attributes, description, and features. However, in domains such as mobile apps, a version update may provide substantial changes to an app as updates, reflected by an increment in its version number, may attract a consumer's interest for a previously unappealing version. Version descriptions constitute an important recommendation evidence source as well as a basis for understanding the rationale for a recommendation. We present a novel framework that incorporates features distilled from version descriptions into app recommendation. We use a semi-supervised topic model to construct a representation of an app's version as a set of latent topics from version metadata and textual descriptions. We then discriminate the topics based on genre information and weight them on a per-user basis to generate a version-sensitive ranked list of apps for a target user. Incorporating our version features with state-of-the-art individual and hybrid recommendation techniques significantly improves recommendation quality. An important advantage of our method is that it targets particular versions of apps, allowing previously disfavored apps to be recommended when user-relevant features are added." ] }
1907.01326
2954229381
We propose a general framework for the recommendation of possible customers (users) to advertisers (e.g., brands) based on the comparison between On-line Social Network profiles. In particular, we represent both user and brand profiles as trees where nodes correspond to categories and sub-categories in the associated On-line Social Network. When categories involve posts and comments, the comparison is based on word embedding, and this allows us to take into account the similarity between topics popular in the brand profile and user preferences. Results on real datasets show that our approach is successful in identifying the most suitable set of users to be used as the target for a given advertisement campaign.
In @cite_9 the authors propose a dynamic user and word embedding algorithm that can jointly and dynamically model user and word representations in the same semantic space. They consider the context of streams of documents in Twitter, and propose a scalable black-box variational inference algorithm to infer the dynamic embeddings of both users and words in streams. They also propose a streaming keyword diversification model to diversify top-K keywords for characterizing users’ profiles over time.
{ "cite_N": [ "@cite_9" ], "mid": [ "2808858103" ], "abstract": [ "In this paper, we study the problem of dynamic user profiling in Twitter. We address the problem by proposing a dynamic user and word embedding model (DUWE), a scalable black-box variational inference algorithm, and a streaming keyword diversification model (SKDM). DUWE dynamically tracks the semantic representations of users and words over time and models their embeddings in the same space so that their similarities can be effectively measured. Our inference algorithm works with a convex objective function that ensures the robustness of the learnt embeddings. SKDM aims at retrieving top-K relevant and diversified keywords to profile users' dynamic interests. Experiments on a Twitter dataset demonstrate that our proposed embedding algorithms outperform state-of-the-art non-dynamic and dynamic embedding and topic models." ] }
1907.01326
2954229381
We propose a general framework for the recommendation of possible customers (users) to advertisers (e.g., brands) based on the comparison between On-line Social Network profiles. In particular, we represent both user and brand profiles as trees where nodes correspond to categories and sub-categories in the associated On-line Social Network. When categories involve posts and comments, the comparison is based on word embedding, and this allows us to take into account the similarity between topics popular in the brand profile and user preferences. Results on real datasets show that our approach is successful in identifying the most suitable set of users to be used as the target for a given advertisement campaign.
In @cite_2 individuals are associated with each other due to actions they share (e.g., they have visited the same web pages). The proximity between individuals on networks built upon such relationships is informative about their profile matching. In particular, brand-affinity audiences are built by selecting the social-network neighbors of existing brand actors, identified via co-visitation of social-networking pages. This is achieved without saving any information about the identities of the browsers or the content of the social-network pages, thus allowing for user anonymization.
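The neighbor-selection idea can be sketched in a few lines: build a co-visitation graph over anonymous browser ids and take the neighbors of known brand actors as the candidate audience. The ids and edges below are invented, and the cited work uses richer brand-proximity measures than the 1-hop neighborhood shown here.

```python
# Sketch of audience selection from a co-visitation graph of anonymous browser ids.
# Ids, edges and brand actors are fabricated for illustration.
import networkx as nx

# edge = two browsers that visited the same social-networking page
co_visits = [("u1", "u2"), ("u2", "u3"), ("u3", "u4"), ("u1", "u5"), ("u5", "u6")]
G = nx.Graph(co_visits)

brand_actors = {"u1", "u3"}              # browsers already known to engage with the brand

audience = set()
for actor in brand_actors:
    audience.update(G.neighbors(actor))  # 1-hop "brand proximity"
audience -= brand_actors                 # exclude the seed actors themselves

print(sorted(audience))                  # ['u2', 'u4', 'u5']
```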
{ "cite_N": [ "@cite_2" ], "mid": [ "2160986013" ], "abstract": [ "This paper describes and evaluates privacy-friendly methods for extracting quasi-social networks from browser behavior on user-generated content sites, for the purpose of finding good audiences for brand advertising (as opposed to click maximizing, for example). Targeting social-network neighbors resonates well with advertisers, and on-line browsing behavior data counterintuitively can allow the identification of good audiences anonymously. Besides being one of the first papers to our knowledge on data mining for on-line brand advertising, this paper makes several important contributions. We introduce a framework for evaluating brand audiences, in analogy to predictive-modeling holdout evaluation. We introduce methods for extracting quasi-social networks from data on visitations to social networking pages, without collecting any information on the identities of the browsers or the content of the social-network pages. We introduce measures of brand proximity in the network, and show that audiences with high brand proximity indeed show substantially higher brand affinity. Finally, we provide evidence that the quasi-social network embeds a true social network, which along with results from social theory offers one explanation for the increase in brand affinity of the selected audiences." ] }
1907.01326
2954229381
We propose a general framework for the recommendation of possible customers (users) to advertisers (e.g., brands) based on the comparison between On-line Social Network profiles. In particular, we represent both user and brand profiles as trees where nodes correspond to categories and sub-categories in the associated On-line Social Network. When categories involve posts and comments, the comparison is based on word embedding, and this allows us to take into account the similarity between topics popular in the brand profile and user preferences. Results on real datasets show that our approach is successful in identifying the most suitable set of users to be used as the target for a given advertisement campaign.
In @cite_0 compact and effective user profiles are generated from the history of user actions, i.e., a mixture of user interests over a period of time. The authors propose a streaming, distributed inference algorithm which is able to handle tens of millions of users. They show that their model contributes towards improved behavioral targeting of display advertising relative to baseline models that do not incorporate topical and/or temporal dependencies.
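A rough approximation of keeping topical profiles current is sketched below, with scikit-learn's online (mini-batch) LDA standing in for the paper's streaming, distributed inference; the documents, batch sizes and number of topics are made up.

```python
# Sketch of streaming topical user profiles: online LDA updated in mini-batches,
# then a user's recent text is mapped to a topic mixture. Data and sizes are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

batches = [
    ["car dealership test drive", "new car price review"],
    ["football match score", "car insurance quote", "league final tickets"],
]
vectorizer = CountVectorizer()
vectorizer.fit([doc for batch in batches for doc in batch])   # fixed vocabulary

lda = LatentDirichletAllocation(n_components=2, random_state=0)
for batch in batches:                                         # mini-batch updates
    lda.partial_fit(vectorizer.transform(batch))

profile = lda.transform(vectorizer.transform(["car price and insurance"]))
print(profile.round(2))                                       # user's topic mixture
```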
{ "cite_N": [ "@cite_0" ], "mid": [ "2142534468" ], "abstract": [ "Historical user activity is key for building user profiles to predict the user behavior and affinities in many web applications such as targeting of online advertising, content personalization and social recommendations. User profiles are temporal, and changes in a user's activity patterns are particularly useful for improved prediction and recommendation. For instance, an increased interest in car-related web pages may well suggest that the user might be shopping for a new vehicle.In this paper we present a comprehensive statistical framework for user profiling based on topic models which is able to capture such effects in a fully fashion. Our method models topical interests of a user dynamically where both the user association with the topics and the topics themselves are allowed to vary over time, thus ensuring that the profiles remain current. We describe a streaming, distributed inference algorithm which is able to handle tens of millions of users. Our results show that our model contributes towards improved behavioral targeting of display advertising relative to baseline models that do not incorporate topical and or temporal dependencies. As a side-effect our model yields human-understandable results which can be used in an intuitive fashion by advertisers." ] }
1907.01326
2954229381
We propose a general framework for the recommendation of possible customers (users) to advertisers (e.g., brands) based on the comparison between On-line Social Network profiles. In particular, we represent both user and brand profiles as trees where nodes correspond to categories and sub-categories in the associated On-line Social Network. When categories involve posts and comments, the comparison is based on word embedding, and this allows us to take into account the similarity between topics popular in the brand profile and user preferences. Results on real datasets show that our approach is successful in identifying the most suitable set of users to be used as the target for a given advertisement campaign.
In @cite_4 a computer user's behavior is represented as the sequence of commands she/he types during her/his work. This sequence is transformed into a distribution of relevant subsequences of commands in order to find a profile that defines the user's behavior. Also, because a user profile is not necessarily fixed but rather evolves/changes, the authors propose an evolving method to keep the created profiles up to date, using an Evolving Systems approach.
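A simplified version of the profile construction can be sketched as follows: a session is mapped to a distribution over short command subsequences and sessions are compared by cosine similarity. The trie-based subsequence extraction and the evolving classifier of the cited work are not reproduced; the commands and n-gram length are arbitrary choices.

```python
# Sketch: represent a user session as a distribution over command n-grams and
# compare sessions by cosine similarity. Commands and n-gram length are arbitrary.
from collections import Counter
from math import sqrt

def profile(commands, n=2):
    grams = Counter(tuple(commands[i:i + n]) for i in range(len(commands) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}   # relative frequencies

def cosine(p, q):
    num = sum(p[g] * q[g] for g in p.keys() & q.keys())
    return num / (sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values())))

session_a = ["ls", "cd", "ls", "vim", "ls", "cd"]
session_b = ["ls", "cd", "ls", "grep", "ls", "cd"]
session_c = ["make", "gcc", "make", "gdb"]

pa, pb, pc = profile(session_a), profile(session_b), profile(session_c)
print(round(cosine(pa, pb), 2), round(cosine(pa, pc), 2))   # similar vs. dissimilar user
```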
{ "cite_N": [ "@cite_4" ], "mid": [ "2046704354" ], "abstract": [ "Knowledge about computer users is very beneficial for assisting them, predicting their future actions or detecting masqueraders. In this paper, a new approach for creating and recognizing automatically the behavior profile of a computer user is presented. In this case, a computer user behavior is represented as the sequence of the commands she he types during her his work. This sequence is transformed into a distribution of relevant subsequences of commands in order to find out a profile that defines its behavior. Also, because a user profile is not necessarily fixed but rather it evolves changes, we propose an evolving method to keep up to date the created profiles using an Evolving Systems approach. In this paper, we combine the evolving classifier with a trie-based user profiling to obtain a powerful self-learning online scheme. We also develop further the recursive formula of the potential of a data point to become a cluster center using cosine distance, which is provided in the Appendix. The novel approach proposed in this paper can be applicable to any problem of dynamic evolving user behavior modeling where it can be represented as a sequence of actions or events. It has been evaluated on several real data streams." ] }
1907.01326
2954229381
We propose a general framework for the recommendation of possible customers (users) to advertisers (e.g., brands) based on the comparison between On-line Social Network profiles. In particular, we represent both user and brand profiles as trees where nodes correspond to categories and sub-categories in the associated On-line Social Network. When categories involve posts and comments, the comparison is based on word embedding, and this allows us to take into account the similarity between topics popular in the brand profile and user preferences. Results on real datasets show that our approach is successful in identifying the most suitable set of users to be used as the target for a given advertisement campaign.
The observation that the behavior of users is highly influenced by the behavior of their neighbors or community members is used in @cite_8 to enrich user profiles based on latent user communities in collaborative tagging.
{ "cite_N": [ "@cite_8" ], "mid": [ "2078425114" ], "abstract": [ "In the era of big data, collaborative tagging (a.k.a. folksonomy) systems have proliferated as a consequence of the growth of Web 2.0 communities. Constructing user profiles from folksonomy systems is useful for many applications such as personalized search and recommender systems. The identification of latent user communities is one way to better understand and meet user needs. The behavior of users is highly influenced by the behavior of their neighbors or community members, and this can be utilized in constructing user profiles. However, conventional user profiling techniques often encounter data sparsity problems as data from a single user is insufficient to build a powerful profile. Hence, in this paper we propose a method of enriching user profiles based on latent user communities in folksonomy data. Specifically, the proposed approach contains four sub-processes: (i) tag-based user profiles are extracted from a folksonomy tripartite graph; (ii) a multi-faceted folksonomy graph is constructed by integrating tag and image affinity subgraphs with the folksonomy tripartite graph; (iii) random walk distance is used to unify various relationships and measure user similarities; (iv) a novel prototype-based clustering method based on user similarities is used to identify user communities, which are further used to enrich the extracted user profiles. To evaluate the proposed method, we conducted experiments using a public dataset, the results of which show that our approach outperforms previous ones in user profile enrichment." ] }
1907.01260
2953468243
Controversial social and political issues of the day spur people to express their opinion on social networks, often sharing links to online media articles and reposting statements from prominent members of the platforms. Discovering the stances of people and entire media outlets on current, debatable topics is important for social statisticians and policy makers. Many supervised solutions exist for determining viewpoints, but manually annotating training data is costly. In this paper, we propose a method that uses unsupervised learning and is able to characterize both the general political leaning of online media and popular Twitter users, as well as their stances with respect to controversial topics, by leveraging the retweet behavior of users. We evaluate the model by comparing its bias predictions to gold labels from the Media Bias/Fact Check website, and we further perform manual analysis.
Multiple methods involving supervised learning have been employed for stance detection. Such methods require the availability of an initial set of labeled users, and they use some of the aforementioned features for classification @cite_17 @cite_21 @cite_10 . Such classification can label users with precision typically of 70% or higher. Other methods for user stance detection include collective classification @cite_0 , where users in a network are jointly labeled, and classification in a low-dimensional user space @cite_25 .
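The supervised setting can be illustrated with a bare-bones example in which each user is represented by the accounts they retweeted and a linear classifier is trained on a few labeled users. All handles, users and labels below are fabricated, and the cited methods use considerably richer feature sets.

```python
# Bare-bones supervised stance classification: users as bags of retweeted handles.
# Handles, users and labels are fabricated for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

retweeted = [                      # one "document" of retweeted handles per user
    "proA proA proB neutral1",
    "proA proB",
    "antiX antiY neutral1",
    "antiX antiX antiY",
]
labels = ["support", "support", "oppose", "oppose"]

vec = CountVectorizer()
X = vec.fit_transform(retweeted)
clf = LogisticRegression().fit(X, labels)

new_user = vec.transform(["proB neutral1 proA"])
print(clf.predict(new_user))       # predicted stance for an unseen user
```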
{ "cite_N": [ "@cite_21", "@cite_0", "@cite_10", "@cite_25", "@cite_17" ], "mid": [ "1942453818", "1998474965", "2013416264", "2768226620", "2766931170" ], "abstract": [ "Lately, the Islamic State of Iraq and Syria (ISIS) has managed to control large parts of Syria and Iraq. To better understand the roots of support for ISIS, we present a study using Twitter data. We collected a large number of Arabic tweets referring to ISIS and classified them as pro-ISIS or anti-ISIS. We then analyzed the historical timelines of both user groups and looked at their pre-ISIS period to gain insights into the antecedents of support. Also, we built a classifier to ‘predict’, in retrospect, who will support or oppose the group. We show that ISIS supporters largely differ from ISIS opposition in that the former referred a lot more to Arab Spring uprisings that failed than the latter.", "In this paper, we address the problem of classifying tweets into topical categories. Because of the short, noisy and ambiguous nature of tweets, we propose to collectively conduct the classification by exploiting the context information (i.e. related tweets) other than individually as in conventional text classification methods. In particular, we augment the content-based representation of text with tweets sharing same #hashtag or URL, which results in a tweet graph. We then formulate the tweet classification task under a graph optimization framework. We investigate three popular approaches, namely, Loopy Belief Propagation (LBP), Relaxation Labeling (RL), and Iterative Classification Algorithm (ICA). Extensive experiment results show that the graph-based tweet classification approach remarkably improves the performance, while the ICA model with relationship of sharing the same #hashtag gives the best result on separate tweet graph.", "More and more technologies are taking advantage of the explosion of social media (Web search, content recommendation services, marketing, ad targeting, etc.). This paper focuses on the problem of automatically constructing user profiles, which can significantly benefit such technologies. We describe a general and robust machine learning framework for large-scale classification of social media users according to dimensions of interest. We report encouraging experimental results on 3 tasks with different characteristics: political affiliation detection, ethnicity identification and detecting affinity for a particular business.", "Predicting the stance of social media users on a topic can be challenging, particularly for users who never express explicit stances. Earlier work has shown that using users' historical or non-relevant tweets can be used to predict stance. We build on prior work by making use of users' interaction elements, such as retweeted accounts and mentioned hashtags, to compute the similarities between users and to classify new users in a user similarity feature space. We show that this approach significantly improves stance prediction on two datasets that differ in terms of language, topic, and cultural background.", "The Paris terrorist attacks occurred on November 13, 2015, prompting a massive response on social media including Twitter, with millions of posted tweets in the first few hours after the attacks. Most of the tweets were condemning the attacks and showing support to Parisians. One of the trending debates related to the attacks concerned possible association between terrorism and Islam, and Muslims in general. 
This created a global discussion between those attacking and those defending Islam and Muslims. In this paper, we use this incident to examine the effect of online social network interactions prior to an event to predict what attitudes will be expressed in response to the event. Specifically, we focus on how a person's online content and network dynamics can be used to predict future attitudes and stance in the aftermath of a major event. In our study, we collected a set of 8.36 million tweets related to the Paris attacks within the 50 hours following the event, of which we identified over 900k tweets mentioning Islam and Muslims. We quantitatively analyzed users' network interactions and historical tweets to predict their attitudes towards Islam and Muslim. We provide a description of the quantitative results based on the content (hashtags) and network interactions (retweets, replies, and mentions). We analyze two types of data: (1) we use post-event tweets to learn users' stated stance towards Muslims based on sampling methods and crowd-sourced annotations; and (2) we employ pre-event interactions on Twitter to build a classifier to predict post-event stance. We found that pre-event network interactions can predict attitudes towards Muslims with 82 macro F-measure, even in the absence of prior mentions of Islam, Muslims, or related terms." ] }
1812.03379
2905311161
Live video-streaming platforms such as Twitch enable top content creators to reap significant profits and influence. To that effect, various behavioral norms are recommended to new entrants and those seeking to increase their popularity and success. Chief among them are to simply put in the effort and to promote oneself on social media outlets such as Twitter, Instagram, and the like. But does following these behaviors indeed have a relationship with eventual popularity? In this paper, we collect a corpus of Twitch streamer popularity measures --- spanning social and financial measures --- and their behavior data on Twitch and third-party platforms. We also compile a set of community-defined behavioral norms. We then perform temporal analysis to identify the increased predictive value that a streamer's future behavior contributes to predicting future popularity. At the population level, we find that behavioral information improves the prediction of relative growth that exceeds the median streamer. At the individual level, we find that although it is difficult to quickly become successful in absolute terms, streamers that put in considerable effort are more successful than the rest, and that creating social media accounts to promote oneself is effective irrespective of when the accounts are created. Ultimately, we find that studying the popularity and success of content creators in the long term is a promising and rich research area.
Much prior work has studied content features that lead to social network virality. For instance, models have been proposed to predict Facebook photo re-shares @cite_13 , Twitter re-tweets @cite_1 , Twitter hashtag usage @cite_25 , Digg story up-votes @cite_20 , and the hourly volume of news phrases @cite_7 . This line of work identifies both content-specific features (e.g., of a potential Tweet) and user-specific features (e.g., popularity, network characteristics) that are predictive of the content's eventual popularity (i.e., the number of shares). Although these studies may use user popularity as a predictive feature, it is unclear how the user became popular. In contrast, we specifically investigate which community-accepted behaviors are predictive of popularity growth over time.
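The early-observation prediction setup can be sketched as follows: a handful of temporal and content features computed from a cascade's first hours feed a binary classifier. The features, the tiny dataset and the "keeps growing" label are all invented for illustration and do not reproduce any cited model.

```python
# Sketch of early-observation popularity prediction: temporal/content features from
# the first hours of a cascade feed a binary classifier. All data is fabricated.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def early_features(timestamps_hours, n_followers, has_hashtag):
    ts = np.asarray(timestamps_hours)
    return [
        len(ts),                                      # shares observed so far
        ts.mean() if len(ts) else 0.0,                # mean share time
        np.diff(ts).mean() if len(ts) > 1 else 0.0,   # mean inter-arrival gap
        n_followers,                                  # author popularity
        int(has_hashtag),                             # a toy content feature
    ]

X = [
    early_features([0.1, 0.2, 0.3, 0.5], 5000, True),
    early_features([0.5, 1.5], 120, False),
    early_features([0.05, 0.1, 0.4, 0.6, 0.9], 800, True),
    early_features([1.0], 60, False),
]
y = [1, 0, 1, 0]                                      # 1 = cascade eventually doubled in size

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
print(clf.predict([early_features([0.2, 0.4, 0.7], 2000, True)]))
```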
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_13", "@cite_25", "@cite_20" ], "mid": [ "", "2127267264", "1996263819", "2171410332", "2070366435" ], "abstract": [ "", "Social network services have become a viable source of information for users. In Twitter, information deemed important by the community propagates through retweets. Studying the characteristics of such popular messages is important for a number of tasks, such as breaking news detection, personalized message recommendation, viral marketing and others. This paper investigates the problem of predicting the popularity of messages as measured by the number of future retweets and sheds some light on what kinds of factors influence information propagation in Twitter. We formulate the task into a classification problem and study two of its variants by investigating a wide spectrum of features based on the content of the messages, temporal information, metadata of messages and users, as well as structural properties of the users' social graph on a large scale dataset. We show that our method can successfully predict messages which will attract thousands of retweets with good performance.", "On many social networking web sites such as Facebook and Twitter, resharing or reposting functionality allows users to share others' content with their own friends or followers. As content is reshared from user to user, large cascades of reshares can form. While a growing body of research has focused on analyzing and characterizing such cascades, a recent, parallel line of work has argued that the future trajectory of a cascade may be inherently unpredictable. In this work, we develop a framework for addressing cascade prediction problems. On a large sample of photo reshare cascades on Facebook, we find strong performance in predicting whether a cascade will continue to grow in the future. We find that the relative growth of a cascade becomes more predictable as we observe more of its reshares, that temporal and structural features are key predictors of cascade size, and that initially, breadth, rather than depth in a cascade is a better indicator of larger cascades. This prediction performance is robust in the sense that multiple distinct classes of features all achieve similar performance. We also discover that temporal features are predictive of a cascade's eventual shape. Observing independent cascades of the same content, we find that while these cascades differ greatly in size, we are still able to predict which ends up the largest.", "Because of Twitter’s popularity and the viral nature of information dissemination on Twitter, predicting which Twitter topics will become popular in the near future becomes a task of considerable economic importance. Many Twitter topics are annotated by hashtags. In this article, we propose methods to predict the popularity of new hashtags on Twitter by formulating the problem as a classification task. We use five standard classification models (i.e., Naive bayes, k-nearest neighbors, decision trees, support vector machines, and logistic regression) for prediction. The main challenge is the identification of effective features for describing new hashtags. We extract 7 content features from a hashtag string and the collection of tweets containing the hashtag and 11 contextual features from the social graph formed by users who have adopted the hashtag. We conducted experiments on a Twitter data set consisting of 31 million tweets from 2 million Singapore-based users. 
The experimental results show that the standard classifiers using the extracted features significantly outperform the baseline methods that do not use these features. Among the five classifiers, the logistic regression model performs the best in terms of the Micro-F1 measure. We also observe that contextual features are more effective than content features.", "We present a method for accurately predicting the long time popularity of online content from early measurements of user's access. Using two content sharing portals, Youtube and Digg, we show that by modeling the accrual of views and votes on content offered by these services we can predict the long-term dynamics of individual submissions from initial data. In the case of Digg, measuring access to given stories during the first two hours allows us to forecast their popularity 30 days ahead with remarkable accuracy, while downloads of Youtube videos need to be followed for 10 days to attain the same performance. The differing time scales of the predictions are shown to be due to differences in how content is consumed on the two portals: Digg stories quickly become outdated, while Youtube videos are still found long after they are initially submitted to the portal. We show that predictions are more accurate for submissions for which attention decays quickly, whereas predictions for evergreen content will be prone to larger errors." ] }
1812.03379
2905311161
Live video-streaming platforms such as Twitch enable top content creators to reap significant profits and influence. To that effect, various behavioral norms are recommended to new entrants and those seeking to increase their popularity and success. Chief among them are to simply put in the effort and to promote oneself on social media outlets such as Twitter, Instagram, and the like. But does following these behaviors indeed have a relationship with eventual popularity? In this paper, we collect a corpus of Twitch streamer popularity measures --- spanning social and financial measures --- and their behavior data on Twitch and third-party platforms. We also compile a set of community-defined behavioral norms. We then perform temporal analysis to identify the increased predictive value that a streamer's future behavior contributes to predicting future popularity. At the population level, we find that behavioral information improves the prediction of relative growth that exceeds the median streamer. At the individual level, we find that although it is difficult to quickly become successful in absolute terms, streamers that put in considerable effort are more successful than the rest, and that creating social media accounts to promote oneself is effective irrespective of when the accounts are created. Ultimately, we find that studying the popularity and success of content creators in the long term is a promising and rich research area.
Unlike predicting the popularity of content, which focuses on popularity in the near future, our goal is to study the process of becoming popular on a social network by observing behavioral characteristics over long spans of time. To the best of our knowledge, there are many community-based anecdotes about effective behavior, and relatively few quantitative or longitudinal studies. Prior work suggests that behavior may be a factor in growing social network influence: one study finds that how Twitter users interact with their social network affects their follower count @cite_21 , and another finds that diverse content can help increase followings on Pinterest @cite_9 . We extend these ideas by examining a broad set of behaviors derived from the Twitch community and quantitatively studying their ability to predict future popularity growth for varying time ranges.
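The notion of "increased predictive value" can be illustrated by comparing a baseline that uses only past popularity with a model that adds behavior features when predicting whether growth exceeds the median. All quantities below are synthetic and the feature names are placeholders, not the paper's actual variables.

```python
# Toy illustration of added predictive value from behavior features when predicting
# whether a streamer's growth exceeds the median. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
past_followers = rng.lognormal(6, 1, n)
hours_streamed = rng.uniform(0, 60, n)              # "effort" behavior
has_twitter = rng.integers(0, 2, n)                 # "promotion" behavior
growth = 0.02 * hours_streamed + 0.5 * has_twitter + rng.normal(0, 0.5, n)
y = (growth > np.median(growth)).astype(int)        # exceeds-median label

X_base = np.column_stack([np.log(past_followers)])
X_full = np.column_stack([np.log(past_followers), hours_streamed, has_twitter])

for name, X in [("popularity only", X_base), ("plus behavior", X_full)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{name:15s} accuracy={acc:.2f}")
```

On this synthetic data the behavior-augmented model scores higher because the label is constructed from the behavior variables; with real data, that gap is precisely what the temporal analysis is meant to measure.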
{ "cite_N": [ "@cite_9", "@cite_21" ], "mid": [ "2129165073", "2132969050" ], "abstract": [ "Pinterest is a popular social curation site where people collect, organize, and share pictures of items. We studied a fundamental issue for such sites: what patterns of activity attract attention (audience and content reposting)-- We organized our studies around two key factors: the extent to which users specialize in particular topics, and homophily among users. We also considered the existence of differences between female and male users. We found: (a) women and men differed in the types of content they collected and the degree to which they specialized; male Pinterest users were not particularly interested in stereotypically male topics; (b) sharing diverse types of content increases your following, but only up to a certain point; (c) homophily drives repinning: people repin content from other users who share their interests; homophily also affects following, but to a lesser extent. Our findings suggest strategies both for users (e.g., strategies to attract an audience) and maintainers (e.g., content recommendation methods) of social curation sites.", "Follower count is important to Twitter users: it can indicate popularity and prestige. Yet, holistically, little is understood about what factors -- like social behavior, message content, and network structure - lead to more followers. Such information could help technologists design and build tools that help users grow their audiences. In this paper, we study 507 Twitter users and a half-million of their tweets over 15 months. Marrying a longitudinal approach with a negative binomial auto-regression model, we find that variables for message content, social behavior, and network structure should be given equal consideration when predicting link formations on Twitter. To our knowledge, this is the first longitudinal study of follow predictors, and the first to show that the relative contributions of social behavior and mes-sage content are just as impactful as factors related to social network structure for predicting growth of online social networks. We conclude with practical and theoretical implications for designing social media technologies." ] }