aid | mid | abstract | related_work | ref_abstract |
---|---|---|---|---|
1907.00549 | 2800365265 | Commodity RGB-D sensors capture color images along with dense pixel-wise depth information in real time. Typical RGB-D sensors are provided with a factory calibration and exhibit erratic depth readings due to coarse calibration values, ageing and thermal influence effects. This limits their applicability in computer vision and robotics. We propose a novel method to accurately calibrate depth considering spatial and thermal influences jointly. Our work is based on Gaussian Process Regression in a four-dimensional Cartesian and thermal domain. We propose to leverage modern GPUs for dense depth map correction in real time. For reproducibility we make our dataset and source code publicly available. | Several research groups documented a strong thermal influence on the depth generation process of RGB-D sensors. @cite_3 note a severe depth shift due to internal or external thermal changes. @cite_9 document nonlinear distortion effects during thermal changes. They also propose practical rules of thumb for reducing accuracy errors caused by thermal conditions. (A code sketch of this calibration idea follows this row.) | {
"cite_N": [
"@cite_9",
"@cite_3"
],
"mid": [
"150545858",
"1669953666"
],
"abstract": [
"Several approaches to calibration of the Kinect as a range sensor have been presented in the past. Those approaches do not take into account a possible influence of thermal and environmental conditions. This paper shows that variations of the temperature and air draft have a notable influence on Kinect's images and range measurements. Based on these findings, practical rules are stated to reduce calibration and measurement errors caused by thermal conditions.",
"We present a novel application of the Kinect™, an input device designed for the Microsoft® Xbox 360® video game system. The device can be used by Earth scientists as a low-cost, high-resolution, short-range 3D 4D camera imaging system producing data similar to a terrestrial light detection and ranging (LiDAR) sensor. The Kinect contains a structured light emitter, an infrared camera (the combination of these two produce a distance image), a visual wavelength camera, a three-axis accelerometer, and four microphones. The cost is US $100, frame rate is 30 Hz, spatial and depth resolutions are mm to cm depending on range, and the optimal operating range is 0.5 to 5 m. The resolution of the distance measurements decreases with distance and is ≤1 mm at 0.5 m and 75 mm at 5 m. We illustrate data collection and basic data analysis routines in three experiments designed to demonstrate the breadth and utility of this new sensor in domains of glaciology, stream bathymetry, and geomorphology, although the device is applicable to a number of other Earth science fields. Copyright © 2012 John Wiley & Sons, Ltd."
]
} |
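The row above describes depth calibration via Gaussian Process Regression over a four-dimensional Cartesian-and-thermal domain. Below is a minimal scikit-learn sketch of that idea; the kernel choice, the error-field training target, and all data here are illustrative assumptions, not the paper's exact (GPU-based, real-time) design.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Calibration samples: sensor-frame position (x, y, z) plus device temperature t,
# paired with the observed depth error (measured minus ground-truth depth).
X_train = np.random.rand(500, 4)                 # stand-in for real calibration data
y_train = 0.01 * np.sin(X_train @ np.ones(4))    # stand-in depth errors [m]

# One joint (anisotropic) GP over the 4D Cartesian-and-thermal domain.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0] * 4) + WhiteKernel(),
                              normalize_y=True)
gp.fit(X_train, y_train)

def correct_depth(points_xyz, temperature):
    """Subtract the predicted spatio-thermal depth error from raw readings."""
    query = np.hstack([points_xyz, np.full((len(points_xyz), 1), temperature)])
    return points_xyz[:, 2] - gp.predict(query)
```

The paper evaluates the GP densely on a GPU per frame; the sketch above only illustrates the four-dimensional regression itself.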
1907.00422 | 2955758416 | Dedicated lanes for connected and automated vehicles (CAVs) can not only provide the technological accommodation, but also the desired market incentive for road users to adopt CAVs. Thus far, the majority of impact assessments of CAVs have focused on network-wide benefits. In this paper, we investigate the change of traffic flow characteristics with two configurations of dedicated CAV lanes across levels of market penetration. The traffic flow characteristics are quantified from the perspectives of headway distribution, communication density, and the speed-flow diagram. The results highlight the contributions of the CAV lane. First, CAV lanes significantly improve the speed-flow characteristics by extending the stable region of the speed-flow curve and yielding a greater optimum flow. The highest value of optimum flow is 3400 vehicles per lane per hour at 90% MPR with one CAV lane. Furthermore, the concentration of CAVs in one lane results in a narrower headway distribution (with smaller standard deviation), even with partial market penetration. Moreover, the CAV lane creates a more consistent CAV density which maintains the communication density at a predictable level, hence decreasing the probability of packet drop. | A multi-resolution modeling approach was proposed in @cite_12 to study the mobility impact of CAV lanes. A traffic-flow-based static traffic assignment and a mesoscopic simulation-based dynamic traffic assignment were adapted in the bi-level framework. The former yielded the MPR-based trends, whereas the latter refined the trends based on traffic congestion. The results indicated that it was not beneficial to provide a toll incentive for CAVs at lower MPRs due to the marginal increase in highway capacity. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2302987950"
],
"abstract": [
"This research investigates how different levels of modeling can be applied to assess the mobility impacts of Cooperative Adaptive Cruise Control (CACC), as an example of connected vehicle technologies on Managed Lanes (ML) with various incentives (preferential treatments), pricing strategies, and access restrictions. The transportation modeling in this study involves the use of the Static Traffic Assignment (STA) of a demand forecasting model based on a macroscopic traffic model, mesoscopic simulation-based Dynamic Traffic Assignment (DTA), and results from microscopic simulation modeling. The results of this study demonstrate the benefit of using results from tools with different resolution of modeling to support each other’s analyses. In general, the trends obtained based on results from the STA modeling of advanced vehicle technologies in terms of the market share of traffic in ML and the reduction in congestion on General Purpose Lanes (GPL) are consistent with those obtained from DTA. However, DTA results show more significant shifts due to its better modeling of traffic congestion. The results also show that providing toll incentives for CACC-equipped vehicles to use express lanes is not beneficial at lower market penetration due to the small increase in capacity with these market penetrations. Such incentives are beneficial at higher market penetrations, particularly with higher demand levels."
]
} |
1907.00422 | 2955758416 | Dedicated lanes for connected and automated vehicles (CAVs) can not only provide the technological accommodation, but also the desired market incentive for road users to adopt CAVs. Thus far, the majority of impact assessments of CAVs have focused on network-wide benefits. In this paper, we investigate the change of traffic flow characteristics with two configurations of dedicated CAV lanes across levels of market penetration. The traffic flow characteristics are quantified from the perspectives of headway distribution, communication density, and the speed-flow diagram. The results highlight the contributions of the CAV lane. First, CAV lanes significantly improve the speed-flow characteristics by extending the stable region of the speed-flow curve and yielding a greater optimum flow. The highest value of optimum flow is 3400 vehicles per lane per hour at 90% MPR with one CAV lane. Furthermore, the concentration of CAVs in one lane results in a narrower headway distribution (with smaller standard deviation), even with partial market penetration. Moreover, the CAV lane creates a more consistent CAV density which maintains the communication density at a predictable level, hence decreasing the probability of packet drop. | A time-dependent deployment framework was proposed in @cite_7, formulated with a network equilibrium model and a diffusion model (for modeling the adoption of a new product in a population; a minimal sketch follows this row). Under the constraint of a given set of candidate lanes corresponding to field conditions, the social cost was minimized while taking the level of MPR into consideration. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2524294774"
],
"abstract": [
"Abstract This paper develops a mathematical approach to optimize a time-dependent deployment plan of autonomous vehicle (AV) lanes on a transportation network with heterogeneous traffic stream consisting of both conventional vehicles (CVs) and AVs, so as to minimize the social cost and promote the adoption of AVs. Specifically, AV lanes are exclusive lanes that can only be utilized by AVs, and the deployment plan specifies when, where, and how many AV lanes to be deployed. We first present a multi-class network equilibrium model to describe the flow distributions of both CVs and AVs, given the presence of AV lanes in the network. Considering that the net benefit (e.g., reduced travel cost) derived from the deployment of AV lanes will further promote the AV adoption, we proceed to apply a diffusion model to forecast the evolution of AV market penetration. With the equilibrium model and diffusion model, a time-dependent deployment model is then formulated, which can be solved by an efficient solution algorithm. Lastly, numerical examples based on the south Florida network are presented to demonstrate the proposed models."
]
} |
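The "diffusion model (for modeling the adoption of a new product in a population)" mentioned in the row above is typically a Bass-type model. A minimal sketch of how such a model forecasts the evolution of AV market penetration; the coefficients below are illustrative, not taken from the cited paper:

```python
def bass_adoption(p, q, horizon, dt=1.0):
    """Bass diffusion: dF/dt = (p + q*F) * (1 - F), F = adopter fraction (MPR)."""
    F, path = 0.0, []
    for _ in range(int(horizon / dt)):
        F += (p + q * F) * (1.0 - F) * dt   # forward-Euler step
        path.append(F)
    return path

# Illustrative coefficients: p = innovation (external) effect, q = imitation effect.
mpr_by_year = bass_adoption(p=0.03, q=0.38, horizon=20)
```

In the cited framework, the forecast MPR at each period feeds back into the equilibrium model that decides when and where AV lanes are deployed.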
1907.00422 | 2955758416 | Dedicated lanes for connected and automated vehicles (CAVs) can not only provide the technological accommodation, but also the desired market incentive for road users to adopt CAVs. Thus far, the majority of impact assessments of CAVs have focused on network-wide benefits. In this paper, we investigate the change of traffic flow characteristics with two configurations of dedicated CAV lanes across levels of market penetration. The traffic flow characteristics are quantified from the perspectives of headway distribution, communication density, and the speed-flow diagram. The results highlight the contributions of the CAV lane. First, CAV lanes significantly improve the speed-flow characteristics by extending the stable region of the speed-flow curve and yielding a greater optimum flow. The highest value of optimum flow is 3400 vehicles per lane per hour at 90% MPR with one CAV lane. Furthermore, the concentration of CAVs in one lane results in a narrower headway distribution (with smaller standard deviation), even with partial market penetration. Moreover, the CAV lane creates a more consistent CAV density which maintains the communication density at a predictable level, hence decreasing the probability of packet drop. | An analytical modeling framework for assessing the benefits of CAV operations was proposed in @cite_10. The overall results indicated that CAVs improved network mobility performance when the MPR was low, even in the absence of ML policies; throughput without a managed lane increased by 4%. Nearly all of the previous studies evaluated the benefits of CAVs at an aggregated level with an emphasis on the overall improvement. In this paper, we extend the impact analysis to the traffic flow level, aiming to investigate the traffic flow characteristics in the presence of CAV lanes. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2166924734"
],
"abstract": [
"Automated vehicles have the potential to bring about transformative safety, mobility, energy, and environmental benefits to the surface transportation system. They are also being introduced into a complex transportation system, where second-order impacts, such as the possibility of increased vehicle-miles traveled, are of significant concern. Given the complexity of the impacts, a modeling framework is needed to ensure that they are adequately captured. This report presents a framework for estimating the potential benefits and dis-benefits of technologies contributing to the automation of the Nation’s surface transportation system. Components of the framework include (1) Safety: exposure to near-crash situations, crash prevention, and crash severity reduction; (2) Vehicle mobility: vehicle throughput, both in car following situations and at intersections; (3) Energy environment: fuel consumption and tailpipe emissions; (4) Accessibility: personal mobility, for motorists and nonmotorists; (5) Transportation system usage: response of travelers to changes in mobility and accessibility, as well as potential new modes of transportation such as increased car sharing; (6) Land use: effects of automation on land use, and (7) Economic analysis: the macro-economic impacts of all of the above changes."
]
} |
1907.00348 | 2954057520 | Training convolutional neural networks for image classification tasks usually causes information loss. Although most of the time the information lost is redundant with respect to the target task, there are still cases where discriminative information is also discarded. For example, if the samples that belong to the same category have multiple correlated features, the model may only learn a subset of the features and ignore the rest. This may not be a problem unless the classification in the test set highly depends on the ignored features. We argue that the discarding of correlated discriminative information is partially caused by the fact that minimizing the classification loss doesn't ensure learning the overall discriminative information, but only the most discriminative information. To address this problem, we propose an information flow maximization (IFM) loss as a regularization term to find the discriminative correlated features. With less information loss the classifier can make predictions based on more informative features. We validate our method on the shiftedMNIST dataset and show the effectiveness of IFM loss in learning representative and discriminative features. | There is a large body of work concentrating on information maximization for deep networks. In @cite_13, Chen et al. introduce InfoGAN, a generative adversarial network that maximizes the mutual information between a small subset of the latent variables and the observation. In @cite_7, Belghazi et al. present a Mutual Information Neural Estimator (MINE) that estimates mutual information between high-dimensional continuous random variables by gradient descent over neural networks (a sketch of the MINE objective follows this row). In @cite_5, Hjelm et al. introduce Deep InfoMax (DIM) to maximize mutual information between a representation and the output of a deep neural network encoder to improve the representation's suitability for downstream tasks. In @cite_8, Jacobsen et al. propose an invertible network architecture and an alternative objective that extracts overall discriminative knowledge in the prediction model. | {
"cite_N": [
"@cite_5",
"@cite_13",
"@cite_7",
"@cite_8"
],
"mid": [
"2887997457",
"2963226019",
"2783047733",
"2898726413"
],
"abstract": [
"",
"This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods. For an up-to-date version of this paper, please see https: arxiv.org abs 1606.03657.",
"This paper presents a Mutual Information Neural Estimator (MINE) that is linearly scalable in dimensionality as well as in sample size. MINE is back-propable and we prove that it is strongly consistent. We illustrate a handful of applications in which MINE is succesfully applied to enhance the property of generative models in both unsupervised and supervised settings. We apply our framework to estimate the information bottleneck, and apply it in tasks related to supervised classification problems. Our results demonstrate substantial added flexibility and improvement in these settings.",
"Despite their impressive performance, deep neural networks exhibit striking failures on out-of-distribution inputs. One core idea of adversarial example research is to reveal neural network errors under such distribution shifts. We decompose these errors into two complementary sources: sensitivity and invariance. We show deep networks are not only too sensitive to task-irrelevant changes of their input, as is well-known from epsilon-adversarial examples, but are also too invariant to a wide range of task-relevant changes, thus making vast regions in input space vulnerable to adversarial attacks. We show such excessive invariance occurs across various tasks and architecture types. On MNIST and ImageNet one can manipulate the class-specific content of almost any image without changing the hidden activations. We identify an insufficiency of the standard cross-entropy loss as a reason for these failures. Further, we extend this objective based on an information-theoretic analysis so it encourages the model to consider all task-dependent features in its decision. This provides the first approach tailored explicitly to overcome excessive invariance and resulting vulnerabilities."
]
} |
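To make the MINE approach of @cite_7 concrete, here is a minimal PyTorch sketch of the Donsker-Varadhan lower bound it optimizes, I(X;Z) >= E_{p(x,z)}[T(x,z)] - log E_{p(x)p(z)}[e^{T(x,z)}]; the network architecture, sizes, and toy data are placeholder assumptions:

```python
import math
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    """The critic T(x, z) whose parameters are trained to tighten the bound."""
    def __init__(self, dim_x, dim_z):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_x + dim_z, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1))

def mine_lower_bound(T, x, z):
    joint = T(x, z).mean()                    # expectation over samples of p(x, z)
    z_perm = z[torch.randperm(z.size(0))]     # shuffling breaks the pairing -> p(x)p(z)
    t_marg = T(x, z_perm).squeeze(1)
    marginal = torch.logsumexp(t_marg, dim=0) - math.log(z.size(0))
    return joint - marginal                   # maximize by gradient ascent on T

x = torch.randn(256, 4)
z = x + 0.1 * torch.randn(256, 4)             # correlated toy data, so MI is large
print(mine_lower_bound(StatisticsNetwork(4, 4), x, z))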
1907.00553 | 2954580547 | This paper tackles a friction compensation problem without using a friction model. The unique feature of the proposed friction observer is that the nominal motor-side signal is fed back into the controller instead of the measured signal. By doing so, asymptotic stability and passivity of the controller are maintained. Another advantage of the proposed observer is that it provides a clear understanding of the stiction compensation, which is hard to capture in model-free approaches. This allows the design of observers that do not overcompensate for the stiction. The proposed scheme is validated through simulations and experiments. | The idea of applying friction compensation on the motor-side was realized in @cite_12 @cite_11 @cite_7 using the disturbance observer (DOB) technique. In a very early study @cite_12, however, the joint torque information was not taken into account, meaning that the interaction on the link-side was treated as a disturbance. Yet in robotics applications, it can be beneficial to close the loop around the motor-side dynamics using @math because then the link-side dynamics can interact with the environment through @math. @cite_13 considered the joint torque information in the observer design, but the analysis was limited to a single-link robot. @cite_7 proposed an observer for multi-link robots, but the friction model was assumed to be linear and known. | {
"cite_N": [
"@cite_13",
"@cite_7",
"@cite_12",
"@cite_11"
],
"mid": [
"1881298966",
"2102400507",
"2109777206",
""
],
"abstract": [
"This paper is basic study on vibration control, disturbance rejection and friction compensation in robots with flexible driving systems. Firstly, the system gain characteristic at the antiresonance frequency is introduced to evaluate the vibratory behavior of the control system. Secondly, the joint torque negative feedback which has a good effect on vibration suppression is discussed, while it is shown that the property of disturbance rejection is easily deteriorated when using high-gain joint torque feedback. Based on the assignment of pole-distribution, the relations between the vibration suppression and the disturbance rejection are analyzed. Thirdly, a feedforward compensation control based on a friction observer is proposed, it is shown that the tracking error is effectively decreased by this friction compensation. Lastly, a position control system considering the trade-off between vibration suppression and disturbance rejection is presented, and experimental results are also given.",
"In this paper, a disturbance observer based control algorithm is proposed for industrial robots having flexible joints. The joint flexibility of the robot is modeled as a two mass system. We study on the practical issues for implementing disturbance observer based control scheme in flexible joint robots. For industrial robots, generally the sensors are located on the motor side. If we construct disturbance observer using motor side dynamics, due to the zero dynamics, disturbance observer cannot directly reject the disturbance at the link side. To solve this problem, we propose a dual observer that estimates disturbance and states simultaneously. Using the proposed dual observer, we construct full state feedback controller. The effectiveness of the proposed control scheme for disturbance rejection and robustness is demonstrated by numerical simulation and experiment using HILS (Hardware In the Loop Simulation) system.",
"The authors propose a novel control method based on two observers to suppress the vibration of a flexible joint, even under fast motion. One observer is a disturbance observer and is used for the realization of an acceleration controller of the motor axis. The other observer estimates the link velocity and is effective for the suppression of torsional vibration. The disturbance observer estimates the total sum of the disturbance torques which are imposed on the motor axis. Feedback compensation by this estimated disturbance torque increases the servo stiffness and the robustness of the motor portion. The feedback of the estimated link velocity totally suppresses the torsional vibration of the flexible joint even under fast motion. The proposed control was implemented in a 16-b microprocessor with a digital signal processor for a laboratory test. >",
""
]
} |
1907.00553 | 2954580547 | This paper tackles a friction compensation problem without using a friction model. The unique feature of the proposed friction observer is that the nominal motor-side signal is fed back into the controller instead of the measured signal. By doing so, asymptotic stability and passivity of the controller are maintained. Another advantage of the proposed observer is that it provides a clear understanding of the stiction compensation, which is hard to capture in model-free approaches. This allows the design of observers that do not overcompensate for the stiction. The proposed scheme is validated through simulations and experiments. | To the best of the authors' knowledge, the approach proposed in @cite_18 was the first model-free friction observer for multi-link robotic systems (see Fig. b). In this approach, the resulting observed value corresponds to the real friction smoothed by a 1st-order low-pass filter (LPF). Despite successful experimental validation, however, the theoretical analysis was not complete. The main challenge is the fact that the observer dynamics may break the stability/passivity of the controller. Later, @cite_2 and @cite_1 proposed Fig. c to establish a theoretically sound friction observer that guarantees stability of the whole system consisting of the friction observer dynamics and the FJR dynamics. To be precise, the scope of @cite_2 @cite_1 was about DOB-based control structures. The DOB becomes the friction observer when the motor inertia is known. (A minimal sketch of such an observer follows this row.) | {
"cite_N": [
"@cite_18",
"@cite_1",
"@cite_2"
],
"mid": [
"2137165191",
"2188422583",
"2013763694"
],
"abstract": [
"In this paper we introduce a friction observer for robots with joint torque sensing (in particular for the DLR medical robot) in order to increase the positioning accuracy and the performance of torque control. The observer output corresponds to the low-pass filtered friction torque. It is used for friction compensation in conjunction with a MIMO controller designed for flexible joint arms. A passivity analysis is done for this friction compensation, allowing a Lyapunov based convergence analysis in the context of the nonlinear robot dynamics. For the complete controlled system, global asymptotic stability can be shown. Experimental results validate the practical efficiency of the approach.",
"This paper proposes a robust PD control scheme for flexible-joint robots based on a disturbance observer (DOB). In this paper, the DOB is applied only to the motor-side dynamics of the robot, and the uncertainties on the motor-side are successfully eliminated. It is shown that the proposed DOB-based approach guarantees global asymptotic stability. To this end, two special treatments are required. First, unlike the typical configuration of the DOB, nominal states of the motor-side are fed back to the PD controller. Second, a control input that makes the nominal states stable is additionally introduced. The proposed approach was verified using multi-degree-of-freedom experiments.",
""
]
} |
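A minimal discrete-time sketch of the kind of motor-side observer discussed in the two rows above: a momentum-residual observer whose output, as the text notes for @cite_18, equals the real friction passed through a 1st-order low-pass filter (cutoff K). This illustrates the general mechanism under the assumption of a known motor inertia B; it is not the exact observer of the cited papers.

```python
class FrictionObserver:
    """Motor-side dynamics: B * ddq = tau_m - tau_j - tau_f (friction as disturbance)."""

    def __init__(self, B, K):
        self.B, self.K = B, K      # motor inertia, observer gain (= LPF cutoff)
        self.p_hat = 0.0           # estimated motor momentum
        self.r = 0.0               # residual

    def update(self, dq, tau_m, tau_j, dt):
        # Integrate the nominal momentum dynamics driven by the residual.
        self.p_hat += (tau_m - tau_j + self.r) * dt
        # Residual obeys dr/dt = -K*(r + tau_f), so r -> -tau_f through a 1st-order LPF.
        self.r = self.K * (self.B * dq - self.p_hat)
        return -self.r             # friction estimate
```

A compensating controller would then command tau_m = tau_ctrl + tau_f_hat, which is precisely where the overcompensation concern for stiction arises.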
1907.00605 | 2955527767 | We study online multidimensional variants of the generalized assignment problem, which are used to model prominent real-world applications, such as the assignment of virtual machines with multiple resource requirements to physical infrastructure in cloud computing. These problems can be seen as an extension of the well-known secretary problem, and thus the standard online worst-case model cannot provide any performance guarantee. The prevailing model in this case is the random-order model, which provides a useful, realistic and robust alternative. Using this model, we study the @math -dimensional generalized assignment problem, where we introduce a novel technique that achieves an @math -competitive algorithm and prove a matching lower bound of @math. Furthermore, our algorithm improves upon the best-known competitive ratio for the online (one-dimensional) generalized assignment problem and the online knapsack problem. | Online packing problems in the random-order model have been studied extensively in recent years; most of them are generalizations of the secretary problem, which has an optimal @math -competitive algorithm @cite_21 @cite_17 (a simulation sketch of this rule follows this row). An immediate generalization is the multiple-choice secretary problem, in which one is allowed to pick up to @math secretaries. It was studied by Kleinberg @cite_3, who presented an asymptotically optimal @math -competitive algorithm. Another related problem is the weighted-matching problem, which has an optimal @math -competitive algorithm by @cite_18. | {
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_3",
"@cite_17"
],
"mid": [
"122351777",
"73458621",
"2061418963",
""
],
"abstract": [
"We study online variants of weighted bipartite matching on graphs and hypergraphs. In our model for online matching, the vertices on the right-hand side of a bipartite graph are given in advance and the vertices on the left-hand side arrive online in random order. Whenever a vertex arrives, its adjacent edges with the corresponding weights are revealed and the online algorithm has to decide which of these edges should be included in the matching. The studied matching problems have applications, e.g., in online ad auctions and combinatorial auctions where the right-hand side vertices correspond to items and the left-hand side vertices to bidders.",
"Improved thermosetting sealing materials, for example polyurethane or polysulphide compositions which can be cured in a high frequency alternating electric field, are described which contain electrically non-conducting pigments and or filler whose dielectric constant exceeds 200.",
"In the classical secretary problem, a set S of numbers is presented to an online algorithm in random order. At any time the algorithm may stop and choose the current element, and the goal is to maximize the probability of choosing the largest element in the set. We study a variation in which the algorithm is allowed to choose k elements, and the goal is to maximize their sum. We present an algorithm whose competitive ratio is 1-O(√1 k). To our knowledge, this is the first algorithm whose competitive ratio approaches 1 as k ← ∞. As an application we solve an open problem in the theory of online auction mechanisms.",
""
]
} |
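For reference, the optimal rule for the classical secretary problem cited above (@cite_21 @cite_17) has a two-line description: observe the first n/e candidates without committing, then accept the first one that beats all of them. A small simulation sketch:

```python
import math, random

def secretary(values):
    """Accept the first candidate after the observation phase that beats it."""
    n = len(values)
    cutoff = int(n / math.e)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]          # forced to take the last candidate

vals = list(range(1000))
random.shuffle(vals)           # random-order arrival
picked_best = secretary(vals) == max(vals)   # true with probability ~1/e
```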
1907.00605 | 2955527767 | We study online multidimensional variants of the generalized assignment problem, which are used to model prominent real-world applications, such as the assignment of virtual machines with multiple resource requirements to physical infrastructure in cloud computing. These problems can be seen as an extension of the well-known secretary problem, and thus the standard online worst-case model cannot provide any performance guarantee. The prevailing model in this case is the random-order model, which provides a useful, realistic and robust alternative. Using this model, we study the @math -dimensional generalized assignment problem, where we introduce a novel technique that achieves an @math -competitive algorithm and prove a matching lower bound of @math. Furthermore, our algorithm improves upon the best-known competitive ratio for the online (one-dimensional) generalized assignment problem and the online knapsack problem. | The online knapsack problem, which generalizes the multiple-secretary problem, was studied by @cite_2, who presented an @math -competitive algorithm. It was later improved by the work of @cite_11 on online GAP, which generalizes all of the above problems. They presented an @math -competitive algorithm, which is the best-known competitive ratio for online GAP and the online knapsack problem. Our result for VGAP improves on that. (A sampling-threshold sketch of the random-order knapsack idea follows this row.) | {
"cite_N": [
"@cite_11",
"@cite_2"
],
"mid": [
"2899383020",
"2164792208"
],
"abstract": [
"We study packing linear programs (LPs) in an online model where the columns are presented to the algorithm in random order. This natural problem was investigated in various recent studies motivated, e.g., by online ad allocations and yield management, where rows correspond to resources and columns to requests specifying demands for resources. Our main contribution is a @math -competitive online algorithm. Here @math denotes the column sparsity, i.e., the maximum number of resources that occur in a single column, and @math denotes the capacity ratio @math , i.e., the ratio between the capacity of a resource and the maximum demand for this resource. In other words, we achieve a @math -approximation if the capacity ratio satisfies @math , which is known to be the best possible for any (randomized) online algorithms. Our result improves exponentially on previous work with respect to the capacity ratio. In contrast to existing results on packing LP ...",
"We consider situations in which a decision-maker with a fixed budget faces a sequence of options, each with a cost and a value, and must select a subset of them online so as to maximize the total value. Such situations arise in many contexts, e.g., hiring workers, scheduling jobs, and bidding in sponsored search auctions. This problem, often called the online knapsack problem, is known to be inapproximable. Therefore, we make the enabling assumption that elements arrive in a randomorder. Hence our problem can be thought of as a weighted version of the classical secretary problem, which we call the knapsack secretary problem. Using the random-order assumption, we design a constant-competitive algorithm for arbitrary weights and values, as well as a e-competitive algorithm for the special case when all weights are equal (i.e., the multiple-choice secretary problem). In contrast to previous work on online knapsack problems, we do not assume any knowledge regarding the distribution of weights and values beyond the fact that the order is random."
]
} |
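A rough sketch of the sampling idea behind such random-order knapsack algorithms: observe a constant fraction of the items to estimate a profit-density threshold, then greedily pack later items above it. The sampling fraction and the threshold rule here are simplifying assumptions for illustration, not the actual algorithms of @cite_2 or @cite_11.

```python
def online_knapsack(items, capacity, sample_frac=0.5):
    """items: (profit, size) pairs arriving in uniformly random order."""
    cutoff = int(sample_frac * len(items))
    sample = sorted(items[:cutoff], key=lambda it: it[0] / it[1], reverse=True)

    # Threshold: the density at which the best sample items would just fill the knapsack.
    used, threshold = 0.0, 0.0
    for p, s in sample:
        if used + s > capacity:
            break
        used += s
        threshold = p / s

    packed, load = [], 0.0
    for p, s in items[cutoff:]:                # the observation phase packs nothing
        if threshold > 0 and p / s >= threshold and load + s <= capacity:
            packed.append((p, s))
            load += s
    return packed
```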
1907.00605 | 2955527767 | We study online multidimensional variants of the generalized assignment problem, which are used to model prominent real-world applications, such as the assignment of virtual machines with multiple resource requirements to physical infrastructure in cloud computing. These problems can be seen as an extension of the well-known secretary problem, and thus the standard online worst-case model cannot provide any performance guarantee. The prevailing model in this case is the random-order model, which provides a useful, realistic and robust alternative. Using this model, we study the @math -dimensional generalized assignment problem, where we introduce a novel technique that achieves an @math -competitive algorithm and prove a matching lower bound of @math. Furthermore, our algorithm improves upon the best-known competitive ratio for the online (one-dimensional) generalized assignment problem and the online knapsack problem. | In their work, the authors also studied the online packing LPs problem with column sparsity @math. The general online packing LPs problem was studied before by @cite_13 @cite_5 @cite_14. In this problem, there is a set of resources and a set of requests. Each request has several options to be served, and each option is associated with a profit and a certain demand from each resource. For column sparsity @math, each request may have a demand from at most @math of the resources. This problem generalizes the VGAP studied in this paper; however, to the best of our knowledge, the only known competitive online algorithms for this problem are for the special case of @math, where @math is the capacity ratio, i.e., the minimal ratio between the capacity of a resource and the maximum demand for this resource. For this case they presented an @math -competitive algorithm, which in the case @math is @math -competitive. (A small illustration of these two parameters follows this row.) | {
"cite_N": [
"@cite_5",
"@cite_14",
"@cite_13"
],
"mid": [
"1659045240",
"2112788936",
"2148352389"
],
"abstract": [
"Inspired by online ad allocation, we study online stochastic packing integer programs from theoretical and practical standpoints. We first present a near-optimal online algorithm for a general class of packing integer programs which model various online resource allocation problems including online variants of routing, ad allocations, generalized assignment, and combinatorial auctions. As our main theoretical result, we prove that a simple dual training-based algorithm achieves a (1-o(1))- approximation guarantee in the random order stochastic model. This is a significant improvement over logarithmic or constant-factor approximations for the adversarial variants of the same problems (e.g. factor 1 - 1 e for online ad allocation, and log(m) for online routing). We then focus on the online display ad allocation problem and study the efficiency and fairness of various training-based and online allocation algorithms on data sets collected from real-life display ad allocation system. Our experimental evaluation confirms the effectiveness of training-based algorithms on real data sets, and also indicates an intrinsic trade-off between fairness and efficiency.",
"We consider packing LP's with m rows where all constraint coefficients are normalized to be in the unit interval. The n columns arrive in random order and the goal is to set the corresponding decision variables irrevocably when they arrive to obtain a feasible solution maximizing the expected reward. Previous (1−e)-competitive algorithms require the right-hand side of the LP to be @math , a bound that worsens with the number of columns and rows. However, the dependence on the number of columns is not required in the single-row case and known lower bounds for the general case are also independent of n. Our goal is to understand whether the dependence on n is required in the multi-row case, making it fundamentally harder than the single-row version. We refute this by exhibiting an algorithm which is (1−e)-competitive as long as the right-hand sides are @math . Our techniques refine previous PAC-learning based approaches which interpret the online decisions as linear classifications of the columns based on sampled dual prices. The key ingredient of our improvement comes from a non-standard covering argument together with the realization that only when the columns of the LP belong to few 1-d subspaces we can obtain small such covers; bounding the size of the cover constructed also relies on the geometry of linear classifiers. General packing LP's are handled by perturbing the input columns, which can be seen as making the learning problem more robust.",
"A natural optimization model that formulates many online resource allocation problems is the online linear programming LP problem in which the constraint matrix is revealed column by column along with the corresponding objective coefficient. In such a model, a decision variable has to be set each time a column is revealed without observing the future inputs, and the goal is to maximize the overall objective function. In this paper, we propose a near-optimal algorithm for this general class of online problems under the assumptions of random order of arrival and some mild conditions on the size of the LP right-hand-side input. Specifically, our learning-based algorithm works by dynamically updating a threshold price vector at geometric time intervals, where the dual prices learned from the revealed columns in the previous period are used to determine the sequential decisions in the current period. Through dynamic learning, the competitiveness of our algorithm improves over the past study of the same problem. We also present a worst case example showing that the performance of our algorithm is near optimal."
]
} |
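The two parameters driving these bounds can be read off an instance directly. A small numpy illustration of the definitions used above, column sparsity (resources demanded per request) and capacity ratio B (minimum over resources of capacity divided by the largest single demand); the numbers are made up:

```python
import numpy as np

demands = np.array([[2, 0, 1],     # rows: requests (columns of the LP)
                    [0, 3, 0],     # entries: demand of a request per resource
                    [1, 1, 0]])
capacities = np.array([10, 12, 8])

# Assumes every resource is demanded at least once (no division by zero).
column_sparsity = int((demands > 0).sum(axis=1).max())            # d = 2 here
capacity_ratio = float((capacities / demands.max(axis=0)).min())  # B = 4 here
```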
1907.00605 | 2955527767 | We study online multidimensional variants of the generalized assignment problem, which are used to model prominent real-world applications, such as the assignment of virtual machines with multiple resource requirements to physical infrastructure in cloud computing. These problems can be seen as an extension of the well-known secretary problem, and thus the standard online worst-case model cannot provide any performance guarantee. The prevailing model in this case is the random-order model, which provides a useful, realistic and robust alternative. Using this model, we study the @math -dimensional generalized assignment problem, where we introduce a novel technique that achieves an @math -competitive algorithm and prove a matching lower bound of @math. Furthermore, our algorithm improves upon the best-known competitive ratio for the online (one-dimensional) generalized assignment problem and the online knapsack problem. | Some related problems have competitive algorithms in the worst-case model too. One example is the AdWords problem, which is a special case of GAP in which the profit of each item is equal to its size. Under the assumption that items are small compared to the capacity of the bins, @cite_6 presented an optimal @math -competitive algorithm. Without this assumption, the best known competitive ratio is @math @cite_0. Another example is the online vector bin packing problem, in which items arrive one-by-one and the goal is to pack them all in the minimum number of unit-sized @math -dimensional bins. This problem was studied by @cite_8, who showed that the First Fit algorithm has a worst-case competitive ratio of @math (a sketch of First Fit follows this row). More recently, @cite_19 showed that this algorithm is asymptotically optimal by proving a lower bound of @math. | {
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_6",
"@cite_8"
],
"mid": [
"2962711478",
"2053641648",
"2131951207",
"2025787252"
],
"abstract": [
"Abstract In most of microeconomic theory, consumers are assumed to exhibit decreasing marginal utilities. This paper considers combinatorial auctions among such submodular buyers. The valuations of such buyers are placed within a hierarchy of valuations that exhibit no complementarities, a hierarchy that includes also OR and XOR combinations of singleton valuations, and valuations satisfying the gross substitutes property. Those last valuations are shown to form a zero-measure subset of the submodular valuations that have positive measure. While we show that the allocation problem among submodular valuations is NP-hard, we present an efficient greedy 2-approximation algorithm for this case and generalize it to the case of limited complementarities. No such approximation algorithm exists in a setting allowing for arbitrary complementarities. Some results about strategic aspects of combinatorial auctions among players with decreasing marginal utilities are also presented.",
"In the d-dimensional bin packing problem (VBP), one is given vectors x1,x2, ... ,xn ∈ Rd and the goal is to find a partition into a minimum number of feasible sets: 1,2 ... ,n = ∪is Bi. A set Bi is feasible if ∑j ∈ Bi xj ≤ 1, where 1 denotes the all 1's vector. For online VBP, it has been outstanding for almost 20 years to clarify the gap between the best lower bound Ω(1) on the competitive ratio versus the best upper bound of O(d). We settle this by describing a Ω(d1-e) lower bound. We also give strong lower bounds (of Ω(d1 B-e) ) if the bin size B ∈ Z+ is allowed to grow. Finally, we discuss almost-matching upper bound results for general values of B; we show an upper bound whose exponent is additively \"shifted by 1\" from the lower bound exponent.",
"How does a search engine company decide what ads to display with each query so as to maximize its revenueq This turns out to be a generalization of the online bipartite matching problem. We introduce the notion of a trade-off revealing LP and use it to derive an optimal algorithm achieving a competitive ratio of 1−1 e for this problem.",
""
]
} |
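A minimal sketch of the First Fit heuristic for online vector bin packing discussed above: each arriving d-dimensional item goes into the first already-open unit-capacity bin that still fits it, otherwise a new bin is opened.

```python
def first_fit(items):
    """items: d-dimensional demand vectors with entries in [0, 1]."""
    bins = []                                    # each bin stores its load vector
    assignment = []
    for item in items:
        for i, load in enumerate(bins):
            if all(l + x <= 1.0 for l, x in zip(load, item)):
                bins[i] = [l + x for l, x in zip(load, item)]
                assignment.append(i)
                break
        else:                                    # no open bin fits the item
            bins.append(list(item))
            assignment.append(len(bins) - 1)
    return assignment, len(bins)
```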
1907.00421 | 2954297171 | Session types, types for structuring communication between endpoints in distributed systems, are recently being integrated into mainstream programming languages. In practice, a very important notion for dealing with such types is that of subtyping, since it allows for typing larger classes of systems, where a program does not have precisely the expected behavior but a similar one. Unfortunately, recent work has shown that subtyping for session types in an asynchronous setting is undecidable. To cope with this negative result, the only approaches we are aware of either restrict the syntax of session types or limit communication (by considering forms of bounded asynchrony). Both approaches are too restrictive in practice, hence we proceed differently by presenting an algorithm for checking subtyping which is sound, but not complete (in some cases it terminates without returning a decisive verdict). The algorithm is based on a tree representation of the coinductive definition of asynchronous subtyping; this tree could be infinite, and the algorithm checks for the presence of finite witnesses of infinite successful subtrees. Furthermore, we provide a tool that implements our algorithm and we apply it to many examples that cannot be managed with the previous approaches. | Related work: Gay and Hole @cite_21 @cite_29 introduced (synchronous) subtyping for session types and showed it is decidable. @cite_30 adapted the notion of session subtyping to asynchronous communication by introducing delayed inputs. Later, @cite_15 @cite_14 provided an alternative definition prohibiting orphan messages; we use this definition in this work. Recently, asynchronous subtyping was shown to be undecidable by encoding it as an equivalent question in the setting of Turing machines @cite_0 and queue machines @cite_31. Recent work @cite_0 @cite_31 @cite_28 investigated restrictions to achieve decidability; these restrictions are either on the size of the FIFO channels or syntactic. In the latter case, we recall the single-out and single-in restrictions, i.e., where all output (respectively input) choices are singletons. (An illustrative checker for the synchronous relation follows this row.) | {
"cite_N": [
"@cite_30",
"@cite_31",
"@cite_14",
"@cite_28",
"@cite_29",
"@cite_21",
"@cite_0",
"@cite_15"
],
"mid": [
"",
"2553870630",
"",
"2786270130",
"2088962847",
"1511544530",
"2610085676",
"2754865860"
],
"abstract": [
"",
"Session types are used to describe communication protocols in distributed systems and, as usual in type theories, session subtyping characterizes substitutability of the communicating processes. We investigate the (un)decidability of subtyping for session types in asynchronously communicating systems. We first devise a core undecidable subtyping relation that is obtained by imposing limitations on the structure of types. Then, as a consequence of this initial undecidability result, we show that (differently from what stated or conjectured in the literature) the three notions of asynchronous subtyping defined so far for session types are all undecidable. Namely, we consider the asynchronous session subtyping by Mostrous and Yoshida for binary sessions, the relation by for binary sessions under the assumption that every message emitted is eventually consumed, and the one by for multiparty session types. Finally, by showing that two fragments of the core subtyping relation are decidable, we evince that further restrictions on the structure of types make our core subtyping relation decidable.",
"",
"Abstract Session types are behavioural types for guaranteeing that concurrent programs are free from basic communication errors. Recent work has shown that asynchronous session subtyping is undecidable. However, since session types have become popular in mainstream programming languages in which asynchronous communication is the norm rather than the exception, it is crucial to detect significant decidable subtyping relations. Previous work considered extremely restrictive fragments in which limitations were imposed to the size of communication buffer (at most 1) or to the possibility to express multiple choices (disallowing them completely in one of the compared types). In this work, for the first time, we show decidability of a fragment that does not impose any limitation on communication buffers and allows both the compared types to include multiple choices for either input or output, thus yielding a fragment which is more significant from an applicability viewpoint. In general, we study the boundary between decidability and undecidability by considering several fragments of subtyping. Notably, we show that subtyping remains undecidable even if restricted to not using output covariance and input contravariance.",
"Extending the pi calculus with the session types proposed by allows high-level specifications of structured patterns of communication, such as client-server protocols, to be expressed as types and verified by static typechecking. We define a notion of subtyping for session types, which allows protocol specifications to be extended in order to describe richer behaviour; for example, an implemented server can be refined without invalidating type-correctness of an overall system. We formalize the syntax, operational semantics and typing rules of an extended pi calculus, prove that typability guarantees absence of run-time communication errors, and show that the typing rules can be transformed into a practical typechecking algorithm.",
"We define an extension of the π-calculus with a static type system which supports high-level specifications of extended patterns of communication, such as client-server protocols. Subtyping allows protocol specifications to be extended in order to describe richer behaviour; an implemented server can then be replaced by a refined implementation, without invalidating type-correctness of the overall system. We use the POP3 protocol as a concrete example of this technique.",
"Asynchronous session subtyping has been studied extensively ini?ź[9, 10, 28---31] and applied ini?ź[23, 32, 33, 35]. An open question was whether this subtyping relation is decidable. This paper settles the question in the negative. To prove this result, we first introduce a new sub-class of two-party communicating finite-state machines CFSMs, called asynchronous duplex ADs, which we show to be Turing complete. Secondly, we give a compatibility relation over CFSMs, which is sound and complete wrt. safety for ADs, and is equivalent to the asynchronous subtyping. Then we show that the halting problem reduces to checking whether two CFSMs are in the relation. In addition, we show the compatibility relation to be decidable for three sub-classes of ADs.",
"Subtyping in concurrency has been extensively studied since early 1990s as one of the most interesting issues in type theory. The correctness of subtyping relations has been usually provided as the soundness for type safety. The converse direction, the completeness, has been largely ignored in spite of its usefulness to define the largest subtyping relation ensuring type safety. This paper formalises preciseness (i.e. both soundness and completeness) of subtyping for mobile processes and studies it for the synchronous and the asynchronous session calculi. We first prove that the well-known session subtyping, the branching-selection subtyping, is sound and complete for the synchronous calculus. Next we show that in the asynchronous calculus, this subtyping is incomplete for type-safety: that is, there exist session types T and S such that T can safely be considered as a subtype of S, but T < S is not derivable by the subtyping. We then propose an asynchronous subtyping system which is sound and complete for the asynchronous calculus. The method gives a general guidance to design rigorous channel-based subtypings respecting desired safety properties. Both the synchronous and the asynchronous calculus are first considered with lin ear channels only, and then they are extended with session initialisations and c ommunications of expressions (including shared channels)."
]
} |
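To make the branching rules behind session subtyping concrete, here is a small checker for the synchronous relation of Gay and Hole restricted to finite (non-recursive) binary session types, written in the process-oriented convention where a subtype may offer more input branches and select among fewer output branches. Conventions and orientation differ across the cited papers, so treat this as one illustrative reading rather than the definition used in any of them.

```python
END = ("end",)

def inp(**branches): return ("in", branches)    # external choice (branching)
def out(**branches): return ("out", branches)   # internal choice (selection)

def subtype(t, s):
    """Is t a (synchronous) subtype of s?"""
    if t == END or s == END:
        return t == s
    (kt, bt), (ks, bs) = t, s
    if kt != ks:
        return False
    if kt == "in":   # t may offer MORE inputs: every branch of s must be in t
        return all(l in bt and subtype(bt[l], bs[l]) for l in bs)
    else:            # t may select among FEWER outputs: every branch of t must be in s
        return all(l in bs and subtype(bt[l], bs[l]) for l in bt)

# A client that can also handle an extra "retry" answer subsumes one that cannot:
server_view = out(req=inp(ok=END, err=END))
richer      = out(req=inp(ok=END, err=END, retry=END))
assert subtype(richer, server_view)
```

The asynchronous relation studied in the paper additionally lets outputs be anticipated over inputs, which is exactly what makes the subtyping tree potentially infinite.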
1907.00421 | 2954297171 | Session types, types for structuring communication between endpoints in distributed systems, are recently being integrated into mainstream programming languages. In practice, a very important notion for dealing with such types is that of subtyping, since it allows for typing larger classes of systems, where a program does not have precisely the expected behavior but a similar one. Unfortunately, recent work has shown that subtyping for session types in an asynchronous setting is undecidable. To cope with this negative result, the only approaches we are aware of either restrict the syntax of session types or limit communication (by considering forms of bounded asynchrony). Both approaches are too restrictive in practice, hence we proceed differently by presenting an algorithm for checking subtyping which is sound, but not complete (in some cases it terminates without returning a decisive verdict). The algorithm is based on a tree representation of the coinductive definition of asynchronous subtyping; this tree could be infinite, and the algorithm checks for the presence of finite witnesses of infinite successful subtrees. Furthermore, we provide a tool that implements our algorithm and we apply it to many examples that cannot be managed with the previous approaches. | Conclusions and future work: We have proposed a sound algorithm for checking asynchronous session subtyping, showing that it is still possible to decide whether two types are related for many nontrivial examples. Our algorithm is based on a (potentially infinite) tree representation of the coinductive definition of asynchronous subtyping; it checks for the presence of finite witnesses of infinite successful subtrees. We have provided an implementation and applied it to examples that cannot be recognised by previous approaches. Although the (worst-case) complexity of our algorithm is rather high (the termination condition expects to encounter a set of states already encountered, of which there may be exponentially many), our implementation shows that it actually terminates in under a second for machines of size comparable to typical communication protocols used in real programs; e.g., Go programs feature between three and four communication primitives per channel, and their branching constructs feature two branches on average @cite_7. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2921920223"
],
"abstract": [
"Go is a popular programming language renowned for its good support for system programming and its channel-based message passing concurrency mechanism. These strengths have made it the language of choice of many platform software such as Docker and Kubernetes. In this paper, we analyse 865 Go projects from GitHub in order to understand how message passing concurrency is used in publicly available code. Our results include the following findings: (1) message passing primitives are used frequently and intensively, (2) concurrency-related features are generally clustered in specific parts of a Go project, (3) most projects use synchronous communication channels over asynchronous ones, and (4) most Go projects use simple concurrent thread topologies, which are however currently unsupported by existing static verification frameworks."
]
} |
1907.00421 | 2954297171 | Session types, types for structuring communication between endpoints in distributed systems, are recently being integrated into mainstream programming languages. In practice, a very important notion for dealing with such types is that of subtyping, since it allows for typing larger classes of systems, where a program does not have precisely the expected behavior but a similar one. Unfortunately, recent work has shown that subtyping for session types in an asynchronous setting is undecidable. To cope with this negative result, the only approaches we are aware of either restrict the syntax of session types or limit communication (by considering forms of bounded asynchrony). Both approaches are too restrictive in practice, hence we proceed differently by presenting an algorithm for checking subtyping which is sound, but not complete (in some cases it terminates without returning a decisive verdict). The algorithm is based on a tree representation of the coinductive definition of asynchronous subtyping; this tree could be infinite, and the algorithm checks for the presence of finite witnesses of infinite successful subtrees. Furthermore, we provide a tool that implements our algorithm and we apply it to many examples that cannot be managed with the previous approaches. | As future work, we plan to enrich our algorithm to recognise subtypes featuring more complex accumulation patterns, e.g., Example . Moreover, due to the tight correspondence with safety of communicating machines @cite_0, we plan to investigate the possibility of using our approach to characterise a novel decidable subclass of communicating machines. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2610085676"
],
"abstract": [
"Asynchronous session subtyping has been studied extensively ini?ź[9, 10, 28---31] and applied ini?ź[23, 32, 33, 35]. An open question was whether this subtyping relation is decidable. This paper settles the question in the negative. To prove this result, we first introduce a new sub-class of two-party communicating finite-state machines CFSMs, called asynchronous duplex ADs, which we show to be Turing complete. Secondly, we give a compatibility relation over CFSMs, which is sound and complete wrt. safety for ADs, and is equivalent to the asynchronous subtyping. Then we show that the halting problem reduces to checking whether two CFSMs are in the relation. In addition, we show the compatibility relation to be decidable for three sub-classes of ADs."
]
} |
1907.00620 | 2954090638 | We present a simple and novel way to solve the text-to-SQL problem with weak supervision. We call it Rule-SQL. Given the question and the answer from the database table, without the SQL logical form, Rule-SQL uses database rules for SQL exploration first and then uses the explored SQL for supervised training. We design several rules for reducing the exploration search space. For the deep model, we leverage BERT for the representation layer and separate the model into SELECT, AGG and WHERE parts. The experiment results on WikiSQL outperform the strong baseline of full supervision and are comparable to the state-of-the-art weakly supervised methods. | WikiSQL @cite_11 is a large semantic parsing dataset with 80654 natural language questions and corresponding SQL pairs. Examples from WikiSQL are shown in Fig. 1. (An illustrative record in the WikiSQL layout follows this row.) | {
"cite_N": [
"@cite_11"
],
"mid": [
"2751448157"
],
"abstract": [
"Relational databases store a significant amount of the worlds data. However, accessing this data currently requires users to understand a query language such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model uses rewards from in the loop query execution over the database to learn a policy to generate the query, which contains unordered parts that are less suitable for optimization via cross entropy loss. Moreover, Seq2SQL leverages the structure of SQL to prune the space of generated queries and significantly simplify the generation problem. In addition to the model, we release WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables fromWikipedia that is an order of magnitude larger than comparable datasets. By applying policy based reinforcement learning with a query execution environment to WikiSQL, Seq2SQL outperforms a state-of-the-art semantic parser, improving execution accuracy from 35.9 to 59.4 and logical form accuracy from 23.4 to 48.3 ."
]
} |
1907.00620 | 2954090638 | We present a simple and novel way to solve the text-to-SQL problem with weak supervision. We call it Rule-SQL. Given the question and the answer from the database table, without the SQL logical form, Rule-SQL first uses database rules for SQL exploration and then uses the explored SQL for supervised training. We design several rules for reducing the exploration search space. For the deep model, we leverage BERT for the representation layer and separate the model into SELECT, AGG and WHERE parts. The experimental results on WikiSQL outperform the strong fully supervised baseline and are comparable to the state-of-the-art weakly supervised methods. | For fully supervised training, there is a large body of work: @cite_11 proposes Seq2SQL, which separates the SQL query into three sub-parts and outperforms the sequence-to-sequence baseline. @cite_6 proposes SQLNet, which employs a sequence-to-set model and a column attention mechanism to solve the order problem of the WHERE clause. @cite_9 proposes TypeSQL, which takes additional type information as input. @cite_10 proposes the Coarse2Fine model, which first generates a rough sketch of the output and then refines it to produce a better result. | {
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_6",
"@cite_11"
],
"mid": [
"2798981442",
"2896392119",
"2768409085",
"2751448157"
],
"abstract": [
"Interacting with relational databases through natural language helps users of any background easily query and analyze a vast amount of data. This requires a system that understands users' questions and converts them to SQL queries automatically. In this paper we present a novel approach, TypeSQL, which views this problem as a slot filling task. Additionally, TypeSQL utilizes type information to better understand rare entities and numbers in natural language questions. We test this idea on the WikiSQL dataset and outperform the prior state-of-the-art by 5.5 in much less time. We also show that accessing the content of databases can significantly improve the performance when users' queries are not well-formed. TypeSQL gets 82.6 accuracy, a 17.5 absolute improvement compared to the previous content-sensitive model.",
"Semantic parsing aims at mapping natural language utterances into structured meaning representations. In this work, we propose a structure-aware neural architecture which decomposes the semantic parsing process into two stages. Given an input utterance, we first generate a rough sketch of its meaning, where low-level information (such as variable names and arguments) is glossed over. Then, we fill in missing details by taking into account the natural language input and the sketch itself. Experimental results on four datasets characteristic of different domains and meaning representations show that our approach consistently improves performance, achieving competitive results despite the use of relatively simple decoders.",
"Synthesizing SQL queries from natural language is a long-standing open problem and has been attracting considerable interest recently. Toward solving the problem, the de facto approach is to employ a sequence-to-sequence-style model. Such an approach will necessarily require the SQL queries to be serialized. Since the same SQL query may have multiple equivalent serializations, training a sequence-to-sequence-style model is sensitive to the choice from one of them. This phenomenon is documented as the \"order-matters\" problem. Existing state-of-the-art approaches rely on reinforcement learning to reward the decoder when it generates any of the equivalent serializations. However, we observe that the improvement from reinforcement learning is limited. In this paper, we propose a novel approach, i.e., SQLNet, to fundamentally solve this problem by avoiding the sequence-to-sequence structure when the order does not matter. In particular, we employ a sketch-based approach where the sketch contains a dependency graph, so that one prediction can be done by taking into consideration only the previous predictions that it depends on. In addition, we propose a sequence-to-set model as well as the column attention mechanism to synthesize the query based on the sketch. By combining all these novel techniques, we show that SQLNet can outperform the prior art by 9 to 13 on the WikiSQL task.",
"Relational databases store a significant amount of the worlds data. However, accessing this data currently requires users to understand a query language such as SQL. We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries. Our model uses rewards from in the loop query execution over the database to learn a policy to generate the query, which contains unordered parts that are less suitable for optimization via cross entropy loss. Moreover, Seq2SQL leverages the structure of SQL to prune the space of generated queries and significantly simplify the generation problem. In addition to the model, we release WikiSQL, a dataset of 80654 hand-annotated examples of questions and SQL queries distributed across 24241 tables fromWikipedia that is an order of magnitude larger than comparable datasets. By applying policy based reinforcement learning with a query execution environment to WikiSQL, Seq2SQL outperforms a state-of-the-art semantic parser, improving execution accuracy from 35.9 to 59.4 and logical form accuracy from 23.4 to 48.3 ."
]
} |
1907.00382 | 2955340493 | The simple approach of retrieving the closest match of a query image from the gallery compares an image pair using the sum of absolute differences in pixel or feature space. The process is computationally expensive, ill-posed with respect to illumination, background composition and pose variation, and inefficient to deploy on gallery sets with more than 1000 elements. Hashing is a faster alternative which involves representing images in reduced dimensional simple feature spaces. Encoding images into binary hash codes enables similarity comparison in an image-pair using the Hamming distance measure. The challenge, however, lies in encoding the images using a semantic hashing scheme that lets subjective neighbors lie within the tolerable Hamming radius. This work presents a solution employing adversarial learning of a deep neural semantic hashing network for fashion inventory retrieval. It consists of a feature extracting convolutional neural network (CNN) learned to (i) minimize the error in classifying the type of clothing, (ii) minimize the Hamming distance between semantic neighbors and maximize the distance between semantically dissimilar images, and (iii) maximally scramble a discriminator's ability to identify the corresponding hash code-image pair when processing a semantically similar query-gallery image pair. Experimental validation for fashion inventory search yields a mean average precision (mAP) of 90.65% in finding the closest match, as compared to 53.26% obtained by the prior art of deep Cauchy hashing for Hamming space retrieval. | Supervised learning of CNNs for hashing of images has proven to be better at generating hash codes. These methods typically incorporate the class label information of an image to learn features characteristic of each class of objects; for instance, in the case of fashion databases, different clothing types have characteristic features, with shirts having features characteristically different from those of trousers or skirts. Recent works employ pair-wise image labels for generating effective hash functions. Methods employing pair-wise similarity learning generally perform better @cite_10 @cite_0 than non-similarity based hashing @cite_6 , which is easier as it does not require pairwise label information for understanding similarity. | {
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_6"
],
"mid": [
"2464915613",
"2798834175",
"1913628733"
],
"abstract": [
"In this paper, we present a new hashing method to learn compact binary codes for highly efficient image retrieval on large-scale datasets. While the complex image appearance variations still pose a great challenge to reliable retrieval, in light of the recent progress of Convolutional Neural Networks (CNNs) in learning robust image representation on various vision tasks, this paper proposes a novel Deep Supervised Hashing (DSH) method to learn compact similarity-preserving binary code for the huge body of image data. Specifically, we devise a CNN architecture that takes pairs of images (similar dissimilar) as training inputs and encourages the output of each image to approximate discrete values (e.g. +1 -1). To this end, a loss function is elaborately designed to maximize the discriminability of the output space by encoding the supervised information from the input image pairs, and simultaneously imposing regularization on the real-valued outputs to approximate the desired discrete values. For image retrieval, new-coming query images can be easily encoded by propagating through the network and then quantizing the network outputs to binary codes representation. Extensive experiments on two large scale datasets CIFAR-10 and NUS-WIDE show the promising performance of our method compared with the state-of-the-arts.",
"Due to its computation efficiency and retrieval quality, hashing has been widely applied to approximate nearest neighbor search for large-scale image retrieval, while deep hashing further improves the retrieval quality by end-to-end representation learning and hash coding. With compact hash codes, Hamming space retrieval enables the most efficient constant-time search that returns data points within a given Hamming radius to each query, by hash table lookups instead of linear scan. However, subject to the weak capability of concentrating relevant images to be within a small Hamming ball due to mis-specified loss functions, existing deep hashing methods may underperform for Hamming space retrieval. This work presents Deep Cauchy Hashing (DCH), a novel deep hashing model that generates compact and concentrated binary hash codes to enable efficient and effective Hamming space retrieval. The main idea is to design a pairwise cross-entropy loss based on Cauchy distribution, which penalizes significantly on similar image pairs with Hamming distance larger than the given Hamming radius threshold. Comprehensive experiments demonstrate that DCH can generate highly concentrated hash codes and yield state-of-the-art Hamming space retrieval performance on three datasets, NUS-WIDE, CIFAR-10, and MS-COCO.",
"Approximate nearest neighbor search is an efficient strategy for large-scale image retrieval. Encouraged by the recent advances in convolutional neural networks (CNNs), we propose an effective deep learning framework to generate binary hash codes for fast image retrieval. Our idea is that when the data labels are available, binary codes can be learned by employing a hidden layer for representing the latent concepts that dominate the class labels. The utilization of the CNN also allows for learning image representations. Unlike other supervised methods that require pair-wised inputs for binary code learning, our method learns hash codes and image representations in a point-wised manner, making it suitable for large-scale datasets. Experimental results show that our method outperforms several state-of-the-art hashing algorithms on the CIFAR-10 and MNIST datasets. We further demonstrate its scalability and efficacy on a large-scale dataset of 1 million clothing images."
]
} |
1907.00382 | 2955340493 | The simple approach of retrieving the closest match of a query image from the gallery compares an image pair using the sum of absolute differences in pixel or feature space. The process is computationally expensive, ill-posed with respect to illumination, background composition and pose variation, and inefficient to deploy on gallery sets with more than 1000 elements. Hashing is a faster alternative which involves representing images in reduced dimensional simple feature spaces. Encoding images into binary hash codes enables similarity comparison in an image-pair using the Hamming distance measure. The challenge, however, lies in encoding the images using a semantic hashing scheme that lets subjective neighbors lie within the tolerable Hamming radius. This work presents a solution employing adversarial learning of a deep neural semantic hashing network for fashion inventory retrieval. It consists of a feature extracting convolutional neural network (CNN) learned to (i) minimize the error in classifying the type of clothing, (ii) minimize the Hamming distance between semantic neighbors and maximize the distance between semantically dissimilar images, and (iii) maximally scramble a discriminator's ability to identify the corresponding hash code-image pair when processing a semantically similar query-gallery image pair. Experimental validation for fashion inventory search yields a mean average precision (mAP) of 90.65% in finding the closest match, as compared to 53.26% obtained by the prior art of deep Cauchy hashing for Hamming space retrieval. | Earlier approaches employed image classification models, such as CNNs, that were modified to generate binary codes from features extracted in the penultimate layers, with binarizing functions used to generate binary codes from continuous valued data. The retrieval task is typically performed in two stages, coarse and fine @cite_6 . The coarse stage retrieves a large set of candidates using inexpensive distance measures like the Hamming distance. In the fine stage, distance measures like the Euclidean distance are employed on the continuous valued features to find the closest match. | {
"cite_N": [
"@cite_6"
],
"mid": [
"1913628733"
],
"abstract": [
"Approximate nearest neighbor search is an efficient strategy for large-scale image retrieval. Encouraged by the recent advances in convolutional neural networks (CNNs), we propose an effective deep learning framework to generate binary hash codes for fast image retrieval. Our idea is that when the data labels are available, binary codes can be learned by employing a hidden layer for representing the latent concepts that dominate the class labels. The utilization of the CNN also allows for learning image representations. Unlike other supervised methods that require pair-wised inputs for binary code learning, our method learns hash codes and image representations in a point-wised manner, making it suitable for large-scale datasets. Experimental results show that our method outperforms several state-of-the-art hashing algorithms on the CIFAR-10 and MNIST datasets. We further demonstrate its scalability and efficacy on a large-scale dataset of 1 million clothing images."
]
} |
1907.00382 | 2955340493 | The simple approach of retrieving the closest match of a query image from the gallery compares an image pair using the sum of absolute differences in pixel or feature space. The process is computationally expensive, ill-posed with respect to illumination, background composition and pose variation, and inefficient to deploy on gallery sets with more than 1000 elements. Hashing is a faster alternative which involves representing images in reduced dimensional simple feature spaces. Encoding images into binary hash codes enables similarity comparison in an image-pair using the Hamming distance measure. The challenge, however, lies in encoding the images using a semantic hashing scheme that lets subjective neighbors lie within the tolerable Hamming radius. This work presents a solution employing adversarial learning of a deep neural semantic hashing network for fashion inventory retrieval. It consists of a feature extracting convolutional neural network (CNN) learned to (i) minimize the error in classifying the type of clothing, (ii) minimize the Hamming distance between semantic neighbors and maximize the distance between semantically dissimilar images, and (iii) maximally scramble a discriminator's ability to identify the corresponding hash code-image pair when processing a semantically similar query-gallery image pair. Experimental validation for fashion inventory search yields a mean average precision (mAP) of 90.65% in finding the closest match, as compared to 53.26% obtained by the prior art of deep Cauchy hashing for Hamming space retrieval. | Recent approaches have employed deep Cauchy hashing. This approach predicts the similarity label using a pairwise loss based on the Cauchy distribution and also uses a quantization loss to compensate for the relaxation introduced by the binary hash code generating function @cite_10 . The Cauchy function has proved to be more effective than the sigmoid in estimating optimal values of the similarity index and penalizing the losses obtained. The quantization loss ensures that the generated hash codes are close to the exact limits of binary values @cite_8 , with the limitation being the large number of epochs required to train these networks. | {
"cite_N": [
"@cite_10",
"@cite_8"
],
"mid": [
"2798834175",
"2964280870"
],
"abstract": [
"Due to its computation efficiency and retrieval quality, hashing has been widely applied to approximate nearest neighbor search for large-scale image retrieval, while deep hashing further improves the retrieval quality by end-to-end representation learning and hash coding. With compact hash codes, Hamming space retrieval enables the most efficient constant-time search that returns data points within a given Hamming radius to each query, by hash table lookups instead of linear scan. However, subject to the weak capability of concentrating relevant images to be within a small Hamming ball due to mis-specified loss functions, existing deep hashing methods may underperform for Hamming space retrieval. This work presents Deep Cauchy Hashing (DCH), a novel deep hashing model that generates compact and concentrated binary hash codes to enable efficient and effective Hamming space retrieval. The main idea is to design a pairwise cross-entropy loss based on Cauchy distribution, which penalizes significantly on similar image pairs with Hamming distance larger than the given Hamming radius threshold. Comprehensive experiments demonstrate that DCH can generate highly concentrated hash codes and yield state-of-the-art Hamming space retrieval performance on three datasets, NUS-WIDE, CIFAR-10, and MS-COCO.",
"Learning to hash has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval, due to its computation efficiency and retrieval quality. Deep learning to hash, which improves retrieval quality by end-to-end representation learning and hash encoding, has received increasing attention recently. Subject to the ill-posed gradient difficulty in the optimization with sign activations, existing deep learning to hash methods need to first learn continuous representations and then generate binary hash codes in a separated binarization step, which suffer from substantial loss of retrieval quality. This work presents HashNet, a novel deep architecture for deep learning to hash by continuation method with convergence guarantees, which learns exactly binary hash codes from imbalanced similarity data. The key idea is to attack the ill-posed gradient problem in optimizing deep networks with non-smooth binary activations by continuation method, in which we begin from learning an easier network with smoothed activation function and let it evolve during the training, until it eventually goes back to being the original, difficult to optimize, deep network with the sign activation function. Comprehensive empirical evidence shows that HashNet can generate exactly binary hash codes and yield state-of-the-art multimedia retrieval performance on standard benchmarks."
]
} |
1907.00382 | 2955340493 | The simple approach of retrieving the closest match of a query image from the gallery compares an image pair using the sum of absolute differences in pixel or feature space. The process is computationally expensive, ill-posed with respect to illumination, background composition and pose variation, and inefficient to deploy on gallery sets with more than 1000 elements. Hashing is a faster alternative which involves representing images in reduced dimensional simple feature spaces. Encoding images into binary hash codes enables similarity comparison in an image-pair using the Hamming distance measure. The challenge, however, lies in encoding the images using a semantic hashing scheme that lets subjective neighbors lie within the tolerable Hamming radius. This work presents a solution employing adversarial learning of a deep neural semantic hashing network for fashion inventory retrieval. It consists of a feature extracting convolutional neural network (CNN) learned to (i) minimize the error in classifying the type of clothing, (ii) minimize the Hamming distance between semantic neighbors and maximize the distance between semantically dissimilar images, and (iii) maximally scramble a discriminator's ability to identify the corresponding hash code-image pair when processing a semantically similar query-gallery image pair. Experimental validation for fashion inventory search yields a mean average precision (mAP) of 90.65% in finding the closest match, as compared to 53.26% obtained by the prior art of deep Cauchy hashing for Hamming space retrieval. | Although the supervised hashing methods, especially those employing deep learnt hash functions, have shown remarkable performance in representing input data using binary codes, they require costly-to-acquire human-annotated labels for training. In the absence of large annotated datasets, their performance degrades significantly. The unsupervised hashing methods, on the other hand, easily address this issue by providing learning frameworks that do not require any labelled input. Semantic hashing is one of the early studies; it adopts a restricted Boltzmann machine (RBM) as a deep hash function @cite_3 . | {
"cite_N": [
"@cite_3"
],
"mid": [
"2100495367"
],
"abstract": [
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data."
]
} |
1907.00330 | 2954705892 | Zero-shot learning, which aims to recognize new categories that are not included in the training set, has gained popularity owing to its potential for real-world applications. Zero-shot learning models rely on learning an embedding space, where both semantic descriptions of classes and visual features of instances can be embedded for nearest neighbor search. Recently, most of the existing works consider the visual space formulated by deep visual features as an ideal choice of the embedding space. However, the discrete distribution of instances in the visual space makes the data structure unremarkable. We argue that optimizing the visual space is crucial as it allows semantic vectors to be embedded into the visual space more effectively. In this work, we propose two strategies to accomplish this purpose. One is the visual prototype based method, which learns a visual prototype for each visual class, so that, in the visual space, a class can be represented by a prototype feature instead of a series of discrete visual features. The other is to optimize the visual feature structure in an intermediate embedding space, and in this method we successfully devise a multilayer perceptron based algorithm that is able to learn the common intermediate embedding space and meanwhile make the visual data structure more distinctive. Through extensive experimental evaluation on four benchmark datasets, we demonstrate that optimizing the visual space is beneficial for zero-shot learning. Besides, the proposed prototype based method achieves new state-of-the-art performance. | In the ZSL task, the seen categories in the training set and the unseen categories in the testing set are disjoint. In fact, ZSL can be seen as a subfield of transfer learning @cite_28 @cite_40 , as the key idea of ZSL is to transfer the knowledge contained in the training resources to the task of classifying testing instances. Early ZSL works @cite_7 @cite_14 @cite_24 follow an intuitive approach to object recognition that makes use of attributes to infer the label of an unseen test image. Recently, learning an embedding function that maps the semantic vectors and visual features into an embedding space, where the visual features and semantic vectors can be compared directly, has shown outstanding performance and has become the most popular method @cite_27 @cite_3 @cite_50 @cite_62 . After the projection, nearest neighbor search methods can be used to find the most similar class attribute vector for the test instance, and the discovered attribute vector corresponds to the most likely class. The embedding based method is adopted in this work. | {
"cite_N": [
"@cite_14",
"@cite_62",
"@cite_7",
"@cite_28",
"@cite_3",
"@cite_24",
"@cite_40",
"@cite_27",
"@cite_50"
],
"mid": [
"2098411764",
"2762085884",
"2134270519",
"2165698076",
"",
"2128532956",
"2395579298",
"2949533609",
""
],
"abstract": [
"We propose to shift the goal of recognition from naming to describing. Doing so allows us not only to name familiar objects, but also: to report unusual aspects of a familiar object (“spotty dog”, not just “dog”); to say something about unfamiliar objects (“hairy and four-legged”, not just “unknown”); and to learn how to recognize new objects with few or no visual examples. Rather than focusing on identity assignment, we make inferring attributes the core problem of recognition. These attributes can be semantic (“spotty”) or discriminative (“dogs have it but sheep do not”). Learning attributes presents a major new challenge: generalization across object categories, not just across instances within a category. In this paper, we also introduce a novel feature selection method for learning attributes that generalize well across categories. We support our claims by thorough evaluation that provides insights into the limitations of the standard recognition paradigm of naming and demonstrates the new abilities provided by our attribute-based framework.",
"Sufficient training examples are the fundamental requirement for most of the learning tasks. However, collecting well-labelled training examples is costly. Inspired by Zero-shot Learning (ZSL) that can make use of visual attributes or natural language semantics as an intermediate level clue to associate low-level features with high-level classes, in a novel extension of this idea, we aim to synthesise training data for novel classes using only semantic attributes. Despite the simplicity of this idea, there are several challenges. First, how to prevent the synthesised data from over-fitting to training classes? Second, how to guarantee the synthesised data is discriminative for ZSL tasks? Third, we observe that only a few dimensions of the learnt features gain high variances whereas most of the remaining dimensions are not informative. Thus, the question is how to make the concentrated information diffuse to most of the dimensions of synthesised data. To address the above issues, we propose a novel embedding algorithm named Unseen Visual Data Synthesis (UVDS) that projects semantic features to the high-dimensional visual feature space. Two main techniques are introduced in our proposed algorithm. (1) We introduce a latent embedding space which aims to reconcile the structural difference between the visual and semantic spaces, meanwhile preserve the local structure. (2) We propose a novel Diffusion Regularisation (DR) that explicitly forces the variances to diffuse over most dimensions of the synthesised data. By an orthogonal rotation (more precisely, an orthogonal transformation), DR can remove the redundant correlated attributes and further alleviate the over-fitting problem. On four benchmark datasets, we demonstrate the benefit of using synthesised unseen data for zero-shot learning. Extensive experimental results suggest that our proposed approach significantly outperforms the state-of-the-art methods.",
"We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes.",
"A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"",
"We study the problem of object recognition for categories for which we have no training examples, a task also called zero--data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently; the world contains tens of thousands of different object classes, and image collections have been formed and suitably annotated for only a few of them. To tackle the problem, we introduce attribute-based classification: Objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be prelearned independently, for example, from existing image data sets unrelated to the current task. Afterward, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper, we also introduce a new data set, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more data sets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.",
"Machine learning and data mining techniques have been used in numerous real-world applications. An assumption of traditional machine learning methodologies is the training data and testing data are taken from the same domain, such that the input feature space and data distribution characteristics are the same. However, in some real-world machine learning scenarios, this assumption does not hold. There are cases where training data is expensive or difficult to collect. Therefore, there is a need to create high-performance learners trained with more easily obtained data from different domains. This methodology is referred to as transfer learning. This survey paper formally defines transfer learning, presents information on current solutions, and reviews applications applied to transfer learning. Lastly, there is information listed on software downloads for various transfer learning solutions and a discussion of possible future research work. The transfer learning solutions surveyed are independent of data size and can be applied to big data environments.",
"Zero-shot learning (ZSL) aims to recognize unseen object classes without any training samples, which can be regarded as a form of transfer learning from seen classes to unseen ones. This is made possible by learning a projection between a feature space and a semantic space (e.g. attribute space). Key to ZSL is thus to learn a projection function that is robust against the often large domain gap between the seen and unseen classes. In this paper, we propose a novel ZSL model termed domain-invariant projection learning (DIPL). Our model has two novel components: (1) A domain-invariant feature self-reconstruction task is introduced to the seen unseen class data, resulting in a simple linear formulation that casts ZSL into a min-min optimization problem. Solving the problem is non-trivial, and a novel iterative algorithm is formulated as the solver, with rigorous theoretic algorithm analysis provided. (2) To further align the two domains via the learned projection, shared semantic structure among seen and unseen classes is explored via forming superclasses in the semantic space. Extensive experiments show that our model outperforms the state-of-the-art alternatives by significant margins.",
""
]
} |
1907.00330 | 2954705892 | Zero-shot learning, which aims to recognize new categories that are not included in the training set, has gained popularity owing to its potential for real-world applications. Zero-shot learning models rely on learning an embedding space, where both semantic descriptions of classes and visual features of instances can be embedded for nearest neighbor search. Recently, most of the existing works consider the visual space formulated by deep visual features as an ideal choice of the embedding space. However, the discrete distribution of instances in the visual space makes the data structure unremarkable. We argue that optimizing the visual space is crucial as it allows semantic vectors to be embedded into the visual space more effectively. In this work, we propose two strategies to accomplish this purpose. One is the visual prototype based method, which learns a visual prototype for each visual class, so that, in the visual space, a class can be represented by a prototype feature instead of a series of discrete visual features. The other is to optimize the visual feature structure in an intermediate embedding space, and in this method we successfully devise a multilayer perceptron based algorithm that is able to learn the common intermediate embedding space and meanwhile make the visual data structure more distinctive. Through extensive experimental evaluation on four benchmark datasets, we demonstrate that optimizing the visual space is beneficial for zero-shot learning. Besides, the proposed prototype based method achieves new state-of-the-art performance. | Most recently, unseen class information has been used to obtain better performance in the ZSL task @cite_61 @cite_48 @cite_19 @cite_29 @cite_59 @cite_16 @cite_54 . For instance, in the work @cite_61 , unseen class information is employed to assist the alignment of the visual-semantic structures. As another example, some recent works @cite_19 @cite_29 @cite_59 @cite_16 @cite_54 adopt generative models to synthesize labeled examples for the unseen classes, and consequently, these examples can be used to train a better projection model. Furthermore, a related scenario is transductive zero-shot learning @cite_23 @cite_13 @cite_44 @cite_27 , which assumes that unlabeled samples from the unseen classes are available during training. However, those works to some extent breach the strict ZSL setting that testing resources should not be accessed in the training stage. In our work, we make no use of unseen class information and assume that only the seen resources are available at training time. | {
"cite_N": [
"@cite_61",
"@cite_48",
"@cite_29",
"@cite_54",
"@cite_44",
"@cite_19",
"@cite_27",
"@cite_59",
"@cite_23",
"@cite_16",
"@cite_13"
],
"mid": [
"2951478085",
"",
"2963538198",
"2924476266",
"2514265125",
"2963545832",
"2949533609",
"2963960318",
"2794925779",
"",
"2141350700"
],
"abstract": [
"Zero-shot learning (ZSL) aims to recognize objects of novel classes without any training samples of specific classes, which is achieved by exploiting the semantic information and auxiliary datasets. Recently most ZSL approaches focus on learning visual-semantic embeddings to transfer knowledge from the auxiliary datasets to the novel classes. However, few works study whether the semantic information is discriminative or not for the recognition task. To tackle such problem, we propose a coupled dictionary learning approach to align the visual-semantic structures using the class prototypes, where the discriminative information lying in the visual space is utilized to improve the less discriminative semantic space. Then, zero-shot recognition can be performed in different spaces by the simple nearest neighbor approach using the learned class prototypes. Extensive experiments on four benchmark datasets show the effectiveness of the proposed approach.",
"",
"Zero shot learning in Image Classification refers to the setting where images from some novel classes are absent in the training data but other information such as natural language descriptions or attribute vectors of the classes are available. This setting is important in the real world since one may not be able to obtain images of all the possible classes at training. While previous approaches have tried to model the relationship between the class attribute space and the image space via some kind of a transfer function in order to model the image space correspondingly to an unseen class, we take a different approach and try to generate the samples from the given attributes, using a conditional variational autoencoder, and use the generated samples for classification of the unseen classes. By extensive testing on four benchmark datasets, we show that our model outperforms the state of the art, particularly in the more realistic generalized setting, where the training classes can also appear at the test time along with the novel classes.",
"",
"Zero-shot Recognition (ZSR) is to learn recognition models for novel classes without labeled data. It is a challenging task and has drawn considerable attention in recent years. The basic idea is to transfer knowledge from seen classes via the shared attributes. This paper focus on the transductive ZSR, i.e., we have unlabeled data for novel classes. Instead of learning models for seen and novel classes separately as in existing works, we put forward a novel joint learning approach which learns the shared model space (SMS) for models such that the knowledge can be effectively transferred between classes using the attributes. An effective algorithm is proposed for optimization. We conduct comprehensive experiments on three benchmark datasets for ZSR. The results demonstrates that the proposed SMS can significantly outperform the state-of-the-art related approaches which validates its efficacy for the ZSR task.",
"We present a generative framework for generalized zero-shot learning where the training and test classes are not necessarily disjoint. Built upon a variational autoencoder based architecture, consisting of a probabilistic encoder and a probabilistic conditional decoder, our model can generate novel exemplars from seen unseen classes, given their respective class attributes. These exemplars can subsequently be used to train any off-the-shelf classification model. One of the key aspects of our encoder-decoder architecture is a feedback-driven mechanism in which a discriminator (a multivariate regressor) learns to map the generated exemplars to the corresponding class attribute vectors, leading to an improved generator. Our model's ability to generate and leverage examples from unseen classes to train the classification model naturally helps to mitigate the bias towards predicting seen classes in generalized zero-shot learning settings. Through a comprehensive set of experiments, we show that our model outperforms several state-of-the-art methods, on several benchmark datasets, for both standard as well as generalized zero-shot learning.",
"Zero-shot learning (ZSL) aims to recognize unseen object classes without any training samples, which can be regarded as a form of transfer learning from seen classes to unseen ones. This is made possible by learning a projection between a feature space and a semantic space (e.g. attribute space). Key to ZSL is thus to learn a projection function that is robust against the often large domain gap between the seen and unseen classes. In this paper, we propose a novel ZSL model termed domain-invariant projection learning (DIPL). Our model has two novel components: (1) A domain-invariant feature self-reconstruction task is introduced to the seen unseen class data, resulting in a simple linear formulation that casts ZSL into a min-min optimization problem. Solving the problem is non-trivial, and a novel iterative algorithm is formulated as the solver, with rigorous theoretic algorithm analysis provided. (2) To further align the two domains via the learned projection, shared semantic structure among seen and unseen classes is explored via forming superclasses in the semantic space. Extensive experiments show that our model outperforms the state-of-the-art alternatives by significant margins.",
"Suffering from the extreme training data imbalance between seen and unseen classes, most of existing state-of-the-art approaches fail to achieve satisfactory results for the challenging generalized zero-shot learning task. To circumvent the need for labeled examples of unseen classes, we propose a novel generative adversarial network (GAN) that synthesizes CNN features conditioned on class-level semantic information, offering a shortcut directly from a semantic descriptor of a class to a class-conditional feature distribution. Our proposed approach, pairing a Wasserstein GAN with a classification loss, is able to generate sufficiently discriminative CNN features to train softmax classifiers or any multimodal embedding method. Our experimental results demonstrate a significant boost in accuracy over the state of the art on five challenging datasets - CUB, FLO, SUN, AWA and ImageNet - in both the zero-shot learning and generalized zero-shot learning settings.",
"Most existing Zero-Shot Learning (ZSL) methods have the strong bias problem, in which instances of unseen (target) classes tend to be categorized as one of the seen (source) classes. So they yield poor performance after being deployed in the generalized ZSL settings. In this paper, we propose a straightforward yet effective method named Quasi-Fully Supervised Learning (QFSL) to alleviate the bias problem. Our method follows the way of transductive learning, which assumes that both the labeled source images and unlabeled target images are available for training. In the semantic embedding space, the labeled source images are mapped to several fixed points specified by the source categories, and the unlabeled target images are forced to be mapped to other points specified by the target categories. Experiments conducted on AwA2, CUB and SUN datasets demonstrate that our method outperforms existing state-of-the-art approaches by a huge margin of 9.3 24.5 following generalized ZSL settings, and by a large margin of 0.2 16.2 following conventional ZSL settings.",
"",
"Most existing zero-shot learning approaches exploit transfer learning via an intermediate semantic representation shared between an annotated auxiliary dataset and a target dataset with different classes and no annotation. A projection from a low-level feature space to the semantic representation space is learned from the auxiliary dataset and applied without adaptation to the target dataset. In this paper we identify two inherent limitations with these approaches. First, due to having disjoint and potentially unrelated classes, the projection functions learned from the auxiliary dataset domain are biased when applied directly to the target dataset domain. We call this problem the projection domain shift problem and propose a novel framework, transductive multi-view embedding , to solve it. The second limitation is the prototype sparsity problem which refers to the fact that for each target class, only a single prototype is available for zero-shot learning given a semantic representation. To overcome this problem, a novel heterogeneous multi-view hypergraph label propagation method is formulated for zero-shot learning in the transductive embedding space. It effectively exploits the complementary information offered by different semantic representations and takes advantage of the manifold structures of multiple representation spaces in a coherent manner. We demonstrate through extensive experiments that the proposed approach (1) rectifies the projection shift between the auxiliary and target domains, (2) exploits the complementarity of multiple semantic representations, (3) significantly outperforms existing methods for both zero-shot and N-shot recognition on three image and video benchmark datasets, and (4) enables novel cross-view annotation tasks."
]
} |
1907.00330 | 2954705892 | Zero-shot learning, which aims to recognize new categories that are not included in the training set, has gained popularity owing to its potential for real-world applications. Zero-shot learning models rely on learning an embedding space, where both semantic descriptions of classes and visual features of instances can be embedded for nearest neighbor search. Recently, most of the existing works consider the visual space formulated by deep visual features as an ideal choice of the embedding space. However, the discrete distribution of instances in the visual space makes the data structure unremarkable. We argue that optimizing the visual space is crucial as it allows semantic vectors to be embedded into the visual space more effectively. In this work, we propose two strategies to accomplish this purpose. One is the visual prototype based method, which learns a visual prototype for each visual class, so that, in the visual space, a class can be represented by a prototype feature instead of a series of discrete visual features. The other is to optimize the visual feature structure in an intermediate embedding space, and in this method we successfully devise a multilayer perceptron based algorithm that is able to learn the common intermediate embedding space and meanwhile make the visual data structure more distinctive. Through extensive experimental evaluation on four benchmark datasets, we demonstrate that optimizing the visual space is beneficial for zero-shot learning. Besides, the proposed prototype based method achieves new state-of-the-art performance. | Compared with strict ZSL, there is a more realistic and challenging task called generalized zero-shot learning (GZSL), whose targets include both seen and unseen categories. The problem of GZSL was raised at the very beginning of ZSL research @cite_7 , and most of the above-mentioned works evaluate their methods under both ZSL and GZSL settings. In this work, we also take GZSL into account. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2134270519"
],
"abstract": [
"We study the problem of object classification when training and test classes are disjoint, i.e. no training examples of the target classes are available. This setup has hardly been studied in computer vision research, but it is the rule rather than the exception, because the world contains tens of thousands of different object classes and for only a very few of them image, collections have been formed and annotated with suitable class labels. In this paper, we tackle the problem by introducing attribute-based classification. It performs object detection based on a human-specified high-level description of the target objects instead of training images. The description consists of arbitrary semantic attributes, like shape, color or even geographic information. Because such properties transcend the specific learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In order to evaluate our method and to facilitate research in this area, we have assembled a new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images that match the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes. Our experiments show that by using an attribute layer it is indeed possible to build a learning object detection system that does not require any training images of the target classes."
]
} |
1907.00330 | 2954705892 | Zero-shot learning, which aims to recognize new categories that are not included in the training set, has gained popularity owing to its potential for real-world applications. Zero-shot learning models rely on learning an embedding space, where both semantic descriptions of classes and visual features of instances can be embedded for nearest neighbor search. Recently, most of the existing works consider the visual space formulated by deep visual features as an ideal choice of the embedding space. However, the discrete distribution of instances in the visual space makes the data structure unremarkable. We argue that optimizing the visual space is crucial as it allows semantic vectors to be embedded into the visual space more effectively. In this work, we propose two strategies to accomplish this purpose. One is the visual prototype based method, which learns a visual prototype for each visual class, so that, in the visual space, a class can be represented by a prototype feature instead of a series of discrete visual features. The other is to optimize the visual feature structure in an intermediate embedding space, and in this method we successfully devise a multilayer perceptron based algorithm that is able to learn the common intermediate embedding space and meanwhile make the visual data structure more distinctive. Through extensive experimental evaluation on four benchmark datasets, we demonstrate that optimizing the visual space is beneficial for zero-shot learning. Besides, the proposed prototype based method achieves new state-of-the-art performance. | The choice of the embedding space is key to the success of a ZSL model. The semantic space is often chosen as the embedding space in many studies @cite_35 @cite_21 @cite_15 @cite_32 . Owing to the advantage that each class is represented by one semantic vector in the semantic space, taking the semantic space as the embedding space helps produce a better embedded visual data structure. However, on the downside, this strategy significantly shrinks the variance of the data points and thus aggravates the hubness problem @cite_5 @cite_49 . To alleviate this problem, some recent works @cite_49 @cite_30 choose the visual space as the embedding space and map the semantic vectors to the visual space. However, using the visual space as the embedding space faces a new problem. Instance features in the visual space are not distributed in an ideal structure due to the possibility of large inter-class similarities and small intra-class similarities. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_21",
"@cite_32",
"@cite_49",
"@cite_5",
"@cite_15"
],
"mid": [
"2950652153",
"2334493732",
"652269744",
"2123024445",
"1492420801",
"2157133710",
"2044913453"
],
"abstract": [
"Zero-shot learning (ZSL) models rely on learning a joint embedding space where both textual semantic description of object classes and visual representation of object images can be projected to for nearest neighbour search. Despite the success of deep neural networks that learn an end-to-end model between text and images in other vision problems such as image captioning, very few deep ZSL model exists and they show little advantage over ZSL models that utilise deep feature representations but do not learn an end-to-end embedding. In this paper we argue that the key to make deep ZSL models succeed is to choose the right embedding space. Instead of embedding into a semantic space or an intermediate space, we propose to use the visual space as the embedding space. This is because that in this space, the subsequent nearest neighbour search would suffer much less from the hubness problem and thus become more effective. This model design also provides a natural mechanism for multiple semantic modalities (e.g., attributes and sentence descriptions) to be fused and optimised jointly in an end-to-end manner. Extensive experiments on four benchmarks show that our model significantly outperforms the existing models.",
"We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.",
"Zero-shot learning consists in learning how to recognise new concepts by just having a description of them. Many sophisticated approaches have been proposed to address the challenges this problem comprises. In this paper we describe a zero-shot learning approach that can be implemented in just one line of code, yet it is able to outperform state of the art approaches on standard datasets. The approach is based on a more general framework which models the relationships between features, attributes, and classes as a two linear layers network, where the weights of the top layer are not learned but are given by the environment. We further provide a learning bound on the generalisation error of this kind of approaches, by casting them as domain adaptation methods. In experiments carried out on three standard real datasets, we found that our approach is able to perform significantly better than the state of art on all of them, obtaining a ratio of improvement up to 17 .",
"Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18 across thousands of novel labels never seen by the visual model.",
"This paper discusses the effect of hubness in zero-shot learning, when ridge regression is used to find a mapping between the example space to the label space. Contrary to the existing approach, which attempts to find a mapping from the example space to the label space, we show that mapping labels into the example space is desirable to suppress the emergence of hubs in the subsequent nearest neighbor search step. Assuming a simple data model, we prove that the proposed approach indeed reduces hubness. This was verified empirically on the tasks of bilingual lexicon extraction and image labeling: hubness was reduced with both of these tasks and the accuracy was improved accordingly.",
"Different aspects of the curse of dimensionality are known to present serious challenges to various machine-learning methods and tasks. This paper explores a new aspect of the dimensionality curse, referred to as hubness, that affects the distribution of k-occurrences: the number of times a point appears among the k nearest neighbors of other points in a data set. Through theoretical and empirical analysis involving synthetic and real data sets we show that under commonly used assumptions this distribution becomes considerably skewed as dimensionality increases, causing the emergence of hubs, that is, points with very high k-occurrences which effectively represent \"popular\" nearest neighbors. We examine the origins of this phenomenon, showing that it is an inherent property of data distributions in high-dimensional vector space, discuss its interaction with dimensionality reduction, and explore its influence on a wide range of machine-learning tasks directly or indirectly based on measuring distances, belonging to supervised, semi-supervised, and unsupervised learning families.",
"Image classification has advanced significantly in recent years with the availability of large-scale image sets. However, fine-grained classification remains a major challenge due to the annotation cost of large numbers of fine-grained categories. This project shows that compelling classification performance can be achieved on such categories even without labeled training data. Given image and class embeddings, we learn a compatibility function such that matching embeddings are assigned a higher score than mismatching ones; zero-shot classification of an image proceeds by finding the label yielding the highest joint compatibility score. We use state-of-the-art image features and focus on different supervised attributes and unsupervised output embeddings either derived from hierarchies or learned from unlabeled text corpora. We establish a substantially improved state-of-the-art on the Animals with Attributes and Caltech-UCSD Birds datasets. Most encouragingly, we demonstrate that purely unsupervised output embeddings (learned from Wikipedia and improved with finegrained text) achieve compelling results, even outperforming the previous supervised state-of-the-art. By combining different output embeddings, we further improve results."
]
} |
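A minimal NumPy sketch of the hubness effect discussed in the abstracts above — the skewness of the k-occurrence distribution grows with dimensionality, signalling the emergence of hubs. The Gaussian toy data, sample size, and k=10 are illustrative assumptions, not drawn from the cited papers:

```python
import numpy as np

def k_occurrence_skewness(X, k=10):
    """Count how often each point appears among the k nearest neighbours
    of the others; return the skewness of those counts (high skew = hubs)."""
    n = X.shape[0]
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                      # exclude self-matches
    knn = np.argsort(d2, axis=1)[:, :k]               # k nearest neighbours of each point
    counts = np.bincount(knn.ravel(), minlength=n).astype(float)
    m, s = counts.mean(), counts.std()
    return ((counts - m) ** 3).mean() / s ** 3

rng = np.random.default_rng(0)
for dim in (3, 30, 300):                              # skew grows with dimensionality
    print(dim, round(k_occurrence_skewness(rng.normal(size=(500, dim))), 2))
```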
1907.00330 | 2954705892 | Zero-shot learning, which aims to recognize new categories that are not included in the training set, has gained popularity owing to its potential in real-world applications. Zero-shot learning models rely on learning an embedding space, where both semantic descriptions of classes and visual features of instances can be embedded for nearest neighbor search. Recently, most of the existing works consider the visual space formulated by deep visual features as an ideal choice of the embedding space. However, the discrete distribution of instances in the visual space makes the data structure unremarkable. We argue that optimizing the visual space is crucial as it allows semantic vectors to be embedded into the visual space more effectively. In this work, we propose two strategies to accomplish this purpose. One is the visual prototype based method, which learns a visual prototype for each visual class, so that, in the visual space, a class can be represented by a prototype feature instead of a series of discrete visual features. The other is to optimize the visual feature structure in an intermediate embedding space, and in this method we successfully devise a multilayer perceptron framework based algorithm that is able to learn the common intermediate embedding space and meanwhile make the visual data structure more distinctive. Through extensive experimental evaluation on four benchmark datasets, we demonstrate that optimizing the visual space is beneficial for zero-shot learning. Besides, the proposed prototype based method achieves new state-of-the-art performance. | A common intermediate embedding space is also popular in the literature @cite_42 @cite_45 . Besides, some works @cite_61 @cite_9 @cite_27 can realize more than one projection method in the testing process. For instance, in the work @cite_61 , an intermediate aligned space is learned using the class prototypes, and the recognition can be conducted in all three spaces, namely the visual space, the semantic space and the intermediate space. | {
"cite_N": [
"@cite_61",
"@cite_9",
"@cite_42",
"@cite_27",
"@cite_45"
],
"mid": [
"2951478085",
"2611632661",
"2289084343",
"2949533609",
"2405223529"
],
"abstract": [
"Zero-shot learning (ZSL) aims to recognize objects of novel classes without any training samples of specific classes, which is achieved by exploiting the semantic information and auxiliary datasets. Recently most ZSL approaches focus on learning visual-semantic embeddings to transfer knowledge from the auxiliary datasets to the novel classes. However, few works study whether the semantic information is discriminative or not for the recognition task. To tackle such problem, we propose a coupled dictionary learning approach to align the visual-semantic structures using the class prototypes, where the discriminative information lying in the visual space is utilized to improve the less discriminative semantic space. Then, zero-shot recognition can be performed in different spaces by the simple nearest neighbor approach using the learned class prototypes. Extensive experiments on four benchmark datasets show the effectiveness of the proposed approach.",
"Existing zero-shot learning (ZSL) models typically learn a projection function from a feature space to a semantic embedding space (e.g. attribute space). However, such a projection function is only concerned with predicting the training seen class semantic representation (e.g. attribute prediction) or classification. When applied to test data, which in the context of ZSL contains different (unseen) classes without training data, a ZSL model typically suffers from the project domain shift problem. In this work, we present a novel solution to ZSL based on learning a Semantic AutoEncoder (SAE). Taking the encoder-decoder paradigm, an encoder aims to project a visual feature vector into the semantic space as in the existing ZSL models. However, the decoder exerts an additional constraint, that is, the projection code must be able to reconstruct the original visual feature. We show that with this additional reconstruction constraint, the learned projection function from the seen classes is able to generalise better to the new unseen classes. Importantly, the encoder and decoder are linear and symmetric which enable us to develop an extremely efficient learning algorithm. Extensive experiments on six benchmark datasets demonstrate that the proposed SAE outperforms significantly the existing ZSL models with the additional benefit of lower computational cost. Furthermore, when the SAE is applied to supervised clustering problem, it also beats the state-of-the-art.",
"Given semantic descriptions of object classes, zeroshot learning aims to accurately recognize objects of the unseen classes, from which no examples are available at the training stage, by associating them to the seen classes, from which labeled examples are provided. We propose to tackle this problem from the perspective of manifold learning. Our main idea is to align the semantic space that is derived from external information to the model space that concerns itself with recognizing visual features. To this end, we introduce a set of \"phantom\" object classes whose coordinates live in both the semantic space and the model space. Serving as bases in a dictionary, they can be optimized from labeled data such that the synthesized real object classifiers achieve optimal discriminative performance. We demonstrate superior accuracy of our approach over the state of the art on four benchmark datasets for zero-shot learning, including the full ImageNet Fall 2011 dataset with more than 20,000 unseen classes.",
"Zero-shot learning (ZSL) aims to recognize unseen object classes without any training samples, which can be regarded as a form of transfer learning from seen classes to unseen ones. This is made possible by learning a projection between a feature space and a semantic space (e.g. attribute space). Key to ZSL is thus to learn a projection function that is robust against the often large domain gap between the seen and unseen classes. In this paper, we propose a novel ZSL model termed domain-invariant projection learning (DIPL). Our model has two novel components: (1) A domain-invariant feature self-reconstruction task is introduced to the seen unseen class data, resulting in a simple linear formulation that casts ZSL into a min-min optimization problem. Solving the problem is non-trivial, and a novel iterative algorithm is formulated as the solver, with rigorous theoretic algorithm analysis provided. (2) To further align the two domains via the learned projection, shared semantic structure among seen and unseen classes is explored via forming superclasses in the semantic space. Extensive experiments show that our model outperforms the state-of-the-art alternatives by significant margins.",
"Zero-shot recognition (ZSR) deals with the problem of predicting class labels for target domain instances based on source domain side information (e.g. attributes) of unseen classes. We formulate ZSR as a binary prediction problem. Our resulting classifier is class-independent. It takes an arbitrary pair of source and target domain instances as input and predicts whether or not they come from the same class, i.e. whether there is a match. We model the posterior probability of a match since it is a sufficient statistic and propose a latent probabilistic model in this context. We develop a joint discriminative learning framework based on dictionary learning to jointly learn the parameters of our model for both domains, which ultimately leads to our class-independent classifier. Many of the existing embedding methods can be viewed as special cases of our probabilistic model. On ZSR our method shows 4.90 improvement over the state-of-the-art in accuracy averaged across four benchmark datasets. We also adapt ZSR method for zero-shot retrieval and show 22.45 improvement accordingly in mean average precision (mAP)."
]
} |
1907.00330 | 2954705892 | Zero-shot learning, which aims to recognize new categories that are not included in the training set, has gained popularity owing to its potential in real-world applications. Zero-shot learning models rely on learning an embedding space, where both semantic descriptions of classes and visual features of instances can be embedded for nearest neighbor search. Recently, most of the existing works consider the visual space formulated by deep visual features as an ideal choice of the embedding space. However, the discrete distribution of instances in the visual space makes the data structure unremarkable. We argue that optimizing the visual space is crucial as it allows semantic vectors to be embedded into the visual space more effectively. In this work, we propose two strategies to accomplish this purpose. One is the visual prototype based method, which learns a visual prototype for each visual class, so that, in the visual space, a class can be represented by a prototype feature instead of a series of discrete visual features. The other is to optimize the visual feature structure in an intermediate embedding space, and in this method we successfully devise a multilayer perceptron framework based algorithm that is able to learn the common intermediate embedding space and meanwhile make the visual data structure more distinctive. Through extensive experimental evaluation on four benchmark datasets, we demonstrate that optimizing the visual space is beneficial for zero-shot learning. Besides, the proposed prototype based method achieves new state-of-the-art performance. | In those embedding strategies, the intermediate embedding space makes it possible to adjust the data structures of both semantic vectors and visual features. Thus, the intermediate embedding space strategy is adopted in the proposed visual space optimization based method. Considering the intrinsic advantage of using the visual space as the embedding space in alleviating the hubness problem @cite_30 , the intermediate space in this method is closer to the visual space rather than being equivalent to the visual and semantic spaces. Besides, in order to make the visual space an embedding space with a more discriminative structure, the other method proposed in this work learns visual feature prototypes, so that each visual class can be represented by one visual prototype instead of numerous discrete visual features. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2950652153"
],
"abstract": [
"Zero-shot learning (ZSL) models rely on learning a joint embedding space where both textual semantic description of object classes and visual representation of object images can be projected to for nearest neighbour search. Despite the success of deep neural networks that learn an end-to-end model between text and images in other vision problems such as image captioning, very few deep ZSL model exists and they show little advantage over ZSL models that utilise deep feature representations but do not learn an end-to-end embedding. In this paper we argue that the key to make deep ZSL models succeed is to choose the right embedding space. Instead of embedding into a semantic space or an intermediate space, we propose to use the visual space as the embedding space. This is because that in this space, the subsequent nearest neighbour search would suffer much less from the hubness problem and thus become more effective. This model design also provides a natural mechanism for multiple semantic modalities (e.g., attributes and sentence descriptions) to be fused and optimised jointly in an end-to-end manner. Extensive experiments on four benchmarks show that our model significantly outperforms the existing models."
]
} |
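The record above argues for embedding semantic vectors into the visual space before nearest-neighbour search. A minimal sketch of that direction of projection, using a closed-form ridge-regression map — the array names and the linear map are illustrative assumptions; the cited papers learn richer, often deep, mappings:

```python
import numpy as np

def fit_semantic_to_visual(S, X, lam=1.0):
    """Closed-form ridge map W: min_W ||S W - X||^2 + lam ||W||^2,
    taking per-sample class semantic vectors S to visual features X."""
    d = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(d), S.T @ X)

def classify(W, S_unseen, X_test):
    """Embed unseen-class semantics into the visual space, then run
    nearest-neighbour search there (the less hub-prone direction)."""
    P = S_unseen @ W                                    # one visual prototype per class
    d2 = ((X_test[:, None, :] - P[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)                            # index of the nearest class
```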
1907.00330 | 2954705892 | Zero-shot learning, which aims to recognize new categories that are not included in the training set, has gained popularity owing to its potential in real-world applications. Zero-shot learning models rely on learning an embedding space, where both semantic descriptions of classes and visual features of instances can be embedded for nearest neighbor search. Recently, most of the existing works consider the visual space formulated by deep visual features as an ideal choice of the embedding space. However, the discrete distribution of instances in the visual space makes the data structure unremarkable. We argue that optimizing the visual space is crucial as it allows semantic vectors to be embedded into the visual space more effectively. In this work, we propose two strategies to accomplish this purpose. One is the visual prototype based method, which learns a visual prototype for each visual class, so that, in the visual space, a class can be represented by a prototype feature instead of a series of discrete visual features. The other is to optimize the visual feature structure in an intermediate embedding space, and in this method we successfully devise a multilayer perceptron framework based algorithm that is able to learn the common intermediate embedding space and meanwhile make the visual data structure more distinctive. Through extensive experimental evaluation on four benchmark datasets, we demonstrate that optimizing the visual space is beneficial for zero-shot learning. Besides, the proposed prototype based method achieves new state-of-the-art performance. | Since there is a huge gap between visual and semantic spaces, the learned model tends not to discover the intrinsic topological structure when mapping the data into the embedding space. Some works @cite_50 @cite_61 @cite_9 @cite_27 @cite_43 @cite_22 @cite_17 @cite_52 @cite_8 @cite_3 @cite_53 have been conducted to preserve the data structure during projection. Manifold learning is a popular method used to preserve the data structure in ZSL @cite_43 @cite_22 @cite_17 @cite_52 @cite_8 . Taking the visual space as the embedding space, the work @cite_62 introduces an auxiliary latent-embedding space with manifold regularization to reconcile the semantic space with the visual feature space, which can preserve the intrinsic data structural information of both visual and semantic spaces. | {
"cite_N": [
"@cite_61",
"@cite_62",
"@cite_22",
"@cite_8",
"@cite_9",
"@cite_53",
"@cite_52",
"@cite_3",
"@cite_43",
"@cite_27",
"@cite_50",
"@cite_17"
],
"mid": [
"2951478085",
"2762085884",
"2883360306",
"2746797088",
"2611632661",
"2891932073",
"2601051138",
"",
"2605805765",
"2949533609",
"",
"2518962550"
],
"abstract": [
"Zero-shot learning (ZSL) aims to recognize objects of novel classes without any training samples of specific classes, which is achieved by exploiting the semantic information and auxiliary datasets. Recently most ZSL approaches focus on learning visual-semantic embeddings to transfer knowledge from the auxiliary datasets to the novel classes. However, few works study whether the semantic information is discriminative or not for the recognition task. To tackle such problem, we propose a coupled dictionary learning approach to align the visual-semantic structures using the class prototypes, where the discriminative information lying in the visual space is utilized to improve the less discriminative semantic space. Then, zero-shot recognition can be performed in different spaces by the simple nearest neighbor approach using the learned class prototypes. Extensive experiments on four benchmark datasets show the effectiveness of the proposed approach.",
"Sufficient training examples are the fundamental requirement for most of the learning tasks. However, collecting well-labelled training examples is costly. Inspired by Zero-shot Learning (ZSL) that can make use of visual attributes or natural language semantics as an intermediate level clue to associate low-level features with high-level classes, in a novel extension of this idea, we aim to synthesise training data for novel classes using only semantic attributes. Despite the simplicity of this idea, there are several challenges. First, how to prevent the synthesised data from over-fitting to training classes? Second, how to guarantee the synthesised data is discriminative for ZSL tasks? Third, we observe that only a few dimensions of the learnt features gain high variances whereas most of the remaining dimensions are not informative. Thus, the question is how to make the concentrated information diffuse to most of the dimensions of synthesised data. To address the above issues, we propose a novel embedding algorithm named Unseen Visual Data Synthesis (UVDS) that projects semantic features to the high-dimensional visual feature space. Two main techniques are introduced in our proposed algorithm. (1) We introduce a latent embedding space which aims to reconcile the structural difference between the visual and semantic spaces, meanwhile preserve the local structure. (2) We propose a novel Diffusion Regularisation (DR) that explicitly forces the variances to diffuse over most dimensions of the synthesised data. By an orthogonal rotation (more precisely, an orthogonal transformation), DR can remove the redundant correlated attributes and further alleviate the over-fitting problem. On four benchmark datasets, we demonstrate the benefit of using synthesised unseen data for zero-shot learning. Extensive experimental results suggest that our proposed approach significantly outperforms the state-of-the-art methods.",
"In this letter, we propose a novel low-rank-represen-tation (LRR) based manifold-regularization approach for zero-shot learning (ZSL). Most existing regularization-based ZSL approaches perform the alignment between visual feature space and semantic space based on the affinity matrix constructed from the test instances. The affinity matrix plays a significant role in exploiting the manifold structures of visual feature space, hence we propose to use the LRR to guide the affinity-matrix construction by exploring the subspace structures of data. Considering the locality and similarity information among data, we incorporate a Laplacian regularization term to the LRR framework to ensure that the learned affinity matrix can capture the local geometric structures in data. We also explicitly impose the nonnegative sparse constraint on the affinity matrix to facilitate the learning of local manifold structures. Moreover, we use an effective manifold-regularization methodology to learn discriminative semantic representations of test instances, leading to significant improvements in classification performance over the unseen classes. Extensive experiments on three benchmark datasets demonstrate that the proposed approach outperforms the state of the arts.",
"We address zero-shot learning using a new manifold alignment framework based on a localized multi-scale transform on graphs. Our inference approach includes a smoothness criterion for a function mapping nodes on a graph (visual representation) onto a linear space (semantic representation), which we optimize using multi-scale graph wavelets. The robustness of the ensuing scheme allows us to operate with automatically generated semantic annotations, resulting in an algorithm that is entirely free of manual supervision, and yet improves the state-of-the-art as measured on benchmark datasets.",
"Existing zero-shot learning (ZSL) models typically learn a projection function from a feature space to a semantic embedding space (e.g. attribute space). However, such a projection function is only concerned with predicting the training seen class semantic representation (e.g. attribute prediction) or classification. When applied to test data, which in the context of ZSL contains different (unseen) classes without training data, a ZSL model typically suffers from the project domain shift problem. In this work, we present a novel solution to ZSL based on learning a Semantic AutoEncoder (SAE). Taking the encoder-decoder paradigm, an encoder aims to project a visual feature vector into the semantic space as in the existing ZSL models. However, the decoder exerts an additional constraint, that is, the projection code must be able to reconstruct the original visual feature. We show that with this additional reconstruction constraint, the learned projection function from the seen classes is able to generalise better to the new unseen classes. Importantly, the encoder and decoder are linear and symmetric which enable us to develop an extremely efficient learning algorithm. Extensive experiments on six benchmark datasets demonstrate that the proposed SAE outperforms significantly the existing ZSL models with the additional benefit of lower computational cost. Furthermore, when the SAE is applied to supervised clustering problem, it also beats the state-of-the-art.",
"Conventional zero-shot learning approaches often suffer from severe performance degradation in the generalized zero-shot learning (GZSL) scenario, i.e., to recognize test images that are from both seen and unseen classes. This paper studies the Class-level Over-fitting (CO) and empirically shows its effects to GZSL. We then address ZSL as a triple verification problem and propose a unified optimization of regression and compatibility functions, i.e., two main streams of existing ZSL approaches. The complementary losses mutually regularizes the same model to mitigate the CO problem. Furthermore, we implement a deep extension paradigm to linear models and significantly outperform state-of-the-art methods in both GZSL and ZSL scenarios on the four standard benchmarks.",
"Zero-shot recognition aims to accurately recognize objects of unseen classes by using a shared visual-semantic mapping between the image feature space and the semantic embedding space. This mapping is learned on training data of seen classes and is expected to have transfer ability to unseen classes. In this paper, we tackle this problem by exploiting the intrinsic relationship between the semantic space manifold and the transfer ability of visual-semantic mapping. We formalize their connection and cast zero-shot recognition as a joint optimization problem. Motivated by this, we propose a novel framework for zero-shot recognition, which contains dual visual-semantic mapping paths. Our analysis shows this framework can not only apply prior semantic knowledge to infer underlying semantic manifold in the image feature space, but also generate optimized semantic embedding space, which can enhance the transfer ability of the visual-semantic mapping to unseen classes. The proposed method is evaluated for zero-shot recognition on four benchmark datasets, achieving outstanding results.",
"",
"The role of semantics in zero-shot learning is considered. The effectiveness of previous approaches is analyzed according to the form of supervision provided. While some learn semantics independently, others only supervise the semantic subspace explained by training classes. Thus, the former is able to constrain the whole space but lacks the ability to model semantic correlations. The latter addresses this issue but leaves part of the semantic space unsupervised. This complementarity is exploited in a new convolutional neural network (CNN) framework, which proposes the use of semantics as constraints for recognition. Although a CNN trained for classification has no transfer ability, this can be encouraged by learning an hidden semantic layer together with a semantic code for classification. Two forms of semantic constraints are then introduced. The first is a loss-based regularizer that introduces a generalization constraint on each semantic predictor. The second is a codeword regularizer that favors semantic-to-class mappings consistent with prior semantic knowledge while allowing these to be learned from data. Significant improvements over the state-of-the-art are achieved on several datasets.",
"Zero-shot learning (ZSL) aims to recognize unseen object classes without any training samples, which can be regarded as a form of transfer learning from seen classes to unseen ones. This is made possible by learning a projection between a feature space and a semantic space (e.g. attribute space). Key to ZSL is thus to learn a projection function that is robust against the often large domain gap between the seen and unseen classes. In this paper, we propose a novel ZSL model termed domain-invariant projection learning (DIPL). Our model has two novel components: (1) A domain-invariant feature self-reconstruction task is introduced to the seen unseen class data, resulting in a simple linear formulation that casts ZSL into a min-min optimization problem. Solving the problem is non-trivial, and a novel iterative algorithm is formulated as the solver, with rigorous theoretic algorithm analysis provided. (2) To further align the two domains via the learned projection, shared semantic structure among seen and unseen classes is explored via forming superclasses in the semantic space. Extensive experiments show that our model outperforms the state-of-the-art alternatives by significant margins.",
"",
"We develop a novel method for zero shot learning (ZSL) based on test-time adaptation of similarity functions learned using training data. Existing methods exclusively employ source-domain side information for recognizing unseen classes during test time. We show that for batch-mode applications, accuracy can be significantly improved by adapting these predictors to the observed test-time target-domain ensemble. We develop a novel structured prediction method for maximum a posteriori (MAP) estimation, where parameters account for test-time domain shift from what is predicted primarily using source domain information. We propose a Gaussian parameterization for the MAP problem and derive an efficient structure prediction algorithm. Empirically we test our method on four popular benchmark image datasets for ZSL, and show significant improvement over the state-of-the-art, on average, by 11.50 and 30.12 in terms of accuracy for recognition and mean average precision (mAP) for retrieval, respectively."
]
} |
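The "visual prototype" idea running through the 1907.00330 records above admits a very simple sketch. The paper learns its prototypes; a per-class feature mean — an assumption here, not the authors' exact procedure — already collapses each class's scattered features into a single point:

```python
import numpy as np

def class_prototypes(features, labels):
    """Collapse each class's scattered visual features into one prototype."""
    classes = np.unique(labels)
    protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def nearest_prototype(x, classes, protos):
    """Assign a single feature vector to the class of the nearest prototype."""
    return classes[np.argmin(((protos - x) ** 2).sum(axis=1))]
```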
1907.00107 | 2945408776 | Systems that make sequential decisions in the presence of partial feedback on actions often need to strike a balance between maximizing immediate payoffs based on available information, and acquiring new information that may be essential for maximizing future payoffs. This trade-off is captured by the multi-armed bandit (MAB) framework that has been studied and applied for designing sequential experiments when at each time epoch a single observation is collected on the action that was selected at that epoch. However, in many practical settings additional information may become available between decision epochs. We introduce a generalized MAB formulation in which auxiliary information on each arm may appear arbitrarily over time. By obtaining matching lower and upper bounds, we characterize the minimax complexity of this family of MAB problems as a function of the information arrival process, and study how salient characteristics of this process impact policy design and achievable performance. We establish the robustness of a Thompson sampling policy in the presence of additional information, but observe that other policies that are of practical importance do not exhibit such robustness. We therefore introduce a broad adaptive exploration approach for designing policies that, without any prior knowledge on the information arrival process, attain the best performance (in terms of regret rate) that is achievable when the information arrival process is a priori known. Our approach is based on adjusting MAB policies designed to perform well in the absence of auxiliary information by using dynamically customized virtual time indexes to endogenously control the exploration rate of the policy. We demonstrate our approach through appropriately adjusting known MAB policies and establishing improved performance bounds for these policies in the presence of auxiliary information. | An active stream of literature has been studying recommender systems, focusing on modelling and maintaining connections between users and products; see, e.g., , the survey by , and a book by . One key element that impacts the performance of recommender systems is the often limited data that is available. Focusing on the prominent information acquisition aspect of the problem, several studies (to which we referred earlier) have addressed sequential recommendation problems using a MAB framework where in each time period information is obtained only on items that are recommended at that period. Another approach is to identify and leverage additional sources of relevant information. Following that avenue, @cite_5 consider the problem of estimating user-item propensities, and propose a method to incorporate auxiliary data such as browsing and search histories to enhance the predictive power of recommender systems. While their work concerns the impact of auxiliary information in an offline prediction context, our paper focuses on the impact of auxiliary information streams on the design, information acquisition, and appropriate exploration rate in a sequential experimentation framework. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2944320443"
],
"abstract": [
"Product and content personalization is now ubiquitous in e-commerce. There are typically not enough available transactional data for this task. As such, companies today seek to use a variety of inf..."
]
} |
1907.00107 | 2945408776 | Systems that make sequential decisions in the presence of partial feedback on actions often need to strike a balance between maximizing immediate payoffs based on available information, and acquiring new information that may be essential for maximizing future payoffs. This trade-off is captured by the multi-armed bandit (MAB) framework that has been studied and applied for designing sequential experiments when at each time epoch a single observation is collected on the action that was selected at that epoch. However, in many practical settings additional information may become available between decision epochs. We introduce a generalized MAB formulation in which auxiliary information on each arm may appear arbitrarily over time. By obtaining matching lower and upper bounds, we characterize the minimax complexity of this family of MAB problems as a function of the information arrival process, and study how salient characteristics of this process impact policy design and achievable performance. We establish the robustness of a Thompson sampling policy in the presence of additional information, but observe that other policies that are of practical importance do not exhibit such robustness. We therefore introduce a broad adaptive exploration approach for designing policies that, without any prior knowledge on the information arrival process, attain the best performance (in terms of regret rate) that is achievable when the information arrival process is a priori known. Our approach is based on adjusting MAB policies designed to perform well in the absence of auxiliary information by using dynamically customized virtual time indexes to endogenously control the exploration rate of the policy. We demonstrate our approach through appropriately adjusting known MAB policies and establishing improved performance bounds for these policies in the presence of auxiliary information. | Since its inception, the MAB framework has been analyzed under different assumptions for studying a variety of applications including clinical trials , strategic pricing , assortment selection , online auctions , online advertising , and product recommendations , among others. For a comprehensive overview of MAB formulations we refer the readers to the monographs by @cite_34 and @cite_12 for Bayesian dynamic programming formulations, as well as to @cite_30 and @cite_19 that cover the machine learning literature and the so-called adversarial setting. A sharp regret characterization for the more traditional framework (random rewards realized from stationary distributions), often referred to as the stochastic MAB problem, was first established by @cite_32 , followed by analysis of policies designed for this framework, such as @math -greedy, UCB1, and Thompson sampling among others; see, e.g., @cite_43 and @cite_25 . | {
"cite_N": [
"@cite_30",
"@cite_32",
"@cite_19",
"@cite_43",
"@cite_34",
"@cite_25",
"@cite_12"
],
"mid": [
"1570963478",
"2009551863",
"2049934117",
"2168405694",
"",
"2141645258",
"2317700292"
],
"abstract": [
"1. Introduction 2. Prediction with expert advice 3. Tight bounds for specific losses 4. Randomized prediction 5. Efficient forecasters for large classes of experts 6. Prediction with limited feedback 7. Prediction and playing games 8. Absolute loss 9. Logarithmic loss 10. Sequential investment 11. Linear pattern recognition 12. Linear classification 13. Appendix.",
"",
"1: Introduction 2: Stochastic bandits: fundamental results 3: Adversarial bandits: fundamental results 4: Contextual Bandits 5: Linear bandits 6: Nonlinear bandits 7: Variants. Acknowledgements. References",
"Reinforcement learning policies face the exploration versus exploitation dilemma, i.e. the search for a balance between exploring the environment to find profitable actions while taking the empirically best action as often as possible. A popular measure of a policy's success in addressing this dilemma is the regret, that is the loss due to the fact that the globally optimal policy is not followed all the times. One of the simplest examples of the exploration exploitation dilemma is the multi-armed bandit problem. Lai and Robbins were the first ones to show that the regret for this problem has to grow at least logarithmically in the number of plays. Since then, policies which asymptotically achieve this regret have been devised by Lai and Robbins and many others. In this work we show that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support.",
"",
"Thompson Sampling is one of the oldest heuristics for multi-armed bandit problems. It is a randomized algorithm based on Bayesian ideas, and has recently generated significant interest after several studies demonstrated it to have comparable or better empirical performance compared to the state of the art methods. In this paper, we provide a novel regret analysis for Thompson Sampling that proves the first near-optimal problem-independent bound of O( √ NT lnT ) on the expected regret of this algorithm. Our novel martingale-based analysis techniques are conceptually simple, and easily extend to distributions other than the Beta distribution. For the version of Thompson Sampling that uses Gaussian priors, we prove a problem-independent bound of O( √ NT lnN) on the expected regret, and demonstrate the optimality of this bound by providing a matching lower bound. This lower bound of Ω( √ NT lnN) is the first lower bound on the performance of a natural version of Thompson Sampling that is away from the general lower bound of O( √ NT ) for the multi-armed bandit problem. Our near-optimal problem-independent bounds for Thompson Sampling solve a COLT 2012 open problem of Chapelle and Li. Additionally, our techniques simultaneously provide the optimal problem-dependent bound of (1+ ǫ) ∑ i lnT d(μi,μ1) +O(Nǫ2 ) on the expected regret. The optimal problem-dependent regret bound for this problem was first proven recently by [2012b]. Appearing in Proceedings of the 16 International Conference on Artificial Intelligence and Statistics (AISTATS) 2013, Scottsdale, AZ, USA. Volume 31 of JMLR: W&CP 31. Copyright 2013 by the authors.",
"3. Multi‐armed Bandit Allocation Indices. By J. C. Gittins. ISBN 0 471 92059 2. Wiley, Chichester, 1989. xii + 252pp. £29.95."
]
} |
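For reference, the UCB1 policy named in the record above admits a very short implementation. The Bernoulli environment below is a made-up example for illustration, not a setup from the paper:

```python
import math, random

def ucb1(pull, n_arms, horizon):
    """UCB1: play each arm once, then pull the arm maximising
    empirical mean + sqrt(2 ln t / n_i)."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                      # initial round-robin over arms
        else:
            arm = max(range(n_arms),
                      key=lambda i: sums[i] / counts[i]
                                    + math.sqrt(2 * math.log(t) / counts[i]))
        reward = pull(arm)
        counts[arm] += 1
        sums[arm] += reward
    return counts                            # pull counts concentrate on the best arm

# Hypothetical Bernoulli bandit with means 0.3, 0.5, 0.7.
means = [0.3, 0.5, 0.7]
print(ucb1(lambda a: float(random.random() < means[a]), n_arms=3, horizon=5000))
```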
1907.00107 | 2945408776 | Systems that make sequential decisions in the presence of partial feedback on actions often need to strike a balance between maximizing immediate payoffs based on available information, and acquiring new information that may be essential for maximizing future payoffs. This trade-off is captured by the multi-armed bandit (MAB) framework that has been studied and applied for designing sequential experiments when at each time epoch a single observation is collected on the action that was selected at that epoch. However, in many practical settings additional information may become available between decision epochs. We introduce a generalized MAB formulation in which auxiliary information on each arm may appear arbitrarily over time. By obtaining matching lower and upper bounds, we characterize the minimax complexity of this family of MAB problems as a function of the information arrival process, and study how salient characteristics of this process impact policy design and achievable performance. We establish the robustness of a Thompson sampling policy in the presence of additional information, but observe that other policies that are of practical importance do not exhibit such robustness. We therefore introduce a broad adaptive exploration approach for designing policies that, without any prior knowledge on the information arrival process, attain the best performance (in terms of regret rate) that is achievable when the information arrival process is a priori known. Our approach is based on adjusting MAB policies designed to perform well in the absence of auxiliary information by using dynamically customized virtual time indexes to endogenously control the exploration rate of the policy. We demonstrate our approach through appropriately adjusting known MAB policies and establishing improved performance bounds for these policies in the presence of auxiliary information. | On the other hand, few papers have studied cases where exploration is not only essential but should be enhanced in order to maintain optimality. For example, @cite_29 introduce a partial monitoring setting where after playing an arm the agent does not get to see the incurred loss but only observes feedback that carries some information about it, and show that such a feedback structure requires higher exploration rates. @cite_38 consider a general framework where the reward distribution may change over time according to a budget of variation, and characterize the manner in which optimal exploration rates increase as a function of said budget. In addition, @cite_2 consider a platform in which the preferences of arriving users may depend on the experience of previous users. They show that in this setting classical MAB policies may under-explore, and introduce a balanced-exploration approach that results in optimal performance. | {
"cite_N": [
"@cite_38",
"@cite_29",
"@cite_2"
],
"mid": [
"2962821829",
"1964631708",
"2786485478"
],
"abstract": [
"In a multi-armed bandit (MAB) problem a gambler needs to choose at each round of play one of K arms, each characterized by an unknown reward distribution. Reward realizations are only observed when an arm is selected, and the gambler's objective is to maximize his cumulative expected earnings over some given horizon of play T. To do this, the gambler needs to acquire information about arms (exploration) while simultaneously optimizing immediate rewards (exploitation); the price paid due to this trade off is often referred to as the regret, and the main question is how small can this price be as a function of the horizon length T. This problem has been studied extensively when the reward distributions do not change over time; an assumption that supports a sharp characterization of the regret, yet is often violated in practical settings. In this paper, we focus on a MAB formulation which allows for a broad range of temporal uncertainties in the rewards, while still maintaining mathematical tractability. We fully characterize the (regret) complexity of this class of MAB problems by establishing a direct link between the extent of allowable reward \"variation\" and the minimal achievable regret. Our analysis draws some connections between two rather disparate strands of literature: the adversarial and the stochastic MAB frameworks.",
"We consider repeated games in which the player, instead of observing the action chosen by the opponent in each game round, receives a feedback generated by the combined choice of the two players. We study Hannan-consistent players for these games, that is, randomized playing strategies whose per-round regret vanishes with probability one as the number n of game rounds goes to infinity. We prove a general lower bound of Ω(n-1 3) for the convergence rate of the regret, and exhibit a specific strategy that attains this rate for any game for which a Hannan-consistent player exists.",
"Many platforms are characterized by the fact that future user arrivals are likely to have preferences similar to users who were satisfied in the past. In other words, arrivals exhibit positive externalities . We study multiarmed bandit (MAB) problems with positive externalities. Our model has a finite number of arms and users are distinguished by the arm(s) they prefer. We model positive externalities by assuming that the preferred arms of future arrivals are self-reinforcing based on the experiences of past users. We show that classical algorithms such as UCB which are optimal in the classical MAB setting may even exhibit linear regret in the context of positive externalities. We provide an algorithm which achieves optimal regret and show that such optimal regret exhibits substantially different structure from that observed in the standard MAB setting."
]
} |
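Since the abstract above emphasizes the robustness of Thompson sampling when auxiliary observations arrive between decision epochs, here is a minimal Beta-Bernoulli sketch. Folding auxiliary arm observations in through the same posterior update is a plausible reading of the setup, not the paper's exact algorithm:

```python
import random

class BetaBernoulliTS:
    """Thompson sampling with Beta(1, 1) priors; auxiliary observations
    arriving between decision epochs can enter via the same update."""
    def __init__(self, n_arms):
        self.a = [1] * n_arms                # posterior successes + 1
        self.b = [1] * n_arms                # posterior failures + 1

    def choose(self):
        draws = [random.betavariate(a, b) for a, b in zip(self.a, self.b)]
        return max(range(len(draws)), key=draws.__getitem__)

    def update(self, arm, reward):           # reward in {0, 1}
        self.a[arm] += reward
        self.b[arm] += 1 - reward

policy = BetaBernoulliTS(n_arms=3)
policy.update(arm=1, reward=1)               # e.g. an auxiliary observation on arm 1
print(policy.choose())
```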
1907.00068 | 2954607598 | Image registration is a fundamental step in medical image analysis. Ideally, the transformation that registers one image to another should be a diffeomorphism that is both invertible and smooth. Traditional methods like geodesic shooting approach the problem via differential geometry, with theoretical guarantees that the resulting transformation will be smooth and invertible. Most previous research using unsupervised deep neural networks for registration has used a local smoothness constraint (typically, a spatial variation loss) to address the smoothness issue. These networks usually produce non-invertible transformations with "folding" in multiple voxel locations, indicated by a negative determinant of the Jacobian matrix of the transformation. While using a loss function that specifically penalizes the folding is a straightforward solution, this usually requires carefully tuning the regularization strength, especially when there are also other losses. In this paper we address this problem from a different angle, by investigating possible training mechanisms that will help the network avoid negative Jacobians and produce smoother deformations. We contribute two independent ideas in this direction. Both ideas greatly reduce the number of folding locations in the predicted deformation, without making changes to the hyperparameters or the architecture used in the existing baseline registration network. | To the authors' best knowledge at the time of completing this paper, @cite_12 and @cite_6 are the most relevant works. @cite_12 designed an inverse-consistent network and argued for adding an "anti-folding constraint" to prevent folding in the predicted transformation. Different from that work, we did not create a new loss in this paper, but focus on discovering possible training mechanisms that will help regularize the network. The alternating training with a refinement model is similar to @cite_6 , but our purpose is to regularize the deformation in image transformation instead of image generation. The code for the paper is released at https://github.com/dykuang/Medical-image-registration . | {
"cite_N": [
"@cite_6",
"@cite_12"
],
"mid": [
"2099471712",
"2890374903"
],
"abstract": [
"We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.",
"Deformable image registration is a fundamental task in medical image analysis, aiming to establish a dense and non-linear correspondence between a pair of images. Previous deep-learning studies usually employ supervised neural networks to directly learn the spatial transformation from one image to another, requiring task-specific ground-truth registration for model training. Due to the difficulty in collecting precise ground-truth registration, implementation of these supervised methods is practically challenging. Although several unsupervised networks have been recently developed, these methods usually ignore the inherent inverse-consistent property (essential for diffeomorphic mapping) of transformations between a pair of images. Also, existing approaches usually encourage the to-be-estimated transformation to be locally smooth via a smoothness constraint only, which could not completely avoid folding in the resulting transformation. To this end, we propose an Inverse-Consistent deep Network (ICNet) for unsupervised deformable image registration. Specifically, we develop an inverse-consistent constraint to encourage that a pair of images are symmetrically deformed toward one another, until both warped images are matched. Besides using the conventional smoothness constraint, we also propose an anti-folding constraint to further avoid folding in the transformation. The proposed method does not require any supervision information, while encouraging the diffeomoprhic property of the transformation via the proposed inverse-consistent and anti-folding constraints. We evaluate our method on T1-weighted brain magnetic resonance imaging (MRI) scans for tissue segmentation and anatomical landmark detection, with results demonstrating the superior performance of our ICNet over several state-of-the-art approaches for deformable image registration. Our code will be made publicly available."
]
} |
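The folding criterion used throughout the record above — a non-positive determinant of the Jacobian of the transformation — can be checked numerically. This sketch assumes a dense 3-D displacement field on a unit-spaced voxel grid and uses finite differences, which is an approximation:

```python
import numpy as np

def folding_fraction(disp):
    """disp: displacement field of shape (3, D, H, W) in voxel units.
    The transformation is phi(x) = x + disp(x); a voxel 'folds' where
    det(d phi / d x) <= 0. np.gradient gives the finite-difference Jacobian."""
    rows = [np.stack(np.gradient(disp[i]), axis=-1) for i in range(3)]
    jac = np.stack(rows, axis=-2) + np.eye(3)   # (D, H, W, 3, 3): I + d disp / d x
    det = np.linalg.det(jac)
    return float((det <= 0).mean())

# Identity transformation: zero displacement, so no folding anywhere.
print(folding_fraction(np.zeros((3, 8, 8, 8))))   # -> 0.0
```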
1907.00058 | 2954918297 | Quantification of anatomical shape changes still relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of heart conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled hearts when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set. More importantly, it enabled the visualisation in three-dimensions of the most discriminative anatomical features between the two conditions. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging. | An autoencoder is a non-linear dimensionality reduction technique which learns a compact feature representation of the input data by encoding it into and decoding it from a low-dimensional feature vector. Deep autoencoder-based architectures have achieved wide success in computer vision applications as an extension of PCA-based approaches, including feature learning of 3D objects @cite_36 . Autoencoder-based models have also been used to learn compact representations of medical images @cite_32 . Relevant to this work, Oktay @cite_25 showed how autoencoder-derived features of LV segmentations can be successfully used to constrain deep networks for medical image analysis tasks, and how these features outperform PCA features in the classification of healthy subjects versus dilated cardiomyopathy and HCM patients. | {
"cite_N": [
"@cite_36",
"@cite_25",
"@cite_32"
],
"mid": [
"1955462214",
"2620296437",
"2592929672"
],
"abstract": [
"Shape descriptor is a concise yet informative representation that provides a 3D object with an identification as a member of some category. We have developed a concise deep shape descriptor to address challenging issues from ever-growing 3D datasets in areas as diverse as engineering, medicine, and biology. Specifically, in this paper, we developed novel techniques to extract concise but geometrically informative shape descriptor and new methods of defining Eigen-shape descriptor and Fisher-shape descriptor to guide the training of a deep neural network. Our deep shape descriptor tends to maximize the inter-class margin while minimize the intra-class variance. Our new shape descriptor addresses the challenges posed by the high complexity of 3D model and data representation, and the structural variations and noise present in 3D models. Experimental results on 3D shape retrieval demonstrate the superior performance of deep shape descriptor over other state-of-the-art techniques in handling noise, incompleteness and structural variations.",
"Incorporation of prior knowledge about organ shape and location is key to improve performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning-based techniques. However, in most recent and promising techniques such as CNN-based segmentation it is not obvious how to incorporate such prior knowledge. State-of-the-art methods operate as pixel-wise classifiers where the training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global anatomical properties of the underlying anatomy ( e.g. shape, label structure) via learnt non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks ( e.g. image enhancement, segmentation) and improve the prediction accuracy of the state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac data sets and public benchmarks. In addition, we demonstrate how the learnt deep models of 3-D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies.",
"Abstract Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskelet al. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research."
]
} |
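As a reference point for the autoencoder features discussed in the record above, a minimal PyTorch sketch — the layer sizes and the flattened-input setup are illustrative assumptions; the cited works use deeper, often convolutional, architectures:

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Encode a flattened segmentation into a low-dimensional code and
    decode it back; trained end-to-end with a reconstruction loss."""
    def __init__(self, n_in, n_code=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 256), nn.ReLU(),
                                 nn.Linear(256, n_code))
        self.dec = nn.Sequential(nn.Linear(n_code, 256), nn.ReLU(),
                                 nn.Linear(256, n_in), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

# One hypothetical training step on random binary masks.
model = TinyAutoencoder(n_in=32 * 32)
x = (torch.rand(8, 32 * 32) > 0.5).float()
loss = nn.functional.binary_cross_entropy(model(x), x)
loss.backward()
```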
1907.00058 | 2954918297 | Quantification of anatomical shape changes still relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of heart conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled hearts when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set. More importantly, it enabled the visualisation in three-dimensions of the most discriminative anatomical features between the two conditions. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging. | Deep generative models have demonstrated great performance in learning data distributions over a low-dimensional set of latent variables and in generating new unseen samples, which is not possible with standard autoencoder models. Among this class of models, variational autoencoder (VAE) models @cite_27 learn a continuous latent representation by enforcing it to follow a predefined distribution. VAEs have been successful at learning the latent space representing deforming 3D shapes for a variety of applications, including shape space embedding and generation, outperforming state-of-the-art methods @cite_20 , @cite_19 . In the medical imaging domain, VAEs have been exploited to approximate the distribution and likelihood of previously unseen MR images @cite_8 , to learn a low-dimensional manifold of 3D fetal skull segmentations @cite_6 and to learn a low-dimensional probabilistic deformation model for cardiac image registration @cite_33 . | {
"cite_N": [
"@cite_33",
"@cite_8",
"@cite_6",
"@cite_19",
"@cite_27",
"@cite_20"
],
"mid": [
"2904555209",
"2785239769",
"2889648615",
"2962928839",
"",
"2728326942"
],
"abstract": [
"We propose to learn a low-dimensional probabilistic deformation model from data which can be used for the registration and the analysis of deformations. The latent variable model maps similar deformations close to each other in an encoding space. It enables to compare deformations, to generate normal or pathological deformations for any new image, or to transport deformations from one image pair to any other image. Our unsupervised method is based on the variational inference. In particular, we use a conditional variational autoencoder network and constrain transformations to be symmetric and diffeomorphic by applying a differentiable exponentiation layer with a symmetric loss function. We also present a formulation that includes spatial regularization such as the diffusion-based filters. In addition, our framework provides multi-scale velocity field estimations. We evaluated our method on 3-D intra-subject registration using 334 cardiac cine-MRIs. On this dataset, our method showed the state-of-the-art performance with a mean DICE score of 81.2 and a mean Hausdorff distance of 7.3 mm using 32 latent dimensions compared to three state-of-the-art methods while also demonstrating more regular deformation fields. The average time per registration was 0.32 s. Besides, we visualized the learned latent space and showed that the encoded deformations can be used to transport deformations and to cluster diseases with a classification accuracy of 83 after applying a linear projection.",
"Algorithms for magnetic resonance (MR) image reconstruction from undersampled measurements exploit prior information to compensate for missing k-space data. Deep learning (DL) provides a powerful framework for extracting such information from existing image datasets, through learning, and then using it for reconstruction. Leveraging this, recent methods employed DL to learn mappings from undersampled to fully sampled images using paired datasets, including undersampled and corresponding fully sampled images, integrating prior knowledge implicitly. In this letter, we propose an alternative approach that learns the probability distribution of fully sampled MR images using unsupervised DL, specifically variational autoencoders (VAE), and use this as an explicit prior term in reconstruction, completely decoupling the encoding operation from the prior. The resulting reconstruction algorithm enjoys a powerful image prior to compensate for missing k-space data without requiring paired datasets for training nor being prone to associated sensitivities, such as deviations in undersampling patterns used in training and test time or coil settings. We evaluated the proposed method with T1 weighted images from a publicly available dataset, multi-coil complex images acquired from healthy volunteers ( @math ), and images with white matter lesions. The proposed algorithm, using the VAE prior, produced visually high quality reconstructions and achieved low RMSE values, outperforming most of the alternative methods on the same dataset. On multi-coil complex data, the algorithm yielded accurate magnitude and phase reconstruction results. In the experiments on images with white matter lesions, the method faithfully reconstructed the lesions.",
"2D ultrasound (US) is the primary imaging modality in antenatal healthcare. Despite the limitations of traditional 2D biometrics to characterize the true 3D anatomy of the fetus, the adoption of 3DUS is still very limited. This is particularly significant in developing countries and remote areas, due to the lack of experienced sonographers and the limited access to 3D technology. In this paper, we present a new deep conditional generative network for the 3D reconstruction of the fet al skull from 2DUS standard planes of the head routinely acquired during the fet al screening process. Based on the generative properties of conditional variational autoencoders (CVAE), our reconstruction architecture (REC-CVAE) directly integrates the three US standard planes as conditional variables to generate a unified latent space of the skull. Additionally, we propose HiREC-CVAE, a hierarchical generative network based on the different clinical relevance of each predictive view. The hierarchical structure of HiREC-CVAE allows the network to learn a sequence of nested latent spaces, providing superior predictive capabilities even in the absence of some of the 2DUS scans. The performance of the proposed architectures was evaluated on a dataset of 72 cases, showing accurate reconstruction capabilities from standard non-registered 2DUS images.",
"3D geometric contents are becoming increasingly popular. In this paper, we study the problem of analyzing deforming 3D meshes using deep neural networks. Deforming 3D meshes are flexible to represent 3D animation sequences as well as collections of objects of the same category, allowing diverse shapes with large-scale non-linear deformations. We propose a novel framework which we call mesh variational autoencoders (mesh VAE), to explore the probabilistic latent space of 3D surfaces. The framework is easy to train, and requires very few training examples. We also propose an extended model which allows flexibly adjusting the significance of different latent variables by altering the prior distribution. Extensive experiments demonstrate that our general framework is able to learn a reasonable representation for a collection of deformable shapes, and produce competitive results for a variety of applications, including shape generation, shape interpolation, shape space embedding and shape exploration, outperforming state-of-the-art methods.",
"",
"We introduce a generative model of part-segmented 3D objects: the shape variational auto-encoder (ShapeVAE). The ShapeVAE describes a joint distribution over the existence of object parts, the locations of a dense set of surface points, and over surface normals associated with these points. Our model makes use of a deep encoder-decoder architecture that leverages the part-decomposability of 3D objects to embed high-dimensional shape representations and sample novel instances. Given an input collection of part-segmented objects with dense point correspondences the ShapeVAE is capable of synthesizing novel, realistic shapes, and by performing conditional inference enables imputation of missing parts or surface normals. In addition, by generating both points and surface normals, our model allows for the use of powerful surface-reconstruction methods for mesh synthesis. We provide a quantitative evaluation of the ShapeVAE on shape-completion and test-set log-likelihood tasks and demonstrate that the model performs favourably against strong baselines. We demonstrate qualitatively that the ShapeVAE produces plausible shape samples, and that it captures a semantically meaningful shape-embedding. In addition we show that the ShapeVAE facilitates mesh reconstruction by sampling consistent surface normals."
]
} |
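The VAE mechanism summarised in the record above — an encoder that predicts a latent Gaussian, which a KL term pulls towards a predefined prior — is compact enough to sketch. The following is a minimal illustration of the standard VAE objective of @cite_27, not of the specific models in the cited papers; the dimensions `x_dim` and `z_dim` are arbitrary toy values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    # The encoder predicts a diagonal Gaussian q(z|x); the KL term below
    # enforces the latent representation to follow the prior N(0, I).
    def __init__(self, x_dim=64, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(x_dim, 128)
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation trick
        return self.dec(z), mu, logvar

def elbo_loss(x_hat, x, mu, logvar):
    # Negative ELBO: reconstruction error plus KL(q(z|x) || N(0, I)).
    rec = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```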
1907.00058 | 2954918297 | Quantification of anatomical shape changes still relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of heart conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled hearts when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set. More importantly, it enabled the visualisation in three-dimensions of the most discriminative anatomical features between the two conditions. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging. | Hierarchical VAEs are a class of generative models that decompose the input data into a hierarchical representation @cite_22 , @cite_13 . Although highly flexible, these models have been traditionally difficult to optimise, especially in the training of their higher levels, as often their lowest layer alone can contain enough information to reconstruct the data distribution, and the other levels are ignored. In this work, we focus on the ladder VAE (LVAE) framework @cite_13 , which was shown to be capable of learning a deeper and more distributed latent representation by combining the approximate likelihood and the data-driven prior latent distribution at each level of the generative model. | {
"cite_N": [
"@cite_13",
"@cite_22"
],
"mid": [
"2963135265",
"1909320841"
],
"abstract": [
"Variational autoencoders are powerful models for unsupervised learning. However deep models with several layers of dependent stochastic variables are difficult to train which limits the improvements obtained using these highly expressive models. We propose a new inference model, the Ladder Variational Autoencoder, that recursively corrects the generative distribution by a data dependent approximate likelihood in a process resembling the recently proposed Ladder Network. We show that this model provides state of the art predictive log-likelihood and tighter log-likelihood lower bound compared to the purely bottom-up inference in layered Variational Autoencoders and other generative models. We provide a detailed analysis of the learned hierarchical latent representation and show that our new inference model is qualitatively different and utilizes a deeper more distributed hierarchy of latent variables. Finally, we observe that batch-normalization and deterministic warm-up (gradually turning on the KL-term) are crucial for training variational models with many stochastic layers.",
"We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent approximate posterior distributions, and that acts as a stochastic encoder of the data. We develop stochastic back-propagation -- rules for back-propagation through stochastic variables -- and use this to develop an algorithm that allows for joint optimisation of the parameters of both the generative and recognition model. We demonstrate on several real-world data sets that the model generates realistic samples, provides accurate imputations of missing data and is a useful tool for high-dimensional data visualisation."
]
} |
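The phrase "combining the approximate likelihood and the data-driven prior latent distribution at each level" has a concrete form in the LVAE paper (@cite_13): a precision-weighted merge of two Gaussians at each ladder level. A short sketch of that merge, assuming diagonal Gaussians parameterised by mean and log-variance:

```python
import torch

def precision_weighted_merge(mu_q, logvar_q, mu_p, logvar_p):
    # Combine the bottom-up (data-driven, approximate-likelihood) estimate
    # with the top-down prior at one level of the ladder; the precisions
    # (inverse variances) act as the combination weights.
    prec_q = torch.exp(-logvar_q)
    prec_p = torch.exp(-logvar_p)
    var = 1.0 / (prec_q + prec_p)
    mu = (mu_q * prec_q + mu_p * prec_p) * var
    return mu, torch.log(var)
```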
1907.00058 | 2954918297 | Quantification of anatomical shape changes still relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of heart conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep learning model for shape analysis. In particular, we exploit deep generative networks to model a population of anatomical segmentations through a hierarchy of conditional latent variables. At the highest level of this hierarchy, a two-dimensional latent space is simultaneously optimised to discriminate distinct clinical conditions, enabling the direct visualisation of the classification space. Moreover, the anatomical variability encoded by this discriminative latent space can be visualised in the segmentation space thanks to the generative properties of the model, making the classification task transparent. This approach yielded high accuracy in the categorisation of healthy and remodelled hearts when tested on unseen segmentations from our own multi-centre dataset as well as in an external validation set. More importantly, it enabled the visualisation in three-dimensions of the most discriminative anatomical features between the two conditions. The proposed approach scales effectively to large populations, facilitating high-throughput analysis of normal anatomy and pathology in large-scale studies of volumetric imaging. | * Contributions In this paper, we aim to extend our preliminary work @cite_28 on classification and visualisation of discriminative features by employing LVAEs, with the aim of assisting clinicians in quantifying the morphological changes related to disease, and in order to develop medical image classifiers that can visualise the morphological features driving the classification result. The main contributions of this work can be described as follows: | {
"cite_N": [
"@cite_28"
],
"mid": [
"2886612741"
],
"abstract": [
"Alterations in the geometry and function of the heart define well-established causes of cardiovascular disease. However, current approaches to the diagnosis of cardiovascular diseases often rely on subjective human assessment as well as manual analysis of medical images. Both factors limit the sensitivity in quantifying complex structural and functional phenotypes. Deep learning approaches have recently achieved success for tasks such as classification or segmentation of medical images, but lack interpretability in the feature extraction and decision processes, limiting their value in clinical diagnosis. In this work, we propose a 3D convolutional generative model for automatic classification of images from patients with cardiac diseases associated with structural remodeling. The model leverages interpretable task-specific anatomic patterns learned from 3D segmentations. It further allows to visualise and quantify the learned pathology-specific remodeling patterns in the original input space of the images. This approach yields high accuracy in the categorization of healthy and hypertrophic cardiomyopathy subjects when tested on unseen MR images from our own multi-centre dataset (100 ) as well on the ACDC MICCAI 2017 dataset (90 ). We believe that the proposed deep learning approach is a promising step towards the development of interpretable classifiers for the medical imaging domain, which may help clinicians to improve diagnostic accuracy and enhance patient risk-stratification."
]
} |
1907.00329 | 2955459475 | The current state of cancer therapeutics has been moving away from one-size-fits-all cytotoxic chemotherapy, and towards a more individualized and specific approach involving the targeting of each tumor's genetic vulnerabilities. Different tumors, even of the same type, may be more reliant on certain cellular pathways more than others. With modern advancements in our understanding of cancer genome sequencing, these pathways can be discovered. Investigating each of the millions of possible small molecule inhibitors for each kinase in vitro, however, would be extremely expensive and time consuming. This project focuses on predicting the inhibition activity of small molecules targeting 8 different kinases using multiple deep learning models. We trained fingerprint-based MLPs and simplified molecular-input line-entry specification (SMILES)-based recurrent neural networks (RNNs) and molecular graph convolutional networks (GCNs) to accurately predict inhibitory activity targeting these 8 kinases. | Novel approaches to oncology drug discovery have been largely spurred by the new understanding of how abnormal protein kinase activity has been linked to the development and onset of a variety of diseases. Different tumors, even of the same type, may be more reliant on certain cellular pathways than others, which can be discovered through cancer genome sequencing. Previous research has been done in order to characterize the structural basis of kinase inhibitor selectivity, and to identify potential kinases that are involved in cancer mechanisms, but there is a lack of research done to predict potential inhibitors. Once kinases have been identified, the IC-50 values, SMILES codes, connectivity, and other features can be utilized in conjunction with deep learning models to predict the activity of small molecule inhibitors @cite_7 . | {
"cite_N": [
"@cite_7"
],
"mid": [
"2077804275"
],
"abstract": [
"Inhibition of kinase activity has received enormous interest as a therapeutic strategy for cancer. This Review discusses the current approaches to develop and characterize new inhibitors."
]
} |
1907.00329 | 2955459475 | The current state of cancer therapeutics has been moving away from one-size-fits-all cytotoxic chemotherapy, and towards a more individualized and specific approach involving the targeting of each tumor's genetic vulnerabilities. Different tumors, even of the same type, may be more reliant on certain cellular pathways more than others. With modern advancements in our understanding of cancer genome sequencing, these pathways can be discovered. Investigating each of the millions of possible small molecule inhibitors for each kinase in vitro, however, would be extremely expensive and time consuming. This project focuses on predicting the inhibition activity of small molecules targeting 8 different kinases using multiple deep learning models. We trained fingerprint-based MLPs and simplified molecular-input line-entry specification (SMILES)-based recurrent neural networks (RNNs) and molecular graph convolutional networks (GCNs) to accurately predict inhibitory activity targeting these 8 kinases. | More recently, research groups have been interested in learning how to represent the molecular structure information in a more robust way. While common fingerprinting methods are generally successful at representing small molecules well, they are non-differentiable and cannot adapt to emphasize molecular structure aspects differently based on data given. The Pande Group at Stanford University has been focusing on a new representation of small molecules as undirected graphs of atoms. Graph convolutional neural networks aim to featurize molecules in a differentiable way, so that the way the molecule is represented can change based on the data and task given. Their work highlights the flexibility of the graph convolution architecture, despite the model performing similarly to fingerprint-based approaches @cite_13 . | {
"cite_N": [
"@cite_13"
],
"mid": [
"2290847742"
],
"abstract": [
"Molecular “fingerprints” encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement."
]
} |
1907.00329 | 2955459475 | The current state of cancer therapeutics has been moving away from one-size-fits-all cytotoxic chemotherapy, and towards a more individualized and specific approach involving the targeting of each tumor's genetic vulnerabilities. Different tumors, even of the same type, may be more reliant on certain cellular pathways more than others. With modern advancements in our understanding of cancer genome sequencing, these pathways can be discovered. Investigating each of the millions of possible small molecule inhibitors for each kinase in vitro, however, would be extremely expensive and time consuming. This project focuses on predicting the inhibition activity of small molecules targeting 8 different kinases using multiple deep learning models. We trained fingerprint-based MLPs and simplified molecular-input line-entry specification (SMILES)-based recurrent neural networks (RNNs) and molecular graph convolutional networks (GCNs) to accurately predict inhibitory activity targeting these 8 kinases. | We will first directly feed the concatenated fingerprint feature representation into a multilayer perceptron (MLP) with two hidden dense layers of size 128, each with ReLU activation. Before the output layer, we include a dropout layer for regularization. The output layer is a dense layer of size 2 - corresponding to the two possible labels - with softmax activation for a probability prediction for each label. The MLP architecture is summarized in Figure . We use binary cross entropy as the loss function, as given in the following equation: @math where @math is the loss, @math is the current set of model parameters, @math is the total number of data samples, @math is the true label (0 or 1) for data sample @math , and @math is the predicted probability output by the model that the label for data sample @math is 1. Adam is a gradient descent algorithm with an adaptive learning rate that in practice yields quicker model convergence than vanilla stochastic gradient descent, so we use Adam to optimize our model @cite_3 . We set the initial learning rate to @math , minibatch size to 32, and dropout probability to @math . | {
"cite_N": [
"@cite_3"
],
"mid": [
"1522301498"
],
"abstract": [
"We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm."
]
} |
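The MLP in the record above is specified closely enough to sketch. The snippet below follows the stated architecture (two ReLU hidden layers of size 128, dropout before a size-2 softmax output, cross entropy loss, Adam); `fp_dim`, `dropout_p` and `lr` stand in for values the record leaves as @math placeholders and are illustrative assumptions only.

```python
import torch
import torch.nn as nn

# Assumed values: the record does not state the fingerprint length, the
# dropout probability, or the learning rate (they appear as @math).
fp_dim, dropout_p, lr = 2048, 0.5, 1e-3

mlp = nn.Sequential(
    nn.Linear(fp_dim, 128), nn.ReLU(),   # first hidden dense layer of size 128
    nn.Linear(128, 128), nn.ReLU(),      # second hidden dense layer of size 128
    nn.Dropout(p=dropout_p),             # dropout regularisation before the output
    nn.Linear(128, 2),                   # 2 logits, one per label
)
loss_fn = nn.CrossEntropyLoss()          # softmax + cross entropy over the two labels
optimizer = torch.optim.Adam(mlp.parameters(), lr=lr)
```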
1907.00329 | 2955459475 | The current state of cancer therapeutics has been moving away from one-size-fits-all cytotoxic chemotherapy, and towards a more individualized and specific approach involving the targeting of each tumor's genetic vulnerabilities. Different tumors, even of the same type, may be more reliant on certain cellular pathways more than others. With modern advancements in our understanding of cancer genome sequencing, these pathways can be discovered. Investigating each of the millions of possible small molecule inhibitors for each kinase in vitro, however, would be extremely expensive and time consuming. This project focuses on predicting the inhibition activity of small molecules targeting 8 different kinases using multiple deep learning models. We trained fingerprint-based MLPs and simplified molecular-input line-entry specification (SMILES)-based recurrent neural networks (RNNs) and molecular graph convolutional networks (GCNs) to accurately predict inhibitory activity targeting these 8 kinases. | Essentially, a graph convolution layer, similar to a convolutional layer, represents each node as a combination of its neighbors, as shown in Figure . This is accomplished by feeding both atom features (-dimensional vectors for each atom) and pair features (-dimensional vectors for each pair of atoms) through Weave modules in series @cite_13 . Weave modules combine these atom and pair features (denoted @math and @math ) together to generate another set of atom and pair features, which can then be fed into another Weave module. The architecture of a Weave module is shown in Figure , and the definitions of each of the operations are shown below: where @math is an arbitrary, trainable function and @math is an arbitrary commutative function. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2290847742"
],
"abstract": [
"Molecular “fingerprints” encoding structural information are the workhorse of cheminformatics and machine learning in drug discovery applications. However, fingerprint representations necessarily emphasize particular aspects of the molecular structure while ignoring others, rather than allowing the model to make data-driven decisions. We describe molecular graph convolutions, a machine learning architecture for learning from undirected graphs, specifically small molecules. Graph convolutions use a simple encoding of the molecular graph—atoms, bonds, distances, etc.—which allows the model to take greater advantage of information in the graph structure. Although graph convolutions do not outperform all fingerprint-based methods, they (along with other graph-based methods) represent a new paradigm in ligand-based virtual screening with exciting opportunities for future improvement."
]
} |
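Since the record above leaves the Weave operations abstract (a trainable function and a commutative function, shown as @math placeholders), a simplified single-branch sketch may help. It computes only the updated atom features; the full module of @cite_13 also produces updated pair features, so this illustrates the pattern rather than reproducing the published architecture.

```python
import torch
import torch.nn as nn

class WeaveAtomUpdate(nn.Module):
    # One branch of a Weave-style module: the new atom features combine an
    # atom->atom transform (a trainable f) with an order-invariant sum
    # (a commutative g) over pair->atom messages.
    def __init__(self, atom_dim, pair_dim, out_dim):
        super().__init__()
        self.f_aa = nn.Linear(atom_dim, out_dim)     # trainable f: A -> A
        self.f_pa = nn.Linear(pair_dim, out_dim)     # trainable f: P -> A
        self.f_out = nn.Linear(2 * out_dim, out_dim)

    def forward(self, atoms, pairs):
        # atoms: (n_atoms, atom_dim); pairs: (n_atoms, n_atoms, pair_dim)
        aa = torch.relu(self.f_aa(atoms))
        pa = torch.relu(self.f_pa(pairs)).sum(dim=1)  # g: sum over pair partners
        return torch.relu(self.f_out(torch.cat([aa, pa], dim=-1)))
```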
1907.00269 | 2953626793 | The use of robotics in controlled environments has flourished over the last several decades and training robots to perform tasks using control strategies developed from dynamical models of their hardware have proven very effective. However, in many real-world settings, the uncertainties of the environment, the safety requirements and generalized capabilities that are expected of robots make rigid industrial robots unsuitable. This created great research interest into developing control strategies for flexible robot hardware for which building dynamical models are challenging. In this paper, inspired by the success of deep reinforcement learning (DRL) in other areas, we systematically study the efficacy of policy search methods using DRL in training flexible robots. Our results indicate that DRL is successfully able to learn efficient and robust policies for complex tasks at various degrees of flexibility. We also note that DRL using Deep Deterministic Policy Gradients can be sensitive to the choice of sensors and adding more informative sensors does not necessarily make the task easier to learn. | Robot controller design is dominated by building precise mathematical models of the robot's dynamics. It is not always practical to build a general model of a robot's dynamics that is invariant to the various real-world factors ranging from noise to changes in the environment, motor backlash, motor torque output, or the focus of this paper, link flexibility. In such cases, reinforcement learning and policy search algorithms that can learn from a robot's experience have been shown to be successful @cite_20 @cite_0 for tasks such as object manipulation @cite_23 @cite_15 @cite_2 , locomotion @cite_3 @cite_30 @cite_28 @cite_1 and flight @cite_16 . However, most of this work involves using a model-free component to approximate features of the robot or the world that cannot be modeled while still using model-based controllers for other parts of the system @cite_11 @cite_9 . | {
"cite_N": [
"@cite_30",
"@cite_28",
"@cite_9",
"@cite_1",
"@cite_3",
"@cite_0",
"@cite_23",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"",
"",
"",
"1994923984",
"",
"2028941029",
"",
"",
"1949804828",
"2012587148",
"2100235553"
],
"abstract": [
"",
"",
"",
"",
"This paper presents some results from a study of biped dynamic walking using reinforcement learning. During this study a hardware biped robot was built, a new reinforcement learning algorithm as well as a new learning architecture were developed. The biped learned dynamic walking without any previous knowledge about its dynamic model. The self scaling reinforcement (SSR) learning algorithm was developed in order to deal with the problem of reinforcement learning in continuous action domains. The learning architecture was developed in order to solve complex control problems. It uses different modules that consist of simple controllers and small neural networks. The architecture allows for easy incorporation of new modules that represent new knowledge, or new requirements for the desired task.",
"",
"Complexity and uncertainty in modern robots and other autonomous systems make it difficult to design controllers for such systems that can achieve desired levels of precision and robustness. Therefore learning methods are being incorporated into controllers for such systems, thereby providing the adaptibility necessary to meet the performance demands of the task. We argue that for learning tasks arising frequently in control applications, the most useful methods in practice probably are those we call direct associative reinforcement learning methods. We describe direct reinforcement learning methods and also illustrate with an example the utility of these methods for learning skilled robot control under uncertainty.",
"",
"",
"Helicopters have highly stochastic, nonlinear, dynamics, and autonomous helicopter flight is widely regarded to be a challenging control problem. As helicopters are highly unstable at low speeds, it is particularly difficult to design controllers for low speed aerobatic maneuvers. In this paper, we describe a successful application of reinforcement learning to designing a controller for sustained inverted flight on an autonomous helicopter. Using data collected from the helicopter in flight, we began by learning a stochastic, nonlinear model of the helicopter’s dynamics. Then, a reinforcement learning algorithm was applied to automatically learn a controller for autonomous inverted hovering. Finally, the resulting controller was successfully tested on our autonomous helicopter platform.",
"Policy search is a subfield in reinforcement learning which focuses on finding good parameters for a given policy parametrization. It is well suited for robotics as it can cope with high-dimensional state and action spaces, one of the main challenges in robot learning. We review recent successes of both model-free and model-based policy search in robot learning.Model-free policy search is a general approach to learn policies based on sampled trajectories. We classify model-free methods based on their policy evaluation strategy, policy update strategy, and exploration strategy and present a unified view on existing algorithms. Learning a policy is often easier than learning an accurate forward model, and, hence, model-free methods are more frequently used in practice. However, for each sampled trajectory, it is necessary to interact with the robot, which can be time consuming and challenging in practice. Model-based policy search addresses this problem by first learning a simulator of the robot's dynamics from data. Subsequently, the simulator generates trajectories that are used for policy learning. For both model-free and model-based policy search methods, we review their respective properties and their applicability to robotic systems.",
"Developing robots capable of fine manipulation skills is of major importance in order to build truly assistive robots. These robots need to be compliant in their actuation and control in order to operate safely in human environments. Manipulation tasks imply complex contact interactions with the external world, and involve reasoning about the forces and torques to be applied. Planning under contact conditions is usually impractical due to computational complexity, and a lack of precise dynamics models of the environment. We present an approach to acquiring manipulation skills on compliant robots through reinforcement learning. The initial position control policy for manipulation is initialized through kinesthetic demonstration. We augment this policy with a force torque profile to be controlled in combination with the position trajectories. We use the Policy Improvement with Path Integrals (PI2) algorithm to learn these force torque profiles by optimizing a cost function that measures task success. We demonstrate our approach on the Barrett WAM robot arm equipped with a 6-DOF force torque sensor on two different manipulation tasks: opening a door with a lever door handle, and picking up a pen off the table. We show that the learnt force control policies allow successful, robust execution of the tasks."
]
} |
1907.00269 | 2953626793 | The use of robotics in controlled environments has flourished over the last several decades and training robots to perform tasks using control strategies developed from dynamical models of their hardware have proven very effective. However, in many real-world settings, the uncertainties of the environment, the safety requirements and generalized capabilities that are expected of robots make rigid industrial robots unsuitable. This created great research interest into developing control strategies for flexible robot hardware for which building dynamical models are challenging. In this paper, inspired by the success of deep reinforcement learning (DRL) in other areas, we systematically study the efficacy of policy search methods using DRL in training flexible robots. Our results indicate that DRL is successfully able to learn efficient and robust policies for complex tasks at various degrees of flexibility. We also note that DRL using Deep Deterministic Policy Gradients can be sensitive to the choice of sensors and adding more informative sensors does not necessarily make the task easier to learn. | In work where flexibility is taken into consideration, learning is still based on building a more complex model @cite_14 @cite_7 @cite_5 , an approximate model @cite_4 , or plugging in a learned-model component into a model-based controller. Recently, work involving end-to-end model-free methods using deep reinforcement learning has been demonstrated successfully on rigid real robots @cite_26 @cite_13 @cite_1 . Although @cite_21 have shown that learning directly in hardware is possible with policy search, they rightly point out that even in simple tasks, factors such as joint slackness make it very difficult to train. While they and @cite_10 have shown that this is possible using policy search, this paper systematically studies how policy search methods perform with flexible hardware. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_26",
"@cite_7",
"@cite_10",
"@cite_21",
"@cite_1",
"@cite_5",
"@cite_13"
],
"mid": [
"2090108740",
"1968559739",
"2964161785",
"",
"2173248099",
"2888235455",
"",
"",
""
],
"abstract": [
"Over the last few decades, extensive use of flexible manipulators in various robotic applications has made it as one of the research interests for many scholars over the world. Recent studies on the modeling, sensor systems and controllers for the applications of flexible robotic manipulators are reviewed in order to complement the previous literature surveyed by Benosman & Vey (Robotica 22:533---545, 2004) and Dwivedy & Eberhard (Mech. Mach. Theory 41:749---777, 2006) . A brief introduction of the essential modeling techniques is first presented, followed by a review of the practical alternatives of sensor systems that can help scientists or engineers to choose the appropriate sensors for their applications. It followed by the main goal of this paper with a comprehensive review of the control strategies for the flexible manipulators and flexible joints that were studied in recent literatures. The issues for controlling flexible manipulators are highlighted. Most of the noteworthy control techniques that were not covered in the recent surveys in references (Benosman & Vey Robotica 22:533---545, 2004; Dwivedy & Eberhard Mech. Mach. Theory 41:749---777, 2006) are then reviewed. It concludes by providing some possible issues for future research works.",
"Model based control schemes use the inverse dynamics of the robot arm to produce the main torque component necessary for trajectory tracking. For model-based controller one is required to know the model parameters accurately. This is a very difficult task especially if the manipulator is flexible. So a reduced model based controller has been developed, which requires only the information of space robot base velocity and link parameters. The flexible link is modeled as Euler Bernoulli beam. To simplify the analysis we have considered Jacobian of rigid manipulator. Bond graph modeling is used to model the dynamics of the system and to devise the control strategy. The scheme has been verified using simulation for two links flexible space manipulator.",
"Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.",
"",
"We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.",
"In this paper, we present an automated learning environment for developing control policies directly on the hardware of a modular legged robot. This environment facilitates the reinforcement learning process by computing the rewards using a vision-based tracking system and relocating the robot to the initial position using a resetting mechanism. We employ two state-of-the-art deep reinforcement learning (DRL) algorithms, Trust Region Policy Optimization (TRPO) and Deep Deterministic Policy Gradient (DDPG), to train neural network policies for simple rowing and crawling motions. Using the developed environment, we demonstrate both learning algorithms can effectively learn policies for simple locomotion skills on highly stochastic hardware and environments. We further expedite learning by transferring policies learned on a single legged configuration to multi-legged ones.",
"",
"",
""
]
} |
1907.00327 | 2954016141 | There are many AI tasks involving multiple interacting agents where agents should learn to cooperate and collaborate to effectively perform the task. Here we develop and evaluate various multi-agent protocols to train agents to collaborate with teammates in grid soccer. We train and evaluate our multi-agent methods against a team operating with a smart hand-coded policy. As a baseline, we train agents concurrently and independently, with no communication. Our collaborative protocols were parameter sharing, coordinated learning with communication, and counterfactual policy gradients. Against the hand-coded team, the team trained with parameter sharing and the team trained with coordinated learning performed the best, scoring on 89.5% and 94.5% of episodes respectively when playing against the hand-coded team. Against the parameter sharing team, with adversarial training the coordinated learning team scored on 75% of the episodes, indicating it is the most adaptable of our methods. The insights gained from our work can be applied to other domains where multi-agent collaboration could be beneficial. | The main disadvantage of the centralized approach is the exponential increase in the state and action spaces with an increase in the number of agents. Manipulations can be performed to reduce the joint action space size to @math @cite_3 for a discrete action space. For example, we could reduce the action space by factoring the action probability as @math where @math are the individual actions of agents, which reduces the action space from @math to @math . However, when there are many collaborating agents, this method may still be impractical. Thus, for our paper, we will focus on decentralized methods for multi-agent control. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2519058043"
],
"abstract": [
"One of the main challenges in Grid systems is designing an adaptive, scalable, and model-independent method for job scheduling to achieve a desirable degree of load balancing and system efficiency. Centralized job scheduling methods have some drawbacks, such as single point of failure and lack of scalability. Moreover, decentralized methods require a coordination mechanism with limited communications. In this paper, we propose a multi-agent approach to job scheduling in Grid, named Centralized Learning Distributed Scheduling (CLDS), by utilizing the reinforcement learning framework. The CLDS is a model free approach that uses the information of jobs and their completion time to estimate the efficiency of resources. In this method, there are a learner agent and several scheduler agents that perform the task of learning and job scheduling with the use of a coordination strategy that maintains the communication cost at a limited level. We evaluated the efficiency of the CLDS method by designing and performing a set of experiments on a simulated Grid system under different system scales and loads. The results show that the CLDS can effectively balance the load of system even in large scale and heavy loaded Grids, while maintains its adaptive performance and scalability."
]
} |
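The size argument in the record above can be made concrete in a few lines: for N agents with |A| discrete actions each, a joint policy needs |A|^N outputs while the factored policy needs only N·|A|. The value |A| = 5 below is an arbitrary illustrative choice.

```python
# Joint vs. factored action-space size for num_agents agents that each
# choose among num_actions discrete actions.
def joint_size(num_actions, num_agents):
    return num_actions ** num_agents      # one output per joint action (a_1, ..., a_N)

def factored_size(num_actions, num_agents):
    return num_actions * num_agents       # independent per-agent action heads

for n_agents in (2, 4, 8):
    print(n_agents, joint_size(5, n_agents), factored_size(5, n_agents))
# 2 -> 25 vs 10; 4 -> 625 vs 20; 8 -> 390625 vs 40
```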
1907.00267 | 2954637159 | Synthetic images rendered by graphics engines are a promising source for training deep networks. However, it is challenging to ensure that they can help train a network to perform well on real images, because a graphics-based generation pipeline requires numerous design decisions such as the selection of 3D shapes and the placement of the camera. In this work, we propose a new method that optimizes the generation of 3D training data based on what we call "hybrid gradient". We parametrize the design decisions as a real vector, and combine the approximate gradient and the analytical gradient to obtain the hybrid gradient of the network performance with respect to this vector. We evaluate our approach on the task of estimating surface normals from a single image. Experiments on standard benchmarks show that our approach can outperform the prior state of the art on optimizing the generation of 3D training data, particularly in terms of computational efficiency. | Synthetic images generated by computer graphics have been extensively used for training deep networks for numerous tasks, including single image 3D reconstruction @cite_23 @cite_43 @cite_33 @cite_9 @cite_10 @cite_12 , optical flow estimation @cite_0 @cite_38 @cite_17 , human pose estimation @cite_18 @cite_45 , action recognition @cite_28 , natural language modeling @cite_6 , and many others @cite_36 @cite_35 @cite_37 @cite_7 @cite_19 @cite_8 @cite_14 . The success of these works has demonstrated the effectiveness of synthetic images. | {
"cite_N": [
"@cite_35",
"@cite_36",
"@cite_43",
"@cite_10",
"@cite_38",
"@cite_18",
"@cite_8",
"@cite_23",
"@cite_17",
"@cite_37",
"@cite_7",
"@cite_28",
"@cite_6",
"@cite_19",
"@cite_12",
"@cite_14",
"@cite_33",
"@cite_9",
"@cite_0",
"@cite_45"
],
"mid": [
"2896182651",
"2767011576",
"2563685048",
"2771376539",
"1513100184",
"2576289912",
"2487365028",
"1923184257",
"2949907962",
"2799034341",
"2605102758",
"2962778061",
"2561715562",
"2964047820",
"2190691619",
"2963088756",
"2780351918",
"2098883970",
"2784652921",
"2962729993"
],
"abstract": [
"Data-driven algorithms have surpassed traditional techniques in almost every aspect in robotic vision problems. Such algorithms need vast amounts of quality data to be able to work properly after their training process. Gathering and annotating that sheer amount of data in the real world is a time-consuming and error-prone task. Those problems limit scale and quality. Synthetic data generation has become increasingly popular since it is faster to generate and automatic to annotate. However, most of the current datasets and environments lack realism, interactions, and details from the real world. UnrealROX is an environment built over Unreal Engine 4 which aims to reduce that reality gap by leveraging hyperrealistic indoor scenes that are explored by robot agents which also interact with objects in a visually realistic manner in that simulated world. Photorealistic scenes and robots are rendered by Unreal Engine into a virtual reality headset which captures gaze so that a human operator can move the robot and use controllers for the robotic hands; scene information is dumped on a per-frame basis so that it can be reproduced offline to generate raw data and ground truth annotations. This virtual reality environment enables robotic vision researchers to generate realistic and visually plausible data with full ground truth for a wide variety of problems such as class and instance semantic segmentation, object detection, depth estimation, visual grasping, and navigation.",
"UnrealCV is a project to help computer vision researchers build virtual worlds using Unreal Engine 4 (UE4). It extends UE4 with a plugin by providing (1) A set of UnrealCV commands to interact with the virtual world. (2) Communication between UE4 and an external program, such as Caffe. UnrealCV can be used in two ways. The first one is using a compiled game binary with UnrealCV embedded. This is as simple as running a game, no knowledge of Unreal Engine is required. The second is installing UnrealCV plugin to Unreal Engine 4 (UE4) and use the editor of UE4 to build a new virtual world. UnrealCV is an open-source software under the MIT license. Since the initial release in September 2016, it has gathered an active community of users, including students and researchers.",
"Several RGB-D datasets have been publicized over the past few years for facilitating research in computer vision and robotics. However, the lack of comprehensive and fine-grained annotation in these RGB-D datasets has posed challenges to their widespread usage. In this paper, we introduce SceneNN, an RGB-D scene dataset consisting of 100 scenes. All scenes are reconstructed into triangle meshes and have per-vertex and per-pixel annotation. We further enriched the dataset with fine-grained information such as axis-aligned bounding boxes, oriented bounding boxes, and object poses. We used the dataset as a benchmark to evaluate the state-of-the-art methods on relevant research problems such as intrinsic decomposition and shape completion. Our dataset and annotation tools are available at http: www.scenenn.net.",
"In this paper, we address the shape-from-shading problem by training deep networks with synthetic images. Unlike conventional approaches that combine deep learning and synthetic imagery, we propose an approach that does not need any external shape dataset to render synthetic images. Our approach consists of two synergistic processes: the evolution of complex shapes from simple primitives, and the training of a deep network for shape-from-shading. The evolution generates better shapes guided by the network training, while the training improves by using the evolved shapes. We show that our approach achieves state-of-the-art performance on a shape-from-shading benchmark.",
"Ground truth optical flow is difficult to measure in real scenes with natural motion. As a result, optical flow data sets are restricted in terms of size, complexity, and diversity, making optical flow algorithms difficult to train and test on realistic data. We introduce a new optical flow data set derived from the open source 3D animated short film Sintel. This data set has important features not present in the popular Middlebury flow evaluation: long sequences, large motions, specular reflections, motion blur, defocus blur, and atmospheric effects. Because the graphics data that generated the movie is open source, we are able to render scenes under conditions of varying complexity to evaluate where existing flow algorithms fail. We evaluate several recent optical flow algorithms and find that current highly-ranked methods on the Middlebury evaluation have difficulty with this more complex data set suggesting further research on optical flow estimation is needed. To validate the use of synthetic data, we compare the image- and flow-statistics of Sintel to those of real films and videos and show that they are similar. The data set, metrics, and evaluation website are publicly available.",
"Estimating human pose, shape, and motion from images and videos are fundamental challenges with many applications. Recent advances in 2D human pose estimation use large amounts of manually-labeled training data for learning convolutional neural networks (CNNs). Such data is time consuming to acquire and difficult to extend. Moreover, manual labeling of 3D pose, depth and motion is impractical. In this work we present SURREAL (Synthetic hUmans foR REAL tasks): a new large-scale dataset with synthetically-generated but realistic images of people rendered from 3D sequences of human motion capture data. We generate more than 6 million frames together with ground truth pose, depth maps, and segmentation masks. We show that CNNs trained on our synthetic dataset allow for accurate human depth estimation and human part segmentation in real RGB images. Our results and the new dataset open up new possibilities for advancing person analysis using cheap and large-scale synthetic data.",
"Recent progress in computer vision has been driven by high-capacity models trained on large datasets. Unfortunately, creating large datasets with pixel-level labels has been extremely costly due to the amount of human effort required. In this paper, we present an approach to rapidly creating pixel-accurate semantic label maps for images extracted from modern computer games. Although the source code and the internal operation of commercial games are inaccessible, we show that associations between image patches can be reconstructed from the communication between the game and the graphics hardware. This enables rapid propagation of semantic labels within and across images synthesized by the game, with no access to the source code or the content. We validate the presented approach by producing dense pixel-level semantic annotations for 25 thousand images synthesized by a photorealistic open-world computer game. Experiments on semantic segmentation datasets show that using the acquired data to supplement real-world images significantly increases accuracy and that the acquired data enables reducing the amount of hand-labeled real-world data: models trained with game data and just ( 1 3 ) of the CamVid training set outperform models trained on the complete CamVid training set.",
"Although RGB-D sensors have enabled major break-throughs for several vision tasks, such as 3D reconstruction, we have not attained the same level of success in high-level scene understanding. Perhaps one of the main reasons is the lack of a large-scale benchmark with 3D annotations and 3D evaluation metrics. In this paper, we introduce an RGB-D benchmark suite for the goal of advancing the state-of-the-arts in all major scene understanding tasks. Our dataset is captured by four different sensors and contains 10,335 RGB-D images, at a similar scale as PASCAL VOC. The whole dataset is densely annotated and includes 146,617 2D polygons and 64,595 3D bounding boxes with accurate object orientations, as well as a 3D room layout and scene category for each image. This dataset enables us to train data-hungry algorithms for scene-understanding tasks, evaluate them using meaningful 3D metrics, avoid overfitting to a small testing set, and study cross-sensor bias.",
"Modern computer vision algorithms typically require expensive data acquisition and accurate manual labeling. In this work, we instead leverage the recent progress in computer graphics to generate fully labeled, dynamic, and photo-realistic proxy virtual worlds. We propose an efficient real-to-virtual world cloning method, and validate our approach by building and publicly releasing a new video dataset, called Virtual KITTI (see this http URL), automatically labeled with accurate ground truth for object detection, tracking, scene and instance segmentation, depth, and optical flow. We provide quantitative experimental evidence suggesting that (i) modern deep learning algorithms pre-trained on real data behave similarly in real and virtual worlds, and (ii) pre-training on virtual data improves performance. As the gap between real and virtual worlds is small, virtual worlds enable measuring the impact of various weather and imaging conditions on recognition performance, all other things being equal. We show these factors may affect drastically otherwise high-performing deep models for tracking.",
"Developing visual perception models for active agents and sensorimotor control in the physical world are cumbersome as existing algorithms are too slow to efficiently learn in real-time and robots are fragile and costly. This has given rise to learning-in-simulation which consequently casts a question on whether the results transfer to real-world. In this paper, we investigate developing real-world perception for active agents, propose Gibson Environment for this purpose, and showcase a set of perceptual tasks learned therein. Gibson is based upon virtualizing real spaces, rather than artificially designed ones, and currently includes over 1400 floor spaces from 572 full buildings. The main characteristics of Gibson are: I. being from the real-world and reflecting its semantic complexity, II. having an internal synthesis mechanism \"Goggles\" enabling deploying the trained models in real-world without needing domain adaptation, III. embodiment of agents and making them subject to constraints of physics and space.",
"Bridging the ‘reality gap’ that separates simulated robotics from experiments on hardware could accelerate robotic research through improved data availability. This paper explores domain randomization, a simple technique for training models on simulated images that transfer to real images by randomizing rendering in the simulator. With enough variability in the simulator, the real world may appear to the model as just another variation. We focus on the task of object localization, which is a stepping stone to general robotic manipulation skills. We find that it is possible to train a real-world object detector that is accurate to 1.5 cm and robust to distractors and partial occlusions using only data from a simulator with non-realistic random textures. To demonstrate the capabilities of our detectors, we show they can be used to perform grasping in a cluttered environment. To our knowledge, this is the first successful transfer of a deep neural network trained only on simulated RGB images (without pre-training on real images) to the real world for the purpose of robotic control.",
"Deep learning for human action recognition in videos is making significant progress, but is slowed down by its dependency on expensive manual labeling of large video collections. In this work, we investigate the generation of synthetic training data for action recognition, as it has recently shown promising results for a variety of other computer vision tasks. We propose an interpretable parametric generative model of human action videos that relies on procedural generation and other computer graphics techniques of modern game engines. We generate a diverse, realistic, and physically plausible dataset of human action videos, called PHAV for Procedural Human Action Videos. It contains a total of 39,982 videos, with more than 1,000 examples for each action of 35 categories. Our approach is not limited to existing motion capture sequences, and we procedurally define 14 synthetic actions. We introduce a deep multi-task representation learning architecture to mix synthetic and real videos, even if the action categories differ. Our experiments on the UCF101 and HMDB51 benchmarks suggest that combining our large set of synthetic videos with small real-world datasets can boost recognition performance, significantly outperforming fine-tuning state-of-the-art unsupervised generative models of videos.",
"When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover short-comings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.",
"We present a benchmark suite for visual perception. The benchmark is based on more than 250K high-resolution video frames, all annotated with ground-truth data for both low-level and high-level vision tasks, including optical flow, semantic instance segmentation, object detection and tracking, object-level 3D scene layout, and visual odometry. Ground-truth data for all tasks is available for every frame. The data was collected while driving, riding, and walking a total of 184 kilometers in diverse ambient conditions in a realistic virtual world. To create the benchmark, we have developed a new approach to collecting ground-truth data from simulated worlds without access to their source code or content. We conduct statistical analyses that show that the composition of the scenes in the benchmark closely matches the composition of corresponding physical environments. The realism of the collected data is further validated via perceptual experiments. We analyze the performance of state-of-the-art methods for multiple tasks, providing reference baselines and highlighting challenges for future research.",
"We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans.",
"Towards bridging the gap between machine and human intelligence, it is of utmost importance to introduce environments that are visually realistic and rich in content. In such environments, one can evaluate and improve a crucial property of practical intelligent systems, namely generalization. In this work, we build House3D, a rich, extensible and efficient environment that contains 45,622 human-designed 3D scenes of houses, ranging from single-room studios to multi-storeyed houses, equipped with a diverse set of fully labeled 3D objects, textures and scene layouts, based on the SUNCG dataset (, 2017). With an emphasis on semantic-level generalization, we study the task of concept-driven navigation, RoomNav, using a subset of houses in House3D. In RoomNav, an agent navigates towards a target specified by a semantic concept. To succeed, the agent learns to comprehend the scene it lives in by developing perception, understand the concept by mapping it to the correct semantics, and navigate to the target by obeying the underlying physical rules. We train RL agents with both continuous and discrete action spaces and show their ability to generalize in new unseen environments. In particular, we observe that (1) training is substantially harder on large house sets but results in better generalization, (2) using semantic signals (e.g., segmentation mask) boosts the generalization performance, and (3) gated networks on semantic input signal lead to improved training performance and generalization. We hope House3D, including the analysis of the RoomNav task, serves as a building block towards designing practical intelligent systems and we wish it to be broadly adopted by the community.",
"We introduce SceneNet RGB-D, a dataset providing pixel-perfect ground truth for scene understanding problems such as semantic segmentation, instance segmentation, and object detection. It also provides perfect camera poses and depth data, allowing investigation into geometric computer vision problems such as optical flow, camera pose estimation, and 3D scene labelling tasks. Random sampling permits virtually unlimited scene configurations, and here we provide 5M rendered RGB-D images from 16K randomly generated 3D trajectories in synthetic layouts, with random but physically simulated object configurations. We compare the semantic segmentation performance of network weights produced from pretraining on RGB images from our dataset against generic VGG-16 ImageNet weights. After fine-tuning on the SUN RGB-D and NYUv2 real-world datasets we find in both cases that the synthetically pre-trained network outperforms the VGG-16 weights. When synthetic pre-training includes a depth channel (something ImageNet cannot natively provide) the performance is greater still. This suggests that large-scale high-quality synthetic RGB datasets with task-specific labels can be more useful for pretraining than real-world generic pre-training such as ImageNet. We host the dataset at http: robotvault. bitbucket.io scenenet-rgbd.html.",
"Recent proliferation of a cheap but quality depth sensor, the Microsoft Kinect, has brought the need for a challenging category-level 3D object detection dataset to the fore. We review current 3D datasets and find them lacking in variation of scenes, categories, instances, and viewpoints. Here we present our dataset of color and depth image pairs, gathered in real domestic and office environments. It currently includes over 50 classes, with more images added continuously by a crowd-sourced collection effort. We establish baseline performance in a PASCAL VOC-style detection task, and suggest two ways that inferred world size of the object may be used to improve detection. The dataset and annotations can be downloaded at http: www.kinectdata.com.",
"The finding that very large networks can be trained efficiently and reliably has led to a paradigm shift in computer vision from engineered solutions to learning formulations. As a result, the research challenge shifts from devising algorithms to creating suitable and abundant training data for supervised learning. How to efficiently create such training data? The dominant data acquisition method in visual recognition is based on web data and manual annotation. Yet, for many computer vision problems, such as stereo or optical flow estimation, this approach is not feasible because humans cannot manually enter a pixel-accurate flow field. In this paper, we promote the use of synthetically generated data for the purpose of training deep networks on such tasks. We suggest multiple ways to generate such data and evaluate the influence of dataset properties on the performance and generalization properties of the resulting networks. We also demonstrate the benefit of learning schedules that use different types of data at selected stages of the training process.",
"Human 3D pose estimation from a single image is a challenging task with numerous applications. Convolutional Neural Networks (CNNs) have recently achieved superior performance on the task of 2D pose estimation from a single image, by training on images with 2D annotations collected by crowd sourcing. This suggests that similar success could be achieved for direct estimation of 3D poses. However, 3D poses are much harder to annotate, and the lack of suitable annotated training images hinders attempts towards end-to-end solutions. To address this issue, we opt to automatically synthesize training images with ground truth pose annotations. Our work is a systematic study along this road. We find that pose space coverage and texture diversity are the key ingredients for the effectiveness of synthetic training data. We present a fully automatic, scalable approach that samples the human pose space for guiding the synthesis procedure and extracts clothing textures from real images. Furthermore, we explore domain adaptation for bridging the gap between our synthetic training images and real testing photos. We demonstrate that CNNs trained with our synthetic images out-perform those trained with real photos on 3D pose estimation tasks."
]
} |
1907.00267 | 2954637159 | Synthetic images rendered by graphics engines are a promising source for training deep networks. However, it is challenging to ensure that they can help train a network to perform well on real images, because a graphics-based generation pipeline requires numerous design decisions such as the selection of 3D shapes and the placement of the camera. In this work, we propose a new method that optimizes the generation of 3D training data based on what we call "hybrid gradient". We parametrize the design decisions as a real vector, and combine the approximate gradient and the analytical gradient to obtain the hybrid gradient of the network performance with respect to this vector. We evaluate our approach on the task of estimating surface normals from a single image. Experiments on standard benchmarks show that our approach can outperform the prior state of the art on optimizing the generation of 3D training data, particularly in terms of computational efficiency. | To ensure the relevance of the generated training data to real-world tasks, a large amount of manual effort has been necessary, particularly in acquiring 3D assets such as shapes and scenes @cite_12 @cite_9 @cite_11 @cite_34 @cite_43 @cite_33 @cite_22 . To reduce manual labor, some heuristics have been proposed to automatically generate 3D configurations. For example, one approach uses the entropy of object masks and the color distribution of the rendered image to select among sampled camera poses, while another simulates gravity to obtain physically plausible object configurations inside a room. | {
"cite_N": [
"@cite_33",
"@cite_22",
"@cite_9",
"@cite_43",
"@cite_34",
"@cite_12",
"@cite_11"
],
"mid": [
"2780351918",
"2557465155",
"2098883970",
"2563685048",
"2519379752",
"2190691619",
"2253156915"
],
"abstract": [
"We introduce SceneNet RGB-D, a dataset providing pixel-perfect ground truth for scene understanding problems such as semantic segmentation, instance segmentation, and object detection. It also provides perfect camera poses and depth data, allowing investigation into geometric computer vision problems such as optical flow, camera pose estimation, and 3D scene labelling tasks. Random sampling permits virtually unlimited scene configurations, and here we provide 5M rendered RGB-D images from 16K randomly generated 3D trajectories in synthetic layouts, with random but physically simulated object configurations. We compare the semantic segmentation performance of network weights produced from pretraining on RGB images from our dataset against generic VGG-16 ImageNet weights. After fine-tuning on the SUN RGB-D and NYUv2 real-world datasets we find in both cases that the synthetically pre-trained network outperforms the VGG-16 weights. When synthetic pre-training includes a depth channel (something ImageNet cannot natively provide) the performance is greater still. This suggests that large-scale high-quality synthetic RGB datasets with task-specific labels can be more useful for pretraining than real-world generic pre-training such as ImageNet. We host the dataset at http: robotvault. bitbucket.io scenenet-rgbd.html.",
"This paper focuses on semantic scene completion, a task for producing a complete 3D voxel representation of volumetric occupancy and semantic labels for a scene from a single-view depth map observation. Previous work has considered scene completion and semantic labeling of depth maps separately. However, we observe that these two problems are tightly intertwined. To leverage the coupled nature of these two tasks, we introduce the semantic scene completion network (SSCNet), an end-to-end 3D convolutional network that takes a single depth image as input and simultaneously outputs occupancy and semantic labels for all voxels in the camera view frustum. Our network uses a dilation-based 3D context module to efficiently expand the receptive field and enable 3D context learning. To train our network, we construct SUNCG - a manually created largescale dataset of synthetic 3D scenes with dense volumetric annotations. Our experiments demonstrate that the joint model outperforms methods addressing each task in isolation and outperforms alternative approaches on the semantic scene completion task. The dataset and code is available at http: sscnet.cs.princeton.edu.",
"Recent proliferation of a cheap but quality depth sensor, the Microsoft Kinect, has brought the need for a challenging category-level 3D object detection dataset to the fore. We review current 3D datasets and find them lacking in variation of scenes, categories, instances, and viewpoints. Here we present our dataset of color and depth image pairs, gathered in real domestic and office environments. It currently includes over 50 classes, with more images added continuously by a crowd-sourced collection effort. We establish baseline performance in a PASCAL VOC-style detection task, and suggest two ways that inferred world size of the object may be used to improve detection. The dataset and annotations can be downloaded at http: www.kinectdata.com.",
"Several RGB-D datasets have been publicized over the past few years for facilitating research in computer vision and robotics. However, the lack of comprehensive and fine-grained annotation in these RGB-D datasets has posed challenges to their widespread usage. In this paper, we introduce SceneNN, an RGB-D scene dataset consisting of 100 scenes. All scenes are reconstructed into triangle meshes and have per-vertex and per-pixel annotation. We further enriched the dataset with fine-grained information such as axis-aligned bounding boxes, oriented bounding boxes, and object poses. We used the dataset as a benchmark to evaluate the state-of-the-art methods on relevant research problems such as intrinsic decomposition and shape completion. Our dataset and annotation tools are available at http: www.scenenn.net.",
"We contribute a large scale database for 3D object recognition, named ObjectNet3D, that consists of 100 categories, 90,127 images, 201,888 objects in these images and 44,147 3D shapes. Objects in the 2D images in our database are aligned with the 3D shapes, and the alignment provides both accurate 3D pose annotation and the closest 3D shape annotation for each 2D object. Consequently, our database is useful for recognizing the 3D pose and 3D shape of objects from 2D images. We also provide baseline experiments on four tasks: region proposal generation, 2D object detection, joint 2D detection and 3D object pose estimation, and image-based 3D shape retrieval, which can serve as baselines for future research using our database. Our database is available online at http: cvgl.stanford.edu projects objectnet3d.",
"We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans.",
"We have created a dataset of more than ten thousand 3D scans of real objects. To create the dataset, we recruited 70 operators, equipped them with consumer-grade mobile 3D scanning setups, and paid them to scan objects in their environments. The operators scanned objects of their choosing, outside the laboratory and without direct supervision by computer vision professionals. The result is a large and diverse collection of object scans: from shoes, mugs, and toys to grand pianos, construction vehicles, and large outdoor sculptures. We worked with an attorney to ensure that data acquisition did not violate privacy constraints. The acquired data was placed irrevocably in the public domain and is available freely at http: redwood-data.org 3dscan."
]
} |
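The hybrid gradient described in the record above chains an analytical gradient (through the differentiable training-and-evaluation stage) with a finite-difference approximation (through the black-box generation stage). Below is a minimal PyTorch sketch of that chaining; `render` and `eval_loss` are toy stand-ins invented for illustration, not the paper's actual pipeline.

```python
import torch

def render(theta):
    # Black-box stand-in for the non-differentiable graphics pipeline
    # (a toy nonlinear map; the real renderer is not reproduced here).
    with torch.no_grad():
        return torch.stack([theta[0] * theta[1], torch.sin(theta[1])])

def eval_loss(data):
    # Differentiable stand-in for "train a network, measure its error".
    return ((data - torch.tensor([1.0, 0.5])) ** 2).sum()

def hybrid_gradient(theta, eps=1e-4):
    data = render(theta).requires_grad_(True)
    loss = eval_loss(data)
    loss.backward()                          # analytical part: dL/d(data)
    grad_theta = torch.zeros_like(theta)
    for i in range(theta.numel()):           # approximate part: FD Jacobian
        e = torch.zeros_like(theta)
        e[i] = eps
        jac_col = (render(theta + e) - render(theta - e)) / (2 * eps)
        grad_theta[i] = (jac_col * data.grad).sum()  # chain rule
    return loss.item(), grad_theta

theta = torch.tensor([0.3, 0.7])
print(hybrid_gradient(theta))
```

Note that the central-difference loop costs two renders per parameter dimension, which is one reason the paper's emphasis on computational efficiency matters at scale.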
1907.00267 | 2954637159 | Synthetic images rendered by graphics engines are a promising source for training deep networks. However, it is challenging to ensure that they can help train a network to perform well on real images, because a graphics-based generation pipeline requires numerous design decisions such as the selection of 3D shapes and the placement of the camera. In this work, we propose a new method that optimizes the generation of 3D training data based on what we call "hybrid gradient". We parametrize the design decisions as a real vector, and combine the approximate gradient and the analytical gradient to obtain the hybrid gradient of the network performance with respect to this vector. We evaluate our approach on the task of estimating surface normals from a single image. Experiments on standard benchmarks show that our approach can outperform the prior state of the art on optimizing the generation of 3D training data, particularly in terms of computational efficiency. | Prior work has also performed explicit optimization of 3D configurations. For example, one method synthesizes layouts with the goal of satisfying constraints such as non-overlap and occupation. Another learns a probabilistic grammar model for indoor scene generation, with parameters learned by maximum likelihood estimation on the existing 3D configurations in SUNCG @cite_22 . Similarly, other work tunes the parameters of stochastic scene generation using generative adversarial networks, with the goal of making synthetic images indistinguishable from real images, or synthesizes 3D room layouts based on human-centric relations among furniture to achieve visual realism, functionality, and naturalness of the scenes. However, these optimization objectives are different from ours, which is the generalization performance of a trained network on real images. The closest prior work to ours uses a genetic algorithm to optimize the 3D shapes used for rendering synthetic training images. Its optimization objective is the same as ours, but its optimization method is different in that it does not use any gradient information. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2557465155"
],
"abstract": [
"This paper focuses on semantic scene completion, a task for producing a complete 3D voxel representation of volumetric occupancy and semantic labels for a scene from a single-view depth map observation. Previous work has considered scene completion and semantic labeling of depth maps separately. However, we observe that these two problems are tightly intertwined. To leverage the coupled nature of these two tasks, we introduce the semantic scene completion network (SSCNet), an end-to-end 3D convolutional network that takes a single depth image as input and simultaneously outputs occupancy and semantic labels for all voxels in the camera view frustum. Our network uses a dilation-based 3D context module to efficiently expand the receptive field and enable 3D context learning. To train our network, we construct SUNCG - a manually created largescale dataset of synthetic 3D scenes with dense volumetric annotations. Our experiments demonstrate that the joint model outperforms methods addressing each task in isolation and outperforms alternative approaches on the semantic scene completion task. The dataset and code is available at http: sscnet.cs.princeton.edu."
]
} |
1907.00267 | 2954637159 | Synthetic images rendered by graphics engines are a promising source for training deep networks. However, it is challenging to ensure that they can help train a network to perform well on real images, because a graphics-based generation pipeline requires numerous design decisions such as the selection of 3D shapes and the placement of the camera. In this work, we propose a new method that optimizes the generation of 3D training data based on what we call "hybrid gradient". We parametrize the design decisions as a real vector, and combine the approximate gradient and the analytical gradient to obtain the hybrid gradient of the network performance with respect to this vector. We evaluate our approach on the task of estimating surface normals from a single image. Experiments on standard benchmarks show that our approach can outperform the prior state of the art on optimizing the generation of 3D training data, particularly in terms of computational efficiency. | One component of our approach is unrolling and backpropagating through the training iterations of a deep network. This is a technique that has often been used by existing work in other contexts, including hyperparameter optimization @cite_46 and meta-learning @cite_29 @cite_16 @cite_30 @cite_27 @cite_39 . Our work is different in that we apply this technique in a novel context: it is used to optimize the generation of 3D training data and it is integrated with approximate gradients to form hybrid gradients. | {
"cite_N": [
"@cite_30",
"@cite_29",
"@cite_39",
"@cite_27",
"@cite_46",
"@cite_16"
],
"mid": [
"",
"2963775850",
"2964078140",
"2962880633",
"2963233958",
""
],
"abstract": [
"",
"The move from hand-designed features to learned features in machine learning has been wildly successful. In spite of this, optimization algorithms are still designed by hand. In this paper we show how the design of an optimization algorithm can be cast as a learning problem, allowing the algorithm to learn to exploit structure in the problems of interest in an automatic way. Our learned algorithms, implemented by LSTMs, outperform generic, hand-designed competitors on the tasks for which they are trained, and also generalize well to new tasks with similar structure. We demonstrate this on a number of tasks, including simple convex problems, training neural networks, and styling images with neural art.",
"Meta-learning for few-shot learning entails acquiring a prior over previous tasks and experiences, such that new tasks be learned from small amounts of data. However, a critical challenge in few-shot learning is task ambiguity: even when a powerful prior can be meta-learned from a large number of prior tasks, a small dataset for a new task can simply be too ambiguous to acquire a single model (e.g., a classifier) for that task that is accurate. In this paper, we propose a probabilistic meta-learning algorithm that can sample models for a new task from a model distribution. Our approach extends model-agnostic meta-learning, which adapts to new tasks via gradient descent, to incorporate a parameter distribution that is trained via a variational lower bound. At meta-test time, our algorithm adapts via a simple procedure that injects noise into gradient descent, and at meta-training time, the model is trained such that this stochastic adaptation procedure produces samples from the approximate model posterior. Our experimental results show that our method can sample plausible classifiers and regressors in ambiguous few-shot learning problems.",
"Algorithm design is a laborious process and often requires many iterations of ideation and validation. In this paper, we explore automating algorithm design and present a method to learn an optimization algorithm. We approach this problem from a reinforcement learning perspective and represent any particular optimization algorithm as a policy. We learn an optimization algorithm using guided policy search and demonstrate that the resulting algorithm outperforms existing hand-engineered algorithms in terms of convergence speed and or the final objective value.",
"Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable. We compute exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure. These gradients allow us to optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization distributions, richly parameterized regularization schemes, and neural network architectures. We compute hyperparameter gradients by exactly reversing the dynamics of stochastic gradient descent with momentum.",
""
]
} |
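Backpropagating through unrolled training iterations, as referenced in the record above, can be sketched in a few lines of PyTorch: an inner SGD loop is kept on the autograd graph (`create_graph=True`) so that a validation loss can be differentiated with respect to an outer data-generation parameter. Everything here (the toy linear model, `phi`, the learning rates) is illustrative and not taken from the cited papers.

```python
import torch

phi = torch.tensor(0.5, requires_grad=True)      # outer generation parameter
x_val, y_val = torch.tensor(2.0), torch.tensor(4.0)

outer_opt = torch.optim.SGD([phi], lr=0.05)
for outer_step in range(50):
    w = torch.tensor(0.0, requires_grad=True)    # inner model weight
    x_tr, y_tr = phi, 2.0 * phi                  # "data" generated from phi
    for _ in range(5):                           # unrolled inner SGD steps
        inner_loss = (w * x_tr - y_tr) ** 2
        g, = torch.autograd.grad(inner_loss, w, create_graph=True)
        w = w - 0.1 * g                          # keeps the graph alive
    val_loss = (w * x_val - y_val) ** 2
    outer_opt.zero_grad()
    val_loss.backward()                          # gradient flows through unroll
    outer_opt.step()
print(phi.item())
```

The memory cost grows with the number of unrolled steps, which is why the paper combines this analytical path with approximate gradients rather than unrolling the full training run.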
1907.00302 | 2954046861 | Proof-of-work blockchains must implement a difficulty adjustment algorithm (DAA) in order to maintain a consistent inter-arrival time between blocks. Conventional DAAs are essentially feedback controllers, and as such, they are inherently reactive. This approach leaves them susceptible to manipulation and often causes them to either under- or over-correct. We present Bonded Mining, a proactive DAA that works by collecting hash rate commitments secured by bond from miners. The difficulty is set directly from the commitments and the bond is used to penalize miners who deviate from their commitment. We devise a statistical test that is capable of detecting hash rate deviations by utilizing only on-blockchain data. The test is sensitive enough to detect a variety of deviations from commitments, while almost never misclassifying honest miners. We demonstrate in simulation that, under reasonable assumptions, Bonded Mining is more effective at maintaining a target block time than the Bitcoin Cash DAA, one of the newest and most dynamic DAAs currently deployed. In this preliminary work, the lowest hash rate miner our approach supports is 1% of the total and we directly consider only two types of fundamental attacks. Future work will address these limitations. | We use a bond as collateral for a pledge to perform an offered hash rate over a specific period of time. Similarly, Proof of Stake (PoS) protocols (including hybrid PoW/PoS) @cite_26 @cite_13 @cite_3 accept collateral as a pledge to act honestly in validating transactions over a specific period of time. We do not intend our work to be a replacement for PoS, but note that Bonded Mining similarly provides a fixed set of consensus participants that in our case are validated by PoW. For hybrids of PoW/PoS and hybrids of PoW/BFT @cite_20 @cite_18 , Bonded Mining can bolster the performance of the PoW component. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_3",
"@cite_13",
"@cite_20"
],
"mid": [
"2963517786",
"2399579591",
"2766209351",
"2772287189",
"1819513546"
],
"abstract": [
"While showing great promise, Bitcoin requires users to wait tens of minutes for transactions to commit, and even then, offering only probabilistic guarantees. This paper introduces ByzCoin, a novel Byzantine consensus protocol that leverages scalable collective signing to commit Bitcoin transactions irreversibly within seconds. ByzCoin achieves Byzantine consensus while preserving Bitcoin's open membership by dynamically forming hash power-proportionate consensus groups that represent recently-successful block miners. ByzCoin employs communication trees to optimize transaction commitment and verification under normal operation while guaranteeing safety and liveness under Byzantine faults, up to a near-optimal tolerance of f faulty group members among 3f + 2 total. ByzCoin mitigates double spending and selfish mining attacks by producing collectively signed transaction blocks within one minute of transaction submission. Tree-structured communication further reduces this latency to less than 30 seconds. Due to these optimizations, ByzCoin achieves a throughput higher than PayPal currently handles, with a confirmation latency of 15-20 seconds.",
"We study decentralized cryptocurrency protocols in which the participants do not deplete physical scarce resources. Such protocols commonly rely on Proof of Stake, i.e., on mechanisms that extend voting power to the stakeholders of the system. We offer analysis of existing protocols that have a substantial amount of popularity. We then present our novel pure Proof of Stake protocols, and argue that they help in mitigating problems that the existing protocols exhibit.",
"We introduce Casper, a proof of stake-based finality system which overlays an existing proof of work blockchain. Casper is a partial consensus mechanism combining proof of stake algorithm research and Byzantine fault tolerant consensus theory. We introduce our system, prove some desirable features, and show defenses against long range revisions and catastrophic crashes. The Casper overlay provides almost any proof of work chain with additional protections against block reversions.",
"We design and implement TwinsCoin, the first cryptocurrency based on a provably secure and scalable public blockchain design using both proof-of-work and proof-of-stake mechanisms. Different from the proof-of-work based Bitcoin, our construction uses two types of resources, computing power and coins (i.e., stake). The blockchain in our system is more robust than that in a pure proof-of-work based system; even if the adversary controls the majority of mining power, we can still have the chance to secure the system by relying on honest stake. In contrast, Bitcoin blockchain will be insecure if the adversary controls more than 50 of mining power. Our design follows a recent provably secure proof-of-work proof-of-stake hybrid blockchain[11]. In order to make our construction practical, we considerably enhance its design. In particular, we introduce a new strategy for difficulty adjustment in the hybrid blockchain and provide a theoretical analysis of it. We also show how to construct a light client for proof-of-stake cryptocurrencies and evaluate the proposal practically. We implement our new design. Our implementation uses a recent modular development framework for blockchains, called Scorex. It allows us to change only certain parts of an application leaving other codebase intact. In addition to the blockchain implementation, a testnet is deployed. Source code is publicly available.",
"The Bitcoin system only provides eventual consistency. For everyday life, the time to confirm a Bitcoin transaction is prohibitively slow. In this paper we propose a new system, built on the Bitcoin blockchain, which enables strong consistency. Our system, PeerCensus, acts as a certification authority, manages peer identities in a peer-to-peer network, and ultimately enhances Bitcoin and similar systems with strong consistency. Our extensive analysis shows that PeerCensus is in a secure state with high probability. We also show how Discoin, a Bitcoin variant that decouples block creation and transaction confirmation, can be built on top of PeerCensus, enabling real-time payments. Unlike Bitcoin, once transactions in Discoin are committed, they stay committed."
]
} |
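The two mechanisms described in the record above (setting the difficulty directly from hash rate commitments, and statistically testing for deviations from a commitment) can be sketched as follows. The 2^32 work-per-difficulty-unit relation is the Bitcoin convention, and the one-sided Poisson test is a simplified stand-in for the paper's actual statistical test; all numbers are hypothetical.

```python
import math

TARGET_BLOCK_TIME = 600            # seconds; Bitcoin-style assumption

def difficulty_from_commitments(commitments_hps):
    # Expected time per block = difficulty * 2**32 / total_hash_rate
    # (standard PoW relation), so solve for the difficulty that hits
    # the target inter-arrival time given the committed total rate.
    total = sum(commitments_hps)
    return total * TARGET_BLOCK_TIME / 2**32

def deviation_pvalue(blocks_won, committed_hps, total_hps, elapsed_s):
    # One-sided Poisson test: did miner i win suspiciously few blocks
    # given its commitment? A small p-value is evidence of under-mining.
    expected = (committed_hps / total_hps) * (elapsed_s / TARGET_BLOCK_TIME)
    return sum(math.exp(-expected) * expected**k / math.factorial(k)
               for k in range(blocks_won + 1))   # P(X <= blocks_won)

commit = [40e12, 35e12, 25e12]                   # hypothetical hash rates
print(difficulty_from_commitments(commit))
print(deviation_pvalue(3, 25e12, 100e12, 86400)) # one-day window
```

Because the commitments fix the expected block rate in advance, the test only needs on-chain block attributions and timestamps, which matches the paper's "on-blockchain data only" constraint.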
1812.00880 | 2902258260 | We leverage automatic differentiation (AD) and probabilistic programming to develop an end-to-end optimization algorithm for batch triangulation of a large number of unknown objects. Given noisy detections extracted from noisily geo-located street level imagery without depth information, we jointly estimate the number and location of objects of different types, together with parameters for sensor noise characteristics and prior distribution of objects conditioned on side information. The entire algorithm is framed as nested stochastic variational inference. An inner loop solves a soft data association problem via loopy belief propagation; a middle loop performs soft EM clustering using a regularized Newton solver (leveraging an AD framework); an outer loop backpropagates through the inner loops to train global parameters. We place priors over sensor parameters for different traffic object types, and demonstrate improvements with richer priors incorporating knowledge of the environment. We test our algorithm on detections of road signs observed by cars with mounted cameras, though in practice this technique can be used for any geo-tagged images. The detections were extracted by neural image detectors and classifiers, and we independently triangulate each type of sign (e.g. stop, traffic light). We find that our model is more robust to DNN misclassifications than current methods, generalizes across sign types, and can use geometric information to increase precision. Our algorithm outperforms our current production baseline based on k-means clustering. We show that variational inference training allows generalization by learning sign-specific parameters. | Joint tracking and calibration has a long history, @cite_18 . Using bearings-only sensors, @cite_7 developed a unified approach to multi-object triangulation and parameter learning for sensor calibration. @cite_8 developed a joint algorithm for multi-target tracking and sensor bias estimation based on the Probability Hypothesis Density (PHD) filter. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_8"
],
"mid": [
"2050620248",
"201803455",
"2154659203"
],
"abstract": [
"An exact solution is provided for the multiple sensor bias estimation problem based on local tracks. It is shown that the sensor bias estimates can be obtained dynamically using the outputs of the local (biased) state estimators. This is accomplished by manipulating the local state estimates such that they yield pseudomeasurements of the sensor biases with additive noises that are zero-mean, white, and with easily calculated covariances. These results allow evaluation of the Cramer-Rao lower bound (CRLB) on the covariance of the sensor bias estimates, i.e., a quantification of the available information about the sensor biases in any scenario. Monte Carlo simulations show that this method has significant improvement in performance with reduced rms errors of 70 compared with commonly used decoupled Kalman filter. Furthermore, the new method is shown to be statistically efficient, i.e., it meets the CRLB. The extension of the new technique for dynamically varying sensor biases is also presented.",
"Object triangulation, 3-D object tracking, feature correspondence, and camera calibration are key problems for estimation from camera networks. This paper addresses these problems within a unified Bayesian framework for joint multi-object tracking and camera calibration, based on the finite set statistics methodology. In contrast to the mainstream approaches, an alternative parametrization is investigated for triangulation, called disparity space. The approach for feature correspondence is based on the probability hypothesis density (phd) filter, and hence inherits the ability to handle the initialization of new tracks as well as the discrimination between targets and clutter within a Bayesian paradigm. The phd filtering approach then forms the basis of a camera calibration method from static or moving objects. Results are shown on simulated and real data.",
"Tracking systems are based on models, in particular, the target dynamics model and the sensor measurement model. In most practical situations the two models are not known exactly and are typically parametrized by an unknown random vector @math . The paper proposes a Bayesian algorithm based on importance sampling for the estimation of the static parameter @math . The input are measurements collected by the tracking system, with non-cooperative targets present in the surveillance volume during the data acquisition. The algorithm relies on the particle filter implementation of the probability density hypothesis (PHD) filter to evaluate the likelihood of @math . Thus, the calibration algorithm, as a byproduct, also provides a multi-target state estimate. An application of the proposed algorithm to translational sensor bias estimation is presented in detail as an illustration. The resulting sensor-bias estimation method is applicable to asynchronous sensors and does not require prior knowledge of measurement-to-target associations."
]
} |
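The joint tracking-and-calibration idea in the record above, estimating object positions and sensor biases simultaneously, can be illustrated with a toy 1-D linear least-squares problem (a sketch, not the cited PHD-filter machinery). Note the gauge constraint pinning one sensor's bias to zero: a common offset between positions and biases is otherwise unidentifiable.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_targets = 3, 4
true_bias = np.array([0.0, 0.8, -0.5])        # sensor 0 is the reference
true_pos = rng.uniform(0, 10, n_targets)

# Toy measurement model: z[i, j] = position_j + bias_i + noise
z = (true_pos[None, :] + true_bias[:, None]
     + 0.05 * rng.standard_normal((n_sensors, n_targets)))

# One linear equation per measurement; unknowns = [pos_1..m, bias_2..n]
rows, rhs = [], []
for i in range(n_sensors):
    for j in range(n_targets):
        row = np.zeros(n_targets + n_sensors - 1)
        row[j] = 1.0
        if i > 0:
            row[n_targets + i - 1] = 1.0      # bias of sensor 0 pinned to 0
        rows.append(row)
        rhs.append(z[i, j])
sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print("positions:", sol[:n_targets], "biases:", sol[n_targets:])
```

Observing several targets from several sensors is what makes the biases identifiable; with a single target per sensor the system would be rank-deficient.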
1812.00880 | 2902258260 | We leverage automatic differentiation (AD) and probabilistic programming to develop an end-to-end optimization algorithm for batch triangulation of a large number of unknown objects. Given noisy detections extracted from noisily geo-located street level imagery without depth information, we jointly estimate the number and location of objects of different types, together with parameters for sensor noise characteristics and prior distribution of objects conditioned on side information. The entire algorithm is framed as nested stochastic variational inference. An inner loop solves a soft data association problem via loopy belief propagation; a middle loop performs soft EM clustering using a regularized Newton solver (leveraging an AD framework); an outer loop backpropagates through the inner loops to train global parameters. We place priors over sensor parameters for different traffic object types, and demonstrate improvements with richer priors incorporating knowledge of the environment. We test our algorithm on detections of road signs observed by cars with mounted cameras, though in practice this technique can be used for any geo-tagged images. The detections were extracted by neural image detectors and classifiers, and we independently triangulate each type of sign (e.g. stop, traffic light). We find that our model is more robust to DNN misclassifications than current methods, generalizes across sign types, and can use geometric information to increase precision. Our algorithm outperforms our current production baseline based on k-means clustering. We show that variational inference training allows generalization by learning sign-specific parameters. | The PHD filter @cite_16 and labelled multi-Bernoulli filter @cite_27 represent collections of an unknown number of objects with unknown positions; our representation can be seen as a Laplace approximation to a multi-Bernoulli filter. To track multiple objects with unlabeled detections, Williams and Lau @cite_20 solve the data association problem by using loopy belief propagation @cite_13 to produce an approximate soft assignment of detections to objects. @cite_25 describe a similar loopy BP-based multi-object tracking system, formulating the tracker as a variational inference method @cite_2 , similar to our formulation. | {
"cite_N": [
"@cite_27",
"@cite_2",
"@cite_16",
"@cite_13",
"@cite_25",
"@cite_20"
],
"mid": [
"",
"2225156818",
"",
"2951088220",
"2136969810",
"2157316965"
],
"abstract": [
"",
"ABSTRACTOne of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this article, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find a member of that family which is close to the target density. Closeness is measured by Kullback–Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data...",
"",
"Recently, researchers have demonstrated that loopy belief propagation - the use of Pearls polytree algorithm IN a Bayesian network WITH loops OF error- correcting codes.The most dramatic instance OF this IS the near Shannon - limit performance OF Turbo Codes codes whose decoding algorithm IS equivalent TO loopy belief propagation IN a chain - structured Bayesian network. IN this paper we ask : IS there something special about the error - correcting code context, OR does loopy propagation WORK AS an approximate inference schemeIN a more general setting? We compare the marginals computed using loopy propagation TO the exact ones IN four Bayesian network architectures, including two real - world networks : ALARM AND QMR.We find that the loopy beliefs often converge AND WHEN they do, they give a good approximation TO the correct marginals.However,ON the QMR network, the loopy beliefs oscillated AND had no obvious relationship TO the correct posteriors. We present SOME initial investigations INTO the cause OF these oscillations, AND show that SOME simple methods OF preventing them lead TO the wrong results.",
"We introduce a novel probabilistic tracking algorithm that incorporates combinatorial data association constraints and model-based track management using variational Bayes. We use a Bethe entropy approximation to incorporate data association constraints that are often ignored in previous probabilistic tracking algorithms. Noteworthy aspects of our method include a model-based mechanism to replace heuristic logic typically used to initiate and destroy tracks, and an assignment posterior with linear computation cost in window length as opposed to the exponential scaling of previous MAP-based approaches. We demonstrate the applicability of our method on radar tracking and computer vision problems.",
"Data association, the problem of reasoning over correspondence between targets and measurements, is a fundamental problem in tracking. This paper presents a graphical model formulation of data association and applies an approximate inference method, belief propagation (BP), to obtain estimates of marginal association probabilities. We prove that BP is guaranteed to converge, and bound the number of iterations necessary. Experiments reveal a favourable comparison to prior methods in terms of accuracy and computational complexity."
]
} |
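A sketch of the loopy-BP soft data association referenced in the record above, written in the style of Williams and Lau's fixed-point iteration. This compact form and the toy weights are a paraphrase from memory, not verbatim from the cited paper: `psi[i, j]` is the likelihood ratio that target i generated measurement j, relative to a missed-detection/clutter weight of 1.

```python
import numpy as np

def bp_data_association(psi, iters=50):
    # Loopy BP on the bipartite target/measurement association graph.
    n_t, n_m = psi.shape
    nu = np.ones((n_t, n_m))                 # measurement -> target messages
    for _ in range(iters):
        s = 1.0 + (psi * nu).sum(axis=1, keepdims=True)
        mu = psi / (s - psi * nu)            # exclude own term (target -> meas)
        t = 1.0 + mu.sum(axis=0, keepdims=True)
        nu = 1.0 / (t - mu)                  # exclude own term (meas -> target)
    belief = psi * nu
    p_assoc = belief / (1.0 + belief.sum(axis=1, keepdims=True))
    p_miss = 1.0 / (1.0 + belief.sum(axis=1))
    return p_assoc, p_miss

psi = np.array([[4.0, 0.5], [0.6, 3.0]])     # toy: 2 targets x 2 detections
print(bp_data_association(psi))
```

The output is exactly the kind of soft assignment (association marginals plus a missed-detection probability per target) that the soft EM clustering step can consume.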
1812.00880 | 2902258260 | We leverage automatic differentiation (AD) and probabilistic programming to develop an end-to-end optimization algorithm for batch triangulation of a large number of unknown objects. Given noisy detections extracted from noisily geo-located street level imagery without depth information, we jointly estimate the number and location of objects of different types, together with parameters for sensor noise characteristics and prior distribution of objects conditioned on side information. The entire algorithm is framed as nested stochastic variational inference. An inner loop solves a soft data association problem via loopy belief propagation; a middle loop performs soft EM clustering using a regularized Newton solver (leveraging an AD framework); an outer loop backpropagates through the inner loops to train global parameters. We place priors over sensor parameters for different traffic object types, and demonstrate improvements with richer priors incorporating knowledge of the environment. We test our algorithm on detections of road signs observed by cars with mounted cameras, though in practice this technique can be used for any geo-tagged images. The detections were extracted by neural image detectors and classifiers, and we independently triangulate each type of sign (e.g. stop, traffic light). We find that our model is more robust to DNN misclassifications than current methods, generalizes across sign types, and can use geometric information to increase precision. Our algorithm outperforms our current production baseline based on k-means clustering. We show that variational inference training allows generalization by learning sign-specific parameters. | The recent availability of automatic differentiation frameworks like PyTorch @cite_19 has led to more end-to-end learning approaches in localization @cite_21 and tracking @cite_14 . One crucial advance has been the ability to differentiate through solutions of optimization problems @cite_0 @cite_3 to enable nested optimization. | {
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_3",
"@cite_0",
"@cite_19"
],
"mid": [
"2805116782",
"2887648467",
"2592457170",
"2505728881",
"2899771611"
],
"abstract": [
"We present differentiable particle filters (DPFs): a differentiable implementation of the particle filter algorithm with learnable motion and measurement models. Since DPFs are end-to-end differentiable, we can efficiently train their models by optimizing end-to-end state estimation performance, rather than proxy objectives such as model accuracy. DPFs encode the structure of recursive state estimation with prediction and measurement update that operate on a probability distribution over states. This structure represents an algorithmic prior that improves learning performance in state estimation problems while enabling explainability of the learned model. Our experiments on simulated and real data show substantial benefits from end-to- end learning with algorithmic priors, e.g. reducing error rates by 80 . Our experiments also show that, unlike long short-term memory networks, DPFs learn localization in a policy-agnostic way and thus greatly improve generalization. Source code is available at this https URL .",
"Particle filtering is a powerful approach to sequential state estimation and finds application in many domains, including robot localization, object tracking, etc. To apply particle filtering in practice, a critical challenge is to construct probabilistic system models, especially for systems with complex dynamics or rich sensory inputs such as camera images. This paper introduces the Particle Filter Network (PFnet), which encodes both a system model and a particle filter algorithm in a single neural network. The PF-net is fully differentiable and trained end-to-end from data. Instead of learning a generic system model, it learns a model optimized for the particle filter algorithm. We apply the PF-net to a visual localization task, in which a robot must localize in a rich 3-D world, using only a schematic 2-D floor map. In simulation experiments, PF-net consistently outperforms alternative learning architectures, as well as a traditional model-based method, under a variety of sensor inputs. Further, PF-net generalizes well to new, unseen environments.",
"This paper presents OptNet, a network architecture that integrates optimization problems (here, specifically in the form of quadratic programs) as individual layers in larger end-to-end trainable deep networks. These layers encode constraints and complex dependencies between the hidden states that traditional convolutional and fully-connected layers often cannot capture. In this paper, we explore the foundations for such an architecture: we show how techniques from sensitivity analysis, bilevel optimization, and implicit differentiation can be used to exactly differentiate through these layers and with respect to layer parameters; we develop a highly efficient solver for these layers that exploits fast GPU-based batch solves within a primal-dual interior point method, and which provides backpropagation gradients with virtually no additional cost on top of the solve; and we highlight the application of these approaches in several problems. In one notable example, we show that the method is capable of learning to play mini-Sudoku (4x4) given just input and output games, with no a priori information about the rules of the game; this highlights the ability of our architecture to learn hard constraints better than other neural architectures.",
"Some recent works in machine learning and computer vision involve the solution of a bi-level optimization problem. Here the solution of a parameterized lower-level problem binds variables that appear in the objective of an upper-level problem. The lower-level problem typically appears as an argmin or argmax optimization problem. Many techniques have been proposed to solve bi-level optimization problems, including gradient descent, which is popular with current end-to-end learning approaches. In this technical report we collect some results on differentiating argmin and argmax optimization problems with and without constraints and provide some insightful motivating examples.",
""
]
} |
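Differentiating through the solution of an optimization problem, as in the OptNet and argmin-differentiation work cited above, rests on the implicit function theorem: at an optimum, dx*/dtheta = -(d2f/dx2)^-1 * (d2f/dx dtheta). A scalar sketch with a hand-picked strictly convex objective follows (an illustration of the principle, not either paper's actual formulation).

```python
import torch

def solve_inner(theta, steps=25):
    # Newton's method on the stationarity condition f_x = x^3 + x - theta = 0,
    # where f(x, theta) = 0.25*x**4 + 0.5*x**2 - theta*x is convex in x.
    x = torch.tensor(0.0)
    for _ in range(steps):
        x = x - (x**3 + x - theta) / (3 * x**2 + 1)
    return x

theta = torch.tensor(2.0)
x_star = solve_inner(theta)                  # the argmin as a function of theta
f_xx = 3 * x_star**2 + 1                     # second derivative at the optimum
f_xt = -1.0                                  # cross derivative d(f_x)/d(theta)
dx_dtheta = -f_xt / f_xx                     # implicit function theorem
outer_loss = (x_star - 1.0) ** 2             # outer objective on the argmin
dL_dtheta = 2 * (x_star - 1.0) * dx_dtheta   # chain rule through the argmin
print(x_star.item(), dL_dtheta.item())
```

Unlike unrolling, this implicit route never stores the solver's intermediate iterates, so its memory cost is independent of how many Newton steps the inner solve takes.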
1812.00880 | 2902258260 | We leverage automatic differentiation (AD) and probabilistic programming to develop an end-to-end optimization algorithm for batch triangulation of a large number of unknown objects. Given noisy detections extracted from noisily geo-located street level imagery without depth information, we jointly estimate the number and location of objects of different types, together with parameters for sensor noise characteristics and prior distribution of objects conditioned on side information. The entire algorithm is framed as nested stochastic variational inference. An inner loop solves a soft data association problem via loopy belief propagation; a middle loop performs soft EM clustering using a regularized Newton solver (leveraging an AD framework); an outer loop backpropagates through the inner loops to train global parameters. We place priors over sensor parameters for different traffic object types, and demonstrate improvements with richer priors incorporating knowledge of the environment. We test our algorithm on detections of road signs observed by cars with mounted cameras, though in practice this technique can be used for any geo-tagged images. The detections were extracted by neural image detectors and classifiers, and we independently triangulate each type of sign (e.g. stop, traffic light). We find that our model is more robust to DNN misclassifications than current methods, generalizes across sign types, and can use geometric information to increase precision. Our algorithm outperforms our current production baseline based on k-means clustering. We show that variational inference training allows generalization by learning sign-specific parameters. | Probabilistic inference aims to infer distributions or values of latent variables @math given observed variables @math according to a probability distribution @math . Variational inference @cite_2 is an approximate inference technique that treats probabilistic inference as an optimization problem: an approximate distribution @math is fit to the model @math by maximizing the evidence lower bound (ELBO), \( \mathrm{ELBO}(q) = \mathbb{E}_{q(z)}[\log p(x, z) - \log q(z)] \le \log p(x) \). When variational parameters are shared across data, variational inference is amenable to stochastic optimization via minibatching (stochastic gradient variational Bayes @cite_22 ) and random sampling of latent variables (stochastic variational inference @cite_4 ). | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_2"
],
"mid": [
"2166851633",
"",
"2225156818"
],
"abstract": [
"We develop stochastic variational inference, a scalable algorithm for approximating posterior distributions. We develop this technique for a large class of probabilistic models and we demonstrate it with two probabilistic topic models, latent Dirichlet allocation and the hierarchical Dirichlet process topic model. Using stochastic variational inference, we analyze several large collections of documents: 300K articles from Nature, 1.8M articles from The New York Times, and 3.8M articles from Wikipedia. Stochastic inference can easily handle data sets of this size and outperforms traditional variational inference, which can only handle a smaller subset. (We also show that the Bayesian nonparametric topic model outperforms its parametric counterpart.) Stochastic variational inference lets us apply complex Bayesian models to massive data sets.",
"",
"ABSTRACTOne of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this article, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find a member of that family which is close to the target density. Closeness is measured by Kullback–Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data..."
]
} |
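A minimal reparameterized stochastic-ELBO sketch in PyTorch matching the setup above: a Gaussian q is fit with minibatches, and the minibatch log-likelihood is rescaled to the full dataset as in stochastic variational inference. The model and all constants are illustrative.

```python
import math
import torch

# Model: z ~ N(0, 1);  x_i | z ~ N(z, 1).  Fit q(z) = N(m, softplus(s)^2).
torch.manual_seed(0)
data = 1.5 + torch.randn(1000)                      # synthetic observations
m = torch.zeros(1, requires_grad=True)
s = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([m, s], lr=0.05)
N = data.numel()

for step in range(500):
    batch = data[torch.randint(0, N, (64,))]        # minibatching
    sigma = torch.nn.functional.softplus(s)
    z = m + sigma * torch.randn(1)                  # reparameterized sample
    log_lik = -0.5 * ((batch - z) ** 2).mean() * N  # rescaled; consts dropped
    log_prior = -0.5 * z ** 2
    entropy = 0.5 * math.log(2 * math.pi * math.e) + torch.log(sigma)
    elbo = log_lik + log_prior + entropy            # one-sample ELBO estimate
    opt.zero_grad()
    (-elbo).backward()
    opt.step()
print(m.item(), torch.nn.functional.softplus(s).item())
```

With 1000 observations the fitted q should land near the analytic posterior (mean close to 1.5, standard deviation close to 1/sqrt(1001)), which is a quick sanity check on the estimator.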
1812.00880 | 2902258260 | We leverage automatic differentiation (AD) and probabilistic programming to develop an end-to-end optimization algorithm for batch triangulation of a large number of unknown objects. Given noisy detections extracted from noisily geo-located street level imagery without depth information, we jointly estimate the number and location of objects of different types, together with parameters for sensor noise characteristics and prior distribution of objects conditioned on side information. The entire algorithm is framed as nested stochastic variational inference. An inner loop solves a soft data association problem via loopy belief propagation; a middle loop performs soft EM clustering using a regularized Newton solver (leveraging an AD framework); an outer loop backpropagates through the inner loops to train global parameters. We place priors over sensor parameters for different traffic object types, and demonstrate improvements with richer priors incorporating knowledge of the environment. We test our algorithm on detections of road signs observed by cars with mounted cameras, though in practice this technique can be used for any geo-tagged images. The detections were extracted by neural image detectors and classifiers, and we independently triangulate each type of sign (e.g. stop, traffic light). We find that our model is more robust to DNN misclassifications than current methods, generalizes across sign types, and can use geometric information to increase precision. Our algorithm outperforms our current production baseline based on k-means clustering. We show that variational inference training allows generalization by learning sign-specific parameters. | Probabilistic programming languages (PPLs) @cite_26 @cite_17 generalize probabilistic graphical models (PGMs) by allowing control flow, recursion, and other high level programming features in probabilistic models. A probabilistic program with static single assignments and no control flow corresponds to a probabilistic graphical model. | {
"cite_N": [
"@cite_26",
"@cite_17"
],
"mid": [
"2577537660",
"2897613819"
],
"abstract": [
"Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.",
"Pyro is a probabilistic programming language built on Python as a platform for developing advanced probabilistic models in AI research. To scale to large datasets and high-dimensional models, Pyro uses stochastic variational inference algorithms and probability distributions built on top of PyTorch, a modern GPU-accelerated deep learning framework. To accommodate complex or model-specific algorithmic behavior, Pyro leverages Poutine, a library of composable building blocks for modifying the behavior of probabilistic programs."
]
} |
1812.00899 | 2903460972 | The latency in the current neural-based dialogue state tracking models prohibits them from being used efficiently for deployment in production systems, despite their highly accurate performance. This paper proposes a new scalable and accurate neural dialogue state tracking model, based on the recently proposed Global-Local Self-Attention encoder (GLAD) model, which uses global modules to share parameters between estimators for different types (called slots) of dialogue states, and uses local modules to learn slot-specific features. By using only one recurrent network with global conditioning, compared to (1 + # slots) recurrent networks with global and local conditioning used in the GLAD model, our proposed model reduces the latency in training and inference times by @math on average, while preserving the performance of belief state tracking, by @math on turn request and @math on joint goal accuracy. Evaluation on the Multi-domain dataset (Multi-WoZ) also demonstrates that our model outperforms GLAD on turn inform and joint goal accuracy. | A similar scalable dialogue state tracking model is also proposed by @cite_0 , which is based on conditioning the encoder input. They used a similar conditioning of the user utterance representation on slot values (candidate sets) and slot type. However, our proposed model is based on conditioning only on slot type. Therefore, our proposed model is simpler, since it contains only one conditioned encoder for the user utterance, whereas the @cite_0 model requires two independent conditioned encoders. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2757930983"
],
"abstract": [
"Dialogue state tracking (DST) is a key component of task-oriented dialogue systems. DST estimates the user's goal at each user turn given the interaction until then. State of the art approaches for state tracking rely on deep learning methods, and represent dialogue state as a distribution over all possible slot values for each slot present in the ontology. Such a representation is not scalable when the set of possible values are unbounded (e.g., date, time or location) or dynamic (e.g., movies or usernames). Furthermore, training of such models requires labeled data, where each user turn is annotated with the dialogue state, which makes building models for new domains challenging. In this paper, we present a scalable multi-domain deep learning based approach for DST. We introduce a novel framework for state tracking which is independent of the slot value set, and represent the dialogue state as a distribution over a set of values of interest (candidate set) derived from the dialogue history or knowledge. Restricting these candidate sets to be bounded in size addresses the problem of slot-scalability. Furthermore, by leveraging the slot-independent architecture and transfer learning, we show that our proposed approach facilitates quick adaptation to new domains."
]
} |
1812.00899 | 2903460972 | The latency in the current neural-based dialogue state tracking models prohibits them from being used efficiently for deployment in production systems, despite their highly accurate performance. This paper proposes a new scalable and accurate neural dialogue state tracking model, based on the recently proposed Global-Local Self-Attention encoder (GLAD) model, which uses global modules to share parameters between estimators for different types (called slots) of dialogue states, and uses local modules to learn slot-specific features. By using only one recurrent network with global conditioning, compared to (1 + # slots) recurrent networks with global and local conditioning used in the GLAD model, our proposed model reduces the latency in training and inference times by @math on average, while preserving the performance of belief state tracking, by @math on turn request and @math on joint goal accuracy. Evaluation on the Multi-domain dataset (Multi-WoZ) also demonstrates that our model outperforms GLAD on turn inform and joint goal accuracy. | Recently, @cite_1 proposed a model for unknown slot types by using a pointer network, based on conditioning on the slot type embedding. Our proposed model also relaxes the current GLAD architecture for unknown slot types during inference. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2891732163"
],
"abstract": [
"Even though machine learning has become the major scene in dialogue research community, the real breakthrough has been blocked by the scale of data available. To address this fundamental obstacle, we introduce the Multi-Domain Wizard-of-Oz dataset (MultiWOZ), a fully-labeled collection of human-human written conversations spanning over multiple domains and topics. At a size of @math k dialogues, it is at least one order of magnitude larger than all previous annotated task-oriented corpora. The contribution of this work apart from the open-sourced dataset labelled with dialogue belief states and dialogue actions is two-fold: firstly, a detailed description of the data collection procedure along with a summary of data structure and analysis is provided. The proposed data-collection pipeline is entirely based on crowd-sourcing without the need of hiring professional annotators; secondly, a set of benchmark results of belief tracking, dialogue act and response generation is reported, which shows the usability of the data and sets a baseline for future studies."
]
} |
1812.00910 | 2903389359 | Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We perform a comprehensive analysis of white-box privacy inference attacks on deep learning models. We measure the privacy leakage by leveraging the final model parameters as well as the parameter updates during the training and fine-tuning processes. We design the attacks in the stand-alone and federated settings, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We design and evaluate our novel white-box membership inference attacks against deep learning algorithms to measure their training data membership leakage. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, widely used to train deep neural networks. We show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants of a federated learning setting can run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies. | Investigating different privacy aspects of deep neural networks is an active field of research. @cite_9 showed that if an adversary has access to the parameters of machine learning models such as Support Vector Machines (SVM) or Hidden Markov Models (HMM) @cite_19 , she can extract valuable information about the training data (e.g., the accent of the speaker in speech recognition models). | {
"cite_N": [
"@cite_19",
"@cite_9"
],
"mid": [
"1503398984",
"2962835266"
],
"abstract": [
"Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.",
"Machine-learning ML enables computers to learn how to recognise patterns, make unintended decisions, or react to a dynamic environment. The effectiveness of trained machines varies because of more suitable ML algorithms or because superior training sets. Although ML algorithms are known and publicly released, training sets may not be reasonably ascertainable and, indeed, may be guarded as trade secrets. In this paper we focus our attention on ML classifiers and on the statistical information that can be unconsciously or maliciously revealed from them. We show that it is possible to infer unexpected but useful information from ML classifiers. In particular, we build a novel meta-classifier and train it to hack other classifiers, obtaining meaningful information about their training sets. Such information leakage can be exploited, for example, by a vendor to build more effective classifiers or to simply acquire trade secrets from a competitor's apparatus, potentially violating its intellectual property rights."
]
} |
1812.00910 | 2903389359 | Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We perform a comprehensive analysis of white-box privacy inference attacks on deep learning models. We measure the privacy leakage by leveraging the final model parameters as well as the parameter updates during the training and fine-tuning processes. We design the attacks in the stand-alone and federated settings, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We design and evaluate our novel white-box membership inference attacks against deep learning algorithms to measure their training data membership leakage. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, widely used to train deep neural networks. We show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants of a federated learning setting can run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies. | Multiple research papers address the problem of membership inference attacks against models in a black-box setting @cite_33 @cite_7 @cite_11 . @cite_12 performed one of the first membership inference attacks on genomic data. @cite_33 showed that an ML model's output has distinguishable properties about its training data, which could be exploited by the adversary's inference model. They also introduced shadow models which mimic the behavior of the target model, and could be used by the attacker to train the attack model. @cite_36 demonstrated the relationship between overfitting and membership inference attacks. @cite_3 used generative adversarial networks to perform membership attacks on generative models. An attacker with additional information about the training data distribution can perform different types of inference attacks. Input inference @cite_38 , attribute inference @cite_5 , parameter inference @cite_8 @cite_15 , and side-channel attacks @cite_13 are several examples of such attacks. | {
"cite_N": [
"@cite_38",
"@cite_33",
"@cite_7",
"@cite_8",
"@cite_36",
"@cite_3",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_12",
"@cite_11"
],
"mid": [
"",
"2949777041",
"",
"2461943168",
"2795435272",
"2884827599",
"2965267010",
"",
"2789993878",
"2040228409",
""
],
"abstract": [
"",
"We quantitatively investigate how machine learning models leak information about the individual data records on which they were trained. We focus on the basic membership inference attack: given a data record and black-box access to a model, determine if the record was in the model's training dataset. To perform membership inference against a target model, we make adversarial use of machine learning and train our own inference model to recognize differences in the target model's predictions on the inputs that it trained on versus the inputs that it did not train on. We empirically evaluate our inference techniques on classification models trained by commercial \"machine learning as a service\" providers such as Google and Amazon. Using realistic datasets and classification tasks, including a hospital discharge dataset whose membership is sensitive from the privacy perspective, we show that these models can be vulnerable to membership inference attacks. We then investigate the factors that influence this leakage and evaluate mitigation strategies.",
"",
"Machine learning (ML) models may be deemed confidential due to their sensitive training data, commercial value, or use in security applications. Increasingly often, confidential ML models are being deployed with publicly accessible query interfaces. ML-as-a-service (\"predictive analytics\") systems are an example: Some allow users to train models on potentially sensitive data and charge others for access on a pay-per-query basis. The tension between model confidentiality and public access motivates our investigation of model extraction attacks. In such attacks, an adversary with black-box access, but no prior knowledge of an ML model's parameters or training data, aims to duplicate the functionality of (i.e., \"steal\") the model. Unlike in classical learning theory settings, ML-as-a-service offerings may accept partial feature vectors as inputs and include confidence values with predictions. Given these practices, we show simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes including logistic regression, neural networks, and decision trees. We demonstrate these attacks against the online services of BigML and Amazon Machine Learning. We further show that the natural countermeasure of omitting confidence values from model outputs still admits potentially harmful model extraction attacks. Our results highlight the need for careful ML model deployment and new model extraction countermeasures.",
"Machine learning algorithms, when applied to sensitive data, pose a distinct threat to privacy. A growing body of prior work demonstrates that models produced by these algorithms may leak specific private information in the training data to an attacker, either through the models' structure or their observable behavior. However, the underlying cause of this privacy risk is not well understood beyond a handful of anecdotal accounts that suggest overfitting and influence might play a role. This paper examines the effect that overfitting and influence have on the ability of an attacker to learn information about the training data from machine learning models, either through training set membership inference or attribute inference attacks. Using both formal and empirical analyses, we illustrate a clear relationship between these factors and the privacy risk that arises in several popular machine learning algorithms. We find that overfitting is sufficient to allow an attacker to perform membership inference and, when the target attribute meets certain conditions about its influence, attribute inference attacks. Interestingly, our formal analysis also shows that overfitting is not necessary for these attacks and begins to shed light on what other factors may be in play. Finally, we explore the connection between membership inference and attribute inference, showing that there are deep connections between the two that lead to effective new attacks.",
"We present a data-driven framework called generative adversarial privacy (GAP). Inspired by recent advancements in generative adversarial networks (GANs), GAP allows the data holder to learn the privatization mechanism directly from the data. Under GAP, finding the optimal privacy mechanism is formulated as a constrained minimax game between a privatizer and an adversary. We show that for appropriately chosen adversarial loss functions, GAP provides privacy guarantees against strong information-theoretic adversaries. We also evaluate the performance of GAP on multi-dimensional Gaussian mixture models and the GENKI face database.",
"This paper describes a testing methodology for quantitatively assessing the risk that rare or unique training-data sequences are unintentionally memorized by generative sequence models---a common type of machine-learning model. Because such models are sometimes trained on sensitive data (e.g., the text of users' private messages), this methodology can benefit privacy by allowing deep-learning practitioners to select means of training that minimize such memorization. In experiments, we show that unintended memorization is a persistent, hard-to-avoid issue that can have serious consequences. Specifically, for models trained without consideration of memorization, we describe new, efficient procedures that can extract unique, secret sequences, such as credit card numbers. We show that our testing strategy is a practical and easy-to-use first line of defense, e.g., by describing its application to quantitatively limit data exposure in Google's Smart Compose, a commercial text-completion neural network trained on millions of users' email messages.",
"",
"Deep learning has become the de-facto computational paradigm for various kinds of perception problems, including many privacy-sensitive applications such as online medical image analysis. No doubt to say, the data privacy of these deep learning systems is a serious concern. Different from previous research focusing on exploiting privacy leakage from deep learning models, in this paper, we present the first attack on the implementation of deep learning models. To be specific, we perform the attack on an FPGA-based convolutional neural network accelerator and we manage to recover the input image from the collected power traces without knowing the detailed parameters in the neural network by utilizing the characteristics of the \"line buffer\" performing convolution in the CNN accelerators. For the MNIST dataset, our power side-channel attack is able to achieve up to 89 recognition accuracy.",
"We use high-density single nucleotide polymorphism (SNP) genotyping microarrays to demonstrate the ability to accurately and robustly determine whether individuals are in a complex genomic DNA mixture. We first develop a theoretical framework for detecting an individual's presence within a mixture, then show, through simulations, the limits associated with our method, and finally demonstrate experimentally the identification of the presence of genomic DNA of specific individuals within a series of highly complex genomic mixtures, including mixtures where an individual contributes less than 0.1 of the total genomic DNA. These findings shift the perceived utility of SNPs for identifying individual trace contributors within a forensics mixture, and suggest future research efforts into assessing the viability of previously sub-optimal DNA sources due to sample contamination. These findings also suggest that composite statistics across cohorts, such as allele frequency or genotype counts, do not mask identity within genome-wide association studies. The implications of these findings are discussed.",
""
]
} |
1812.00910 | 2903389359 | Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We perform a comprehensive analysis of white-box privacy inference attacks on deep learning models. We measure the privacy leakage by leveraging the final model parameters as well as the parameter updates during the training and fine-tuning processes. We design the attacks in the stand-alone and federated settings, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We design and evaluate our novel white-box membership inference attacks against deep learning algorithms to measure their training data membership leakage. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, widely used to train deep neural networks. We show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants of a federated learning setting can run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies. | @cite_31 proposed an attack on collaborative learning to generate average samples of a class. They target an unrealistic scenario where all the data in one class is held by one participant, and the adversary knows it. The attack also assumes fine-grained parameter updates at each mini-batch. The proposed method generates an average sample from the victim's class. This attack works on datasets where all members of the same class are similar. No metric is used to measure privacy leakage beyond illustrating the generated images. | {
"cite_N": [
"@cite_31"
],
"mid": [
"2951368041"
],
"abstract": [
"Deep Learning has recently become hugely popular in machine learning, providing significant improvements in classification accuracy in the presence of highly-structured and large databases. Researchers have also considered privacy implications of deep learning. Models are typically trained in a centralized manner with all the data being processed by the same training algorithm. If the data is a collection of users' private data, including habits, personal pictures, geographical positions, interests, and more, the centralized server will have access to sensitive information that could potentially be mishandled. To tackle this problem, collaborative deep learning models have recently been proposed where parties locally train their deep learning structures and only share a subset of the parameters in the attempt to keep their respective training sets private. Parameters can also be obfuscated via differential privacy (DP) to make information extraction even more challenging, as proposed by Shokri and Shmatikov at CCS'15. Unfortunately, we show that any privacy-preserving collaborative deep learning is susceptible to a powerful attack that we devise in this paper. In particular, we show that a distributed, federated, or decentralized deep learning approach is fundamentally broken and does not protect the training sets of honest participants. The attack we developed exploits the real-time nature of the learning process that allows the adversary to train a Generative Adversarial Network (GAN) that generates prototypical samples of the targeted training set that was meant to be private (the samples generated by the GAN are intended to come from the same distribution as the training data). Interestingly, we show that record-level DP applied to the shared parameters of the model, as suggested in previous work, is ineffective (i.e., record-level DP is not designed to address our attack)."
]
} |
1812.00910 | 2903389359 | Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We perform a comprehensive analysis of white-box privacy inference attacks on deep learning models. We measure the privacy leakage by leveraging the final model parameters as well as the parameter updates during the training and fine-tuning processes. We design the attacks in the stand-alone and federated settings, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We design and evaluate our novel white-box membership inference attacks against deep learning algorithms to measure their training data membership leakage. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, widely used to train deep neural networks. We show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants of a federated learning setting can run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies. | @cite_27 designed a new property inference attack on fully connected networks. In this attack, the attacker's goal is to extract unintended properties of the target model's training data from a released trained model (where the attacker has full access to the trained model, similar to our white-box attack). However, their method is limited to fully connected networks, and they used non-standard, small target models. In contrast to that work, we show that our attacks perform well on state-of-the-art machine learning methods, and we use publicly available, well-generalized pretrained models to illustrate that our attack models are suited to real-world scenarios. | {
"cite_N": [
"@cite_27"
],
"mid": [
"2897830718"
],
"abstract": [
"With the growing adoption of machine learning, sharing of learned models is becoming popular. However, in addition to the prediction properties the model producer aims to share, there is also a risk that the model consumer can infer other properties of the training data the model producer did not intend to share. In this paper, we focus on the inference of global properties of the training data, such as the environment in which the data was produced, or the fraction of the data that comes from a certain class, as applied to white-box Fully Connected Neural Networks (FCNNs). Because of their complexity and inscrutability, FCNNs have a particularly high risk of leaking unexpected information about their training sets; at the same time, this complexity makes extracting this information challenging. We develop techniques that reduce this complexity by noting that FCNNs are invariant under permutation of nodes in each layer. We develop our techniques using representations that capture this invariance and simplify the information extraction task. We evaluate our techniques on several synthetic and standard benchmark datasets and show that they are very effective at inferring various data properties. We also perform two case studies to demonstrate the impact of our attack. In the first case study we show that a classifier that recognizes smiling faces also leaks information about the relative attractiveness of the individuals in its training set. In the second case study we show that a classifier that recognizes Bitcoin mining from performance counters also leaks information about whether the classifier was trained on logs from machines that were patched for the Meltdown and Spectre attacks."
]
} |
1812.00889 | 2902744786 | This paper develops and evaluates a novel method that allows for the detection of affordances in a scalable and multiple-instance manner on visually recovered pointclouds. Our approach has many advantages over alternative methods, as it is based on highly parallelizable, one-shot learning that is fast in commodity hardware. The approach is hybrid in that it uses a geometric representation together with a state-of-the-art deep learning method capable of identifying 3D scene saliency. The geometric component allows for a compact and efficient representation, boosting the performance of the deep network architecture which proved insufficient on its own. Moreover, our approach predicts not only whether an input scene affords the interactions, but also the pose of the objects that allow these interactions to take place. Our predictions align well with crowd-sourced human judgment as they are preferred with 87% probability, show high rates of improvement with almost four times (4x) better performance over a deep learning-only baseline, and are seven times (7x) faster than previous art. | In the Robotics community, the favored approach has been the representation and learning of actions, mainly to predict the consequences of actions over a set of objects @cite_15 @cite_21 , or to learn to assist humans in everyday tasks @cite_29 @cite_1 @cite_34 @cite_40 @cite_4 @cite_2 @cite_38 . These approaches use visual features describing shape, color, size and relative distances to capture object properties and effects. | {
"cite_N": [
"@cite_38",
"@cite_4",
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_40",
"@cite_2",
"@cite_15",
"@cite_34"
],
"mid": [
"2296494999",
"210657420",
"2009151595",
"1574909006",
"2040001722",
"",
"",
"2528967817",
"2074043780"
],
"abstract": [
"For robots, the ability to model human configurations and temporal dynamics is crucial for the task of anticipating future human activities, yet requires conflicting properties: On one hand, we need a detailed high-dimensional description of human configurations to reason about the physical plausibility of the prediction; on the other hand, we need a compact representation to be able to parsimoniously model the relations between the human and the environment. We therefore propose a new model, GP-LCRF, which admits both the high-dimensional and low-dimensional representation of humans. It assumes that the high-dimensional representation is generated from a latent variable corresponding to its lowdimensional representation using a Gaussian process. The generative process not only defines the mapping function between the highand low-dimensional spaces, but also models a distribution of humans embedded as a potential function in GP-LCRF along with other potentials to jointly model the rich context among humans, objects and the activity. Through extensive experiments on activity anticipation, we show that our GP-LCRF consistently outperforms the state-of-the-art results and reduces the predicted human trajectory error by 11.6 .",
"In order to avoid an expensive manual labelling process or to learn object classes autonomously without human intervention, object discovery techniques have been proposed that extract visually similar objects from weakly labelled videos. However, the problem of discovering small or medium sized objects is largely unexplored. We observe that videos with activities involving human-object interactions can serve as weakly labelled data for such cases. Since neither object appearance nor motion is distinct enough to discover objects in such videos, we propose a framework that samples from a space of algorithms and their parameters to extract sequences of object proposals. Furthermore, we model similarity of objects based on appearance and functionality, which is derived from human and object motion. We show that functionality is an important cue for discovering objects from activities and demonstrate the generality of the model on three challenging RGB-D and RGB datasets.",
"In this paper, we propose a method to recognize human body movements and we combine it with the contextual knowledge of human-robot collaboration scenarios provided by an object affordances framework that associates actions with its effects and the objects involved in them. The aim is to equip humanoid robots with action prediction capabilities, allowing them to anticipate effects as soon as a human partner starts performing a physical action, thus enabling interactions between man and robot to be fast and natural. We consider simple actions that characterize a human-robot collaboration scenario with objects being manipulated on a table: inspired from automatic speech recognition techniques, we train a statistical gesture model in order to recognize those physical gestures in real time. Analogies and differences between the two domains are discussed, highlighting the requirements of an automatic gesture recognizer for robots in order to perform robustly and in real time.",
"The ability to learn about and efficiently use tools constitutes a desirable property for general purpose humanoid robots, as it allows them to extend their capabilities beyond the limitations of their own body. Yet, it is a topic that has only recently been tackled from the robotics community. Most of the studies published so far make use of tool representations that allow their models to generalize the knowledge among similar tools in a very limited way. Moreover, most studies assume that the tool is always grasped in its common or canonical grasp position, thus not considering the influence of the grasp configuration in the outcome of the actions performed with them. In the current paper we present a method that tackles both issues simultaneously by using an extended set of functional features and a novel representation of the effect of the tool use. Together, they implicitly account for the grasping configuration and allow the iCub to generalize among tools based on their geometry. Moreover, learning happens in a self-supervised manner: First, the robot autonomously discovers the affordance categories of the tools by clustering the effect of their usage. These categories are subsequently used as a teaching signal to associate visually obtained functional features to the expected tool's affordance. In the experiments, we show how this technique can be effectively used to select, given a tool, the best action to achieve a desired effect.",
"Analyzing affordances has its root in socio-cognitive development of primates. Knowing what the environment, including other agents, can offer in terms of action capabilities is important for our day-to-day interaction and cooperation. In this paper, we will merge two complementary aspects of affordances: from agent-object perspective, what an agent afford to do with an object, and from agent-agent perspective, what an agent can afford to do for other agent, and present a unified notion of Affordance Graph. The graph will encode affordances for a variety of tasks: take, give, pick, put on, put into, show, hide, make accessible, etc. Another novelty will be to incorporate the aspects of effort and perspective-taking in constructing such graph. Hence, the Affordance Graph will tell about the action-capabilities of manipulating the objects among the agents and across the places, along with the information about the required level of efforts and the potential places. We will also demonstrate some interesting applications.",
"",
"",
"Affordances capture the relationships between a robot and the environment in terms of the actions that the robot is able to perform. The notable characteristic of affordance-based perception is that an object is perceived by what it affords (e.g., graspable and rollable), instead of identities (e.g., name, color, and shape). Affordances play an important role in basic robot capabilities such as recognition, planning, and prediction. The key challenges in affordance research are: 1) how to automatically discover the distinctive features that specify an affordance in an online and incremental manner and 2) how to generalize these features to novel environments. This survey provides an entry point for interested researchers, including: 1) a general overview; 2) classification and critical analysis of existing work; 3) discussion of how affordances are useful in developmental robotics; 4) some open questions about how to use the affordance concept; and 5) a few promising research directions.",
"The ability to learn from human demonstration is essential for robots in human environments. The activity models that the robot builds from observation must take both the human motion and the objects involved into account. Object models designed for this purpose should reflect the role of the object in the activity - its function, or affordances. The main contribution of this paper is to represent object directly in terms of their interaction with human hands, rather than in terms of appearance. This enables the direct representation of object affordances function, while being robust to intra-class differences in appearance. Object hypotheses are first extracted from a video sequence as tracks of associated image segments. The object hypotheses are encoded as strings, where the vocabulary corresponds to different types of interaction with human hands. The similarity between two such object descriptors can be measured using a string kernel. Experiments show these functional descriptors to capture differences and similarities in object affordances function that are not represented by appearance."
]
} |
1812.00889 | 2902744786 | This paper develops and evaluates a novel method that allows for the detection of affordances in a scalable and multiple-instance manner on visually recovered pointclouds. Our approach has many advantages over alternative methods, as it is based on highly parallelizable, one-shot learning that is fast in commodity hardware. The approach is hybrid in that it uses a geometric representation together with a state-of-the-art deep learning method capable of identifying 3D scene saliency. The geometric component allows for a compact and efficient representation, boosting the performance of the deep network architecture which proved insufficient on its own. Moreover, our approach predicts not only whether an input scene affords the interactions, but also the pose of the objects that allow these interactions to take place. Our predictions align well with crowd-sourced human judgment as they are preferred with 87% probability, show high rates of improvement with almost four times (4x) better performance over a deep learning-only baseline, and are seven times (7x) faster than previous art. | In Computer Vision, work has been done using static imagery, where the affordance or interaction is provided as a label rather than demonstrated. The works proposed in @cite_24 @cite_4 @cite_8 @cite_35 @cite_12 are based on labeled 2D images to predict functional regions or attributes on everyday objects. Approaches performing semantic reasoning from 2D images such as @cite_23 @cite_16 @cite_36 include human context to build knowledge representations useful for deciding on possible actions. Yet another body of research is the one exploiting 3D information to learn and predict affordances of objects in the environment. Affordances such as rollable, containment or sittable are studied in @cite_14 @cite_37 using simulations on 3D CAD models. In @cite_27 @cite_6 @cite_42 geometric features on RGB-D images are used to predict affordances such as pushable, liftable, graspable, support, cut or contain in a pixel-wise manner. Works such as @cite_32 @cite_38 @cite_0 @cite_17 @cite_11 predict human poses or locations suitable for human activities such as sitting, walking or lying down in indoor scenes. | {
"cite_N": [
"@cite_35",
"@cite_37",
"@cite_14",
"@cite_4",
"@cite_38",
"@cite_11",
"@cite_8",
"@cite_36",
"@cite_42",
"@cite_32",
"@cite_6",
"@cite_24",
"@cite_0",
"@cite_27",
"@cite_23",
"@cite_16",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"",
"1989695841",
"210657420",
"2296494999",
"",
"2757918133",
"2779795220",
"2561523096",
"2032293070",
"1524405667",
"1989739075",
"2292279777",
"1987518424",
"",
"1900424585",
"",
""
],
"abstract": [
"",
"",
"The ability to perceive possible interactions with the environment is a key capability of task-guided robotic agents. An important subset of possible interactions depends solely on the objects of interest and their position and orientation in the scene. We call these object-based interactions 0-order affordances and divide them among non-hidden and hidden whether the current configuration of an object in the scene renders its affordance directly usable or not. Conversely to other works, we propose that detecting affordances that are not directly perceivable increase the usefulness of robotic agents with manipulation capabilities, so that by appropriate manipulation they can modify the object configuration until the seeked affordance becomes available. In this paper we show how 0-order affordances depending on the geometry of the objects and their pose can be learned using a supervised learning strategy on 3D mesh representations of the objects allowing the use of the whole object geometry. Moreover, we show how the learned affordances can be detected in real scenes obtained with a low-cost depth sensor like the Microsoft Kinect through object recognition and 6D0F pose estimation and present results for both learning on meshes and detection on real scenes to demonstrate the practical application of the presented approach.",
"In order to avoid an expensive manual labelling process or to learn object classes autonomously without human intervention, object discovery techniques have been proposed that extract visually similar objects from weakly labelled videos. However, the problem of discovering small or medium sized objects is largely unexplored. We observe that videos with activities involving human-object interactions can serve as weakly labelled data for such cases. Since neither object appearance nor motion is distinct enough to discover objects in such videos, we propose a framework that samples from a space of algorithms and their parameters to extract sequences of object proposals. Furthermore, we model similarity of objects based on appearance and functionality, which is derived from human and object motion. We show that functionality is an important cue for discovering objects from activities and demonstrate the generality of the model on three challenging RGB-D and RGB datasets.",
"For robots, the ability to model human configurations and temporal dynamics is crucial for the task of anticipating future human activities, yet requires conflicting properties: On one hand, we need a detailed high-dimensional description of human configurations to reason about the physical plausibility of the prediction; on the other hand, we need a compact representation to be able to parsimoniously model the relations between the human and the environment. We therefore propose a new model, GP-LCRF, which admits both the high-dimensional and low-dimensional representation of humans. It assumes that the high-dimensional representation is generated from a latent variable corresponding to its lowdimensional representation using a Gaussian process. The generative process not only defines the mapping function between the highand low-dimensional spaces, but also models a distribution of humans embedded as a potential function in GP-LCRF along with other potentials to jointly model the rich context among humans, objects and the activity. Through extensive experiments on activity anticipation, we show that our GP-LCRF consistently outperforms the state-of-the-art results and reduces the predicted human trajectory error by 11.6 .",
"",
"We propose AffordanceNet, a new deep learning approach to simultaneously detect multiple objects and their affordances from RGB images. Our AffordanceNet has two branches: an object detection branch to localize and classify the object, and an affordance detection branch to assign each pixel in the object to its most probable affordance label. The proposed framework employs three key components for effectively handling the multiclass problem in the affordance mask: a sequence of deconvolutional layers, a robust resizing strategy, and a multi-task loss function. The experimental results on the public datasets show that our AffordanceNet outperforms recent state-of-the-art methods by a fair margin, while its end-to-end architecture allows the inference at the speed of 150ms per image. This makes our AffordanceNet is well suitable for real-time robotic applications. Furthermore, we demonstrate the effectiveness of AffordanceNet in different testing environments and in real robotic applications. The source code is available at this https URL",
"We address the problem of affordance reasoning in diverse scenes that appear in the real world. Affordances relate the agent's actions to their effects when taken on the surrounding objects. In our work, we take the egocentric view of the scene, and aim to reason about action-object affordances that respect both the physical world as well as the social norms imposed by the society. We also aim to teach artificial agents why some actions should not be taken in certain situations, and what would likely happen if these actions would be taken. We collect a new dataset that builds upon ADE20k, referred to as ADE-Affordance, which contains annotations enabling such rich visual reasoning. We propose a model that exploits Graph Neural Networks to propagate contextual information from the scene in order to perform detailed affordance reasoning about each object. Our model is showcased through various ablation studies, pointing to successes and challenges in this complex task.",
"We present a novel and real-time method to detect object affordances from RGB-D images. Our method trains a deep Convolutional Neural Network (CNN) to learn deep features from the input data in an end-to-end manner. The CNN has an encoder-decoder architecture in order to obtain smooth label predictions. The input data are represented as multiple modalities to let the network learn the features more effectively. Our method sets a new benchmark on detecting object affordances, improving the accuracy by 20 in comparison with the state-of-the-art methods that use hand-designed geometric features. Furthermore, we apply our detection method on a full-size humanoid robot (WALK-MAN) to demonstrate that the robot is able to perform grasps after efficiently detecting the object affordances.",
"We present a human-centric paradigm for scene understanding. Our approach goes beyond estimating 3D scene geometry and predicts the \"workspace\" of a human which is represented by a data-driven vocabulary of human interactions. Our method builds upon the recent work in indoor scene understanding and the availability of motion capture data to create a joint space of human poses and scene geometry by modeling the physical interactions between the two. This joint space can then be used to predict potential human poses and joint locations from a single image. In a way, this work revisits the principle of Gibsonian affor-dances, reinterpreting it for the modern, data-driven era.",
"As robots begin to collaborate with humans in everyday workspaces, they will need to understand the functions of tools and their parts. To cut an apple or hammer a nail, robots need to not just know the tool's name, but they must localize the tool's parts and identify their functions. Intuitively, the geometry of a part is closely related to its possible functions, or its affordances. Therefore, we propose two approaches for learning affordances from local shape and geometry primitives: 1) superpixel based hierarchical matching pursuit (S-HMP); and 2) structured random forests (SRF). Moreover, since a part can be used in many ways, we introduce a large RGB-Depth dataset where tool parts are labeled with multiple affordances and their relative rankings. With ranked affordances, we evaluate the proposed methods on 3 cluttered scenes and over 105 kitchen, workshop and garden tools, using ranked correlation and a weighted F-measure score [26]. Experimental results over sequences containing clutter, occlusions, and viewpoint changes show that the approaches return precise predictions that could be used by a robot. S-HMP achieves high accuracy but at a significant computational cost, while SRF provides slightly less accurate predictions but in real-time. Finally, we validate the effectiveness of our approaches on the Cornell Grasping Dataset [25] for detecting graspable regions, and achieve state-of-the-art performance.",
"We revisit the notion of object affordances, an idea that speaks to an object's functional properties more than its class label. We study the problem of spatially localizing affordances in the form of 2D segmentation masks annotated with discrete affordance labels. For example, we use affordance masks to denote on what surfaces a person sits, grabs, and looks at when interacting with a variety of everyday objects (such as chairs, bikes, and TVs). We introduce such a functionally-annotated dataset derived from the PASCAL VOC benchmark and empirically evaluate several approaches for predicting such functionally-relevant object regions. We compare \"blind\" approaches that ignore image data, bottom-up approaches that reason about local surface layout, and top-down approaches that reason about structural constraints between surfaces regions of objects. We show that the difficulty of functional region prediction varies considerably across objects, and that in general, top-down functional object models do well, though there is much room for improvement.",
"Robots are often required to operate in environments where humans are not present, but yet require the human context information for better human robot interaction. Even when humans are present in the environment, detecting their presence in cluttered environments could be challenging. As a solution to this problem, this paper presents the concept of affordance-map which learns human context by looking at geometric features of the environment. Instead of observing real humans to learn human context, it uses virtual human models and their relationships with the environment to map hidden human affordances in 3D scenes. The affordance-map learning problem is formulated as a multi label classification problem that can be learned using cost-sensitive SVM. Experiments carried out in a real 3D scene dataset recorded promising results and proved the applicability of affordance-map for mapping human context.",
"",
"",
"Affordances are fundamental attributes of objects. Affordances reveal the functionalities of objects and the possible actions that can be performed on them. Understanding affordances is crucial for recognizing human activities in visual data and for robots to interact with the world. In this paper we introduce the new problem of mining the knowledge of semantic affordance: given an object, determining whether an action can be performed on it. This is equivalent to connecting verb nodes and noun nodes in WordNet, or filling an affordance matrix encoding the plausibility of each action-object pair. We introduce a new benchmark with crowdsourced ground truth affordances on 20 PASCAL VOC object classes and 957 action classes. We explore a number of approaches including text mining, visual mining, and collaborative filtering. Our analyses yield a number of significant insights that reveal the most effective ways of collecting knowledge of semantic affordances.",
"",
""
]
} |
1812.00879 | 2903107813 | We propose and demonstrate the use of a Model-Assisted Generative Adversarial Network to produce simulated images that accurately match true images through the variation of underlying model parameters that describe the image generation process. The generator learns the parameter values that give images that best match the true images. Two case studies show the excellent agreement between the generated best match parameters and the true parameters. The best match parameter values that produce the most accurate simulated images can be extracted and used to re-tune the default simulation to minimise any bias when applying image recognition techniques to simulated and true images. In the case of a real-world experiment, the true data is replaced by experimental data with unknown true parameter values. The Model-Assisted Generative Adversarial Network uses a convolutional neural network to emulate the simulation for all parameter values that, when trained, can be used as a conditional generator for fast image production. | To the best of our knowledge, there is no GAN variant in the literature that aims to generate a vector of parameters that are used to produce fake images through a defined mapping of the parameters to an image as opposed to generating the fake images directly. However, there are some related studies to consider. One example is the conditional GAN @cite_12 that was used to generate MNIST digits conditioned on class labels. More recent studies used conditional GANs for more complex tasks, such as generating aged versions of people's faces that preserve their identities @cite_4 . | {
"cite_N": [
"@cite_4",
"@cite_12"
],
"mid": [
"2951961735",
"2125389028"
],
"abstract": [
"It has been recently shown that Generative Adversarial Networks (GANs) can produce synthetic images of exceptional visual fidelity. In this work, we propose the GAN-based method for automatic face aging. Contrary to previous works employing GANs for altering of facial attributes, we make a particular emphasize on preserving the original person's identity in the aged version of his her face. To this end, we introduce a novel approach for \"Identity-Preserving\" optimization of GAN's latent vectors. The objective evaluation of the resulting aged and rejuvenated face images by the state-of-the-art face recognition and age estimation solutions demonstrate the high potential of the proposed method.",
"Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels."
]
} |
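The conditional-GAN reference above conditions generation by feeding the class label y to the network together with the noise vector z. A minimal sketch of the generator side of that mechanism, assuming PyTorch; the layer sizes, the 28x28 MNIST-style output, and all names are illustrative rather than taken from the cited paper.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Maps (noise, class label) to an image, as in a conditional GAN."""

    def __init__(self, z_dim=100, n_classes=10, img_dim=28 * 28):
        super().__init__()
        self.n_classes = n_classes
        self.net = nn.Sequential(
            nn.Linear(z_dim + n_classes, 256),  # noise and label enter jointly
            nn.ReLU(),
            nn.Linear(256, img_dim),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z, y):
        y_onehot = nn.functional.one_hot(y, self.n_classes).float()
        return self.net(torch.cat([z, y_onehot], dim=1))

g = ConditionalGenerator()
fake = g(torch.randn(8, 100), torch.randint(0, 10, (8,)))  # 8 class-conditioned samples
```

The same concatenation trick applies on the discriminator side, which then scores image-label pairs rather than images alone.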
1812.00879 | 2903107813 | We propose and demonstrate the use of a Model-Assisted Generative Adversarial Network to produce simulated images that accurately match true images through the variation of underlying model parameters that describe the image generation process. The generator learns the parameter values that give images that best match the true images. Two case studies show the excellent agreement between the generated best match parameters and the true parameters. The best match parameter values that produce the most accurate simulated images can be extracted and used to re-tune the default simulation to minimise any bias when applying image recognition techniques to simulated and true images. In the case of a real-world experiment, the true data is replaced by experimental data with unknown true parameter values. The Model-Assisted Generative Adversarial Network uses a convolutional neural network to emulate the simulation for all parameter values that, when trained, can be used as a conditional generator for fast image production. | During the last few years, some studies successfully learned knowledge constraints for image and text generation @cite_20 , which were used to improve the results over base generative models, or learned disentangled representations in a completely unsupervised manner @cite_1 . In addition, the authors of Ref. @cite_16 introduced a new inversion technique to identify attributes of a dataset that a trained GAN is able to model, and to quantitatively compare the performance of different generative networks. | {
"cite_N": [
"@cite_16",
"@cite_1",
"@cite_20"
],
"mid": [
"2963105487",
"2434741482",
"2950757414"
],
"abstract": [
"Generative adversarial networks (GANs) learn a deep generative model that is able to synthesize novel, high-dimensional data samples. New data samples are synthesized by passing latent samples, drawn from a chosen prior distribution, through the generative model. Once trained, the latent space exhibits interesting properties that may be useful for downstream tasks such as classification or retrieval. Unfortunately, GANs do not offer an “inverse model,” a mapping from data space back to latent space, making it difficult to infer a latent representation for a given data sample. In this paper, we introduce a technique, inversion , to project data samples, specifically images, to the latent space using a pretrained GAN. Using our proposed inversion technique, we are able to identify which attributes of a data set a trained GAN is able to model and quantify GAN performance, based on a reconstruction loss. We demonstrate how our proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets. We provide codes for all of our experiments in the website ( https: github.com ToniCreswell InvertingGAN ).",
"This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.",
"The broad set of deep generative models (DGMs) has achieved remarkable advances. However, it is often difficult to incorporate rich structured domain knowledge with the end-to-end DGMs. Posterior regularization (PR) offers a principled framework to impose structured constraints on probabilistic models, but has limited applicability to the diverse DGMs that can lack a Bayesian formulation or even explicit density evaluation. PR also requires constraints to be fully specified a priori, which is impractical or suboptimal for complex knowledge with learnable uncertain parts. In this paper, we establish mathematical correspondence between PR and reinforcement learning (RL), and, based on the connection, expand PR to learn constraints as the extrinsic reward in RL. The resulting algorithm is model-agnostic to apply to any DGMs, and is flexible to adapt arbitrary constraints with the model jointly. Experiments on human image generation and templated sentence generation show models with learned knowledge constraints by our algorithm greatly improve over base generative models."
]
} |
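The inversion reference above recovers a latent representation for a given image by optimizing through a pretrained generator. A hedged sketch of that idea, assuming PyTorch; `generator`, the latent size, step count, and learning rate are placeholders rather than the cited paper's settings.

```python
import torch
import torch.nn.functional as F

def invert(generator, x, z_dim=100, steps=200, lr=0.05):
    """Optimize a latent code z so that the frozen generator reconstructs x."""
    z = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(generator(z), x)  # reconstruction loss in image space
        loss.backward()
        opt.step()
    return z.detach()
```

The final reconstruction loss can then serve as the kind of quantitative GAN-comparison score the abstract describes.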
1812.00879 | 2903107813 | We propose and demonstrate the use of a Model-Assisted Generative Adversarial Network to produce simulated images that accurately match true images through the variation of underlying model parameters that describe the image generation process. The generator learns the parameter values that give images that best match the true images. Two case studies show the excellent agreement between the generated best match parameters and the true parameters. The best match parameter values that produce the most accurate simulated images can be extracted and used to re-tune the default simulation to minimise any bias when applying image recognition techniques to simulated and true images. In the case of a real-world experiment, the true data is replaced by experimental data with unknown true parameter values. The Model-Assisted Generative Adversarial Network uses a convolutional neural network to emulate the simulation for all parameter values that, when trained, can be used as a conditional generator for fast image production. | Several domains could benefit from the approach we present in this paper, but its best application is probably in physical experiments @cite_15 . Although GANs have not been broadly used in real-world scientific experiments, some promising work has been done on the production of @math images @cite_21 , GAN-based calorimeter simulations @cite_6 @cite_18 , and the production of galaxy images @cite_0 . Contrary to the above studies, the GAN that we present in this paper could be, for instance, used to learn the optimal parameters needed by a Monte Carlo simulation for mimicking detector images in physics experiments. | {
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_6",
"@cite_0",
"@cite_15"
],
"mid": [
"2775970449",
"2581875816",
"2614083378",
"2572438701",
"2883445762"
],
"abstract": [
"Physicists at the Large Hadron Collider (LHC) rely on detailed simulations of particle collisions to build expectations of what experimental data may look like under different theory modeling assumptions. Petabytes of simulated data are needed to develop analysis techniques, though they are expensive to generate using existing algorithms and computing resources. The modeling of detectors and the precise description of particle cascades as they interact with the material in the calorimeter are the most computationally demanding steps in the simulation pipeline. We therefore introduce a deep neural network-based generative model to enable high-fidelity, fast, electromagnetic calorimeter simulation. There are still challenges for achieving precision across the entire phase space, but our current solution can reproduce a variety of particle shower properties while achieving speed-up factors of up to 100,000 @math . This opens the door to a new era of fast simulation that could save significant computing time and disk space, while extending the reach of physics searches and precision measurements at the LHC and beyond.",
"We provide a bridge between generative modeling in the Machine Learning community and simulated physical processes in high energy particle physics by applying a novel Generative Adversarial Network (GAN) architecture to the production of jet images—2D representations of energy depositions from particles interacting with a calorimeter. We propose a simple architecture, the Location-Aware Generative Adversarial Network, that learns to produce realistic radiation patterns from simulated high energy particle collisions. The pixel intensities of GAN-generated images faithfully span over many orders of magnitude and exhibit the desired low-dimensional physical properties (i.e., jet mass, n-subjettiness, etc.). We shed light on limitations, and provide a novel empirical validation of image quality and validity of GAN-produced simulations of the natural world. This work provides a base for further explorations of GANs for use in faster simulation in high energy particle physics.",
"The precise modeling of subatomic particle interactions and propagation through matter is paramount for the advancement of nuclear and particle physics searches and precision measurements. The most computationally expensive step in the simulation pipeline of a typical experiment at the Large Hadron Collider (LHC) is the detailed modeling of the full complexity of physics processes that govern the motion and evolution of particle showers inside calorimeters. We introduce , a new fast simulation technique based on generative adversarial networks (GANs). We apply these neural networks to the modeling of electromagnetic showers in a longitudinally segmented calorimeter, and achieve speedup factors comparable to or better than existing full simulation techniques on CPU ( @math - @math ) and even faster on GPU (up to @math ). There are still challenges for achieving precision across the entire phase space, but our solution can reproduce a variety of geometric shower shape properties of photons, positrons and charged pions. This represents a significant stepping stone toward a full neural network-based detector simulation that could save significant computing time and enable many analyses now and in the future.",
"Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon-Nyquist sampling theorem. Here we train a generative adversarial network (GAN) on a sample of @math images of nearby galaxies at @math from the Sloan Digital Sky Survey and conduct @math cross validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance which far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low-signal-to-noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Sky Telescope (LSST) and the Hubble and James Webb space telescopes.",
"Our knowledge of the fundamental particles of nature and their interactions is summarized by the standard model of particle physics. Advancing our understanding in this field has required experiments that operate at ever higher energies and intensities, which produce extremely large and information-rich data samples. The use of machine-learning techniques is revolutionizing how we interpret these data samples, greatly increasing the discovery potential of present and future experiments. Here we summarize the challenges and opportunities that come with the use of machine learning at the frontiers of particle physics."
]
} |
1812.00913 | 2971262757 | Many tasks performed by autonomous vehicles such as road marking detection, object tracking, and path planning are simpler in bird's-eye view. Hence, Inverse Perspective Mapping (IPM) is often applied to remove the perspective effect from a vehicle's front-facing camera and to remap its images into a 2D domain, resulting in a top-down view. Unfortunately, however, this leads to unnatural blurring and stretching of objects at further distance, due to the resolution of the camera, limiting applicability. In this paper, we present an adversarial learning approach for generating a significantly improved IPM from a single camera image in real time. The generated bird's-eye-view images contain sharper features (e.g., road markings) and a more homogeneous illumination, while (dynamic) objects are automatically removed from the scene, thus revealing the underlying road layout in an improved fashion. We demonstrate our framework using real-world data from the Oxford RobotCar Dataset and show that scene understanding tasks directly benefit from our boosted IPM approach. | Several works have tried to adjust for inaccuracies caused by the invalidity of the first two assumptions. The authors of @cite_34 @cite_41 used vanishing point detection, @cite_7 estimated the slope of the road according to the lane markings, and @cite_14 employed motion estimation obtained from SLAM. The invalidity of the third assumption is tackled in @cite_18 by using a laser scanner to exclude obstacles from being transformed to IPM. Another approach @cite_16 @cite_9 @cite_4 creates a lookup table for all pixels, taking into account the distance of objects on the road surface, in order to reduce artefacts at further distances. However, these methods generally assume simple environments (i.e., highways). In contrast, we learn a non-linear mapping more suited for urban scenes. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_41",
"@cite_9",
"@cite_16",
"@cite_34"
],
"mid": [
"2076945774",
"2553321288",
"",
"1484723123",
"",
"",
"1966151866",
"2125878157"
],
"abstract": [
"Over the past years, inverse perspective mapping has been successfully applied to several problems in the field of Intelligent Transportation Systems. In brief, the method consists of mapping images to a new coordinate system where perspective effects are removed. The removal of perspective associated effects facilitates road and obstacle detection and also assists in free space estimation. There is, however, a significant limitation in the inverse perspective mapping: the presence of obstacles on the road disrupts the effectiveness of the mapping. The current paper proposes a robust solution based on the use of multimodal sensor fusion. Data from a laser range finder is fused with images from the cameras, so that the mapping is not computed in the regions where obstacles are present. As shown in the results, this considerably improves the effectiveness of the algorithm and reduces computation time when compared with the classical inverse perspective mapping. Furthermore, the proposed approach is also able to cope with several cameras with different lenses or image resolutions, as well as dynamic viewpoints.",
"This paper proposes an adaptive Inverse Perspective Mapping (IPM) algorithm to obtain accurate bird's-eye view images from the sequential images of forward looking cameras. These images are often distorted by the motion of the vehicle; even a small motion can cause a substantial effect on bird'seye view images. In this paper, we propose an adaptive model for the IPM to accurately transform camera images to bird'seye view images by using motion information. Using motion derived from the monocular visual simultaneous localization and mapping (SLAM), experimental result shows that the proposed approaches can provide stable bird's-eye view images, even with large motion during the drive.",
"",
"In this paper, the authors examine the issue of Inverse Perspective Mapping (IPM) which removes the perspective effect from acquired images that are associated with autonomous driving systems. They in turn present an Extended IPM (EIPM) which is based on stereo image processing and which is used to update the road slope ahead of the vehicle. The EIPM removes the assumption of a flat road ahead of the vehicle and allows for the recovery of road texture even in the presence of a slope. The authors describe how this technique is applied to synthetic images and how it has been integrated into the GOLD system on the ARGO autonomous vehicle.",
"",
"",
"This paper proposes the Top-View Transformation Model for image coordinate transformation, which involves transforming a perspective projection image into its corresponding bird's eye vision. A fitting parameters searching algorithm estimates the parameters that are used to transform the coordinates from the source image. Using this approach, it is not necessary to provide any interior and exterior orientation parameters of the camera. The designed car parking assistant system can be installed at the rear end of the car, providing the driver with a clearer image of the area behind the car. The processing time can be reduced by storing and using the transformation matrix estimated from the first image frame for a sequence of video images. The transformation matrix can be stored as the Matrix Mapping Table, and loaded into the embedded platform to perform the transformation. Experimental results show that the proposed approaches can provide a clearer and more accurate bird's eye view to the vehicle driver.",
"In this work, a new inverse perspective mapping (IPM) technique is proposed based on a robust estimation of the vanishing point, which provide bird-view images of the road, so that facilitating the tasks of road modeling and vehicle detection and tracking. This new approach has been design to cope with the instability that cameras mounted on a moving vehicle suffer. The estimation of the vanishing point relies on a novel and efficient feature extraction strategy, which segmentates the lane markings of the images by combining a histogram-based segmentation with temporal and frequency filtering. Then, the vanishing point of each image is stabilized by means of a temporal filtering along the estimates of previous images. In a last step, the IPM image is computed based on the stabilized vanishing point. Tests have been carried out on several long video sequences captured from cameras inside a vehicle being driven along highways and local roads, with different illumination and weather conditions, presence of shadows, occluding vehicles, and slope changes. Results have shown a significant improvement in terms of lane width constancy and parallelism between lane markings over non-stabilized IPM algorithms."
]
} |
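The classical IPM these references build on is a single planar homography that remaps the (assumed flat) road surface to a top-down view. A minimal sketch assuming OpenCV and NumPy; the four point correspondences below are illustrative placeholders that would normally come from camera calibration, not values from the cited papers.

```python
import cv2
import numpy as np

def inverse_perspective_map(frame, out_size=(400, 600)):
    """Warp a front-facing road image to a bird's-eye view via one homography."""
    h, w = frame.shape[:2]
    # Trapezoid on the road surface in the source image (illustrative values).
    src = np.float32([[0.40 * w, 0.65 * h], [0.60 * w, 0.65 * h],
                      [0.95 * w, 0.95 * h], [0.05 * w, 0.95 * h]])
    # Matching rectangle in the top-down output image.
    dst = np.float32([[0, 0], [out_size[0], 0],
                      [out_size[0], out_size[1]], [0, out_size[1]]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, H, out_size)
```

Anything violating the flat-ground assumption (vehicles, slopes) gets smeared by this warp, which is exactly the artefact the vanishing-point and lookup-table corrections above try to reduce.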
1812.00913 | 2971262757 | Many tasks performed by autonomous vehicles such as road marking detection, object tracking, and path planning are simpler in bird's-eye view. Hence, Inverse Perspective Mapping (IPM) is often applied to remove the perspective effect from a vehicle's front-facing camera and to remap its images into a 2D domain, resulting in a top-down view. Unfortunately, however, this leads to unnatural blurring and stretching of objects at further distance, due to the resolution of the camera, limiting applicability. In this paper, we present an adversarial learning approach for generating a significantly improved IPM from a single camera image in real time. The generated bird's-eye-view images contain sharper features (e.g., road markings) and a more homogeneous illumination, while (dynamic) objects are automatically removed from the scene, thus revealing the underlying road layout in an improved fashion. We demonstrate our framework using real-world data from the Oxford RobotCar Dataset and show that scene understanding tasks directly benefit from our boosted IPM approach. | Very recently, @cite_12 proposed the first learning approach for IPM using a synthetic dataset. The authors introduced BridgeGAN, which employs the homography IPM to bridge the significant appearance gap between the frontal view and the bird's-eye view. In contrast, we use real-world data, and consequently real-world labels, to generate boosted IPM for larger scenes. Therefore, our learned mapping is directly beneficial for scene understanding tasks (see Section ). | {
"cite_N": [
"@cite_12"
],
"mid": [
"2885937160"
],
"abstract": [
"Environment perception is an important task with great practical value and bird view is an essential part for creating panoramas of surrounding environment. Due to the large gap and severe deformation between the frontal view and bird view, generating a bird view image from a single frontal view is challenging. To tackle this problem, we propose the BridgeGAN, i.e., a novel generative model for bird view synthesis. First, an intermediate view, i.e., homography view, is introduced to bridge the large gap. Next, conditioned on the three views (frontal view, homography view and bird view) in our task, a multi-GAN based model is proposed to learn the challenging cross-view translation. Furthermore, to guarantee one-to-one cross-view correspondences and consistent cross-view feature representations, two consistency constraints are designed for our task. Extensive experiments conducted on a synthetic dataset have demonstrated that the images generated by our model are much better than those generated by existing methods, with more consistent global appearance and sharper details. Ablation studies and discussions show its reliability and robustness in some challenging cases."
]
} |
1812.00518 | 2903043295 | We focus on an important yet challenging problem: using a 2D deep network to deal with 3D segmentation for medical imaging analysis. Existing approaches either applied multi-view planar (2D) networks or directly used volumetric (3D) networks for this purpose, but both of them are not ideal: 2D networks cannot capture 3D contexts effectively, and 3D networks are both memory-consuming and less stable, arguably due to the lack of pre-trained models. In this paper, we bridge the gap between 2D and 3D using a novel approach named Elastic Boundary Projection (EBP). The key observation is that, although the object is a 3D volume, what we really need in segmentation is to find its boundary, which is a 2D surface. Therefore, we place a number of pivot points in the 3D space, and for each pivot, we determine its distance to the object boundary along a dense set of directions. This creates an elastic shell around each pivot which is initialized as a perfect sphere. We train a 2D deep network to determine whether each ending point falls within the object, and gradually adjust the shell so that it converges to the actual shape of the boundary and thus achieves the goal of segmentation. EBP allows 3D segmentation without cutting the volume into slices or small patches, which stands out from conventional 2D and 3D approaches. EBP achieves promising accuracy in segmenting several abdominal organs from CT scans. | Computer-aided diagnosis (CAD) is a research area which aims at helping human doctors in clinics. Currently, a lot of CAD approaches start from medical imaging analysis to obtain accurate descriptions of the scanned organs, soft tissues, etc. One of the most popular topics in this area is object segmentation, i.e., determining which voxels belong to the target in 3D data, such as the abdominal CT scans studied in this paper. Recently, the success of deep convolutional neural networks for image classification @cite_1 @cite_7 @cite_6 @cite_26 has been transferred to object segmentation in both natural images @cite_22 @cite_5 and medical images @cite_25 @cite_3 . | {
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_7",
"@cite_1",
"@cite_6",
"@cite_3",
"@cite_5",
"@cite_25"
],
"mid": [
"2511730936",
"2952632681",
"1686810756",
"",
"2949650786",
"2432481613",
"",
"2952232639"
],
"abstract": [
"Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1) 2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL .",
"Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes one third of a second for a typical image.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"",
"Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.",
"Convolutional Neural Networks (CNNs) have been recently employed to solve problems from both the computer vision and medical image analysis fields. Despite their popularity, most approaches are only able to process 2D images while most medical data used in clinical practice consists of 3D volumes. In this work we propose an approach to 3D image segmentation based on a volumetric, fully convolutional, neural network. Our CNN is trained end-to-end on MRI volumes depicting prostate, and learns to predict segmentation for the whole volume at once. We introduce a novel objective function, that we optimise during training, based on Dice coefficient. In this way we can deal with situations where there is a strong imbalance between the number of foreground and background voxels. To cope with the limited number of annotated volumes available for training, we augment the data applying random non-linear transformations and histogram matching. We show in our experimental evaluation that our approach achieves good performances on challenging test data while requiring only a fraction of the processing time needed by other previous methods.",
"",
"There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at this http URL ."
]
} |
1812.00518 | 2903043295 | We focus on an important yet challenging problem: using a 2D deep network to deal with 3D segmentation for medical imaging analysis. Existing approaches either applied multi-view planar (2D) networks or directly used volumetric (3D) networks for this purpose, but both of them are not ideal: 2D networks cannot capture 3D contexts effectively, and 3D networks are both memory-consuming and less stable arguably due to the lack of pre-trained models. In this paper, we bridge the gap between 2D and 3D using a novel approach named Elastic Boundary Projection (EBP). The key observation is that, although the object is a 3D volume, what we really need in segmentation is to find its boundary which is a 2D surface. Therefore, we place a number of pivot points in the 3D space, and for each pivot, we determine its distance to the object boundary along a dense set of directions. This creates an elastic shell around each pivot which is initialized as a perfect sphere. We train a 2D deep network to determine whether each ending point falls within the object, and gradually adjust the shell so that it gradually converges to the actual shape of the boundary and thus achieves the goal of segmentation. EBP allows 3D segmentation without cutting the volume into slices or small patches, which stands out from conventional 2D and 3D approaches. EBP achieves promising accuracy in segmenting several abdominal organs from CT scans. | Prior to the deep learning era, planar image segmentation algorithms were often designed to detect the boundary of a 2D object @cite_17 @cite_2 @cite_30 . Although these approaches have been significantly outperformed by deep neural networks in the area of medical imaging analysis @cite_27 @cite_10 , we borrow the idea of finding the 2D boundary instead of the 3D volume and design our approach. | {
"cite_N": [
"@cite_30",
"@cite_27",
"@cite_2",
"@cite_10",
"@cite_17"
],
"mid": [
"",
"2171417304",
"2124351162",
"2158362736",
"2169551590"
],
"abstract": [
"",
"In this paper, an effective model-based approach for computer-aided kidney segmentation of abdominal CT images with anatomic structure consideration is presented. This automatic segmentation system is expected to assist physicians in both clinical diagnosis and educational training. The proposed method is a coarse to fine segmentation approach divided into two stages. First, the candidate kidney region is extracted according to the statistical geometric location of kidney within the abdomen. This approach is applicable to images of different sizes by using the relative distance of the kidney region to the spine. The second stage identifies the kidney by a series of image processing operations. The main elements of the proposed system are: 1) the location of the spine is used as the landmark for coordinate references; 2) elliptic candidate kidney region extraction with progressive positioning on the consecutive CT images; 3) novel directional model for a more reliable kidney region seed point identification; and 4) adaptive region growing controlled by the properties of image homogeneity. In addition, in order to provide different views for the physicians, we have implemented a visualization tool that will automatically show the renal contour through the method of second-order neighborhood edge detection. We considered segmentation of kidney regions from CT scans that contain pathologies in clinical practice. The results of a series of tests on 358 images from 30 patients indicate an average correlation coefficient of up to 88 between automatic and manual segmentation",
"The problem of efficient, interactive foreground background segmentation in still images is of great practical importance in image editing. Classical image segmentation tools use either texture (colour) information, e.g. Magic Wand, or edge (contrast) information, e.g. Intelligent Scissors. Recently, an approach based on optimization by graph-cut has been developed which successfully combines both types of information. In this paper we extend the graph-cut approach in three respects. First, we have developed a more powerful, iterative version of the optimisation. Secondly, the power of the iterative algorithm is used to simplify substantially the user interaction needed for a given quality of result. Thirdly, a robust algorithm for \"border matting\" has been developed to estimate simultaneously the alpha-matte around an object boundary and the colours of foreground pixels. We show that for moderately difficult examples the proposed method outperforms competitive tools.",
"In this paper we present a hierarchical, learning-based approach for automatic and accurate liver segmentation from 3D CT volumes. We target CT volumes that come from largely diverse sources (e.g., diseased in six different organs) and are generated by different scanning protocols (e.g., contrast and non-contrast, various resolution and position). Three key ingredients are combined to solve the segmentation problem. First, a hierarchical framework is used to efficiently and effectively monitor the accuracy propagation in a coarse-to-fine fashion. Second, two new learning techniques, marginal space learning and steerable features, are applied for robust boundary inference. This enables handling of highly heterogeneous texture pattern. Third, a novel shape space initialization is proposed to improve traditional methods that are limited to similarity transformation. The proposed approach is tested on a challenging dataset containing 174 volumes. Our approach not only produces excellent segmentation accuracy, but also runs about fifty times faster than state-of-the-art solutions [7, 9].",
"In this paper we describe a new technique for general purpose interactive segmentation of N-dimensional images. The user marks certain pixels as \"object\" or \"background\" to provide hard constraints for segmentation. Additional soft constraints incorporate both boundary and region information. Graph cuts are used to find the globally optimal segmentation of the N-dimensional image. The obtained solution gives the best balance of boundary and region properties among all segmentations satisfying the constraints. The topology of our segmentation is unrestricted and both \"object\" and \"background\" segments may consist of several isolated parts. Some experimental results are presented in the context of photo video editing and medical image segmentation. We also demonstrate an interesting Gestalt example. A fast implementation of our segmentation method is possible via a new max-flow algorithm."
]
} |
1812.00552 | 2903429511 | Universal adversarial perturbations (UAPs), a.k.a. input-agnostic perturbations, have been proven to exist and to be able to fool cutting-edge deep learning models on most of the data samples. Existing UAP methods mainly focus on attacking image classification models. Nevertheless, little attention has been paid to attacking image retrieval systems. In this paper, we make the first attempt at attacking image retrieval systems. Concretely, an image retrieval attack aims to make the retrieval system return irrelevant images to the query at the top of the ranking list. Corrupting the neighbourhood relationships among features plays an important role in image retrieval attack. To this end, we propose a novel method to generate a retrieval-against UAP that breaks the neighbourhood relationships of image features via degrading the corresponding ranking metric. To expand the attack method to scenarios with varying input sizes or untouchable network parameters, a multi-scale random resizing scheme and a ranking distillation strategy are proposed. We evaluate the proposed method on four widely-used image retrieval datasets, and report a significant performance drop in terms of different metrics, such as mAP and mP@10. Finally, we test our attack methods on a real-world visual search engine, i.e., Google Images, which demonstrates the practical potential of our methods. | Szegedy et al. @cite_9 have demonstrated that neural networks can be fooled by adversarial examples: clean images intentionally perturbed by adding noise, called an adversarial perturbation, that is quasi-imperceptible to human eyes. Subsequently, various methods have been proposed to generate such perturbations @cite_31 @cite_21 @cite_42 . Simple methods such as FGSM @cite_31 determine the perturbation with a one-step gradient-based method. An iterative scheme is proposed in @cite_36 to achieve better attacking performance via applying gradient ascent multiple times. Besides, complex approaches like @cite_42 find perturbations from the perspective of the classification boundary. However, these methods compute perturbations for each data point specifically and independently. More recently, Moosavi-Dezfooli et al. @cite_32 have shown that there exist universal adversarial perturbations (UAPs), which aim to find an image-agnostic perturbation that leads to wrong labels for most natural images. A UAP is a single adversarial noise pattern that is trained offline and can adversarially perturb the corresponding outputs of a given model online. It is observed that perturbations crafted for specific models or training sets can fool other models and datasets @cite_31 @cite_40 , referred to as transfer attacking, which is widely adopted in black-box attacks where no information about the model is known in advance. | {
"cite_N": [
"@cite_36",
"@cite_9",
"@cite_21",
"@cite_42",
"@cite_32",
"@cite_40",
"@cite_31"
],
"mid": [
"2460937040",
"2964153729",
"2774644650",
"2243397390",
"2543927648",
"",
"2963207607"
],
"abstract": [
"Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.",
"Abstract: Deep neural networks are highly expressive models that have recently achieved state of the art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains of the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extend. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.",
"Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most of existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. With this method, we won the first places in NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions.",
"State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.1",
"Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, albeit being quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks. The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers. It further outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.",
"",
"Abstract: Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset."
]
} |
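FGSM, cited above as the simple one-step gradient-based method, perturbs the input along the sign of the loss gradient. A hedged sketch assuming PyTorch; the epsilon budget and the [0, 1] pixel range are illustrative defaults.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=8 / 255):
    """One-step FGSM: move each pixel by epsilon in the loss-increasing direction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # single gradient-sign step
    return x_adv.clamp(0, 1).detach()     # stay in the valid pixel range
```

The iterative scheme mentioned above essentially applies this step several times with a smaller step size, re-projecting into the epsilon-ball after each update.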
1812.00500 | 2903184186 | It is still challenging to build an AI system that can perform tasks that involve vision and language at a human level. So far, researchers have singled out individual tasks separately, for each of which they have designed networks and trained them on that task's dedicated datasets. Although this approach has seen a certain degree of success, it comes with difficulties in understanding relations among different tasks and in transferring the knowledge learned for one task to others. We propose a multi-task learning approach that enables learning a vision-language representation that is shared by many tasks from their diverse datasets. The representation is hierarchical, and prediction for each task is computed from the representation at its corresponding level of the hierarchy. We show through experiments that our method consistently outperforms previous single-task-learning methods on image caption retrieval, visual question answering, and visual grounding. We also analyze the learned hierarchical representation by visualizing attention maps generated in our network. | Since its introduction @cite_19 , multi-task learning has achieved many successes in several areas including computer vision and natural language processing. However, there have been only a few works that explored joint learning of multiple multi-modal tasks of vision and language. @cite_4 proposed a method for learning relations between multiple regions in the image by jointly refining the features of three different semantic tasks: scene graph generation, object detection, and image region captioning. @cite_16 showed that joint training on VQA and VQG (visual question generation) contributes to improving VQA accuracy and also to understanding the interactions among images, questions, and answers. Although these works have demonstrated the potential of multi-task learning for vision-language tasks, they strongly rely on the availability of datasets providing supervision over multiple tasks, where an input is shared by all the tasks while a different label is given to it for each task. | {
"cite_N": [
"@cite_19",
"@cite_16",
"@cite_4"
],
"mid": [
"",
"2759816466",
"2963649796"
],
"abstract": [
"",
"Recently visual question answering (VQA) and visual question generation (VQG) are two trending topics in the computer vision, which have been explored separately. In this work, we propose an end-to-end unified framework, the Invertible Question Answering Network (iQAN), to leverage the complementary relations between questions and answers in images by jointly training the model on VQA and VQG tasks. Corresponding parameter sharing scheme and regular terms are proposed as constraints to explicitly leverage Q,A's dependencies to guide the training process. After training, iQAN can take either question or answer as input, then output the counterpart. Evaluated on the large-scale visual question answering datasets CLEVR and VQA2, our iQAN improves the VQA accuracy over the baselines. We also show the dual learning framework of iQAN can be generalized to other VQA architectures and consistently improve the results over both the VQA and VQG tasks.",
"Object detection, scene graph generation and region captioning, which are three scene understanding tasks at different semantic levels, are tied together: scene graphs are generated on top of objects detected in an image with their pairwise relationship predicted, while region captioning gives a language description of the objects, their attributes, relations and other context information. In this work, to leverage the mutual connections across semantic levels, we propose a novel neural network model, termed as Multi-level Scene Description Network (denoted as MSDN), to solve the three vision tasks jointly in an end-to-end manner. Object, phrase, and caption regions are first aligned with a dynamic graph based on their spatial and semantic connections. Then a feature refining structure is used to pass messages across the three levels of semantic tasks through the graph. We benchmark the learned model on three tasks, and show the joint learning across three tasks with our proposed method can bring mutual improvements over previous models. Particularly, on the scene graph generation task, our proposed method outperforms the stateof- art method with more than 3 margin. Code has been made publicly available."
]
} |
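The multi-task works surveyed above share one representation across tasks while keeping per-task predictors. An illustrative hard-parameter-sharing sketch, assuming PyTorch; the head names, feature dimension, and output sizes are hypothetical and not taken from any cited model.

```python
import torch.nn as nn

class SharedMultiTask(nn.Module):
    """One shared trunk feeding separate task-specific heads."""

    def __init__(self, in_dim=512, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.vqa_head = nn.Linear(hidden, 1000)       # e.g., an answer vocabulary
        self.retrieval_head = nn.Linear(hidden, 128)  # e.g., a joint embedding

    def forward(self, feats, task):
        h = self.trunk(feats)  # representation shared by all tasks
        return self.vqa_head(h) if task == "vqa" else self.retrieval_head(h)
```

Gradients from every task update the shared trunk, which is how joint training lets one task's supervision benefit the others.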
1812.00564 | 2903470619 | Can health entities collaboratively train deep learning models without sharing sensitive raw data? This paper proposes several configurations of a distributed deep learning method called SplitNN to facilitate such collaborations. SplitNN does not share raw data or model details with collaborating institutions. The proposed configurations of splitNN cater to practical settings of i) entities holding different modalities of patient data, ii) centralized and local health entities collaborating on multiple tasks, and iii) learning without sharing labels. We compare performance and resource efficiency trade-offs of splitNN and other distributed deep learning methods like federated learning and large batch synchronous stochastic gradient descent, and show highly encouraging results for splitNN. | In addition to splitNN @cite_8 , techniques of federated deep learning @cite_18 and large batch synchronous stochastic gradient descent (SGD) @cite_22 are the currently available approaches for distributed deep learning. There has been no work as yet on federated deep learning and large batch synchronous SGD methods with regard to their applicability to useful non-vanilla settings of distributed deep learning studied in the rest of this paper, such as a) distributed deep learning with vertically partitioned data, b) distributed deep learning without label sharing, c) distributed semi-supervised learning and d) distributed multi-task learning. That said, with regard to ‘non-neural network’ based federated learning techniques, the work in @cite_1 shows their applicability to vertically partitioned distributed data @cite_19 @cite_5 @cite_6 , and @cite_10 shows applicability to multi-task learning in distributed settings. We now propose configurations of splitNN for all these useful settings in the rest of this paper. | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_8",
"@cite_1",
"@cite_6",
"@cite_19",
"@cite_5",
"@cite_10"
],
"mid": [
"2885216147",
"2336650964",
"2963209930",
"2773194476",
"",
"2039795745",
"1997375126",
"2140613126"
],
"abstract": [
"Machine learning models benefit from large and diverse datasets. Using such datasets, however, often requires trusting a centralized data aggregator. For sensitive applications like healthcare and finance this is undesirable as it could compromise patient privacy or divulge trade secrets. Recent advances in secure and privacy-preserving computation, including trusted hardware enclaves and differential privacy, offer a way for mutually distrusting parties to efficiently train a machine learning model without revealing the training data. In this work, we introduce Myelin, a deep learning framework which combines these privacy-preservation primitives, and use it to establish a baseline level of performance for fully private machine learning.",
"Distributed training of deep learning models on large-scale training data is typically conducted with asynchronous stochastic optimization to maximize the rate of updates, at the cost of additional noise introduced from asynchrony. In contrast, the synchronous approach is often thought to be impractical due to idle time wasted on waiting for straggling workers. We revisit these conventional beliefs in this paper, and examine the weaknesses of both approaches. We demonstrate that a third approach, synchronous optimization with backup workers, can avoid asynchronous noise while mitigating for the worst stragglers. Our approach is empirically validated and shown to converge faster and to better test accuracies.",
"Abstract In domains such as health care and finance, shortage of labeled data and computational resources is a critical issue while developing machine learning algorithms. To address the issue of labeled data scarcity in training and deployment of neural network-based systems, we propose a new technique to train deep neural networks over several data sources. Our method allows for deep neural networks to be trained using data from multiple entities in a distributed fashion. We evaluate our algorithm on existing datasets and show that it obtains performance which is similar to a regular neural network trained on a single machine. We further extend it to incorporate semi-supervised learning when training with few labeled samples, and analyze any security concerns that may arise. Our algorithm paves the way for distributed training of deep neural networks in data sensitive applications when raw data may not be shared directly.",
"Consider two data providers, each maintaining private records of different feature sets about common entities. They aim to learn a linear model jointly in a federated setting, namely, data is local and a shared model is trained from locally computed updates. In contrast with most work on distributed learning, in this scenario (i) data is split vertically, i.e. by features, (ii) only one data provider knows the target variable and (iii) entities are not linked across the data providers. Hence, to the challenge of private learning, we add the potentially negative consequences of mistakes in entity resolution. Our contribution is twofold. First, we describe a three-party end-to-end solution in two phases ---privacy-preserving entity resolution and federated logistic regression over messages encrypted with an additively homomorphic scheme---, secure against a honest-but-curious adversary. The system allows learning without either exposing data in the clear or sharing which entities the data providers have in common. Our implementation is as accurate as a naive non-private solution that brings all data in one place, and scales to problems with millions of entities with hundreds of features. Second, we provide what is to our knowledge the first formal analysis of the impact of entity resolution's mistakes on learning, with results on how optimal classifiers, empirical losses, margins and generalisation abilities are affected. Our results bring a clear and strong support for federated learning: under reasonable assumptions on the number and magnitude of entity resolution's mistakes, it can be extremely beneficial to carry out federated learning in the setting where each peer's data provides a significant uplift to the other.",
"",
"This paper addresses the vertical partitioning of a set of logical records or a relation into fragments. The rationale behind vertical partitioning is to produce fragments, groups of attribute columns, that “closely match” the requirements of transactions. Vertical partitioning is applied in three contexts: a database stored on devices of a single type, a database stored in different memory levels, and a distributed database. In a two-level memory hierarchy, most transactions should be processed using the fragments in primary memory. In distributed databases, fragment allocation should maximize the amount of local transaction processing. Fragments may be nonoverlapping or overlapping. A two-phase approach for the determination of fragments is proposed; in the first phase, the design is driven by empirical objective functions which do not require specific cost information. The second phase performs cost optimization by incorporating the knowledge of a specific application environment. The algorithms presented in this paper have been implemented, and examples of their actual use are shown.",
"In addition to indexes and materialized views, horizontal and vertical partitioning are important aspects of physical design in a relational database system that significantly impact performance. Horizontal partitioning also provides manageability; database administrators often require indexes and their underlying tables partitioned identically so as to make common operations such as backup restore easier. While partitioning is important, incorporating partitioning makes the problem of automating physical design much harder since: (a) The choices of partitioning can strongly interact with choices of indexes and materialized views. (b) A large new space of physical design alternatives must be considered. (c) Manageability requirements impose a new constraint on the problem. In this paper, we present novel techniques for designing a scalable solution to this integrated physical design problem that takes both performance and manageability into account. We have implemented our techniques and evaluated it on Microsoft SQL Server. Our experiments highlight: (a) the importance of taking an integrated approach to automated physical design and (b) the scalability of our techniques.",
"Efficient management of RDF data is an important factor in realizing the Semantic Web vision. Performance and scalability issues are becoming increasingly pressing as Semantic Web technology is applied to real-world applications. In this paper, we examine the reasons why current data management solutions for RDF data scale poorly, and explore the fundamental scalability limitations of these approaches. We review the state of the art for improving performance for RDF databases and consider a recent suggestion, \"property tables.\" We then discuss practically and empirically why this solution has undesirable features. As an improvement, we propose an alternative solution: vertically partitioning the RDF data. We compare the performance of vertical partitioning with prior art on queries generated by a Web-based RDF browser over a large-scale (more than 50 million triples) catalog of library data. Our results show that a vertical partitioned schema achieves similar performance to the property table technique while being much simpler to design. Further, if a column-oriented DBMS (a database architected specially for the vertically partitioned case) is used instead of a row-oriented DBMS, another order of magnitude performance improvement is observed, with query times dropping from minutes to several seconds."
]
} |
1812.00469 | 2903265578 | In this paper, we propose a general approach to optimize anchor boxes for object detection. Nowadays, anchor boxes are widely adopted in state-of-the-art detection frameworks. However, all these frameworks pre-define anchor box shapes in a heuristic way and fix the size during training. To improve the accuracy and reduce the effort to design the anchor boxes, we propose to dynamically learn the shapes, which allows the anchors to automatically adapt to the data distribution and the network learning capability. The learning approach can be easily implemented in the stochastic gradient descent way and be plugged into any anchor box-based detection framework. The extra training cost is almost negligible and it has no impact on the inference time cost. Exhaustive experiments also demonstrate that the proposed anchor optimization method consistently achieves significant improvement ( @math mAP absolute gain) over the baseline method on several benchmark datasets including Pascal VOC 07+12, MS COCO and Brainwash. Meanwhile, the robustness is also verified towards different anchor box initialization methods, which greatly simplifies the problem of anchor box design. | When the general object detection framework is applied to specific problems, the anchor sizes have to be revisited and modified accordingly. For example, for text detection in @cite_5 , the aspect ratios also include @math and @math as well as @math , @math , @math , @math , @math , since text can be wider or taller than general objects. For face detection in @cite_28 @cite_9 , the aspect ratio only includes @math since the face is roughly square in shape. For pedestrian detection in @cite_1 , a ratio of 0.41 based on @cite_8 is adopted for the anchor box. As suggested in @cite_1 , inappropriate anchor boxes could be noisy and degrade the accuracy. | {
"cite_N": [
"@cite_8",
"@cite_28",
"@cite_9",
"@cite_1",
"@cite_5"
],
"mid": [
"2031454541",
"2747648373",
"2750317406",
"2497039038",
"2784050770"
],
"abstract": [
"Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.",
"We introduce the Single Stage Headless (SSH) face detector. Unlike two stage proposal-classification detectors, SSH detects faces in a single stage directly from the early convolutional layers in a classification network. SSH is headless. That is, it is able to achieve state-of-the-art results while removing the \"head\" of its underlying classification network -- i.e. all fully connected layers in the VGG-16 which contains a large number of parameters. Additionally, instead of relying on an image pyramid to detect faces with various scales, SSH is scale-invariant by design. We simultaneously detect faces with different scales in a single forward pass of the network, but from different layers. These properties make SSH fast and light-weight. Surprisingly, with a headless VGG-16, SSH beats the ResNet-101-based state-of-the-art on the WIDER dataset. Even though, unlike the current state-of-the-art, SSH does not use an image pyramid and is 5X faster. Moreover, if an image pyramid is deployed, our light-weight network achieves state-of-the-art on all subsets of the WIDER dataset, improving the AP by 2.5 . SSH also reaches state-of-the-art results on the FDDB and Pascal-Faces datasets while using a small input size, leading to a runtime of 50 ms image on a GPU. The code is available at this https URL.",
"This paper presents a real-time face detector, named Single Shot Scale-invariant Face Detector (S @math FD), which performs superiorly on various scales of faces with a single deep neural network, especially for small faces. Specifically, we try to solve the common problem that anchor-based detectors deteriorate dramatically as the objects become smaller. We make contributions in the following three aspects: 1) proposing a scale-equitable face detection framework to handle different scales of faces well. We tile anchors on a wide range of layers to ensure that all scales of faces have enough features for detection. Besides, we design anchor scales based on the effective receptive field and a proposed equal proportion interval principle; 2) improving the recall rate of small faces by a scale compensation anchor matching strategy; 3) reducing the false positive rate of small faces via a max-out background label. As a consequence, our method achieves state-of-the-art detection performance on all the common face detection benchmarks, including the AFW, PASCAL face, FDDB and WIDER FACE datasets, and can run at 36 FPS on a Nvidia Titan X (Pascal) for VGA-resolution images.",
"Detecting pedestrian has been arguably addressed as a special topic beyond general object detection. Although recent deep learning object detectors such as Fast Faster R-CNN have shown excellent performance for general object detection, they have limited success for detecting pedestrian, and previous leading pedestrian detectors were in general hybrid methods combining hand-crafted and deep convolutional features. In this paper, we investigate issues involving Faster R-CNN for pedestrian detection. We discover that the Region Proposal Network (RPN) in Faster R-CNN indeed performs well as a stand-alone pedestrian detector, but surprisingly, the downstream classifier degrades the results. We argue that two reasons account for the unsatisfactory accuracy: (i) insufficient resolution of feature maps for handling small instances, and (ii) lack of any bootstrapping strategy for mining hard negative examples. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection, using an RPN followed by boosted forests on shared, high-resolution convolutional feature maps. We comprehensively evaluate this method on several benchmarks (Caltech, INRIA, ETH, and KITTI), presenting competitive accuracy and good speed. Code will be made publicly available.",
"Scene text detection is an important step of scene text recognition system and also a challenging problem. Different from general object detections, the main challenges of scene text detection lie on arbitrary orientations, small sizes, and significantly variant aspect ratios of text in natural images. In this paper, we present an end-to-end trainable fast scene text detector, named TextBoxes++, which detects arbitrary-oriented scene text with both high accuracy and efficiency in a single network forward pass. No post-processing other than efficient non-maximum suppression is involved. We have evaluated the proposed TextBoxes++ on four public data sets. In all experiments, TextBoxes++ outperforms competing methods in terms of text localization accuracy and runtime. More specifically, TextBoxes++ achieves an f-measure of 0.817 at 11.6 frames s for 1024 × 1024 ICDAR 2015 incidental text images and an f-measure of 0.5591 at 19.8 frames s for 768 × 768 COCO-Text images. Furthermore, combined with a text recognizer, TextBoxes++ significantly outperforms the state-of-the-art approaches for word spotting and end-to-end text recognition tasks on popular benchmarks. Code is available at: https: github.com MhLiao TextBoxes_plusplus."
]
} |
1812.00469 | 2903265578 | In this paper, we propose a general approach to optimize anchor boxes for object detection. Nowadays, anchor boxes are widely adopted in state-of-the-art detection frameworks. However, all these frameworks pre-define anchor box shapes in a heuristic way and fix the size during training. To improve the accuracy and reduce the effort to design the anchor boxes, we propose to dynamically learn the shapes, which allows the anchors to automatically adapt to the data distribution and the network learning capability. The learning approach can be easily implemented in the stochastic gradient descent way and be plugged into any anchor box-based detection framework. The extra training cost is almost negligible and it has no impact on the inference time cost. Exhaustive experiments also demonstrate that the proposed anchor optimization method consistently achieves significant improvement ( @math mAP absolute gain) over the baseline method on several benchmark datasets including Pascal VOC 07+12, MS COCO and Brainwash. Meanwhile, the robustness is also verified towards different anchor box initialization methods, which greatly simplifies the problem of anchor box design. | To ease the effort of anchor shape design, the most relevant work might be MetaAnchor @cite_22 . Leveraging neural network weight prediction, the anchors are modeled as functions implemented by an extra neural network and computed from customized prior boxes. The mechanism is shown to be robust to anchor settings and bounding box distributions, compared to a predefined fixed anchor scheme. However, the method involves an extra network to predict the weights of another neural network, resulting in extra training effort and inference time cost, and it also needs a set of customized prior boxes chosen by hand. Comparatively, our method can be easily embedded into any detection framework without an extra network, and has negligible impact on the training time/space cost and no impact on the inference time. | {
"cite_N": [
"@cite_22"
],
"mid": [
"2810862788"
],
"abstract": [
"We propose a novel and flexible anchor mechanism named MetaAnchor for object detection frameworks. Unlike many previous detectors model anchors via a predefined manner, in MetaAnchor anchor functions could be dynamically generated from the arbitrary customized prior boxes. Taking advantage of weight prediction, MetaAnchor is able to work with most of the anchor-based object detection systems such as RetinaNet. Compared with the predefined anchor scheme, we empirically find that MetaAnchor is more robust to anchor settings and bounding box distributions; in addition, it also shows the potential on the transfer task. Our experiment on COCO detection task shows MetaAnchor consistently outperforms the counterparts in various scenarios."
]
} |
1812.00651 | 2903467685 | Rumour is a collective emergent phenomenon with a potential for provoking a crisis. Modelling approaches have been deployed for five decades; however, the focus was mostly on the epidemic behaviour of rumours, which does not take into account the differences between the agents. We use social practice theory to model agent decision making in organizational rumourmongering. Such an approach provides us with an opportunity to model rumourmongering agents with a layer of cognitive realism and study the impacts of various intervention strategies for prevention and control of rumours in organizations. | The research area of agent-based social simulations (ABSS) specializes in simulating social phenomena as phenomena that emerge from the behaviour of individual agents. ABSS is a powerful tool for empirical research. It offers a natural environment for the study of connectionist phenomena in social science. This approach permits one to study how individual behaviours give rise to macroscopic phenomena @cite_10 . Such an approach is an ideal way to study the macro effects of various social practices, because it can capture routines which are practiced by individuals on a regular basis at the micro level and see their collective influence at the macro level. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2902549507"
],
"abstract": [
"To understand societ al phenomena through simulation, we need computational variants of socio-cognitive theories. Social Practice Theory has provided a unique understanding of social phenomena regarding the routinized, social and interconnected aspects of behaviour. This paper provides the Social Practice Agent (SoPrA) model that enables the use of Social Practice Theory (SPT) for agent-based simulations. We extract requirements from SPT, construct a computational model in the Unified Modelling Language, verify its implementation in Netlogo and Prot 'eg 'e and show how SoPrA maps on a use case of commuting. The next step is to model the dynamic aspect of SPT and validate SoPrA's ability to provide understanding in different scenario's. This paper provides the groundwork with a computational model that is a correct depiction of SPT, computational feasible and can be directly mapped to the habitual, social and interconnected aspects of a target scenario."
]
} |
1812.00285 | 2952485586 | Curriculum learning in reinforcement learning is a training methodology that seeks to speed up learning of a difficult target task, by first training on a series of simpler tasks and transferring the knowledge acquired to the target task. Automatically choosing a sequence of such tasks (i.e. a curriculum) is an open problem that has been the subject of much recent work in this area. In this paper, we build upon a recent method for curriculum design, which formulates the curriculum sequencing problem as a Markov Decision Process. We extend this model to handle multiple transfer learning algorithms, and show for the first time that a curriculum policy over this MDP can be learned from experience. We explore various representations that make this possible, and evaluate our approach by learning curriculum policies for multiple agents in two different domains. The results show that our method produces curricula that can train agents to perform on a target task as fast or faster than existing methods. | The problem of curriculum learning has similarities to the problem of source task selection in transfer learning. In this problem, the goal is to select the best source task from a prespecified set for a given target task. These approaches typically compute a similarity measure between the MDPs of the source and target task @cite_29 @cite_12 , or learn a model of transferability that can be applied to novel source-target task pairs @cite_21 @cite_22 . However, none of these methods have been successfully applied to select a multi-step sequence of tasks. | {
"cite_N": [
"@cite_29",
"@cite_21",
"@cite_22",
"@cite_12"
],
"mid": [
"1546944959",
"2271262891",
"2578423033",
"2141559023"
],
"abstract": [
"A popular approach to solving large probabilistic systems relies on aggregating states based on a measure of similarity. Many approaches in the literature are heuristic. A number of recent methods rely instead on metrics based on the notion of bisimulation, or behavioral equivalence between states (, 2001, 2003; , 2004). An integral component of such metrics is the Kantorovich metric between probability distributions. However, while this metric enables many satisfying theoretical properties, it is costly to compute in practice. In this paper, we use techniques from network optimization and statistical sampling to overcome this problem. We obtain in this manner a variety of distance functions for MDP state aggregation, which differ in the tradeoff between time and space complexity, as well as the quality of the aggregation. We provide an empirical evaluation of these trade-offs.",
"In a reinforcement learning setting, the goal of transfer learning is to improve performance on a target task by re-using knowledge from one or more source tasks. A key problem in transfer learning is how to choose appropriate source tasks for a given target task. Current approaches typically require that the agent has some experience in the target domain, or that the target task is specified by a model (e.g., a Markov Decision Process) with known parameters. To address these limitations, this paper proposes a framework for selecting source tasks in the absence of a known model or target task samples. Instead, our approach uses meta-data (e.g., attribute-value pairs) associated with each task to learn the expected benefit of transfer given a source-target task pair. To test the method, we conducted a large-scale experiment in the Ms. Pac-Man domain in which an agent played over 170 million games spanning 192 variations of the task. The agent used vast amounts of experience about transfer learning in the domain to model the benefit (or detriment) of transferring knowledge from one task to another. Subsequently, the agent successfully selected appropriate source tasks for previously unseen target tasks.",
"Knowledge transfer between tasks can improve the performance of learned models, but requires an accurate estimate of the inter-task relationships to identify the relevant knowledge to transfer. These inter-task relationships are typically estimated based on training data for each task, which is inefficient in lifelong learning settings where the goal is to learn each consecutive task rapidly from as little data as possible. To reduce this burden, we develop a lifelong reinforcement learning method based on coupled dictionary learning that incorporates high-level task descriptors to model the intertask relationships. We show that using task descriptors improves the performance of the learned task policies, providing both theoretical justification for the benefit and empirical demonstration of the improvement across a variety of dynamical control problems. Given only the descriptor for a new task, the lifelong learner is also able to accurately predict the task policy through zero-shot learning using the coupled dictionary, eliminating the need to pause to gather training data before addressing the task.",
"Transfer learning can improve the reinforcement learning of a new task by allowing the agent to reuse knowledge acquired from other source tasks. Despite their success, transfer learning methods rely on having relevant source tasks; transfer from inappropriate tasks can inhibit performance on the new task. For fully autonomous transfer, it is critical to have a method for automatically choosing relevant source tasks, which requires a similarity measure between Markov Decision Processes (MDPs). This issue has received little attention, and is therefore still a largely open problem. This paper presents a data-driven automated similarity measure for MDPs. This novel measure is a significant step toward autonomous reinforcement learning transfer, allowing agents to: (1) characterize when transfer will be useful and, (2) automatically select tasks to use for transfer. The proposed measure is based on the reconstruction error of a restricted Boltzmann machine that attempts to model the behavioral dynamics of the two MDPs being compared. Empirical results illustrate that this measure is correlated with the performance of transfer and therefore can be used to identify similar source tasks for transfer learning."
]
} |
1812.00285 | 2952485586 | Curriculum learning in reinforcement learning is a training methodology that seeks to speed up learning of a difficult target task, by first training on a series of simpler tasks and transferring the knowledge acquired to the target task. Automatically choosing a sequence of such tasks (i.e. a curriculum) is an open problem that has been the subject of much recent work in this area. In this paper, we build upon a recent method for curriculum design, which formulates the curriculum sequencing problem as a Markov Decision Process. We extend this model to handle multiple transfer learning algorithms, and show for the first time that a curriculum policy over this MDP can be learned from experience. We explore various representations that make this possible, and evaluate our approach by learning curriculum policies for multiple agents in two different domains. The results show that our method produces curricula that can train agents to perform on a target task as fast or faster than existing methods. | Finally, curriculum learning has also been explored in the context of supervised learning @cite_25 @cite_24 @cite_2 . Various related paradigms such as multi-task reinforcement learning @cite_30 and lifelong learning @cite_20 have also been examined. The main difference between curriculum learning and these works is that we have full control over the order in which tasks are selected, and the goal is to optimize performance for a specific target task, rather than all tasks. | {
"cite_N": [
"@cite_30",
"@cite_24",
"@cite_2",
"@cite_25",
"@cite_20"
],
"mid": [
"2169743339",
"2605801332",
"",
"2762242067",
"2106008664"
],
"abstract": [
"We consider the problem of multi-task reinforcement learning, where the agent needs to solve a sequence of Markov Decision Processes (MDPs) chosen randomly from a fixed but unknown distribution. We model the distribution over MDPs using a hierarchical Bayesian infinite mixture model. For each novel MDP, we use the previously learned distribution as an informed prior for modelbased Bayesian reinforcement learning. The hierarchical Bayesian framework provides a strong prior that allows us to rapidly infer the characteristics of new environments based on previous environments, while the use of a nonparametric model allows us to quickly adapt to environments we have not encountered before. In addition, the use of infinite mixtures allows for the model to automatically learn the number of underlying MDP components. We evaluate our approach and show that it leads to significant speedups in convergence to an optimal policy after observing only a small number of tasks.",
"We introduce a method for automatically selecting the path, or syllabus, that a neural network follows through a curriculum so as to maximise learning efficiency. A measure of the amount that the network learns from each data sample is provided as a reward signal to a nonstationary multi-armed bandit algorithm, which then determines a stochastic syllabus. We consider a range of signals derived from two distinct indicators of learning progress: rate of increase in prediction accuracy, and rate of increase in network complexity. Experimental results for LSTM networks on three curricula demonstrate that our approach can significantly accelerate learning, in some cases halving the time required to attain a satisfactory performance level.",
"",
"Abstract Layered learning is a hierarchical machine learning paradigm that enables learning of complex behaviors by incrementally learning a series of sub-behaviors. A key feature of layered learning is that higher layers directly depend on the learned lower layers. In its original formulation, lower layers were frozen prior to learning higher layers. This article considers a major extension to the paradigm that allows learning certain behaviors independently, and then later stitching them together by learning at the “seams” where their influences overlap. The UT Austin Villa 2014 RoboCup 3D simulation team, using such overlapping layered learning, learned a total of 19 layered behaviors for a simulated soccer-playing robot, organized both in series and in parallel. To the best of our knowledge this is more than three times the number of layered behaviors in any prior layered learning system. Furthermore, the complete learning process is repeated on four additional robot body types, showcasing its generality as a paradigm for efficient behavior learning. The resulting team won the RoboCup 2014 championship with an undefeated record, scoring 52 goals and conceding none. This article includes a detailed experimental analysis of the team's performance and the overlapping layered learning approach that led to its success.",
"Policy gradient algorithms have shown considerable recent success in solving high-dimensional sequential decision making tasks, particularly in robotics. However, these methods often require extensive experience in a domain to achieve high performance. To make agents more sample-efficient, we developed a multi-task policy gradient method to learn decision making tasks consecutively, transferring knowledge between tasks to accelerate learning. Our approach provides robust theoretical guarantees, and we show empirically that it dramatically accelerates learning on a variety of dynamical systems, including an application to quadrotor control."
]
} |
1812.00124 | 2902658950 | The labeling cost of a large number of bounding boxes is one of the main challenges for training modern object detectors. To reduce the dependence on expensive bounding box annotations, we propose a new semi-supervised object detection formulation, in which a few seed box level annotations and a large scale of image level annotations are used to train the detector. We adopt a training-mining framework, which is widely used in weakly supervised object detection tasks. However, the mining process inherently introduces various kinds of labelling noises: false negatives, false positives and inaccurate boundaries, which can be harmful for training the standard object detectors (e.g. Faster RCNN). We propose a novel NOise Tolerant Ensemble RCNN (NOTE-RCNN) object detector to handle such noisy labels. Compared to standard Faster RCNN, it contains three highlights: an ensemble of two classification heads and a distillation head to avoid overfitting on noisy labels and improve the mining precision, masking the negative sample loss in the box predictor to avoid the harm of false negative labels, and training the box regression head only on seed annotations to eliminate the harm from inaccurate boundaries of mined bounding boxes. We evaluate the methods on the ILSVRC 2013 and MSCOCO 2017 datasets; we observe that the detection accuracy consistently improves as we iterate between mining and training steps, and state-of-the-art performance is achieved. | The majority of recent work treats weakly supervised object detection as a Multiple Instance Learning (MIL) @cite_34 problem. An image is decomposed into object proposals using proposal generators, such as EdgeBox @cite_13 or SelectiveSearch @cite_9 . The basic pipeline is to iteratively mine (localize) objects as training samples using the detectors and then train detectors with the updated training samples. The detector can be a proposal-level SVM classifier @cite_24 @cite_2 @cite_25 or a modern CNN-based detector @cite_33 @cite_10 @cite_30 , such as RCNN @cite_15 or Fast RCNN @cite_28 . Deselaers et al. @cite_0 first argued for using the objectness score as a generic object appearance prior for the particular target categories. Cinbis et al. @cite_8 proposed a multi-fold multiple instance learning procedure, which prevents training from prematurely locking onto erroneous object locations. Uijlings et al. @cite_17 argued for using pre-trained detectors as the proposal generator and showed their effectiveness in knowledge transfer from source to target categories. | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_8",
"@cite_28",
"@cite_10",
"@cite_9",
"@cite_24",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_34",
"@cite_13",
"@cite_25",
"@cite_17"
],
"mid": [
"",
"",
"",
"",
"",
"",
"2952072685",
"2951270658",
"",
"2102605133",
"2110119381",
"7746136",
"",
"2962685835"
],
"abstract": [
"",
"",
"",
"",
"",
"",
"Learning to localize objects with minimal supervision is an important problem in computer vision, since large fully annotated datasets are extremely costly to obtain. In this paper, we propose a new method that achieves this goal with only image-level labels of whether the objects are present or not. Our approach combines a discriminative submodular cover problem for automatically discovering a set of positive object windows with a smoothed latent SVM formulation. The latter allows us to leverage efficient quasi-Newton optimization techniques. Our experiments demonstrate that the proposed approach provides a 50 relative improvement in mean average precision over the current state-of-the-art on PASCAL VOC 2007 detection.",
"Most existing weakly supervised localization (WSL) approaches learn detectors by finding positive bounding boxes based on features learned with image-level supervision. However, those features do not contain spatial location related information and usually provide poor-quality positive samples for training a detector. To overcome this issue, we propose a deep self-taught learning approach, which makes the detector learn the object-level features reliable for acquiring tight positive samples and afterwards re-train itself based on them. Consequently, the detector progressively improves its detection ability and localizes more informative positive samples. To implement such self-taught learning, we propose a seed sample acquisition method via image-to-object transferring and dense subgraph discovery to find reliable positive samples for initializing the detector. An online supportive sample harvesting scheme is further proposed to dynamically select the most confident tight positive samples and train the detector in a mutual boosting way. To prevent the detector from being trapped in poor optima due to overfitting, we propose a new relative improvement of predicted CNN scores for guiding the self-taught learning process. Extensive experiments on PASCAL 2007 and 2012 show that our approach outperforms the state-of-the-arts, strongly validating its effectiveness.",
"",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"The multiple instance problem arises in tasks where the training examples are ambiguous: a single example object may have many alternative feature vectors (instances) that describe it, and yet only one of those feature vectors may be responsible for the observed classification of the object. This paper describes and compares three kinds of algorithms that learn axis-parallel rectangles to solve the multiple instance problem. Algorithms that ignore the multiple instance problem perform very poorly. An algorithm that directly confronts the multiple instance problem (by attempting to identify which feature vectors are responsible for the observed classifications) performs best, giving 89 correct predictions on a musk odor prediction task. The paper also illustrates the use of artificial data to debug and compare these algorithms.",
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.",
"",
"We propose to revisit knowledge transfer for training object detectors on target classes from weakly supervised training images, helped by a set of source classes with bounding-box annotations. We present a unified knowledge transfer framework based on training a single neural network multi-class object detector over all source classes, organized in a semantic hierarchy. This generates proposals with scores at multiple levels in the hierarchy, which we use to explore knowledge transfer over a broad range of generality, ranging from class-specific (bycicle to motorbike) to class-generic (objectness to any class). Experiments on the 200 object classes in the ILSVRC 2013 detection dataset show that our technique (1) leads to much better performance on the target classes (70.3 CorLoc, 36.9 mAP) than a weakly supervised baseline which uses manually engineered objectness [11] (50.5 CorLoc, 25.4 mAP). (2) delivers target object detectors reaching 80 of the mAP of their fully supervised counterparts. (3) outperforms the best reported transfer learning results on this dataset (+41 CorLoc and +3 mAP over [18, 46], +16.2 mAP over [32]). Moreover, we also carry out several across-dataset knowledge transfer experiments [27, 24, 35] and find that (4) our technique outperforms the weakly supervised baseline in all dataset pairs by 1.5 A— - 1.9A—, establishing its general applicability."
]
} |
1812.00124 | 2902658950 | The labeling cost of a large number of bounding boxes is one of the main challenges for training modern object detectors. To reduce the dependence on expensive bounding box annotations, we propose a new semi-supervised object detection formulation, in which a few seed box level annotations and a large scale of image level annotations are used to train the detector. We adopt a training-mining framework, which is widely used in weakly supervised object detection tasks. However, the mining process inherently introduces various kinds of labelling noises: false negatives, false positives and inaccurate boundaries, which can be harmful for training the standard object detectors (e.g. Faster RCNN). We propose a novel NOise Tolerant Ensemble RCNN (NOTE-RCNN) object detector to handle such noisy labels. Compared to standard Faster RCNN, it contains three highlights: an ensemble of two classification heads and a distillation head to avoid overfitting on noisy labels and improve the mining precision, masking the negative sample loss in the box predictor to avoid the harm of false negative labels, and training the box regression head only on seed annotations to eliminate the harm from inaccurate boundaries of mined bounding boxes. We evaluate the methods on the ILSVRC 2013 and MSCOCO 2017 datasets; we observe that the detection accuracy consistently improves as we iterate between mining and training steps, and state-of-the-art performance is achieved. | Recently, there has also been work that designs end-to-end deep networks combined with multiple instance learning. Bilen et al. @cite_29 designed a two-stream network, one for classification and the other for localization; it outputs final scores for the proposals by element-wise multiplication of the scores from the two streams. The authors of @cite_31 proposed a context-aware CNN model based on contrast and additive contextual guidance, which improved the object localization accuracy. | {
"cite_N": [
"@cite_29",
"@cite_31"
],
"mid": [
"2101611867",
"2519284461"
],
"abstract": [
"Weakly supervised learning of object detection is an important problem in image understanding that still does not have a satisfactory solution. In this paper, we address this problem by exploiting the power of deep convolutional neural networks pre-trained on large-scale image-level classification tasks. We propose a weakly supervised deep detection architecture that modifies one such network to operate at the level of image regions, performing simultaneously region selection and classification. Trained as an image classifier, the architecture implicitly learns object detectors that are better than alternative weakly supervised detection systems on the PASCAL VOC data. The model, which is a simple and elegant end-to-end architecture, outperforms standard data augmentation and fine-tuning techniques for the task of image-level classification as well.",
"We aim to localize objects in images using image-level supervision only. Previous approaches to this problem mainly focus on discriminative object regions and often fail to locate precise object boundaries. We address this problem by introducing two types of context-aware guidance models, additive and contrastive models, that leverage their surrounding context regions to improve localization. The additive model encourages the predicted object region to be supported by its surrounding context region. The contrastive model encourages the predicted object region to be outstanding from its surrounding context region. Our approach benefits from the recent success of convolutional neural networks for object recognition and extends Fast R-CNN to weakly supervised object localization. Extensive experimental evaluation on the PASCAL VOC 2007 and 2012 benchmarks shows that our context-aware approach significantly improves weakly supervised localization and detection."
]
} |
1812.00124 | 2902658950 | The labeling cost of a large number of bounding boxes is one of the main challenges for training modern object detectors. To reduce the dependence on expensive bounding box annotations, we propose a new semi-supervised object detection formulation, in which a few seed box level annotations and a large scale of image level annotations are used to train the detector. We adopt a training-mining framework, which is widely used in weakly supervised object detection tasks. However, the mining process inherently introduces various kinds of labelling noises: false negatives, false positives and inaccurate boundaries, which can be harmful for training the standard object detectors (e.g. Faster RCNN). We propose a novel NOise Tolerant Ensemble RCNN (NOTE-RCNN) object detector to handle such noisy labels. Compared to standard Faster RCNN, it contains three highlights: an ensemble of two classification heads and a distillation head to avoid overfitting on noisy labels and improve the mining precision, masking the negative sample loss in the box predictor to avoid the harm of false negative labels, and training the box regression head only on seed annotations to eliminate the harm from inaccurate boundaries of mined bounding boxes. We evaluate the methods on the ILSVRC 2013 and MSCOCO 2017 datasets; we observe that the detection accuracy consistently improves as we iterate between mining and training steps, and state-of-the-art performance is achieved. | Note that in previous work @cite_23 , the definition of semi-supervised object detection is slightly different from ours, in which only the image-level labels and pre-trained source detectors are considered, but seed bounding box annotations are not used. Beginning from LSDA @cite_23 , Hoffman et al. proposed to learn parameter transferring functions between the classification network and the detection network, so that a classification model trained with image-level labels can be transferred to a detection model. Tang et al. @cite_12 explored the usage of visual and semantic similarities among the source categories and the target categories in the parameter transferring function. Hu et al. @cite_21 further extended this method to semi-supervised instance segmentation, which transfers models for object detection to instance segmentation. Uijlings et al. @cite_17 adopted the MIL framework from weakly supervised object detection, and replaced the unsupervised proposal generator @cite_13 with the pre-trained source detectors to use the shared knowledge. Li et al. @cite_11 proposed to use a small amount of location annotations to simultaneously perform disease identification and localization. | {
"cite_N": [
"@cite_11",
"@cite_21",
"@cite_23",
"@cite_13",
"@cite_12",
"@cite_17"
],
"mid": [
"",
"2768923469",
"2951084305",
"7746136",
"2418676188",
"2962685835"
],
"abstract": [
"",
"Most methods for object instance segmentation require all training examples to be labeled with segmentation masks. This requirement makes it expensive to annotate new categories and has restricted instance segmentation models to 100 well-annotated classes. The goal of this paper is to propose a new partially supervised training paradigm, together with a novel weight transfer function, that enables training instance segmentation models on a large set of categories all of which have box annotations, but only a small fraction of which have mask annotations. These contributions allow us to train Mask R-CNN to detect and segment 3000 visual concepts using box annotations from the Visual Genome dataset and mask annotations from the 80 classes in the COCO dataset. We evaluate our approach in a controlled study on the COCO dataset. This work is a first step towards instance segmentation models that have broad comprehension of the visual world.",
"A major challenge in scaling object detection is the difficulty of obtaining labeled images for large numbers of categories. Recently, deep convolutional neural networks (CNNs) have emerged as clear winners on object classification benchmarks, in part due to training with 1.2M+ labeled classification images. Unfortunately, only a small fraction of those labels are available for the detection task. It is much cheaper and easier to collect large quantities of image-level labels from search engines than it is to collect detection data and label it with precise bounding boxes. In this paper, we propose Large Scale Detection through Adaptation (LSDA), an algorithm which learns the difference between the two tasks and transfers this knowledge to classifiers for categories without bounding box annotated data, turning them into detectors. Our method has the potential to enable detection for the tens of thousands of categories that lack bounding box annotations, yet have plenty of classification data. Evaluation on the ImageNet LSVRC-2013 detection challenge demonstrates the efficacy of our approach. This algorithm enables us to produce a >7.6K detector by using available classification data from leaf nodes in the ImageNet tree. We additionally demonstrate how to modify our architecture to produce a fast detector (running at 2fps for the 7.6K detector). Models and software are available at",
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.",
"Deep CNN-based object detection systems have achieved remarkable success on several large-scale object detection benchmarks. However, training such detectors requires a large number of labeled bounding boxes, which are more difficult to obtain than image-level annotations. Previous work addresses this issue by transforming image-level classifiers into object detectors. This is done by modeling the differences between the two on categories with both imagelevel and bounding box annotations, and transferring this information to convert classifiers to detectors for categories without bounding box annotations. We improve this previous work by incorporating knowledge about object similarities from visual and semantic domains during the transfer process. The intuition behind our proposed method is that visually and semantically similar categories should exhibit more common transferable properties than dissimilar categories, e.g. a better detector would result by transforming the differences between a dog classifier and a dog detector onto the cat class, than would by transforming from the violin class. Experimental results on the challenging ILSVRC2013 detection dataset demonstrate that each of our proposed object similarity based knowledge transfer methods outperforms the baseline methods. We found strong evidence that visual similarity and semantic relatedness are complementary for the task, and when combined notably improve detection, achieving state-of-the-art detection performance in a semi-supervised setting.",
"We propose to revisit knowledge transfer for training object detectors on target classes from weakly supervised training images, helped by a set of source classes with bounding-box annotations. We present a unified knowledge transfer framework based on training a single neural network multi-class object detector over all source classes, organized in a semantic hierarchy. This generates proposals with scores at multiple levels in the hierarchy, which we use to explore knowledge transfer over a broad range of generality, ranging from class-specific (bycicle to motorbike) to class-generic (objectness to any class). Experiments on the 200 object classes in the ILSVRC 2013 detection dataset show that our technique (1) leads to much better performance on the target classes (70.3 CorLoc, 36.9 mAP) than a weakly supervised baseline which uses manually engineered objectness [11] (50.5 CorLoc, 25.4 mAP). (2) delivers target object detectors reaching 80 of the mAP of their fully supervised counterparts. (3) outperforms the best reported transfer learning results on this dataset (+41 CorLoc and +3 mAP over [18, 46], +16.2 mAP over [32]). Moreover, we also carry out several across-dataset knowledge transfer experiments [27, 24, 35] and find that (4) our technique outperforms the weakly supervised baseline in all dataset pairs by 1.5 A— - 1.9A—, establishing its general applicability."
]
} |
1812.00252 | 2920690946 | In this work, we present a novel framework for on-line human gait stability prediction of the elderly users of an intelligent robotic rollator using Long Short Term Memory (LSTM) networks, fusing multimodal RGB-D and Laser Range Finder (LRF) data from non-wearable sensors. A Deep Learning (DL) based approach is used for the upper body pose estimation. The detected pose is used for estimating the body Center of Mass (CoM) using Unscented Kalman Filter (UKF). An Augmented Gait State Estimation framework exploits the LRF data to estimate the legs' positions and the respective gait phase. These estimates are the inputs of an encoder-decoder sequence to sequence model which predicts the gait stability state as Safe or Fall Risk walking. It is validated with data from real patients, by exploring different network architectures, hyperparameter settings and by comparing the proposed method with other baselines. The presented LSTM-based human gait stability predictor is shown to provide robust predictions of the human stability state, and thus has the potential to be integrated into a general user-adaptive control architecture as a fall-risk alarm. | Fall detection and prevention is a hot topic in the field of assistive robotics @cite_22 . Most of the proposed control strategies for robotic assistive platforms in the literature do not deal with the problem of fall prevention, as research efforts focus on navigation and obstacle avoidance @cite_35 @cite_5 @cite_44 . However, there exists some targeted research focusing on incorporating strategies for preventing or detecting fall incidents and facilitating the user's mobility. In @cite_9 @cite_43 the authors developed an admittance controller for a passive walker with a fall-prevention function considering the position and velocity of the user, utilizing data from two LRFs. They model the user as a solid body link in order to compute the position of the center of gravity @cite_50 , based on which they apply a braking force on the rollator to prevent falls. A fall detection method for a cane robot was presented in @cite_45 @cite_4 , which computes the zero-moment-point stability of the elderly using on-shoe sensors that provide ground reaction forces. | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_22",
"@cite_9",
"@cite_44",
"@cite_43",
"@cite_45",
"@cite_50",
"@cite_5"
],
"mid": [
"2022548831",
"2293494488",
"2055698923",
"2122290439",
"2410134747",
"",
"2027268226",
"2736923069",
"2498528753"
],
"abstract": [
"This paper proposes a control approach for an active robotic walking support system based on environment feedback. The support system is controlled using imposed apparent dynamics and its parameters are varied by the environment information. The environment information will not cause any motion but will only change the characteristics or maneuverability of the support system. This approach leads to a passive behavior for an active walking support system. In addition, the stability of the support system based on the apparent dynamics is also discussed. This is important as a guideline on what parameters to vary that will not cause instability to the system. Experimental results are presented to show the validity of the control algorithm with environment feedback.",
"An intelligent walking-aid cane robot is developed for assisting the elderly and the physically challenged with walking. A motion control method is proposed for the cane robot based on human walking intention estimation. Moreover, the safety is investigated for both the cane robot and the elderly. The fall detection and prevention concepts are proposed to guarantee the safety of the elderly while walking with the cane robot. However, the deficiency of the cane robot is that it can be overturned easily because of its small size and light weight. Therefore, a controllable universal joint is designed for adjusting the tilted angle of its stick. The stability of the cane robot during the fall prevention procedure can then be enhanced by controlling the tilted angle of stick to an optimal position. A center of pressure (COP)-based fall detection (COP-FD) method is used to detect the risk of falling. In this method, the user's COP is calculated in real time using an integrated force sensory system, which comprises a six-axis force torque sensor and an inshoe load sensor. When the COP reaches the boundary of the specified safety area, i.e., the support polygon, it is assessed that the user is going to fall down. The COP-FD method can be used in various cases of falling. However, for cases of stumbling, a rapid fall detection method is proposed based on leg motion detection, and Dubois' fuzzy possibility theory is applied to adapt to different users. When the risk of falling has been detected, a fall prevention impedance control is executed considering both the interaction compliance and system stability. In the study, a control simulation platform was established to obtain the optimal controller parameters, and all the proposed methods were finally verified through simulations and experiments.",
"According to nihseniorhealth.gov (a website for older adults), falling represents a great threat as people get older, and providing mechanisms to detect and prevent falls is critical to improve people's lives. Over 1.6 million U.S. adults are treated for fall-related injuries in emergency rooms every year suffering fractures, loss of independence, and even death. It is clear then, that this problem must be addressed in a prompt manner, and the use of pervasive computing plays a key role to achieve this. Fall detection (FD) and fall prevention (FP) are research areas that have been active for over a decade, and they both strive for improving people's lives through the use of pervasive computing. This paper surveys the state of the art in FD and FP systems, including qualitative comparisons among various studies. It aims to serve as a point of reference for future research on the mentioned systems. A general description of FD and FP systems is provided, including the different types of sensors used in both approaches. Challenges and current solutions are presented and described in great detail. A 3-level taxonomy associated with the risk factors of a fall is proposed. Finally, cutting edge FD and FP systems are thoroughly reviewed and qualitatively compared, in terms of design issues and other parameters.",
"In this paper, we introduce a passive-type walker using servo brakes referred to as RT Walker. RT Walker realizes several functions such as obstacles steps avoidance function, path following function, gravity compensation function, variable motion characteristics function, etc., by controlling only servo brakes without using servo motors. These passive-type systems are dependable for using practically in real world environment, because of the passive dynamics with respect to the applied force moment, simple structure, lightweight, and so on. However, the most serious problem of them is the falling accident of user, because the passive-type systems are lightweight and move easily based on the small force moment applied by the user unintentionally. In this paper, we pay attention to a method for estimating the human state during the usage of the walker and propose a motion control algorithm for realizing a fall-prevention function based on its human state. We also implement the proposed control methods in RT Walker experimentally and illustrate the validity of them",
"Mobility assistance robots provide support to elderly or patients during walking. The design of a safe and intuitive assistance behavior is one of the major challenges in this context. We present an integrated approach for the context-specific, on-line adaptation of the assistance level of a rollator-type mobility assistance robot by gain-scheduling of low-level robot control parameters. A human-inspired decision-making model, the drift-diffusion Model, is introduced as the key principle to gain-schedule parameters and with this to adapt the provided robot assistance in order to achieve a human-like assistive behavior. The mobility assistance robot is designed to provide (a) cognitive assistance to help the user following a desired path towards a predefined destination as well as (b) sensorial assistance to avoid collisions with obstacles while allowing for an intentional approach of them. Further, the robot observes the user long-term performance and fatigue to adapt the overall level of (c) physical assistance provided. For each type of assistance a decision-making problem is formulated that affects different low-level control parameters. The effectiveness of the proposed approach is demonstrated in technical validation experiments. Moreover, the proposed approach is evaluated in a user study with 35 elderly persons. Obtained results indicate that the proposed gain-scheduling technique incorporating ideas of human decision-making models shows a general high potential for the application in adaptive shared control of mobility assistance robots.",
"",
"An intelligent cane robot is designed for assisting the elderly or physically challenged people walking in daily life. The cane robot is driven by an Omni-directional mobile base; an aluminous stick is fixed on the base. A variety of sensors are installed on the cane robot for estimating the user's intention and status (normal walking or falling). The user's intentions of to which user want to move can be estimated by analyzing the active force on cane robot. As a nursing-care robot, the safety is a most important concern; before the elderly fall over, the cane robot should detect the sign of the falling and control the robot to assist the elderly to prevent it. Therefore a fall detection concept is proposed to estimate the risk of the falling based on the theory of zero moment point (ZMP) stability. An on-shoe sensor is designed to measure the ground reaction force and calculate the ZMP based on distributed force. The safety walking status is defined in the case of the ZMP is in the boundary of the support polygon. While the ZMP moving out of that boundary, the user will fall over.",
"Various assistive machines have been developed to prevent falling accidents of the elderly. In order to achieve advanced support using robot technology, it is important to acquire data or real-time state estimation of user's various motions. However, a lot of expensive and sophisticated sensors utilized to estimate user's state accurately are difficult to use in general households or institutions. In this article, we propose a method to estimate the user's state utilizing a few inexpensive and simple sensors. We focused on CoG (Center of Gravity) to estimate user's state, but when utilizing less sensors than required to calculate the human link model parameters, the position of CoG is underspecified. Then we considered the range of value of unknown parameters to calculate candidates of CoG. The range of CoG candidates can become narrow enough to estimate human state in real-time by properly selecting and placing the sensors. Therefore, the evaluation of CoG candidates allows us to determine where and which sensors to set when designing assistive robots. We firstly selected some sensors which can be generally found on assistive machines, and we created sets of measurements using the number of unknown parameters. From the result of the experiment using a motion capture system, we confirmed that the range of the candidates was considerably narrow when using some of the created measurement sets. We validated the proposed method to estimate user's CoG candidates by actually placing the sensors according to the designed measurement sets and confirmed that the CoG candidates corresponded to those obtained using the motion capture system.",
"The concept of a physical and cognitive HRI for walker-assisted gait was presented in the previous chapter. The HRI is implemented by means of a multimodal interface, which is used to develop a natural human-robot interaction in the context of human mobility assistance. That way, both cHRI and pHRI were included in this interface. Specifically, this chapter describes the cHRI component, which combines two sensor modalities: active ranging sensing (LRF) and human motion capturing (IMU) to perform the human tracking. This sensor combination presents important advantages to monitor the human gait from a mobile robot point of view, such as mentioned in the previous last chapter."
]
} |
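The ZMP/COP fall-detection rule described in the abstracts above reduces to a geometric test: the user is judged safe while the measured pressure point lies inside the support polygon, and the stability margin is its distance to the boundary. The following Python sketch is purely illustrative; the polygon coordinates, the counter-clockwise vertex convention, and the margin sign are assumptions, not details taken from the papers.

```python
import numpy as np

def stability_margin(cop, polygon):
    """Distance from `cop` to the support-polygon boundary.

    `polygon` is an (N, 2) array of vertices in counter-clockwise order.
    Returns a positive margin if the COP is inside (safe walking) and a
    negative value if it is outside (fall risk). For outside points the
    value is the distance to the nearest edge line, which is a reasonable
    approximation away from the corners.
    """
    cop = np.asarray(cop, dtype=float)
    verts = np.asarray(polygon, dtype=float)
    signed = []
    for i in range(len(verts)):
        a, b = verts[i], verts[(i + 1) % len(verts)]
        edge = b - a
        normal = np.array([edge[1], -edge[0]])  # outward normal for CCW order
        normal /= np.linalg.norm(normal)
        signed.append(float(np.dot(cop - a, normal)))
    return -max(signed)  # inside iff COP is on the inner side of every edge

# Toy support polygon (metres) spanned by the user's feet:
support_polygon = np.array([[0.0, 0.0], [0.3, 0.0], [0.3, 0.25], [0.0, 0.25]])
print(stability_margin([0.15, 0.10], support_polygon))  # 0.10 -> safe
print(stability_margin([0.40, 0.10], support_polygon))  # -0.10 -> fall risk
```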
1812.00252 | 2920690946 | In this work, we present a novel framework for on-line human gait stability prediction of the elderly users of an intelligent robotic rollator using Long Short Term Memory (LSTM) networks, fusing multimodal RGB-D and Laser Range Finder (LRF) data from non-wearable sensors. A Deep Learning (DL) based approach is used for the upper body pose estimation. The detected pose is used for estimating the body Center of Mass (CoM) using an Unscented Kalman Filter (UKF). An Augmented Gait State Estimation framework exploits the LRF data to estimate the legs' positions and the respective gait phase. These estimates are the inputs of an encoder-decoder sequence-to-sequence model which predicts the gait stability state as Safe or Fall Risk walking. It is validated with data from real patients, by exploring different network architectures, hyperparameter settings, and by comparing the proposed method with other baselines. The presented LSTM-based human gait stability predictor is shown to provide robust predictions of the human stability state, and thus has the potential to be integrated into a general user-adaptive control architecture as a fall-risk alarm. | Regarding the extraction of gait motions, different types of sensors have been used @cite_27 @cite_49 . Gait analysis can be achieved by using Hidden Markov Models for modelling normal @cite_17 and pathological human gait @cite_26 , and extracting gait parameters @cite_28 . Recently, we have developed a new method for online augmented human state estimation that uses Interacting Multiple Model Particle Filters with Probabilistic Data Association @cite_24 , which tracks the users' legs using data from an LRF, while it provides real-time gait phase estimation. We have also presented a new human-robot formation controller that utilizes the gait status characterization for user adaptation towards a fall-prevention system @cite_13 . | {
"cite_N": [
"@cite_26",
"@cite_28",
"@cite_24",
"@cite_27",
"@cite_49",
"@cite_13",
"@cite_17"
],
"mid": [
"2197966930",
"2479812003",
"2794378256",
"2056873772",
"",
"2911193078",
"2041278045"
],
"abstract": [
"The precise analysis of a patient's or an elderly person's walking pattern is very important for an effective intelligent active mobility assistance robot. This walking pattern can be described by a cyclic motion, which can be modeled using the consecutive gait phases. In this paper, we present a completely non-invasive framework for analyzing and recognizing a pathological human walking gait pattern. Our framework utilizes a laser range finder sensor to detect and track the human legs, and an appropriately synthesized Hidden Markov Model (HMM) for state estimation, and recognition of the gait patterns. We demonstrate the applicability of this setup using real data, collected from an ensemble of different elderly persons with a number of pathologies. The results presented in this paper demonstrate that the proposed human data analysis scheme has the potential to provide the necessary methodological (modeling, inference, and learning) framework for a cognitive behavior-based robot control system. More specifically, the proposed framework has the potential to be used for the classification of specific walking pathologies, which is needed for the development of a context-aware robot mobility assistant.",
"A robust and effective gait analysis functionality is an essential characteristic for an assistance mobility robot dealing with elderly persons. The aforementioned functionality is crucial for dealing with mobility disabilities which are widespread in these parts of the population. In this work we present experimental validation of our in house developed system. We are using real data, collected from an ensemble of different elderly persons with a number of pathologies, and we present a validation study by using a GaitRite System. Our system, following the standard literature conventions, characterizes the human motion with a set of parameters which subsequently can be used to assess and distinguish between possible motion disabilities, using a laser range finder as its main sensor. The initial results, presented in this work, demonstrate the applicability of our framework in real test cases. Regarding such frameworks, a crucial technical question is the necessary complexity of the overall tracking system. To answer this question, we compare two approaches with different complexity levels. The first is a static rule based system acting on filtered laser data, while the second system utilizes a Hidden Markov Model for gait cycle estimation, and extraction of the gait parameters. The results demonstrate that the added complexity of the HMM system is necessary for improving the accuracy and efficacy of the system.",
"The accurate human gait tracking is an important factor for various robotic applications, such as robotic walkers aiming to provide assistance to patients with different mobility impairment, social robot companions, etc. A context-aware robot control architecture needs constant knowledge of the user's kinematic state to assess the patient's gait status and adjust its movement properly to provide optimal assistance. In this letter, we present a novel human gait tracking approach that uses two particle filters (PFs) and probabilistic data association (PDA) with an interacting multiple model (IMM) scheme for a real-time selection of the appropriate motion model according to the human gait analysis and the use of the Viterbi algorithm for an augmented human gait state estimation. The gait state estimates also interact with the IMM as a prior information that drives the Markov sampling process, while the PDA ensures that the legs of the same person are coupled. The observation data in this work are provided by a laser range finder mounted on a robotic assistant walker. A detailed experimental validation is presented using ground truth data from a motion capture system, which was used in real experiments with elder subjects who presented various mobility impairments. The validation analysis regards the algorithm's accuracy, robustness to occlusions and clutter, and the gait state classification success, subject to the effect of different number of samples used in the PFs. The results for the elder subjects show the dynamics of the proposed algorithm to be used in a real-time application due to its efficacy to provide accurate and robust augmented human gait estimates with a small number of particles.",
"Abstract For effective gait rehabilitation treatments, the status of a patient’s gait needs to be analyzed precisely. Since the gait motions are cyclic with several gait phases, the gait motions can be analyzed by gait phases. In this paper, a Hidden Markov Model (HMM) is applied to analyze the gait phases in the gait motions. Smart Shoes are utilized to obtain the ground reaction forces (GRFs) as observed data in the HMM. The posterior probabilities from the HMM are used to infer the gait phases, and the abnormal transition between gait phases are checked by the transition matrix. The proposed gait phase analysis methods have been applied to actual gait data, and the results show that the proposed methods have the potential of tools for diagnosing the status of a patient and evaluating a rehabilitation treatment.",
"",
"In this paper we describe a control strategy for a user-adaptive human-robot system for an intelligent robotic Mobility Assistive Device (MAD)using raw data from a single laser-range-finder (LRF)mounted on the MAD and scanning the walking area. The proposed control architecture consists of three modules. In the first module, a previously proposed methodology (termed IMM-PDA-PF)delivers the augmented human state estimation of the user by providing robust leg tracking and on-line estimation of the human gait phases. This information is processed at the next module for providing the pathological gait parametrization and characterization, by computing specific gait parameters for each gait cycle. These gait parameters form the feature vector that classifies the user in a certain class related to risk of fall. Those are of particular significance to the system, since the gait parameters and the respective class are used in the third module, i.e. the human-robot formation controller, in order to adapt the desired formation of the human-robot system, by selecting the appropriate control variables. The experimental evaluation comprises gait data from real patients, and demonstrates the stability of the human-robot formation control, indicating the importance of incorporating an on-line gait characterization of the user, using non-wearable and non-invasive methods, in the context of a robotic MAD.",
"For an effective intelligent active mobility assistance robot, the walking pattern of a patient or an elderly person has to be analyzed precisely. A well-known fact is that the walking patterns are gaits, that is, cyclic patterns with several consecutive phases. These cyclic motions can be modeled using the consecutive gait phases. In this paper, we present a completely non-invasive framework for analyzing a normal human walking gait pattern. Our framework utilizes a laser range finder sensor to collect the data, a combination of filters to preprocess these data, and an appropriately synthesized Hidden Markov Model (HMM) for state estimation, and recognition of the gait data. We demonstrate the applicability of this setup using real data, collected from an ensemble of different persons. The results presented in this paper demonstrate that the proposed human data analysis scheme has the potential to provide the necessary methodological (modeling, inference, and learning) framework for a cognitive behavior-based robot control system. More specifically, the proposed framework has the potential to be used for the recognition of abnormal gait patterns and the subsequent classification of specific walking pathologies, which is needed for the development of a contextaware robot mobility assistant."
]
} |
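For concreteness, a minimal encoder-decoder (sequence-to-sequence) LSTM classifier in the spirit of the abstract above could be sketched in PyTorch as follows. The feature dimension, hidden size, and single-step readout are assumptions made for illustration, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class GaitStabilityLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=64, n_classes=2):
        super().__init__()
        # Encoder summarizes the multimodal gait sequence (e.g., CoM
        # estimates, leg positions, gait phase) into its hidden state.
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        # Decoder unrolls over the encoded sequence; the last output is
        # read out as a single safe / fall-risk decision.
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                    # x: (batch, time, n_features)
        enc_out, state = self.encoder(x)
        dec_out, _ = self.decoder(enc_out, state)
        return self.head(dec_out[:, -1])     # logits: (batch, n_classes)

model = GaitStabilityLSTM()
logits = model(torch.randn(4, 100, 8))       # 4 sequences, 100 time steps
pred = logits.argmax(dim=1)                  # 0 = Safe, 1 = Fall Risk
```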
1812.00252 | 2920690946 | In this work, we present a novel framework for on-line human gait stability prediction of the elderly users of an intelligent robotic rollator using Long Short Term Memory (LSTM) networks, fusing multimodal RGB-D and Laser Range Finder (LRF) data from non-wearable sensors. A Deep Learning (DL) based approach is used for the upper body pose estimation. The detected pose is used for estimating the body Center of Mass (CoM) using an Unscented Kalman Filter (UKF). An Augmented Gait State Estimation framework exploits the LRF data to estimate the legs' positions and the respective gait phase. These estimates are the inputs of an encoder-decoder sequence-to-sequence model which predicts the gait stability state as Safe or Fall Risk walking. It is validated with data from real patients, by exploring different network architectures, hyperparameter settings, and by comparing the proposed method with other baselines. The presented LSTM-based human gait stability predictor is shown to provide robust predictions of the human stability state, and thus has the potential to be integrated into a general user-adaptive control architecture as a fall-risk alarm. | Gait stability is mostly analysed by using wearable sensors @cite_31 , like motion markers placed on the human body to calculate the body's CoM and the foot placements @cite_42 , and force sensors to estimate the center of pressure of the feet @cite_34 . Gait stability analysis for walking aid users can be found in @cite_3 . Regarding stability classification, an early approach can be found in @cite_46 , where the authors use the body skeleton provided by the RGB-D Kinect sensor as input and perform action classification to detect four classes of falling scenarios and sitting. However, the system is tested only with a physical therapist imitating different walking problems. | {
"cite_N": [
"@cite_31",
"@cite_42",
"@cite_3",
"@cite_46",
"@cite_34"
],
"mid": [
"2503939313",
"1983935089",
"2724898217",
"2620846147",
"2162455072"
],
"abstract": [
"Smart walkers aimed at providing better support than conventional walker. These devices have synchroneous movements with users but they have to act differently in case of unbalanced gait. The available discriminatory factors used in smart walkers are generally based only on position data. These static data cannot be representative of the dynamical process of falling. This paper proposes three methods that could be used to detect unbalanced gait. They are based on different kinematic data: forward velocity, angular velocity around transverse axis and eXtrapolated Center Of Mass (XCOM) with stability margins. These methods are evaluated and compared experimentally on 4 young healthy subjects experiencing unexpected unbalanced gait when using a robotic walker.",
"During gait the body is in a continuous state of imbalance, with each subsequent step preventing a fall. Gait balance is maintained by regulating the interactions between the center of mass (CoM) and base of support (BoS). The purpose of this study was to investigate the interaction of the CoM position and velocity (CoMv) in relation to the dynamically changing BoS throughout gait. This was quantified using: (1) The shortest distance from the CoM to the boundary of the BoS; (2) The distance from the CoM to the centroid of the BoS; and (3) The distance from the CoM to the BoS along the direction of the CoMv. These interactions were investigated in healthy young adults, healthy older adults, and elderly fallers, who performed level walking at a self-selected speed. Elderly fallers demonstrated a conservative CoM–BoS separation at toe off and reduced balance control ability, specifically a decreased time to contact, when compared to healthy young adults at heel strike. Decreased time available in responding to perturbations might result in a greater number of falls. Understanding foot position and CoM trajectories might allow for appropriate rehabilitation practices.",
"Abstract To assist balance and mobility, older adults are often prescribed walking aids. Nevertheless, surprisingly their use has been associated with increased falls-risk. To address this finding we first need to characterise a person's stability while using a walking aid. Therefore, we present a generalisable method for the assessment of stability of walking frame (WF) users. Our method, for the first time, considers user and device as a combined system. We define the combined centre of pressure (CoP system ) of user and WF to be the point through which the resultant ground reaction force for all feet of both the WF and user acts if theresultant moment acts only around an axisperpendicular tothe ground plane. We also define the combined base of support (BoS system ) to be the convex polygon formed by the boundaries of the anatomical and WF feet in contact with the ground and interconnecting lines between them. To measure these parameters we have developed an instrumented WF with a load cell in each foot which we use together with pressure-sensing insoles and a camera system, the latter providing the relative position of the WF and anatomical feet. Software uses the resulting data to calculate the stability margin of the combined system, defined as the distance between CoP system and the nearest edge of BoS system . Our software also calculates the weight supported through the frame and when each foot (of user and or frame) is on the floor. Finally, we present experimental work demonstrating the value of our approach.",
"Human action behavior classification plays an important role for controlling systems having interaction with human users. Safety and dependability of such systems are crucial especially for walking assist systems. In this paper, upper body joint model of a user of a walking assist system is extracted using a depth sensor and a probabilistic model is proposed to detect possible non-walking states that might happen to the user. The 3D model of upper body skeleton, is reduced in dimension by applying Principal Component Analysis (PCA). The principal components are tested to have a normal distribution allowing a multivariate normal distribution fitting for walking data. The model is shown to be capable of recognizing four different falling scenarios and sitting. In these non-walking states, the motion of a passive-type walker called “RT Walker”, is controlled by generating brake force to assure fall prevention and sitting standing up support. The experimental data is gathered from an experienced physical therapist capable of imitating different walking problems.",
"To assess the improvement of human body balance, a low cost and portable measuring device of center of pressure (COP), known as center of pressure and complexity monitoring system (CPCMS), has been developed for data logging and analysis. In order to prove that the system can estimate the different magnitude of different sways in comparison with the commercial Advanced Mechanical Technology Incorporation (AMTI) system, four sway tests have been developed (i.e., eyes open, eyes closed, eyes open with water pad, and eyes closed with water pad) to produce different sway displacements. Firstly, static and dynamic tests were conducted to investigate the feasibility of the system. Then, correlation tests of the CPCMS and AMTI systems have been compared with four sway tests. The results are within the acceptable range. Furthermore, multivariate empirical mode decomposition (MEMD) and enhanced multivariate multiscale entropy (MMSE) analysis methods have been used to analyze COP data reported by the CPCMS and compare it with the AMTI system. The improvements of the CPCMS are 35 to 70 (open eyes test) and 60 to 70 (eyes closed test) with and without water pad. The AMTI system has shown an improvement of 40 to 80 (open eyes test) and 65 to 75 (closed eyes test). The results indicate that the CPCMS system can achieve similar results to the commercial product so it can determine the balance."
]
} |
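The XCOM-based stability margin used in @cite_31's abstract follows Hof's extrapolated-center-of-mass condition, XcoM = CoM + CoMv / omega0 with omega0 = sqrt(g / l). A minimal worked example follows; the leg length and sample values are made up for illustration and are not taken from the papers.

```python
import math

def xcom(com, com_velocity, leg_length, g=9.81):
    """Extrapolated centre of mass: XcoM = CoM + CoMv / omega0."""
    omega0 = math.sqrt(g / leg_length)  # inverted-pendulum eigenfrequency
    return com + com_velocity / omega0

# CoM 0.05 m ahead of the ankle, moving forward at 0.4 m/s,
# pendulum length 0.9 m, anterior BoS boundary 0.20 m ahead.
boundary = 0.20
x = xcom(com=0.05, com_velocity=0.4, leg_length=0.9)
margin = boundary - x                   # positive -> dynamically stable
print(f"XcoM = {x:.3f} m, margin of stability = {margin:.3f} m")
```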
1812.00252 | 2920690946 | In this work, we present a novel framework for on-line human gait stability prediction of the elderly users of an intelligent robotic rollator using Long Short Term Memory (LSTM) networks, fusing multimodal RGB-D and Laser Range Finder (LRF) data from non-wearable sensors. A Deep Learning (DL) based approach is used for the upper body pose estimation. The detected pose is used for estimating the body Center of Mass (CoM) using an Unscented Kalman Filter (UKF). An Augmented Gait State Estimation framework exploits the LRF data to estimate the legs' positions and the respective gait phase. These estimates are the inputs of an encoder-decoder sequence-to-sequence model which predicts the gait stability state as Safe or Fall Risk walking. It is validated with data from real patients, by exploring different network architectures, hyperparameter settings, and by comparing the proposed method with other baselines. The presented LSTM-based human gait stability predictor is shown to provide robust predictions of the human stability state, and thus has the potential to be integrated into a general user-adaptive control architecture as a fall-risk alarm. | Human pose estimation is a challenging topic due to the variable configurations of the human body, occlusions of body parts, etc. The rise of powerful DL frameworks along with the use of large annotated datasets opened a new era of research for optimal human pose estimation @cite_32 . Most approaches provide solutions regarding the detection of the 2D pose from color images by detecting keypoints or parts on the human body @cite_8 @cite_6 , achieving high accuracy. The problem of 3D pose estimation is more challenging @cite_0 , as the detected poses are scaled and normalized. Recent approaches aim to solve the ambiguity of 2D-to-3D correspondences by learning 3D poses from single color images @cite_15 @cite_18 . Another relevant research topic concerns the tracking of human poses @cite_33 ; although DL has improved accuracy over previous methods, the estimation error remains too high for integration into a robotic application that requires high accuracy and robustness. A recent application of pose estimation in robotics can be found in @cite_16 . | {
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_8",
"@cite_32",
"@cite_6",
"@cite_0",
"@cite_15",
"@cite_16"
],
"mid": [
"",
"",
"2964304707",
"2080873731",
"2559085405",
"2612706635",
"2798637590",
"2962962024"
],
"abstract": [
"",
"",
"Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.",
"Human pose estimation has made significant progress during the last years. However current datasets are limited in their coverage of the overall pose estimation challenges. Still these serve as the common sources to evaluate, train and compare different models on. In this paper we introduce a novel benchmark \"MPII Human Pose\" that makes a significant advance in terms of diversity and difficulty, a contribution that we feel is required for future developments in human body models. This comprehensive dataset was collected using an established taxonomy of over 800 human activities [1]. The collected images cover a wider variety of human activities than previous datasets including various recreational, occupational and householding activities, and capture people from a wider range of viewpoints. We provide a rich set of labels including positions of body joints, full 3D torso and head orientation, occlusion labels for joints and body parts, and activity labels. For each image we provide adjacent video frames to facilitate the use of motion information. Given these rich annotations we perform a detailed analysis of leading human pose estimation approaches and gaining insights for the success and failures of these methods.",
"We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency.",
"Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3- dimensional positions.,,With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, \"lifting\" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30 on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results – this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation.",
"This work addresses the problem of estimating the full body 3D human pose and shape from a single color image. This is a task where iterative optimization-based solutions have typically prevailed, while Convolutional Networks (ConvNets) have suffered because of the lack of training data and their low resolution 3D predictions. Our work aims to bridge this gap and proposes an efficient and effective direct prediction method based on ConvNets. Central part to our approach is the incorporation of a parametric statistical body shape model (SMPL) within our end-to-end framework. This allows us to get very detailed 3D mesh results, while requiring estimation only of a small number of parameters, making it friendly for direct network prediction. Interestingly, we demonstrate that these parameters can be predicted reliably only from 2D keypoints and masks. These are typical outputs of generic 2D human analysis ConvNets, allowing us to relax the massive requirement that images with 3D shape ground truth are available for training. Simultaneously, by maintaining differentiability, at training time we generate the 3D mesh from the estimated parameters and optimize explicitly for the surface using a 3D per-vertex loss. Finally, a differentiable renderer is employed to project the 3D mesh to the image, which enables further refinement of the network, by optimizing for the consistency of the projection with 2D annotations (i.e., 2D keypoints or masks). The proposed approach outperforms previous baselines on this task and offers an attractive solution for direct prediction of3D shape from a single color image.",
"We propose an approach to estimate 3D human pose in real world units from a single RGBD image and show that it exceeds performance of monocular 3D pose estimation approaches from color as well as pose estimation exclusively from depth. Our approach builds on robust human keypoint detectors for color images and incorporates depth for lifting into 3D. We combine the system with our learning from demonstration framework to instruct a service robot without the need of markers. Experiments in real world settings demonstrate that our approach enables a PR2 robot to imitate manipulation actions observed from a human teacher."
]
} |
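As an illustration of the 2D-to-3D "lifting" idea summarized in @cite_0's abstract, a plain feedforward network that maps detected 2D joint locations to (normalized) 3D positions can be sketched in PyTorch as follows; the joint count and layer widths are illustrative, not the published architecture.

```python
import torch
import torch.nn as nn

N_JOINTS = 17  # assumed skeleton size

lifter = nn.Sequential(
    nn.Linear(N_JOINTS * 2, 1024), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(1024, 1024), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(1024, N_JOINTS * 3),   # normalized 3D joint positions
)

pose_2d = torch.randn(32, N_JOINTS * 2)          # batch of 2D keypoints
pose_3d = lifter(pose_2d).view(32, N_JOINTS, 3)  # lifted 3D estimates
```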
1812.00086 | 2902705165 | Graph convolutional network (GCN) is an emerging neural network approach. It learns a new representation of a node by aggregating the feature vectors of all its neighbors in the aggregation process, without considering whether the neighbors or features are useful or not. Recent methods have improved solutions by sampling a fixed-size set of neighbors, or assigning different weights to different neighbors in the aggregation process, but features within a feature vector are still treated equally in the aggregation process. In this paper, we introduce a new convolution operation on regular-size feature maps constructed from features of a fixed node bandwidth via sampling to get the first-level node representation, which is then passed to a standard GCN to learn the second-level node representation. Experiments show that our method outperforms competing methods in semi-supervised node classification tasks. Furthermore, our method opens new doors for exploring new GCN architectures, particularly deeper GCN models. | categorizes graph representation learning methods into three approaches: the factorization-based approach, the random walk-based approach, and the neural network-based approach. Early methods for learning node representations largely focused on matrix factorization. They are directly inspired by classic techniques for dimensionality reduction @cite_4 @cite_24 . Inspired by the Word2Vec method @cite_15 , proposed DeepWalk, which generates random paths over a graph. It learns the new node representation by maximizing the co-occurrence probability of the neighbors in the walk. Node2vec @cite_16 and LINE @cite_25 extend DeepWalk with more sophisticated walks. Planetoid learns the embedding from both labels and graph structure by injecting the label information @cite_5 . Graph neural networks (GNNs) have previously been introduced by and , which consist of an iterative process propagating the node states until the node representation reaches a stable fixed point. More recently, several improved methods have been proposed. introduced gated recurrent units @cite_12 to alleviate the restriction. further introduced a convolution-like propagation rule on graphs, which does not scale to large graphs with wide node degree distributions. | {
"cite_N": [
"@cite_4",
"@cite_24",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_25",
"@cite_12"
],
"mid": [
"",
"",
"2315403234",
"2950133940",
"2366141641",
"1888005072",
"2950635152"
],
"abstract": [
"",
"",
"We present a semi-supervised learning framework based on graph embeddings. Given a graph between instances, we train an embedding for each instance to jointly predict the class label and the neighborhood context in the graph. We develop both transductive and inductive variants of our method. In the transductive variant of our method, the class labels are determined by both the learned embeddings and input feature vectors, while in the inductive variant, the embeddings are defined as a parametric function of the feature vectors, so predictions can be made on instances not seen during training. On a large and diverse set of benchmark tasks, including text classification, distantly supervised entity extraction, and entity classification, we show improved performance over many of the existing models.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.",
"Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.",
"This paper studies the problem of embedding very large information networks into low-dimensional vector spaces, which is useful in many tasks such as visualization, node classification, and link prediction. Most existing graph embedding methods do not scale for real world information networks which usually contain millions of nodes. In this paper, we propose a novel network embedding method called the LINE,'' which is suitable for arbitrary types of information networks: undirected, directed, and or weighted. The method optimizes a carefully designed objective function that preserves both the local and global network structures. An edge-sampling algorithm is proposed that addresses the limitation of the classical stochastic gradient descent and improves both the effectiveness and the efficiency of the inference. Empirical experiments prove the effectiveness of the LINE on a variety of real-world information networks, including language networks, social networks, and citation networks. The algorithm is very efficient, which is able to learn the embedding of a network with millions of vertices and billions of edges in a few hours on a typical single machine. The source code of the LINE is available online https: github.com tangjianpku LINE .",
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases."
]
} |
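A DeepWalk-style pipeline, as described above, can be sketched as follows: truncated uniform random walks are generated over the graph and fed to a skip-gram model as if they were sentences. The sketch uses networkx and gensim (4.x API) purely for illustration, and all hyperparameters are arbitrary.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(graph, walks_per_node=10, walk_length=40):
    """Truncated uniform random walks, one list of node ids per walk."""
    walks = []
    for _ in range(walks_per_node):
        nodes = list(graph.nodes())
        random.shuffle(nodes)
        for start in nodes:
            walk, cur = [start], start
            for _ in range(walk_length - 1):
                neighbors = list(graph.neighbors(cur))
                if not neighbors:
                    break
                cur = random.choice(neighbors)
                walk.append(cur)
            walks.append([str(n) for n in walk])
    return walks

G = nx.karate_club_graph()
walks = random_walks(G)
# Skip-gram (sg=1) over the walks maximizes co-occurrence of walk neighbors.
model = Word2Vec(walks, vector_size=64, window=5, min_count=1, sg=1)
embedding = model.wv["0"]   # learned representation of node 0
```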
1812.00086 | 2902705165 | Graph convolutional network (GCN) is an emerging neural network approach. It learns a new representation of a node by aggregating the feature vectors of all its neighbors in the aggregation process, without considering whether the neighbors or features are useful or not. Recent methods have improved solutions by sampling a fixed-size set of neighbors, or assigning different weights to different neighbors in the aggregation process, but features within a feature vector are still treated equally in the aggregation process. In this paper, we introduce a new convolution operation on regular-size feature maps constructed from features of a fixed node bandwidth via sampling to get the first-level node representation, which is then passed to a standard GCN to learn the second-level node representation. Experiments show that our method outperforms competing methods in semi-supervised node classification tasks. Furthermore, our method opens new doors for exploring new GCN architectures, particularly deeper GCN models. | The above graph representation methods mainly consider the graph structure (node and edge) information but do not use the node feature matrix @math in the learning process. proposed the graph convolutional network (GCN) as an effective graph representation model that can naturally combine structure information and node features in the learning process. It is derived from conducting graph convolution in the spectral domain @cite_22 @cite_12 . It represents a node by aggregating feature vectors from its neighbors (including itself), which is similar to the convolution operation in CNN. The propagation rule of GCN can be summarized by the following expression: @math , where @math is a normalized adjacency matrix of the undirected graph @math with added self-connections. @math is an identity matrix. The diagonal entries of @math are given by @math . @math is a layer-specific trainable weight matrix, @math denotes an activation function such as the @math , and @math is the matrix of activations in the @math layer. @math is the node feature matrix. | {
"cite_N": [
"@cite_22",
"@cite_12"
],
"mid": [
"1662382123",
"2950635152"
],
"abstract": [
"Convolutional Neural Networks are extremely efficient architectures in image and audio recognition tasks, thanks to their ability to exploit the local translational invariance of signal classes over their domain. In this paper we consider possible generalizations of CNNs to signals defined on more general domains without the action of a translation group. In particular, we propose two constructions, one based upon a hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. We show through experiments that for low-dimensional graphs it is possible to learn convolutional layers with a number of parameters independent of the input size, resulting in efficient deep architectures.",
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases."
]
} |
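A single-layer NumPy sketch of the propagation rule described above, assuming the standard renormalization with added self-connections and a ReLU activation:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H_next = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_tilde = A + np.eye(A.shape[0])            # add self-connections
    d = A_tilde.sum(axis=1)                     # diagonal entries of D~
    D_inv_sqrt = np.diag(d ** -0.5)
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # symmetric normalization
    return np.maximum(A_hat @ H @ W, 0.0)       # ReLU activation

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],                        # 3-node path graph
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = rng.normal(size=(3, 4))                     # node feature matrix X
W = rng.normal(size=(4, 2))                     # layer weight matrix
print(gcn_layer(A, H, W))
```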
1812.00197 | 2902931653 | Modern computer systems are becoming faster, more efficient, and increasingly interconnected with each generation. Consequently, these platforms also grow more complex, with continuously new features introducing the possibility of new bugs. Hence, the semiconductor industry employs a combination of different verification techniques to ensure the security of System-on-Chip (SoC) designs during the development life cycle. However, a growing number of increasingly sophisticated attacks are starting to leverage cross-layer bugs by exploiting subtle interactions between hardware and software, as recently demonstrated through a series of real-world exploits with significant security impact that affected all major hardware vendors. | RTL-level IFT was proposed in @cite_55 where the IFT logic is derived at a higher abstraction level, is faster to verify, and the accuracy vs. scalability trade-off is configurable. In principle, at RTL-level all logic information flows can be tracked, and in @cite_55 the designer is allowed to configure the complexity (whether to track explicit and implicit information flows) and precision of the tracking logic. Another approach isolates timing flows from functional flows and shows how to identify timing information leakage for arithmetic and cryptographic units @cite_13 . However, whether it can scale well to effectively capture timing leakage in real-world complex processor designs remains an open question. | {
"cite_N": [
"@cite_55",
"@cite_13"
],
"mid": [
"2016224355",
"2774518529"
],
"abstract": [
"Understanding the flow of information is an important aspect in computer security. There has been a recent move towards tracking information in hardware and understanding the flow of individual bits through Boolean functions. Such gate level information flow tracking (GLIFT) provides a precise understanding of all flows of information. This paper presents a theoretical analysis of GLIFT. It formalizes the problem, provides fundamental definitions and properties, introduces precise symbolic representations of the GLIFT logic for basic Boolean functions, and gives analytic and quantitative analysis of the GLIFT logic.",
"Emergence of side channel security attacks has challenged the classic assumptions regarding what data is publicly available. As demonstrated repeatedly, statistical analysis of information collected by measuring completion time of hardware designs can reveal confidential information. Even though timing-based side channel leakage can be easily exploited to breach data privacy, conventional hardware verification tools are not yet suited to assess these vulnerabilities. To acquaint the hardware design process with formal security evaluations, we introduce a model for tracking timing-based information flows through HDL codes. Based on this model, we have developed Clepsydra, a tool for automatically generating circuitry for tracking timing flows and generic logical flows within hardware designs in two distinct channels. The circuit generated by Clepsydra can be analyzed by EDA tools to detect timing leakage or formally prove constant execution time. We present proofs regarding soundness and precision of the proposed model along with results of employing Clepsydra to verify security properties on a variety of hardware units including crypto cores, bus architectures, caches and arithmetic modules."
]
} |
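As an illustration of the gate-level tracking idea in @cite_55's abstract, the precise taint-propagation ("shadow") logic for a two-input AND gate can be written as below. This is a Python rendering of the AND-gate rule from the GLIFT literature, shown in software rather than hardware logic purely for illustration.

```python
def and_gate_glift(a, a_taint, b, b_taint):
    """2-input AND with precise GLIFT shadow logic.

    The output is tainted only when a tainted input can actually affect
    the output value; an untainted 0 masks the other input entirely.
    """
    out = a & b
    out_taint = (a_taint & b_taint) | (a_taint & b) | (b_taint & a)
    return out, out_taint

# An untainted 0 forces the output to 0 regardless of the tainted input,
# so no information flows:
print(and_gate_glift(a=0, a_taint=0, b=1, b_taint=1))  # -> (0, 0)
# An untainted 1 lets the tainted input propagate to the output:
print(and_gate_glift(a=1, a_taint=0, b=1, b_taint=1))  # -> (1, 1)
```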
1812.00197 | 2902931653 | Modern computer systems are becoming faster, more efficient, and increasingly interconnected with each generation. Consequently, these platforms also grow more complex, with continuously new features introducing the possibility of new bugs. Hence, the semiconductor industry employs a combination of different verification techniques to ensure the security of System-on-Chip (SoC) designs during the development life cycle. However, a growing number of increasingly sophisticated attacks are starting to leverage cross-layer bugs by exploiting subtle interactions between hardware and software, as recently demonstrated through a series of real-world exploits with significant security impact that affected all major hardware vendors. | At the language level, Caisson @cite_79 and Sapper @cite_46 are security-aware HDLs that use a typing system where the designer assigns "security labels" to each variable (wire or register) according to the required security policies. However, they both require redesigning the RTL using a new hardware description language, which is not practical. SecVerilog @cite_75 @cite_11 overcomes this by extending the Verilog language with a dynamic security type system. Here, designers assign a security label to each variable (wire or register) in the RTL Verilog code to enable a compile-time check of hardware information flow. However, it must use predicate analysis during simulation to reason about the run-time behavior of the hardware state and dependent data types for precise flow tracking. | {
"cite_N": [
"@cite_79",
"@cite_46",
"@cite_75",
"@cite_11"
],
"mid": [
"2150619336",
"2145936802",
"2116469687",
""
],
"abstract": [
"Information flow is an important security property that must be incorporated from the ground up, including at hardware design time, to provide a formal basis for a system's root of trust. We incorporate insights and techniques from designing information-flow secure programming languages to provide a new perspective on designing secure hardware. We describe a new hardware description language, Caisson, that combines domain-specific abstractions common to hardware design with insights from type-based techniques used in secure programming languages. The proper combination of these elements allows for an expressive, provably-secure HDL that operates at a familiar level of abstraction to the target audience of the language, hardware architects. We have implemented a compiler for Caisson that translates designs into Verilog and then synthesizes the designs using existing tools. As an example of Caisson's usefulness we have addressed an open problem in secure hardware by creating the first-ever provably information-flow secure processor with micro-architectural features including pipelining and cache. We synthesize the secure processor and empirically compare it in terms of chip area, power consumption, and clock frequency with both a standard (insecure) commercial processor and also a processor augmented at the gate level to dynamically track information flow. Our processor is competitive with the insecure processor and significantly better than dynamic tracking.",
"Privacy and integrity are important security concerns. These concerns are addressed by controlling information flow, i.e., restricting how information can flow through a system. Most proposed systems that restrict information flow make the implicit assumption that the hardware used by the system is fully correct'' and that the hardware's instruction set accurately describes its behavior in all circumstances. The truth is more complicated: modern hardware designs defy complete verification; many aspects of the timing and ordering of events are left totally unspecified; and implementation bugs present themselves with surprising frequency. In this work we describe Sapper, a novel hardware description language for designing security-critical hardware components. Sapper seeks to address these problems by using static analysis at compile-time to automatically insert dynamic checks in the resulting hardware that provably enforce a given information flow policy at execution time. We present Sapper's design and formal semantics along with a proof sketch of its security. In addition, we have implemented a compiler for Sapper and used it to create a non-trivial secure embedded processor with many modern microarchitectural features. We empirically evaluate the resulting hardware's area and energy overhead and compare them with alternative designs.",
"Information security can be compromised by leakage via low-level hardware features. One recently prominent example is cache probing attacks, which rely on timing channels created by caches. We introduce a hardware design language, SecVerilog, which makes it possible to statically analyze information flow at the hardware level. With SecVerilog, systems can be built with verifiable control of timing channels and other information channels. SecVerilog is Verilog, extended with expressive type annotations that enable precise reasoning about information flow. It also comes with rigorous formal assurance: we prove that SecVerilog enforces timing-sensitive noninterference and thus ensures secure information flow. By building a secure MIPS processor and its caches, we demonstrate that SecVerilog makes it possible to build complex hardware designs with verified security, yet with low overhead in time, space, and HW designer effort.",
""
]
} |
1812.00197 | 2902931653 | Modern computer systems are becoming faster, more efficient, and increasingly interconnected with each generation. Consequently, these platforms also grow more complex, with continuously new features introducing the possibility of new bugs. Hence, the semiconductor industry employs a combination of different verification techniques to ensure the security of System-on-Chip (SoC) designs during the development life cycle. However, a growing number of increasingly sophisticated attacks are starting to leverage cross-layer bugs by exploiting subtle interactions between hardware and software, as recently demonstrated through a series of real-world exploits with significant security impact that affected all major hardware vendors. | demonstrate that software-visible side channels can exist even below cache-line granularity in their CacheBleed @cite_25 attack, undermining a core assumption of prior defenses such as scatter-gather @cite_28 . We categorize it as a timing-flow bug, since software can induce clock-cycle differences for accesses mapping to the same bank below cache-line granularity, breaking (assumed) constant-time implementations. | {
"cite_N": [
"@cite_28",
"@cite_25"
],
"mid": [
"1253017325",
"2586555532"
],
"abstract": [
"Hardware side channel vulnerabilities have been studied for many years in embedded silicon-security arena including SmartCards, SetTop-boxes, etc. However, because various recent security activities have goals of improving the software isolation properties of PC platforms, software side channels have become a subject of interest. Recent publications discussed cache-based software side channel vulnerabilities of AES and RSA. Thus, following the classical approach — a new side channel vulnerability opens a new mitigation research path — this paper starts to investigate efficient mitigations to protect AES-software against side channel vulnerabilities. First, we will present several mitigation strategies to harden existing AES software against cache-based software side channel attacks and analyze their theoretical protection. Then, we will present a performance and security evaluation of our mitigation strategies. For ease of evaluation we measured the performance of our code against the performance of the openSSL AES implementation. In addition, we also analyzed our code under various existing attacks. Depending on the level of the required side channel protection, the measured performance loss of our mitigations strategies versus openSSL (respectively best assembler) varies between factors of 1.35 (2.66) and 2.85 (5.83).",
"The scatter–gather technique is a commonly implemented approach to prevent cache-based timing attacks. In this paper, we show that scatter–gather is not constant time. We implement a cache timing attack against the scatter–gather implementation used in the modular exponentiation routine in OpenSSL version 1.0.2f. Our attack exploits cache-bank conflicts on the Sandy Bridge microarchitecture. We have tested the attack on an Intel Xeon E5-2430 processor. For 4096-bit RSA, our attack can fully recover the private key after observing 16,000 decryptions."
]
} |
1812.00197 | 2902931653 | Modern computer systems are becoming faster, more efficient, and increasingly interconnected with each generation. Consequently, these platforms also grow more complex, with continuously added new features introducing the possibility of new bugs. Hence, the semiconductor industry employs a combination of different verification techniques to ensure the security of System-on-Chip (SoC) designs during the development life cycle. However, a growing number of increasingly sophisticated attacks are starting to leverage cross-layer bugs by exploiting subtle interactions between hardware and software, as recently demonstrated through a series of real-world exploits with significant security impact that affected all major hardware vendors. | MemJam @cite_12 exploits false read-after-write dependencies in the CPU to maliciously slow down victim accesses to memory blocks within a cache line. Similar to CacheBleed, this breaks any constant-time implementation that relies on cache-line granularity, and we categorize the underlying vulnerability as hard to detect in existing RTL implementations due to the timing-flow gap and the many cross-module connections. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2770572532"
],
"abstract": [
"Cache attacks exploit memory access patterns of cryptographic implementations. Constant-Time implementation techniques have become an indispensable tool in fighting cache timing attacks. These techniques engineer the memory accesses of cryptographic operations to follow a uniform key independent pattern. However, the constant-time behavior is dependent on the underlying architecture, which can be highly complex and often incorporates unpublished features. CacheBleed attack targets cache bank conflicts and thereby invalidates the assumption that microarchitectural side-channel adversaries can only observe memory with cache line granularity. In this work, we propose MemJam, a side-channel attack that exploits false dependency of memory read-after-write and provides a high quality intra cache level timing channel. As a proof of concept, we demonstrate the first key recovery attacks on a constant-time implementation of AES, and a SM4 implementation with cache protection in the current Intel Integrated Performance Primitives (Intel IPP) cryptographic library. Further, we demonstrate the first intra cache level timing attack on SGX by reproducing the AES key recovery results on an enclave that performs encryption using the aforementioned constant-time implementation of AES. Our results show that we can not only use this side channel to efficiently attack memory dependent cryptographic operations but also to bypass proposed protections. Compared to CacheBleed, which is limited to older processor generations, MemJam is the first intra cache level attack applicable to all major Intel processors including the latest generations that support the SGX extension."
]
} |
1812.00181 | 2903329274 | Understanding and evaluating the robustness of neural networks under adversarial settings is a subject of growing interest. Attacks proposed in the literature usually work with models trained to minimize cross-entropy loss and output softmax probabilities. In this work, we present interesting experimental results that suggest the importance of considering other loss functions and target representations, specifically, (1) training on mean-squared error and (2) representing targets as codewords generated from a random codebook. We evaluate the robustness of neural networks that implement these proposed modifications using existing attacks, showing an increase in accuracy against untargeted attacks of up to 98.7% and a decrease of targeted attack success rates of up to 99.8%. Our model demonstrates more robustness compared to its conventional counterpart even against attacks that are tailored to our modifications. Furthermore, we find that the parameters of our modified model have significantly smaller Lipschitz bounds, an important measure correlated with a model's sensitivity to adversarial perturbations. | Several defenses have also been proposed. To date, the most effective defense technique is adversarial training ( @cite_25 , @cite_22 , @cite_23 , @cite_16 ), where the model is trained on a mix of clean and adversarial data. This has been shown to provide a regularization effect that makes models more robust to attacks. | {
"cite_N": [
"@cite_22",
"@cite_16",
"@cite_25",
"@cite_23"
],
"mid": [
"",
"2620038827",
"2552767274",
"2767075075"
],
"abstract": [
"",
"Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss. We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step. We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with strong robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks.",
"Adversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black box attacks without knowledge of the target model's parameters. Adversarial training is the process of explicitly training a model on adversarial examples, in order to make it more robust to attack or to reduce its test error on clean inputs. So far, adversarial training has primarily been applied to small problems. In this research, we apply adversarial training to ImageNet. Our contributions include: (1) recommendations for how to succesfully scale adversarial training to large models and datasets, (2) the observation that adversarial training confers robustness to single-step attack methods, (3) the finding that multi-step attack methods are somewhat less transferable than single-step attack methods, so single-step attacks are the best for mounting black-box attacks, and (4) resolution of a \"label leaking\" effect that causes adversarially trained models to perform better on adversarial examples than on clean examples, because the adversarial example construction process uses the true label and the model can learn to exploit regularities in the construction process.",
"Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We address this problem through the principled lens of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data. For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization. Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss. For imperceptible perturbations, our method matches or outperforms heuristic approaches."
]
} |
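To make the adversarial-training recipe described in the record above concrete, here is a minimal numpy sketch of one training step on a mix of clean and FGSM-perturbed examples, using a toy logistic-regression model. The model choice, the function names, and the 50/50 mixing are illustrative assumptions of this sketch, not details taken from any of the cited papers.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """Single-step FGSM for a logistic-regression 'network':
    x_adv = x + eps * sign(dL/dx), with cross-entropy loss L."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid prediction
    grad_x = (p - y) * w                    # dL/dx for cross-entropy
    return x + eps * np.sign(grad_x)

def adversarial_training_step(x, y, w, b, lr=0.1, eps=0.1):
    """One SGD step on a 50/50 mix of a clean and an FGSM example."""
    x_adv = fgsm_perturb(x, y, w, b, eps)
    for xi in (x, x_adv):                   # clean first, then adversarial
        p = 1.0 / (1.0 + np.exp(-(xi @ w + b)))
        w = w - lr * (p - y) * xi           # dL/dw = (p - y) * x
        b = b - lr * (p - y)                # dL/db = (p - y)
    return w, b
```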
1812.00181 | 2903329274 | Understanding and evaluating the robustness of neural networks under adversarial settings is a subject of growing interest. Attacks proposed in the literature usually work with models trained to minimize cross-entropy loss and output softmax probabilities. In this work, we present interesting experimental results that suggest the importance of considering other loss functions and target representations, specifically, (1) training on mean-squared error and (2) representing targets as codewords generated from a random codebook. We evaluate the robustness of neural networks that implement these proposed modifications using existing attacks, showing an increase in accuracy against untargeted attacks of up to 98.7% and a decrease of targeted attack success rates of up to 99.8%. Our model demonstrates more robustness compared to its conventional counterpart even against attacks that are tailored to our modifications. Furthermore, we find that the parameters of our modified model have significantly smaller Lipschitz bounds, an important measure correlated with a model's sensitivity to adversarial perturbations. | @cite_8 proposed defensive distillation, a mechanism whereby a model is trained on soft labels generated by another 'teacher' network in order to prevent overfitting. Other methods include introducing randomness to, or applying transformations on, the input data and/or the layers of the network ( @cite_1 , @cite_15 , @cite_26 , @cite_18 ). However, @cite_28 have identified that the apparent robustness of several defenses can be attributed to the introduction of computation and transformations that mask the gradients and thus break existing attacks that rely on gradients to generate adversarial examples. Their work demonstrates that small, tailored modifications to the attacks can circumvent these defenses completely. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_8",
"@cite_28",
"@cite_1",
"@cite_15"
],
"mid": [
"2767962654",
"2787496614",
"",
"2787708942",
"2765384636",
"2787733970"
],
"abstract": [
"Convolutional neural networks have demonstrated high accuracy on various tasks in recent years. However, they are extremely vulnerable to adversarial examples. For example, imperceptible perturbations added to clean images can cause convolutional neural networks to fail. In this paper, we propose to utilize randomization at inference time to mitigate adversarial effects. Specifically, we use two randomization operations: random resizing, which resizes the input images to a random size, and random padding, which pads zeros around the input images in a random manner. Extensive experiments demonstrate that the proposed randomization method is very effective at defending against both single-step and iterative attacks. Our method provides the following advantages: 1) no additional training or fine-tuning, 2) very few additional computations, 3) compatible with other adversarial defense methods. By combining the proposed randomization method with an adversarially trained model, it achieves a normalized score of 0.924 (ranked No.2 among 107 defense teams) in the NIPS 2017 adversarial examples defense challenge, which is far better than using adversarial training alone with a normalized score of 0.773 (ranked No.56). The code is public available at this https URL.",
"In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against such attacks. Defense-GAN is trained to model the distribution of unperturbed images. At inference time, it finds a close output to a given image which does not contain the adversarial changes. This output is then fed to the classifier. Our proposed method can be used with any classification model and does not modify the classifier structure or training procedure. It can also be used as a defense against any attack as it does not assume knowledge of the process for generating the adversarial examples. We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies. Our code has been made publicly available at this https URL",
"",
"We identify obfuscated gradients as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat optimization-based attacks, we find defenses relying on this effect can be circumvented. For each of the three types of obfuscated gradients we discover, we describe indicators of defenses exhibiting this effect and develop attack techniques to overcome it. In a case study, examining all defenses accepted to ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 8 defenses relying on obfuscated gradients. Using our new attack techniques, we successfully circumvent all 7 of them.",
"This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system. Specifically, we study applying image transformations such as bit-depth reduction, JPEG compression, total variance minimization, and image quilting before feeding the image to a convolutional network classifier. Our experiments on ImageNet show that total variance minimization and image quilting are very effective defenses in practice, in particular, when the network is trained on transformed images. The strength of those defenses lies in their non-differentiable nature and their inherent randomness, which makes it difficult for an adversary to circumvent the defenses. Our best defense eliminates 60 of strong white-box and 90 of strong black-box attacks by a variety of major attack methods",
"Neural networks are known to be vulnerable to adversarial examples. Carefully chosen perturbations to real images, while imperceptible to humans, induce misclassification and threaten the reliability of deep learning systems in the wild. To guard against adversarial examples, we take inspiration from game theory and cast the problem as a minimax zero-sum game between the adversary and the model. In general, for such games, the optimal strategy for both players requires a stochastic policy, also known as a mixed strategy. In this light, we propose Stochastic Activation Pruning (SAP), a mixed strategy for adversarial defense. SAP prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate. We can apply SAP to pretrained networks, including adversarially trained models, without fine-tuning, providing robustness against adversarial examples. Experiments demonstrate that SAP confers robustness against attacks, increasing accuracy and preserving calibration."
]
} |
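As a sketch of the soft-label mechanism behind the defensive distillation described above: the teacher's logits are passed through a temperature-scaled softmax, and the student is trained on the resulting smoothed distribution instead of one-hot targets. The temperature value and array shapes below are illustrative choices, not the cited paper's configuration.

```python
import numpy as np

def softmax_with_temperature(logits, T):
    """Temperature-scaled softmax; T > 1 smooths the distribution."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy usage: the teacher's logits for 3 classes, distilled at T = 20.
teacher_logits = np.array([4.0, 1.0, 0.5])
soft_labels = softmax_with_temperature(teacher_logits, T=20.0)  # smoothed
hard_probs = softmax_with_temperature(teacher_logits, T=1.0)    # peaked
# The student is trained against soft_labels, which carry the teacher's
# relative class similarities rather than a one-hot target.
```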
1812.00176 | 2903083707 | Discourse structures are beneficial for various NLP tasks such as dialogue understanding, question answering, sentiment analysis, and so on. This paper presents a deep sequential model for parsing discourse dependency structures of multi-party dialogues. The proposed model aims to construct a discourse dependency tree by predicting dependency relations and constructing the discourse structure jointly and alternately. It makes a sequential scan of the Elementary Discourse Units (EDUs) in a dialogue. For each EDU, the model decides to which previous EDU the current one should link and what the corresponding relation type is. The predicted link and relation type are then used to build the discourse structure incrementally with a structured encoder. During link prediction and relation classification, the model utilizes not only local information that represents the concerned EDUs, but also global information that encodes the EDU sequence and the discourse structure that is already built at the current step. Experiments show that the proposed model outperforms all the state-of-the-art baselines. | Most previous work for discourse parsing is based on Penn Discourse TreeBank (PDTB) @cite_12 or Rhetorical Structure Theory Discourse TreeBank (RST-DT) @cite_1 . PDTB focuses on shallow discourse relations but ignores the overall discourse structure @cite_17 , while in this paper we aim to parse discourse structures. As for RST, there have been many approaches including transition-based methods @cite_21 @cite_27 @cite_6 and those involving CYK-like algorithms @cite_4 @cite_18 @cite_29 or greedy bottom-up algorithms @cite_25 . However, constituency-based RST does not allow non-adjacent relations, which makes it inapplicable for multi-party dialogues. By contrast, in this paper, we aim to parse non-projective dependency structures, where dependency relations can appear between non-adjacent EDUs. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_6",
"@cite_27",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"2565349465",
"1894075015",
"2759996821",
"2959939120",
"2045738181",
"2877801623",
"2741164290",
"2252267789",
"1581597064",
"2799190968"
],
"abstract": [
"",
"Clauses and sentences rarely stand on their own in an actual discourse; rather, the relationship between them carries important information that allows the discourse to express a meaning as a whole beyond the sum of its individual parts. Rhetorical analysis seeks to uncover this coherence structure. In this article, we present CODRA-a COmplete probabilistic Discriminative framework for performing Rhetorical Analysis in accordance with Rhetorical Structure Theory, which posits a tree representation of a discourse. CODRA comprises a discourse segmenter and a discourse parser. First, the discourse segmenter, which is based on a binary classifier, identifies the elementary discourse units in a given text. Then the discourse parser builds a discourse tree by applying an optimal parsing algorithm to probabilities inferred from two Conditional Random Fields: one for intra-sentential parsing and the other for multi-sentential parsing. We present two approaches to combine these two stages of parsing effectively. By conducting a series of empirical evaluations over two different data sets, we demonstrate that CODRA significantly outperforms the state-of-the-art, often by a wide margin. We also show that a reranking of the k-best parse hypotheses generated by CODRA can potentially improve the accuracy even further.",
"",
"Discourse parsing is an integral part of understanding information flow and argumentative structure in documents. Most previous research has focused on inducing and evaluating models from the English RST Discourse Treebank. However, discourse treebanks for other languages exist, including Spanish, German, Basque, Dutch and Brazilian Portuguese. The treebanks share the same underlying linguistic theory, but differ slightly in the way documents are annotated. In this paper, we present (a) a new discourse parser which is simpler, yet competitive (significantly better on 2 3 metrics) to state of the art for English, (b) a harmonization of discourse treebanks across languages, enabling us to present (c) what to the best of our knowledge are the first experiments on cross-lingual discourse parsing.",
"The specification discloses a luggage carrier made up of a generally U-shaped frame. The frame has two spaced legs with a hook on the front which hooks over the bumper of an automobile. Two braces are attached to the cross member of the U-shaped member and the front portion of the braces is received on fastening means welded to the under side of the car frame. The cross members provide a supporting surface for carrying articles, boats and the like. A platform may be supported on the frame.",
"",
"",
"Text-level discourse parsing remains a challenge. The current state-of-the-art overall accuracy in relation assignment is 55.73 , achieved by (2013). However, their model has a high order of time complexity, and thus cannot be applied in practice. In this work, we develop a much faster model whose time complexity is linear in the number of sentences. Our model adopts a greedy bottom-up approach, with two linear-chain CRFs applied in cascade as local classifiers. To enhance the accuracy of the pipeline, we add additional constraints in the Viterbi decoding of the first CRF. In addition to efficiency, our parser also significantly outperforms the state of the art. Moreover, our novel approach of post-editing, which modifies a fully-built tree by considering information from constituents on upper levels, can further improve the accuracy.",
"This report contains the guidelines for the annotation of discourse relations in the Penn Discourse Treebank (http: www.seas.upenn.edu pdtb), PDTB. Discourse relations in the PDTB are annotated in a bottom up fashion, and capture both lexically realized relations as well as implicit relations. Guidelines in this report are provided for all aspects of the annotation, including annotation explicit discourse connectives, implicit relations, arguments of relations, senses of relations, and the attribution of relations and their arguments. The report also provides descriptions of the annotation format representation.",
"Annotation corpus for discourse relations benefits NLP tasks such as machine translation and question answering. In this paper, we present SciDTB, a domain-specific discourse treebank annotated on scientific articles. Different from widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and simplified to some extent but do not sacrifice structural integrity. We discuss the labeling framework, annotation workflow and some statistics about SciDTB. Furthermore, our treebank is made as a benchmark for evaluating discourse dependency parsers, on which we provide several baselines as fundamental work."
]
} |
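A minimal sketch of the sequential decoding loop the abstract above describes: for each EDU in dialogue order, every preceding EDU is scored as a candidate parent, the argmax is linked, and a relation type is then predicted. The two scorer callables stand in for the trained neural networks and are assumptions of this sketch, not the paper's actual interfaces.

```python
import numpy as np

def parse_dialogue(edu_reprs, link_scorer, rel_scorer):
    """Greedy sequential construction of a discourse dependency tree.

    edu_reprs: one representation per EDU, in dialogue order.
    link_scorer(parent, child, tree) -> float and
    rel_scorer(parent, child, tree) -> relation label may both consult
    the partial tree, which is how structured (global) information
    enters the link and relation decisions.
    """
    tree = []                                  # (child, parent, relation)
    for i in range(1, len(edu_reprs)):         # EDU 0 acts as the root
        scores = np.array([link_scorer(edu_reprs[j], edu_reprs[i], tree)
                           for j in range(i)])
        parent = int(scores.argmax())          # parent may be non-adjacent
        rel = rel_scorer(edu_reprs[parent], edu_reprs[i], tree)
        tree.append((i, parent, rel))
    return tree
```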
1812.00176 | 2903083707 | Discourse structures are beneficial for various NLP tasks such as dialogue understanding, question answering, sentiment analysis, and so on. This paper presents a deep sequential model for parsing discourse dependency structures of multi-party dialogues. The proposed model aims to construct a discourse dependency tree by predicting dependency relations and constructing the discourse structure jointly and alternately. It makes a sequential scan of the Elementary Discourse Units (EDUs) in a dialogue. For each EDU, the model decides to which previous EDU the current one should link and what the corresponding relation type is. The predicted link and relation type are then used to build the discourse structure incrementally with a structured encoder. During link prediction and relation classification, the model utilizes not only local information that represents the concerned EDUs, but also global information that encodes the EDU sequence and the discourse structure that is already built at the current step. Experiments show that the proposed model outperforms all the state-of-the-art baselines. | There have been some approaches proposed for parsing discourse dependency structures in two stages. These approaches first predict the local probability of dependency relation for each possible combination of EDU pairs, and then apply a decoding algorithm to construct the final structure. @cite_22 @cite_14 @cite_3 used Maximum Spanning Trees (MST) to construct a dependency tree, and @cite_22 also attempted @math algorithm but did not achieve better performance than MST. @cite_7 further used Integer Linear Programming (ILP) to construct a dependency graph. However, these approaches predict the probability of a dependency relation only with the local information of the two considered EDUs, while the constructed structure is not involved. By contrast, our sequential model predicts dependency relations and constructs the discourse structure jointly and alternately, and utilizes the currently constructed structure in dependency prediction. | {
"cite_N": [
"@cite_7",
"@cite_14",
"@cite_22",
"@cite_3"
],
"mid": [
"2469477431",
"2158211888",
"2097700060",
"2251483872"
],
"abstract": [
"In this paper we present the first, to the best of our knowledge, discourse parser that is able to predict non-tree DAG structures. We use Integer Linear Programming (ILP) to encode both the objective function and the constraints as global decoding over local scores. Our underlying data come from multi-party chat dialogues, which require the prediction of DAGs. We use the dependency parsing paradigm, as has been done in the past (, 2012; , 2014; , 2015), but we use the underlying formal framework of SDRT and exploit SDRT's notions of left and right distributive relations. We achieve an F-measure of 0.531 for fully labeled structures which beats the previous state of the art.",
"Previous researches on Text-level discourse parsing mainly made use of constituency structure to parse the whole document into one discourse tree. In this paper, we present the limitations of constituency based discourse parsing and first propose to use dependency structure to directly represent the relations between elementary discourse units (EDUs). The state-of-the-art dependency parsing techniques, the Eisner algorithm and maximum spanning tree (MST) algorithm, are adopted to parse an optimal discourse dependency tree based on the arcfactored model and the large-margin learning techniques. Experiments show that our discourse dependency parsers achieve a competitive performance on text-level discourse parsing.",
"This paper presents a novel approach to document-based discourse analysis by performing a global A* search over the space of possible structures while optimizing a global criterion over the set of potential coherence relations. Existing approaches to discourse analysis have so far relied on greedy search strategies or restricted themselves to sentence-level discourse parsing. Another advantage of our approach, over other global alternatives (like Maximum Spanning Tree decoding algorithms), is its flexibility in being able to integrate constraints (including linguistically motivated ones like the Right Frontier Constraint). Finally, our paper provides the first discourse parsing system for French; our evaluation is carried out on the Annodis corpus. While using a lot less training data than earlier approaches than previous work on English, our system manages to achieve state-of-the-art results, with F1-scores of 66.2 and 46.8 when compared to unlabeled and labeled reference structures.",
"In this paper we present the first ever, to the best of our knowledge, discourse parser for multi-party chat dialogues. Discourse in multi-party dialogues dramatically differs from monologues since threaded conversations are commonplace rendering prediction of the discourse structure compelling. Moreover, the fact that our data come from chats renders the use of syntactic and lexical information useless since people take great liberties in expressing themselves lexically and syntactically. We use the dependency parsing paradigm as has been done in the past (, 2012; , 2014). We learn local probability distributions and then use MST for decoding. We achieve 0.680 F1 on unlabelled structures and 0.516 F1 on fully labeled structures which is better than many state of the art systems for monologues, despite the inherent difficulties that multi-party chat dialogues have."
]
} |
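The two-stage approaches discussed above first score every candidate arc locally and then decode globally with a Maximum Spanning Tree algorithm. Below is a sketch of that decoding step; it assumes networkx's Chu-Liu/Edmonds implementation (nx.maximum_spanning_arborescence) is available, and the score-matrix layout is an illustrative convention.

```python
import networkx as nx

def decode_mst(score):
    """Global decoding step of the two-stage dependency parsers.

    score[i][j] is the locally predicted score of an arc from head i
    to dependent j; node 0 is an artificial root with no incoming arc.
    Chu-Liu/Edmonds returns the maximum spanning arborescence, i.e.
    the globally best dependency tree under the arc-factored model.
    """
    n = len(score)
    g = nx.DiGraph()
    for i in range(n):
        for j in range(1, n):          # no arcs into the root
            if i != j:
                g.add_edge(i, j, weight=score[i][j])
    tree = nx.maximum_spanning_arborescence(g, attr="weight")
    return sorted(tree.edges())        # list of (head, dependent) arcs
```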
1812.00176 | 2903083707 | Discourse structures are beneficial for various NLP tasks such as dialogue understanding, question answering, sentiment analysis, and so on. This paper presents a deep sequential model for parsing discourse dependency structures of multi-party dialogues. The proposed model aims to construct a discourse dependency tree by predicting dependency relations and constructing the discourse structure jointly and alternately. It makes a sequential scan of the Elementary Discourse Units (EDUs) in a dialogue. For each EDU, the model decides to which previous EDU the current one should link and what the corresponding relation type is. The predicted link and relation type are then used to build the discourse structure incrementally with a structured encoder. During link prediction and relation classification, the model utilizes not only local information that represents the concerned EDUs, but also global information that encodes the EDU sequence and the discourse structure that is already built at the current step. Experiments show that the proposed model outperforms all the state-of-the-art baselines. | Although transition-based approaches for discourse dependency parsing, which also construct dependency structures incrementally, have been proposed by @cite_2 @cite_16 , they still underperform the MST-based approach of @cite_14 . This is because these transition-based local approaches do not investigate other possible links when predicting a dependency relation, as argued by @cite_16 , and they are limited to predicting projective structures. Therefore, these approaches are inapplicable to multi-party dialogues. By contrast, our sequential model predicts the parent of each EDU in the dependency tree by comparing all its preceding EDUs, and it can predict non-projective structures, which are necessary for multi-party dialogues. | {
"cite_N": [
"@cite_14",
"@cite_16",
"@cite_2"
],
"mid": [
"2158211888",
"2798990287",
"2783661857"
],
"abstract": [
"Previous researches on Text-level discourse parsing mainly made use of constituency structure to parse the whole document into one discourse tree. In this paper, we present the limitations of constituency based discourse parsing and first propose to use dependency structure to directly represent the relations between elementary discourse units (EDUs). The state-of-the-art dependency parsing techniques, the Eisner algorithm and maximum spanning tree (MST) algorithm, are adopted to parse an optimal discourse dependency tree based on the arcfactored model and the large-margin learning techniques. Experiments show that our discourse dependency parsers achieve a competitive performance on text-level discourse parsing.",
"",
"Discourse parsing aims to identify structures and relationships between different discourse units. Most existing approaches analyze a whole discourse at once, which often fails in distinguishing long-span relations and properly representing discourse units. In this article, we propose a novel parsing model to analyze discourse in a two-step fashion with different feature representations to characterize intra sentence and inter sentence discourse structures, respectively. Our model works in a transition-based framework and benefits from a stack long short-term memory neural network model. Experiments on benchmark tree banks show that our method outperforms traditional 1-step parsing methods in both English and Chinese."
]
} |
1812.00176 | 2903083707 | Discourse structures are beneficial for various NLP tasks such as dialogue understanding, question answering, sentiment analysis, and so on. This paper presents a deep sequential model for parsing discourse dependency structures of multi-party dialogues. The proposed model aims to construct a discourse dependency tree by predicting dependency relations and constructing the discourse structure jointly and alternately. It makes a sequential scan of the Elementary Discourse Units (EDUs) in a dialogue. For each EDU, the model decides to which previous EDU the current one should link and what the corresponding relation type is. The predicted link and relation type are then used to build the discourse structure incrementally with a structured encoder. During link prediction and relation classification, the model utilizes not only local information that represents the concerned EDUs, but also global information that encodes the EDU sequence and the discourse structure that is already built at the current step. Experiments show that the proposed model outperforms all the state-of-the-art baselines. | Moreover, state-of-the-art approaches for discourse dependency parsing as mentioned above still rely on hand-crafted features or external parsers. Neural networks have recently been widely applied in various NLP tasks, including RST discourse parsing @cite_18 @cite_21 and dialogue act recognition @cite_5 @cite_23 . And @cite_2 @cite_16 also applied neural networks in their transition-based dependency parsing models. In this paper, we adopt hierarchical Gated Recurrent Unit (GRU) @cite_10 encoders to compute discourse representations. | {
"cite_N": [
"@cite_18",
"@cite_21",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_16",
"@cite_10"
],
"mid": [
"2565349465",
"2959939120",
"2963550483",
"2783661857",
"2756420200",
"2798990287",
"2950635152"
],
"abstract": [
"",
"Discourse parsing is an integral part of understanding information flow and argumentative structure in documents. Most previous research has focused on inducing and evaluating models from the English RST Discourse Treebank. However, discourse treebanks for other languages exist, including Spanish, German, Basque, Dutch and Brazilian Portuguese. The treebanks share the same underlying linguistic theory, but differ slightly in the way documents are annotated. In this paper, we present (a) a new discourse parser which is simpler, yet competitive (significantly better on 2 3 metrics) to state of the art for English, (b) a harmonization of discourse treebanks across languages, enabling us to present (c) what to the best of our knowledge are the first experiments on cross-lingual discourse parsing.",
"Dialogue Act Recognition (DAR) is a challenging problem in dialogue interpretation, which aims to associate semantic labels to utterances and characterize the speaker's intention. Currently, many existing approaches formulate the DAR problem ranging from multi-classification to structured prediction, which suffer from handcrafted feature extensions and attentive contextual dependencies. In this paper, we tackle the problem of DAR from the viewpoint of extending richer Conditional Random Field (CRF) structured dependencies without abandoning end-to-end training. We incorporate hierarchical semantic inference with memory mechanism on the utterance modeling at multiple levels. We then utilize the structured attention network on the linear-chain CRF to dynamically separate the utterances into cliques. The extensive experiments on two primary benchmark datasets Switchboard Dialogue Act (SWDA) and Meeting Recorder Dialogue Act (MRDA) datasets show that our method achieves better performance than other state-of-the-art solutions to the problem.",
"Discourse parsing aims to identify structures and relationships between different discourse units. Most existing approaches analyze a whole discourse at once, which often fails in distinguishing long-span relations and properly representing discourse units. In this article, we propose a novel parsing model to analyze discourse in a two-step fashion with different feature representations to characterize intra sentence and inter sentence discourse structures, respectively. Our model works in a transition-based framework and benefits from a stack long short-term memory neural network model. Experiments on benchmark tree banks show that our method outperforms traditional 1-step parsing methods in both English and Chinese.",
"Dialogue Act recognition associate dialogue acts (i.e., semantic labels) to utterances in a conversation. The problem of associating semantic labels to utterances can be treated as a sequence labeling problem. In this work, we build a hierarchical recurrent neural network using bidirectional LSTM as a base unit and the conditional random field (CRF) as the top layer to classify each utterance into its corresponding dialogue act. The hierarchical network learns representations at multiple levels, i.e., word level, utterance level, and conversation level. The conversation level representations are input to the CRF layer, which takes into account not only all previous utterances but also their dialogue acts, thus modeling the dependency among both, labels and utterances, an important consideration of natural dialogue. We validate our approach on two different benchmark data sets, Switchboard and Meeting Recorder Dialogue Act, and show performance improvement over the state-of-the-art methods by @math and @math absolute points, respectively. It is worth noting that the inter-annotator agreement on Switchboard data set is @math , and our method is able to achieve the accuracy of about @math despite being trained on the noisy data.",
"",
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases."
]
} |
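A minimal PyTorch sketch of the hierarchical GRU encoder mentioned in the record above: a word-level GRU produces one vector per EDU, and an EDU-level GRU contextualizes those vectors across the dialogue. The dimensions, padding scheme, and the use of the final hidden state as the EDU vector are illustrative choices, not the paper's exact configuration.

```python
import torch.nn as nn

class HierarchicalGRUEncoder(nn.Module):
    """Word-level GRU -> one vector per EDU; EDU-level GRU -> EDU
    representations contextualized over the whole dialogue."""

    def __init__(self, vocab_size, emb_dim=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.word_gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.edu_gru = nn.GRU(hidden, hidden, batch_first=True)

    def forward(self, edus):
        # edus: (n_edus, max_words) padded token ids of one dialogue;
        # each EDU is treated as one item of the word-level batch.
        _, h = self.word_gru(self.emb(edus))  # h: (1, n_edus, hidden)
        # h already has shape (batch=1, seq=n_edus, hidden), so it can
        # be fed to the EDU-level GRU directly.
        ctx, _ = self.edu_gru(h)
        return ctx.squeeze(0)                 # (n_edus, hidden)
```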
1812.00116 | 2902645734 | Interactive user interfaces need to continuously evolve based on the interactions that a user has (or does not have) with the system. This may require constant exploration of various options that the system may have for the user and obtaining signals of user preferences on them. However, such an exploration, especially when the set of available options itself can change frequently, can lead to sub-optimal user experiences. We present Explore-Exploit: a framework designed to collect and utilize user feedback in an interactive and online setting that minimizes regressions in end-user experience. This framework provides a suite of online learning operators for various tasks such as personalization ranking, candidate selection and active learning. We demonstrate how to integrate this framework with run-time services to leverage online and interactive machine learning out-of-the-box. We also present results demonstrating the efficiencies that can be achieved using the Explore-Exploit framework. | The Explore-Exploit framework is built upon a significant body of related work in multi-armed bandit algorithms, reinforcement learning and active learning. @cite_16 and @cite_0 give a good overview and empirical analysis of different multi-armed bandit algorithms. These algorithms have been shown to be very useful for many real-world problems such as recommendation @cite_4 @cite_9 and ranking @cite_5 . Within Explore-Exploit, we have implemented multiple multi-armed bandit algorithms including @math -Greedy, UCB1, Thompson Sampling, etc. Active learning has also demonstrated great advantages in many use cases @cite_6 @cite_10 . In Explore-Exploit, we have built multiple active learning operators that work in both the streaming setting @cite_1 @cite_15 and the pool setting @cite_13 . | {
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_0",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_10"
],
"mid": [
"",
"2112420033",
"",
"2108740451",
"2623968842",
"1544426622",
"2023599408",
"",
"2804268694",
""
],
"abstract": [
"",
"Personalized web services strive to adapt their services (advertisements, news articles, etc.) to individual users by making use of both content and user information. Despite a few recent advances, this problem remains challenging for at least two reasons. First, web service is featured with dynamically changing pools of content, rendering traditional collaborative filtering methods inapplicable. Second, the scale of most web services of practical interest calls for solutions that are both fast in learning and computation. In this work, we model personalized recommendation of news articles as a contextual bandit problem, a principled approach in which a learning algorithm sequentially selects articles to serve users based on contextual information about the users and articles, while simultaneously adapting its article-selection strategy based on user-click feedback to maximize total user clicks. The contributions of this work are three-fold. First, we propose a new, general contextual bandit algorithm that is computationally efficient and well motivated from learning theory. Second, we argue that any bandit algorithm can be reliably evaluated offline using previously recorded random traffic. Finally, using this offline evaluation method, we successfully applied our new algorithm to a Yahoo! Front Page Today Module dataset containing over 33 million events. Results showed a 12.5 click lift compared to a standard context-free bandit algorithm, and the advantage becomes even greater when data gets more scarce.",
"",
"Unlabeled samples can be intelligently selected for labeling to minimize classification error. In many real-world applications, a large number of unlabeled samples arrive in a streaming manner, making it impossible to maintain all the data in a candidate pool. In this work, we focus on binary classification problems and study selective labeling in data streams where a decision is required on each sample sequentially. We consider the unbiasedness property in the sampling process, and design optimal instrumental distributions to minimize the variance in the stochastic process. Meanwhile, Bayesian linear classifiers with weighted maximum likelihood are optimized online to estimate parameters. In empirical evaluation, we collect a data stream of user-generated comments on a commercial news portal in 30 consecutive days, and carry out offline evaluation to compare various sampling strategies, including unbiased active learning, biased variants, and random sampling. Experimental results verify the usefulness of online active learning, especially in the non-stationary situation with concept drift.",
"",
"Although many algorithms for the multi-armed bandit problem are well-understood theoretically, empirical confirmation of their effectiveness is generally scarce. This paper presents a thorough empirical study of the most popular multi-armed bandit algorithms. Three important observations can be made from our results. Firstly, simple heuristics such as epsilon-greedy and Boltzmann exploration outperform theoretically sound algorithms on most settings by a significant margin. Secondly, the performance of most algorithms varies dramatically with the parameters of the bandit problem. Our study identifies for each algorithm the settings where it performs well, and the settings where it performs poorly. Thirdly, the algorithms' performance relative each to other is affected only by the number of bandit arms and the variance of the rewards. This finding may guide the design of subsequent empirical evaluations. In the second part of the paper, we turn our attention to an important area of application of bandit algorithms: clinical trials. Although the design of clinical trials has been one of the principal practical problems motivating research on multi-armed bandits, bandit algorithms have never been evaluated as potential treatment allocation strategies. Using data from a real study, we simulate the outcome that a 2001-2002 clinical trial would have had if bandit algorithms had been used to allocate patients to treatments. We find that an adaptive trial would have successfully treated at least 50 more patients, while significantly reducing the number of adverse effects and increasing patient retention. At the end of the trial, the best treatment could have still been identified with a high level of statistical confidence. Our findings demonstrate that bandit algorithms are attractive alternatives to current adaptive treatment allocation strategies.",
"Algorithms for learning to rank Web documents usually assume a document's relevance is independent of other documents. This leads to learned ranking functions that produce rankings with redundant results. In contrast, user studies have shown that diversity at high ranks is often preferred. We present two online learning algorithms that directly learn a diverse ranking of documents based on users' clicking behavior. We show that these algorithms minimize abandonment, or alternatively, maximize the probability that a relevant document is found in the top k positions of a ranking. Moreover, one of our algorithms asymptotically achieves optimal worst-case performance even if users' interests change.",
"",
"Modern deep learning methods are very sensitive to many hyperparameters, and, due to the long training times of state-of-the-art models, vanilla Bayesian hyperparameter optimization is typically computationally infeasible. On the other hand, bandit-based configuration evaluation approaches based on random search lack guidance and do not converge to the best configurations as quickly. Here, we propose to combine the benefits of both Bayesian optimization and bandit-based methods, in order to achieve the best of both worlds: strong anytime performance and fast convergence to optimal configurations. We propose a new practical state-of-the-art hyperparameter optimization method, which consistently outperforms both Bayesian optimization and Hyperband on a wide range of problem types, including high-dimensional toy functions, support vector machines, feed-forward neural networks, Bayesian neural networks, deep reinforcement learning, and convolutional neural networks. Our method is robust and versatile, while at the same time being conceptually simple and easy to implement.",
""
]
} |
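For reference, here are minimal Python sketches of the three bandit policies the record above says the framework implements (epsilon-greedy, UCB1, and Beta-Bernoulli Thompson sampling). The class and function names are illustrative, not the framework's actual API.

```python
import math
import random

class UCB1:
    """UCB1: play each arm once, then pick the arm maximizing
    mean reward + sqrt(2 ln t / n_pulls)."""

    def __init__(self, n_arms):
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms
        self.t = 0

    def select(self):
        self.t += 1
        for arm, c in enumerate(self.counts):
            if c == 0:
                return arm                    # initialization round
        return max(range(len(self.counts)),
                   key=lambda a: self.values[a]
                   + math.sqrt(2 * math.log(self.t) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1                 # running-mean update
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

def epsilon_greedy_select(values, eps=0.1):
    """With probability eps explore uniformly, else exploit the best arm."""
    if random.random() < eps:
        return random.randrange(len(values))
    return max(range(len(values)), key=values.__getitem__)

def thompson_select(successes, failures):
    """Beta-Bernoulli Thompson sampling for binary rewards."""
    draws = [random.betavariate(s + 1, f + 1)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=draws.__getitem__)
```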
1812.00265 | 2902645734 | Functional groups (FGs) serve as a foundation for analyzing chemical properties of organic molecules. Automatic discovery of FGs will impact various fields of research, including medicinal chemistry, by reducing the number of lab experiments required for discovery or synthesis of new molecules. Here, we investigate methods based on graph convolutional neural networks (GCNNs) for localizing FGs that contribute to specific chemical properties. Molecules are modeled as undirected graphs with atoms as nodes and bonds as edges. Using this graph structure, we trained GCNNs in a supervised way on experimentally-validated molecular training sets to predict specific chemical properties, e.g., toxicity. Upon learning a GCNN, we analyzed its activation patterns to automatically identify FGs using four different methods: gradient-based saliency maps, Class Activation Mapping (CAM), gradient-weighted CAM (Grad-CAM), and Excitation Back-Propagation. We evaluated the contrastive power of these methods with respect to the specificity of the identified molecular substructures and their relevance for chemical functions. Grad-CAM had the highest contrastive power and generated qualitatively the best FGs. This work paves the way for automatic analysis and design of new molecules. | A long-standing limitation of general deep neural networks has been the difficulty in interpreting and explaining the classification results. Recently, explainability methods have been devised for deep networks and specifically CNNs @cite_14 @cite_16 @cite_19 @cite_2 . These methods enable one to probe a CNN and identify the important substructures of the input data (as deemed by the network), which could be used as an explanatory tool or as a tool to discover unknown underlying substructures in the data. For example, in medical imaging, a CNN can not only classify an image as containing a malignant lesion but also localize the lesion, since the network provides reasoning for its classification. Here, we are interested in measuring the potential of these methods for the discovery of FGs in organic molecules. | {
"cite_N": [
"@cite_19",
"@cite_14",
"@cite_16",
"@cite_2"
],
"mid": [
"2616247523",
"2962851944",
"2950328304",
"2503388974"
],
"abstract": [
"We propose a technique for producing \"visual explanations\" for decisions from a large class of CNN-based models, making them more transparent. Our approach - Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept, flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, GradCAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multimodal inputs (e.g. VQA) or reinforcement learning, without any architectural changes or re-training. We combine GradCAM with fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to off-the-shelf image classification, captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into their failure modes (showing that seemingly unreasonable predictions have reasonable explanations), (b) are robust to adversarial images, (c) outperform previous methods on weakly-supervised localization, (d) are more faithful to the underlying model and (e) help achieve generalization by identifying dataset bias. For captioning and VQA, our visualizations show that even non-attention based models can localize inputs. Finally, we conduct human studies to measure if GradCAM explanations help users establish trust in predictions from deep networks and show that GradCAM helps untrained users successfully discern a \"stronger\" deep network from a \"weaker\" one. Our code is available at this https URL A demo and a video of the demo can be found at this http URL and youtu.be COjUB9Izk6E.",
"This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].",
"In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2 top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them",
"We aim to model the top-down attention of a convolutional neural network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. We show a theoretic connection between the proposed contrastive attention formulation and the Class Activation Map computation. Efficient implementation of Excitation Backprop for common neural network layers is also presented. In experiments, we visualize the evidence of a model’s classification decision by computing the proposed top-down attention maps. For quantitative evaluation, we report the accuracy of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images. Finally, we demonstrate applications of our method in model interpretation and data annotation assistance for facial expression analysis and medical imaging tasks."
]
} |
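A sketch of how Grad-CAM, the best-performing method in the record above, transfers from image pixels to molecular graphs: channel weights are the class-score gradients averaged over nodes, and each atom's heat value is the rectified weighted sum of its feature map. The function name and the final normalization are illustrative choices of this sketch.

```python
import numpy as np

def grad_cam_nodes(activations, gradients):
    """Grad-CAM adapted from images to graph nodes.

    activations: (n_nodes, n_channels) feature maps of the final
    graph-convolution layer for one molecule.
    gradients:   same shape, d(class score)/d(activations).
    """
    alpha = gradients.mean(axis=0)               # per-channel weights
    heat = np.maximum(activations @ alpha, 0.0)  # ReLU of weighted sum
    if heat.max() > 0:
        heat = heat / heat.max()                 # scale to [0, 1]
    return heat                                  # one score per atom
```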
1812.00265 | 2902645734 | Functional groups (FGs) serve as a foundation for analyzing chemical properties of organic molecules. Automatic discovery of FGs will impact various fields of research, including medicinal chemistry, by reducing the number of lab experiments required for discovery or synthesis of new molecules. Here, we investigate methods based on graph convolutional neural networks (GCNNs) for localizing FGs that contribute to specific chemical properties. Molecules are modeled as undirected graphs with atoms as nodes and bonds as edges. Using this graph structure, we trained GCNNs in a supervised way on experimentally-validated molecular training sets to predict specific chemical properties, e.g., toxicity. Upon learning a GCNN, we analyzed its activation patterns to automatically identify FGs using four different methods: gradient-based saliency maps, Class Activation Mapping (CAM), gradient-weighted CAM (Grad-CAM), and Excitation Back-Propagation. We evaluated the contrastive power of these methods with respect to the specificity of the identified molecular substructures and their relevance for chemical functions. Grad-CAM had the highest contrastive power and generated qualitatively the best FGs. This work paves the way for automatic analysis and design of new molecules. | The most straightforward approach for generating a sensitivity map over the input data, to discover the importance of the underlying substructures, is to calculate a gradient map by considering the norm of the gradient of the class score with respect to the input @cite_14 . However, gradient maps are known to be noisy, and smoothing these maps might be necessary @cite_18 . More advanced techniques include Class Activation Mapping (CAM) @cite_16 , Gradient-weighted Class Activation Mapping (Grad-CAM) @cite_19 , and Excitation Back-Propagation (EB) @cite_2 , which improve gradient maps by taking into account some notion of context. These techniques have been shown to be effective on CNNs and can identify highly abstract notions in images. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_19",
"@cite_2",
"@cite_16"
],
"mid": [
"2626639386",
"2962851944",
"2616247523",
"2503388974",
"2950328304"
],
"abstract": [
"Explaining the output of a deep network remains a challenge. In the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision. A starting point for this strategy is the gradient of the class score function with respect to the input image. This gradient can be interpreted as a sensitivity map, and there are several techniques that elaborate on this basic idea. This paper makes two contributions: it introduces SmoothGrad, a simple method that can help visually sharpen gradient-based sensitivity maps, and it discusses lessons in the visualization of these maps. We publish the code for our experiments and a website with our results.",
"This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13].",
"We propose a technique for producing \"visual explanations\" for decisions from a large class of CNN-based models, making them more transparent. Our approach - Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept, flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, GradCAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multimodal inputs (e.g. VQA) or reinforcement learning, without any architectural changes or re-training. We combine GradCAM with fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to off-the-shelf image classification, captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into their failure modes (showing that seemingly unreasonable predictions have reasonable explanations), (b) are robust to adversarial images, (c) outperform previous methods on weakly-supervised localization, (d) are more faithful to the underlying model and (e) help achieve generalization by identifying dataset bias. For captioning and VQA, our visualizations show that even non-attention based models can localize inputs. Finally, we conduct human studies to measure if GradCAM explanations help users establish trust in predictions from deep networks and show that GradCAM helps untrained users successfully discern a \"stronger\" deep network from a \"weaker\" one. Our code is available at this https URL A demo and a video of the demo can be found at this http URL and youtu.be COjUB9Izk6E.",
"We aim to model the top-down attention of a convolutional neural network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. We show a theoretic connection between the proposed contrastive attention formulation and the Class Activation Map computation. Efficient implementation of Excitation Backprop for common neural network layers is also presented. In experiments, we visualize the evidence of a model’s classification decision by computing the proposed top-down attention maps. For quantitative evaluation, we report the accuracy of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images. Finally, we demonstrate applications of our method in model interpretation and data annotation assistance for facial expression analysis and medical imaging tasks.",
"In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1 top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2 top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them"
]
} |
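The Grad-CAM weighting described in the record above comes down to a few lines of array arithmetic: average the class-score gradient over spatial positions (here, graph nodes) to get one weight per channel, then take a rectified weighted sum of the feature maps. Below is a minimal NumPy sketch adapted to node features of a graph network; the function name, shapes, and toy data are illustrative assumptions, not code from the cited papers.

```python
import numpy as np

# Minimal Grad-CAM sketch for a graph conv layer (assumed shapes):
#   F  : node-feature map of the last conv layer, shape (num_nodes, channels)
#   dF : gradient of the class score with respect to F, same shape
def grad_cam_node_importance(F, dF):
    alpha = dF.mean(axis=0)            # one weight per channel (global pooling)
    cam = np.maximum(F @ alpha, 0.0)   # rectified weighted sum -> one score per node
    return cam / (cam.max() + 1e-12)   # normalize to [0, 1] for comparability

# Toy usage: 5 atoms, 8 channels, random activations and gradients.
rng = np.random.default_rng(0)
F, dF = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
print(grad_cam_node_importance(F, dF))  # per-atom importance scores
```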
1812.00312 | 2903029729 | We present a new method to localize a camera within a previously unseen environment perceived from an egocentric point of view. Although this is, in general, an ill-posed problem, humans can effortlessly and efficiently determine their relative location and orientation and navigate in previously unseen environments, e.g., finding a specific item in a new grocery store. To enable such a capability, we design a new egocentric representation, which we call ECO (Egocentric COgnitive map). ECO is biologically inspired by the cognitive map that allows human navigation, and it encodes the surrounding visual semantics with respect to both distance and orientation. ECO possesses three main properties: (1) reconfigurability: complex semantics and geometry are captured via the synthesis of atomic visual representations (e.g., image patch); (2) robustness: the visual semantics are registered in a geometrically consistent way (e.g., aligning with respect to the gravity vector, frontalizing, and rescaling to canonical depth), thus enabling us to learn meaningful atomic representations; (3) adaptability: a domain adaptation framework is designed to generalize the learned representation without manual calibration. As a proof-of-concept, we use ECO to localize a camera within real-world scenes---various grocery stores---and demonstrate performance improvements when compared to existing semantic localization approaches. | Image localization techniques often incorporate other correlated sensory data. Cozman and Krotkov @cite_35 introduced localization of an image taken in an unknown environment using temporal changes in sun altitudes. @cite_2 incorporated weather data reported by satellite imagery to localize widely distributed cameras. They found matches between weather conditions on images over a year and the expected weather changes indicated by satellite imagery. As GPS became a viable solution for localization in many applications, GPS-tagged images can help to localize images that do not have such tags. Zhang and Kosecka @cite_45 built a GPS-tagged image repository in urban environments and found correspondences between a query image and the database using SIFT features @cite_25 . Hays and Efros @cite_15 leveraged GPS-tagged internet images to estimate a location probability distribution over Earth. @cite_13 extended the work to disambiguate locations of the images without distinct landmarks. @cite_17 estimated image location based on a 3D elevation model of mountainous terrain and evaluated their method on the scale of a country (Switzerland).
"cite_N": [
"@cite_35",
"@cite_45",
"@cite_2",
"@cite_15",
"@cite_13",
"@cite_25",
"@cite_17"
],
"mid": [
"1853701347",
"2043732461",
"2135676438",
"2103163130",
"2537480791",
"2151103935",
"104903125"
],
"abstract": [
"This paper explores the possibility of using Sun altitude for localization of a robot in totally unknown territory. A set of Sun altitudes is obtained by processing a sequence of time-indexed images of the sky. Each altitude constrains the viewer to a circle on the surface of a celestial body, called the circle of equal altitude. A set of circles of equal altitude can be intersected to yield viewer position. We use this principle to obtain the position on Earth. Since altitude measurements are corrupted by noise, a least-square estimate is numerically calculated from the sequence of altitudes. The paper discusses the necessary theory for Sun-based localization, the technical issues of camera calibration and image processing, and presents preliminary results with real data.",
"In this paper we present a prototype system for image based localization in urban environments. Given a database of views of city street scenes tagged by GPS locations, the system computes the GPS location of a novel query view. We first use a wide-baseline matching technique based on SIFT features to select the closest views in the database. Often due to a large change of viewpoint and presence of repetitive structures, a large percentage of matches (> 50 ) are not correct correspondences. The subsequent motion estimation between the query view and the reference view, is then handled by a novel and efficient robust estimation technique capable of dealing with large percentage of outliers. This stage is also accompanied by a model selection step among the fundamental matrix and the homography. Once the motion between the closest reference views is estimated, the location of the query view is then obtained by triangulation of translation directions. Approximate solutions for cases when triangulation cannot be obtained reliably are also described. The presented system is tested on the dataset used in ICCV 2005 Computer Vision Contest and is shown to have higher accuracy than previous reported results.",
"A key problem in widely distributed camera networks is locating the cameras. This paper considers three scenarios for camera localization: localizing a camera in an unknown environment, adding a new camera in a region with many other cameras, and localizing a camera by finding correlations with satellite imagery. We find that simple summary statistics (the time course of principal component coefficients) are sufficient to geolocate cameras without determining correspondences between cameras or explicitly reasoning about weather in the scene. We present results from a database of images from 538 cameras collected over the course of a year. We find that for cameras that remain stationary and for which we have accurate image times- tamps, we can localize most cameras to within 50 miles of the known location. In addition, we demonstrate the use of a distributed camera network in the construction a map of weather conditions.",
"Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally - on the scale of the entire planet! In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we leverage a dataset of over 6 million GPS-tagged images from the Internet. We represent the estimated image location as a probability distribution over the Earthpsilas surface. We quantitatively evaluate our approach in several geolocation tasks and demonstrate encouraging performance (up to 30 times better than chance). We show that geolocation estimates can provide the basis for numerous other image understanding tasks such as population density estimation, land cover estimation or urban rural classification.",
"This paper presents a method for estimating geographic location for sequences of time-stamped photographs. A prior distribution over travel describes the likelihood of traveling from one location to another during a given time interval. This distribution is based on a training database of 6 million photographs from Flickr.com. An image likelihood for each location is defined by matching a test photograph against the training database. Inferring location for images in a test sequence is then performed using the Forward-Backward algorithm, and the model can be adapted to individual users as well. Using temporal constraints allows our method to geolocate images without recognizable landmarks, and images with no geographic cues whatsoever. This method achieves a substantial performance improvement over the best-available baseline, and geolocates some users' images with near-perfect accuracy.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.",
"Given a picture taken somewhere in the world, automatic geo-localization of that image is a task that would be extremely useful e.g. for historical and forensic sciences, documentation purposes, organization of the world's photo material and also intelligence applications. While tremendous progress has been made over the last years in visual location recognition within a single city, localization in natural environments is much more difficult, since vegetation, illumination, seasonal changes make appearance-only approaches impractical. In this work, we target mountainous terrain and use digital elevation models to extract representations for fast visual database lookup. We propose an automated approach for very large scale visual localization that can efficiently exploit visual information contours and geometric constraints consistent orientation at the same time. We validate the system on the scale of a whole country Switzerland, 40 000km2 using a new dataset of more than 200 landscape query pictures with ground truth."
]
} |
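The data-driven scene matching that several of the abstracts above describe (estimating a location distribution by comparing a query image against millions of GPS-tagged photos) can be sketched compactly. The snippet below is a hedged NumPy illustration, not the authors' pipeline: feature extraction is abstracted away, and the nearest-neighbor count and Gaussian bandwidth are made-up parameters.

```python
import numpy as np

def geolocation_distribution(query_feat, db_feats, db_gps, k=100, sigma=5.0):
    """Return the k nearest GPS tags and normalized match weights."""
    d = np.linalg.norm(db_feats - query_feat, axis=1)  # distance to every database image
    nn = np.argsort(d)[:k]                             # k most similar scenes
    w = np.exp(-(d[nn] - d[nn].min()) ** 2 / (2 * sigma ** 2))  # closer matches weigh more
    return db_gps[nn], w / w.sum()                     # (lat, lon) samples + probabilities

# Toy usage: 1000 database images with 64-d descriptors and (lat, lon) tags.
rng = np.random.default_rng(1)
db_feats = rng.normal(size=(1000, 64))
db_gps = rng.uniform([-90.0, -180.0], [90.0, 180.0], size=(1000, 2))
locs, probs = geolocation_distribution(rng.normal(size=64), db_feats, db_gps)
```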
1812.00312 | 2903029729 | We present a new method to localize a camera within a previously unseen environment perceived from an egocentric point of view. Although this is, in general, an ill-posed problem, humans can effortlessly and efficiently determine their relative location and orientation and navigate in previously unseen environments, e.g., finding a specific item in a new grocery store. To enable such a capability, we design a new egocentric representation, which we call ECO (Egocentric COgnitive map). ECO is biologically inspired by the cognitive map that allows human navigation, and it encodes the surrounding visual semantics with respect to both distance and orientation. ECO possesses three main properties: (1) reconfigurability: complex semantics and geometry are captured via the synthesis of atomic visual representations (e.g., image patch); (2) robustness: the visual semantics are registered in a geometrically consistent way (e.g., aligning with respect to the gravity vector, frontalizing, and rescaling to canonical depth), thus enabling us to learn meaningful atomic representations; (3) adaptability: a domain adaptation framework is designed to generalize the learned representation without manual calibration. As a proof-of-concept, we use ECO to localize a camera within real-world scenes---various grocery stores---and demonstrate performance improvements when compared to existing semantic localization approaches. | Two main approaches have been used for image-based localization. (1) Recognition-based localization: @cite_32 used global context to recognize a scene category using a hidden Markov model framework. @cite_43 applied a RANSAC framework for global camera registration. Robertson and Cipolla @cite_11 estimated image positions relative to a set of rectified views of building facades registered onto a city map. Recently, deep neural networks trained on large-scale data continue to push the boundary of localization performance toward human level @cite_46 @cite_12 @cite_27 @cite_37 . (2) Camera resectioning-based localization: Structure from motion has also been employed for large-scale image localization. @cite_50 exploited structure from motion to browse a photo collection from the exact location where it was taken. They used hundreds of images for pose registration in 3D. @cite_4 presented a parallelizable system that can reconstruct hundreds of thousands of images (city scale) within two days. @cite_21 showed larger-scale reconstruction (millions of images) that can be executed on a single PC. Unlike existing visual localization frameworks that rely on geometry or visual semantics, our cognitive map representation ties geometry and visual semantics to build a robust first-person representation that can be reliably matched to the relevant spatial context.
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_21",
"@cite_32",
"@cite_43",
"@cite_27",
"@cite_50",
"@cite_46",
"@cite_12",
"@cite_11"
],
"mid": [
"2951336016",
"2163446794",
"2099443716",
"2128554449",
"2149646227",
"2605111497",
"",
"2035430745",
"",
"1992191556"
],
"abstract": [
"We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 6 degree accuracy for large scale outdoor scenes and 0.5m and 10 degree accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. We show the convnet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples. PoseNet code, dataset and an online demonstration is available on our project webpage, at this http URL",
"We present a system that can reconstruct 3D geometry from large, unorganized collections of photographs such as those found by searching for a given city (e.g., Rome) on Internet photo-sharing sites. Our system is built on a set of new, distributed computer vision algorithms for image matching and 3D reconstruction, designed to maximize parallelism at each stage of the pipeline and to scale gracefully with both the size of the problem and the amount of available computation. Our experimental results demonstrate that it is now possible to reconstruct city-scale image collections with more than a hundred thousand images in less than a day.",
"This paper introduces an approach for dense 3D reconstruction from unregistered Internet-scale photo collections with about 3 million images within the span of a day on a single PC (\"cloudless\"). Our method advances image clustering, stereo, stereo fusion and structure from motion to achieve high computational performance. We leverage geometric and appearance constraints to obtain a highly parallel implementation on modern graphics processors and multi-core architectures. This leads to two orders of magnitude higher performance on an order of magnitude larger dataset than competing state-of-the-art approaches.",
"While navigating in an environment, a vision system has to be able to recognize where it is and what the main objects in the scene are. We present a context-based vision system for place and object recognition. The goal is to identify familiar locations (e.g., office 610, conference room 941, main street), to categorize new environments (office, corridor, street) and to use that information to provide contextual priors for object recognition (e.g., tables are more likely in an office than a street). We present a low-dimensional global image representation that provides relevant information for place recognition and categorization, and show how such contextual information introduces strong priors that simplify object recognition. We have trained the system to recognize over 60 locations (indoors and outdoors) and to suggest the presence and locations of more than 20 different object types. The algorithm has been integrated into a mobile system that provides realtime feedback to the user.",
"We have previously developed a mobile robot system which uses scale invariant visual landmarks to localize and simultaneously build a 3D map of the environment In this paper, we look at global localization, also known as the kidnapped robot problem, where the robot localizes itself globally, without any prior location estimate. This is achieved by matching distinctive landmarks in the current frame to a database map. A Hough transform approach and a random sample consensus (RANSAC) approach for global localization are compared, showing that RANSAC is much more efficient. Moreover, robust global localization can be achieved by matching a small sub-map of the local region built from multiple frames.",
"Deep learning has shown to be effective for robust and real-time monocular image relocalisation. In particular, PoseNet [22] is a deep convolutional neural network which learns to regress the 6-DOF camera pose from a single image. It learns to localize using high level features and is robust to difficult lighting, motion blur and unknown camera intrinsics, where point based SIFT registration fails. However, it was trained using a naive loss function, with hyper-parameters which require expensive tuning. In this paper, we give the problem a more fundamental theoretical treatment. We explore a number of novel loss functions for learning camera pose which are based on geometry and scene reprojection error. Additionally we show how to automatically learn an optimal weighting to simultaneously regress position and orientation. By leveraging geometry, we demonstrate that our technique significantly improves PoseNets performance across datasets ranging from indoor rooms to a small city.",
"",
"A common thread that ties together many prior works in scene understanding is their focus on the aspects directly present in a scene such as its categorical classification or the set of objects. In this work, we propose to look beyond the visible elements of a scene; we demonstrate that a scene is not just a collection of objects and their configuration or the labels assigned to its pixels - it is so much more. From a simple observation of a scene, we can tell a lot about the environment surrounding the scene such as the potential establishments near it, the potential crime rate in the area, or even the economic climate. Here, we explore several of these aspects from both the human perception and computer vision perspective. Specifically, we show that it is possible to predict the distance of surrounding establishments such as McDonald's or hospitals even by using scenes located far from them. We go a step further to show that both humans and computers perform well at navigating the environment based only on visual cues from scenes. Lastly, we show that it is possible to predict the crime rates in an area simply by looking at a scene without any real-time criminal activity. Simply put, here, we illustrate that it is possible to look beyond the visible scene.",
"",
"We describe the prototype of a system intended to allow a userto navigate in an urban environment using a mobile telephone equipped wi th a camera. The system uses a database of views of building facades to det ermine the pose of a query view provided by the user. Our method is based o n a novel wide-baseline matching algorithm that can identify corres ponding building facades in two views despite significant changes of viewpoin t and lighting. We show that our system is capable of localising query views r eliably in a large part of Cambridge city centre."
]
} |
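Several of the abstracts above lean on RANSAC-style robust estimation because more than half of the putative feature matches between views can be outliers. As a concrete illustration of the idea in its simplest form, here is a hedged NumPy sketch that robustly estimates a 2D translation between matched point sets; the hypothesis count and inlier threshold are arbitrary example values, not parameters from the cited systems.

```python
import numpy as np

def ransac_translation(src, dst, iters=500, thresh=3.0, seed=0):
    """Estimate a 2D translation src -> dst despite outlier matches."""
    rng = np.random.default_rng(seed)
    best_t, best_count = None, 0
    for _ in range(iters):
        i = rng.integers(len(src))   # a single match fixes a translation hypothesis
        t = dst[i] - src[i]
        count = (np.linalg.norm(src + t - dst, axis=1) < thresh).sum()
        if count > best_count:
            best_count, best_t = count, t
    # Refit on the consensus set for a less noisy final estimate.
    mask = np.linalg.norm(src + best_t - dst, axis=1) < thresh
    return (dst[mask] - src[mask]).mean(axis=0), mask
```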
1812.00045 | 2903445514 | Deep reinforcement learning (DRL) has achieved great successes in recent years with the help of novel methods and higher compute power. However, there are still several challenges to be addressed such as convergence to locally optimal policies and long training times. In this paper, firstly, we augment the Asynchronous Advantage Actor-Critic (A3C) method with a novel self-supervised auxiliary task, i.e., measuring temporal closeness to terminal states, namely A3C-TP. Secondly, we propose a new framework where planning algorithms such as Monte Carlo tree search or other sources of (simulated) demonstrators can be integrated into asynchronous distributed DRL methods. Compared to vanilla A3C, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game. | Reinforcement learning (RL) seeks to maximize the sum of discounted rewards an agent collects by interacting with an environment. RL approaches mainly fall under three categories: value-based methods such as Q-learning @cite_5 or Deep Q-Network @cite_9 , policy-based methods such as REINFORCE @cite_38 , and a combination of value- and policy-based techniques, i.e., actor-critic methods @cite_36 . Recently, there have been several distributed actor-critic based DRL algorithms @cite_27 @cite_35 @cite_14 @cite_2 .
"cite_N": [
"@cite_38",
"@cite_35",
"@cite_14",
"@cite_36",
"@cite_9",
"@cite_27",
"@cite_2",
"@cite_5"
],
"mid": [
"2119717200",
"2950872548",
"2786036274",
"",
"2145339207",
"2260756217",
"2949980113",
""
],
"abstract": [
"This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.",
"Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880 expert human performance, and a challenging suite of first-person, three-dimensional tasks leading to a mean speedup in learning of 10 @math and averaging 87 expert human performance on Labyrinth.",
"In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters. A key challenge is to handle the increased amount of data and extended training time, which is already a problem in single task learning. We have developed a new distributed agent IMPALA (Importance-Weighted Actor Learner Architecture) that can scale to thousands of machines and achieve a throughput rate of 250,000 frames per second. We achieve stable learning at high throughput by combining decoupled acting and learning with a novel off-policy correction method called V-trace, which was critical for achieving learning stability. We demonstrate the effectiveness of IMPALA for multi-task reinforcement learning on DMLab-30 (a set of 30 tasks from the DeepMind Lab environment (, 2016)) and Atari-57 (all available Atari games in Arcade Learning Environment (, 2013a)). Our results show that IMPALA is able to achieve better performance than previous agents, use less data and crucially exhibits positive transfer between tasks as a result of its multi-task approach.",
"",
"An artificial agent is developed that learns to play a diverse range of classic Atari 2600 computer games directly from sensory experience, achieving a performance comparable to that of an expert human player; this work paves the way to building general-purpose learning algorithms that bridge the divide between perception and action.",
"We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.",
"In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN (, 2016) and Categorical DQN (, 2017), while giving better run-time performance than A3C (, 2016). Our first contribution is a new policy evaluation algorithm called Distributional Retrace, which brings multi-step off-policy updates to the distributional reinforcement learning setting. The same approach can be used to convert several classes of multi-step policy evaluation algorithms designed for expected value evaluation into distributional ones. Next, we introduce the eta -leave-one-out policy gradient algorithm which improves the trade-off between variance and bias by using action values as a baseline. Our final algorithmic contribution is a new prioritized replay algorithm for sequences, which exploits the temporal locality of neighboring observations for more efficient replay prioritization. Using the Atari 2600 benchmarks, we show that each of these innovations contribute to both the sample efficiency and final agent performance. Finally, we demonstrate that Reactor reaches state-of-the-art performance after 200 million frames and less than a day of training.",
""
]
} |
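The three RL families named in the related work above meet in the actor-critic update: a learned value function (the critic) supplies the baseline for a policy-gradient step (the actor). The toy tabular sketch below shows one-step actor-critic on a made-up 5-state chain; the environment, learning rates, and episode count are arbitrary assumptions for illustration, not the A3C setup from the record.

```python
import numpy as np

n_states, n_actions, gamma, lr = 5, 2, 0.99, 0.1
theta = np.zeros((n_states, n_actions))   # policy logits (actor)
V = np.zeros(n_states)                    # state values (critic)
rng = np.random.default_rng(2)

def policy(s):                            # softmax over the state's logits
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

for episode in range(2000):
    s = 0
    while s < n_states - 1:               # rightmost state is terminal
        p = policy(s)
        a = rng.choice(n_actions, p=p)
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        td = r + gamma * V[s2] - V[s]     # TD error doubles as an advantage estimate
        V[s] += lr * td                   # critic update
        grad = -p
        grad[a] += 1.0                    # gradient of log softmax policy
        theta[s] += lr * td * grad        # actor update (policy gradient)
        s = s2
```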