Dataset fields: aid (string, 9 to 15 characters), mid (string, 7 to 10 characters), abstract (string, 78 to 2.56k characters), related_work (string, 92 to 1.77k characters), ref_abstract (dict).
1908.04933
2968171141
Re-Pair is a grammar compression scheme with favorable compression rates. The computation of Re-Pair comes at the cost of maintaining large frequency tables, which makes it hard to compute Re-Pair on large-scale data sets. As a solution to this problem we present, given a text of length @math whose characters are drawn from an integer alphabet, an @math time algorithm computing Re-Pair in @math bits of space including the text space, where @math is the number of terminals and non-terminals. The algorithm works in the restore model, supporting the recovery of the original input in the time for the Re-Pair computation with @math additional bits of working space. We give variants of our solution working in parallel or in the external memory model.
Re-Pair Computation. Re-Pair is a grammar proposed by , who gave an algorithm computing it in expected linear time with @math words of working space, where @math is the number of non-terminals (produced by Re-Pair). This space requirement was improved by , who presented a linear-time algorithm taking @math words on top of the rewritable text space for a constant @math with @math . Subsequently, they improved their algorithm in @cite_6 to include the text space within the @math words of working space. However, they assume that the alphabet size @math is constant and @math , where @math is the machine word size. They also provide a solution for @math running in expected linear time. Recently, showed how to convert an arbitrary grammar (representing a text) into the Re-Pair grammar in compressed space, i.e., without decompressing the text. Combined with a grammar compressor that can process the text in compressed space in a streaming fashion, this result yields the first Re-Pair computation in compressed space.
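For intuition, here is a minimal, quadratic-time Python sketch of the basic Re-Pair loop (repeatedly replace the most frequent adjacent pair with a fresh non-terminal). It only illustrates how the grammar is built; it makes no attempt at the linear-time, space-efficient computation discussed above.

```python
from collections import Counter

def repair(text):
    """Naive Re-Pair: repeatedly replace the most frequent adjacent pair
    with a new non-terminal until no pair occurs at least twice."""
    seq = list(text)              # working sequence of terminals/non-terminals
    rules = {}                    # non-terminal -> (left symbol, right symbol)
    next_sym = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break
        nt = ("N", next_sym)      # fresh non-terminal
        next_sym += 1
        rules[nt] = pair
        out, i = [], 0
        while i < len(seq):       # non-overlapping, left-to-right replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

# The final start sequence together with the rules forms the grammar.
start, rules = repair("abracadabra")
print(start, rules)
```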
{ "cite_N": [ "@cite_6" ], "mid": [ "2610200252" ], "abstract": [ "Re-Pair is an efficient grammar compressor that operates by recursively replacing high-frequency character pairs with new grammar symbols. The most space-efficient linear-time algorithm computing Re-Pair uses @math words on top of the re-writable text (of length @math and stored in @math words), for any constant @math ; in practice however, this solution uses complex sub-procedures preventing it from being practical. In this paper, we present an implementation of the above-mentioned result making use of more practical solutions; our tool further improves the working space to @math words (text included), for some small constant @math . As a second contribution, we focus on compact representations of the output grammar. The lower bound for storing a grammar with @math rules is @math bits, and the most efficient encoding algorithm in the literature uses at most @math bits and runs in @math time. We describe a linear-time heuristic maximizing the compressibility of the output Re-Pair grammar. On real datasets, our grammar encoding uses---on average---only @math more bits than the information-theoretic minimum. In half of the tested cases, our compressor improves the output size of 7-Zip with maximum compression rate turned on." ] }
1908.05004
2967978532
The subject of this report is the re-identification of individuals in the Myki public transport dataset released as part of the Melbourne Datathon 2018. We demonstrate the ease with which we were able to re-identify ourselves, our co-travellers, and complete strangers; our analysis raises concerns about the nature and granularity of the data released, in particular the ability to identify vulnerable or sensitive groups.
A common theme is that a remarkably small number of distinct points of information is enough to make an individual unique: whenever one person's information is linked together into a detailed record of their events, a few known events are usually enough to identify them. De Montjoye @cite_5 showed that 80% of individuals were unique based on 3 points of time and location, even when neither times nor places were given very precisely.
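This uniqueness effect is easy to reproduce on synthetic data. The Python sketch below (all sizes and distributions are invented for illustration; nothing here comes from the Myki data or from @cite_5) estimates the fraction of synthetic records that are pinned down uniquely by k known (time, place) points.

```python
import random

random.seed(0)
N_PEOPLE, EVENTS_PER_PERSON, N_TIMES, N_PLACES = 1000, 20, 100, 50

# Synthetic event records: each person is a set of (time, place) points.
records = [
    {(random.randrange(N_TIMES), random.randrange(N_PLACES))
     for _ in range(EVENTS_PER_PERSON)}
    for _ in range(N_PEOPLE)
]

def fraction_unique(k, trials=200):
    """Fraction of sampled people whose k known points match only their own record."""
    unique = 0
    for _ in range(trials):
        person = random.randrange(N_PEOPLE)
        known = set(random.sample(sorted(records[person]), k))
        matches = sum(1 for rec in records if known <= rec)
        unique += (matches == 1)
    return unique / trials

for k in (1, 2, 3, 4):
    print(k, fraction_unique(k))
```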
{ "cite_N": [ "@cite_5" ], "mid": [ "2115240023" ], "abstract": [ "We study fifteen months of human mobility data for one and a half million individuals and find that human mobility traces are highly unique. In fact, in a dataset where the location of an individual is specified hourly, and with a spatial resolution equal to that given by the carrier's antennas, four spatio-temporal points are enough to uniquely identify 95 of the individuals. We coarsen the data spatially and temporally to find a formula for the uniqueness of human mobility traces given their resolution and the available outside information. This formula shows that the uniqueness of mobility traces decays approximately as the 1 10 power of their resolution. Hence, even coarse datasets provide little anonymity. These findings represent fundamental constraints to an individual's privacy and have important implications for the design of frameworks and institutions dedicated to protect the privacy of individuals." ] }
1908.05055
2968345977
In this paper we present new optimization formulations for maximizing the network lifetime in wireless mesh networks performing data aggregation and dissemination for machine-to-machine communication in the Internet of Things. We focus on heterogeneous networks in which multiple applications co-exist and nodes may take on different roles for different applications. Moreover, we address network reconfiguration as a means to increase the network lifetime, in keeping with the current trend towards software defined networks and network function virtualization. To test our optimization formulations, we conducted a numerical study using randomly-generated mesh networks from 10 to 30 nodes, and showed that the network lifetime can be increased using network reconfiguration by up to 75% over a single, minimal-energy configuration. Further, our solutions are feasible to implement in practical scenarios: only a few configurations are needed, thus requiring little storage for a standalone network, and the synchronization and signalling needed to switch configurations is low relative to each configuration's operating time.
In @cite_2 , we considered the problem of data aggregation and dissemination in IoT networks serving, for example, monitoring, sensing, or machine control applications. A key aspect of the IoT that differentiates it from classical wireless sensor networks (WSNs) is its heterogeneity. We therefore considered cases where nodes may take on different roles (for example, sensors, destinations, or transit nodes) for different applications, and where multiple applications with different demands may be present in the network simultaneously. Moreover, these demands can be more general than only collecting data and forwarding it to a single sink, as is usually the case for WSNs. Rather, data may be processed within the network (we take the specific case of aggregation), and may be disseminated to multiple sinks via multicast transmissions.
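As a concrete illustration of this heterogeneous model, each application can be described by the roles its nodes play; the same node may act as a sensor for one application and as a sink or transit node for another. The Python sketch below is purely illustrative; the class, field, and node names are hypothetical and are not the notation of @cite_2.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Application:
    """One data stream: sensor nodes produce measurements that may be aggregated
    in-network and must be disseminated to one or more sink (e.g. actuator) nodes."""
    name: str
    sensors: frozenset   # nodes generating data for this application
    sinks: frozenset     # nodes that must receive the (aggregated) data
    aggregate: bool = True

# The same node can play different roles in different applications.
apps = [
    Application("temperature-control", frozenset({1, 2, 3}), frozenset({7})),
    Application("vibration-alarm", frozenset({3, 7}), frozenset({1, 8}), aggregate=False),
]
transit_only = {4, 5, 6}   # nodes that merely forward traffic for both applications
```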
{ "cite_N": [ "@cite_2" ], "mid": [ "2786403027" ], "abstract": [ "Established approaches to data aggregation in wireless sensor networks (WSNs) do not cover the variety of new use cases developing with the advent of the Internet of Things (IoT). In particular, the current push toward fog computing, in which control, computation, and storage are moved to nodes close to the network edge, induces a need to collect data at multiple sinks, rather than the single sink typically considered in WSN aggregation algorithms. Moreover, for machine-to-machine communication scenarios, actuators subscribing to sensor measurements may also be present, in which case data should be not only aggregated and processed in-network but also disseminated to actuator nodes. In this paper, we present mixed-integer programming formulations and algorithms for the problem of energy-optimal routing and multiple-sink aggregation, as well as joint aggregation and dissemination, of sensor measurement data in IoT edge networks. We consider optimization of the network for both minimal total energy usage, and min-max per-node energy usage. We also provide a formulation and algorithm for throughput-optimal scheduling of transmissions under the physical interference model in the pure aggregation case. We have conducted a numerical study to compare the energy required for the two use cases, as well as the time to solve them, in generated network scenarios with varying topologies and between 10 and 40 nodes. Although aggregation only accounts for less than 15 of total energy usage in all cases tested, it provides substantial energy savings. Our results show more than 13 times greater energy usage for 40-node networks using direct, shortest-path flows from sensors to actuators, compared with our aggregation and dissemination solutions." ] }
Network lifetime has been studied extensively in the context of WSNs since the early 2000's. A full review of the literature in this area is therefore beyond the scope of this paper; a recent survey can be found in @cite_20 . We will instead focus on the recent work that is most relevant to the current paper.
{ "cite_N": [ "@cite_20" ], "mid": [ "2034964589" ], "abstract": [ "The longevity of wireless sensor network (WSN) deployments is often crucial for real-time monitoring applications. Minimizing energy consumption by utilizing intelligent information processing is one of the main ways to prolong the lifetime of a network deployment. Data streams from the sensors need to be processed within the resource constraints of the sensing platforms to reduce the energy consumption associated with packet transmission. In this paper we carried out both simulation and real-world implementation of light-weight adaptive models to achieve a prolonged WSN lifetime. Specifically, we propose a Naive model that incurs virtually no cost with low memory footprint to realize this goal. Our results show that, despite its minimal complexity, the Naive model is robust when compared with other well-known algorithms used for prediction in WSNs. We show that our approach achieves up to 96 communication reduction, within 0.2 degrees error bound with no significant loss in accuracy and it is comparable in performance to the more complex algorithms like Exponential Smoothing (ETS)." ] }
There are numerous definitions of network lifetime adopted in the literature @cite_20 . Under some of these, the network lifetime expires at the instant a certain number (possibly as low as one) or proportion of nodes deplete their batteries, when the first data collection failure occurs, or when the specific node with the highest consumption rate runs out of energy. In @cite_20 , these definitions are classified into four categories depending on whether they are based on node lifetime, coverage and connectivity, transmission, or a combination of parameters.
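A toy calculation makes the contrast between such definitions concrete. The Python sketch below (with arbitrary, made-up energy and power values) computes the lifetime a small network would be assigned under a few of the node-lifetime-based definitions.

```python
def node_lifetimes(energy, power):
    """Time until each node depletes its battery, assuming a constant draw."""
    return {n: energy[n] / power[n] for n in energy}

energy = {"a": 100.0, "b": 100.0, "c": 100.0}   # Joules (arbitrary)
power  = {"a": 2.0,   "b": 1.0,   "c": 0.5}     # Watts (arbitrary)

t = node_lifetimes(energy, power)
first_node_death  = min(t.values())               # lifetime ends when any node dies
second_node_death = sorted(t.values())[1]         # ... when a second node dies (a "k nodes dead" rule)
hungriest_node    = t[max(power, key=power.get)]  # ... when the highest-consumption node dies
print(first_node_death, second_node_death, hungriest_node)
```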
{ "cite_N": [ "@cite_20" ], "mid": [ "2099641228" ], "abstract": [ "We derive a general formula for the lifetime-of wireless sensor networks which holds independently of the underlying network model including network architecture and protocol, data collection initiation, lifetime definition, channel fading characteristics, and energy consumption model. This formula identifies two key parameters at the physical layer that affect the network lifetime: the channel state and the residual energy of sensors. As a result, it provides not only a gauge for performance evaluation of sensor networks but also a guideline for the design of network protocols. Based on this formula, we propose a medium access control protocol that exploits both the channel state information and the residual energy information of individual sensors. Referred to as the max-min approach, this protocol maximizes the minimum residual energy across the network in each data collection." ] }
However, a problem with many of these definitions is that they are not application-centric. In practice, whether or not a network is functional depends on the specific application or applications which it serves. Some applications may require all nodes in the network to have remaining energy, while others may continue to operate correctly with only a few nodes working. The lifetime also depends on the capabilities of the network. For example, if the network can be reconfigured, the lifetime may be extended by switching configurations. This can be facilitated by the use of software defined networking @cite_22 , as well as support from cloud services that are capable of performing even demanding calculations to determine the best network configuration at any given time, without incurring an energy cost in the end devices.
{ "cite_N": [ "@cite_22" ], "mid": [ "1787995280" ], "abstract": [ "An ad-hoc network of wireless static nodes is considered as it arises in a rapidly deployed, sensor-based, monitoring system. Information is generated in certain nodes and needs to reach a set of designated gateway nodes. Each node may adjust its power within a certain range that determines the set of possible one hop away neighbors. Traffic forwarding through multiple hops is employed when the intended destination is not within immediate reach. The nodes have limited initial amounts of energy that is consumed at different rates depending on the power level and the intended receiver. We propose algorithms to select the routes and the corresponding power levels such that the time until the batteries of the nodes drain-out is maximized. The algorithms are local and amenable to distributed implementation. When there is a single power level, the problem is reduced to a maximum flow problem with node capacities and the algorithms converge to the optimal solution. When there are multiple power levels then the achievable lifetime is close to the optimal (that is computed by linear programming) most of the time. It turns out that in order to maximize the lifetime, the traffic should be routed such that the energy consumption is balanced among the nodes in proportion to their energy reserves, instead of routing to minimize the absolute consumed power." ] }
This is the approach we adopt in this paper: we define valid configurations based on the demands of the applications present in the network, along with the roles the various nodes play in serving these demands. Accordingly, we adopt a general definition of the network lifetime as the total time for which the network is operational. Since we consider a class of applications with data streams as their demands, this is most similar to the definition used in @cite_15 , where the network lifetime was defined as the number of sensory information task cycles achieved until the network ceases to be fully operational.
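Under a definition of this kind, reconfiguration naturally leads to a linear program: choose how long to operate each valid configuration so that the total operating time is maximal while no node exceeds its energy budget. The following sketch uses scipy.optimize.linprog with made-up numbers; it illustrates the idea only and is not the formulation proposed in this paper.

```python
import numpy as np
from scipy.optimize import linprog

# power[c, i]: average power drawn by node i while configuration c is active (W, made up).
power = np.array([[2.0, 0.5, 0.5],
                  [0.5, 2.0, 0.5],
                  [0.5, 0.5, 2.0]])
energy = np.array([100.0, 100.0, 100.0])   # per-node energy budgets (J, made up)

# maximize sum_c t_c  <=>  minimize -sum_c t_c,
# subject to sum_c t_c * power[c, i] <= energy[i] for every node i, and t_c >= 0.
res = linprog(c=-np.ones(power.shape[0]),
              A_ub=power.T, b_ub=energy,
              bounds=[(0, None)] * power.shape[0])

best_single = np.max(np.min(energy / power, axis=1))  # best single-configuration lifetime
print("time per configuration:", res.x)
print("lifetime with reconfiguration:", -res.fun, "vs single configuration:", best_single)
```

With these numbers, each configuration alone yields a lifetime of 50, whereas rotating through all three doubles it, which is the kind of gain reconfiguration is meant to capture.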
{ "cite_N": [ "@cite_15" ], "mid": [ "2099641228" ], "abstract": [ "We derive a general formula for the lifetime-of wireless sensor networks which holds independently of the underlying network model including network architecture and protocol, data collection initiation, lifetime definition, channel fading characteristics, and energy consumption model. This formula identifies two key parameters at the physical layer that affect the network lifetime: the channel state and the residual energy of sensors. As a result, it provides not only a gauge for performance evaluation of sensor networks but also a guideline for the design of network protocols. Based on this formula, we propose a medium access control protocol that exploits both the channel state information and the residual energy information of individual sensors. Referred to as the max-min approach, this protocol maximizes the minimum residual energy across the network in each data collection." ] }
Some work has addressed network lifetime for networks with heterogeneous nodes, but only in a rather limited sense. For example, there is work based on the LEACH clustering protocol @cite_16 @cite_18 , where each node may be either an ordinary sensor node or a cluster head at different times. Examples of variations on LEACH that improve the network lifetime include @cite_7 , @cite_17 and @cite_10 , while @cite_12 presents a clustering routing protocol that considers both network lifetime and coverage. In @cite_4 , the nodes are also heterogeneous; however, they may only be of two types: sensor nodes and relay nodes. This is also the case in @cite_13 , where network lifetime is defined as the time until the first node depletes its battery, and (unicast) routing is then optimised for each traffic flow to reach the sink.
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_4", "@cite_7", "@cite_16", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2106665154", "2167360551", "2106335692", "2126379392", "2156919687", "2150050284", "2038464572", "2092347390" ], "abstract": [ "In wireless sensor networks that consist of a large number of low-power, short-lived, unreliable sensors, one of the main design challenges is to obtain long system lifetime without sacrificing system original performances (sensing coverage and sensing reliability). In this paper, we propose a node-scheduling scheme, which can reduce system overall energy consumption, therefore increasing system lifetime, by identifying redundant nodes in respect of sensing coverage and then assigning them an off-duty operation mode that has lower energy consumption than the normal on-duty one. Our scheme aims to completely preserve original sensing coverage theoretically. Practically, sensing coverage degradation caused by location error, packet loss and node failure is very limited, not more than 1 as shown by our experimental results. In addition, the experimental results illustrate that certain redundancy is still guaranteed after node-scheduling, which we believe can provide enough sensing reliability in many applications. We implement the proposed scheme in NS-2 as an extension of the LEACH protocol and compare its energy consumption with the original LEACH. Simulation results exhibit noticeably longer system lifetime after introducing our scheme than before. Copyright © 2003 John Wiley & Sons, Ltd.", "In a sensor network, usually a large number of sensors transport data messages to a limited number of sinks. Due to this multipoint-to-point communications pattern in general homogeneous sensor networks, the closer a sensor to the sink, the quicker it will deplete its battery. This unbalanced energy depletion phenomenon has become the bottleneck problem to elongate the lifetime of sensor networks. In this paper, we consider the effects of joint relay node deployment and transmission power control on network lifetime. Contrary to the intuition the relay nodes considered are even simpler devices than the sensor nodes with limited capabilities. We show that the network lifetime can be extended significantly with the addition of relay nodes to the network. In addition, for the same expected network lifetime goal, the number of relay nodes required can be reduced by employing efficient transmission power control while leaving the network connectivity level unchanged. The solution suggests that it is sufficient to deploy relay nodes only with a specific probabilistic distribution rather than the specifying the exact places. Furthermore, the solution does not require any change on the protocols (such as routing) used in the network.", "Wireless distributed microsensor systems will enable the reliable monitoring of a variety of environments for both civil and military applications. In this paper, we look at communication protocols, which can have significant impact on the overall energy dissipation of these networks. Based on our findings that the conventional protocols of direct transmission, minimum-transmission-energy, multi-hop routing, and static clustering may not be optimal for sensor networks, we propose LEACH (Low-Energy Adaptive Clustering Hierarchy), a clustering-based protocol that utilizes randomized rotation of local cluster based station (cluster-heads) to evenly distribute the energy load among the sensors in the network. 
LEACH uses localized coordination to enable scalability and robustness for dynamic networks, and incorporates data fusion into the routing protocol to reduce the amount of information that must be transmitted to the base station. Simulations show that LEACH can achieve as much as a factor of 8 reduction in energy dissipation compared with conventional routing protocols. In addition, LEACH is able to distribute energy dissipation evenly throughout the sensors, doubling the useful system lifetime for the networks we simulated.", "Prolonged network lifetime, scalability, and load balancing are important requirements for many ad-hoc sensor network applications. Clustering sensor nodes is an effective technique for achieving these goals. In this work, we propose a new energy-efficient approach for clustering nodes in ad-hoc sensor networks. Based on this approach, we present a protocol, HEED (hybrid energy-efficient distributed clustering), that periodically selects cluster heads according to a hybrid of their residual energy and a secondary parameter, such as node proximity to its neighbors or node degree. HEED does not make any assumptions about the distribution or density of nodes, or about node capabilities, e.g., location-awareness. The clustering process terminates in O(1) iterations, and does not depend on the network topology or size. The protocol incurs low overhead in terms of processing cycles and messages exchanged. It also achieves fairly uniform cluster head distribution across the network. A careful selection of the secondary clustering parameter can balance load among cluster heads. Our simulation results demonstrate that HEED outperforms weight-based clustering protocols in terms of several cluster characteristics. We also apply our approach to a simple application to demonstrate its effectiveness in prolonging the network lifetime and supporting data aggregation.", "In a heterogeneous wireless sensor network (WSN), relay nodes (RNs) are adopted to relay data packets from sensor nodes (SNs) to the base station (BS). The deployment of the RNs can have a significant impact on connectivity and lifetime of a WSN system. This paper studies the effects of random deployment strategies. We first discuss the biased energy consumption rate problem associated with uniform random deployment. This problem leads to insufficient energy utilization and shortened network lifetime. To overcome this problem, we propose two new random deployment strategies, namely, the lifetime-oriented deployment and hybrid deployment. The former solely aims at balancing the energy consumption rates of RNs across the network, thus extending the system lifetime. However, this deployment scheme may not provide sufficient connectivity to SNs when the given number of RNs is relatively small. The latter reconciles the concerns of connectivity and lifetime extension. Both single-hop and multihop communication models are considered in this paper. With a combination of theoretical analysis and simulated evaluation, this study explores the trade-off between connectivity and lifetime extension in the problem of RN deployment. It also provides a guideline for efficient deployment of RNs in a large-scale heterogeneous WSN.", "As a specific area of sensor networks, wireless in-home sensor networks differ from general sensor networks in that the network has nodes with heterogeneous resources and dissimilar mobility attributes. 
For example, sensor with different radio coverage, energy capacity, and processing capabilities are deployed, and some of the sensors are mobile and others are fixed in position. The architecture and routing protocol for this type of heterogeneous sensor networks must be based on the resources and characteristics of their member nodes. In addition, the sole stress on energy efficiency for performance measurement is not sufficient. System lifetime is more important in this case. We propose a hub-spoke network topology that is adaptively formed according to the resources of its members. A protocol named resource oriented protocol (ROP) was developed to build the network topology. This protocol principally divides the network operation into two phases. In the topology formation phase, nodes report their available resource characteristics, based on which network architecture is optimally built. We stress that due to the existence of nodes with limitless resources, a top-down appointment process can build the architecture with minimum resource consumption of ordinary nodes. In the topology update phase, mobile sensors and isolated sensors are accepted into the network with an optimal balance of resources. To avoid overhead of periodic route updates, we use a reactive strategy to maintain route cache. Simulation results show that the hub-spoke topology built by ROP can achieve much longer system lifetime.", "We consider a two-tiered Wireless Sensor Network (WSN) consisting of sensor clusters deployed around strategic locations and base-stations (BSs) whose locations are relatively flexible. Within a sensor cluster, there are many small sensor nodes (SNs) that capture, encode and transmit relevant information from the designated area, and there is at least one application node (AN) that receives raw data from these SNs, creates a comprehensive local-view, and forwards the composite bit-stream toward a BS. In practice, both SN and AN are battery-powered and energy-constrained, and their node lifetimes directly affect the network lifetime of WSNs. In this paper, we focus on the topology control process for ANs and BSs, which constitute the upper tier of a two-tiered WSN. We propose approaches to maximize the topological network lifetime of the WSN, by arranging BS location and inter-AN relaying optimally. Based on an algorithm in Computational Geometry, we derive the optimal BS locations under three topological lifetime definitions according to mission criticality. In addition, by studying the intrinsic properties of WSNs, we establish the upper and lower bounds of their maximal topological lifetime. When inter-AN relaying becomes feasible and favorable, we continue to develop an optimal parallel relay allocation to further prolong the topological lifetime of the WSN. An equivalent serialized relay schedule is also obtained, so that each AN only needs to have one relay destination at any time throughout the mission. The experimental performance evaluation demonstrates the efficacy of topology control as a vital process to maximize the network lifetime of WSNs.", "Wireless sensor network (WSN) is a rapidly evolving technological platform with tremendous and novel applications. Recent advances in WSN have led to many new protocols specifically designed for them where energy awareness (i.e. long lived wireless network) is an essential consideration. Most of the attention, however, has been given to the routing protocols since they might differ depending on the application and network architecture. 
As routing approach with hierarchical structure is realized to successfully provide energy efficient solution, various heuristic clustering algorithms have been proposed. As an attractive WSN routing protocol, LEACH has been widely accepted for its energy efficiency and simplicity. Also, the discipline of meta-heuristics Evolutionary Algorithms (EAs) has been utilized by several researchers to tackle cluster-based routing problem in WSN. These biologically inspired routing mechanisms, e.g., HCR, have proved beneficial in prolonging the WSN lifetime, but unfortunately at the expense of decreasing the stability period of WSN. This is most probably due to the abstract modeling of the EA's clustering fitness function. The aim of this paper is to alleviate the undesirable behavior of the EA when dealing with clustered routing problem in WSN by formulating a new fitness function that incorporates two clustering aspects, viz. cohesion and separation error. Simulation over 20 random heterogeneous WSNs shows that our evolutionary based clustered routing protocol (ERP) always prolongs the network lifetime, preserves more energy as compared to the results obtained using the current heuristics such as LEACH, SEP, and HCR protocols. Additionally, we found that ERP outperforms LEACH and HCR in prolonging the stability period, comparable to SEP performance for heterogeneous networks with 10% extra heterogeneity but requires further heterogeneous-aware modification in the presence of 20% of node heterogeneity." ] }
Some work in the literature also considers in-network processing. In @cite_8 , data aggregation trees are constructed and scheduled, and the network can be reconfigured, in that different trees can be used in different time periods. This work again uses the traditional WSN model of many homogeneous sensor nodes all sending measurements to a single sink. The scenario considered in @cite_23 focuses on a machine-to-machine communication application similar to the one we consider, including the presence of edge nodes in the network. However, there, the problem addressed is that of data placement on these edge nodes in order to maximize the network lifetime under latency constraints. Routing is performed by selecting the paths that yield the maximum lifetime, defined as the time until any node runs out of energy; reconfiguration of the network as we propose in this paper is not considered.
{ "cite_N": [ "@cite_23", "@cite_8" ], "mid": [ "2786403027", "2545881182" ], "abstract": [ "Established approaches to data aggregation in wireless sensor networks (WSNs) do not cover the variety of new use cases developing with the advent of the Internet of Things (IoT). In particular, the current push toward fog computing, in which control, computation, and storage are moved to nodes close to the network edge, induces a need to collect data at multiple sinks, rather than the single sink typically considered in WSN aggregation algorithms. Moreover, for machine-to-machine communication scenarios, actuators subscribing to sensor measurements may also be present, in which case data should be not only aggregated and processed in-network but also disseminated to actuator nodes. In this paper, we present mixed-integer programming formulations and algorithms for the problem of energy-optimal routing and multiple-sink aggregation, as well as joint aggregation and dissemination, of sensor measurement data in IoT edge networks. We consider optimization of the network for both minimal total energy usage, and min-max per-node energy usage. We also provide a formulation and algorithm for throughput-optimal scheduling of transmissions under the physical interference model in the pure aggregation case. We have conducted a numerical study to compare the energy required for the two use cases, as well as the time to solve them, in generated network scenarios with varying topologies and between 10 and 40 nodes. Although aggregation only accounts for less than 15 of total energy usage in all cases tested, it provides substantial energy savings. Our results show more than 13 times greater energy usage for 40-node networks using direct, shortest-path flows from sensors to actuators, compared with our aggregation and dissemination solutions.", "In a Wireless Sensor Network (WSN) the sensed data must be gathered and transmitted to a base station where it is further processed by end users. Since that kind of network consists of low-power nodes with limited battery power, power efficient methods must be applied for node communication and data gathering in order to achieve long network lifetimes. In such networks where in a round of communication many sensor nodes have data to send to a base station, it is very important to minimize the total energy consumed by the system so that the total network lifetime is maximized. The lifetime of such sensor network is the time until base station can receive data from all sensors in the network. In this work1, besides the conventional protocol of direct transmission or the use of dynamic routing protocols proposed in literature that potentially aggregates data, we propose an algorithm based on static routing among sensor nodes with unequal energy distribution in order to extend network lifetime and find a near-optimal node energy charge scheme that leads to both node and network lifetime prolongation. Our simulation results show that our algorithm achieves longer network lifetimes mainly because the final energy charge of each node is not uniform, while each node is free from maintaining complex route information and thus less infrastructure communication is needed." ] }
A few general frameworks for maximizing network lifetime have also been developed. In @cite_25 , the focus is on network deployment, specifically the initial energy allocated to each node. Once again nodes are homogeneous, with all nodes collecting data and transmitting it to their neighbors, and the definition of network lifetime is the time until the first sensor depletes its battery. A more general definition of network lifetime is used in @cite_1 , which applies a framework based on channel states aimed at developing medium access protocols for improved lifetime. However, nodes have fixed roles and only a single application is considered.
{ "cite_N": [ "@cite_1", "@cite_25" ], "mid": [ "2545881182", "1787995280" ], "abstract": [ "In a Wireless Sensor Network (WSN) the sensed data must be gathered and transmitted to a base station where it is further processed by end users. Since that kind of network consists of low-power nodes with limited battery power, power efficient methods must be applied for node communication and data gathering in order to achieve long network lifetimes. In such networks where in a round of communication many sensor nodes have data to send to a base station, it is very important to minimize the total energy consumed by the system so that the total network lifetime is maximized. The lifetime of such sensor network is the time until base station can receive data from all sensors in the network. In this work1, besides the conventional protocol of direct transmission or the use of dynamic routing protocols proposed in literature that potentially aggregates data, we propose an algorithm based on static routing among sensor nodes with unequal energy distribution in order to extend network lifetime and find a near-optimal node energy charge scheme that leads to both node and network lifetime prolongation. Our simulation results show that our algorithm achieves longer network lifetimes mainly because the final energy charge of each node is not uniform, while each node is free from maintaining complex route information and thus less infrastructure communication is needed.", "An ad-hoc network of wireless static nodes is considered as it arises in a rapidly deployed, sensor-based, monitoring system. Information is generated in certain nodes and needs to reach a set of designated gateway nodes. Each node may adjust its power within a certain range that determines the set of possible one hop away neighbors. Traffic forwarding through multiple hops is employed when the intended destination is not within immediate reach. The nodes have limited initial amounts of energy that is consumed at different rates depending on the power level and the intended receiver. We propose algorithms to select the routes and the corresponding power levels such that the time until the batteries of the nodes drain-out is maximized. The algorithms are local and amenable to distributed implementation. When there is a single power level, the problem is reduced to a maximum flow problem with node capacities and the algorithms converge to the optimal solution. When there are multiple power levels then the achievable lifetime is close to the optimal (that is computed by linear programming) most of the time. It turns out that in order to maximize the lifetime, the traffic should be routed such that the energy consumption is balanced among the nodes in proportion to their energy reserves, instead of routing to minimize the absolute consumed power." ] }
1908.04924
2967111226
Locality preserving projections (LPP) is a classical dimensionality reduction method based on data graph information. However, LPP remains sensitive to extreme outliers. Because LPP is designed for vectorial data, it may undermine structural information when applied to multidimensional data. Besides, it assumes the dimension of the data to be smaller than the number of instances, which is not suitable for high-dimensional data. For high-dimensional data analysis, the tensor-train decomposition has proved able to capture spatial relations efficiently and effectively. Thus, we propose a tensor-train parameterization for ultra dimensionality reduction (TTPUDR) in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model. A manifold optimization technique is utilized to solve the new model. The performance of TTPUDR is assessed on classification problems, where TTPUDR significantly outperforms previous methods and several state-of-the-art methods.
To preserve spatial information within tensors during dimensionality reduction, @cite_17 introduces Tucker LPP (TLPP), an LPP variant based on the Tucker decomposition for analyzing high-dimensional data; however, its storage complexity increases exponentially as the number of modes grows.
{ "cite_N": [ "@cite_17" ], "mid": [ "1995406764" ], "abstract": [ "For @math -dimensional tensors with possibly large @math , an hierarchical data structure, called the Tree-Tucker format, is presented as an alternative to the canonical decomposition. It has asymptotically the same (and often even smaller) number of representation parameters and viable stability properties. The approach involves a recursive construction described by a tree with the leafs corresponding to the Tucker decompositions of three-dimensional tensors, and is based on a sequence of SVDs for the recursively obtained unfolding matrices and on the auxiliary dimensions added to the initial “spatial” dimensions. It is shown how this format can be applied to the problem of multidimensional convolution. Convincing numerical examples are given." ] }
The other existing dimensionality reduction method that embeds data into a TT subspace is tensor-train neighbourhood preserving embedding (TTNPE) @cite_2 . TTNPE avoids the exponential growth of complexity as the number of modes increases; however, its robustness to extreme outliers remains a concern. What is therefore needed is a dimensionality reduction method that operates in the TT subspace for tensors with many modes or dimensions while also reducing sensitivity to extreme outliers. Our method, TTPUDR, is developed to address all of these aspects.
{ "cite_N": [ "@cite_2" ], "mid": [ "2963465654" ], "abstract": [ "In this paper, we propose a tensor train neighborhood preserving embedding (TTNPE) to embed multidimensional tensor data into low-dimensional tensor subspace. Novel approaches to solve the optimization problem in TTNPE are proposed. For this embedding, we evaluate a novel tradeoff gain among classification, computation, and dimensionality reduction (storage) for supervised learning. It is shown that compared to the state-of-the-arts tensor embedding methods, TTNPE achieves superior tradeoff in classification, computation, and dimensionality reduction in MNIST handwritten digits, Weizmann face datasets, and financial market datasets." ] }
We denote the left unfolding operation @cite_2 of @math as the matrix @math , in which the last mode of the tensor indexes the columns and the remaining modes jointly index the rows. Similarly, the right unfolding is denoted by @math . The vectorization of a tensor is denoted by @math . The F-norm of a tensor can then be defined as the @math -norm of its vectorization, i.e., @math , which treats all elements @math as a whole and preserves the overall spatial relations between them. In contrast, the @math -norm of a tensor, computed as @math , treats each element separately and may therefore lose spatial information.
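A small numpy sketch of these reshaping operations and the two norms may help; the index convention used here (row-major grouping of modes) is one common choice and may differ from the cited papers'.

```python
import numpy as np

def left_unfold(T):
    """Group all modes but the last into rows; the last mode indexes the columns."""
    return T.reshape(-1, T.shape[-1])

def right_unfold(T):
    """The first mode indexes the rows; the remaining modes are grouped into columns."""
    return T.reshape(T.shape[0], -1)

T = np.arange(24, dtype=float).reshape(2, 3, 4)
print(left_unfold(T).shape)    # (6, 4)
print(right_unfold(T).shape)   # (2, 12)

# Frobenius norm of a tensor = l2 norm of its vectorization;
# the elementwise l1 norm treats every entry independently.
fro = np.linalg.norm(T.ravel(), 2)
l1  = np.linalg.norm(T.ravel(), 1)
print(fro, l1, np.sqrt((T ** 2).sum()))   # fro equals the square root of the sum of squares
```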
{ "cite_N": [ "@cite_2" ], "mid": [ "2043571470" ], "abstract": [ "Abstract Operations with tensors, or multiway arrays, have become increasingly prevalent in recent years. Traditionally, tensors are represented or decomposed as a sum of rank-1 outer products using either the CANDECOMP PARAFAC (CP) or the Tucker models, or some variation thereof. Such decompositions are motivated by specific applications where the goal is to find an approximate such representation for a given multiway array. The specifics of the approximate representation (such as how many terms to use in the sum, orthogonality constraints, etc.) depend on the application. In this paper, we explore an alternate representation of tensors which shows promise with respect to the tensor approximation problem. Reminiscent of matrix factorizations, we present a new factorization of a tensor as a product of tensors. To derive the new factorization, we define a closed multiplication operation between tensors. A major motivation for considering this new type of tensor multiplication is to devise new types of factorizations for tensors which can then be used in applications. Specifically, this new multiplication allows us to introduce concepts such as tensor transpose, inverse, and identity, which lead to the notion of an orthogonal tensor. The multiplication also gives rise to a linear operator, and the null space of the resulting operator is identified. We extend the concept of outer products of vectors to outer products of matrices. All derivations are presented for third-order tensors. However, they can be easily extended to the order-p ( p > 3 ) case. We conclude with an application in image deblurring." ] }
The tensor-train (TT) decomposition is designed for large-scale data analysis @cite_3 . It admits a simpler implementation than the tree-type decomposition algorithms @cite_18 , which were developed to reduce storage complexity and avoid local minima.
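For reference, the classical TT-SVD procedure computes the tensor-train cores by a sweep of truncated SVDs over successive unfoldings. The numpy sketch below is a plain textbook illustration, not the implementation used by any of the methods discussed here.

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Decompose T into TT cores G_k of shape (r_{k-1}, n_k, r_k) by sequential truncated SVDs."""
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int((s > eps * s[0]).sum()))               # discard negligible singular values
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))    # core G_k
        M = (np.diag(s[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))              # last core
    return cores

# Sanity check: contract the cores back together and compare with the original tensor.
T = np.random.rand(3, 4, 5, 6)
cores = tt_svd(T)
full = cores[0]
for G in cores[1:]:
    full = np.tensordot(full, G, axes=([full.ndim - 1], [0]))
full = full.reshape(T.shape)
print([G.shape for G in cores], np.linalg.norm(full - T))
```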
{ "cite_N": [ "@cite_18", "@cite_3" ], "mid": [ "2962881496", "2963465654" ], "abstract": [ "Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite its advantage, we face two crucial limitations when we apply the TT decomposition to machine learning problems: the lack of statistical theory and of scalable algorithms. In this paper, we address the limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop a randomized optimization method, in which the time complexity is as efficient as the space complexity is. In experiments, we numerically confirm the derived bounds and empirically demonstrate the performance of our method with a real higher-order tensor.", "In this paper, we propose a tensor train neighborhood preserving embedding (TTNPE) to embed multidimensional tensor data into low-dimensional tensor subspace. Novel approaches to solve the optimization problem in TTNPE are proposed. For this embedding, we evaluate a novel tradeoff gain among classification, computation, and dimensionality reduction (storage) for supervised learning. It is shown that compared to the state-of-the-arts tensor embedding methods, TTNPE achieves superior tradeoff in classification, computation, and dimensionality reduction in MNIST handwritten digits, Weizmann face datasets, and financial market datasets." ] }
In most applications, to achieve computational efficiency and reduce information redundancy, the tensor ranks are restricted to be smaller than the size of the corresponding tensor mode, i.e., @math for @math @cite_2 .
{ "cite_N": [ "@cite_2" ], "mid": [ "2132267493" ], "abstract": [ "There has been continued interest in seeking a theorem describing optimal low-rank approximations to tensors of order 3 or higher that parallels the Eckart-Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank- @math approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of dimensions, orders, and ranks, regardless of the choice of norm (or even Bregman divergence). Moreover, we show that in many instances these counterexamples have positive volume: they cannot be regarded as isolated phenomena. In one extreme case, we exhibit a tensor space in which no rank-3 tensor has an optimal rank-2 approximation. The notable exceptions to this misbehavior are rank-1 tensors and order-2 tensors (i.e., matrices). In a more positive spirit, we propose a natural way of overcoming the ill-posedness of the low-rank approximation problem, by using weak solutions when true solutions do not exist. For this to work, it is necessary to characterize the set of weak solutions, and we do this in the case of rank 2, order 3 (in arbitrary dimensions). In our work we emphasize the importance of closely studying concrete low-dimensional examples as a first step toward more general results. To this end, we present a detailed analysis of equivalence classes of @math tensors, and we develop methods for extending results upward to higher orders and dimensions. Finally, we link our work to existing studies of tensors from an algebraic geometric point of view. The rank of a tensor can in theory be given a semialgebraic description; in other words, it can be determined by a system of polynomial inequalities. We study some of these polynomials in cases of interest to us; in particular, we make extensive use of the hyperdeterminant @math on @math ." ] }
1908.04924
2967111226
Locality preserving projections (LPP) are a classical dimensionality reduction method based on data graph information. However, LPP is still sensitive to extreme outliers. LPP, which targets vectorial data, may lose structural information when applied to multidimensional data. Besides, it assumes the dimension of the data to be smaller than the number of instances, which is not suitable for high-dimensional data. For high-dimensional data analysis, the tensor-train decomposition has been shown to capture spatial relations efficiently and effectively. Thus, we propose a tensor-train parameterization for ultra dimensionality reduction (TTPUDR) in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model. The manifold optimization technique is utilized to solve the new model. The performance of TTPUDR is assessed on classification problems, and TTPUDR significantly outperforms previous methods as well as several state-of-the-art methods.
Given a set of vectorial training data @math and an affinity matrix of locality similarity @math , LPP seeks a linear projection @math from @math to @math by solving the following optimization problem, which minimizes the locality preserving criterion as its objective function. The widely used affinity @math is based on the neighborhood graph of the data, defined as follows @cite_5 , where @math is a positive parameter and @math denotes the @math -nearest neighborhood of @math .
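As an illustration of the neighborhood affinity construction described above, the following sketch builds a heat-kernel affinity on a k-nearest-neighbour graph. Since the exact formula is hidden behind the @math placeholders, the kernel width t and the symmetric-neighborhood rule used here are assumptions for illustration only, not necessarily the precise choices of the cited work.

```python
import numpy as np

def knn_heat_kernel_affinity(X, k=5, t=1.0):
    """Heat-kernel affinity on a k-nearest-neighbour graph (illustrative sketch).

    X : (n_samples, n_features) array of vectorial training data.
    Returns a symmetric (n_samples, n_samples) affinity matrix W.
    """
    n = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T      # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(D2[i])[1:k + 1]                  # k nearest neighbours of sample i
        W[i, nn] = np.exp(-D2[i, nn] / t)                # heat-kernel weights
    return np.maximum(W, W.T)                            # symmetrise the graph
```

The resulting W could then be plugged into the locality preserving criterion (e.g., by forming the corresponding graph Laplacian) to obtain the projection.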
{ "cite_N": [ "@cite_5" ], "mid": [ "2089035607" ], "abstract": [ "Locality preserving projection (LPP) is a manifold learning method widely used in pattern recognition and computer vision. The face recognition application of LPP is known to suffer from a number of problems including the small sample size (SSS) problem, the fact that it might produce statistically identical transform results for neighboring samples, and that its classification performance seems to be heavily influenced by its parameters. In this paper, we propose three novel solution schemes for LPP. Experimental results also show that the proposed LPP solution scheme is able to classify much more accurately than conventional LPP and to obtain a classification performance that is only little influenced by the definition of neighbor samples." ] }
1908.04924
2967111226
Locality preserving projections (LPP) are a classical dimensionality reduction method based on data graph information. However, LPP is still sensitive to extreme outliers. LPP, which targets vectorial data, may lose structural information when applied to multidimensional data. Besides, it assumes the dimension of the data to be smaller than the number of instances, which is not suitable for high-dimensional data. For high-dimensional data analysis, the tensor-train decomposition has been shown to capture spatial relations efficiently and effectively. Thus, we propose a tensor-train parameterization for ultra dimensionality reduction (TTPUDR) in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model. The manifold optimization technique is utilized to solve the new model. The performance of TTPUDR is assessed on classification problems, and TTPUDR significantly outperforms previous methods as well as several state-of-the-art methods.
LPP is a classical dimensionality reduction method and has been applied in many real cases, for example, in computer vision @cite_8 . It captures the local information among the data points and is less sensitive to outliers than PCA. However, we observe the following shortcomings of LPP: LPP is designed for vectorial data. When it is applied to multi-dimensional data, i.e., tensors, there is a potential loss of spatial information. The existing tensor locality preserving projections, i.e., the Tucker LPP (TLPP) @cite_17 , embed the tensor space with a high storage complexity of @math . Theoretically, LPP cannot work for cases where the data dimension is greater than the number of samples. Although this can be avoided by a trick in which one first projects the data onto its PCA subspace and then applies LPP in this subspace (http://www.cad.zju.edu.cn/home/dengcai/Data/code/LPP.m), this would not work well for ultra-dimensional data with a fairly large dataset, as the singular value decomposition (SVD) becomes a bottleneck.
{ "cite_N": [ "@cite_17", "@cite_8" ], "mid": [ "2154872931", "2109531142" ], "abstract": [ "Many problems in information processing involve some form of dimensionality reduction. In this paper, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. This is borne out by illustrative examples on some high dimensional data sets.", "Reducing the dimensionality of data without losing intrinsic information is an important preprocessing step in high-dimensional data analysis. Fisher discriminant analysis (FDA) is a traditional technique for supervised dimensionality reduction, but it tends to give undesired results if samples in a class are multimodal. An unsupervised dimensionality reduction method called locality-preserving projection (LPP) can work well with multimodal data due to its locality preserving property. However, since LPP does not take the label information into account, it is not necessarily useful in supervised learning scenarios. In this paper, we propose a new linear supervised dimensionality reduction method called local Fisher discriminant analysis (LFDA), which effectively combines the ideas of FDA and LPP. LFDA has an analytic form of the embedding transformation and the solution can be easily computed just by solving a generalized eigenvalue problem. We demonstrate the practical usefulness and high scalability of the LFDA method in data visualization and classification tasks through extensive simulation studies. We also show that LFDA can be extended to non-linear dimensionality reduction scenarios by applying the kernel trick." ] }
1908.04924
2967111226
Locality preserving projections (LPP) are a classical dimensionality reduction method based on data graph information. However, LPP is still sensitive to extreme outliers. LPP, which targets vectorial data, may lose structural information when applied to multidimensional data. Besides, it assumes the dimension of the data to be smaller than the number of instances, which is not suitable for high-dimensional data. For high-dimensional data analysis, the tensor-train decomposition has been shown to capture spatial relations efficiently and effectively. Thus, we propose a tensor-train parameterization for ultra dimensionality reduction (TTPUDR) in which the traditional LPP mapping is tensorized in terms of tensor-trains and the LPP objective is replaced with the Frobenius norm to increase the robustness of the model. The manifold optimization technique is utilized to solve the new model. The performance of TTPUDR is assessed on classification problems, and TTPUDR significantly outperforms previous methods as well as several state-of-the-art methods.
The TT decomposition, with a smaller storage complexity of @math , has recently been applied in the tensor train neighborhood preserving embedding (TTNPE) @cite_2 @cite_0 . Nevertheless, the actual algorithm in TTNPE is only implemented as a TT approximation to the pseudo PCA. To the best of our knowledge, there is no existing dimensionality reduction method that directly processes tensor data with the lower storage complexity of the TT decomposition.
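To make the storage argument concrete, the following small sketch (an illustration, not code from the cited works) counts the parameters of a tensor-train representation, which grow linearly in the number of modes instead of exponentially as for the full tensor.

```python
def tt_num_parameters(mode_sizes, ranks):
    """Parameters of a TT representation with cores of shape (r_{k-1}, n_k, r_k).

    mode_sizes : [n_1, ..., n_d]      sizes of the tensor modes
    ranks      : [r_0, r_1, ..., r_d] TT ranks with r_0 = r_d = 1
    """
    assert len(ranks) == len(mode_sizes) + 1 and ranks[0] == ranks[-1] == 1
    return sum(ranks[k] * n * ranks[k + 1] for k, n in enumerate(mode_sizes))

# a 10x10x10x10 tensor stored fully needs 10**4 values;
# with all internal TT ranks equal to 3 it needs only:
print(tt_num_parameters([10, 10, 10, 10], [1, 3, 3, 3, 1]))  # 240
```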
{ "cite_N": [ "@cite_0", "@cite_2" ], "mid": [ "2963465654", "2962881496" ], "abstract": [ "In this paper, we propose a tensor train neighborhood preserving embedding (TTNPE) to embed multidimensional tensor data into low-dimensional tensor subspace. Novel approaches to solve the optimization problem in TTNPE are proposed. For this embedding, we evaluate a novel tradeoff gain among classification, computation, and dimensionality reduction (storage) for supervised learning. It is shown that compared to the state-of-the-arts tensor embedding methods, TTNPE achieves superior tradeoff in classification, computation, and dimensionality reduction in MNIST handwritten digits, Weizmann face datasets, and financial market datasets.", "Tensor train (TT) decomposition provides a space-efficient representation for higher-order tensors. Despite its advantage, we face two crucial limitations when we apply the TT decomposition to machine learning problems: the lack of statistical theory and of scalable algorithms. In this paper, we address the limitations. First, we introduce a convex relaxation of the TT decomposition problem and derive its error bound for the tensor completion task. Next, we develop a randomized optimization method, in which the time complexity is as efficient as the space complexity is. In experiments, we numerically confirm the derived bounds and empirically demonstrate the performance of our method with a real higher-order tensor." ] }
1908.05085
2968619508
The use of fingerprinting localization techniques in outdoor IoT settings has started to gain popularity over the recent years. Communication signals of Low Power Wide Area Networks (LPWAN), such as LoRaWAN, are used to estimate the location of low power mobile devices. In this study, a publicly available dataset of LoRaWAN RSSI measurements is utilized to compare different machine learning methods and their accuracy in producing location estimates. The tested methods are: the k Nearest Neighbours method, the Extra Trees method and a neural network approach using a Multilayer Perceptron. To facilitate the reproducibility of tests and the comparability of results, the code and the train validation test split of the dataset used in this study have become available. The neural network approach was the method with the highest accuracy, achieving a mean error of 358 meters and a median error of 204 meters.
Fingerprinting has been a broadly studied method of indoor positioning @cite_12 . In particular, RSSI has been the main type of signal that is used @cite_12 . It has been only a few years since fingerprinting techniques were transferred to the outdoor world, and in particular to LPWAN settings. In a recent study, @cite_1 have made three fingerprinting datasets of Low Power Wide Area Networks publicly available. One of these datasets contains LoRaWAN RSSI measurements collected in the urban area of the city of Antwerp, in Belgium. The motivation for making the datasets publicly available was to provide the global research community with a benchmark tool to evaluate fingerprinting algorithms for LPWAN standards. In that work, the utilization of the presented LoRaWAN dataset by a k Nearest Neighbours fingerprinting method was exemplified, achieving a mean localization error of 398 meters. To the best of our knowledge, there is no follow-up study so far that utilizes this dataset.
{ "cite_N": [ "@cite_1", "@cite_12" ], "mid": [ "2968588162", "2791401550" ], "abstract": [ "Fingerprinting techniques, which are a common method for indoor localization, have been recently applied with success into outdoor settings. Particularly, the communication signals of Low Power Wide Area Networks (LPWAN) such as Sigfox, have been used for localization. In this rather recent field of study, not many publicly available datasets, which would facilitate the consistent comparison of different positioning systems, exist so far. In the current study, a published dataset of RSSI measurements on a Sigfox network deployed in Antwerp, Belgium is used to analyse the appropriate selection of preprocessing steps and to tune the hyperparameters of a kNN fingerprinting method. Initially, the tuning of hyperparameter k for a variety of distance metrics, and the selection of efficient data transformation schemes, proposed by relevant works, is presented. In addition, accuracy improvements are achieved in this study, by a detailed examination of the appropriate adjustment of the parameters of the data transformation schemes tested, and of the handling of out of range values. With the appropriate tuning of these factors, the achieved mean localization error was 298 meters, and the median error was 109 meters. To facilitate the reproducibility of tests and comparability of results, the code and train validation test split used in this study are available.", "Because of the increasing relevance of the Internet of Things and location-based services, researchers are evaluating wireless positioning techniques, such as fingerprinting, on Low Power Wide Area Network (LPWAN) communication. In order to evaluate fingerprinting in large outdoor environments, extensive, time-consuming measurement campaigns need to be conducted to create useful datasets. This paper presents three LPWAN datasets which are collected in large-scale urban and rural areas. The goal is to provide the research community with a tool to evaluate fingerprinting algorithms in large outdoor environments. During a period of three months, numerous mobile devices periodically obtained location data via a GPS receiver which was transmitted via a Sigfox or LoRaWAN message. Together with network information, this location data is stored in the appropriate LPWAN dataset. The first results of our basic fingerprinting implementation, which is also clarified in this paper, indicate a mean location estimation error of 214.58 m for the rural Sigfox dataset, 688.97 m for the urban Sigfox dataset and 398.40 m for the urban LoRaWAN dataset. In the future, we will enlarge our current datasets and use them to evaluate and optimize our fingerprinting methods. Also, we intend to collect additional datasets for Sigfox, LoRaWAN and NB-IoT." ] }
1908.05085
2968619508
The use of fingerprinting localization techniques in outdoor IoT settings has started to gain popularity over the recent years. Communication signals of Low Power Wide Area Networks (LPWAN), such as LoRaWAN, are used to estimate the location of low power mobile devices. In this study, a publicly available dataset of LoRaWAN RSSI measurements is utilized to compare different machine learning methods and their accuracy in producing location estimates. The tested methods are: the k Nearest Neighbours method, the Extra Trees method and a neural network approach using a Multilayer Perceptron. To facilitate the reproducibility of tests and the comparability of results, the code and the train validation test split of the dataset used in this study have become available. The neural network approach was the method with the highest accuracy, achieving a mean error of 358 meters and a median error of 204 meters.
@cite_0 have experimentally evaluated RSS and TDoA ranging positioning methods using a LoRaWAN network, reporting median errors of 1250 and 200 meters for RSS and TDoA, respectively. Other works @cite_2 , @cite_5 have focused on rather specific settings over which they evaluate positioning methods. These works present experiments in car parking settings, testing in confined areas with a placement of base stations adapted to their use case, and report a low error in the range of a few tens of meters.
{ "cite_N": [ "@cite_0", "@cite_5", "@cite_2" ], "mid": [ "2901105864", "2900294995", "2084503286" ], "abstract": [ "This paper experimentally compares the positioning accuracy of TDoA-based and RSS-based localization in a public outdoor LoRa network in the Netherlands. The performance of different Received Signal Strength (RSS)-based approaches (proximity, centroid, map matching,…) is compared with Time-Difference-of-Arrival (TDoA) performance. The number of RSS and TDoA location updates and the positioning accuracy per spreading factor (SF) is assessed, allowing to select the optimal SF choice for the network. A road mapping filter is applied to the raw location estimates for the best algorithms and SFs. RSS-based approaches have median and maximal errors that are limited to 1000 m and 2000 m respectively, using a road mapping filter. Using the same filter, TDoA-based approaches deliver median and maximal errors in the order of 150 m and 350 m respectively. However, the number of location updates per time unit using SF7 is around 10 times higher for RSS algorithms than for the TDoA algorithm.", "Positioning is an essential element in most Internet of Things (IoT) applications. Global Positioning System (GPS) chips have high cost and power consumption, making it unsuitable for long-range (LoRa) and low-power IoT devices. Alternatively, low-power wide-area (LPWA) signals can be used for simultaneous positioning and communication. We summarize previous studies related to LoRa signal-based positioning systems, including those addressing proximity, a path loss model, time difference of arrival (TDoA), and fingerprint positioning methods. We propose a LoRa signal-based positioning method that uses a fingerprint algorithm instead of a received signal strength indicator (RSSI) proximity or TDoA method. The main objective of this study was to evaluate the accuracy and usability of the fingerprint algorithm for large areas in the real world. We estimated the locations using probabilistic means based on three different algorithms that use interpolated fingerprint RSSI maps. The average accuracy of the three proposed algorithms in our experiments was 28.8 m. Our method also reduced the battery consumption significantly compared with that of existing GPS-based positioning methods.", "A robust and accurate positioning solution is required to increase the safety in GPS-denied environments. Although there is a lot of available research in this area, little has been done for confined environments such as tunnels. Therefore, we organized a measurement campaign in a basement tunnel of Linkoping university, in which we obtained ultra-wideband (UWB) complex impulse responses for line-of-sight (LOS), and three non-LOS (NLOS) scenarios. This paper is focused on time-of-arrival (TOA) ranging since this technique can provide the most accurate range estimates, which are required for range-based positioning. We describe the measurement setup and procedure, select the threshold for TOA estimation, analyze the channel propagation parameters obtained from the power delay profile (PDP), and provide statistical model for ranging. According to our results, the rise-time should be used for NLOS identification, and the maximum excess delay should be used for NLOS error mitigation. However, the NLOS condition cannot be perfectly determined, so the distance likelihood has to be represented in a Gaussian mixture form. We also compared these results with measurements from a mine tunnel, and found a similar behavior." ] }
1908.05085
2968619508
The use of fingerprinting localization techniques in outdoor IoT settings has started to gain popularity over the recent years. Communication signals of Low Power Wide Area Networks (LPWAN), such as LoRaWAN, are used to estimate the location of low power mobile devices. In this study, a publicly available dataset of LoRaWAN RSSI measurements is utilized to compare different machine learning methods and their accuracy in producing location estimates. The tested methods are: the k Nearest Neighbours method, the Extra Trees method and a neural network approach using a Multilayer Perceptron. To facilitate the reproducibility of tests and the comparability of results, the code and the train validation test split of the dataset used in this study have become available. The neural network approach was the method with the highest accuracy, achieving a mean error of 358 meters and a median error of 204 meters.
General-purpose fingerprinting methods in LPWAN settings have been presented and discussed in recent works @cite_6 , @cite_7 . @cite_6 have utilized a Sigfox dataset to apply a kNN algorithm and selected the best-performing ones among a variety of distance metrics and data representations, resulting in a mean positioning error of 340 meters. In addition, in our previous work @cite_7 , we have gone further in analysing the same Sigfox dataset by tuning relevant parameters of the discussed preprocessing schemes, reducing the mean error to 298 meters. As was done in our previous work @cite_7 , in order to facilitate the comparability of results, we share the train/validation/test sets used in the current work as well.
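A minimal sketch of the kNN fingerprinting idea referred to above: the position of a new message is estimated from the k training fingerprints whose RSSI vectors are closest to it. The Euclidean distance and the plain averaging used here are simplifying assumptions, not the exact metric or preprocessing chosen in the cited works.

```python
import numpy as np

def knn_fingerprint_locate(train_rssi, train_coords, query_rssi, k=5):
    """Estimate a position as the mean location of the k closest fingerprints.

    train_rssi   : (n_samples, n_gateways) RSSI fingerprints; gateways that did
                   not receive a message are assumed to be filled with a
                   sentinel value (e.g. -200 dBm) beforehand.
    train_coords : (n_samples, 2) latitude/longitude of each fingerprint.
    query_rssi   : (n_gateways,) RSSI vector of the message to locate.
    """
    d = np.linalg.norm(train_rssi - query_rssi, axis=1)   # distance in signal space
    nearest = np.argsort(d)[:k]                           # indices of the k closest
    return train_coords[nearest].mean(axis=0)             # averaged position estimate
```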
{ "cite_N": [ "@cite_7", "@cite_6" ], "mid": [ "2968588162", "2791401550" ], "abstract": [ "Fingerprinting techniques, which are a common method for indoor localization, have been recently applied with success into outdoor settings. Particularly, the communication signals of Low Power Wide Area Networks (LPWAN) such as Sigfox, have been used for localization. In this rather recent field of study, not many publicly available datasets, which would facilitate the consistent comparison of different positioning systems, exist so far. In the current study, a published dataset of RSSI measurements on a Sigfox network deployed in Antwerp, Belgium is used to analyse the appropriate selection of preprocessing steps and to tune the hyperparameters of a kNN fingerprinting method. Initially, the tuning of hyperparameter k for a variety of distance metrics, and the selection of efficient data transformation schemes, proposed by relevant works, is presented. In addition, accuracy improvements are achieved in this study, by a detailed examination of the appropriate adjustment of the parameters of the data transformation schemes tested, and of the handling of out of range values. With the appropriate tuning of these factors, the achieved mean localization error was 298 meters, and the median error was 109 meters. To facilitate the reproducibility of tests and comparability of results, the code and train validation test split used in this study are available.", "Because of the increasing relevance of the Internet of Things and location-based services, researchers are evaluating wireless positioning techniques, such as fingerprinting, on Low Power Wide Area Network (LPWAN) communication. In order to evaluate fingerprinting in large outdoor environments, extensive, time-consuming measurement campaigns need to be conducted to create useful datasets. This paper presents three LPWAN datasets which are collected in large-scale urban and rural areas. The goal is to provide the research community with a tool to evaluate fingerprinting algorithms in large outdoor environments. During a period of three months, numerous mobile devices periodically obtained location data via a GPS receiver which was transmitted via a Sigfox or LoRaWAN message. Together with network information, this location data is stored in the appropriate LPWAN dataset. The first results of our basic fingerprinting implementation, which is also clarified in this paper, indicate a mean location estimation error of 214.58 m for the rural Sigfox dataset, 688.97 m for the urban Sigfox dataset and 398.40 m for the urban LoRaWAN dataset. In the future, we will enlarge our current datasets and use them to evaluate and optimize our fingerprinting methods. Also, we intend to collect additional datasets for Sigfox, LoRaWAN and NB-IoT." ] }
1908.04465
2967650664
We explore the challenges and opportunities of shifting industrial control software from dedicated hardware to bare-metal servers or cloud computing platforms using off the shelf technologies. In particular, we demonstrate that executing time-critical applications on cloud platforms is viable based on a series of dedicated latency tests targeting relevant real-time configurations.
Containerizing control applications has been discussed in recent literature. @cite_6 , for instance, presented the concept of containerization of full control applications as a means to decouple the hardware and software life-cycles of an industrial automation system. Due to the performance overhead of hardware virtualization, the authors state that OS-level virtualization is a suitable technique to cope with automation system timing demands. They propose two approaches to migrate a control application into containers on top of a patched real-time Linux-based operating system: (i) a given system is decomposed into subsystems, where a set of sub-units performs a localized computation that is then actuated through a global decision maker; (ii) devices are defined as a set of processes, where each process is an isolated standalone solution with a shared communication stack. Based on this, systems are divided into specialized modules, allowing a granular development and update strategy. The authors demonstrate the feasibility of real-time applications in conjunction with containerization, even though they express concern about the maturity of the presented technical solution.
{ "cite_N": [ "@cite_6" ], "mid": [ "2487718634" ], "abstract": [ "Virtualization is entering the world of real-time embedded systems. Industrial automation systems, in particular, can benefit from what virtualization has to offer: flexible consolidation of applications on different hardware types and scales, extending the life-time of legacy code or decoupling software and hardware lifecycles. However, such systems require a light-weight virtualization technology in order to be able to maintain real-time behavior while dealing with real-time data. This paper sets out to investigate the applicability of container-based OS-level virtualization technology to industrial automation systems. To this end, we provide insights into the capabilities of containers to achieve flexible consolidation and easy migration of industrial automation applications as well as into the container technology readiness with respect to the fundamental requirement of industrial automation systems, namely performing timely control actions based on real-time data. Moreover, we provide an empirical study of the performance overhead introduced by containers based on micro-benchmarks that capture the characteristics of targeted industrial automation applications." ] }
1908.04465
2967650664
We explore the challenges and opportunities of shifting industrial control software from dedicated hardware to bare-metal servers or cloud computing platforms using off the shelf technologies. In particular, we demonstrate that executing time-critical applications on cloud platforms is viable based on a series of dedicated latency tests targeting relevant real-time configurations.
Goldschmidt and Hauck-Stattelmann in @cite_9 perform benchmark tests on modularized industrial Programmable Logic Controller (PLC) applications. Their analysis examines the impact of container-based virtualization on real-time constraints. As there is no solution for legacy code migration of PLCs, the migration to application containers could extend a system's lifetime beyond the physical device's limits. Even though tests showed worst-case latencies of the order of @math on Intel-based hosts, the authors argue that the container engines may be stripped down and optimized for real-time execution. In a follow-up work @cite_28 , a possible multi-purpose architecture was described and tested in a real-world use case. The results show worst-case latencies in the range of @math for a Raspberry Pi single-board computer, making the solution viable for cycle times in the range of @math to @math . The authors state that topics such as memory overhead, containers' restricted access and problems due to technology immaturity are still to be investigated.
{ "cite_N": [ "@cite_28", "@cite_9" ], "mid": [ "2534821673", "2792724737" ], "abstract": [ "Cyber-physical systems and the Internet-of-Things are getting more and more traction in different application areas. Boosted by initiatives such as Industrie 4.0 in Germany or the Industrial Internet Consortium in the US, they are enablers for innovation in industrial automation. To provide the advanced flexibility in production envisioned for future automation systems, Programmable Logic Controllers (PLCs), as one of their main building blocks, also need to become more flexible. However, the conservative nature of this domain prohibits changes in the controller architecture impacting the installed base. Currently there exist various approaches that evolve control architectures to the next level, but none of them address flexible function deployment at the same time with legacy support. In this paper, we present a an architecture for a multi-purpose controller that is inspired by the virtualization trend in cloud systems which moves from heavyweight virtual machines to lightweight containers solutions such as LXC or Docker. Our solution includes the support for multiple PLC execution engines and adds support for the emulation of legacy engines as well. We evaluate this architecture by executing performance measurements that analyze the impact of container technologies to the real-time aspects of PLC engines.", "Abstract Cyber-physical systems and the Internet-of-Things are getting more and more traction in different application areas. Boosted by initiatives such as Industrie 4.0 in Germany or the Industrial Internet Consortium in the US, they are enablers for innovation in industrial automation. To provide the advanced flexibility in production envisioned for future automation systems, Programmable Logic Controllers (PLCs), as one of their main building blocks, also need to become more flexible. However, the conservative nature of this domain prohibits changes in the controller architecture impacting the installed base. Currently there exist various approaches that evolve control architectures to the next level, but none of them address flexible function deployment at the same time with legacy support. In this paper, we present an architecture for a multi-purpose controller that is inspired by the virtualization trend in cloud systems which moves from heavyweight virtual machines to lightweight containers solutions such as LXC or Docker. Our solution includes the support for multiple PLC execution engines and adds support for the emulation of legacy engines as well. We evaluate this architecture by executing performance measurements that analyze the impact of container technologies to the real-time aspects of PLC engines." ] }
1908.04465
2967650664
We explore the challenges and opportunities of shifting industrial control software from dedicated hardware to bare-metal servers or cloud computing platforms using off the shelf technologies. In particular, we demonstrate that executing time-critical applications on cloud platforms is viable based on a series of dedicated latency tests targeting relevant real-time configurations.
@cite_0 address architectural details not discussed in @cite_9 and @cite_28 . These additions include the concrete run-time environment and how deterministic communication between containers and field devices may be achieved in a novel container-based architecture. They propose a Linux-based solution as the host operating system, covering both the single-kernel, preemption-focused PREEMPT-RT patch and the co-kernel oriented Xenomai. With the latter, the approach exhibits better predictability, although it suffers from security concerns introduced by the exposed system files required by Xenomai. For this reason, they suggest limiting its application to safety-critical code execution. They analyze and discuss inter-process messaging in detail, focusing on the specific properties needed in real-time applications. Finally, they implement an orchestration run-time managing intra-container communication and show that task times as low as @math are possible.
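As a rough illustration of the kind of latency test mentioned above, the following sketch measures the wake-up latency of a periodic task in the spirit of cyclictest-style benchmarks. It is only a simplified, assumed setup: a real evaluation would pin the task to a CPU, use a real-time scheduling class (e.g. SCHED_FIFO) and run on an RT-patched kernel as discussed in the cited works.

```python
import time

def worst_case_wakeup_latency_us(period_ms=1.0, iterations=5000):
    """Measure how late a periodic task wakes up relative to its schedule."""
    period = period_ms / 1000.0
    next_wake = time.monotonic() + period
    worst = 0.0
    for _ in range(iterations):
        delay = next_wake - time.monotonic()
        if delay > 0:
            time.sleep(delay)                      # wait for the next activation
        late = time.monotonic() - next_wake        # wake-up latency of this cycle
        worst = max(worst, late)
        next_wake += period
    return worst * 1e6                             # worst-case latency in microseconds

print(worst_case_wakeup_latency_us())
```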
{ "cite_N": [ "@cite_0", "@cite_9", "@cite_28" ], "mid": [ "2131355481", "2010670047", "1503814339" ], "abstract": [ "Management, allocation and scheduling of heterogeneous resources for complex distributed real-time applications is a challenging problem. Timing constraints of applications may be fulfilled by the proper use of real-time scheduling policies, admission control and enforcement of timing constraints. However, it is not easy to design basic infrastructure services that allow for easy access to the allocation of multiple heterogeneous resources in a distributed environment. In this paper, we present a middleware for providing distributed soft real-time applications with a uniform API for reserving heterogeneous resources with real-time scheduling capabilities in a distributed environment. The architecture relies on standard POSIX OS facilities, such as time management and standard TCP IP networking services, and it is designed around CORBA, in order to facilitate modularity, flexibility and portability of the applications using it. However, real-time scheduling is supported by proper extensions at the kernel-level, plugged within the framework by means of dedicated resource managers. Our current implementation on Linux supports the reservation of the CPU, disk and network bandwidth. However, additional resource managers supporting alternative real-time schedulers for these resources, as well as additional types of resources, may be easily added. We present experimental results gathered on both synthetic applications and a real multimedia video streaming case study, showing the advantages deriving from the use of the proposed middleware. Finally, overhead figures are reported, showing the sustainability of the approach for a wide class of complex, distributed, soft real-time applications.", "Real-time communication (RTC) applications such as VoIP, video conferencing, and online gaming are flourishing. To adapt and deliver good performance, these applications require accurate estimations of short-term network performance metrics, e.g., loss rate, one-way delay, and throughput. However, the wide variation in mobile cellular network performance makes running RTC applications on these networks problematic. To address this issue, various performance adaptation techniques have been proposed, but one common problem of such techniques is that they only adjust application behavior reactively after performance degradation is visible. Thus, proactive adaptation based on accurate short-term, fine-grained network performance prediction can be a preferred alternative that benefits RTC applications. In this study, we show that forecasting the short-term performance in cellular networks is possible in part due to the channel estimation scheme on the device and the radio resource scheduling algorithm at the base station. We develop a system interface called PROTEUS, which passively collects current network performance, such as throughput, loss, and one-way delay, and then uses regression trees to forecast future network performance. PROTEUS successfully predicts the occurrence of packet loss within a 0.5s time window for 98 of the time windows and the occurrence of long one-way delay for 97 of the time windows. We also demonstrate how PROTEUS can be integrated with RTC applications to significantly improve the perceptual quality. 
In particular, we increase the peak signal-to-noise ratio of a video conferencing application by up to 15dB and reduce the perceptual delay in a gaming application by up to 4s.", "The cloud computing infrastructure relies on virtualized servers that provide isolation across guest OS's through sand boxing. This isolation was demonstrated to be imperfect in past work which exploited hardware level information leakages to gain access to sensitive information across co-located virtual machines (VMs). In response virtualization companies and cloud services providers have disabled features such as deduplication to prevent such attacks. In this work, we introduce a fine-grain cross-core cache attack that exploits access time variations on the last level cache. The attack exploits huge pages to work across VM boundaries without requiring deduplication. No configuration changes on the victim OS are needed, making the attack quite viable. Furthermore, only machine co-location is required, while the target and victim OS can still reside on different cores of the machine. Our new attack is a variation of the prime and probe cache attack whose applicability at the time is limited to L1 cache. In contrast, our attack works in the spirit of the flush and reload attack targeting the shared L3 cache instead. Indeed, by adjusting the huge page size our attack can be customized to work virtually at any cache level size. We demonstrate the viability of the attack by targeting an Open SSL1.0.1f implementation of AES. The attack recovers AES keys in the cross-VM setting on Xen 4.1 with deduplication disabled, being only slightly less efficient than the flush and reload attack. Given that huge pages are a standard feature enabled in the memory management unit of OS's and that besides co-location no additional assumptions are needed, the attack we present poses a significant risk to existing cloud servers." ] }
1908.04574
2968985217
The domain name resolution into IP addresses can significantly delay connection establishments on the web. Moreover, the common use of recursive DNS resolvers presents a privacy risk as they can closely monitor the user's browsing activities. In this paper, we present a novel HTTP response header allowing web servers to provide their clients with relevant DNS records. Our results indicate that this resolver-less DNS mechanism allows user agents to save the DNS lookup time for subsequent connection establishments. We find that this proposal saves at least 80ms per DNS lookup for the one percent of users having the longest round-trip times towards their recursive resolver. Furthermore, our proposal decreases the number of DNS lookups and thus improves the privacy posture of the user towards the used recursive resolver. Comparing the security guarantees of traditional DNS to our proposal, we find that resolver-less DNS achieves at least the same security properties. In detail, it even improves the user's resilience against censorship through tampered DNS resolvers.
The DNS Anonymity Service combines a broadcast mechanism for popular DNS records with an anonymity network to conduct additional DNS lookups @cite_15 . Unlike our proposal, the DNS Anonymity Service causes additional network traffic for downloading the broadcasted DNS records and suffers additional network latency when the client resolves hostnames via the anonymity network. In total, the performance gains of this clean-slate approach are vague as they depend on the user's browsing behavior. Furthermore, this approach does not integrate well into the existing DNS and requires additional Internet infrastructure to be deployed.
{ "cite_N": [ "@cite_15" ], "mid": [ "37081517" ], "abstract": [ "We propose a dedicated DNS Anonymity Service which protects users' privacy. The design consists of two building blocks: a broadcast scheme for the distribution of a \"top list\" of DNS hostnames, and low-latency Mixes for requesting the remaining hostnames unobservably. We show that broadcasting the 10,000 most frequently queried hostnames allows zero-latency lookups for over 80 of DNS queries at reasonable cost. We demonstrate that the performance of the previously proposed Range Queries approach severely suffers from high lookup latencies in a real-world scenario." ] }
1908.04574
2968985217
The domain name resolution into IP addresses can significantly delay connection establishments on the web. Moreover, the common use of recursive DNS resolvers presents a privacy risk as they can closely monitor the user's browsing activities. In this paper, we present a novel HTTP response header allowing web servers to provide their clients with relevant DNS records. Our results indicate that this resolver-less DNS mechanism allows user agents to save the DNS lookup time for subsequent connection establishments. We find that this proposal saves at least 80ms per DNS lookup for the one percent of users having the longest round-trip times towards their recursive resolver. Furthermore, our proposal decreases the number of DNS lookups and thus improves the privacy posture of the user towards the used recursive resolver. Comparing the security guarantees of traditional DNS to our proposal, we find that resolver-less DNS achieves at least the same security properties. In detail, it even improves the user's resilience against censorship through tampered DNS resolvers.
DNS prefetching describes a popular performance optimization where browsers start resolving the hostnames of hyperlinks before the user clicks on them. However, research on this mechanism indicates severe privacy problems. For example, it was shown that the recursive resolver could even infer the search terms the user entered into the search engine based on DNS prefetching @cite_6 .
{ "cite_N": [ "@cite_6" ], "mid": [ "1512251782" ], "abstract": [ "A recent trend in optimizing Internet browsing speed is to optimistically pre-resolve (or prefetch) DNS resolutions. While the practical benefits of doing so are still being debated, this paper attempts to raise awareness that current practices could lead to privacy threats that are ripe for abuse. More specifically, although the adoption of several browser optimizations have already raised security concerns, we examine how prefetching amplifies disclosure attacks to a degree where it is possible to infer the likely search terms issued by clients using a given DNS resolver. The success of these inference attacks relies on the fact that prefetching inserts a significant amount of context into a resolver's cache, allowing an adversary to glean far more detailed insights than when this feature is turned off." ] }
1908.04727
2967991851
A chain in the unit @math -cube is a set @math such that for every @math and @math in @math we either have @math for all @math , or @math for all @math . We consider subsets, @math , of the unit @math -cube @math that satisfy \[ \mathrm{card}(A \cap C) \le k \quad \text{for all chains } C \subseteq [0,1]^n, \] where @math is a fixed positive integer. We refer to such a set @math as a @math -antichain. We show that the @math -dimensional Hausdorff measure of a @math -antichain in @math is at most @math and that the bound is asymptotically sharp. Moreover, we conjecture that there exist @math -antichains in @math whose @math -dimensional Hausdorff measure equals @math and we verify the validity of this conjecture when @math .
When @math this conjecture is clearly true, and when @math it is observed in @cite_0 that the validity of the conjecture is an immediate consequence of the following well-known result. Recall that a singular function @math is a strictly decreasing function whose derivative equals zero almost everywhere.
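For illustration, a concrete strictly decreasing singular function can be obtained from Lebesgue's singular function mentioned in @cite_0 ; the specific construction below is a standard textbook example rather than one taken from the cited paper.
\[
f_a(x) = 1 - L_a(x), \qquad a \in (0,1),\ a \neq \tfrac{1}{2},
\]
where Lebesgue's singular function $L_a$ is the unique continuous solution of
\[
L_a(x) = \begin{cases} a\, L_a(2x), & 0 \le x \le \tfrac{1}{2},\\ a + (1-a)\, L_a(2x-1), & \tfrac{1}{2} \le x \le 1. \end{cases}
\]
Since $L_a$ is continuous, strictly increasing and has derivative zero almost everywhere, $f_a$ is strictly decreasing with derivative zero almost everywhere.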
{ "cite_N": [ "@cite_0" ], "mid": [ "2110785107" ], "abstract": [ "We consider the functions @math defined as the @math th partial derivative of Lebesgue's singular function @math with respect to @math at @math . This sequence includes a multiple of the Takagi function as the case @math . We show that @math is continuous but nowhere differentiable for each @math , and determine the Holder order of @math . From this, we derive that the Hausdorff dimension of the graph of @math is one. Using a formula of Lomnicki and Ulam, we obtain an arithmetic expression for @math using the binary expansion of @math , and use this to find the sets of points where @math and @math take on their absolute maximum and minimum values. We show that these sets are topological Cantor sets. In addition, we characterize the sets of local maximum and minimum points of @math and @math ." ] }
1908.04090
2967417443
Although many tools have been presented in the research literature of software visualization, there is little evidence of their adoption. To choose a suitable visualization tool, practitioners need to analyze various characteristics of tools such as their supported software concerns and level of maturity. Indeed, some tools can be prototypes for which the lifespan is expected to be short, whereas others can be fairly mature products that are maintained for a longer time. Although such characteristics are often described in papers, we conjecture that practitioners willing to adopt software visualizations require additional support to discover suitable visualization tools. In this paper, we elaborate on our efforts to provide such support. To this end, we systematically analyzed research papers in the literature of software visualization and curated a catalog of 70 available tools that employ various visualization techniques to support the analysis of multiple software concerns. We further encapsulate these characteristics in an ontology. VISON, our software visualization ontology, captures these semantics as concepts and relationships. We report on early results of usage scenarios that demonstrate how the ontology can support (i) developers to find suitable tools for particular development concerns, and (ii) researchers who propose new software visualization tools to identify a baseline tool for a controlled experiment.
Some studies examine software visualization tools, in particular, to create guidelines for designing and evaluating software visualizations. For example, Storey et al. @cite_52 examine 12 software visualization tools and propose a framework to evaluate software visualizations based on intent, information, presentation, interaction, and effectiveness. Sensalire et al. @cite_59 @cite_6 classify the features users require in software visualization tools. To this end, they elaborate on lessons learned from evaluating 20 software visualization tools and identify dimensions that can help design an evaluation and then analyze the results. In our investigation, we do not attempt to provide a comprehensive catalog of software visualization tools, but we seek to provide a means to boost software visualization discoverability.
{ "cite_N": [ "@cite_52", "@cite_59", "@cite_6" ], "mid": [ "2014707001", "2566045746", "2147787175" ], "abstract": [ "We provide an evaluation of 15 software visualization tools applicable to corrective maintenance. The tasks supported as well as the techniques used are presented and graded based on the support level. By analyzing user acceptation of current tools, we aim to help developers to select what to consider, avoid or improve in their next releases. Tool users can also recognize what to broadly expect (and what not) from such tools, thereby supporting an informed choice for the tools evaluated here and for similar tools.", "Software visualization can be very useful for answering complex questions that arise in the software development process. Although modern visualization engines offer expressive APIs for building such visualizations, developers often have difficulties to (1) identify a suitable visualization technique to answer their particular development question, and to (2) implement that visualization using the existing APIs. Examples that illustrate the usage of an engine to build concrete visualizations offer a good starting point, but developers may have to traverse long lists of categories and analyze examples one-by-one to find a suitable one. We propose MetaVis, a tool that fills the gap between existing visualization techniques and their practical applications during software development. We classify questions frequently formulated by software developers and for each, based on our expertise, identify suitable visualizations. MetaVis uses tags mined from these questions to offer a tag-iconic cloud-based visualization. Each tag links to suitable visualizations that developers can explore, modify and try out. We present initial results of an implementation of MetaVis in the Pharo programming environment. The tool visualizes 76 developers' questions assigned to 49 visualization examples.", "Many software visualization (SoftVis) tools are continuously being developed by both researchers as well as software development companies. In order to determine if the developed tools are effective in helping their target users, it is desirable that they are exposed to a proper evaluation. Despite this, there is still lack of a general guideline on how these evaluations should be carried out and many of the tool developers perform very limited or no evaluation of their tools. Each person that carries out one evaluation, however, has experiences which, if shared, can guide future evaluators. This paper presents the lessons learned from evaluating over 20 SoftVis tools with over 90 users in five different studies spread on a period of over two years. The lessons covered include the selection of the tools, tasks, as well as evaluation participants. Other discussed points are related to the duration of the evaluation experiment, its location, the procedure followed when carrying out the experiment, as well as motivation of the participants. Finally, an analysis of the lessons learned is shown with the hope that these lessons will be of some assistance to future SoftVis tool evaluators." ] }
1908.04090
2967417443
Although many tools have been presented in the research literature of software visualization, there is little evidence of their adoption. To choose a suitable visualization tool, practitioners need to analyze various characteristics of tools such as their supported software concerns and level of maturity. Indeed, some tools can be prototypes for which the lifespan is expected to be short, whereas others can be fairly mature products that are maintained for a longer time. Although such characteristics are often described in papers, we conjecture that practitioners willing to adopt software visualizations require additional support to discover suitable visualization tools. In this paper, we elaborate on our efforts to provide such support. To this end, we systematically analyzed research papers in the literature of software visualization and curated a catalog of 70 available tools that employ various visualization techniques to support the analysis of multiple software concerns. We further encapsulate these characteristics in an ontology. VISON, our software visualization ontology, captures these semantics as concepts and relationships. We report on early results of usage scenarios that demonstrate how the ontology can support (i) developers to find suitable tools for particular development concerns, and (ii) researchers who propose new software visualization tools to identify a baseline tool for a controlled experiment.
Some other studies present taxonomies that characterize software visualization tools. Myers @cite_14 classifies software visualization tools based on whether they focus on code, data, or algorithms, and whether they are implemented in a static or dynamic fashion. Price et al. @cite_24 present a taxonomy of software visualization tools based on six dimensions: scope, content, form, method, interaction, and effectiveness. Maletic et al. @cite_62 propose a taxonomy of five dimensions to classify software visualization tools: tasks, audience, target, representation, and medium. Schots et al. @cite_54 extend this taxonomy by adding two dimensions: resource requirements of visualizations, and evidence of their utility. Merino et al. @cite_11 add "needs" as a main characteristic of software visualization tools. In their context, "needs" refers to the set of questions that are supported by software visualization tools. Although we consider these studies crucial for reflecting on the software visualization domain, we think that practitioners may require more comprehensive support to identify a suitable tool. In particular, we believe that the semantics of concepts and their relationships are often missing in taxonomies and other classifications. The use of an ontology enforces the analysis of these relationships, which can play an important role in identifying a suitable visualization tool.
{ "cite_N": [ "@cite_14", "@cite_62", "@cite_54", "@cite_24", "@cite_11" ], "mid": [ "2163225273", "2070921605", "2566045746", "2149784077", "2020141201" ], "abstract": [ "A number of taxonomies to classify and categorize software visualization systems have been proposed in the past. Most notable are those presented by Price (1993) and Roman (1993). While these taxonomies are an accurate representation of software visualization issues, they are somewhat skewed with respect to current research areas on software visualization. We revisit this important work and propose a number of re-alignments with respect to addressing the software engineering tasks of large-scale development and maintenance. We propose a framework to emphasize the general tasks of understanding and analysis during development and maintenance of large-scale software systems. Five dimensions relating to the what, where, how, who, and why of software visualization make up this framework. The focus of this work is not so much as to classify software visualization system, but to point out the need for matching the method with the task. Finally, a number of software visualization systems are examined under our framework to highlight the particular problems each addresses.", "In the early 1980s researchers began building systems to visualize computer programs and algorithms using newly emerging graphical workstation technology. After more than a decade of advances in interface technology, a large variety of systems has been built and many different aspects of the visualization process have been investigated. As in any new branch of a science, a taxonomy is required so that researchers can use a common language to discuss the merits of existing systems, classify new ones (to see if they really are new) and identify gaps which suggest promising areas for further development. Several authors have suggested taxonomies for these visualization systems, but they have been ad hoc and have relied on only a handful of characteristics to describe a large and diverse area of work. Another major drawback of these taxonomies is their inability to accommodate expansion: there is no clear way to add new categories when the need arises. In this paper we present a detailed taxonomy of systems for the visualization of computer software. This taxonomy was derived from an established black-box model of software and is composed of a hierarchy with six broad categories at the top and over 30 leaf-level nodes at four hierarchical levels. We describe 12 important systems in detail and apply the taxonomy to them in order to illustrate its features. After discussing each system in this context, we analyse its coverage of the categories and present a research agenda for future work in the area.", "Software visualization can be very useful for answering complex questions that arise in the software development process. Although modern visualization engines offer expressive APIs for building such visualizations, developers often have difficulties to (1) identify a suitable visualization technique to answer their particular development question, and to (2) implement that visualization using the existing APIs. Examples that illustrate the usage of an engine to build concrete visualizations offer a good starting point, but developers may have to traverse long lists of categories and analyze examples one-by-one to find a suitable one. 
We propose MetaVis, a tool that fills the gap between existing visualization techniques and their practical applications during software development. We classify questions frequently formulated by software developers and for each, based on our expertise, identify suitable visualizations. MetaVis uses tags mined from these questions to offer a tag-iconic cloud-based visualization. Each tag links to suitable visualizations that developers can explore, modify and try out. We present initial results of an implementation of MetaVis in the Pharo programming environment. The tool visualizes 76 developers' questions assigned to 49 visualization examples.", "Software is usually complex and always intangible. In practice, the development and maintenance processes are time-consuming activities mainly because software complexity is difficult to manage. Graphical visualization of software has the potential to result in a better and faster understanding of its design and functionality, thus saving time and providing valuable information to improve its quality. However, visualizing software is not an easy task because of the huge amount of information comprised in the software. Furthermore, the information content increases significantly once the time dimension to visualize the evolution of the software is taken into account. Human perception of information and cognitive factors must thus be taken into account to improve the understandability of the visualization. In this paper, we survey visualization techniques, both 2D- and 3D-based, representing the static aspects of the software and its evolution. We categorize these techniques according to the issues they focus on, in order to help compare them and identify the most relevant techniques and tools for a given problem.", "We introduce a novel projection-based visualization method for high-dimensional data sets by combining concepts from MDS and the geometry of the hyperbolic spaces. This approach hyperbolic multi-dimensional scaling (H-MDS) is a synthesis of two important concepts for explorative data analysis and visualization: (i) multi-dimensional scaling uses proximity or pair distance data to generate a low-dimensional, spatial presentation of the data; (ii) previous work on the \"hyperbolic tree browser\" demonstrated the extraordinary advantages for an interactive display of graph-like data in the two-dimensional hyperbolic space (H2).In the new approach, H-MDS maps proximity data directly into the H2. This removes the restriction to \"quasihierarchical\", graph-based data--a major limitation of (ii). Since a suitable distance function can convert all kinds of data to proximity (or distance-based) data, this type of data can be considered the most general.We review important properties of the hyperbolic space and, in particular, the circular Poincare model of the H2. It enables effective human-computer interaction: by mouse dragging the \"focus\", the user can navigate in the data without loosing the context. In H2 the \"fish-eye\" behavior originates not simply by a non-linear view transformation but rather by extraordinary, non-Euclidean properties of the H2. Especially, the exponential growth of length and area of the underlying space makes the H2 a prime target for mapping hierarchical and (now also) high-dimensional data.Several high-dimensional mapping examples including synthetic and real-world data are presented. 
Since high-dimensional data produce \"ring\"-shaped displays, we present methods to enhance the display by modulating the dissimilarity contrast. This is demonstrated for an application for unstructured text: i.e., by using multiple film critiques from news:rec.art.movies.reviews and www.imdb.com, each movie is placed within the H2--creating a \"space of movies\" for interactive exploration." ] }
1908.04008
2967406324
Batch Normalization (BN) (Ioffe and Szegedy 2015) normalizes the features of an input image via statistics of a batch of images, and this batch information is considered as batch noise that will be brought to the features of an instance by BN. We offer a point of view that a self-attention mechanism can help regulate the batch noise by enhancing instance-specific information. Based on this view, we propose combining BN with a self-attention mechanism to adjust the batch noise and give an attention-based version of BN called Instance Enhancement Batch Normalization (IEBN), which recalibrates channel information by a simple linear transformation. IEBN outperforms BN with a light parameter increment in various visual tasks, universally across different network structures and benchmark data sets. Besides, even under the attack of synthetic noise, IEBN can still stabilize network training with good generalization. The code of IEBN is available at this https URL
The normalization layer is an important component of a deep network, and multiple normalization methods have been proposed for different tasks. Batch Normalization @cite_30 , which normalizes the input by mini-batch statistics, has been a foundation of visual recognition tasks @cite_7 . Instance Normalization @cite_1 performs BN-like normalization on a single instance and is widely used in generative models @cite_15 @cite_4 . There are several variants of BN, such as Conditional Batch Normalization @cite_27 for Visual Question Answering, Group Normalization @cite_25 and Batch Renormalization @cite_23 for small-batch-size training, Adaptive Batch Normalization @cite_28 for domain adaptation, and Switchable Normalization @cite_10 , which learns to select a different normalizer for each normalization layer. Among them, Conditional Batch Norm and Batch Renorm adjust the trainable parameters in the reparameterization step of BN. Both are the most closely related to our work, which modifies the trainable scaling parameter.
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_7", "@cite_28", "@cite_1", "@cite_27", "@cite_23", "@cite_15", "@cite_10", "@cite_25" ], "mid": [ "2902302607", "2962958829", "2292729293", "2795783309", "2750854376", "2962836826", "2757196798", "2963743626", "2949117887", "2568343048" ], "abstract": [ "As an indispensable component, Batch Normalization (BN) has successfully improved the training of deep neural networks (DNNs) with mini-batches, by normalizing the distribution of the internal representation for each hidden layer. However, the effectiveness of BN would diminish with the scenario of micro-batch (e.g. less than 4 samples in a mini-batch), since the estimated statistics in a mini-batch are not reliable with insufficient samples. This limits BN's room in training larger models on segmentation, detection, and video-related problems, which require small batches constrained by memory consumption. In this paper, we present a novel normalization method, called Kalman Normalization (KN), for improving and accelerating the training of DNNs, particularly under the context of micro-batches. Specifically, unlike the existing solutions treating each hidden layer as an isolated system, KN treats all the layers in a network as a whole system, and estimates the statistics of a certain layer by considering the distributions of all its preceding layers, mimicking the merits of Kalman Filtering. On ResNet50 trained in ImageNet, KN has 3.4 lower error than its BN counterpart when using a batch size of 4; Even when using typical batch sizes, KN still maintains an advantage over BN while other BN variants suffer a performance degradation. Moreover, KN can be naturally generalized to many existing normalization variants to obtain gains, e.g. equipping Group Normalization with Group Kalman Normalization (GKN). KN can outperform BN and its variants for large scale object detection and segmentation task in COCO 2017.", "While the authors of Batch Normalization (BN) identify and address an important problem involved in training deep networks- Internal Covariate Shift- the current solution has certain drawbacks. For instance, BN depends on batch statistics for layerwise input normalization during training which makes the estimates of mean and standard deviation of input (distribution) to hidden layers inaccurate due to shifting parameter values (especially during initial training epochs). Another fundamental problem with BN is that it cannot be used with batch-size 1 during training. We address these drawbacks of BN by proposing a non-adaptive normalization technique for removing covariate shift, that we call Normalization Propagation. Our approach does not depend on batch statistics, but rather uses a data-independent parametric estimate of mean and standard-deviation in every layer thus being computationally faster compared with BN. We exploit the observation that the pre-activation before Rectified Linear Units follow Gaussian distribution in deep networks, and that once the first and second order statistics of any given dataset are normalized, we can forward propagate this normalization without the need for recalculating the approximate statistics for hidden layers.", "While the authors of Batch Normalization (BN) identify and address an important problem involved in training deep networks-- Internal Covariate Shift-- the current solution has certain drawbacks. 
Specifically, BN depends on batch statistics for layerwise input normalization during training which makes the estimates of mean and standard deviation of input (distribution) to hidden layers inaccurate for validation due to shifting parameter values (especially during initial training epochs). Also, BN cannot be used with batch-size 1 during training. We address these drawbacks by proposing a non-adaptive normalization technique for removing internal covariate shift, that we call Normalization Propagation. Our approach does not depend on batch statistics, but rather uses a data-independent parametric estimate of mean and standard-deviation in every layer thus being computationally faster compared with BN. We exploit the observation that the pre-activation before Rectified Linear Units follow Gaussian distribution in deep networks, and that once the first and second order statistics of any given dataset are normalized, we can forward propagate this normalization without the need for recalculating the approximate statistics for hidden layers.", "Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems --- BN's error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN's usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN's computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6 lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code in modern libraries.", "Batch Normalization (BN) has proven to be an effective algorithm for deep neural network training by normalizing the input to each neuron and reducing the internal covariate shift. The space of weight vectors in the BN layer can be naturally interpreted as a Riemannian manifold, which is invariant to linear scaling of weights. Following the intrinsic geometry of this manifold provides a new learning rule that is more efficient and easier to analyze. We also propose intuitive and effective gradient clipping and regularization methods for the proposed algorithm by utilizing the geometry of the manifold. The resulting algorithm consistently outperforms the original BN on various types of network architectures and datasets.", "Batch normalization (BN) has proven to be an effective algorithm for deep neural network training by normalizing the input to each neuron and reducing the internal covariate shift. The space of weight vectors in the BN layer can be naturally interpreted as a Riemannian manifold, which is invariant to linear scaling of weights. 
Following the intrinsic geometry of this manifold provides a new learning rule that is more efficient and easier to analyze. We also propose intuitive and effective gradient clipping and regularization methods for the proposed algorithm by utilizing the geometry of the manifold. The resulting algorithm consistently outperforms the original BN on various types of network architectures and datasets.", "Batch normalization (BN) has become a de facto standard for training deep convolutional networks. However, BN accounts for a significant fraction of training run-time and is difficult to accelerate, since it is a memory-bandwidth bounded operation. Such a drawback of BN motivates us to explore recently proposed weight normalization algorithms (WN algorithms), i.e. weight normalization, normalization propagation and weight normalization with translated ReLU. These algorithms don't slow-down training iterations and were experimentally shown to outperform BN on relatively small networks and datasets. However, it is not clear if these algorithms could replace BN in practical, large-scale applications. We answer this question by providing a detailed comparison of BN and WN algorithms using ResNet-50 network trained on ImageNet. We found that although WN achieves better training accuracy, the final test accuracy is significantly lower ( @math ) than that of BN. This result demonstrates the surprising strength of the BN regularization effect which we were unable to compensate for using standard regularization techniques like dropout and weight decay. We also found that training of deep networks with WN algorithms is significantly less stable compared to BN, limiting their practical applications.", "Batch Normalization (BN) is capable of accelerating the training of deep models by centering and scaling activations within mini-batches. In this work, we propose Decorrelated Batch Normalization (DBN), which not just centers and scales activations but whitens them. We explore multiple whitening techniques, and find that PCA whitening causes a problem we call stochastic axis swapping, which is detrimental to learning. We show that ZCA whitening does not suffer from this problem, permitting successful learning. DBN retains the desirable qualities of BN and further improves BN's optimization efficiency and generalization ability. We design comprehensive experiments to show that DBN can improve the performance of BN on multilayer perceptrons and convolutional neural networks. Furthermore, we consistently improve the accuracy of residual networks on CIFAR-10, CIFAR-100, and ImageNet.", "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. 
Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters.", "Normalization techniques have only recently begun to be exploited in supervised learning tasks. Batch normalization exploits mini-batch statistics to normalize the activations. This was shown to speed up training and result in better models. However its success has been very limited when dealing with recurrent neural networks. On the other hand, layer normalization normalizes the activations across all activities within a layer. This was shown to work well in the recurrent setting. In this paper we propose a unified view of normalization techniques, as forms of divisive normalization, which includes layer and batch normalization as special cases. Our second contribution is the finding that a small modification to these normalization schemes, in conjunction with a sparse regularizer on the activations, leads to significant benefits over standard normalization techniques. We demonstrate the effectiveness of our unified divisive normalization framework in the context of convolutional neural nets and recurrent neural networks, showing improvements over baselines in image classification, language modeling as well as super-resolution." ] }
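As a rough illustration of the mechanism described in the record above (BN uses shared batch statistics, and IEBN then recalibrates each channel with an instance-specific, sigmoid-gated linear transformation), here is a minimal PyTorch-style sketch. The pooling choice, initialization, and exact gating form are assumptions made for illustration; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class InstanceEnhancedBN(nn.Module):
    """Sketch: BN followed by an instance-conditioned channel rescaling.

    Assumption: the per-instance statistic is a global average pool of the
    input, passed through a per-channel linear transform and a sigmoid.
    """
    def __init__(self, channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)
        # two extra parameters per channel -> a "light parameter increment"
        self.weight = nn.Parameter(torch.zeros(1, channels, 1, 1))
        self.bias = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x):
        y = self.bn(x)                          # batch statistics (batch "noise")
        s = x.mean(dim=(2, 3), keepdim=True)    # instance-specific channel statistic
        gate = torch.sigmoid(self.weight * s + self.bias)
        return y * gate                         # instance-enhanced scaling
```

The only claim this sketch supports is the cost accounting: the instance-conditioned gate adds two parameters per channel on top of standard BN.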
1908.04008
2967406324
Batch Normalization (BN) (Ioffe and Szegedy 2015) normalizes the features of an input image via statistics of a batch of images, and this batch information is considered as batch noise that will be brought to the features of an instance by BN. We offer a point of view that a self-attention mechanism can help regulate the batch noise by enhancing instance-specific information. Based on this view, we propose combining BN with a self-attention mechanism to adjust the batch noise and give an attention-based version of BN called Instance Enhancement Batch Normalization (IEBN), which recalibrates channel information by a simple linear transformation. IEBN outperforms BN with a light parameter increment in various visual tasks, universally across different network structures and benchmark data sets. Besides, even under the attack of synthetic noise, IEBN can still stabilize network training with good generalization. The code of IEBN is available at this https URL
The cooperation of BN and attention dates back to Visual Question Answering (VQA), which takes as input an image and an image-related question and outputs the answer to the question. For this task, Conditional Batch Norm @cite_27 was proposed to influence the feature extraction of an image via features collected from the question. A Recurrent Neural Network (RNN) is used to extract the features from the question, while a Convolutional Neural Network (CNN), a pre-trained ResNet, performs feature selection from the image. The shift and scale parameters of the BN layers in the pre-trained ResNet are conditioned on the features extracted from the question, so that the feature selection of the CNN is question-referenced and the overall network can handle different reasoning tasks. Note that for VQA, the features from the question can be viewed as external attention guiding the training of the overall network, since those features are external with regard to the image. In our work, the proposed IEBN can also be viewed as a kind of Conditional Batch Norm, but the guidance of network training uses internal attention, since we use a self-attention mechanism to extract information from the image itself.
{ "cite_N": [ "@cite_27" ], "mid": [ "2174492417" ], "abstract": [ "We propose a novel attention based deep learning architecture for visual question answering task (VQA). Given an image and an image related natural language question, VQA generates the natural language answer for the question. Generating the correct answers requires the model's attention to focus on the regions corresponding to the question, because different questions inquire about the attributes of different image regions. We introduce an attention based configurable convolutional neural network (ABC-CNN) to learn such question-guided attention. ABC-CNN determines an attention map for an image-question pair by convolving the image feature map with configurable convolutional kernels derived from the question's semantics. We evaluate the ABC-CNN architecture on three benchmark VQA datasets: Toronto COCO-QA, DAQUAR, and VQA dataset. ABC-CNN model achieves significant improvements over state-of-the-art methods on these datasets. The question-guided attention generated by ABC-CNN is also shown to reflect the regions that are highly relevant to the questions." ] }
1908.04036
2968366816
This work identifies the fundamental limits of cache-aided coded multicasting in the presence of the well-known 'worst-user' bottleneck. This stems from the presence of receiving users with uneven channel capacities, which often forces the rate of transmission of each multicasting message to be reduced to that of the slowest user. This bottleneck, which can be detrimental in general wireless broadcast settings, motivates the analysis of coded caching over a standard Single-Input-Single-Output (SISO) Broadcast Channel (BC) with K cache-aided receivers, each with a generally different channel capacity. For this setting, we design a communication algorithm that is based on superposition coding and capitalizes on the realization that the user with the worst channel may not be the real bottleneck of communication. We then proceed to provide a converse that shows the algorithm to be near optimal, identifying the fundamental limits of this setting within a multiplicative factor of 4. Interestingly, the result reveals that, even if several users are experiencing channels with reduced capacity, the system can achieve the same optimal delivery time that would be achievable if all users enjoyed maximal capacity.
The importance of the uneven-channel bottleneck in coded caching has been acknowledged in a large number of recent works that seek to understand and ameliorate this limitation @cite_7 @cite_20 @cite_25 @cite_31 @cite_28 @cite_12 @cite_18 @cite_3 @cite_21 @cite_15 @cite_30 @cite_23 @cite_14 @cite_9 @cite_19 @cite_17 . For example, reference @cite_7 focuses on the uneven link-capacity SISO BC where each user experiences a distinct channel strength, and proposes algorithms that outperform the naive implementation of the algorithm of @cite_24 , whereby each coded message is transmitted at a rate equal to the rate of the worst user served by the corresponding XOR operation. Under a similar setting, the work in @cite_31 considered feedback-aided user selection that can maximize the sum-rate while also improving fairness, ensuring that each user receives their requested file in a timely manner. In the related context of the erasure BC where users have uneven erasure probabilities, references @cite_28 and @cite_12 showed how erasures at some users can be exploited as side information at the remaining users in order to increase system performance. Related work can also be found in @cite_18 @cite_3 @cite_21 .
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_14", "@cite_7", "@cite_15", "@cite_28", "@cite_9", "@cite_21", "@cite_3", "@cite_24", "@cite_19", "@cite_23", "@cite_12", "@cite_31", "@cite_25", "@cite_20", "@cite_17" ], "mid": [ "2963745869", "2963602389", "1987954156", "2078905979", "2022138967", "2527594325", "2949272603", "2115094114", "2963279554", "2051343255", "2964101002", "2598021007", "2516631548", "1970554468", "2963352975", "2952236086", "2765687053" ], "abstract": [ "We explore the performance of coded caching in a SISO BC setting where some users have higher link capacities than others. Focusing on a binary and fixed topological model where strong links have a fixed normalized capacity 1, and where weak links have reduced normalized capacity T < 1, we identify — as a function of the cache size and T — the optimal throughput performance, within a factor of at most 8. The transmission scheme that achieves this performance, employs a simple form of interference enhancement, and exploits the property that weak links attenuate interference, thus allowing for multicasting rates to remain high even when involving weak users. This approach ameliorates the negative effects of uneven topology in multicasting, now allowing all users to achieve the optimal performance associated to T = 1, even if τ is approximately as low as T ≥ 1 − (1 − w)g where g is the coded-caching gain, and where w is the fraction of users that are weak. This leads to the interesting conclusion that for coded multicasting, the weak users need not bring down the performance of all users, but on the contrary to a certain extent, the strong users can lift the performance of the weak users without any penalties on their own performance. Furthermore for smaller ranges of τ, we also see that achieving the near-optimal performance comes with the advantage that the strong users do not suffer any additional delays compared to the case where T = 1.", "We consider the canonical shared link caching network formed by a source node, hosting a library of @math information messages (files), connected via a noiseless multicast link to @math user nodes, each equipped with a cache of size @math files. Users request files independently at random according to an a-priori known demand distribution q. A coding scheme for this network consists of two phases: cache placement and delivery. The cache placement is a mapping of the library files onto the user caches that can be optimized as a function of the demand statistics, but is agnostic of the actual demand realization. After the user demands are revealed, during the delivery phase the source sends a codeword (function of the library files, cache placement, and demands) to the users, such that each user retrieves its requested file with arbitrarily high probability. The goal is to minimize the average transmission length of the delivery phase, referred to as rate (expressed in channel symbols per file). In the case of deterministic demands, the optimal min-max rate has been characterized within a constant multiplicative factor, independent of the network parameters. The case of random demands was previously addressed by applying the order-optimal min-max scheme separately within groups of files requested with similar probability. However, no complete characterization of order-optimality was previously provided for random demands under the average rate performance criterion. 
In this paper, we consider the random demand setting and, for the special yet relevant case of a Zipf demand distribution, we provide a comprehensive characterization of the order-optimal rate for all regimes of the system parameters, as well as an explicit placement and delivery scheme achieving order-optimal rates. We present also numerical results that confirm the superiority of our scheme with respect to previously proposed schemes for the same setting.", "In this paper, we study resource allocation in a downlink OFDMA system assuming imperfect channel state information (CSI) at the transmitter. To achieve the individual QoS of the users in OFDMA system, adaptive resource allocation is very important, and has therefore been an active area of research. However, in most of the the previous work perfect CSI at the transmitter is assumed which is rarely possible due to channel estimation error and feedback delay. In this paper, we study the effect of channel estimation error on resource allocation in a downlink OFDMA system. We assume that each user terminal estimates its channel by using an MMSE estimator and sends its CSI back to the base station through a feedback channel. We approach the problem by using convex optimization framework, provide an explicit closed form expression for the users' transmit power and then develop an optimal margin adaptive resource allocation algorithm. Our proposed algorithm minimizes the total transmit power of the system subject to constraints on users' average data rate. The algorithm has polynomial complexity and solves the problem with zero optimality gaps. Simulation results show that our algorithm highly improves the system performance in the presence of imperfect channel estimation.", "We introduce the concept of resource management for in-network caching environments. We argue that in Information-Centric Networking environments, deterministically caching content messages at predefined places along the content delivery path results in unfair and inefficient content multiplexing between different content flows, as well as in significant caching redundancy. Instead, allocating resources along the path according to content flow characteristics results in better use of network resources and therefore, higher overall performance. The design principles of our proposed in-network caching scheme, which we call ProbCache, target these two outcomes, namely reduction of caching redundancy and fair content flow multiplexing along the delivery path. In particular, ProbCache approximates the caching capability of a path and caches contents probabilistically to: 1) leave caching space for other flows sharing (part of) the same path, and 2) fairly multiplex contents in caches along the path from the server to the client. We elaborate on the content multiplexing fairness of ProbCache and find that it sometimes behaves in favor of content flows connected far away from the source, that is, it gives higher priority to flows travelling longer paths, leaving little space to shorter-path flows. We introduce an enhanced version of the main algorithm that guarantees fair behavior to all participating content flows. We evaluate the proposed schemes in both homogeneous and heterogeneous cache size environments and formulate a framework for resource allocation in in-network caching environments. 
The proposed probabilistic approach to in-network caching exhibits ideal performance both in terms of network resource utilization and in terms of resource allocation fairness among competing content flows. Finally, and in contrast to the expected behavior, we find that the efficient design of ProbCache results in fast convergence to caching of popular content items.", "Our paper presents solutions that can significantly improve the delay performance of putting and retrieving data in and out of cloud storage. We first focus on measuring the delay performance of a very popular cloud storage service Amazon S3. We establish that there is significant randomness in service times for reading and writing small and medium size objects when assigned distinct keys. We further demonstrate that using erasure coding, parallel connections to storage cloud and limited chunking (i.e., dividing the object into a few smaller objects) together pushes the envelope on service time distributions significantly (e.g., 76 , 80 , and 85 reductions in mean, 90th, and 99th percentiles for 2-MB files) at the expense of additional storage (e.g., 1.75x). However, chunking and erasure coding increase the load and hence the queuing delays while reducing the supportable rate region in number of requests per second per node. Thus, in the second part of our paper, we focus on analyzing the delay performance when chunking, forward error correction (FEC), and parallel connections are used together. Based on this analysis, we develop load-adaptive algorithms that can pick the best code rate on a per-request basis by using offline computed queue backlog thresholds. The solutions work with homogeneous services with fixed object sizes, chunk sizes, operation type (e.g., read or write) as well as heterogeneous services with mixture of object sizes, chunk sizes, and operation types. We also present a simple greedy solution that opportunistically uses idle connections and picks the erasure coding rate accordingly on the fly. Both backlog-based and greedy solutions support the full rate region and provide best mean delay performance when compared to the best fixed coding rate policy. Our evaluations show that backlog-based solutions achieve better delay performance at higher percentile values than the greedy solution.", "We consider a basic cache network, in which a single server is connected to multiple users via a shared bottleneck link. The server has a database of files (content). Each user has an isolated memory that can be used to cache content in a prefetching phase. In a following delivery phase, each user requests a file from the database, and the server needs to deliver users’ demands as efficiently as possible by taking into account their cache contents. We focus on an important and commonly used class of prefetching schemes, where the caches are filled with uncoded data. We provide the exact characterization of the rate-memory tradeoff for this problem, by deriving both the minimum average rate (for a uniform file popularity) and the minimum peak rate required on the bottleneck link for a given cache size available at each user. In particular, we propose a novel caching scheme, which strictly improves the state of the art by exploiting commonality among user demands. We then demonstrate the exact optimality of our proposed scheme through a matching converse, by dividing the set of all demands into types, and showing that the placement phase in the proposed caching scheme is universally optimal for all types. 
Using these techniques, we also fully characterize the rate-memory tradeoff for a decentralized setting, in which users fill out their cache content without any coordination.", "We investigate the problem of optimal request routing and content caching in a heterogeneous network supporting in-network content caching with the goal of minimizing average content access delay. Here, content can either be accessed directly from a back-end server (where content resides permanently) or be obtained from one of multiple in-network caches. To access a piece of content, a user must decide whether to route its request to a cache or to the back-end server. Additionally, caches must decide which content to cache. We investigate the problem complexity of two problem formulations, where the direct path to the back-end server is modeled as i) a congestion-sensitive or ii) a congestion-insensitive path, reflecting whether or not the delay of the uncached path to the back-end server depends on the user request load, respectively. We show that the problem is NP-complete in both cases. We prove that under the congestion-insensitive model the problem can be solved optimally in polynomial time if each piece of content is requested by only one user, or when there are at most two caches in the network. We also identify a structural property of the user-cache graph that potentially makes the problem NP-complete. For the congestion-sensitive model, we prove that the problem remains NP-complete even if there is only one cache in the network and each content is requested by only one user. We show that approximate solutions can be found for both models within a (1-1 e) factor of the optimal solution, and demonstrate a greedy algorithm that is found to be within 1 of optimal for small problem sizes. Through trace-driven simulations we evaluate the performance of our greedy algorithms, which show up to a 50 reduction in average delay over solutions based on LRU content caching.", "This paper addresses three issues in the field of ad hoc network capacity: the impact of (i) channel fading, (ii) channel inversion power control, and (iii) threshold-based scheduling on capacity. Channel inversion and threshold scheduling may be viewed as simple ways to exploit channel state information (CSI) without requiring cooperation across transmitters. We use the transmission capacity (TC) as our metric, defined as the maximum spatial intensity of successful simultaneous transmissions subject to a constraint on the outage probability (OP). By assuming the nodes are located on the infinite plane according to a Poisson process, we are able to employ tools from stochastic geometry to obtain asymptotically tight bounds on the distribution of the signal-to-interference (SIR) level, yielding in turn tight bounds on the OP (relative to a given SIR threshold) and the TC. We demonstrate that in the absence of CSI, fading can significantly reduce the TC and somewhat surprisingly, channel inversion only makes matters worse. 
We develop a threshold-based transmission rule where transmitters are active only if the channel to their receiver is acceptably strong, obtain expressions for the optimal threshold, and show that this simple, fully distributed scheme can significantly reduce the effect of fading.", "We consider coding schemes for computationally bounded channels, which can introduce an arbitrary set of errors as long as (a) the fraction of errors is bounded with high probability by a parameter p and (b) the process that adds the errors can be described by a sufficiently “simple” circuit. Codes for such channel models are attractive since, like codes for standard adversarial errors, they can handle channels whose true behavior is unknown or varying over time. For two classes of channels, we provide explicit, efficiently encodable decodable codes of optimal rate where only inefficiently decodable codes were previously known. In each case, we provide one encoder decoder that works for every channel in the class. The encoders are randomized, and probabilities are taken over the (local, unknown to the decoder) coins of the encoder and those of the channel. Unique decoding for additive errors: We give the first construction of a polynomial-time encodable decodable code for additive (a.k.a. oblivious) channels that achieve the Shannon capacity 1 − H(p). These are channels that add an arbitrary error vector e ∈ 0, 1 N of weight at most pN to the transmitted word; the vector e can depend on the code but not on the randomness of the encoder or the particular transmitted word. Such channels capture binary symmetric errors and burst errors as special cases. List decoding for polynomial-time channels: For every constant c > 0, we construct codes with optimal rate (arbitrarily close to 1 − H(p)) that efficiently recover a short list containing the correct message with high probability for channels describable by circuits of size at most Nc. Our construction is not fully explicit but rather Monte Carlo (we give an algorithm that, with high probability, produces an encoder decoder pair that works for all time Nc channels). We are not aware of any channel models considered in the information theory literature other than purely adversarial channels, which require more than linear-size circuits to implement. We justify the relaxation to list decoding with an impossibility result showing that, in a large range of parameters (p > 1 4), codes that are uniquely decodable for a modest class of channels (online, memoryless, nonuniform channels) cannot have positive rate.", "This paper introduces a novel technique for access by a cognitive Secondary User (SU) to a spectrum with an incumbent Primary User (PU), which uses Type-I Hybrid ARQ. The technique allows the SU to perform selective retransmissions of previously corrupted SU data packets. The temporal redundancy introduced by the primary ARQ protocol and by the selective SU retransmission process can be exploited by the SU receiver to perform Interference Cancellation (IC) over the entire interference pattern, thus creating a \"clean\" channel for the decoding of the concurrent message. The chain decoding technique, initiated by a successful decoding operation of a SU or PU message, consists in the iterative application of IC, as previously corrupted messages become decodable. Based on this scheme, we design an optimal policy that maximizes the SU throughput under a constraint on the average long-term PU throughput degradation. 
We show that the optimal policy can be found by first optimizing the SU access policy using a Markov Decision Process formulation, and then applying a chain decoding protocol defined by five basic rules. Such an approach enables a compact state representation of the protocol, and its efficient numerical optimization. Finally, we show by numerical results the throughput benefit of the proposed technique.", "We consider a wireless device-to-device (D2D) network where the nodes have precached information from a library of available files. Nodes request files at random. If the requested file is not in the on-board cache, then it is downloaded from some neighboring node via one-hop local communication. An outage event occurs when a requested file is not found in the neighborhood of the requesting node, or if the network admission control policy decides not to serve the request. We characterize the optimal throughput-outage tradeoff in terms of tight scaling laws for various regimes of the system parameters, when both the number of nodes and the number of files in the library grow to infinity. Our analysis is based on Gupta and Kumar protocol model for the underlying D2D wireless network, widely used in the literature on capacity scaling laws of wireless networks without caching. Our results show that the combination of D2D spectrum reuse and caching at the user nodes yields a per-user throughput independent of the number of users, for any fixed outage probability in (0, 1). This implies that the D2D caching network is scalable: even though the number of users increases, each user achieves constant throughput. This behavior is very different from the classical Gupta and Kumar result on ad hoc wireless networks, for which the per-user throughput vanishes as the number of users increases. Furthermore, we show that the user throughput is directly proportional to the fraction of cached information over the whole file library size. Therefore, we can conclude that D2D caching networks can turn memory into bandwidth (i.e., doubling the on-board cache memory on the user devices yields a 100 increase of the user throughout).", "We consider the content delivery problem in a fading multi-input single-output channel with cache-aided users. We are interested in the scalability of the equivalent content delivery rate when the number of users, @math , is large. Analytical results show that, using coded caching and wireless multicasting, without channel state information at the transmitter, linear scaling of the content delivery rate with respect to @math can be achieved in some different ways. First, if the multicast transmission spans over @math independent sub-channels, e.g., in quasi-static fading if @math , and in block fading or multi-carrier systems if @math , linear scaling can be obtained, when the product of the number of transmit antennas and the number of sub-channels scales logarithmically with @math . Second, even with a fixed number of antennas, we can achieve the linear scaling with a threshold-based user selection requiring only one-bit feedbacks from the users. When CSIT is available, we propose a mixed strategy that combines spatial multiplexing and multicasting. 
Numerical results show that, by optimizing the power split between spatial multiplexing and multicasting, we can achieve a significant gain of the content delivery rate with moderate cache size.", "In this paper, we investigate the optimal caching policy, respectively, maximizing the success probability and area spectral efficiency (ASE) in a cache-enabled heterogeneous network (HetNet), where a tier of multi-antenna macro base stations (MBSs) is overlaid with a tier of helpers with caches. Under the probabilistic caching framework, we resort to stochastic geometry theory to derive the success probability and ASE. After finding the optimal caching policies, we analyze the impact of critical system parameters and compare the ASE with traditional HetNet where the MBS tier is overlaid by a tier of pico BSs (PBSs) with limited-capacity backhaul. Analytical and numerical results show that the optimal caching probability is less skewed among helpers to maximize the success probability when the ratios of MBS-to-helper density, MBS-to-helper transmit power, user-to-helper density, or the rate requirement are small, but is more skewed to maximize the ASE in general. Compared with traditional HetNet, the helper density is much lower than the PBS density to achieve the same target ASE. The helper density can be reduced by increasing cache size. With given total cache size within an area, there exists an optimal helper node density that maximizes the ASE.", "The throughput of wireless networks can be significantly improved by multi-channel communications compared with single-channel communications since the use of multiple channels can reduce interference influence. In this paper, we study interference-aware topology control and QoS routing in IEEE 802.11-based multi-channel wireless mesh networks with dynamic traffic. Channel assignment and routing are two basic issues in such networks. Different channel assignments can lead to different network topologies. We present a novel definition of co-channel interference. Based on this concept, we formally define and present an effective heuristic for the minimum INterference Survivable Topology Control (INSTC) problem which seeks a channel assignment for the given network such that the induced network topology is interference-minimum among all K-connected topologies. We then formulate the Bandwidth-Aware Routing (BAR) problem for a given network topology, which seeks routes for QoS connection requests with bandwidth requirements. We present a polynomial time optimal algorithm to solve the BAR problem under the assumption that traffic demands are splittable. For the non-splittable case, we present a maximum bottleneck capacity path routing heuristic. Simulation results show that compared with the simple common channel assignment and shortest path routing approach, our scheme improves the system performance by 57 on average in terms of connection blocking ratio.", "An erasure broadcast network is considered with two disjoint sets of receivers: a set of weak receivers with all-equal erasure probabilities and equal cache sizes and a set of strong receivers with all-equal erasure probabilities and no cache memories. Lower and upper bounds are presented on the capacity-memory tradeoff of this network (the largest rate at which messages can be reliably communicated for given cache sizes). The lower bound is achieved by means of a joint cache-channel coding scheme and significantly improves over traditional schemes based on the separate cache-channel coding . 
In particular, it is shown that the joint cache-channel coding offers new global caching gains that scale with the number of strong receivers in the network. The upper bound uses bounding techniques from degraded broadcast channels and introduces an averaging argument to capture the fact that the contents of the cache memories are designed before knowing users’ demands. The derived upper bound is valid for all stochastically degraded broadcast channels. The lower and upper bounds match for a single weak receiver (and any number of strong receivers) when the cache size does not exceed a certain threshold. Improved bounds are presented for the special case of a single weak and a single strong receiver with two files and the bounds are shown to match over a large range of cache sizes.", "We consider a wireless Device-to-Device (D2D) network where communication is restricted to be single-hop. Users make arbitrary requests from a finite library of files and have pre-cached information on their devices, subject to a per-node storage capacity constraint. A similar problem has already been considered in an infrastructure'' setting, where all users receive a common multicast (coded) message from a single omniscient server (e.g., a base station having all the files in the library) through a shared bottleneck link. In this work, we consider a D2D infrastructure-less'' version of the problem. We propose a caching strategy based on deterministic assignment of subpackets of the library files, and a coded delivery strategy where the users send linearly coded messages to each other in order to collectively satisfy their demands. We also consider a random caching strategy, which is more suitable to a fully decentralized implementation. Under certain conditions, both approaches can achieve the information theoretic outer bound within a constant multiplicative factor. In our previous work, we showed that a caching D2D wireless network with one-hop communication, random caching, and uncoded delivery, achieves the same throughput scaling law of the infrastructure-based coded multicasting scheme, in the regime of large number of users and files in the library. This shows that the spatial reuse gain of the D2D network is order-equivalent to the coded multicasting gain of single base station transmission. It is therefore natural to ask whether these two gains are cumulative, i.e.,if a D2D network with both local communication (spatial reuse) and coded multicasting can provide an improved scaling law. Somewhat counterintuitively, we show that these gains do not cumulate (in terms of throughput scaling law).", "Caching plays an important role in reducing the backbone traffic when serving high-volume multimedia content. Recently, a new class of coded caching schemes have received significant interest, because they can exploit coded multi-cast opportunities to further reduce backbone traffic. Without considering file popularity, prior works have characterized the fundamental performance limits of coded caching through a deterministic worst-case analysis. However, when heterogeneous file popularity is considered, there remain open questions regarding the fundamental limits of coded caching performance. In this paper, for an arbitrary popularity distribution, we first derive a new information-theoretic lower bound on the expected transmission rate of any coded caching schemes. 
We then show that a simple coded-caching scheme attains an expected transmission rate that is at most a constant factor away from the lower bound. Unlike other existing studies, the constant factor that we derived is independent of the popularity distribution." ] }
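To make the worst-user penalty discussed in the record above concrete, the toy computation below evaluates the naive strategy in which each coded multicast message is transmitted at the rate of its slowest intended receiver, using the standard placement of the scheme cited as @cite_24 . The specific numbers (K = 4 users, cache fraction 1/4, one user at half capacity) are assumptions chosen only for illustration.

```python
from itertools import combinations
from math import comb

K, gamma = 4, 0.25                       # assumed toy parameters
t = int(K * gamma)                       # each XOR serves t + 1 users
rates = [1.0, 1.0, 1.0, 0.5]             # normalized capacities; one weak user

def naive_delivery_time(r):
    """Total time when every XOR is sent at the rate of its slowest recipient."""
    xor_size = 1.0 / comb(K, t)          # subfile (and hence XOR) size in file units
    return sum(xor_size / min(r[u] for u in group)
               for group in combinations(range(K), t + 1))

print(naive_delivery_time([1.0] * K))    # 1.5  -> benchmark with all links at full capacity
print(naive_delivery_time(rates))        # 2.25 -> loss caused by the single weak user
```

The abstract argues that a superposition-based scheme can avoid much of this loss, achieving (within a multiplicative factor of 4) the full-capacity benchmark even when several users have reduced capacity.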
1908.04036
2968366816
This work identifies the fundamental limits of cache-aided coded multicasting in the presence of the well-known 'worst-user' bottleneck. This stems from the presence of receiving users with uneven channel capacities, which often forces the rate of transmission of each multicasting message to be reduced to that of the slowest user. This bottleneck, which can be detrimental in general wireless broadcast settings, motivates the analysis of coded caching over a standard Single-Input-Single-Output (SISO) Broadcast Channel (BC) with K cache-aided receivers, each with a generally different channel capacity. For this setting, we design a communication algorithm that is based on superposition coding and capitalizes on the realization that the user with the worst channel may not be the real bottleneck of communication. We then proceed to provide a converse that shows the algorithm to be near optimal, identifying the fundamental limits of this setting within a multiplicative factor of 4. Interestingly, the result reveals that, even if several users are experiencing channels with reduced capacity, the system can achieve the same optimal delivery time that would be achievable if all users enjoyed maximal capacity.
The uneven-capacity bottleneck was also studied in the presence of multiple transmit antennas @cite_25 @cite_26 . Reference @cite_25 exploited transmit diversity to ameliorate the impact of the worst-user capacity, and showed that employing @math transmit antennas can allow for a transmission sum-rate that scales with @math . Similarly, the work in @cite_26 considered multiple transmit and multiple receive antennas, and designed topology-dependent cache-placement to ameliorate the worst-user effect.
{ "cite_N": [ "@cite_26", "@cite_25" ], "mid": [ "2598021007", "2137531352" ], "abstract": [ "We consider the content delivery problem in a fading multi-input single-output channel with cache-aided users. We are interested in the scalability of the equivalent content delivery rate when the number of users, @math , is large. Analytical results show that, using coded caching and wireless multicasting, without channel state information at the transmitter, linear scaling of the content delivery rate with respect to @math can be achieved in some different ways. First, if the multicast transmission spans over @math independent sub-channels, e.g., in quasi-static fading if @math , and in block fading or multi-carrier systems if @math , linear scaling can be obtained, when the product of the number of transmit antennas and the number of sub-channels scales logarithmically with @math . Second, even with a fixed number of antennas, we can achieve the linear scaling with a threshold-based user selection requiring only one-bit feedbacks from the users. When CSIT is available, we propose a mixed strategy that combines spatial multiplexing and multicasting. Numerical results show that, by optimizing the power split between spatial multiplexing and multicasting, we can achieve a significant gain of the content delivery rate with moderate cache size.", "We consider a dense fading multi-user network with multiple active multi-antenna source-destination pair terminals communicating simultaneously through a large common set of K multi-antenna relay terminals in the full spatial multiplexing mode. We use Shannon-theoretic tools to analyze the tradeoff between energy efficiency and spectral efficiency (known as the power-bandwidth tradeoff) in meaningful asymptotic regimes of signal-to-noise ratio (SNR) and network size. We design linear distributed multi-antenna relay beamforming (LDMRB) schemes that exploit the spatial signature of multi-user interference and characterize their power-bandwidth tradeoff under a system-wide power constraint on source and relay transmissions. The impact of multiple users, multiple relays and multiple antennas on the key performance measures of the high and low SNR regimes is investigated in order to shed new light on the possible reduction in power and bandwidth requirements through the usage of such practical relay cooperation techniques. Our results indicate that point-to-point coded multi-user networks supported by distributed relay beamforming techniques yield enhanced energy efficiency and spectral efficiency, and with appropriate signaling and sufficient antenna degrees of freedom, can achieve asymptotically optimal power-bandwidth tradeoff with the best possible (i.e., as in the cutset bound) energy scaling of K-1 and the best possible spectral efficiency slope at any SNR for large number of relay terminals. Furthermore, our results help to identify the role of interference cancellation capability at the relay terminals on realizing the optimal power- bandwidth tradeoff; and show how relaying schemes that do not attempt to mitigate multi-user interference, despite their optimal capacity scaling performance, could yield a poor power- bandwidth tradeoff." ] }
1908.04036
2968366816
This work identifies the fundamental limits of cache-aided coded multicasting in the presence of the well-known 'worst-user' bottleneck. This stems from the presence of receiving users with uneven channel capacities, which often forces the rate of transmission of each multicasting message to be reduced to that of the slowest user. This bottleneck, which can be detrimental in general wireless broadcast settings, motivates the analysis of coded caching over a standard Single-Input-Single-Output (SISO) Broadcast Channel (BC) with K cache-aided receivers, each with a generally different channel capacity. For this setting, we design a communication algorithm that is based on superposition coding and capitalizes on the realization that the user with the worst channel may not be the real bottleneck of communication. We then proceed to provide a converse that shows the algorithm to be near optimal, identifying the fundamental limits of this setting within a multiplicative factor of 4. Interestingly, the result reveals that, even if several users are experiencing channels with reduced capacity, the system can achieve the same optimal delivery time that would be achievable if all users enjoyed maximal capacity.
In a related line of work, the papers @cite_15 and @cite_30 studied the cache-aided topological interference channel where @math cache-aided transmitters are connected to @math cache-aided receivers, and each transmitter is connected to one receiver via a direct 'strong' link and to each of the other receivers via 'weak' links. Under the assumption of no channel state information at the transmitters (CSIT), the authors showed how the lack of CSIT can be ameliorated by exploiting the topology of the channel and the multicast nature of the transmissions.
{ "cite_N": [ "@cite_30", "@cite_15" ], "mid": [ "2740920677", "2598021007" ], "abstract": [ "This work explores cache-aided interference management in the absence of channel state information at the transmitters (CSIT), focusing on the setting with K transmitter receiver pairs endowed with caches, where each receiver k is connected to transmitter k via a direct link with normalized capacity 1, and to any other transmitter via a cross link with normalized capacity t ≤ 1. In this setting, we explore how a combination of pre-caching at transmitters and receivers, together with interference enhancement techniques, can a) partially counter the lack of CSIT, and b) render the network self-sufficient, in the sense that the transmitters need not receive additional data after pre-caching. Toward this we present new schemes that blindly harness topology and transmitter-and-receiver caching, to create separate streams, each serving many receivers at a time. Key to the approach here is a combination of rate-splitting, interference enhancement and coded caching.", "We consider the content delivery problem in a fading multi-input single-output channel with cache-aided users. We are interested in the scalability of the equivalent content delivery rate when the number of users, @math , is large. Analytical results show that, using coded caching and wireless multicasting, without channel state information at the transmitter, linear scaling of the content delivery rate with respect to @math can be achieved in some different ways. First, if the multicast transmission spans over @math independent sub-channels, e.g., in quasi-static fading if @math , and in block fading or multi-carrier systems if @math , linear scaling can be obtained, when the product of the number of transmit antennas and the number of sub-channels scales logarithmically with @math . Second, even with a fixed number of antennas, we can achieve the linear scaling with a threshold-based user selection requiring only one-bit feedbacks from the users. When CSIT is available, we propose a mixed strategy that combines spatial multiplexing and multicasting. Numerical results show that, by optimizing the power split between spatial multiplexing and multicasting, we can achieve a significant gain of the content delivery rate with moderate cache size." ] }
1908.04036
2968366816
This work identifies the fundamental limits of cache-aided coded multicasting in the presence of the well-known 'worst-user' bottleneck. This stems from the presence of receiving users with uneven channel capacities, which often forces the rate of transmission of each multicasting message to be reduced to that of the slowest user. This bottleneck, which can be detrimental in general wireless broadcast settings, motivates the analysis of coded caching over a standard Single-Input-Single-Output (SISO) Broadcast Channel (BC) with K cache-aided receivers, each with a generally different channel capacity. For this setting, we design a communication algorithm that is based on superposition coding and capitalizes on the realization that the user with the worst channel may not be the real bottleneck of communication. We then proceed to provide a converse that shows the algorithm to be near optimal, identifying the fundamental limits of this setting within a multiplicative factor of 4. Interestingly, the result reveals that, even if several users are experiencing channels with reduced capacity, the system can achieve the same optimal delivery time that would be achievable if all users enjoyed maximal capacity.
Recently, significant effort has been made toward understanding the behavior of coded caching in the finite Signal-to-Noise Ratio (SNR) regime with realistic (and thus often uneven) channel qualities. In this direction, the work in @cite_23 showed that a single-stream coded caching message beamformed by an appropriate transmit vector can outperform some existing multi-stream coded caching methods in the low-SNR regime, while references @cite_14 @cite_9 (see also @cite_19 ) revealed the importance of jointly considering caching with multicast beamformer design. Moreover, the work in @cite_17 studied the connection between rate and subpacketization in the multi-antenna environment, accounting for the unevenness naturally brought about by fading.
{ "cite_N": [ "@cite_14", "@cite_9", "@cite_19", "@cite_23", "@cite_17" ], "mid": [ "2921187562", "2767375015", "2074863717", "2598021007", "2963664701" ], "abstract": [ "We study downlink beamforming in a single-cell network with a multi-antenna base station (BS) serving cache-enabled users. For a given common rate of the files in the system, we first formulate the minimum transmit power with beamforming at the BS as a non-convex optimization problem. This corresponds to a multiple multicast problem, to which a stationary solution can be efficiently obtained through successive convex approximation (SCA). It is observed that the complexity of the problem grows exponentially with the number of subfiles delivered to each user in each time slot, which itself grows exponentially with the number of users in the system. Therefore, we introduce a low-complexity alternative through time-sharing that limits the number of subfiles that can be received by a user in each time slot. It is shown through numerical simulations that, the reduced-complexity beamforming scheme has minimal performance gap compared to transmitting all the subfiles jointly, and outperforms the state-of-the-art low-complexity scheme at all SNR and rate values with sufficient spatial degrees of freedom, and in the high SNR high rate regime when the number of spatial degrees of freedom is limited.", "A single cell downlink scenario is considered where a multiple-antenna base station delivers contents to multiple cache-enabled user terminals. Using the ideas from multi-server coded caching (CC) scheme developed for wired networks, a joint design of CC and general multicast beamforming is proposed to benefit from spatial multiplexing gain, improved interference management and the global CC gain, simultaneously. Utilizing the multiantenna multicasting opportunities provided by the CC technique, the proposed method is shown to perform well over the entire SNR region, including the low SNR regime, unlike the existing schemes based on zero forcing (ZF). Instead of nulling the interference at users not requiring a specific coded message, general multicast beamforming strategies are employed, optimally balancing the detrimental impact of both noise and inter-stream interference from coded messages transmitted in parallel. The proposed scheme is shown to provide the same degrees-of-freedom at high SNR as the state-of-art methods and, in general, to perform significantly better than several base-line schemes including, the joint ZF and CC, max-min fair multicasting with CC, and basic unicasting with multiuser beamforming.", "Transmit beamforming and receive combining are simple methods for exploiting the significant diversity that is available in multiple-input multiple-output (MIMO) wireless systems. Unfortunately, optimal performance requires either complete channel knowledge or knowledge of the optimal beamforming vector; both are hard to realize. In this article, a quantized maximum signal-to-noise ratio (SNR) beamforming technique is proposed where the receiver only sends the label of the best beamforming vector in a predetermined codebook to the transmitter. By using the distribution of the optimal beamforming vector in independent and identically distributed Rayleigh fading matrix channels, the codebook design problem is solved and related to the problem of Grassmannian line packing. The proposed design criterion is flexible enough to allow for side constraints on the codebook vectors. 
Bounds on the codebook size are derived to guarantee full diversity order. Results on the density of Grassmannian line packings are derived and used to develop bounds on the codebook size given a capacity or SNR loss. Monte Carlo simulations are presented that compare the probability of error for different quantization strategies.", "We consider the content delivery problem in a fading multi-input single-output channel with cache-aided users. We are interested in the scalability of the equivalent content delivery rate when the number of users, @math , is large. Analytical results show that, using coded caching and wireless multicasting, without channel state information at the transmitter, linear scaling of the content delivery rate with respect to @math can be achieved in some different ways. First, if the multicast transmission spans over @math independent sub-channels, e.g., in quasi-static fading if @math , and in block fading or multi-carrier systems if @math , linear scaling can be obtained, when the product of the number of transmit antennas and the number of sub-channels scales logarithmically with @math . Second, even with a fixed number of antennas, we can achieve the linear scaling with a threshold-based user selection requiring only one-bit feedbacks from the users. When CSIT is available, we propose a mixed strategy that combines spatial multiplexing and multicasting. Numerical results show that, by optimizing the power split between spatial multiplexing and multicasting, we can achieve a significant gain of the content delivery rate with moderate cache size.", "We investigate the potentials of applying the coded caching paradigm in wireless networks. In order to do this, we investigate physical layer schemes for downlink transmission from a multiantenna transmitter to several cache-enabled users. As the baseline scheme, we consider employing coded caching on the top of max–min fair multicasting, which is shown to be far from optimal at high-SNR values. Our first proposed scheme, which is near-optimal in terms of DoF, is the natural extension of multiserver coded caching to Gaussian channels. As we demonstrate, its finite SNR performance is not satisfactory, and thus we propose a new scheme in which the linear combination of messages is implemented in the finite field domain, and the one-shot precoding for the MISO downlink is implemented in the complex field. While this modification results in the same near-optimal DoF performance, we show that this leads to significant performance improvement at finite SNR. Finally, we extend our scheme to the previously considered cache-enabled interference channels, and moreover we provide an ergodic rate analysis of our scheme. Our results convey the important message that although directly translating schemes from the network coding ideas to wireless networks may work well at high-SNR values, careful modifications need to be considered for acceptable finite SNR performance." ] }
1908.04036
2968366816
This work identifies the fundamental limits of cache-aided coded multicasting in the presence of the well-known 'worst-user' bottleneck. This stems from the presence of receiving users with uneven channel capacities, which often forces the rate of transmission of each multicasting message to be reduced to that of the slowest user. This bottleneck, which can be detrimental in general wireless broadcast settings, motivates the analysis of coded caching over a standard Single-Input-Single-Output (SISO) Broadcast Channel (BC) with K cache-aided receivers, each with a generally different channel capacity. For this setting, we design a communication algorithm that is based on superposition coding that capitalizes on the realization that the user with the worst channel may not be the real bottleneck of communication. We then proceed to provide a converse that shows the algorithm to be near optimal, identifying the fundamental limits of this setting within a multiplicative factor of 4. Interestingly, the result reveals that, even if several users are experiencing channels with reduced capacity, the system can achieve the same optimal delivery time that would be achievable if all users enjoyed maximal capacity.
Our work is in the spirit of all the above papers, and it can be seen specifically as an extension of @cite_20 . This reference considered a specific binary topological case, for which it proposed a two-level superposition-based transmission scheme to alleviate the worst-user bottleneck.
{ "cite_N": [ "@cite_20" ], "mid": [ "1965299092" ], "abstract": [ "Selective families, a weaker variant of superimposed codes [KS64, F92, 197, CR96], have been recently used to design Deterministic Distributed Broadcast (DDB) protocols for unknown radio networks (a radio network is said to be unknown when the nodes know nothing about the network but their own label) [CGGPR00, CGOR00]. We first provide a general almost tight lower bound on the size of selective families. Then, by reverting the selective families - DDB protocols connection, we exploit our lower bound to construct a family of “hard” radio networks (i.e. directed graphs). These networks yield an O(n log D) lower bound on the completion time of DDB protocols that is superlinear (in the size n of the network) even for very small maximum eccentricity D of the network, while all the previous lower bounds (e.g. O(D log n) [CGGPR00]) are superlinear only when D is almost linear. On the other hand, the previous upper bounds are all superlinear in n independently of the eccentricity D and the maximum in-degree d of the network. We introduce a broadcast technique that exploits selective families in a new way. Then, by combining selective families of almost optimal size with our new broadcast technique, we obtain an O(Dd log3 n) upper bound that we prove to be almost optimal when d = O(n D). This exponentially improves over the best known upper bound [CGR00) when D, d = O(polylogn). Furthermore, by comparing our deterministic upper bound with the best known randomized one [BGI87] we obtain a new, rather surprising insight into the real gap between deterministic and randomized protocols. It turns out that this gap is exponential (as discovered in [BGI87]), but only when the network has large maximum in-degree (i.e. d = O(na), for some constant a > O). We then look at the multibroadcast problem on unknown radio networks. A similar connection to that between selective families and (single) broadcast also holds between superimposed codes and multibroadcast. We in fact combine a variant of our (single) broadcast technique with superimposed codes of almost optimal size available in literature [EFF85, HS87, I97, CHI99]. This yields a multibroadcast protocol having completion time O(Dd2 log3 n). Finally, in order to determine the limits of our multibroadcast technique, we generalize (and improve) the best known lower bound [CR96] on the size of superimposed codes." ] }
1908.03864
2968252280
We consider an information theoretic approach to address the problem of identifying fake digital images. We propose an innovative method to formulate the issue of localizing manipulated regions in an image as a deep representation learning problem using the Information Bottleneck (IB), which has recently gained popularity as a framework for interpreting deep neural networks. Tampered images pose a serious predicament since digitized media is a ubiquitous part of our lives. These are facilitated by the easy availability of image editing software and aggravated by recent advances in deep generative models such as GANs. We propose InfoPrint, a computationally efficient solution to the IB formulation using approximate variational inference and compare it to a numerical solution that is computationally expensive. Testing on a number of standard datasets, we demonstrate that InfoPrint outperforms the state-of-the-art and the numerical solution. Additionally, it also has the ability to detect alterations made by inpainting GANs.
Information theory is a powerful framework that is being increasingly adopted to improve various aspects of deep machine learning, e.g., representation learning @cite_36 , generalizability & regularization @cite_38 , and the interpretation of how deep neural networks function @cite_27 @cite_34 . Mutual information plays a key role in many of these methods. InfoGAN @cite_13 showed that maximizing the mutual information between the latent code and the generator's output improved the representations learned by a generative adversarial network (GAN) @cite_24 , allowing them to be more disentangled and interpretable. Since mutual information is hard to compute, InfoGAN maximized a variational lower bound @cite_0 . A similar information maximization idea was explored in @cite_36 to improve unsupervised representation learning using the numerical estimator proposed in @cite_31 .
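For concreteness, the variational bound mentioned above is commonly written in the following standard InfoGAN-style form (the symbols used here are illustrative and the exact notation in the cited works may differ):

$$ I\bigl(c;\, G(z,c)\bigr) \;\ge\; \mathbb{E}_{c \sim P(c),\, x \sim G(z,c)}\bigl[\log Q(c \mid x)\bigr] + H(c) \;=\; L_I(G, Q), $$

where $Q(c \mid x)$ is an auxiliary variational distribution approximating the true posterior $P(c \mid x)$ and $H(c)$ is the entropy of the latent code. Maximizing $L_I$ jointly over the generator $G$ and the auxiliary network $Q$ tightens the bound and encourages the generated samples to remain informative about $c$.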
{ "cite_N": [ "@cite_38", "@cite_36", "@cite_24", "@cite_0", "@cite_27", "@cite_31", "@cite_34", "@cite_13" ], "mid": [ "2963226019", "2787273235", "2164700406", "2962730405", "2767724106", "2434741482", "2201744460", "2111141597" ], "abstract": [ "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound of the mutual information objective that can be optimized efficiently. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing supervised methods. For an up-to-date version of this paper, please see https: arxiv.org abs 1606.03657.", "Advances in unsupervised learning enable reconstruction and generation of samples from complex distributions, but this success is marred by the inscrutability of the representations learned. We propose an information-theoretic approach to characterizing disentanglement and dependence in representation learning using multivariate mutual information, also called total correlation. The principle of total Cor-relation Ex-planation (CorEx) has motivated successful unsupervised learning applications across a variety of domains, but under some restrictive assumptions. Here we relax those restrictions by introducing a flexible variational lower bound to CorEx. Surprisingly, we find that this lower bound is equivalent to the one in variational autoencoders (VAE) under certain conditions. This information-theoretic view of VAE deepens our understanding of hierarchical VAE and motivates a new algorithm, AnchorVAE, that makes latent codes more interpretable through information maximization and enables generation of richer and more realistic samples.", "Building intelligent systems that are capable of extracting high-level representations from high-dimensional sensory data lies at the core of solving many AI related tasks, including object recognition, speech perception, and language understanding. Theoretical and biological arguments strongly suggest that building such systems requires models with deep architectures that involve many layers of nonlinear processing. The aim of the thesis is to demonstrate that deep generative models that contain many layers of latent variables and millions of parameters can be learned efficiently, and that the learned high-level feature representations can be successfully applied in a wide spectrum of application domains, including visual object recognition, information retrieval, and classification and regression tasks. In addition, similar methods can be used for nonlinear dimensionality reduction. The first part of the thesis focuses on analysis and applications of probabilistic generative models called Deep Belief Networks. We show that these deep hierarchical models can learn useful feature representations from a large supply of unlabeled sensory inputs. 
The learned high-level representations capture a lot of structure in the input data, which is useful for subsequent problem-specific tasks, such as classification, regression or information retrieval, even though these tasks are unknown when the generative model is being trained. In the second part of the thesis, we introduce a new learning algorithm for a different type of hierarchical probabilistic model, which we call a Deep Boltzmann Machine. Like Deep Belief Networks, Deep Boltzmann Machines have the potential of learning internal representations that become increasingly complex at higher layers, which is a promising way of solving object and speech recognition problems. Unlike Deep Belief Networks and many existing models with deep architectures, the approximate inference procedure, in addition to a fast bottom-up pass, can incorporate top-down feedback. This allows Deep Boltzmann Machines to better propagate uncertainty about ambiguous inputs.", "The mutual information is a core statistical quantity that has applications in all areas of machine learning, whether this is in training of density models over multiple data modalities, in maximising the efficiency of noisy transmission channels, or when learning behaviour policies for exploration by artificial agents. Most learning algorithms that involve optimisation of the mutual information rely on the Blahut-Arimoto algorithm — an enumerative algorithm with exponential complexity that is not suitable for modern machine learning applications. This paper provides a new approach for scalable optimisation of the mutual information by merging techniques from variational inference and deep learning. We develop our approach by focusing on the problem of intrinsically-motivated learning, where the mutual information forms the definition of a well-known internal drive known as empowerment. Using a variational lower bound on the mutual information, combined with convolutional networks for handling visual input streams, we develop a stochastic optimisation algorithm that allows for scalable information maximisation and empowerment-based reasoning directly from pixels to actions.", "The Web has accumulated a rich source of information, such as text, image, rating, etc, which represent different aspects of user preferences. However, the heterogeneous nature of this information makes it difficult for recommender systems to leverage in a unified framework to boost the performance. Recently, the rapid development of representation learning techniques provides an approach to this problem. By translating the various information sources into a unified representation space, it becomes possible to integrate heterogeneous information for informed recommendation. In this work, we propose a Joint Representation Learning (JRL) framework for top-N recommendation. In this framework, each type of information source (review text, product image, numerical rating, etc) is adopted to learn the corresponding user and item representations based on available (deep) representation learning architectures. Representations from different sources are integrated with an extra layer to obtain the joint representations for users and items. In the end, both the per-source and the joint representations are trained as a whole using pair-wise learning to rank for top-N recommendation. 
We analyze how information propagates among different information sources in a gradient-descent learning paradigm, based on which we further propose an extendable version of the JRL framework (eJRL), which is rigorously extendable to new information sources to avoid model re-training in practice. By representing users and items into embeddings offline, and using a simple vector multiplication for ranking score calculation online, our framework also has the advantage of fast online prediction compared with other deep learning approaches to recommendation that learn a complex prediction network for online calculation.", "This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.", "Abstract This paper proposes a unified approach to learning from constraints, which integrates the ability of classical machine learning techniques to learn from continuous feature-based representations with the ability of reasoning using higher-level semantic knowledge typical of Statistical Relational Learning. Learning tasks are modeled in the general framework of multi-objective optimization, where a set of constraints must be satisfied in addition to the traditional smoothness regularization term. The constraints translate First Order Logic formulas, which can express learning-from-example supervisions and general prior knowledge about the environment by using fuzzy logic. By enforcing the constraints also on the test set, this paper presents a natural extension of the framework to perform collective classification. Interestingly, the theory holds for both the case of data represented by feature vectors and the case of data simply expressed by pattern identifiers, thus extending classic kernel machines and graph regularization, respectively. This paper also proposes a probabilistic interpretation of the proposed learning scheme, and highlights intriguing connections with probabilistic approaches like Markov Logic Networks. Experimental results on classic benchmarks provide clear evidence of the remarkable improvements that are obtained with respect to related approaches.", "The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered around the concept of constraint, representing the interactions with the environment, and the parsimony principle. 
The classical regularization framework of kernel machines is naturally extended to the case in which the agents interact with a richer environment, where abstract granules of knowledge, compactly described by different linguistic formalisms, can be translated into the unified notion of constraint for defining the hypothesis set. Constrained variational calculus is exploited to derive general representation theorems that provide a description of the optimal body of the agent i.e., the functional structure of the optimal solution to the learning problem, which is the basis for devising new learning algorithms. We show that regardless of the kind of constraints, the optimal body of the agent is a support constraint machine SCM based on representer theorems that extend classical results for kernel machines and provide new representations. In a sense, the expressiveness of constraints yields a semantic-based regularization theory, which strongly restricts the hypothesis set of classical regularization. Some guidelines to unify continuous and discrete computational mechanisms are given so as to accommodate in the same framework various kinds of stimuli, for example, supervised examples and logic predicates. The proposed view of learning from constraints incorporates classical learning from examples and extends naturally to the case in which the examples are subsets of the input space, which is related to learning propositional logic clauses." ] }
1908.03978
2968137284
An accurate pedestrian counting algorithm is critical to eliminating insecurity in congested public scenes. However, counting pedestrians in crowded scenes often suffers from severe perspective distortion. In this paper, building on the straight-line double-region pedestrian counting method, we propose a dynamic region division algorithm to keep the counted objects complete. Utilizing the object bounding boxes obtained by YoloV3 and the expectation division line of the scene, the boundary between the nearby region and the distant one is generated under the premise of retaining the whole head. Furthermore, appropriate learning models are applied to count pedestrians in each obtained region. In the distant region, a novel inception dilated convolutional neural network is proposed to solve the problem of choosing the dilation rate. In the nearby region, YoloV3 is used to detect pedestrians at multiple scales. Accordingly, the total number of pedestrians in each frame is obtained by fusing the results in the nearby and distant regions. A typical subway pedestrian video dataset is chosen for the experiments in this paper. The results demonstrate that the proposed algorithm is superior to existing machine-learning-based methods in overall performance.
Traditional methods used the histogram of oriented gradients (HOG) as the pedestrian-level feature and the support vector machine as the classifier to detect pedestrians in specific scenes @cite_10 , but these hand-crafted features suffered severely from illumination and scale variance. The region-based convolutional neural networks (R-CNNs) @cite_17 used features extracted from a CNN and improved detection performance. This approach can be summarized as two-stage processing: proposal and classification, but it is hard to accelerate. YOLO @cite_3 provided a new one-stage solution for detection and significantly improved the speed. It recast classification as regression over sub-grids and abandoned the proposal step. Following YOLO, some methods, such as SSD @cite_7 and YOLOV3 @cite_6 , focused on supporting multi-scale object detection. Although detection methods have achieved tremendous performance and can be used in sparse crowd scenes, it is hard for them to replace density-based methods in crowded scenes.
{ "cite_N": [ "@cite_7", "@cite_6", "@cite_3", "@cite_10", "@cite_17" ], "mid": [ "2963315052", "2953226057", "2410641892", "2291533986", "2258484932" ], "abstract": [ "In this paper, we consider the problem of pedestrian detection in natural scenes. Intuitively, instances of pedestrians with different spatial scales may exhibit dramatically different features. Thus, large variance in instance scales, which results in undesirable large intracategory variance in features, may severely hurt the performance of modern object instance detection methods. We argue that this issue can be substantially alleviated by the divide-and-conquer philosophy. Taking pedestrian detection as an example, we illustrate how we can leverage this philosophy to develop a Scale-Aware Fast R-CNN (SAF R-CNN) framework. The model introduces multiple built-in subnetworks which detect pedestrians with scales from disjoint ranges. Outputs from all of the subnetworks are then adaptively combined to generate the final detection results that are shown to be robust to large variance in instance scales, via a gate function defined over the sizes of object proposals. Extensive evaluations on several challenging pedestrian detection datasets well demonstrate the effectiveness of the proposed SAF R-CNN. Particularly, our method achieves state-of-the-art performance on Caltech [P. Dollar, C. Wojek, B. Schiele, and P. Perona, “Pedestrian detection: An evaluation of the state of the art,” IEEE Trans. Pattern Anal. Mach. Intell. , vol. 34, no. 4, pp. 743–761, Apr. 2012], and obtains competitive results on INRIA [N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. , 2005, pp. 886–893], ETH [A. Ess, B. Leibe, and L. V. Gool, “Depth and appearance for mobile scene analysis,” in Proc. Int. Conf. Comput. Vis ., 2007, pp. 1–8], and KITTI [A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? The KITTI vision benchmark suite,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit ., 2012, pp. 3354–3361].", "Most of the recent successful methods in accurate object detection and localization used some variants of R-CNN style two stage Convolutional Neural Networks (CNN) where plausible regions were proposed in the first stage then followed by a second stage for decision refinement. Despite the simplicity of training and the efficiency in deployment, the single stage detection methods have not been as competitive when evaluated in benchmarks consider mAP for high IoU thresholds. In this paper, we proposed a novel single stage end-to-end trainable object detection network to overcome this limitation. We achieved this by introducing Recurrent Rolling Convolution (RRC) architecture over multi-scale feature maps to construct object classifiers and bounding box regressors which are \"deep in context\". We evaluated our method in the challenging KITTI dataset which measures methods under IoU threshold of 0.7. We showed that with RRC, a single reduced VGG-16 based model already significantly outperformed all the previously published results. At the time this paper was written our models ranked the first in KITTI car detection (the hard level), the first in cyclist detection and the second in pedestrian detection. These results were not reached by the previous single stage methods. The code is publicly available.", "Convolutional neural networks (CNNs) have shown great performance as general feature representations for object recognition applications. 
However, for multi-label images that contain multiple objects from different categories, scales and locations, global CNN features are not optimal. In this paper, we incorporate local information to enhance the feature discriminative power. In particular, we first extract object proposals from each image. With each image treated as a bag and object proposals extracted from it treated as instances, we transform the multi-label recognition problem into a multi-class multi-instance learning problem. Then, in addition to extracting the typical CNN feature representation from each proposal, we propose to make use of ground-truth bounding box annotations (strong labels) to add another level of local information by using nearest-neighbor relationships of local regions to form a multi-view pipeline. The proposed multi-view multiinstance framework utilizes both weak and strong labels effectively, and more importantly it has the generalization ability to even boost the performance of unseen categories by partial strong labels from other categories. Our framework is extensively compared with state-of-the-art handcrafted feature based methods and CNN based methods on two multi-label benchmark datasets. The experimental results validate the discriminative power and the generalization ability of the proposed framework. With strong labels, our framework is able to achieve state-of-the-art results in both datasets.", "Pedestrian detection based on the combination of convolutional neural network (CNN) and traditional handcrafted features (i.e., HOG+LUV) has achieved great success. In general, HOG+LUV are used to generate the candidate proposals and then CNN classifies these proposals. Despite its success, there is still room for improvement. For example, CNN classifies these proposals by the fully connected layer features, while proposal scores and the features in the inner-layers of CNN are ignored. In this paper, we propose a unifying framework called multi-layer channel features (MCF) to overcome the drawback. It first integrates HOG+LUV with each layer of CNN into a multi-layer image channels. Based on the multi-layer image channels, a multi-stage cascade AdaBoost is then learned. The weak classifiers in each stage of the multi-stage cascade are learned from the image channels of corresponding layer. Experiments on Caltech data set, INRIA data set, ETH data set, TUD-Brussels data set, and KITTI data set are conducted. With more abundant features, an MCF achieves the state of the art on Caltech pedestrian data set (i.e., 10.40 miss rate). Using new and accurate annotations, an MCF achieves 7.98 miss rate. As many non-pedestrian detection windows can be quickly rejected by the first few stages, it accelerates detection speed by 1.43 times. By eliminating the highly overlapped detection windows with lower scores after the first stage, it is 4.07 times faster than negligible performance loss.", "Convolutional neural network (CNN) has achieved the state-of-the-art performance in many different visual tasks. Learned from a large-scale training data set, CNN features are much more discriminative and accurate than the handcrafted features. Moreover, CNN features are also transferable among different domains. On the other hand, traditional dictionary-based features (such as BoW and spatial pyramid matching) contain much more local discriminative and structural information, which is implicitly embedded in the images. 
To further improve the performance, in this paper, we propose to combine CNN with dictionary-based models for scene recognition and visual domain adaptation (DA). Specifically, based on the well-tuned CNN models (e.g., AlexNet and VGG Net), two dictionary-based representations are further constructed, namely, mid-level local representation (MLR) and convolutional Fisher vector (CFV) representation. In MLR, an efficient two-stage clustering method, i.e., weighted spatial and feature space spectral clustering on the parts of a single image followed by clustering all representative parts of all images, is used to generate a class-mixture or a class-specific part dictionary. After that, the part dictionary is used to operate with the multiscale image inputs for generating mid-level representation. In CFV, a multiscale and scale-proportional Gaussian mixture model training strategy is utilized to generate Fisher vectors based on the last convolutional layer of CNN. By integrating the complementary information of MLR, CFV, and the CNN features of the fully connected layer, the state-of-the-art performance can be achieved on scene recognition and DA problems. An interested finding is that our proposed hybrid representation (from VGG net trained on ImageNet) is also complementary to GoogLeNet and or VGG-11 (trained on Place205) greatly." ] }
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
In the modern world, the high density of wireless networks and the strong interference between them make centralized coordination of the networks more and more popular. Such coordination allows optimizing network performance and thus increasing total efficiency. While today's wireless networks are mainly optimized to provide high throughput, the growing OPEX of network operators, including the payments for energy consumption, may shift the paradigm in the near future. Because of the very high number of base stations and access points, energy consumption becomes an essential issue for wireless networks. To improve energy efficiency, various approaches can be used, including energy harvesting, improving hardware, network planning, and resource allocation @cite_5 .
{ "cite_N": [ "@cite_5" ], "mid": [ "2093917343" ], "abstract": [ "Recently, energy efficiency in wireless networks has become an important objective. Aside from the growing proliferation of smartphones and other high-end devices in conventional human-to-human (H2H) communication, the introduction of machine-to-machine (M2M) communication or machine-type communication into cellular networks is another contributing factor. In this paper, we investigate quality-of-service (QoS)-driven energy-efficient design for the uplink of long term evolution (LTE) networks in M2M H2H co-existence scenarios. We formulate the resource allocation problem as a maximization of effective capacity-based bits-per-joule capacity under statistical QoS provisioning. The specific constraints of single carrier frequency division multiple access (uplink air interface in LTE networks) pertaining to power and resource block allocation not only complicate the resource allocation problem, but also render the standard Lagrangian duality techniques inapplicable. We overcome the analytical and computational intractability by first transforming the original problem into a mixed integer programming (MIP) problem and then formulating its dual problem using the canonical duality theory. The proposed energy-efficient design is compared with the spectral efficient design along with round robin (RR) and best channel quality indicator (BCQI) algorithms. Numerical results, which are obtained using the invasive weed optimization (IWO) algorithm, show that the proposed energy-efficient uplink design not only outperforms other algorithms in terms of energy efficiency while satisfying the QoS requirements, but also performs closer to the optimal design." ] }
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
In @cite_11 , energy efficiency is defined as the amount of data delivered through a link divided by the consumed energy. The authors of that paper consider a terminal with limited energy and compare the energy efficiency of various Automatic Repeat reQuest (ARQ) protocols.
{ "cite_N": [ "@cite_11" ], "mid": [ "2963302028" ], "abstract": [ "High-fidelity, real-time interactive applications are envisioned with the emergence of the Internet of Things and tactile Internet by means of ultra-reliable low-latency communications (URLLC). Exploiting time diversity for fulfilling the URLLC requirements in an energy efficient manner is a challenging task due to the nontrivial interplay among packet size, retransmission rounds and delay, and transmit power. In this paper, we study the fundamental energy-latency tradeoff in URLLC systems employing incremental redundancy (IR) hybrid automatic repeat request (HARQ). We cast the average energy minimization problem with a finite blocklength (latency) constraint and feedback delay, which is non-convex. We propose a dynamic programming algorithm for energy efficient IR-HARQ optimization in terms of number of retransmissions, blocklength, and power per round. Numerical results show that our IR-HARQ approach could provide around 25 energy saving compared with one-shot transmission (no HARQ)." ] }
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
When optimizing energy efficiency, it is essential to add the circuit power consumption @math to the transmit power. Without taking this component into account, the maximum energy efficiency corresponds to the lowest transmission rate @cite_1 .
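To make this point concrete, consider an illustrative single-link model (a minimal sketch with assumed notation, not the exact formulation of the cited work): with transmit power $p$, channel gain $g$, noise power $N_0$, and circuit power $P_c$, the energy efficiency can be written as

$$ \mathrm{EE}(p) \;=\; \frac{\log_2\!\left(1 + \frac{g\,p}{N_0}\right)}{p + P_c}. $$

If $P_c = 0$, the ratio $\log_2(1 + g p / N_0)/p$ is strictly decreasing in $p$ (by concavity of the logarithm), so the supremum of the energy efficiency is approached as $p \to 0$, i.e., at the lowest transmission rate; with $P_c > 0$, the maximizer moves to a strictly positive power and a non-trivial rate.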
{ "cite_N": [ "@cite_1" ], "mid": [ "2152481521" ], "abstract": [ "Wireless systems where the nodes operate on batteries so that energy consumption must be minimized while satisfying given throughput and delay requirements are considered. In this context, the best modulation strategy to minimize the total energy consumption required to send a given number of bits is analyzed. The total energy consumption includes both the transmission energy and the circuit energy consumption. For uncoded systems, by optimizing the transmission time and the modulation parameters, it is shown that up to 80 energy savings is achievable over nonoptimized systems. For coded systems, it is shown that the benefit of coding varies with the transmission distance and the underlying modulation schemes." ] }
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
The papers mentioned above consider only a single wireless link. The definition of energy efficiency has to be extended for systems with multiple transmitters and receivers. In @cite_10 , it is done in the following way: where @math is the overall utility function, @math is the link utility function of link @math , @math is the number of links, @math is the rate of link @math , @math is the average transmit power at link @math . The major disadvantage of such an approach is that the utility function represents the sum of the energy efficiencies of individual links, while the network operator is interested in the total network energy consumption and energy efficiency, which is different.
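A generic per-link (Sum-EE) utility consistent with this description — written here with assumed symbols purely for illustration, and not necessarily the exact expression of @cite_10 — is

$$ U(\mathbf{p}) \;=\; \sum_{k=1}^{K} u_k\!\left( \frac{r_k(\mathbf{p})}{p_k} \right), $$

where $K$ is the number of links, $r_k(\mathbf{p})$ is the rate of link $k$ under the power allocation $\mathbf{p}$, $p_k$ is the average transmit power of link $k$, and $u_k(\cdot)$ is the per-link utility function. The drawback noted above follows directly: a sum of per-link ratios can be large even when the network-wide ratio of total rate to total consumed power is poor.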
{ "cite_N": [ "@cite_10" ], "mid": [ "2093917343" ], "abstract": [ "Recently, energy efficiency in wireless networks has become an important objective. Aside from the growing proliferation of smartphones and other high-end devices in conventional human-to-human (H2H) communication, the introduction of machine-to-machine (M2M) communication or machine-type communication into cellular networks is another contributing factor. In this paper, we investigate quality-of-service (QoS)-driven energy-efficient design for the uplink of long term evolution (LTE) networks in M2M H2H co-existence scenarios. We formulate the resource allocation problem as a maximization of effective capacity-based bits-per-joule capacity under statistical QoS provisioning. The specific constraints of single carrier frequency division multiple access (uplink air interface in LTE networks) pertaining to power and resource block allocation not only complicate the resource allocation problem, but also render the standard Lagrangian duality techniques inapplicable. We overcome the analytical and computational intractability by first transforming the original problem into a mixed integer programming (MIP) problem and then formulating its dual problem using the canonical duality theory. The proposed energy-efficient design is compared with the spectral efficient design along with round robin (RR) and best channel quality indicator (BCQI) algorithms. Numerical results, which are obtained using the invasive weed optimization (IWO) algorithm, show that the proposed energy-efficient uplink design not only outperforms other algorithms in terms of energy efficiency while satisfying the QoS requirements, but also performs closer to the optimal design." ] }
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
In @cite_3 , the authors consider some other utility functions. In addition to the sum of energy efficiencies, an example of which is described above, they consider the product of energy efficiencies and the so-called Global Energy Efficiency (GEE). Global energy efficiency is defined as the sum of rates divided by the total power consumption of all devices. Fast algorithms are proposed to solve the Sum-EE and Prod-EE maximization problems. For the GEE maximization problem, the optimal solution is found only when interference is negligible compared with the constant background noise.
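In a similar illustrative notation (assumed symbols, not necessarily those of @cite_3 ), the GEE objective takes the form

$$ \mathrm{GEE}(\mathbf{p}) \;=\; \frac{\sum_{k=1}^{K} r_k(\mathbf{p})}{\sum_{k=1}^{K} \bigl(p_k + P_{c,k}\bigr)}, $$

i.e., a single ratio of the network sum rate to the total consumed power (transmit plus circuit), rather than a sum or product of per-link ratios. This single-ratio structure is what makes GEE maximization hard in the presence of interference, since the sum rate in the numerator is then generally a non-concave function of the transmit powers.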
{ "cite_N": [ "@cite_3" ], "mid": [ "2589615107" ], "abstract": [ "The characterization of the global maximum of energy efficiency (EE) problems in wireless networks is a challenging problem due to their nonconvex nature in interference channels. The aim of this paper is to develop a new and general framework to achieve globally optimal solutions. First, the hidden monotonic structure of the most common EE maximization problems is exploited jointly with fractional programming theory to obtain globally optimal solutions with exponential complexity in the number of network links. To overcome the high complexity, we also propose a framework to compute suboptimal power control strategies with affordable complexity. This is achieved by merging fractional programming and sequential optimization. The proposed monotonic framework is used to shed light on the ultimate performance of wireless networks in terms of EE and also to benchmark the performance of the lower-complexity framework based on sequential programming. Numerical evidence is provided to show that the sequential fractional programming framework achieves global optimality in several practical communication scenarios." ] }
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
The GEE maximization problem can be solved with existing mathematical methods based on the so-called polyblock algorithm @cite_7 . However, this approach is known to converge very slowly when one or more variables are close to zero. When modeling real deployments, we often observed such cases. That is why we use another approach, based on the branch-and-bound method, that avoids this slow convergence @cite_2 . Although it has been applied to solve the GEE problem in LTE networks @cite_6 , its applicability to Wi-Fi networks is not straightforward.
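The sketch below illustrates the generic structure of such a branch-and-bound maximizer. It is a minimal Python skeleton with a toy upper-bounding rule, not the GEE-specific algorithm of @cite_2 or @cite_6 ; the names `branch_and_bound_max`, `f`, and `ub` are illustrative, and in the actual power-allocation problem `ub` would typically exploit monotonicity of the rates in the transmit powers to bound the objective over each box.

```python
import heapq
import math


def branch_and_bound_max(f, ub, lo, hi, tol=1e-3, max_iter=10_000):
    """Maximize f over the box [lo, hi] (lists of per-dimension limits).

    ub(a, b) must return an upper bound on f over the sub-box [a, b];
    the tighter the bound, the faster the search converges.  This is a
    generic skeleton, not the GEE-specific algorithm of the cited works.
    """
    def midpoint(a, b):
        return [(x + y) / 2.0 for x, y in zip(a, b)]

    best_x = midpoint(lo, hi)
    best_val = f(best_x)
    # Max-heap keyed on the upper bound (heapq is a min-heap, so negate).
    heap = [(-ub(lo, hi), lo, hi)]
    for _ in range(max_iter):
        if not heap:
            break
        neg_bound, a, b = heapq.heappop(heap)
        if -neg_bound <= best_val + tol:
            break  # no remaining box can improve the incumbent by more than tol
        # Branch: split the box along its longest edge.
        i = max(range(len(a)), key=lambda k: b[k] - a[k])
        mid = (a[i] + b[i]) / 2.0
        children = (
            (a, b[:i] + [mid] + b[i + 1:]),
            (a[:i] + [mid] + a[i + 1:], b),
        )
        for sub_a, sub_b in children:
            x = midpoint(sub_a, sub_b)
            val = f(x)
            if val > best_val:
                best_val, best_x = val, x  # update the incumbent solution
            bound = ub(sub_a, sub_b)
            if bound > best_val + tol:
                heapq.heappush(heap, (-bound, sub_a, sub_b))
    return best_val, best_x


if __name__ == "__main__":
    # Toy single-link energy-efficiency ratio: log2(1 + 4 p) / (p + 0.5).
    f = lambda x: math.log2(1.0 + 4.0 * x[0]) / (x[0] + 0.5)
    # Valid (loose) bound: the numerator grows with p, the denominator too.
    ub = lambda a, b: math.log2(1.0 + 4.0 * b[0]) / (a[0] + 0.5)
    print(branch_and_bound_max(f, ub, [0.0], [5.0]))
```

Because the search keeps whole boxes rather than shrinking a polyblock vertex set, its progress does not degrade when some power variables approach zero, which is the failure mode of the polyblock approach mentioned above.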
{ "cite_N": [ "@cite_6", "@cite_7", "@cite_2" ], "mid": [ "1539042193", "2134058948", "1970830346" ], "abstract": [ "Carrier sense multiple access (CSMA), which resolves contentions over wireless networks in a fully distributed fashion, has recently gained a lot of attentions since it has been proved that appropriate control of CSMA parameters guarantees optimality in terms of stability (i.e., scheduling) and system-wide utility (i.e., scheduling and congestion control). Most CSMA-based algorithms rely on the popular Markov chain Monte Carlo technique, which enables one to find optimal CSMA parameters through iterative loops of simulation-and-update. However, such a simulation-based approach often becomes a major cause of exponentially slow convergence, being poorly adaptive to flow topology changes. In this paper, we develop distributed iterative algorithms which produce approximate solutions with convergence in polynomial time for both stability and utility maximization problems. In particular, for the stability problem, the proposed distributed algorithm requires, somewhat surprisingly, only one iteration among links. Our approach is motivated by the Bethe approximation (introduced by Yedidia, Freeman, and Weiss) allowing us to express approximate solutions via a certain nonlinear system with polynomial size. Our polynomial convergence guarantee comes from directly solving the nonlinear system in a distributed manner, rather than multiple simulation-and-update loops in existing algorithms. We provide numerical results to show that the algorithm produces highly accurate solutions and converges much faster than the prior ones.", "In a heterogeneous wireless cellular network, each user may be covered by multiple access points such as macro pico relay femto base stations (BS). An effective approach to maximize the sum utility (e.g., system throughput) in such a network is to jointly optimize users' linear procoders as well as their BS associations. In this paper, we first show that this joint optimization problem is NP-hard and thus is difficult to solve to global optimality. To find a locally optimal solution, we formulate the problem as a noncooperative game in which the users and the BSs both act as players. We introduce a set of new utility functions for the players and show that every Nash equilibrium (NE) of the resulting game is a stationary solution of the original sum utility maximization problem. Moreover, we develop a best-response type algorithm that allows the players to distributedly reach a NE of the game. Simulation results show that the proposed distributed algorithm can effectively relieve local BS congestion and simultaneously achieve high throughput and load balancing in a heterogeneous network.", "Consider the multiple-input multiple-output (MIMO) interfering broadcast channel whereby multiple base stations in a cellular network simultaneously transmit signals to a group of users in their own cells while causing interference to each other. The basic problem is to design linear beamformers that can maximize the system throughput. In this paper, we propose a linear transceiver design algorithm for weighted sum-rate maximization that is based on iterative minimization of weighted mean-square error (MSE). The proposed algorithm only needs local channel knowledge and converges to a stationary point of the weighted sum-rate maximization problem. Furthermore, the algorithm and its convergence can be extended to a general class of sum-utility maximization problem. 
The effectiveness of the proposed algorithm is validated by numerical experiments." ] }
1908.03803
2966887519
During the last decade, the number of devices connected to the Internet by Wi-Fi has grown significantly. A high density of both the client devices and the hot spots posed new challenges related to providing the desired quality of service in the current and emerging scenarios. To cope with the negative effects caused by network densification, modern Wi-Fi is becoming more and more centralized. To improve network efficiency, today many new Wi-Fi deployments are under control of management systems that optimize network parameters in a centralized manner. In the paper, for such a cloud management system, we develop an algorithm which aims at maximizing energy efficiency and also keeps fairness among clients. For that, we design an objective function and solve an optimization problem using the branch and bound approach. To evaluate the efficiency of the developed solution, we implement it in the NS-3 simulator and compare with existing solutions and legacy behavior.
Wi-Fi networks impose additional restrictions on solutions of the described problem. Specifically, since Wi-Fi implements CSMA/CA, regulatory bodies put limits on the sensitivity threshold. An example of a solution to the GEE problem is shown in paper @cite_0 , where an algorithm based on the branch-and-bound technique was proposed to allocate power in Wi-Fi networks dynamically. Even with a constant traffic load, such an algorithm dynamically varies the transmit power and thus obtains higher efficiency. In this paper, we generalize the GEE metric to take both power consumption and fairness into account and develop a global optimization algorithm for green Wi-Fi networks.
{ "cite_N": [ "@cite_0" ], "mid": [ "2292788438" ], "abstract": [ "In this paper, we propose a distributed multi-hop interference avoidance algorithm, namely, IAA to avoid co-channel interference inside a wireless body area network (WBAN). Our proposal adopts carrier sense multiple access with collision avoidance (CSMA CA) between sources and relays and a flexible time division multiple access (FTDMA) between relays and coordinator. The proposed scheme enables low interfering nodes to transmit their messages using base channel. Depending on suitable situations, high interfering nodes double their contention windows (CW) and probably use switched orthogonal channel. Simulation results show that proposed scheme has far better minimum SINR (12dB improvement) and longer energy lifetime than other schemes (power control and opportunistic relaying). Additionally, we validate our proposal in a theoretical analysis and also propose a probabilistic approach to prove the outage probability can be effectively reduced to the minimal." ] }
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
There are different types of materials used in the manufacturing of optical sensors: various polymers including silicone, polyurethane, and thermoplastic elastomers; POFs; and hydrogels. Liquid silicone rubber compounds (e.g. Smooth-on Sorta Clear 18 and Techsil RTV27905) are widely used in injection molding to create robust parts. The part quality mainly depends on how well the silicone compounds are mixed during molding. On the other hand, thermoplastic rubbers, as in @cite_21 , have a better ability to return to their original shape after being stretched to moderate elongations. They can be processed by heating the granules of the thermoplastic elastomer, shaping them under pressure, and then cooling them to solidify. In contrast to silicone rubber and elastomers, polyurethanes can be synthesized by chemical reactions. Polyurethane parts are resistant to wear and tear.
{ "cite_N": [ "@cite_21" ], "mid": [ "2157459335" ], "abstract": [ "We present the development of a polyimide-based two-dimensional tactile sensing array realized using a novel inverted fabrication technique. Thermal silicon oxide or Pyrex® substrates are treated such that their surfaces are OH group terminated, allowing good adhesion between such substrates and a spun-on polyimide film during processing through what are suspected to be hydrogen bonds that can be selectively broken when release is desired. The release of the continuous polyimide film is rapidly accomplished by breaking these bonds. This process results in robust, low-cost and continuous polymer-film devices. The developed sensor skin contains an array of membrane-based tactile sensors (taxels). Micromachined thin-film met al strain gauges are positioned on the edges of polyimide membranes. The change in resistance from each strain gauge resulting from normal forces applied to tactile bumps on the top of the membranes is used to image force distribution. Response of an individual taxel is characterized. The effective gauge factor of the taxels is found to be approximately 1.3. Sensor array output is experimentally obtained. The demonstrated devices are robust enough for direct contact with humans, everyday objects and contaminants without undue care." ] }
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
With the aim of producing wearable and biocompatible parts, technological advances in bioengineering led to the emergence of hydrogels @cite_3 . A hydrogel, a rubbery and transparent material composed mostly of water, can also be a good choice for safe physical human-robot interaction (HRI).
{ "cite_N": [ "@cite_3" ], "mid": [ "2049609871" ], "abstract": [ "Hydrogels have found wide application in biosensors due to their versatile nature. This family of materials is applied in biosensing either to increase the loading capacity compared to two-dimensional surfaces, or to support biospecific hydrogel swelling occurring subsequent to specific recognition of an analyte. This review focuses on various principles underpinning the design of biospecific hydrogels acting through various molecular mechanisms in transducing the recognition event of label-free analytes. Towards this end, we describe several promising hydrogel systems that when combined with the appropriate readout platform and quantitative approach could lead to future real-life applications." ] }
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
Using the materials described above, a variety of optical tactile sensors has been presented in the literature. The general principle is based on optical reflection between media with different refractive indices. A conventional optical tactile sensor consists of an array of infrared light-emitting diodes (LEDs) and photodetectors. The intensity of the reflected light is usually proportional to the magnitude of the applied pressure @cite_1 .
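As a toy illustration of this intensity-to-pressure relationship, the sketch below fits a linear calibration between photodetector readings and applied normal force; the calibration points and the assumption of a strictly linear response are made up for illustration only.

```python
import numpy as np

# Hypothetical calibration data: photodetector intensity (a.u.) vs applied force (N).
intensity = np.array([0.10, 0.22, 0.35, 0.48, 0.61])
force_n = np.array([0.0, 2.0, 4.0, 6.0, 8.0])

# Least-squares fit of force = a * intensity + b (assumed linear response).
a, b = np.polyfit(intensity, force_n, deg=1)

def intensity_to_force(reading):
    """Map a single photodetector reading to an estimated normal force."""
    return a * reading + b

print(f"estimated force for reading 0.40: {intensity_to_force(0.40):.2f} N")
```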
{ "cite_N": [ "@cite_1" ], "mid": [ "2070970799" ], "abstract": [ "This paper presents a fiber optic based tactile array sensor that can be employed in magnetic resonance environments. In contrast to conventional sensing approaches, such as resistive or capacitive-based sensing methods, which strongly rely on the generation and transmission of electronics signals, here electromagnetically isolated optical fibers were utilized to develop the tactile array sensor. The individual sensing elements of the proposed sensor detect normal forces; fusing the information from the individual elements allows the perception of the shape of probed objects. Applied forces deform a micro-flexure inside each sensor tactel, displacing a miniature mirror which, in turn, modulates the light intensity introduced by a transmitting fiber connected to a light source at its proximal end. For each tactel, the light intensity is read by a receiving fiber connected directly to a 2-D vision sensor. Computer software, such as MATLAB, is used to process the images received by the vision sensor. The calibration process was conducted by relating the applied forces to the number of activated pixels for each image received from a receiving fiber. The proposed approach allows the concurrent acquisition of data from multiple tactile sensor elements using a vision sensor such as a standard video camera. Test results of force responses and shape detection have proven the viability of this sensing concept." ] }
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
The GelSight tactile sensor @cite_21 uses a thermoplastic elastomer coated with a reflective membrane, highlighted by an LED ring, to capture surface textures with a camera. In @cite_4 , this sensor was benchmarked on a texture recognition problem. Similarly, researchers at the Bristol Robotics Laboratory developed a family of optical tactile sensors that are almost ready for small-scale mass production @cite_9 . Their TacTip sensor uses a commodity image tracker originally developed for optical computer mice. It combines an image acquisition system and a digital signal processor, and is capable of processing images at 2000 Hz @cite_19 . Thanks to the high image processing rate, it can detect the slippage of a grasped object @cite_18 . In @cite_26 , a touch sensor consisting of 41 silicone rubber markers, a light source, and a camera estimates the tangential and normal forces by tracking these markers. Markers with different colors are used in the GelForce sensor @cite_24 .
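A minimal numpy sketch of the marker-tracking idea: displacements of marker centroids between a rest frame and a deformed frame are mapped to tangential and normal force estimates through an assumed linear calibration. The marker coordinates and stiffness constants are invented for illustration and do not reproduce any of the cited sensors.

```python
import numpy as np

# Marker centroids in the camera image (pixels): rest frame vs deformed frame (assumed values).
rest     = np.array([[10.0, 10.0], [30.0, 10.0], [10.0, 30.0], [30.0, 30.0]])
deformed = np.array([[10.5, 11.2], [30.4, 11.0], [10.6, 31.1], [30.2, 31.3]])

disp = deformed - rest                        # per-marker displacement vectors (pixels)
shear_px = disp.mean(axis=0)                  # mean lateral displacement ~ tangential load
spread_px = np.abs(disp - shear_px).mean()    # marker spreading ~ indentation depth

K_SHEAR = 0.8    # N per pixel of mean lateral displacement (assumed calibration)
K_NORMAL = 2.5   # N per pixel of marker spread (assumed calibration)

tangential_force = K_SHEAR * np.linalg.norm(shear_px)
normal_force = K_NORMAL * spread_px
print(f"tangential ~ {tangential_force:.2f} N, normal ~ {normal_force:.2f} N")
```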
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_9", "@cite_21", "@cite_24", "@cite_19" ], "mid": [ "2775635818", "2743209907", "1965953026", "2962983231", "2562116405", "2070970799", "2793447234" ], "abstract": [ "Tactile sensing is an important perception mode for robots, but the existing tactile technologies have multiple limitations. What kind of tactile information robots need, and how to use the information, remain open questions. We believe a soft sensor surface and high-resolution sensing of geometry should be important components of a competent tactile sensor. In this paper, we discuss the development of a vision-based optical tactile sensor, GelSight. Unlike the traditional tactile sensors which measure contact force, GelSight basically measures geometry, with very high spatial resolution. The sensor has a contact surface of soft elastomer, and it directly measures its deformation, both vertical and lateral, which corresponds to the exact object shape and the tension on the contact surface. The contact force, and slip can be inferred from the sensor’s deformation as well. Particularly, we focus on the hardware and software that support GelSight’s application on robot hands. This paper reviews the development of GelSight, with the emphasis in the sensing principle and sensor design. We introduce the design of the sensor’s optical system, the algorithm for shape, force and slip measurement, and the hardware designs and fabrication of different sensor versions. We also show the experimental evaluation on the GelSight’s performance on geometry and force measurement. With the high-resolution measurement of shape and contact force, the sensor has successfully assisted multiple robotic tasks, including material perception or recognition and in-hand localization for robot manipulation.", "A GelSight sensor uses an elastomeric slab covered with a reflective membrane to measure tactile signals. It measures the 3D geometry and contact force information with high spacial resolution, and successfully helped many challenging robot tasks. A previous sensor [1], based on a semi-specular membrane, produces high resolution but with limited geometry accuracy. In this paper, we describe a new design of GelSight for robot gripper, using a Lambertian membrane and new illumination system, which gives greatly improved geometric accuracy while retaining the compact size. We demonstrate its use in measuring surface normals and reconstructing height maps using photometric stereo. We also use it for the task of slip detection, using a combination of information about relative motions on the membrane surface and the shear distortions. Using a robotic arm and a set of 37 everyday objects with varied properties, we find that the sensor can detect translational and rotational slip in general cases, and can be used to improve the stability of the grasp.", "Sensing surface textures by touch is a valuable capability for robots. Until recently it was difficult to build a compliant sensor with high sensitivity and high resolution. The GelSight sensor is compliant and offers sensitivity and resolution exceeding that of the human fingertips. This opens the possibility of measuring and recognizing highly detailed surface textures. The GelSight sensor, when pressed against a surface, delivers a height map. This can be treated as an image, and processed using the tools of visual texture analysis. 
We have devised a simple yet effective texture recognition system based on local binary patterns, and enhanced it by the use of a multi-scale pyramid and a Hellinger distance metric. We built a database with 40 classes of tactile textures using materials such as fabric, wood, and sandpaper. Our system can correctly categorize materials from this database with high accuracy. This suggests that the GelSight sensor can be useful for material recognition by robots.", "Vision and touch are two of the important sensing modalities for humans and they offer complementary information for sensing the environment. Robots could also benefit from such multi-modal sensing ability. In this paper, addressing for the first time (to the best of our knowledge) texture recognition from tactile images and vision, we propose a new fusion method named Deep Maximum Covariance Analysis (DMCA) to learn a joint latent space for sharing features through vision and tactile sensing. The features of camera images and tactile data acquired from a GelSight sensor are learned by deep neural networks. But the learned features are of a high dimensionality and are redundant due to the differences between the two sensing modalities, which deteriorates the perception performance. To address this, the learned features are paired using maximum covariance analysis. Results of the algorithm on a newly collected dataset of paired visual and tactile data relating to cloth textures show that a good recognition performance of greater than 90 can be achieved by using the proposed DMCA framework. In addition, we find that the perception performance of either vision or tactile sensing can be improved by employing the shared representation space, compared to learning from unimodal data.", "Hardness sensing is a valuable capability for a robot touch sensor. We describe a novel method of hardness sensing that does not require accurate control of contact conditions. A GelSight sensor is a tactile sensor that provides high resolution tactile images, which enables a robot to infer object properties such as geometry and fine texture, as well as contact force and slip conditions. The sensor is pressed on silicone samples by a human or a robot and we measure the sample hardness only with data from the sensor, without a separate force sensor and without precise knowledge of the contact trajectory. We describe the features that show object hardness. For hemispherical objects, we develop a model to measure the sample hardness, and the estimation error is about 4 in the range of 8 Shore 00 to 45 Shore A. With this technology, a robot is able to more easily infer the hardness of the touched objects, thereby improving its object recognition as well as manipulation strategy.", "This paper presents a fiber optic based tactile array sensor that can be employed in magnetic resonance environments. In contrast to conventional sensing approaches, such as resistive or capacitive-based sensing methods, which strongly rely on the generation and transmission of electronics signals, here electromagnetically isolated optical fibers were utilized to develop the tactile array sensor. The individual sensing elements of the proposed sensor detect normal forces; fusing the information from the individual elements allows the perception of the shape of probed objects. 
Applied forces deform a micro-flexure inside each sensor tactel, displacing a miniature mirror which, in turn, modulates the light intensity introduced by a transmitting fiber connected to a light source at its proximal end. For each tactel, the light intensity is read by a receiving fiber connected directly to a 2-D vision sensor. Computer software, such as MATLAB, is used to process the images received by the vision sensor. The calibration process was conducted by relating the applied forces to the number of activated pixels for each image received from a receiving fiber. The proposed approach allows the concurrent acquisition of data from multiple tactile sensor elements using a vision sensor such as a standard video camera. Test results of force responses and shape detection have proven the viability of this sensing concept.", "Tactile sensing is required for human-like control with robotic manipulators. Multimodality is an essential component for these tactile sensors, for robots to achieve both the perceptual accuracy required for precise control, as well as the robustness to maintain a stable grasp without causing damage to the object or the robot itself. In this study, we present a cheap, 3D-printed, compliant, dual-modal, optical tactile sensor that is capable of both high (temporal) speed sensing, analogous to pain reception in humans and high (spatial) resolution sensing, analogous to the sensing provided by Merkel cell complexes in the human fingertip. We apply three tasks for testing the sensing capabilities in both modes; first, a depth modulation task, requiring the robot to follow a target trajectory using the high-speed mode; second, a high-resolution perception task, where the sensor perceives angle and radial position relative to an object edge; and third, a tactile exploration task, where the robot uses the high-resolution mode to perceive an edge and subsequently follow the object contour. The robot is capable of modulating contact depth using the high-speed mode, high accuracy in the perception task, and accurate control using the high-resolution mode." ] }
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
Researchers embedded an optical tactile sensor into the multi-modal tactile sensing system of an underwater robot gripper @cite_25 . As in @cite_10 and the Optoforce sensor, the sensing principle is based on light reflection delivered via POFs. POFs can be used as force sensing elements because of stray light, which is considered a drawback in telecommunications @cite_1 . The deformation of a POF increases the loss of the light propagating inside it, as the attenuation coefficient increases. In addition, the elasto-optic metamaterial presented in @cite_2 can change its refractive index due to pure bending. Such POFs are fabricated by the chemical vapor deposition technique. Their design generally relies upon the phenomenon of optical interference @cite_0 .
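To make the attenuation argument concrete, here is a small sketch of an assumed exponential (Beer-Lambert-style) loss model in which bending increases the attenuation coefficient of the fiber; the functional form and all constants are illustrative assumptions, not measurements of any particular POF.

```python
import numpy as np

def received_intensity(i0, length_m, bend_curvature, alpha0=0.2, k_bend=1.5):
    """Intensity at the fiber output; bending adds to the attenuation
    coefficient (an assumed model, for illustration only)."""
    alpha = alpha0 + k_bend * bend_curvature   # 1/m, grows with deformation
    return i0 * np.exp(-alpha * length_m)

# Stronger bending (larger curvature) -> lower received intensity, which a
# calibration like the one sketched earlier could map back to a contact force.
for curvature in (0.0, 0.5, 1.0, 2.0):         # curvature in 1/m
    print(curvature, round(received_intensity(1.0, 0.3, curvature), 3))
```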
{ "cite_N": [ "@cite_1", "@cite_0", "@cite_2", "@cite_10", "@cite_25" ], "mid": [ "2070970799", "2775635818", "2472910955", "2793447234", "70651934" ], "abstract": [ "This paper presents a fiber optic based tactile array sensor that can be employed in magnetic resonance environments. In contrast to conventional sensing approaches, such as resistive or capacitive-based sensing methods, which strongly rely on the generation and transmission of electronics signals, here electromagnetically isolated optical fibers were utilized to develop the tactile array sensor. The individual sensing elements of the proposed sensor detect normal forces; fusing the information from the individual elements allows the perception of the shape of probed objects. Applied forces deform a micro-flexure inside each sensor tactel, displacing a miniature mirror which, in turn, modulates the light intensity introduced by a transmitting fiber connected to a light source at its proximal end. For each tactel, the light intensity is read by a receiving fiber connected directly to a 2-D vision sensor. Computer software, such as MATLAB, is used to process the images received by the vision sensor. The calibration process was conducted by relating the applied forces to the number of activated pixels for each image received from a receiving fiber. The proposed approach allows the concurrent acquisition of data from multiple tactile sensor elements using a vision sensor such as a standard video camera. Test results of force responses and shape detection have proven the viability of this sensing concept.", "Tactile sensing is an important perception mode for robots, but the existing tactile technologies have multiple limitations. What kind of tactile information robots need, and how to use the information, remain open questions. We believe a soft sensor surface and high-resolution sensing of geometry should be important components of a competent tactile sensor. In this paper, we discuss the development of a vision-based optical tactile sensor, GelSight. Unlike the traditional tactile sensors which measure contact force, GelSight basically measures geometry, with very high spatial resolution. The sensor has a contact surface of soft elastomer, and it directly measures its deformation, both vertical and lateral, which corresponds to the exact object shape and the tension on the contact surface. The contact force, and slip can be inferred from the sensor’s deformation as well. Particularly, we focus on the hardware and software that support GelSight’s application on robot hands. This paper reviews the development of GelSight, with the emphasis in the sensing principle and sensor design. We introduce the design of the sensor’s optical system, the algorithm for shape, force and slip measurement, and the hardware designs and fabrication of different sensor versions. We also show the experimental evaluation on the GelSight’s performance on geometry and force measurement. With the high-resolution measurement of shape and contact force, the sensor has successfully assisted multiple robotic tasks, including material perception or recognition and in-hand localization for robot manipulation.", "This paper addresses 6-DOF (degree-of-freedom) tactile localization, i.e., the pose estimation of tridimensional objects using tactile measurements. This estimation problem is fundamental for the operation of autonomous robots that are often required to manipulate and grasp objects whose pose is a priori unknown. 
The nature of tactile measurements, the strict time requirements for real-time operation, and the multimodality of the involved probability distributions pose remarkable challenges and call for advanced nonlinear filtering techniques. Following a Bayesian approach, this paper proposes a novel and effective algorithm, named memory unscented particle filter (MUPF), which solves 6-DOF localization recursively in real time by only exploiting contact point measurements. The MUPF combines a modified particle filter that incorporates a sliding memory of past measurements to better handle multimodal distributions, along with the unscented Kalman filter that moves the particles toward regions of the search space that are more likely with the measurements. The performance of the proposed MUPF algorithm has been assessed both in simulation and on a real robotic system equipped with tactile sensors (i.e., the iCub humanoid robot). The experiments show that the algorithm provides accurate and reliable localization even with a low number of particles and, hence, is compatible with real-time requirements.", "Tactile sensing is required for human-like control with robotic manipulators. Multimodality is an essential component for these tactile sensors, for robots to achieve both the perceptual accuracy required for precise control, as well as the robustness to maintain a stable grasp without causing damage to the object or the robot itself. In this study, we present a cheap, 3D-printed, compliant, dual-modal, optical tactile sensor that is capable of both high (temporal) speed sensing, analogous to pain reception in humans and high (spatial) resolution sensing, analogous to the sensing provided by Merkel cell complexes in the human fingertip. We apply three tasks for testing the sensing capabilities in both modes; first, a depth modulation task, requiring the robot to follow a target trajectory using the high-speed mode; second, a high-resolution perception task, where the sensor perceives angle and radial position relative to an object edge; and third, a tactile exploration task, where the robot uses the high-resolution mode to perceive an edge and subsequently follow the object contour. The robot is capable of modulating contact depth using the high-speed mode, high accuracy in the perception task, and accurate control using the high-resolution mode.", "We present a complete software architecture for reliable grasping of household objects. Our work combines aspects such as scene interpretation from 3D range data, grasp planning, motion planning, and grasp failure identification and recovery using tactile sensors. We build upon, and add several new contributions to the significant prior work in these areas. A salient feature of our work is the tight coupling between perception (both visual and tactile) and manipulation, aiming to address the uncertainty due to sensor and execution errors. This integration effort has revealed new challenges, some of which can be addressed through system and software engineering, and some of which present opportunities for future research. Our approach is aimed at typical indoor environments, and is validated by long running experiments where the PR2 robotic platform was able to consistently grasp a large variety of known and unknown objects. The set of tools and algorithms for object grasping presented here have been integrated into the open-source Robot Operating System (ROS)." ] }
1908.03687
2967382602
The sense of touch is essential for reliable mapping between the environment and a robot which interacts physically with objects. Presumably, an artificial tactile skin would facilitate safe interaction of the robots with the environment. In this work, we present our color-coded tactile sensor, incorporating plastic optical fibers (POF), transparent silicone rubber and an off-the-shelf color camera. Processing electronics are placed away from the sensing surface to make the sensor robust to harsh environments. Contact localization is possible thanks to the lower number of light sources compared to the number of camera POFs. Classical machine learning techniques and a hierarchical classification scheme were used for contact localization. Specifically, we generated the mapping from stimulation to sensation of a robotic perception system using our sensor. We achieved a force sensing range up to 18 N with the force resolution of around 3.6 N and the spatial resolution of 8 mm. The color-coded tactile sensor is suitable for tactile exploration and might enable further innovations in robust tactile sensing.
Laboratory prototypes of image-based tactile sensors were reported in @cite_22 and @cite_24 . In these sensing panels, light sources (LEDs) and photodetectors (photodiodes or a camera) were placed against a reflecting planar surface. When the surface deforms, the reflected beams change. These sensors thus use light to detect the deformation of the contact surface, from which the applied force can be estimated.
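As a toy example of how an array of such intensity readings could be turned into a contact location, the sketch below computes the intensity-weighted centroid of a small photodetector grid; the grid layout and the readings are assumptions for illustration, not the localization scheme of the cited sensors or of this paper.

```python
import numpy as np

# Hypothetical 4x4 grid of photodetector readings (larger = stronger deformation).
readings = np.array([
    [0.01, 0.02, 0.01, 0.00],
    [0.02, 0.35, 0.60, 0.05],
    [0.01, 0.30, 0.55, 0.04],
    [0.00, 0.02, 0.03, 0.01],
])

rows, cols = np.indices(readings.shape)
total = readings.sum()
contact = (float((rows * readings).sum() / total), float((cols * readings).sum() / total))
print("estimated contact location (row, col):", contact)
```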
{ "cite_N": [ "@cite_24", "@cite_22" ], "mid": [ "1965890979", "2070970799" ], "abstract": [ "We are developing a total-internal-reflection-based tactile sensor in which the shape is reconstructed using an optical reflection. This sensor consists of silicone rubber, an image pattern, and a camera. It reconstructs the shape of the sensor surface from an image of a pattern reflected at the inner sensor surface by total internal reflection. In this study, we propose precise real-time reconstruction by employing an optimization method. Furthermore, we propose to use active patterns. Deformation of the reflection image causes reconstruction errors. By controlling the image pattern, the sensor reconstructs the surface deformation more precisely. We implement the proposed optimization and active-pattern-based reconstruction methods in a reflection-based tactile sensor, and perform reconstruction experiments using the system. A precise deformation experiment confirms the linearity and precision of the reconstruction.", "This paper presents a fiber optic based tactile array sensor that can be employed in magnetic resonance environments. In contrast to conventional sensing approaches, such as resistive or capacitive-based sensing methods, which strongly rely on the generation and transmission of electronics signals, here electromagnetically isolated optical fibers were utilized to develop the tactile array sensor. The individual sensing elements of the proposed sensor detect normal forces; fusing the information from the individual elements allows the perception of the shape of probed objects. Applied forces deform a micro-flexure inside each sensor tactel, displacing a miniature mirror which, in turn, modulates the light intensity introduced by a transmitting fiber connected to a light source at its proximal end. For each tactel, the light intensity is read by a receiving fiber connected directly to a 2-D vision sensor. Computer software, such as MATLAB, is used to process the images received by the vision sensor. The calibration process was conducted by relating the applied forces to the number of activated pixels for each image received from a receiving fiber. The proposed approach allows the concurrent acquisition of data from multiple tactile sensor elements using a vision sensor such as a standard video camera. Test results of force responses and shape detection have proven the viability of this sensing concept." ] }
1908.03645
2966981412
Qualitative relationships describe how increasing or decreasing one property (e.g. altitude) affects another (e.g. temperature). They are an important aspect of natural language question answering and are crucial for building chatbots or voice agents where one may enquire about qualitative relationships. Recently, a dataset about question answering involving qualitative relationships has been proposed, and a few approaches to answer such questions have been explored, at the heart of which lies a semantic parser that converts the natural language input to a suitable logical form. A problem with existing semantic parsers is that they try to directly convert the input sentences to a logical form. Since the output language varies with each application, it forces the semantic parser to learn almost everything from scratch. In this paper, we show that instead of using a semantic parser to produce the logical form, if we apply the generate-validate framework, i.e. generate a natural language description of the logical form and validate whether the natural language description follows from the input text, we get a better scope for transfer learning, and our method outperforms the state-of-the-art by a large margin of 7.93%.
Our work is related to both the work on semantic parsing @cite_14 @cite_12 @cite_1 @cite_0 @cite_4 and question answering using semantic parsing @cite_11 @cite_5 @cite_13 . The problem of QUAREL is quite similar to word math problems @cite_16 @cite_6 in the sense that both are story problems and use semantic parsing to translate the input problem into a suitable representation. Our work is also related to the work in @cite_13 that uses the generate-validate framework to answer questions with respect to life-cycle text. @cite_13 uses the generate-validate framework to verify "given facts". In particular, it shows how rules can be used to infer new information over raw text without using a semantic parser to create a structured knowledge base. The work in @cite_13 uses a semantic parser to translate the question into one of the predefined forms. In our work, however, we use generate-validate for both question and "given fact" understanding. The work of @cite_9 is the most closely related to ours. @cite_9 proposes two models for QUAREL. One uses a state-of-the-art semantic parser @cite_0 to convert the input problem to the desired logical representation. They call this model QUASP, which obtains an accuracy of 56.1%.
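A rough, rule-based sketch of the generate-validate idea: for each answer option a natural-language hypothesis is generated from a template and then validated against the story text. The template, the tiny adjective lexicon, and the adjacency-based validation rule are all assumptions made for illustration; they are not the models used in this paper, in @cite_13, or in @cite_9.

```python
import re

# Toy lexicon mapping surface adjectives to a friction direction (an assumption).
ADJECTIVE_TO_DIRECTION = {"smooth": "lower", "polished": "lower",
                          "rough": "higher", "bumpy": "higher"}

def generate(option, direction):
    """Generate a natural-language hypothesis for one answer option (template assumed)."""
    return f"The {option} surface has {direction} friction."

def validate(hypothesis, story):
    """Check whether the hypothesis follows from the story (toy adjacency rule)."""
    option = hypothesis.split()[1]                       # word right after "The"
    direction = "lower" if "lower" in hypothesis else "higher"
    tokens = re.findall(r"[a-z]+", story.lower())
    for pos, token in enumerate(tokens[:-1]):
        if token in ADJECTIVE_TO_DIRECTION and tokens[pos + 1] == option:
            return ADJECTIVE_TO_DIRECTION[token] == direction
    return False

story = "The toy car rolls easily over the smooth tiles but slows down on the rough carpet."
for option in ("tiles", "carpet"):
    hypothesis = generate(option, "lower")
    print(option, validate(hypothesis, story))   # only the 'tiles' hypothesis is validated
```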
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_9", "@cite_1", "@cite_6", "@cite_0", "@cite_5", "@cite_16", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "2250225488", "2963611534", "2901386711", "2473222270", "2251673953", "1504212872", "2170732969", "2156621282", "2121127625", "2107288097", "2251079237" ], "abstract": [ "A central challenge in semantic parsing is handling the myriad ways in which knowledge base predicates can be expressed. Traditionally, semantic parsers are trained primarily from text paired with knowledge base information. Our goal is to exploit the much larger amounts of raw text not tied to any knowledge base. In this paper, we turn semantic parsing on its head. Given an input utterance, we first use a simple method to deterministically generate a set of candidate logical forms with a canonical realization in natural language for each. Then, we use a paraphrase model to choose the realization that best paraphrases the input, and output the corresponding logical form. We present two simple paraphrase models, an association model and a vector space model, and train them jointly from question-answer pairs. Our system PARASEMPRE improves stateof-the-art accuracies on two recently released question-answering datasets.", "Despite the availability of a huge amount of video data accompanied by descriptive texts, it is not always easy to exploit the information contained in natural language in order to automatically recognize video concepts. Towards this goal, in this paper we use textual cues as means of supervision, introducing two weakly supervised techniques that extend the Multiple Instance Learning (MIL) framework: the Fuzzy Sets Multiple Instance Learning (FSMIL) and the Probabilistic Labels Multiple Instance Learning (PLMIL). The former encodes the spatio-temporal imprecision of the linguistic descriptions with Fuzzy Sets, while the latter models different interpretations of each description's semantics with Probabilistic Labels, both formulated through a convex optimization algorithm. In addition, we provide a novel technique to extract weak labels in the presence of complex semantics, that consists of semantic similarity computations. We evaluate our methods on two distinct problems, namely face and action recognition, in the challenging and realistic setting of movies accompanied by their screenplays, contained in the COGNIMUSE database. We show that, on both tasks, our method considerably outperforms a state-of-the-art weakly supervised approach, as well as other baselines.", "Many natural language questions require recognizing and reasoning with qualitative relationships (e.g., in science, economics, and medicine), but are challenging to answer with corpus-based methods. Qualitative modeling provides tools that support such reasoning, but the semantic parsing task of mapping questions into those models has formidable challenges. We present QuaRel, a dataset of diverse story questions involving qualitative relationships that characterize these challenges, and techniques that begin to address them. The dataset has 2771 questions relating 19 different types of quantities. For example, \"Jenny observes that the robot vacuum cleaner moves slower on the living room carpet than on the bedroom carpet. 
Which carpet has more friction?\" We contribute (1) a simple and flexible conceptual framework for representing these kinds of questions; (2) the QuaRel dataset, including logical forms, exemplifying the parsing challenges; and (3) two novel models for this task, built as extensions of type-constrained semantic parsing. The first of these models (called QuaSP+) significantly outperforms off-the-shelf tools on QuaRel. The second (QuaSP+Zero) demonstrates zero-shot capability, i.e., the ability to handle new qualitative relationships without requiring additional training data, something not possible with previous models. This work thus makes inroads into answering complex, qualitative questions that require reasoning, and scaling to new relationships at low cost. The dataset and models are available at this http URL", "Traditional semantic parsers map language onto compositional, executable queries in a fixed schema. This mapping allows them to effectively leverage the information contained in large, formal knowledge bases (KBs, e.g., Freebase) to answer questions, but it is also fundamentally limiting---these semantic parsers can only assign meaning to language that falls within the KB's manually-produced schema. Recently proposed methods for open vocabulary semantic parsing overcome this limitation by learning execution models for arbitrary language, essentially using a text corpus as a kind of knowledge base. However, all prior approaches to open vocabulary semantic parsing replace a formal KB with textual information, making no use of the KB in their models. We show how to combine the disparate representations used by these two approaches, presenting for the first time a semantic parser that (1) produces compositional, executable representations of language, (2) can successfully leverage the information contained in both a formal KB and a large corpus, and (3) is not limited to the schema of the underlying KB. We demonstrate significantly improved performance over state-of-the-art baselines on an open-domain natural language question answering task.", "We consider the challenge of learning semantic parsers that scale to large, open-domain problems, such as question answering with Freebase. In such settings, the sentences cover a wide variety of topics and include many phrases whose meaning is difficult to represent in a fixed target ontology. For example, even simple phrases such as ‘daughter’ and ‘number of people living in’ cannot be directly represented in Freebase, whose ontology instead encodes facts about gender, parenthood, and population. In this paper, we introduce a new semantic parsing approach that learns to resolve such ontological mismatches. The parser is learned from question-answer pairs, uses a probabilistic CCG to build linguistically motivated logical-form meaning representations, and includes an ontology matching model that adapts the output logical forms for each target ontology. Experiments demonstrate state-of-the-art performance on two benchmark semantic parsing datasets, including a nine point accuracy improvement on a recent Freebase QA corpus.", "Objective: Development of a general natural-language processor that identifies clinical information in narrative reports and maps that information into a structured representation containing clinical terms. Design: The natural-language processor provides three phases of processing, all of which are driven by different knowledge sources. The first phase performs the parsing.
It identifies the structure of the text through use of a grammar that defines semantic patterns and a target form. The second phase, regularization, standardizes the terms in the initial target structure via a compositional mapping of multi-word phrases. The third phase, encoding, maps the terms to a controlled vocabulary. Radiology is the test domain for the processor and the target structure is a formal model for representing clinical information in that domain. Measurements: The impression sections of 230 radiology reports were encoded by the processor. Results of an automated query of the resultant database for the occurrences of four diseases were compared with the analysis of a panel of three physicians to determine recall and precision. Results: Without training specific to the four diseases, recall and precision of the system (combined effect of the processor and query generator) were 70% and 87%. Training of the query component increased recall to 85% without changing precision.", "We present a method for automatically generating input parsers from English specifications of input file formats. We use a Bayesian generative model to capture relevant natural language phenomena and translate the English specification into a specification tree, which is then translated into a C++ input parser. We model the problem as a joint dependency parsing and semantic role labeling task. Our method is based on two sources of information: (1) the correlation between the text and the specification tree and (2) noisy supervision as determined by the success of the generated C++ parser in reading input examples. Our results show that our approach achieves 80.0 F-Score accuracy compared to an F-Score of 66.7 produced by a state-of-the-art semantic parser on a dataset of input format specifications from the ACM International Collegiate Programming Contest (which were written in English for humans with no intention of providing support for automated processing). 1", "We present an approach to learning a model-theoretic semantics for natural language tied to Freebase. Crucially, our approach uses an open predicate vocabulary, enabling it to produce denotations for phrases such as \"Republican front-runner from Texas\" whose semantics cannot be represented using the Freebase schema. Our approach directly converts a sentence's syntactic CCG parse into a logical form containing predicates derived from the words in the sentence, assigning each word a consistent semantics across sentences. This logical form is evaluated against a learned probabilistic database that defines a distribution over denotations for each textual predicate. A training phase produces this probabilistic database using a corpus of entity-linked text and probabilistic matrix factorization with a novel ranking objective function. We evaluate our approach on a compositional question answering task where it outperforms several competitive baselines. We also compare our approach against manually annotated Freebase queries, finding that our open predicate vocabulary enables us to answer many questions that Freebase cannot.", "This article considers approaches which rerank the output of an existing probabilistic parser. The base parser produces a set of candidate parses for each input sentence, with associated probabilities that define an initial ranking of these parses. A second model then attempts to improve upon this initial ranking, using additional features of the tree as evidence.
The strength of our approach is that it allows a tree to be represented as an arbitrary set of features, without concerns about how these features interact or overlap and without the need to define a derivation or a generative model which takes these features into account. We introduce a new method for the reranking task, based on the boosting approach to ranking problems described in (1998). We apply the boosting method to parsing the Wall Street Journal treebank. The method combined the log-likelihood under a baseline model (that of Collins [1999]) with evidence from an additional 500,000 features over parse trees that were not included in the original model. The new model achieved 89.75 F-measure, a 13 relative decrease in F measure error over the baseline model's score of 88.2 . The article also introduces a new algorithm for the boosting approach which takes advantage of the sparsity of the feature space in the parsing data. Experiments show significant efficiency gains for the new algorithm over the obvious implementation of the boosting approach. We argue that the method is an appealing alternative—in terms of both simplicity and efficiency—to work on feature selection methods within log-linear (maximum-entropy) models. Although the experiments in this article are on natural language parsing (NLP), the approach should be applicable to many other NLP problems which are naturally framed as ranking tasks, for example, speech recognition, machine translation, or natural language generation.", "This paper proposes a data-driven method for concept-to-text generation, the task of automatically producing textual output from non-linguistic input. A key insight in our approach is to reduce the tasks of content selection (\"what to say\") and surface realization (\"how to say\") into a common parsing problem. We define a probabilistic context-free grammar that describes the structure of the input (a corpus of database records and text describing some of them) and represent it compactly as a weighted hypergraph. The hypergraph structure encodes exponentially many derivations, which we rerank discriminatively using local and global features. We propose a novel decoding algorithm for finding the best scoring derivation and generating in this setting. Experimental evaluation on the Atis domain shows that our model outperforms a competitive discriminative system both using BLEU and in a judgment elicitation study.", "We propose a novel semantic parsing framework for question answering using a knowledge base. We define a query graph that resembles subgraphs of the knowledge base and can be directly mapped to a logical form. Semantic parsing is reduced to query graph generation, formulated as a staged search problem. Unlike traditional approaches, our method leverages the knowledge base in an early stage to prune the search space and thus simplifies the semantic matching problem. By applying an advanced entity linking system and a deep convolutional neural network model that matches questions and predicate sequences, our system outperforms previous methods substantially, and achieves an F1 measure of 52.5 on the WEBQUESTIONS dataset." ] }
1908.03405
2968351798
Early time series classification (eTSC) is the problem of classifying a time series after as few measurements as possible with the highest possible accuracy. The most critical issue of any eTSC method is to decide when enough data of a time series has been seen to take a decision: waiting for more data points usually makes the classification problem easier but delays the time at which a classification is made; in contrast, earlier classification has to cope with less input data, often leading to inferior accuracy. The state-of-the-art eTSC methods compute a fixed optimal decision time assuming that every time series has the same defined start time (like turning on a machine). However, in many real-life applications measurements start at arbitrary times (like measuring heartbeats of a patient), implying that the best time for taking a decision varies heavily between time series. We present TEASER, a novel algorithm that models eTSC as a two-tier classification problem: in the first tier, a classifier periodically assesses the incoming time series to compute class probabilities. However, these class probabilities are only used as the output label if a second-tier classifier decides that the predicted label is reliable enough, which can happen after a different number of measurements. In an evaluation using 45 benchmark datasets, TEASER is two to three times earlier at predictions than its competitors while reaching the same or an even higher classification accuracy. We further show TEASER's superior performance using real-life use cases, namely energy monitoring and gait detection.
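A schematic sketch of the two-tier idea described in this abstract: a first-tier classifier produces class probabilities on the prefix seen so far, and a second-tier rule decides whether the prediction is reliable enough to be emitted or whether more measurements are needed. The stand-in classifiers, the probability-margin rule, and the synthetic series are placeholders, not the actual TEASER implementation.

```python
import numpy as np

def tier_one_probabilities(prefix):
    """Stand-in for the first-tier classifier: class probabilities from the prefix mean."""
    p_pos = 1.0 / (1.0 + np.exp(-prefix.mean()))
    return np.array([1.0 - p_pos, p_pos])

def tier_two_accepts(probs, margin=0.3):
    """Stand-in for the second-tier classifier: accept when the top-2 margin is large."""
    top2 = np.sort(probs)[-2:]
    return (top2[1] - top2[0]) >= margin

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.0, 1.0, 40), rng.normal(2.0, 1.0, 60)])

for seen in range(10, len(series) + 1, 10):        # re-assess every 10 new measurements
    probs = tier_one_probabilities(series[:seen])
    if tier_two_accepts(probs):
        print(f"decision after {seen}/{len(series)} points: class {int(np.argmax(probs))}")
        break
else:
    print("no early decision; classify at full length:", int(np.argmax(probs)))
```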
The techniques used for time series classification (TSC) can be broadly categorized into two classes: whole series-based methods and feature-based methods. Whole series-based methods make use of a point-wise comparison of entire TS, like 1-NN Dynamic Time Warping (DTW) @cite_12 . In contrast, feature-based classifiers rely on comparing features generated from substructures of TS. Such approaches can be grouped as either using shapelets or bag-of-patterns (BOP). Shapelets are defined as TS subsequences that are maximally representative of a class @cite_26 @cite_21 . The bag-of-patterns (BOP) model @cite_10 @cite_19 @cite_25 @cite_39 breaks up a TS into a bag of substructures, represents these substructures as discrete features, and finally builds a histogram of feature counts as the basis for classification. The recent Word ExtrAction for time SEries cLassification (WEASEL) @cite_10 also conceptually builds on the BOP approach and is one of the fastest and most accurate classifiers. In @cite_1 , deep learning networks are applied to TSC. Their best performing fully convolutional network (FCN) does not perform significantly differently from the state of the art. @cite_5 presents an overview of deep learning approaches.
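For concreteness, below is a compact dynamic-programming implementation of the DTW distance behind the whole series-based 1-NN approach mentioned above; the toy training series and query are made up, and no warping window, lower bounds, or other speed-ups used in practice are included.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(np.sqrt(cost[n, m]))

def one_nn_dtw(query, train_series, train_labels):
    """1-nearest-neighbour classification under the DTW distance."""
    distances = [dtw_distance(query, s) for s in train_series]
    return train_labels[int(np.argmin(distances))]

train = [np.sin(np.linspace(0, 6, 50)), np.linspace(-1, 1, 50)]   # toy 'wave' vs 'ramp'
labels = ["wave", "ramp"]
query = np.sin(np.linspace(0.3, 6.3, 55))        # phase-shifted, different length
print(one_nn_dtw(query, train, labels))          # expected: 'wave'
```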
{ "cite_N": [ "@cite_26", "@cite_21", "@cite_1", "@cite_39", "@cite_19", "@cite_5", "@cite_10", "@cite_25", "@cite_12" ], "mid": [ "2802962644", "1975257359", "2306394264", "2581867724", "2728116991", "2892035503", "2144796873", "2461743311", "2321533354" ], "abstract": [ "With the development of Fully Convolutional Neural Network (FCN), there have been progressive advances in the field of semantic segmentation in recent years. The FCN-based solutions are able to summarize features across training images and generate matching templates for the desired object classes, yet they overlook intra-class difference (ICD) among multiple instances in the same class. In this work, we present a novel fine-to-coarse learning (FCL) procedure, which first guides the network with designed 'finer' sub-class labels, whose decisions are mapped to the original 'coarse' object category through end-to-end learning. A sub-class labeling strategy is designed with unsupervised clustering upon deep convolutional features, and the proposed FCL procedure enables a balance between the fine-scale (i.e. sub-class) and the coarse-scale (i.e. class) knowledge. We conduct extensive experiments on several popular datasets, including PASCAL VOC, Context, Person-Part and NYUDepth-v2 to demonstrate the advantage of learning finer sub-classes and the potential to guide the learning of deep networks with unsupervised clustering.", "Time series classification is an important task with many challenging applications. A nearest neighbor (NN) classifier with dynamic time warping (DTW) distance is a strong solution in this context. On the other hand, feature-based approaches have been proposed as both classifiers and to provide insight into the series, but these approaches have problems handling translations and dilations in local patterns. Considering these shortcomings, we present a framework to classify time series based on a bag-of-features representation (TSBF). Multiple subsequences selected from random locations and of random lengths are partitioned into shorter intervals to capture the local information. Consequently, features computed from these subsequences measure properties at different locations and dilations when viewed from the original series. This provides a feature-based approach that can handle warping (although differently from DTW). Moreover, a supervised learner (that handles mixed data types, different units, etc.) integrates location information into a compact codebook through class probability estimates. Additionally, relevant global features can easily supplement the codebook. TSBF is compared to NN classifiers and other alternatives (bag-of-words strategies, sparse spatial sample kernels, shapelets). Our experimental results show that TSBF provides better results than competitive methods on benchmark datasets from the UCR time series database.", "Time series classification (TSC), the problem of predicting class labels of time series, has been around for decades within the community of data mining and machine learning, and found many important applications such as biomedical engineering and clinical prediction. However, it still remains challenging and falls short of classification accuracy and efficiency. Traditional approaches typically involve extracting discriminative features from the original time series using dynamic time warping (DTW) or shapelet transformation, based on which an off-the-shelf classifier can be applied. 
These methods are ad-hoc and separate the feature extraction part with the classification part, which limits their accuracy performance. Plus, most existing methods fail to take into account the fact that time series often have features at different time scales. To address these problems, we propose a novel end-to-end neural network model, Multi-Scale Convolutional Neural Networks (MCNN), which incorporates feature extraction and classification in a single framework. Leveraging a novel multi-branch layer and learnable convolutional layers, MCNN automatically extracts features at different scales and frequencies, leading to superior feature representation. MCNN is also computationally efficient, as it naturally leverages GPU computing. We conduct comprehensive empirical evaluation with various existing methods on a large number of benchmark datasets, and show that MCNN advances the state-of-the-art by achieving superior accuracy performance than other leading methods.", "Time series (TS) occur in many scientific and commercial applications, ranging from earth surveillance to industry automation to the smart grids. An important type of TS analysis is classification, which can, for instance, improve energy load forecasting in smart grids by detecting the types of electronic devices based on their energy consumption profiles recorded by automatic sensors. Such sensor-driven applications are very often characterized by (a) very long TS and (b) very large TS datasets needing classification. However, current methods to time series classification (TSC) cannot cope with such data volumes at acceptable accuracy; they are either scalable but offer only inferior classification quality, or they achieve state-of-the-art classification quality but cannot scale to large data volumes. In this paper, we present WEASEL (Word ExtrAction for time SEries cLassification), a novel TSC method which is both fast and accurate. Like other state-of-the-art TSC methods, WEASEL transforms time series into feature vectors, using a sliding-window approach, which are then analyzed through a machine learning classifier. The novelty of WEASEL lies in its specific method for deriving features, resulting in a much smaller yet much more discriminative feature set. On the popular UCR benchmark of 85 TS datasets, WEASEL is more accurate than the best current non-ensemble algorithms at orders-of-magnitude lower classification and training times, and it is almost as accurate as ensemble classifiers, whose computational complexity makes them inapplicable even for mid-size datasets. The outstanding robustness of WEASEL is also confirmed by experiments on two real smart grid datasets, where it out-of-the-box achieves almost the same accuracy as highly tuned, domain-specific methods.", "Inspired by the tremendous success of deep Convolutional Neural Networks as generic feature extractors for images, we propose TimeNet: a deep recurrent neural network (RNN) trained on diverse time series in an unsupervised manner using sequence to sequence (seq2seq) models to extract features from time series. Rather than relying on data from the problem domain, TimeNet attempts to generalize time series representation across domains by ingesting time series from several domains simultaneously. Once trained, TimeNet can be used as a generic off-the-shelf feature extractor for time series. The representations or embeddings given by a pre-trained TimeNet are found to be useful for time series classification (TSC). 
For several publicly available datasets from UCR TSC Archive and an industrial telematics sensor data from vehicles, we observe that a classifier learned over the TimeNet embeddings yields significantly better performance compared to (i) a classifier learned over the embeddings given by a domain-specific RNN, as well as (ii) a nearest neighbor classifier based on Dynamic Time Warping.", "Time Series Classification (TSC) is an important and challenging problem in data mining. With the increase of time series data availability, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising as deep learning has seen very successful applications in the last years. DNNs have indeed revolutionized the field of computer vision especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open source deep learning framework to the TSC community where we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR UEA archive) and 12 multivariate time series datasets. By training 8730 deep learning models on 97 time series datasets, we propose the most exhaustive study of DNNs for TSC to date.", "Deep convolutional networks have proven to be very successful in learning task specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks follows mostly the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges, when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled ‘seed’ image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101, Caltech-256). While features learned with our approach cannot compete with class specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor.", "We consider Multiclass and Multilabel classification with extremely large number of classes, of which only few are labeled to each instance. In such setting, standard methods that have training, prediction cost linear to the number of classes become intractable. 
State-of-the-art methods thus aim to reduce the complexity by exploiting correlation between labels under assumption that the similarity between labels can be captured by structures such as low-rank matrix or balanced tree. However, as the diversity of labels increases in the feature space, structural assumption can be easily violated, which leads to degrade in the testing performance. In this work, we show that a margin-maximizing loss with l1 penalty, in case of Extreme Classification, yields extremely sparse solution both in primal and in dual without sacrificing the expressive power of predictor. We thus propose a Fully-Corrective Block-Coordinate Frank-Wolfe (FC-BCFW) algorithm that exploits both primal and dual sparsity to achieve a complexity sublinear to the number of primal and dual variables. A bi-stochastic search method is proposed to further improve the efficiency. In our experiments on both Multiclass and Multilabel problems, the proposed method achieves significant higher accuracy than existing approaches of Extreme Classification with very competitive training and prediction time.", "We propose a novel unsupervised learning approach to build features suitable for object detection and classification. The features are pre-trained on a large dataset without human annotation and later transferred via fine-tuning on a different, smaller and labeled dataset. The pre-training consists of solving jigsaw puzzles of natural images. To facilitate the transfer of features to other tasks, we introduce the context-free network (CFN), a siamese-ennead convolutional neural network. The features correspond to the columns of the CFN and they process image tiles independently (i.e., free of context). The later layers of the CFN then use the features to identify their geometric arrangement. Our experimental evaluations show that the learned features capture semantically relevant content. We pre-train the CFN on the training set of the ILSVRC2012 dataset and transfer the features on the combined training and validation set of Pascal VOC 2007 for object detection (via fast RCNN) and classification. These features outperform all current unsupervised features with (51.8 , ) for detection and (68.6 , ) for classification, and reduce the gap with supervised learning ( (56.5 , ) and (78.2 , ) respectively)." ] }
1908.03295
2967131314
We introduce a novel single-shot object detector to ease the imbalance of foreground-background class by suppressing the easy negatives while increasing the positives. To achieve this, we propose an Anchor Promotion Module (APM) which predicts the probability of each anchor as positive and adjusts their initial locations and shapes to promote both the quality and quantity of positive anchors. In addition, we design an efficient Feature Alignment Module (FAM) to extract aligned features for fitting the promoted anchors with the help of both the location and shape transformation information from the APM. We assemble the two proposed modules to the backbone of VGG-16 and ResNet-101 network with an encoder-decoder architecture. Extensive experiments on MS COCO well demonstrate our model performs competitively with alternative methods (40.0 mAP on set) and runs faster (28.6 ).
Cascaded Architecture. Cascaded architectures have been explored extensively for improving classification and refining locations. Viola and Jones @cite_25 trained a series of cascaded weak classifiers to form a strong region classifier for face detection. MR-CNN @cite_2 introduced an iterative bounding box regression by feeding the bounding boxes into RCNN several times to improve localization accuracy during inference. More recently, Cai et al. @cite_11 proposed the Cascade R-CNN, which achieves more accurate boxes through a sequence of detectors trained with increasing IoU thresholds. Cheng et al. @cite_12 resampled the hard positive detection boxes and applied an R-CNN to rescore them. Different from the above works, which focus on further improving the output detection results of two-stage methods, our framework aims to recognize the positive anchor boxes and promote the anchors for one-stage detection.
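To make the cascade idea above concrete, the following minimal Python sketch (not the code of any cited paper) illustrates a Viola–Jones-style attentional cascade: a sequence of increasingly strict stages, each of which may reject a candidate region early, so that only promising candidates reach the later, more expensive classifiers. The stage scoring functions and thresholds are placeholders chosen purely for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class CascadeStage:
    """One stage of an attentional cascade: a scorer plus a rejection threshold."""
    score: Callable[[Sequence[float]], float]  # stages are ordered cheap to expensive
    threshold: float                           # candidates scoring below this are rejected

def run_cascade(candidate: Sequence[float], stages: List[CascadeStage]) -> bool:
    """Return True only if the candidate survives every stage.

    Early stages are assumed cheap and tuned for high recall, so most easy
    negatives are discarded before the expensive later stages ever run.
    """
    for stage in stages:
        if stage.score(candidate) < stage.threshold:
            return False  # early rejection: later stages are never evaluated
    return True

if __name__ == "__main__":
    # Toy feature vectors standing in for image-region features (hypothetical values).
    candidates = [[0.1, 0.2, 0.1], [0.9, 0.8, 0.7], [0.6, 0.9, 0.2]]
    stages = [
        CascadeStage(score=lambda x: x[0], threshold=0.5),              # very cheap test
        CascadeStage(score=lambda x: x[1], threshold=0.5),              # slightly stricter
        CascadeStage(score=lambda x: sum(x) / len(x), threshold=0.6),   # "expensive" final stage
    ]
    for c in candidates:
        print(c, "accepted" if run_cascade(c, stages) else "rejected")
```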
{ "cite_N": [ "@cite_11", "@cite_25", "@cite_12", "@cite_2" ], "mid": [ "2473640056", "2912662889", "2559348937", "2951001760" ], "abstract": [ "Cascade has been widely used in face detection, where classifier with low computation cost can be firstly used to shrink most of the background while keeping the recall. The cascade in detection is popularized by seminal Viola-Jones framework and then widely used in other pipelines, such as DPM and CNN. However, to our best knowledge, most of the previous detection methods use cascade in a greedy manner, where previous stages in cascade are fixed when training a new stage. So optimizations of different CNNs are isolated. In this paper, we propose joint training to achieve end-to-end optimization for CNN cascade. We show that the back propagation algorithm used in training CNN can be naturally used in training CNN cascade. We present how jointly training can be conducted on naive CNN cascade and more sophisticated region proposal network (RPN) and fast R-CNN. Experiments on face detection benchmarks verify the advantages of the joint training.", "Cascade is a classic yet powerful architecture that has boosted performance on various tasks. However, how to introduce cascade to instance segmentation remains an open question. A simple combination of Cascade R-CNN and Mask R-CNN only brings limited gain. In exploring a more effective approach, we find that the key to a successful instance segmentation cascade is to fully leverage the reciprocal relationship between detection and segmentation. In this work, we propose a new framework, Hybrid Task Cascade (HTC), which differs in two important aspects: (1) instead of performing cascaded refinement on these two tasks separately, it interweaves them for a joint multi-stage processing; (2) it adopts a fully convolutional branch to provide spatial context, which can help distinguishing hard foreground from cluttered background. Overall, this framework can learn more discriminative features progressively while integrating complementary features together in each stage. Without bells and whistles, a single HTC obtains 38.4 and 1.5 improvement over a strong Cascade Mask R-CNN baseline on MSCOCO dataset. Moreover, our overall system achieves 48.6 mask AP on the test-challenge split, ranking 1st in the COCO 2018 Challenge Object Detection Task. Code is available at: this https URL.", "Object detection is a challenging task in visual understanding domain, and even more so if the supervision is to be weak. Recently, few efforts to handle the task without expensive human annotations is established by promising deep neural network. A new architecture of cascaded networks is proposed to learn a convolutional neural network (CNN) under such conditions. We introduce two such architectures, with either two cascade stages or three which are trained in an end-to-end pipeline. The first stage of both architectures extracts best candidate of class specific region proposals by training a fully convolutional network. In the case of the three stage architecture, the middle stage provides object segmentation, using the output of the activation maps of first stage. The final stage of both architectures is a part of a convolutional neural network that performs multiple instance learning on proposals extracted in the previous stage(s). 
Our experiments on the PASCAL VOC 2007, 2010, 2012 and large scale object datasets, ILSVRC 2013, 2014 datasets show improvements in the areas of weakly-supervised object detection, classification and localization.", "Object detection is a challenging task in visual understanding domain, and even more so if the supervision is to be weak. Recently, few efforts to handle the task without expensive human annotations is established by promising deep neural network. A new architecture of cascaded networks is proposed to learn a convolutional neural network (CNN) under such conditions. We introduce two such architectures, with either two cascade stages or three which are trained in an end-to-end pipeline. The first stage of both architectures extracts best candidate of class specific region proposals by training a fully convolutional network. In the case of the three stage architecture, the middle stage provides object segmentation, using the output of the activation maps of first stage. The final stage of both architectures is a part of a convolutional neural network that performs multiple instance learning on proposals extracted in the previous stage(s). Our experiments on the PASCAL VOC 2007, 2010, 2012 and large scale object datasets, ILSVRC 2013, 2014 datasets show improvements in the areas of weakly-supervised object detection, classification and localization." ] }
1908.03391
2967874181
Individual identification is essential to animal behavior and ecology research and is of significant importance for protecting endangered species. Red pandas, among the world's rarest animals, are currently identified mainly by visual inspection and microelectronic chips, which are costly and inefficient. Motivated by recent advancement in computer-vision-based animal identification, in this paper, we propose an automatic framework for identifying individual red pandas based on their face images. We implement the framework by exploring well-established deep learning models with necessary adaptation for effectively dealing with red panda images. Based on a database of red panda images constructed by ourselves, we evaluate the effectiveness of the proposed automatic individual red panda identification method. The evaluation results show the promising potential of automatically recognizing individual red pandas from their faces. We are going to release our database and model in the public domain to promote the research on automatic animal identification and particularly on the technique for protecting red pandas.
As summarized in Table , automatic individual identification methods have been studied for a number of species, including African penguins @cite_5 , northeast tigers @cite_12 , cattle @cite_11 , lemurs @cite_8 , dairy cows @cite_7 , great white sharks @cite_3 , pandas @cite_0 , primates @cite_18 , pigs @cite_9 , and ringed seals @cite_4 . Different species usually have largely different appearance; however, different individual animals of the same species may differ quite slightly in their appearance, and can be distinguished only by fine-grained detail. Almost all of the related studies are based on specific body parts of an animal to determine its identity. For those species that have salient characteristics in their appearance (e.g., the spots on the breast of penguins @cite_5 , and the rings on the body of ringed seals @cite_4 ), individual identification can be done by extracting and comparing their salient features. For those species that have subtle appearance differences between different individuals, such as pigs @cite_9 , lemurs @cite_8 , and pandas @cite_0 , the most common solution to individual identification is to focus on the body parts with relatively rich textures and extract discriminative features from the parts.
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_7", "@cite_8", "@cite_9", "@cite_3", "@cite_0", "@cite_5", "@cite_12", "@cite_11" ], "mid": [ "1580935273", "2765230784", "2791690647", "2963430954", "2521680879", "2791449606", "2569469748", "2583064257", "2118696714", "2068232285" ], "abstract": [ "Summary 1 The ability to identify individual animals is a critical aid in wildlife and conservation studies requiring information on behaviour, distribution, habitat use, population and life-history parameters. We present a computer-aided photo-identification technique that relies on natural marks to identify individuals of Carcharias taurus, a shark species that is critically endangered off the eastern Australian coast and considered globally vulnerable. The technique could potentially be applied to a range of species of similar form and bearing natural marks. 2 The use of natural marks for photo-identification is a non-invasive technique for identifying individual animals. As photo-identification databases grow larger, and their implementation spans several years, the historically used visual-matching processes lose accuracy and speed. A computerized pattern-matching system that requires initial user interaction to select the key features aids researchers by considerably reducing the time needed for identification of individuals. 3 Our method uses a two-dimensional affine transformation to compare two individuals in a commonly defined reference space. The methodology was developed using a database of 221 individually identifiable sharks that were photographically marked and rephotographed over 9 years, demonstrating both the efficacy of the technique and that the natural pigment marks of C. taurus are a reliable means of tracking individuals over several years. 4 Synthesis and applications. The identification of individual animals that are naturally marked with spots or similar patterns is achieved with an interactive pattern-matching system that uses an affine transformation to compare selected points in a single-user computer-aided interface. Our technique has been used successfully on C. taurus and we believe the methodology can be applied to other species of a similar form that have natural marks or patterns. The identification of individuals allows accurate tracking of their movements and distribution, and contributes to better population estimates for improved wildlife management and conservation planning.", "In order to monitor an animal population and to track individual animals in a non-invasive way, identification of individual animals based on certain distinctive characteristics is necessary. In this study, automatic image-based individual identification of the endangered Saimaa ringed seal (Phoca hispida saimensis) is considered. Ringed seals have a distinctive permanent pelage pattern that is unique to each individual. This can be used as a basis for the identification process. The authors propose a framework that starts with segmentation of the seal from the background and proceeds to various post-processing steps to make the pelage pattern more visible and the identification easier. Finally, two existing species independent individual identification methods are compared with a challenging data set of Saimaa ringed seal images. 
The results show that the segmentation and proposed post-processing steps increase the identification performance.", "Abstract Identification of individual livestock such as pigs and cows has become a pressing issue in recent years as intensification practices continue to be adopted and precise objective measurements are required (e.g. weight). Current best practice involves the use of RFID tags which are time-consuming for the farmer and distressing for the animal to fit. To overcome this, non-invasive biometrics are proposed by using the face of the animal. We test this in a farm environment, on 10 individual pigs using three techniques adopted from the human face recognition literature: Fisherfaces, the VGG-Face pre-trained face convolutional neural network (CNN) model and our own CNN model that we train using an artificially augmented data set. Our results show that accurate individual pig recognition is possible with accuracy rates of 96.7 on 1553 images. Class Activated Mapping using Grad-CAM is used to show the regions that our network uses to discriminate between pigs.", "We address the problem of identifying individual cetaceans from images showing the trailing edge of their fins. Given the trailing edge from an unknown individual, we produce a ranking of known individuals from a database. The nicks and notches along the trailing edge define an individual's unique signature. We define a representation based on integral curvature that is robust to changes in viewpoint and pose, and captures the pattern of nicks and notches in a local neighborhood at multiple scales. We explore two ranking methods that use this representation. The first uses a dynamic programming time-warping algorithm to align two representations, and interprets the alignment cost as a measure of similarity. This algorithm also exploits learned spatial weights to downweight matches from regions of unstable curvature. The second interprets the representation as a feature descriptor. Feature keypoints are defined at the local extrema of the representation. Descriptors for the set of known individuals are stored in a tree structure, which allows us to perform queries given the descriptors from an unknown trailing edge. We evaluate the top-k accuracy on two real-world datasets to demonstrate the effectiveness of the curvature representation, achieving top-1 accuracy scores of approximately 95 and 80 for bottlenose dolphins and humpback whales, respectively.", "This paper discusses the automated visual identification of individual great white sharks from dorsal fin imagery. We propose a computer vision photo ID system and report recognition results over a database of thousands of unconstrained fin images. To the best of our knowledge this line of work establishes the first fully automated contour-based visual ID system in the field of animal biometrics. The approach put forward appreciates shark fins as textureless, flexible and partially occluded objects with an individually characteristic shape. In order to recover animal identities from an image we first introduce an open contour stroke model, which extends multi-scale region segmentation to achieve robust fin detection. Secondly, we show that combinatorial, scale-space selective fingerprinting can successfully encode fin individuality. We then measure the species-specific distribution of visual individuality along the fin contour via an embedding into a global fin space'. 
Exploiting this domain, we finally propose a non-linear model for individual animal recognition and combine all approaches into a fine-grained multi-instance framework. We provide a system evaluation, compare results to prior work, and report performance and properties in detail.", "Abstract Photographic identification methods are of highly importance when it comes to reduce the animal's stress, pain and possible injuries during or after marking techniques and thus to increase the reliability of demographic parameter estimates. There is plenty of software available for photo-identification, allowing individual identification in capture-mark-recapture (CMR) methods using body patterns, spots and marks unique to each individual. However, these non-invasive methods have hardly ever been used with arthropods. In this study, APHIS (Automated PHoto Identification Suite) has been assessed as a software capable of identifying individuals in different samplings during catch-and-release sessions with dead specimens under laboratory conditions. For this individual identification, SPM (Spot Pattern Matching) and ITM (Image Template Matching) procedures were tested; achieving a success of 100 and 95.35 , respectively. In SPM, the software itself matched the specimens almost automatically in half of the cases. However, it resulted more time-consuming than ITM during the pre-processing of images. On the other hand, ITM saves time during this step and still is able to detect recaptures accurately, yet more time may be needed when selecting the recaptures from the candidate list. Thus, it can be attested that APHIS is a competent and efficient software regarding photo-identification of Rhynchophorus ferrugineus and species with similar and unique individual colour patterns in their pronotum.", "Ecologists commonly use photo-identification of individual animals to monitor the behaviour, state and health of a population, since it is a cost-effective technique that eliminates the need to physically capture and tag animals. With dolphins, the nicks and notches of the dorsal fin are typically used as the unique identifying features for each individual; however New Zealand common dolphins are relatively unmarked, so most of the population cannot be identified. Here, we investigate how computer vision can be used to extract information from the pigmentation patterns that are typically seen on adult common dolphin dorsal fins. We develop features that are relatively robust to changes in the fin orientation and compare the classification rates of 779 photos of 169 different adult common dolphins. Using pigmentation-based features, we correctly classified individuals 75 of the time, with our top-5 estimates containing the correct dolphin in 86 of the cases.", "Long-term research of known individuals is critical for understanding the demographic and evolutionary processes that influence natural populations. Current methods for individual identification of many animals include capture and tagging techniques and or researcher knowledge of natural variation in individual phenotypes. These methods can be costly, time-consuming, and may be impractical for larger-scale, population-level studies. Accordingly, for many animal lineages, long-term research projects are often limited to only a few taxa. Lemurs, a mammalian lineage endemic to Madagascar, are no exception. Long-term data needed to address evolutionary questions are lacking for many species. 
This is, at least in part, due to difficulties collecting consistent data on known individuals over long periods of time. Here, we present a new method for individual identification of lemurs (LemurFaceID). LemurFaceID is a computer-assisted facial recognition system that can be used to identify individual lemurs based on photographs. LemurFaceID was developed using patch-wise Multiscale Local Binary Pattern features and modified facial image normalization techniques to reduce the effects of facial hair and variation in ambient lighting on identification. We trained and tested our system using images from wild red-bellied lemurs (Eulemur rubriventer) collected in Ranomafana National Park, Madagascar. Across 100 trials, with different partitions of training and test sets, we demonstrate that the LemurFaceID can achieve 98.7 ± 1.81 accuracy (using 2-query image fusion) in correctly identifying individual lemurs. Our results suggest that human facial recognition techniques can be modified for identification of individual lemurs based on variation in facial patterns. LemurFaceID was able to identify individual lemurs based on photographs of wild individuals with a relatively high degree of accuracy. This technology would remove many limitations of traditional methods for individual identification. Once optimized, our system can facilitate long-term research of known individuals by providing a rapid, cost-effective, and accurate method for individual identification.", "From a set of images in a particular domain, labeled with part locations and class, we present a method to automatically learn a large and diverse set of highly discriminative intermediate features that we call Part-based One-vs.-One Features (POOFs). Each of these features specializes in discrimination between two particular classes based on the appearance at a particular part. We demonstrate the particular usefulness of these features for fine-grained visual categorization with new state-of-the-art results on bird species identification using the Caltech UCSD Birds (CUB) dataset and parity with the best existing results in face verification on the Labeled Faces in the Wild (LFW) dataset. Finally, we demonstrate the particular advantage of POOFs when training data is scarce.", "For species which bear unique markings, such as natural spot patterning, field work has become increasingly more reliant on visual identification to recognize and catalog particular specimens or to monitor individuals within populations. While many species of interest exhibit characteristic markings that in principle allow individuals to be identified from photographs, scientists are often faced with the task of matching observations against databases of hundreds or thousands of images. We present a novel technique for automated identification of manta rays (Manta alfredi and Manta birostris) by means of a pattern-matching algorithm applied to images of their ventral surface area. Automated visual identification has recently been developed for several species. However, such methods are typically limited to animals that can be photographed above water, or whose markings exhibit high contrast and appear in regular constellations. While manta rays bear natural patterning across their ventral surface, these patterns vary greatly in their size, shape, contrast, and spatial distribution. Our method is the first to have proven successful at achieving high matching accuracies on a large corpus of manta ray images taken under challenging underwater conditions. 
Our method is based on automated extraction and matching of keypoint features using the Scale-Invariant Feature Transform (SIFT) algorithm. In order to cope with the considerable variation in quality of underwater photographs, we also incorporate preprocessing and image enhancement steps. Furthermore, we use a novel pattern-matching approach that results in better accuracy than the standard SIFT approach and other alternative methods. We present quantitative evaluation results on a data set of 720 images of manta rays taken under widely different conditions. We describe a novel automated pattern representation and matching method that can be used to identify individual manta rays from photographs. The method has been incorporated into a website (mantamatcher.org) which will serve as a global resource for ecological and conservation research. It will allow researchers to manage and track sightings data to establish important life-history parameters as well as determine other ecological data such as abundance, range, movement patterns, and structure of manta ray populations across the world." ] }
1908.03391
2967874181
Individual identification is essential to animal behavior and ecology research and is of significant importance for protecting endangered species. Red pandas, among the world's rarest animals, are currently identified mainly by visual inspection and microelectronic chips, which are costly and inefficient. Motivated by recent advancement in computer-vision-based animal identification, in this paper, we propose an automatic framework for identifying individual red pandas based on their face images. We implement the framework by exploring well-established deep learning models with necessary adaptation for effectively dealing with red panda images. Based on a database of red panda images constructed by ourselves, we evaluate the effectiveness of the proposed automatic individual red panda identification method. The evaluation results show the promising potential of automatically recognizing individual red pandas from their faces. We are going to release our database and model in the public domain to promote the research on automatic animal identification and particularly on the technique for protecting red pandas.
Red pandas clearly belong to those species with only subtle appearance differences between individuals. Fortunately, their faces have relatively salient textures. According to Table , most methods for species without salient appearance differences are based on learned features. With learning-based models, researchers do not have to manually determine the exact parts that are helpful for identification. Inspired by these works, we build a deep neural network model for identifying individual red pandas from their face images. Compared with existing animal identification methods, ours is fully automatic. Almost all existing methods are based on pre-cropped pictures of specific body parts, such as the tailhead images of dairy cows @cite_7 and face images of pigs @cite_9 . In contrast, our method takes the image of a red panda as input and automatically detects its face, extracts features and matches the features to those enrolled in the gallery to determine its identity. In addition, to the best of our knowledge, this paper is the first attempt at image-based automatic individual identification of red pandas.
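As an illustration of the detect-then-embed-then-match pipeline described above, the following minimal NumPy example (not the authors' released code) matches a query face embedding against a gallery of enrolled embeddings using cosine similarity. The embeddings, identity labels and rejection threshold are made-up placeholders; in practice they would come from a face detector and a trained CNN feature extractor.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def identify(query: np.ndarray, gallery: dict, reject_threshold: float = 0.5):
    """Return the best-matching enrolled identity, or None if no match is close enough.

    `gallery` maps identity labels to enrolled embedding vectors.
    """
    best_id, best_sim = None, -1.0
    for identity, embedding in gallery.items():
        sim = cosine_similarity(query, embedding)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return (best_id, best_sim) if best_sim >= reject_threshold else (None, best_sim)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 128-D embeddings for three enrolled red pandas.
    gallery = {f"panda_{i}": rng.normal(size=128) for i in range(3)}
    # A query embedding simulated as a noisy copy of panda_1's enrolled embedding.
    query = gallery["panda_1"] + 0.1 * rng.normal(size=128)
    print(identify(query, gallery))
```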
{ "cite_N": [ "@cite_9", "@cite_7" ], "mid": [ "2791690647", "2125860240" ], "abstract": [ "Abstract Identification of individual livestock such as pigs and cows has become a pressing issue in recent years as intensification practices continue to be adopted and precise objective measurements are required (e.g. weight). Current best practice involves the use of RFID tags which are time-consuming for the farmer and distressing for the animal to fit. To overcome this, non-invasive biometrics are proposed by using the face of the animal. We test this in a farm environment, on 10 individual pigs using three techniques adopted from the human face recognition literature: Fisherfaces, the VGG-Face pre-trained face convolutional neural network (CNN) model and our own CNN model that we train using an artificially augmented data set. Our results show that accurate individual pig recognition is possible with accuracy rates of 96.7 on 1553 images. Class Activated Mapping using Grad-CAM is used to show the regions that our network uses to discriminate between pigs.", "The issue of recognizability of subjects in biometric identification is of particular interest to the designers of these systems. We have applied the concept of Doddington’s biometric menagerie to the area of facial recognition. We performed a series of tests for the presence of goats, lambs, and wolves on FRGC 2.0 color image data. The data for the subjects that appeared at the extreme end of these tests was then visually examined. Even a cursory comparison of images showed that for this set of data, some images fell into the defined menagerie categories. Our tests show the statistical existence of these animal classifications within the constraints of this set of FRGC 2.0 data using the baseline matching algorithm. Ultimately, these tests were limited by the image data set and matching algorithm used. For further confirmation of the existence of the menagerie, the analysis must be expanded to include different image sets and matching algorithms.." ] }
1908.03440
2967014930
In this paper, we propose a deep reinforcement learning (DRL) solution to the grasping problem using 2.5D images as the only source of information. In particular, we developed a simulated environment where a robot equipped with a vacuum gripper has the aim of reaching blocks with planar surfaces. These blocks can have different dimensions, shapes, position and orientation. Unity 3D allowed us to simulate a real-world setup, where a depth camera is placed in a fixed position and the stream of images is used by our policy network to learn how to solve the task. We explored different DRL algorithms and problem configurations. The experiments demonstrated the effectiveness of the proposed DRL algorithm applied to grasp tasks guided by visual depth camera inputs. When using the proper policy, the proposed method estimates a robot tool configuration that reaches the object surface with negligible position and orientation errors. This is, to the best of our knowledge, the first successful attempt of using 2.5D images only as of the input of a DRL algorithm, to solve the grasping problem regressing 3D world coordinates.
Deep reinforcement learning has been applied to solve several tasks, such as learning to play video games and robotics problems @cite_4 . In particular, it has been applied to grasping tasks with manipulator robots equipped with grippers, to locomotion tasks and also to humanoid robots @cite_12 @cite_3 . These problems are typically solved in a simulated environment. Several works obtained good results, such as @cite_10 , which simulates a robot and its working environment to develop a deep-reinforcement-learning-based solution to the grasping problem. Another recent work is @cite_17 , which simulates four complex dexterous manipulation tasks based on deep reinforcement learning. It uses a policy gradient method, in particular @cite_18 , in combination with an imitation learning algorithm @cite_9 that learns a policy through supervised learning to mimic the demonstrations of an expert.
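The combination of a policy-gradient objective with an imitation-learning term mentioned above can be sketched, purely for illustration, as a weighted sum of a behaviour-cloning loss on expert demonstrations and a REINFORCE-style loss on on-policy rollouts. The PyTorch snippet below uses made-up tensor shapes and a mixing weight `beta`; it is a schematic sketch under these assumptions, not the algorithm of the cited works.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    """Tiny policy over 4 discrete actions from an 8-D observation (toy sizes)."""
    def __init__(self, obs_dim: int = 8, n_actions: int = 4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, n_actions))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)  # action logits

policy = PolicyNet()
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

# Made-up batches standing in for (i) expert demonstrations and (ii) on-policy rollouts.
expert_obs = torch.randn(16, 8)
expert_act = torch.randint(0, 4, (16,))
rollout_obs = torch.randn(16, 8)
rollout_act = torch.randint(0, 4, (16,))
rollout_ret = torch.randn(16)            # returns / advantages from the rollouts

logits_expert = policy(expert_obs)
bc_loss = F.cross_entropy(logits_expert, expert_act)            # imitation (behaviour cloning) term

log_probs = F.log_softmax(policy(rollout_obs), dim=-1)
chosen_logp = log_probs.gather(1, rollout_act.unsqueeze(1)).squeeze(1)
pg_loss = -(chosen_logp * rollout_ret).mean()                   # REINFORCE-style policy-gradient term

beta = 0.5                                                      # mixing weight (illustrative)
loss = pg_loss + beta * bc_loss
opt.zero_grad()
loss.backward()
opt.step()
```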
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_9", "@cite_3", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2767506186", "2575705757", "2953326790", "2902125520", "2760057500", "2963713397", "2891076394" ], "abstract": [ "Deep learning techniques have shown success in learning from raw high-dimensional data in various applications. While deep reinforcement learning is recently gaining popularity as a method to train intelligent agents, utilizing deep learning in imitation learning has been scarcely explored. Imitation learning can be an efficient method to teach intelligent agents by providing a set of demonstrations to learn from. However, generalizing to situations that are not represented in the demonstrations can be challenging, especially in 3D environments. In this paper, we propose a deep imitation learning method to learn navigation tasks from demonstrations in a 3D environment. The supervised policy is refined using active learning in order to generalize to unseen situations. This approach is compared to two popular deep reinforcement learning techniques: deep-Q-networks and Asynchronous actor-critic (A3C). The proposed method as well as the reinforcement learning methods employ deep convolutional neural networks and learn directly from raw visual input. Methods for combining learning from demonstrations and experience are also investigated. This combination aims to join the generalization ability of learning by experience with the efficiency of learning by imitation. The proposed methods are evaluated on 4 navigation tasks in a 3D simulated environment. Navigation tasks are a typical problem that is relevant to many real applications. They pose the challenge of requiring demonstrations of long trajectories to reach the target and only providing delayed rewards (usually terminal) to the agent. The experiments show that the proposed method can successfully learn navigation tasks from raw visual input while learning from experience methods fail to learn an effective policy. Moreover, it is shown that active learning can significantly improve the performance of the initially learned policy using a small number of active samples.", "Reinforcement learning holds the promise of enabling autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, robotic applications of reinforcement learning often compromise the autonomy of the learning process in favor of achieving training times that are practical for real physical systems. This typically involves introducing hand-engineered policy representations and human-supplied demonstrations. Deep reinforcement learning alleviates this limitation by training general-purpose neural network policies, but applications of direct deep reinforcement learning algorithms have so far been restricted to simulated settings and relatively simple tasks, due to their apparent high sample complexity. In this paper, we demonstrate that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots. We demonstrate that the training times can be further reduced by parallelizing the algorithm across multiple robots which pool their policy updates asynchronously. 
Our experimental evaluation shows that our method can learn a variety of 3D manipulation skills in simulation and a complex door opening skill on real robots without any prior demonstrations or manually designed representations.", "Reinforcement learning holds the promise of enabling autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, robotic applications of reinforcement learning often compromise the autonomy of the learning process in favor of achieving training times that are practical for real physical systems. This typically involves introducing hand-engineered policy representations and human-supplied demonstrations. Deep reinforcement learning alleviates this limitation by training general-purpose neural network policies, but applications of direct deep reinforcement learning algorithms have so far been restricted to simulated settings and relatively simple tasks, due to their apparent high sample complexity. In this paper, we demonstrate that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots. We demonstrate that the training times can be further reduced by parallelizing the algorithm across multiple robots which pool their policy updates asynchronously. Our experimental evaluation shows that our method can learn a variety of 3D manipulation skills in simulation and a complex door opening skill on real robots without any prior demonstrations or manually designed representations.", "Deep reinforcement learning (RL) algorithms can learn complex robotic skills from raw sensory inputs, but have yet to achieve the kind of broad generalization and applicability demonstrated by deep learning methods in supervised domains. We present a deep RL method that is practical for real-world robotics tasks, such as robotic manipulation, and generalizes effectively to never-before-seen tasks and objects. In these settings, ground truth reward signals are typically unavailable, and we therefore propose a self-supervised model-based approach, where a predictive model learns to directly predict the future from raw sensory readings, such as camera images. At test time, we explore three distinct goal specification methods: designated pixels, where a user specifies desired object manipulation tasks by selecting particular pixels in an image and corresponding goal positions, goal images, where the desired goal state is specified with an image, and image classifiers, which define spaces of goal states. Our deep predictive models are trained using data collected autonomously and continuously by a robot interacting with hundreds of objects, without human supervision. We demonstrate that visual MPC can generalize to never-before-seen objects---both rigid and deformable---and solve a range of user-defined object manipulation tasks using the same model.", "While recent advances in deep reinforcement learning have allowed autonomous learning agents to succeed at a variety of complex tasks, existing algorithms generally require a lot of training data. One way to increase the speed at which agents are able to learn to perform tasks is by leveraging the input of human trainers. Although such input can take many forms, real-time, scalar-valued feedback is especially useful in situations where it proves difficult or impossible for humans to provide expert demonstrations. 
Previous approaches have shown the usefulness of human input provided in this fashion (e.g., the TAMER framework), but they have thus far not considered high-dimensional state spaces or employed the use of deep learning. In this paper, we do both: we propose Deep TAMER, an extension of the TAMER framework that leverages the representational power of deep neural networks in order to learn complex tasks in just a short amount of time with a human trainer. We demonstrate Deep TAMER's success by using it and just 15 minutes of human-provided feedback to train an agent that performs better than humans on the Atari game of Bowling - a task that has proven difficult for even state-of-the-art reinforcement learning methods.", "We propose a general deep reinforcement learning method and apply it to robot manipulation tasks. Our approach leverages demonstration data to assist a reinforcement learning agent in learning to solve a wide range of tasks, mainly previously unsolved. We train visuomotor policies end-to-end to learn a direct mapping from RGB camera inputs to joint velocities. Our experiments indicate that our reinforcement and imitation approach can solve contact-rich robot manipulation tasks that neither the state-of-the-art reinforcement nor imitation learning method can solve alone. We also illustrate that these policies achieved zero-shot sim2real transfer by training with large visual and dynamics variations.", "The reinforcement learning (RL) community has made great strides in designing algorithms capable of exceeding human performance on specific tasks. These algorithms are mostly trained one task at the time, each new task requiring to train a brand new agent instance. This means the learning algorithm is general, but each solution is not; each agent can only solve the one task it was trained on. In this work, we study the problem of learning to master not one but multiple sequentialdecision tasks at once. A general issue in multi-task learning is that a balance must be found between the needs of multiple tasks competing for the limited resources of a single learning system. Many learning algorithms can get distracted by certain tasks in the set of tasks to solve. Such tasks appear more salient to the learning process, for instance because of the density or magnitude of the in-task rewards. This causes the algorithm to focus on those salient tasks at the expense of generality. We propose to automatically adapt the contribution of each task to the agent’s updates, so that all tasks have a similar impact on the learning dynamics. This resulted in state of the art performance on learning to play all games in a set of 57 diverse Atari games. Excitingly, our method learned a single trained policy - with a single set of weights - that exceeds median human performance. To our knowledge, this was the first time a single agent surpassed human-level performance on this multi-task domain. The same approach also demonstrated state of the art performance on a set of 30 tasks in the 3D reinforcement learning platform DeepMind Lab." ] }
1908.03440
2967014930
In this paper, we propose a deep reinforcement learning (DRL) solution to the grasping problem using 2.5D images as the only source of information. In particular, we developed a simulated environment where a robot equipped with a vacuum gripper has the aim of reaching blocks with planar surfaces. These blocks can have different dimensions, shapes, position and orientation. Unity 3D allowed us to simulate a real-world setup, where a depth camera is placed in a fixed position and the stream of images is used by our policy network to learn how to solve the task. We explored different DRL algorithms and problem configurations. The experiments demonstrated the effectiveness of the proposed DRL algorithm applied to grasp tasks guided by visual depth camera inputs. When using the proper policy, the proposed method estimates a robot tool configuration that reaches the object surface with negligible position and orientation errors. This is, to the best of our knowledge, the first successful attempt of using 2.5D images only as of the input of a DRL algorithm, to solve the grasping problem regressing 3D world coordinates.
@cite_3 is an open-source perceptual and physics simulator. This work can be seen as a bridge between learning in a simulator and transfer learning. Its main goal is to facilitate transferring models trained in a simulated environment to the real world.
{ "cite_N": [ "@cite_3" ], "mid": [ "1884730573" ], "abstract": [ "This paper describes a 3D object perception and perceptual learning system developed for a complex artificial cognitive agent working in a restaurant scenario. This system, developed within the scope of the European project RACE, integrates detection, tracking, learning and recognition of tabletop objects. Interaction capabilities were also developed to enable a human user to take the role of instructor and teach new object categories. Thus, the system learns in an incremental and open-ended way from user-mediated experiences. Based on the analysis of memory requirements for storing both semantic and perceptual data, a dual memory approach, comprising a semantic memory and a perceptual memory, was adopted. The perceptual memory is the central data structure of the described perception and learning system. The goal of this paper is twofold: on one hand, we provide a thorough description of the developed system, starting with motivations, cognitive considerations and architecture design, then providing details on the developed modules, and finally presenting a detailed evaluation of the system; on the other hand, we emphasize the crucial importance of the Point Cloud Library (PCL) for developing such system.11This paper is a revised and extended version of Oliveira et?al. (2014). We describe an object perception and perceptual learning system.The system is able to detect, track and recognize tabletop objects.The system learns novel object categories in an open-ended fashion.The Point Cloud Library is used in nearly all modules of the system.The system was developed and used in the European project RACE." ] }
1908.03440
2967014930
In this paper, we propose a deep reinforcement learning (DRL) solution to the grasping problem using 2.5D images as the only source of information. In particular, we developed a simulated environment where a robot equipped with a vacuum gripper has the aim of reaching blocks with planar surfaces. These blocks can have different dimensions, shapes, position and orientation. Unity 3D allowed us to simulate a real-world setup, where a depth camera is placed in a fixed position and the stream of images is used by our policy network to learn how to solve the task. We explored different DRL algorithms and problem configurations. The experiments demonstrated the effectiveness of the proposed DRL algorithm applied to grasp tasks guided by visual depth camera inputs. When using the proper policy, the proposed method estimates a robot tool configuration that reaches the object surface with negligible position and orientation errors. This is, to the best of our knowledge, the first successful attempt of using 2.5D images only as of the input of a DRL algorithm, to solve the grasping problem regressing 3D world coordinates.
Although the previous works seem to obtain good results using deep reinforcement learning, many other works do not report such satisfactory results. A significant example is @cite_13 , which shows good results mainly on tasks with vector observations as input, while most tasks with visual observations perform poorly.
{ "cite_N": [ "@cite_13" ], "mid": [ "2342662072" ], "abstract": [ "Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning. Some notable examples include training agents to play Atari games based on raw pixel data and to acquire advanced manipulation skills using raw sensory inputs. However, it has been difficult to quantify progress in the domain of continuous control due to the lack of a commonly adopted benchmark. In this work, we present a benchmark suite of continuous control tasks, including classic tasks like cart-pole swing-up, tasks with very high state and action dimensionality such as 3D humanoid locomotion, tasks with partial observations, and tasks with hierarchical structure. We report novel findings based on the systematic evaluation of a range of implemented reinforcement learning algorithms. Both the benchmark and reference implementations are released at this https URL in order to facilitate experimental reproducibility and to encourage adoption by other researchers." ] }
1908.03440
2967014930
In this paper, we propose a deep reinforcement learning (DRL) solution to the grasping problem using 2.5D images as the only source of information. In particular, we developed a simulated environment where a robot equipped with a vacuum gripper has the aim of reaching blocks with planar surfaces. These blocks can have different dimensions, shapes, position and orientation. Unity 3D allowed us to simulate a real-world setup, where a depth camera is placed in a fixed position and the stream of images is used by our policy network to learn how to solve the task. We explored different DRL algorithms and problem configurations. The experiments demonstrated the effectiveness of the proposed DRL algorithm applied to grasp tasks guided by visual depth camera inputs. When using the proper policy, the proposed method estimates a robot tool configuration that reaches the object surface with negligible position and orientation errors. This is, to the best of our knowledge, the first successful attempt of using 2.5D images only as of the input of a DRL algorithm, to solve the grasping problem regressing 3D world coordinates.
Another problem in deep reinforcement learning, and deep learning in general, is hyper-parameter tuning. Deep reinforcement learning problems involve a huge number of hyper-parameters that affect the training process. It is necessary to tune hyper-parameters such as the learning rate, batch size, random seed and architecture of the policy networks, together with correct reward functions, which must be defined and normalized before being fed to the network. In @cite_0 the authors introduce a large comparison of state-of-the-art problems, algorithms and implementations to highlight how deep reinforcement learning is still highly sensitive to hyper-parameter variation.
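A common way to expose the sensitivity described above is simply to repeat training with the same hyper-parameters under different random seeds (and across a small grid of learning rates) and report the spread of final returns. The sketch below uses a stand-in `train()` function on a trivial noisy objective, purely to illustrate the bookkeeping; it is an assumed toy setup, not the evaluation protocol of the cited study.

```python
import random
import statistics

def train(learning_rate: float, seed: int, steps: int = 200) -> float:
    """Stand-in for an RL training run: returns a noisy 'final return' that
    depends on the learning rate and on the random seed (toy dynamics only)."""
    rng = random.Random(seed)
    value = 0.0
    for _ in range(steps):
        gradient = 1.0 - value + rng.gauss(0.0, 2.0)   # noisy pseudo-gradient
        value += learning_rate * gradient
    return -abs(1.0 - value)                            # closer to 1.0 is better

if __name__ == "__main__":
    seeds = range(5)
    for lr in (0.3, 0.03, 0.003):
        returns = [train(lr, s) for s in seeds]
        print(f"lr={lr:<6} mean={statistics.mean(returns):+.3f} "
              f"std={statistics.stdev(returns):.3f}")
```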
{ "cite_N": [ "@cite_0" ], "mid": [ "2591924527" ], "abstract": [ "Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this paper, we introduce a deep transfer learning scheme, called selective joint fine-tuning, for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. However, the source learning task does not use all existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those from the target learning task, and jointly fine-tune shared convolutional layers for both tasks. Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use such descriptors to search for a desired subset of training samples for the source learning task. Experiments demonstrate that our deep transfer learning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, and fine-grained classification problems (Oxford Flowers 102 and Stanford Dogs 120). In comparison to fine-tuning without a source domain, the proposed method can improve the classification accuracy by 2 - 10 using a single model. Codes and models are available at https: github.com ZYYSzj Selective-Joint-Fine-tuning." ] }
1908.03030
2966691903
We present an approach to accurately estimate high fidelity markerless 3D pose and volumetric reconstruction of human performance using only a small set of camera views ( @math ). Our method utilises a dual loss in a generative adversarial network that can yield improved performance in both reconstruction and pose estimate error. We use a deep prior implicitly learnt by the network trained over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions. Uniquely we use a multi-channel symmetric 3D convolutional encoder-decoder with a dual loss to enforce the learning of a latent embedding that enforces skeletal joint positions and a deep volumetric reconstruction of the performer. An extensive evaluation is performed with state of the art performance reported on three datasets: Human 3.6M, TotalCapture and TotalCaptureOutdoor. The method opens the possibility of high-end volumetric and pose performance capture in on-set and prosumer scenarios where time or cost prohibit a high witness camera count.
Super-resolution: The classical solution to image restoration and super-resolution was to combine multiple data sources ( multiple images obtained at sub-pixel misalignments @cite_6 , or self-similar patches within a single image @cite_31 @cite_33 ), and then incorporate these within a regularisation constraint such as total variation @cite_46 . Microscopy has applied super-resolution to volumetric data via depth of field @cite_34 , and to multi-spectral sensing data @cite_48 via sparse coding, a machine-learning-based super-resolution approach that learns the visual characteristics of the supplied training images and then applies the learnt model within an optimisation framework to enhance detail. More recently, as in other computer vision domains, convolutional neural network (CNN) autoencoders have been applied to image @cite_16 @cite_3 and video upscaling @cite_32 , while symmetric autoencoders have effectively learnt an image transformation between clean and synthetically noisy images @cite_41 . Similarly, Dong et al. @cite_12 trained end-to-end networks to model image up-scaling, or super-resolution.
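For concreteness, the end-to-end CNN up-scaling idea attributed to Dong et al. above can be sketched as a bicubic upsample followed by a few convolutional layers. The PyTorch fragment below is an illustrative approximation with arbitrary layer sizes and an added residual connection; it is an assumed sketch, not the published SRCNN architecture or weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRNet(nn.Module):
    """Bicubically upsample the low-resolution input, then refine it with a
    small stack of convolutions (SRCNN-like, sizes chosen only for illustration)."""
    def __init__(self, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(32, 16, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(16, 3, kernel_size=5, padding=2),
        )

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        up = F.interpolate(low_res, scale_factor=self.scale, mode="bicubic",
                           align_corners=False)
        return up + self.body(up)   # predict a residual on top of the bicubic guess

if __name__ == "__main__":
    model = TinySRNet(scale=2)
    low_res = torch.rand(1, 3, 32, 32)   # dummy low-resolution image batch
    high_res = model(low_res)
    print(high_res.shape)                 # torch.Size([1, 3, 64, 64])
```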
{ "cite_N": [ "@cite_31", "@cite_33", "@cite_41", "@cite_48", "@cite_32", "@cite_6", "@cite_16", "@cite_3", "@cite_46", "@cite_34", "@cite_12" ], "mid": [ "2914829730", "2534320940", "2523714292", "2963470893", "2892998444", "2220123745", "2137290314", "2964277374", "2503186844", "2140257560", "2735224642" ], "abstract": [ "Image Super-Resolution: Historical Overview and Future Challenges, J. Yang and T. Huang Introduction to Super-Resolution Notations Techniques for Super-Resolution Challenge issues for Super-Resolution Super-Resolution Using Adaptive Wiener Filters, R.C. Hardie Introduction Observation Model AWF SR Algorithms Experimental Results Conclusions Acknowledgments Locally Adaptive Kernel Regression for Space-Time Super-Resolution, H. Takeda and P. Milanfar Introduction Adaptive Kernel Regression Examples Conclusion AppendiX Super-Resolution With Probabilistic Motion Estimation, M. Protter and M. Elad Introduction Classic Super-Resolution: Background The Proposed Algorithm Experimental Validation Summary Spatially Adaptive Filtering as Regularization in Inverse Imaging, A. Danielyan, A. Foi, V. Katkovnik, and K. Egiazarian Introduction Iterative filtering as regularization Compressed sensing Super-resolution Conclusions Registration for Super-Resolution, P. Vandewalle, L. Sbaiz, and M. Vetterli Camera Model What Is Resolution? Super-Resolution as a Multichannel Sampling Problem Registration of Totally Aliased Signals Registration of Partially Aliased Signals Conclusions Towards Super-Resolution in the Presence of Spatially Varying Blur, M. Sorel, F. Sroubek and J. Flusser Introduction Defocus and Optical Aberrations Camera Motion Blur Scene Motion Algorithms Conclusion Acknowledgments Toward Robust Reconstruction-Based Super-Resolution, M. Tanaka and M. Okutomi Introduction Overviews Robust SR Reconstruction with Pixel Selection Robust Super-Resolution Using MPEG Motion Vectors Robust Registration for Super-Resolution Conclusions Multi-Frame Super-Resolution from a Bayesian Perspective, L. Pickup, S. Roberts, A. Zisserman and D. Capel The Generative Model Where Super-Resolution Algorithms Go Wrong Simultaneous Super-Resolution Bayesian Marginalization Concluding Remarks Variational Bayesian Super Resolution Reconstruction, S. Derin Babacan, R. Molina, and A.K. Katsaggelos Introduction Problem Formulation Bayesian Framework for Super Resolution Bayesian Inference Variational Bayesian Inference Using TV Image Priors Experiments Estimation of Motion and Blur Conclusions Acknowledgements Pattern Recognition Techniques for Image Super-Resolution, K. Ni and T.Q. Nguyen Introduction Nearest Neighbor Super-Resolution Markov Random Fields and Approximations Kernel Machines for Image Super-Resolution Multiple Learners and Multiple Regressions Design Considerations and Examples Remarks Glossary Super-Resolution Reconstruction of Multi-Channel Images, O.G. Sezer and Y. Altunbasak Introduction Notation Image Acquisition Model Subspace Representation Reconstruction Algorithm Experiments & Discussions Conclusion New Applications of Super-Resolution in Medical Imaging, M.D.Robinson, S.J. Chiu, C.A. Toth, J.A. Izatt, J.Y. Lo, and S. Farsiu Introduction The Super-Resolution Framework New Medical Imaging Applications Conclusion Acknowledgment Practicing Super-Resolution: What Have We Learned? N. 
Bozinovic Abstract Introduction MotionDSP: History and Concepts Markets and Applications Technology Results Lessons Learned Conclusions", "Methods for super-resolution can be broadly classified into two families of methods: (i) The classical multi-image super-resolution (combining images obtained at subpixel misalignments), and (ii) Example-Based super-resolution (learning correspondence between low and high resolution image patches from a database). In this paper we propose a unified framework for combining these two families of methods. We further show how this combined approach can be applied to obtain super resolution from as little as a single image (with no database or prior examples). Our approach is based on the observation that patches in a natural image tend to redundantly recur many times inside the image, both within the same scale, as well as across different scales. Recurrence of patches within the same image scale (at subpixel misalignments) gives rise to the classical super-resolution, whereas recurrence of patches across different scales of the same image gives rise to example-based super-resolution. Our approach attempts to recover at each pixel its best possible resolution increase based on its patch redundancy within and across scales.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. 
The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.", "The performance of single image super-resolution has achieved significant improvement by utilizing deep convolutional neural networks (CNNs). The features in deep CNN contain different types of information which make different contributions to image reconstruction. However, most CNN-based models lack discriminative ability for different types of information and deal with them equally, which results in the representational capacity of the models being limited. On the other hand, as the depth of neural networks grows, the long-term information coming from preceding layers is easy to be weaken or lost in late layers, which is adverse to super-resolving image. To capture more informative features and maintain long-term information for image super-resolution, we propose a channel-wise and spatial feature modulation (CSFM) network in which a sequence of feature-modulation memory (FMM) modules is cascaded with a densely connected structure to transform low-resolution features to high informative features. In each FMM module, we construct a set of channel-wise and spatial attention residual (CSAR) blocks and stack them in a chain structure to dynamically modulate multi-level features in a global-and-local manner. This feature modulation strategy enables the high contribution information to be enhanced and the redundant information to be suppressed. Meanwhile, for long-term information persistence, a gated fusion (GF) node is attached at the end of the FMM module to adaptively fuse hierarchical features and distill more effective information via the dense skip connections and the gating mechanism. Extensive quantitative and qualitative evaluations on benchmark datasets illustrate the superiority of our proposed method over the state-of-the-art methods.", "It is difficult to design an image super-resolution algorithm that can not only preserve image edges and texture structure but also keep lower computational complexity. A new super-resolution model based on sparsity regularization in Bayesian framework is presented. The fidelity term restricts the underlying image to be consistent with the observation image in terms of the image degradation model. 
The sparsity regularization term constraints the underlying image with a sparse representation in a proper dictionary. The non-local self-similarity is also introduced into the model. In order to make the sparse domain better represent the underlying image, we use high-frequency features extracted from the underlying image patches for sparse representation, which increases the effectiveness of sparse modeling. The proposed method learns dictionary directly from the estimated high-resolution image patches (extracted features), and the dictionary learning and the super-resolution can be fused together naturally into one coherent and iterated process. Such a self-learning method has stronger adaptability to different images and reduces dictionary training time. Experiments demonstrate the effectiveness of the proposed method. Compared with some state-of-the-art methods, the proposed method can better preserve image edges and texture details.", "Super-resolution reconstruction proposes a fusion of several low-quality images into one higher quality result with better optical resolution. Classic super-resolution techniques strongly rely on the availability of accurate motion estimation for this fusion task. When the motion is estimated inaccurately, as often happens for nonglobal motion fields, annoying artifacts appear in the super-resolved outcome. Encouraged by recent developments on the video denoising problem, where state-of-the-art algorithms are formed with no explicit motion estimation, we seek a super-resolution algorithm of similar nature that will allow processing sequences with general motion patterns. In this paper, we base our solution on the Nonlocal-Means (NLM) algorithm. We show how this denoising method is generalized to become a relatively simple super-resolution algorithm with no explicit motion estimation. Results on several test movies show that the proposed method is very successful in providing super-resolution on general sequences.", "Recent years have witnessed the unprecedented success of deep convolutional neural networks (CNNs) in single image super-resolution (SISR). However, existing CNN-based SISR methods mostly assume that a low-resolution (LR) image is bicubicly downsampled from a high-resolution (HR) image, thus inevitably giving rise to poor performance when the true degradation does not follow this assumption. Moreover, they lack scalability in learning a single model to nonblindly deal with multiple degradations. To address these issues, we propose a general framework with dimensionality stretching strategy that enables a single convolutional super-resolution network to take two key factors of the SISR degradation process, i.e., blur kernel and noise level, as input. Consequently, the super-resolver can handle multiple and even spatially variant degradations, which significantly improves the practicability. Extensive experimental results on synthetic and real LR images show that the proposed convolutional super-resolution network not only can produce favorable results on multiple degradations but also is computationally efficient, providing a highly effective and scalable solution to practical SISR applications.", "Depth image super-resolution is an extremely challenging task due to the information loss in sub-sampling. Deep convolutional neural network has been widely applied to color image super-resolution. Quite surprisingly, this success has not been matched to depth super-resolution. 
This is mainly due to the inherent difference between color and depth images. In this paper, we bridge up the gap and extend the success of deep convolutional neural network to depth super-resolution. The proposed deep depth super-resolution method learns the mapping from a low-resolution depth image to a high-resolution one in an end-to-end style. Furthermore, to better regularize the learned depth map, we propose to exploit the depth field statistics and the local correlation between depth image and color image. These priors are integrated in an energy minimization formulation, where the deep neural network learns the unary term, the depth field statistics works as global model constraint and the color-depth correlation is utilized to enforce the local structure in depth image. Extensive experiments on various depth super-resolution benchmark datasets show that our method outperforms the state-of-the-art depth image super-resolution methods with a margin.", "Nearly all super-resolution algorithms are based on the fundamental constraints that the super-resolution image should generate low resolution input images when appropriately warped and down-sampled to model the image formation process. (These reconstruction constraints are normally combined with some form of smoothness prior to regularize their solution.) We derive a sequence of analytical results which show that the reconstruction constraints provide less and less useful information as the magnification factor increases. We also validate these results empirically and show that, for large enough magnification factors, any smoothness prior leads to overly smooth results with very little high-frequency content. Next, we propose a super-resolution algorithm that uses a different kind of constraint in addition to the reconstruction constraints. The algorithm attempts to recognize local features in the low-resolution images and then enhances their resolution in an appropriate manner. We call such a super-resolution algorithm a hallucination or reconstruction algorithm. We tried our hallucination algorithm on two different data sets, frontal images of faces and printed Roman text. We obtained significantly better results than existing reconstruction-based algorithms, both qualitatively and in terms of RMS pixel error.", "Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge." ] }
1908.03030
2966691903
We present an approach to accurately estimate high fidelity markerless 3D pose and volumetric reconstruction of human performance using only a small set of camera views ( @math ). Our method utilises a dual loss in a generative adversarial network that can yield improved performance in both reconstruction and pose estimate error. We use a deep prior implicitly learnt by the network trained over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions. Uniquely we use a multi-channel symmetric 3D convolutional encoder-decoder with a dual loss to enforce the learning of a latent embedding that enforces skeletal joint positions and a deep volumetric reconstruction of the performer. An extensive evaluation is performed with state of the art performance reported on three datasets: Human 3.6M, TotalCapture and TotalCaptureOutdoor. The method opens the possibility of high-end volumetric and pose performance capture in on-set and prosumer scenarios where time or cost prohibit a high witness camera count.
Bottom-up pose estimation is driven by image parsing to isolate components: Srinivasan @cite_42 used graph-cuts to parse a subset of salient shapes from an image and group these into a model of a person. Ren @cite_39 recursively split Canny edge contours into segments, classifying each as a putative body part using cues such as parallelism. Ren @cite_1 also used Bag of Visual Words for implicit pose estimation as part of a pose similarity system for dance video retrieval. More recently, studies have begun to leverage the power of convolutional neural networks, following in the wake of the eye-opening results of Krizhevsky @cite_15 on image recognition. In DeepPose, Toshev @cite_24 used a cascade of convolutional neural networks to estimate 2D pose in images. Descriptors learnt by a CNN have also been used in 2D pose estimation from very low-resolution images @cite_52 . Elhayek @cite_50 used MVV with a Convnet to produce 2D pose estimates, while Rhodin @cite_55 minimised an edge energy inspired by volume ray casting to deduce the 3D pose.
{ "cite_N": [ "@cite_55", "@cite_42", "@cite_1", "@cite_52", "@cite_39", "@cite_24", "@cite_50", "@cite_15" ], "mid": [ "1896788142", "2890816492", "2561080715", "2792747672", "2769237672", "2626687521", "2340779594", "2079846689" ], "abstract": [ "In this paper we present a convolutional neural network (CNN)-based model for human head pose estimation in low-resolution multi-modal RGB-D data. We pose the problem as one of classification of human gazing direction. We further fine-tune a regressor based on the learned deep classifier. Next we combine the two models (classification and regression) to estimate approximate regression confidence. We present state-of-the-art results in datasets that span the range of high-resolution human robot interaction (close up faces plus depth information) data to challenging low resolution outdoor surveillance data. We build upon our robust head-pose estimation and further introduce a new visual attention model to recover interaction with the environment . Using this probabilistic model, we show that many higher level scene understanding like human-human scene interaction detection can be achieved. Our solution runs in real-time on commercial hardware.", "In this work we integrate ideas from surface-based modeling with neural synthesis: we propose a combination of surface-based pose estimation and deep generative models that allows us to perform accurate pose transfer, i.e. synthesize a new image of a person based on a single image of that person and the image of a pose donor. We use a dense pose estimation system that maps pixels from both images to a common surface-based coordinate system, allowing the two images to be brought in correspondence with each other. We inpaint and refine the source image intensities in the surface coordinate system, prior to warping them onto the target pose. These predictions are fused with those of a convolutional predictive module through a neural synthesis module allowing for training the whole pipeline jointly end-to-end, optimizing a combination of adversarial and perceptual losses. We show that dense pose estimation is a substantially more powerful conditioning input than landmark-, or mask-based alternatives, and report systematic improvements over state of the art generators on DeepFashion and MVC datasets.", "Human pose as a query modality is an alternative and rich experience for image and video retrieval. It has interesting retrieval applications in domains such as sports and dance databases. In this work we propose two novel ways for representing the image of a person striking a pose, one looking for parts and other looking at the whole image. These representations are then used for retrieval. Both the representations are obtained using deep learning methods.In the first method, we make the following contributions: (a) We introduce deep poselets' for pose-sensitive detection of various body parts, built on convolutional neural network (CNN) features. These deep poselets significantly outperform previous instantiations of Berkeley poselets [6], and (b) Using these detector responses, we construct a pose representation that is suitable for pose search, and show that pose retrieval performance is on par with the previous methods. 
In the second method, we make the following contributions: (a) We design an optimized neural network which maps the input image to a very low dimensional space where similar poses are close by and dissimilar poses are farther away, and (b) We show that pose retrieval system using these low dimensional representation is on par with the deep poselet representation and is on par with the previous methods.The previous works with which the above two methods are compared include bag of visual words [44], Berkeley poselets[6] and human pose estimation algorithms [52]. All the methods are quantitatively evaluated on a large dataset of images built from a number of standard benchmarks together with frames from Hollywood movies. For human pose search, two novel pose descriptors (D-I and D-II) are proposed.D-I pools scores of deep poselets (body part detectors in a specific pose).D-II, maps an image such that similar and dissimilar pose images are separated.Both descriptors use CNNs and do not need an explicit human pose estimation.These have been shown to outperform other pose descriptors by large margins.", "We propose an end-to-end architecture for joint 2D and 3D human pose estimation in natural images. Key to our approach is the generation and scoring of a number of pose proposals per image, which allows us to predict 2D and 3D poses of multiple people simultaneously. Hence, our approach does not require an approximate localization of the humans for initialization. Our Localization-Classification-Regression architecture, named LCR-Net, contains 3 main components: 1) the pose proposal generator that suggests candidate poses at different locations in the image; 2) a classifier that scores the different pose proposals; and 3) a regressor that refines pose proposals both in 2D and 3D. All three stages share the convolutional feature layers and are trained jointly. The final pose estimation is obtained by integrating over neighboring pose hypotheses, which is shown to improve over a standard non maximum suppression algorithm. Our method recovers full-body 2D and 3D poses, hallucinating plausible body parts when the persons are partially occluded or truncated by the image boundary. Our approach significantly outperforms the state of the art in 3D pose estimation on Human3.6M, a controlled environment. Moreover, it shows promising results on real images for both single and multi-person subsets of the MPII 2D pose benchmark.", "In this work, we address the problem of 3D human pose estimation from a sequence of 2D human poses. Although the recent success of deep networks has led many state-of-the-art methods for 3D pose estimation to train deep networks end-to-end to predict from images directly, the top-performing approaches have shown the effectiveness of dividing the task of 3D pose estimation into two steps: using a state-of-the-art 2D pose estimator to estimate the 2D pose from images and then mapping them into 3D space. They also showed that a low-dimensional representation like 2D locations of a set of joints can be discriminative enough to estimate 3D pose with high accuracy. However, estimation of 3D pose for individual frames leads to temporally incoherent estimates due to independent error in each frame causing jitter. Therefore, in this work we utilize the temporal information across a sequence of 2D joint locations to estimate a sequence of 3D poses. 
We designed a sequence-to-sequence network composed of layer-normalized LSTM units with shortcut connections connecting the input to the output on the decoder side and imposed temporal smoothness constraint during training. We found that the knowledge of temporal consistency improves the best reported result on Human3.6M dataset by approximately (12.2 ) and helps our network to recover temporally consistent 3D poses over a sequence of images even when the 2D pose detector fails.", "We propose a novel approach to 3D human pose estimation from a single depth map. Recently, convolutional neural network (CNN) has become a powerful paradigm in computer vision. Many of computer vision tasks have benefited from CNNs, however, the conventional approach to directly regress 3D body joint locations from an image does not yield a noticeably improved performance. In contrast, we formulate the problem as estimating per-voxel likelihood of key body joints from a 3D occupancy grid. We argue that learning a mapping from volumetric input to volumetric output with 3D convolution consistently improves the accuracy when compared to learning a regression from depth map to 3D joint coordinates. We propose a two-stage approach to reduce the computational overhead caused by volumetric representation and 3D convolution: Holistic 2D prediction and Local 3D prediction. In the first stage, Planimetric Network (P-Net) estimates per-pixel likelihood for each body joint in the holistic 2D space. In the second stage, Volumetric Network (V-Net) estimates the per-voxel likelihood of each body joints in the local 3D space around the 2D estimations of the first stage, effectively reducing the computational cost. Our model outperforms existing methods by a large margin in publicly available datasets.", "Human 3D pose estimation from a single image is a challenging task with numerous applications. Convolutional Neural Networks (CNNs) have recently achieved superior performance on the task of 2D pose estimation from a single image, by training on images with 2D annotations collected by crowd sourcing. This suggests that similar success could be achieved for direct estimation of 3D poses. However, 3D poses are much harder to annotate, and the lack of suitable annotated training images hinders attempts towards end-toend solutions. To address this issue, we opt to automatically synthesize training images with ground truth pose annotations. We find that pose space coverage and texture diversity are the key ingredients for the effectiveness of synthetic training data. We present a fully automatic, scalable approach that samples the human pose space for guiding the synthesis procedure and extracts clothing textures from real images. We demonstrate that CNNs trained with our synthetic images out-perform those trained with real photos on 3D pose estimation tasks.", "We present a novel method for accurate marker-less capture of articulated skeleton motion of several subjects in general scenes, indoors and outdoors, even from input filmed with as few as two cameras. Our approach unites a discriminative image-based joint detection method with a model-based generative motion tracking algorithm through a combined pose optimization energy. The discriminative part-based pose detection method, implemented using Convolutional Networks (ConvNet), estimates unary potentials for each joint of a kinematic skeleton model. 
These unary potentials are used to probabilistically extract pose constraints for tracking by using weighted sampling from a pose posterior guided by the model. In the final energy, these constraints are combined with an appearance-based model-to-image similarity term. Poses can be computed very efficiently using iterative local optimization, as ConvNet detection is fast, and our formulation yields a combined pose estimation energy with analytic derivatives. In combination, this enables to track full articulated joint angles at state-of-the-art accuracy and temporal stability with a very low number of cameras." ] }
1908.03030
2966691903
We present an approach to accurately estimate high fidelity markerless 3D pose and volumetric reconstruction of human performance using only a small set of camera views ( @math ). Our method utilises a dual loss in a generative adversarial network that can yield improved performance in both reconstruction and pose estimate error. We use a deep prior implicitly learnt by the network trained over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions. Uniquely we use a multi-channel symmetric 3D convolutional encoder-decoder with a dual loss to enforce the learning of a latent embedding that enforces skeletal joint positions and a deep volumetric reconstruction of the performer. An extensive evaluation is performed with state of the art performance reported on three datasets: Human 3.6M, TotalCapture and TotalCaptureOutdoor. The method opens the possibility of high-end volumetric and pose performance capture in on-set and prosumer scenarios where time or cost prohibit a high witness camera count.
More recently, given the success and accuracy of 2D joint estimation @cite_53 , several works lift 2D detections to 3D using learning or geometric reasoning, aiming to recover the missing depth dimension in the images. Sanzari @cite_7 estimates the location of 2D joints before predicting 3D pose using the appearance and probable 3D pose of the discovered parts with a hierarchical Bayesian model, while Zhou @cite_2 integrates 2D, 3D and temporal information to account for uncertainties in the data. The challenge of estimating 3D human pose from MVV is currently less explored, generally casting 3D pose estimation as a coordinate regression task, with the target output being the spatial @math coordinates of a joint with respect to a known root node such as the pelvis. Trumble @cite_43 used a flattened MVV-based spherical histogram with a 2D convnet to estimate pose, while Pavlakos @cite_5 used a simple volumetric representation in a 3D convnet for pose estimation, and Wei @cite_18 performed related work aligning pairs of joints to estimate 3D human pose. Differently, Huang @cite_51 constructed a 4-D mesh of the subject from video reconstruction to estimate the 3D pose.
{ "cite_N": [ "@cite_18", "@cite_7", "@cite_53", "@cite_43", "@cite_2", "@cite_5", "@cite_51" ], "mid": [ "2626687521", "2557698284", "2769237672", "2554247908", "2799870331", "2803914169", "2612706635" ], "abstract": [ "We propose a novel approach to 3D human pose estimation from a single depth map. Recently, convolutional neural network (CNN) has become a powerful paradigm in computer vision. Many of computer vision tasks have benefited from CNNs, however, the conventional approach to directly regress 3D body joint locations from an image does not yield a noticeably improved performance. In contrast, we formulate the problem as estimating per-voxel likelihood of key body joints from a 3D occupancy grid. We argue that learning a mapping from volumetric input to volumetric output with 3D convolution consistently improves the accuracy when compared to learning a regression from depth map to 3D joint coordinates. We propose a two-stage approach to reduce the computational overhead caused by volumetric representation and 3D convolution: Holistic 2D prediction and Local 3D prediction. In the first stage, Planimetric Network (P-Net) estimates per-pixel likelihood for each body joint in the holistic 2D space. In the second stage, Volumetric Network (V-Net) estimates the per-voxel likelihood of each body joints in the local 3D space around the 2D estimations of the first stage, effectively reducing the computational cost. Our model outperforms existing methods by a large margin in publicly available datasets.", "This paper addresses the problem of 3D human pose estimation from a single image. We follow a standard two-step pipeline by first detecting the 2D position of the N body joints, and then using these observations to infer 3D pose. For the first step, we use a recent CNN-based detector. For the second step, most existing approaches perform 2N-to-3N regression of the Cartesian joint coordinates. We show that more precise pose estimates can be obtained by representing both the 2D and 3D human poses using NxN distance matrices, and formulating the problem as a 2D-to-3D distance matrix regression. For learning such a regressor we leverage on simple Neural Network architectures, which by construction, enforce positivity and symmetry of the predicted matrices. The approach has also the advantage to naturally handle missing observations and allowing to hypothesize the position of non-observed joints. Quantitative results on Humaneva and Human3.6M datasets demonstrate consistent performance gains over state-of-the-art. Qualitative evaluation on the images in-the-wild of the LSP dataset, using the regressor learned on Human3.6M, reveals very promising generalization results.", "In this work, we address the problem of 3D human pose estimation from a sequence of 2D human poses. Although the recent success of deep networks has led many state-of-the-art methods for 3D pose estimation to train deep networks end-to-end to predict from images directly, the top-performing approaches have shown the effectiveness of dividing the task of 3D pose estimation into two steps: using a state-of-the-art 2D pose estimator to estimate the 2D pose from images and then mapping them into 3D space. They also showed that a low-dimensional representation like 2D locations of a set of joints can be discriminative enough to estimate 3D pose with high accuracy. However, estimation of 3D pose for individual frames leads to temporally incoherent estimates due to independent error in each frame causing jitter. 
Therefore, in this work we utilize the temporal information across a sequence of 2D joint locations to estimate a sequence of 3D poses. We designed a sequence-to-sequence network composed of layer-normalized LSTM units with shortcut connections connecting the input to the output on the decoder side and imposed temporal smoothness constraint during training. We found that the knowledge of temporal consistency improves the best reported result on Human3.6M dataset by approximately (12.2 ) and helps our network to recover temporally consistent 3D poses over a sequence of images even when the 2D pose detector fails.", "This paper addresses the challenge of 3D human pose estimation from a single color image. Despite the general success of the end-to-end learning paradigm, top performing approaches employ a two-step solution consisting of a Convolutional Network (ConvNet) for 2D joint localization and a subsequent optimization step to recover 3D pose. In this paper, we identify the representation of 3D pose as a critical issue with current ConvNet approaches and make two important contributions towards validating the value of end-to-end learning for this task. First, we propose a fine discretization of the 3D space around the subject and train a ConvNet to predict per voxel likelihoods for each joint. This creates a natural representation for 3D pose and greatly improves performance over the direct regression of joint coordinates. Second, to further improve upon initial estimates, we employ a coarse-to-fine prediction scheme. This step addresses the large dimensionality increase and enables iterative refinement and repeated processing of the image features. The proposed approach outperforms all state-of-the-art methods on standard benchmarks achieving a relative error reduction greater than 30 on average. Additionally, we investigate using our volumetric representation in a related architecture which is suboptimal compared to our end-to-end approach, but is of practical interest, since it enables training when no image with corresponding 3D groundtruth is available, and allows us to present compelling results for in-the-wild images.", "We propose a method for estimating 3D human poses from single images or video sequences. The task is challenging because: (a) many 3D poses can have similar 2D pose projections which makes the lifting ambiguous, and (b) current 2D joint detectors are not accurate which can cause big errors in 3D estimates. We represent 3D poses by a sparse combination of bases which encode structural pose priors to reduce the lifting ambiguity. This prior is strengthened by adding limb length constraints. We estimate the 3D pose by minimizing an @math L 1 norm measurement error between the 2D pose and the 3D pose because it is less sensitive to inaccurate 2D poses. We modify our algorithm to output @math K 3D pose candidates for an image, and for videos, we impose a temporal smoothness constraint to select the best sequence of 3D poses from the candidates. We demonstrate good results on 3D pose estimation from static images and improved performance by selecting the best 3D pose from the @math K proposals. Our results on video sequences also show improvements (over static images) of roughly 15 .", "In this paper, we propose a two-stage depth ranking based method (DRPose3D) to tackle the problem of 3D human pose estimation. 
Instead of accurate 3D positions, the depth ranking can be identified by human intuitively and learned using the deep neural network more easily by solving classification problems. Moreover, depth ranking contains rich 3D information. It prevents the 2D-to-3D pose regression in two-stage methods from being ill-posed. In our method, firstly, we design a Pairwise Ranking Convolutional Neural Network (PRCNN) to extract depth rankings of human joints from images. Secondly, a coarse-to-fine 3D Pose Network(DPNet) is proposed to estimate 3D poses from both depth rankings and 2D human joint locations. Additionally, to improve the generality of our model, we introduce a statistical method to augment depth rankings. Our approach outperforms the state-of-the-art methods in the Human3.6M benchmark for all three testing protocols, indicating that depth ranking is an essential geometric feature which can be learned to improve the 3D pose estimation.", "Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3- dimensional positions.,,With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, \"lifting\" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30 on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results – this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation." ] }
1908.03030
2966691903
We present an approach to accurately estimate high fidelity markerless 3D pose and volumetric reconstruction of human performance using only a small set of camera views ( @math ). Our method utilises a dual loss in a generative adversarial network that can yield improved performance in both reconstruction and pose estimate error. We use a deep prior implicitly learnt by the network trained over a dataset of view-ablated multi-view video footage of a wide range of subjects and actions. Uniquely we use a multi-channel symmetric 3D convolutional encoder-decoder with a dual loss to enforce the learning of a latent embedding that enforces skeletal joint positions and a deep volumetric reconstruction of the performer. An extensive evaluation is performed with state of the art performance reported on three datasets: Human 3.6M, TotalCapture and TotalCaptureOutdoor. The method opens the possibility of high-end volumetric and pose performance capture in on-set and prosumer scenarios where time or cost prohibit a high witness camera count.
Since detecting pose for each frame individually leads to incoherent and jittery predictions over a sequence, many approaches exploit temporal information. Andriluka @cite_28 used tracking-by-detection to associate 2D poses detected in each frame individually and used them to retrieve 3D pose. Tekin @cite_10 used a CNN to first align bounding boxes of successive frames, so that the person in the image is always at the centre of the box, and then extracted 3D HOG features over the spatiotemporal volume from which they regress the 3D pose of the central frame. Lin @cite_19 performed a multi-stage sequential refinement using LSTMs @cite_8 to predict 3D pose sequences from previously predicted 2D pose representations and 3D pose, while Hossain @cite_49 learns the temporal context of a sequence using a form of sequence-to-sequence network.
{ "cite_N": [ "@cite_8", "@cite_28", "@cite_19", "@cite_49", "@cite_10" ], "mid": [ "2604236302", "2769237672", "2079846689", "2952717317", "2270288817" ], "abstract": [ "We introduce a novel method for 3D object detection and pose estimation from color images only. We first use segmentation to detect the objects of interest in 2D even in presence of partial occlusions and cluttered background. By contrast with recent patch-based methods, we rely on a “holistic” approach: We apply to the detected objects a Convolutional Neural Network (CNN) trained to predict their 3D poses in the form of 2D projections of the corners of their 3D bounding boxes. This, however, is not sufficient for handling objects from the recent T-LESS dataset: These objects exhibit an axis of rotational symmetry, and the similarity of two images of such an object under two different poses makes training the CNN challenging. We solve this problem by restricting the range of poses used for training, and by introducing a classifier to identify the range of a pose at run-time before estimating it. We also use an optional additional step that refines the predicted poses. We improve the state-of-the-art on the LINEMOD dataset from 73.7 [2] to 89.3 of correctly registered RGB frames. We are also the first to report results on the Occlusion dataset [1 ] using color images only. We obtain 54 of frames passing the Pose 6D criterion on average on several sequences of the T-LESS dataset, compared to the 67 of the state-of-the-art [10] on the same sequences which uses both color and depth. The full approach is also scalable, as a single network can be trained for multiple objects simultaneously.", "In this work, we address the problem of 3D human pose estimation from a sequence of 2D human poses. Although the recent success of deep networks has led many state-of-the-art methods for 3D pose estimation to train deep networks end-to-end to predict from images directly, the top-performing approaches have shown the effectiveness of dividing the task of 3D pose estimation into two steps: using a state-of-the-art 2D pose estimator to estimate the 2D pose from images and then mapping them into 3D space. They also showed that a low-dimensional representation like 2D locations of a set of joints can be discriminative enough to estimate 3D pose with high accuracy. However, estimation of 3D pose for individual frames leads to temporally incoherent estimates due to independent error in each frame causing jitter. Therefore, in this work we utilize the temporal information across a sequence of 2D joint locations to estimate a sequence of 3D poses. We designed a sequence-to-sequence network composed of layer-normalized LSTM units with shortcut connections connecting the input to the output on the decoder side and imposed temporal smoothness constraint during training. We found that the knowledge of temporal consistency improves the best reported result on Human3.6M dataset by approximately (12.2 ) and helps our network to recover temporally consistent 3D poses over a sequence of images even when the 2D pose detector fails.", "We present a novel method for accurate marker-less capture of articulated skeleton motion of several subjects in general scenes, indoors and outdoors, even from input filmed with as few as two cameras. Our approach unites a discriminative image-based joint detection method with a model-based generative motion tracking algorithm through a combined pose optimization energy. 
The discriminative part-based pose detection method, implemented using Convolutional Networks (ConvNet), estimates unary potentials for each joint of a kinematic skeleton model. These unary potentials are used to probabilistically extract pose constraints for tracking by using weighted sampling from a pose posterior guided by the model. In the final energy, these constraints are combined with an appearance-based model-to-image similarity term. Poses can be computed very efficiently using iterative local optimization, as ConvNet detection is fast, and our formulation yields a combined pose estimation energy with analytic derivatives. In combination, this enables to track full articulated joint angles at state-of-the-art accuracy and temporal stability with a very low number of cameras.", "We propose a single-shot approach for simultaneously detecting an object in an RGB image and predicting its 6D pose without requiring multiple stages or having to examine multiple hypotheses. Unlike a recently proposed single-shot technique for this task (, ICCV'17) that only predicts an approximate 6D pose that must then be refined, ours is accurate enough not to require additional post-processing. As a result, it is much faster - 50 fps on a Titan X (Pascal) GPU - and more suitable for real-time processing. The key component of our method is a new CNN architecture inspired by the YOLO network design that directly predicts the 2D image locations of the projected vertices of the object's 3D bounding box. The object's 6D pose is then estimated using a PnP algorithm. For single object and multiple object pose estimation on the LINEMOD and OCCLUSION datasets, our approach substantially outperforms other recent CNN-based approaches when they are all used without post-processing. During post-processing, a pose refinement step can be used to boost the accuracy of the existing methods, but at 10 fps or less, they are much slower than our method.", "We propose an efficient approach to exploiting motion information from consecutive frames of a video sequence to recover the 3D pose of people. Previous approaches typically compute candidate poses in individual frames and then link them in a post-processing step to resolve ambiguities. By contrast, we directly regress from a spatio-temporal volume of bounding boxes to a 3D pose in the central frame. We further show that, for this approach to achieve its full potential, it is essential to compensate for the motion in consecutive frames so that the subject remains centered. This then allows us to effectively overcome ambiguities and improve upon the state-of-the-art by a large margin on the Human3.6m, HumanEva, and KTH Multiview Football 3D human pose estimation benchmarks." ] }
1908.02983
2964677000
Semi-supervised learning, i.e. jointly learning from labeled and unlabeled samples, is an active research topic due to its key role in relaxing human annotation constraints. In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels using the network predictions. We show that naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias and demonstrate that label noise and mixup augmentation are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results in CIFAR-10/100 and Mini-ImageNet despite being much simpler than other state-of-the-art methods. These results demonstrate that pseudo-labeling can outperform consistency regularization methods, while the opposite was assumed in previous work. Source code is available at this https URL .
Semi-supervised learning for image classification is an active research topic @cite_29 ; this section focuses on reviewing work closely related to ours, discussing methods that use deep learning with mini-batch optimization over large image collections. Previous works on semi-supervised deep learning differ in whether they use consistency regularization or pseudo-labeling to learn from the unlabeled set @cite_33 , while they all share the use of a cross-entropy loss (or similar) on labeled data.
{ "cite_N": [ "@cite_29", "@cite_33" ], "mid": [ "1981613567", "2621015177" ], "abstract": [ "In image categorization the goal is to decide if an image belongs to a certain category or not. A binary classifier can be learned from manually labeled images; while using more labeled examples improves performance, obtaining the image labels is a time consuming process. We are interested in how other sources of information can aid the learning process given a fixed amount of labeled images. In particular, we consider a scenario where keywords are associated with the training images, e.g. as found on photo sharing websites. The goal is to learn a classifier for images alone, but we will use the keywords associated with labeled and unlabeled images to improve the classifier using semi-supervised learning. We first learn a strong Multiple Kernel Learning (MKL) classifier using both the image content and keywords, and use it to score unlabeled images. We then learn classifiers on visual features only, either support vector machines (SVM) or least-squares regression (LSR), from the MKL output values on both the labeled and unlabeled images. In our experiments on 20 classes from the PASCAL VOC'07 set and 38 from the MIR Flickr set, we demonstrate the benefit of our semi-supervised approach over only using the labeled images. We also present results for a scenario where we do not use any manual labeling but directly learn classifiers from the image tags. The semi-supervised approach also improves classification accuracy in this case.", "Aiming at improving the performance of visual classification in a cost-effective manner, this paper proposes an incremental semi-supervised learning paradigm called deep co-space (DCS). Unlike many conventional semi-supervised learning methods usually performed within a fixed feature space, our DCS gradually propagates information from labeled samples to unlabeled ones along with deep feature learning. We regard deep feature learning as a series of steps pursuing feature transformation, i.e., projecting the samples from a previous space into a new one, which tends to select the reliable unlabeled samples with respect to this setting. Specifically, for each unlabeled image instance, we measure its reliability by calculating the category variations of feature transformation from two different neighborhood variation perspectives and merged them into a unified sample mining criterion deriving from Hellinger distance. Then, those samples keeping stable correlation to their neighboring samples (i.e., having small category variation in distribution) across the successive feature space transformation are automatically received labels and incorporated into the model for incrementally training in terms of classification. Our extensive experiments on standard image classification benchmarks (e.g., Caltech-256 and SUN-397) demonstrate that the proposed framework is capable of effectively mining from large-scale unlabeled images, which boosts image classification performance and achieves promising results compared with other semi-supervised learning methods." ] }
1908.02983
2964677000
Semi-supervised learning, i.e. jointly learning from labeled and unlabeled samples, is an active research topic due to its key role in relaxing human annotation constraints. In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels using the network predictions. We show that naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias and demonstrate that label noise and mixup augmentation are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results in CIFAR-10/100 and Mini-ImageNet despite being much simpler than other state-of-the-art methods. These results demonstrate that pseudo-labeling can outperform consistency regularization methods, while the opposite was assumed in previous work. Source code is available at this https URL .
Co-training @cite_15 combines several ideas from the previous works, using two (or more) networks trained simultaneously to agree in their predictions (consistency regularization) and disagree in their errors. Here the errors are defined as making different predictions when exposed to adversarial attacks, thus forcing different networks to learn complementary representations for the same samples. Recently, @cite_10 measure the consistency between the current prediction and an additional prediction of the same sample given by an external memory module that keeps track of previous representations of a sample. They additionally introduce an uncertainty weighting of the consistency term to reduce the contribution of uncertain sample predictions given by the memory module. Consistency regularization methods such as @math -model @cite_1 , mean teachers @cite_34 , and VAT @cite_4 have all been shown to benefit from the recent stochastic weight averaging (SWA) method @cite_18 @cite_38 . SWA averages network parameters at different training epochs to move the SGD solution on borders of flat loss regions to their center and improve generalization.
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_4", "@cite_1", "@cite_15", "@cite_34", "@cite_10" ], "mid": [ "2609701267", "2963476860", "2909986471", "2792287754", "2963702144", "2766164908", "2583938035" ], "abstract": [ "Self-paced learning and hard example mining re-weight training instances to improve learning accuracy. This paper presents two improved alternatives based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance in predicted probability of the correct class across iterations of mini-batch SGD, and the proximity of the correct class probability to the decision threshold. Extensive experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques, such as residual learning, momentum, ADAM, batch normalization, dropout, and distillation.", "Self-paced learning and hard example mining re-weight training instances to improve learning accuracy. This paper presents two improved alternatives based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance in predicted probability of the correct class across iterations of mini-batch SGD, and the proximity of the correct class probability to the decision threshold. Extensive experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques, such as residual learning, momentum, ADAM, batch normalization, dropout, and distillation.", "The recently proposed semi-supervised learning methods exploit consistency loss between different predictions under random perturbations. Typically, a student model is trained to predict consistently with the targets generated by a noisy teacher. However, they ignore the fact that not all training data provide meaningful and reliable information in terms of consistency. For misclassified data, blindly minimizing the consistency loss around them can hinder learning. In this paper, we propose a novel certainty-driven consistency loss (CCL) to dynamically select data samples that have relatively low uncertainty. Specifically, we measure the variance or entropy of multiple predictions under random augmentations and dropout as an estimation of uncertainty. Then, we introduce two approaches, i.e. Filtering CCL and Temperature CCL to guide the student learn more meaningful and certain reliable targets, and hence improve the quality of the gradients backpropagated to the student. Experiments demonstrate the advantages of the proposed method over the state-of-the-art semi-supervised deep learning methods on three benchmark datasets: SVHN, CIFAR10, and CIFAR100. Our method also shows robustness to noisy labels.", "Deep neural networks are typically trained by optimizing a loss function with an SGD variant, in conjunction with a decaying learning rate, until convergence. We show that simple averaging of multiple points along the trajectory of SGD, with a cyclical or constant learning rate, leads to better generalization than conventional training. We also show that this Stochastic Weight Averaging (SWA) procedure finds much broader optima than SGD, and approximates the recent Fast Geometric Ensembling (FGE) approach with a single model. 
Using SWA we achieve notable improvement in test accuracy over conventional SGD training on a range of state-of-the-art residual networks, PyramidNets, DenseNets, and Shake-Shake networks on CIFAR-10, CIFAR-100, and ImageNet. In short, SWA is extremely easy to implement, improves generalization, and has almost no computational overhead.", "It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate @math and scaling the batch size @math . Finally, one can increase the momentum coefficient @math and scale @math , although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train Inception-ResNet-V2 on ImageNet to @math validation accuracy in under 2500 parameter updates, efficiently utilizing training batches of 65536 images.", "It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate @math and scaling the batch size @math . Finally, one can increase the momentum coefficient @math and scale @math , although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train ResNet-50 on ImageNet to @math validation accuracy in under 30 minutes.", "Deep convolutional networks have achieved successful performance in data mining field. However, training large networks still remains a challenge, as the training data may be insufficient and the model can easily get overfitted. Hence the training process is usually combined with a model regularization. Typical regularizers include weight decay, Dropout, etc. In this paper, we propose a novel regularizer, named Structured Decorrelation Constraint (SDC), which is applied to the activations of the hidden layers to prevent overfitting and achieve better generalization. SDC impels the network to learn structured representations by grouping the hidden units and encouraging the units within the same group to have strong connections during the training procedure. Meanwhile, it forces the units in different groups to learn non-redundant representations by minimizing the cross-covariance between them. Compared with Dropout, SDC reduces the co-adaptions between the hidden units in an explicit way. Besides, we propose a novel approach called Reg-Conv that can help SDC to regularize the complex convolutional layers. 
Experiments on extensive datasets show that SDC significantly reduces overfitting and yields very meaningful improvements on classification performance (on CIFAR-10 6.22 accuracy promotion and on CIFAR-100 9.63 promotion)." ] }
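The related-work paragraph in the record above describes mean-teacher-style consistency targets and stochastic weight averaging (SWA), both of which boil down to simple running averages of model parameters or predictions. The following is a minimal, framework-agnostic sketch in NumPy; the function and variable names (`ema_update`, `swa_update`, `ema_decay`) are illustrative assumptions and not taken from any of the cited papers.

```python
import numpy as np

def ema_update(teacher_params, student_params, ema_decay=0.99):
    """Mean-teacher style update: the teacher weights track an exponential
    moving average of the student weights after every optimizer step."""
    return [ema_decay * t + (1.0 - ema_decay) * s
            for t, s in zip(teacher_params, student_params)]

def swa_update(swa_params, current_params, n_models):
    """SWA update: equal-weight running average of the parameters collected
    at the end of selected training epochs."""
    return [swa + (w - swa) / (n_models + 1)
            for swa, w in zip(swa_params, current_params)]

def consistency_loss(student_probs, teacher_probs):
    """Mean squared error between student and teacher class probabilities,
    the usual consistency term applied to unlabeled samples."""
    return float(np.mean((student_probs - teacher_probs) ** 2))

# Toy usage with two parameter tensors standing in for a network.
rng = np.random.default_rng(0)
student = [rng.normal(size=(4, 4)), rng.normal(size=4)]
teacher = [p.copy() for p in student]
swa = [p.copy() for p in student]
for epoch in range(3):
    student = [p - 0.01 * rng.normal(size=p.shape) for p in student]  # stand-in for an SGD step
    teacher = ema_update(teacher, student)
    swa = swa_update(swa, student, n_models=epoch)
```

Averaging in prediction space (the teacher target) and averaging in weight space (SWA) are complementary, which matches the reported benefit of combining consistency regularization with SWA.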
1908.02983
2964677000
Semi-supervised learning, i.e. jointly learning from labeled and unlabeled samples, is an active research topic due to its key role in relaxing human annotation constraints. In the context of image classification, recent advances to learn from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels using the network predictions. We show that a naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias and demonstrate that label noise and mixup augmentation are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results in CIFAR-10/100 and Mini-ImageNet despite being much simpler than other state-of-the-art methods. These results demonstrate that pseudo-labeling can outperform consistency regularization methods, whereas the opposite was assumed in previous work. Source code is available at this https URL .
It is important to highlight a widely used practice @cite_27 @cite_1 @cite_34 @cite_15 @cite_39 @cite_33 : a warm-up phase in which labeled samples have a higher (or full) weight at the beginning of training to mitigate the incorrect guidance of unlabeled samples early in training (a sketch of such a ramp-up combined with soft pseudo-labels follows this record). The authors in @cite_29 also reveal some limitations of current practices in semi-supervised learning, such as low-quality fully-supervised baselines, the absence of comparisons with transfer learning baselines, and excessive hyperparameter tuning on large validation sets (which are not available in realistic semi-supervised settings).
{ "cite_N": [ "@cite_33", "@cite_29", "@cite_1", "@cite_39", "@cite_27", "@cite_15", "@cite_34" ], "mid": [ "2048679005", "2119991813", "2909986471", "2621925205", "2114718442", "2125592902", "1572720249" ], "abstract": [ "We consider the problem of using a large unlabeled sample to boost performance of a learning algorit,hrn when only a small set of labeled examples is available. In particular, we consider a problem setting motivated by the task of learning to classify web pages, in which the description of each example can be partitioned into two distinct views. For example, the description of a web page can be partitioned into the words occurring on that page, and the words occurring in hyperlinks t,hat point to that page. We assume that either view of the example would be sufficient for learning if we had enough labeled data, but our goal is to use both views together to allow inexpensive unlabeled data to augment, a much smaller set of labeled examples. Specifically, the presence of two distinct views of each example suggests strategies in which two learning algorithms are trained separately on each view, and then each algorithm’s predictions on new unlabeled examples are used to enlarge the training set of the other. Our goal in this paper is to provide a PAC-style analysis for this setting, and, more broadly, a PAC-style framework for the general problem of learning from both labeled and unlabeled data. We also provide empirical results on real web-page data indicating that this use of unlabeled examples can lead to significant improvement of hypotheses in practice. *This research was supported in part by the DARPA HPKB program under contract F30602-97-1-0215 and by NSF National Young investigator grant CCR-9357793. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. TO copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and or a fee. COLT 98 Madison WI USA Copyright ACM 1998 l-58113-057--0 98 7... 5.00 92 Tom Mitchell School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213-3891 mitchell+@cs.cmu.edu", "In this paper, we study the problem of learning from weakly labeled data, where labels of the training examples are incomplete. This includes, for example, (i) semi-supervised learning where labels are partially known; (ii) multi-instance learning where labels are implicitly known; and (iii) clustering where labels are completely unknown. Unlike supervised learning, learning with weak labels involves a difficult Mixed-Integer Programming (MIP) problem. Therefore, it can suffer from poor scalability and may also get stuck in local minimum. In this paper, we focus on SVMs and propose the WELLSVM via a novel label generation strategy. This leads to a convex relaxation of the original MIP, which is at least as tight as existing convex Semi-Definite Programming (SDP) relaxations. Moreover, the WELLSVM can be solved via a sequence of SVM subproblems that are much more scalable than previous convex SDP relaxations. 
Experiments on three weakly labeled learning tasks, namely, (i) semi-supervised learning; (ii) multi-instance learning for locating regions of interest in content-based information retrieval; and (iii) clustering, clearly demonstrate improved performance, and WELLSVM is also readily applicable on large data sets.", "The recently proposed semi-supervised learning methods exploit consistency loss between different predictions under random perturbations. Typically, a student model is trained to predict consistently with the targets generated by a noisy teacher. However, they ignore the fact that not all training data provide meaningful and reliable information in terms of consistency. For misclassified data, blindly minimizing the consistency loss around them can hinder learning. In this paper, we propose a novel certainty-driven consistency loss (CCL) to dynamically select data samples that have relatively low uncertainty. Specifically, we measure the variance or entropy of multiple predictions under random augmentations and dropout as an estimation of uncertainty. Then, we introduce two approaches, i.e. Filtering CCL and Temperature CCL to guide the student learn more meaningful and certain reliable targets, and hence improve the quality of the gradients backpropagated to the student. Experiments demonstrate the advantages of the proposed method over the state-of-the-art semi-supervised deep learning methods on three benchmark datasets: SVHN, CIFAR10, and CIFAR100. Our method also shows robustness to noisy labels.", "In many real-world scenarios, labeled data for a specific machine learning task is costly to obtain. Semi-supervised training methods make use of abundantly available unlabeled data and a smaller number of labeled examples. We propose a new framework for semi-supervised training of deep neural networks inspired by learning in humans. Associations are made from embeddings of labeled samples to those of unlabeled ones and back. The optimization schedule encourages correct association cycles that end up at the same class from which the association was started and penalizes wrong associations ending at a different class. The implementation is easy to use and can be added to any existing end-to-end training setup. We demonstrate the capabilities of learning by association on several data sets and show that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data. In particular, for cases with few labeled data, our training scheme outperforms the current state of the art on SVHN.", "Empirical evidence shows that in favorable situations semi-supervised learning (SSL) algorithms can capitalize on the abundance of unlabeled training data to improve the performance of a learning task, in the sense that fewer labeled training data are needed to achieve a target error bound. However, in other situations unlabeled data do not seem to help. Recent attempts at theoretically characterizing SSL gains only provide a partial and sometimes apparently conflicting explanations of whether, and to what extent, unlabeled data can help. In this paper, we attempt to bridge the gap between the practice and theory of semi-supervised learning. We develop a finite sample analysis that characterizes the value of un-labeled data and quantifies the performance improvement of SSL compared to supervised learning. 
We show that there are large classes of problems for which SSL can significantly outperform supervised learning, in finite sample regimes and sometimes also in terms of error convergence rates.", "There has been increased interest in devising learning techniques that combine unlabeled data with labeled data -- i.e. semi-supervised learning. However, to the best of our knowledge, no study has been performed across various techniques and different types and amounts of labeled and unlabeled data. Moreover, most of the published work on semi-supervised learning techniques assumes that the labeled and unlabeled data come from the same distribution. It is possible for the labeling process to be associated with a selection bias such that the distributions of data points in the labeled and unlabeled sets are different. Not correcting for such bias can result in biased function approximation with potentially poor performance. In this paper, we present an empirical study of various semi-supervised learning techniques on a variety of datasets. We attempt to answer various questions such as the effect of independence or relevance amongst features, the effect of the size of the labeled and unlabeled sets and the effect of noise. We also investigate the impact of sample-selection bias on the semi -supervised learning techniques under study and implement a bivariate probit technique particularly designed to correct for such bias.", "The present work discusses what have been called 'imperfectly supervised situations': pattern recognition applications where the assumption of label correctness does not hold for all the elements of the training sample. A methodology for contending with these practical situations and to avoid their negative impact on the performance of supervised methods is presented. This methodology can be regarded as a cleaning process removing some suspicious instances of the training sample or correcting the class labels of some others while retaining them. It has been conceived for doing classification with the Nearest Neighbor rule, a supervised nonparametric classifier that combines conceptual simplicity and an asymptotic error rate bounded in terms of the optimal Bayes error. However, initial experiments concerning the learning phase of a Multilayer Perceptron (not reported in the present work) seem to indicate a broader applicability. Results with both simulated and real data sets are presented to support the methodology and to clarify the ideas behind it. Related works are briefly reviewed and some issues deserving further research are also exposed." ] }
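The record above describes training with soft pseudo-labels, mixup, and a warm-up that down-weights the unlabeled loss early in training. Below is a minimal NumPy sketch of those three ingredients; the sigmoid-shaped ramp, the `rampup_steps` parameter, and the function names are illustrative assumptions rather than the exact schedules used in the cited works.

```python
import numpy as np

def rampup_weight(step, rampup_steps, max_weight=1.0):
    """Warm-up for the unlabeled-loss weight: near 0 at the start of training,
    approaching max_weight once step reaches rampup_steps."""
    if step >= rampup_steps:
        return max_weight
    phase = 1.0 - step / rampup_steps
    return max_weight * float(np.exp(-5.0 * phase * phase))

def soft_cross_entropy(probs, soft_targets, eps=1e-12):
    """Cross-entropy of predicted probabilities against soft (pseudo-)labels."""
    return float(-np.mean(np.sum(soft_targets * np.log(probs + eps), axis=1)))

def mixup(x1, y1, x2, y2, alpha=1.0):
    """Mixup regularization: convex combination of two samples and their soft labels."""
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

def semi_supervised_loss(labeled_probs, labels_onehot,
                         unlabeled_probs, pseudo_labels, step, rampup_steps):
    """Supervised term plus warm-up-weighted pseudo-label term."""
    supervised = soft_cross_entropy(labeled_probs, labels_onehot)
    unsupervised = soft_cross_entropy(unlabeled_probs, pseudo_labels)
    return supervised + rampup_weight(step, rampup_steps) * unsupervised
```

Early in training the network's own predictions are unreliable, so the ramp keeps the pseudo-label term from dominating until the supervised signal has shaped the classifier; this is the confirmation-bias concern raised in the abstract above.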
1908.02949
2966611292
Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts the navigation. In this paper, we present a novel VR-based practical system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients where the rendering for e.g. head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables a quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate in the remote site completely independent of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study revealing a higher degree of situation awareness and a more precise navigation in challenging environments.
The key to generating an immersive and interactive telepresence experience is the real-time 3D reconstruction of the scene of interest. Due to the high computational burden and the large memory requirements of processing and storing large scenes, seminal work on multi-camera telepresence systems @cite_22 @cite_34 @cite_14 @cite_25 @cite_26 @cite_41 , built on the less powerful hardware available at that time, was limited in its capability to capture high-quality 3D models in real time and to transmit them immediately to remote users. More recently, the emergence of affordable commodity depth sensors such as the Microsoft Kinect has successfully been exploited for the development of 3D reconstruction approaches working at room scale @cite_24 @cite_11 @cite_15 @cite_19 , typically by fusing depth maps into a (truncated) signed distance volume (a minimal sketch of this integration step follows this record). Yet the step towards high-quality reconstructions remained highly challenging due to high sensor noise as well as temporal inconsistency in the reconstructed data.
{ "cite_N": [ "@cite_14", "@cite_26", "@cite_22", "@cite_41", "@cite_24", "@cite_19", "@cite_15", "@cite_34", "@cite_25", "@cite_11" ], "mid": [ "2801778672", "244217497", "2890415768", "353406226", "2243794092", "2065906272", "2178922201", "2071906076", "2894865236", "2584684405" ], "abstract": [ "Real-time 3D scene reconstruction from RGB-D sensor data, as well as the exploration of such data in VR AR settings, has seen tremendous progress in recent years. The combination of both these components into telepresence systems, however, comes with significant technical challenges. All approaches proposed so far are extremely demanding on input and output devices, compute resources and transmission bandwidth, and they do not reach the level of immediacy required for applications such as remote collaboration. Here, we introduce what we believe is the first practical client-server system for real-time capture and many-user exploration of static 3D scenes. Our system is based on the observation that interactive frame rates are sufficient for capturing and reconstruction, and real-time performance is only required on the client site to achieve lag-free view updates when rendering the 3D model. Starting from this insight, we extend previous voxel block hashing frameworks by introducing a novel thread-safe GPU hash map data structure that is robust under massively concurrent retrieval, insertion and removal of entries on a thread level. We further propose a novel transmission scheme for volume data that is specifically targeted to Marching Cubes geometry reconstruction and enables a 90 reduction in bandwidth between server and exploration clients. The resulting system poses very moderate requirements on network bandwidth, latency and client-side computation, which enables it to rely entirely on consumer-grade hardware, including mobile devices. We demonstrate that our technique achieves state-of-the-art representation accuracy while providing, for any number of clients, an immersive and fluid lag-free viewing experience even during network outages.", "The availability of commodity depth sensors such as Kinect has enabled development of methods which can densely reconstruct arbitrary scenes. While the results of these methods are accurate and visually appealing, they are quite often incomplete. This is either due to the fact that only part of the space was visible during the data capture process or due to the surfaces being occluded by other objects in the scene. In this paper, we address the problem of completing and refining such reconstructions. We propose a method for scene completion that can infer the layout of the complete room and the full extent of partially occluded objects. We propose a new probabilistic model, Contour Completion Random Fields, that allows us to complete the boundaries of occluded surfaces. We evaluate our method on synthetic and real world reconstructions of 3D scenes and show that it quantitatively and qualitatively outperforms standard methods. We created a large dataset of partial and complete reconstructions which we will make available to the community as a benchmark for the scene completion task. Finally, we demonstrate the practical utility of our algorithm via an augmented-reality application where objects interact with the completed reconstructions inferred by our method.", "We propose a new approach for 3D reconstruction of dynamic indoor and outdoor scenes in everyday environments, leveraging only cameras worn by a user. 
This approach allows 3D reconstruction of experiences at any location and virtual tours from anywhere. The key innovation of the proposed ego-centric reconstruction system is to capture the wearer's body pose and facial expression from near-body views, e.g. cameras on the user's glasses, and to capture the surrounding environment using outward-facing views. The main challenge of the ego-centric reconstruction, however, is the poor coverage of the near-body views – that is, the user's body and face are observed from vantage points that are convenient for wear but inconvenient for capture. To overcome these challenges, we propose a parametric-model-based approach to user motion estimation. This approach utilizes convolutional neural networks (CNNs) for near-view body pose estimation, and we introduce a CNN-based approach for facial expression estimation that combines audio and video. For each time-point during capture, the intermediate model-based reconstructions from these systems are used to re-target a high-fidelity pre-scanned model of the user. We demonstrate that the proposed self-sufficient, head-worn capture system is capable of reconstructing the wearer's movements and their surrounding environment in both indoor and outdoor situations without any additional views. As a proof of concept, we show how the resulting 3D-plus-time reconstruction can be immersively experienced within a virtual reality system (e.g., the HTC Vive). We expect that the size of the proposed egocentric capture-and-reconstruction system will eventually be reduced to fit within future AR glasses, and will be widely useful for immersive 3D telepresence, virtual tours, and general use-anywhere 3D content creation.", "This paper describes an enhanced telepresence system that offers fully dynamic, real-time 3D scene capture and continuous-viewpoint, head-tracked stereo 3D display without requiring the user to wear any tracking or viewing apparatus. We present a complete software and hardware framework for implementing the system, which is based on an array of commodity Microsoft Kinect^T^Mcolor-plus-depth cameras. Contributions include an algorithm for merging data between multiple depth cameras and techniques for automatic color calibration and preserving stereo quality even with low rendering rates. Also presented is a solution to the problem of interference that occurs between Kinect cameras with overlapping views. Emphasis is placed on a fully GPU-accelerated data processing and rendering pipeline that can apply hole filling, smoothing, data merger, surface generation, and color correction at rates of up to 200 million triangles s on a single PC and graphics board. Also presented is a Kinect-based markerless tracking system that combines 2D eye recognition with depth information to allow head-tracked stereo views to be rendered for a parallax barrier autostereoscopic display. Enhancements in calibration, filtering, and data merger were made to improve image quality over a previous version of the system.", "The ability to quickly acquire 3D models is an essential capability needed in many disciplines including robotics, computer vision, geodesy, and architecture. In this paper we present a novel method for real-time camera tracking and 3D reconstruction of static indoor environments using an RGB-D sensor. We show that by representing the geometry with a signed distance function (SDF), the camera pose can be efficiently estimated by directly minimizing the error of the depth images on the SDF. 
As the SDF contains the distances to the surface for each voxel, the pose optimization can be carried out extremely fast. By iteratively estimating the camera poses and integrating the RGB-D data in the voxel grid, a detailed reconstruction of an indoor environment can be achieved. We present reconstructions of several rooms using a hand-held sensor and from onboard an autonomous quadrocopter. Our extensive evaluation on publicly available benchmark data shows that our approach is more accurate and robust than the iterated closest point algorithm (ICP) used by KinectFusion, and yields often a comparable accuracy at much higher speed to feature-based bundle adjustment methods such as RGB-D SLAM for up to medium-sized scenes.", "Real-time or online 3D reconstruction has wide applicability and receives further interest due to availability of consumer depth cameras. Typical approaches use a moving sensor to accumulate depth measurements into a single model which is continuously refined. Designing such systems is an intricate balance between reconstruction quality, speed, spatial scale, and scene assumptions. Existing online methods either trade scale to achieve higher quality reconstructions of small objects scenes. Or handle larger scenes by trading real-time performance and or quality, or by limiting the bounds of the active reconstruction. Additionally, many systems assume a static scene, and cannot robustly handle scene motion or reconstructions that evolve to reflect scene changes. We address these limitations with a new system for real-time dense reconstruction with equivalent quality to existing online methods, but with support for additional spatial scale and robustness in dynamic scenes. Our system is designed around a simple and flat point-Based representation, which directly works with the input acquired from range depth sensors, without the overhead of converting between representations. The use of points enables speed and memory efficiency, directly leveraging the standard graphics pipeline for all central operations, i.e., camera pose estimation, data association, outlier removal, fusion of depth maps into a single denoised model, and detection and update of dynamic objects. We conclude with qualitative and quantitative results that highlight robust tracking and high quality reconstructions of a diverse set of scenes at varying scales.", "We propose a real-time approach for indoor scene reconstruction. It is capable of producing a ready-to-use 3D geometric model even while the user is still scanning the environment with a consumer depth camera. Our approach features explicit representations of planar regions and nonplanar objects extracted from the noisy feed of the depth camera, via an online structure analysis on the dynamic, incomplete data. The structural information is incorporated into the volumetric representation of the scene, resulting in a seamless integration with KinectFusion's global data structure and an efficient implementation of the whole reconstruction process. Moreover, heuristics based on rectilinear shapes in typical indoor scenes effectively eliminate camera tracking drift and further improve reconstruction accuracy. The instantaneous feedback enabled by our on-the-fly structure analysis, including repeated object recognition, allows the user to selectively scan the scene and produce high-fidelity large-scale models efficiently. 
We demonstrate the capability of our system with real-life examples.", "Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.", "We present a deep learning based volumetric approach for performance capture using a passive and highly sparse multi-view capture system. State-of-the-art performance capture systems require either pre-scanned actors, large number of cameras or active sensors. In this work, we focus on the task of template-free, per-frame 3D surface reconstruction from as few as three RGB sensors, for which conventional visual hull or multi-view stereo methods fail to generate plausible results. We introduce a novel multi-view Convolutional Neural Network (CNN) that maps 2D images to a 3D volumetric field and we use this field to encode the probabilistic distribution of surface points of the captured subject. By querying the resulting field, we can instantiate the clothed human body at arbitrary resolutions. Our approach scales to different numbers of input images, which yield increased reconstruction quality when more views are used. Although only trained on synthetic data, our network can generalize to handle real footage from body performance capture. Our method is suitable for high-quality low-cost full body volumetric capture solutions, which are gaining popularity for VR and AR content creation. Experimental results demonstrate that our method is significantly more robust and accurate than existing techniques when only very sparse views are available.", "We introduce a novel framework that enables large-scale dense 3D scene reconstruction, data streaming over the network and immersive exploration of the reconstructed environment using virtual reality. The system is operated by two remote entities, where one entity – for instance an autonomous aerial vehicle – captures and reconstructs the environment as well as transmits the data to another entity – such as human observer – that can immersivly explore the 3D scene, decoupled from the view of the capturing entity. 
The performance evaluation revealed the framework’s capabilities to perform RGB-D data capturing, dense 3D reconstruction, streaming and dynamic scene updating in real time for indoor environments up to a size of 100m2, using either a state-of-the-art mobile computer or a workstation. Thereby, our work provides a foundation for enabling immersive exploration of remotely captured and incrementally reconstructed dense 3D scenes, which has not shown before and opens up new research aspects in future." ] }
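Several of the abstracts in the record above describe fusing depth measurements into a signed distance volume. The following is a minimal sketch of the per-voxel truncated signed distance function (TSDF) update used in KinectFusion-style pipelines; the array layout and parameter names are illustrative assumptions, not code from the cited systems.

```python
import numpy as np

def integrate_tsdf(tsdf, weights, dist_to_surface, truncation=0.05, max_weight=64.0):
    """Fuse one new observation into a TSDF volume via a weighted running average.

    tsdf, weights   : per-voxel running truncated signed distance and weight
    dist_to_surface : signed distance of each voxel to the observed surface
                      along the camera ray (positive in front of the surface)
    """
    d = np.clip(dist_to_surface / truncation, -1.0, 1.0)
    valid = dist_to_surface > -truncation        # skip voxels far behind the surface
    obs_w = np.where(valid, 1.0, 0.0)
    fused = (tsdf * weights + d * obs_w) / np.maximum(weights + obs_w, 1e-6)
    tsdf = np.where(valid, fused, tsdf)
    weights = np.minimum(weights + obs_w, max_weight)
    return tsdf, weights
```

In a full pipeline, `dist_to_surface` is obtained by projecting each voxel into the current depth map and subtracting the measured depth, and the weighted averaging is what suppresses the sensor noise mentioned in the paragraph above.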
1908.02949
2966611292
Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts the navigation. In this paper, we present a novel VR-based practical system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients where the rendering for e.g. head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables a quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate in the remote site completely independent of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study revealing a higher degree of situation awareness and a more precise navigation in challenging environments.
Recently, a huge step towards an immersive teleconferencing experience has been achieved with the development of the Holoportation system @cite_38 . This system is built on the Fusion4D framework @cite_6 , which allows accurate 3D reconstruction at real-time rates, as well as real-time data transmission and coupling to AR/VR technology. However, real-time performance comes at the cost of massive hardware requirements involving several high-end GPUs running on multiple desktop computers, and most of the hardware components have to be installed at the local user's side. Furthermore, only an area of limited size that is surrounded by the involved static cameras can be captured, which makes the framework suitable for teleconferencing but prevents it from being used for interactive remote exploration of larger live-captured scenes.
{ "cite_N": [ "@cite_38", "@cite_6" ], "mid": [ "2801778672", "2532511219" ], "abstract": [ "Real-time 3D scene reconstruction from RGB-D sensor data, as well as the exploration of such data in VR AR settings, has seen tremendous progress in recent years. The combination of both these components into telepresence systems, however, comes with significant technical challenges. All approaches proposed so far are extremely demanding on input and output devices, compute resources and transmission bandwidth, and they do not reach the level of immediacy required for applications such as remote collaboration. Here, we introduce what we believe is the first practical client-server system for real-time capture and many-user exploration of static 3D scenes. Our system is based on the observation that interactive frame rates are sufficient for capturing and reconstruction, and real-time performance is only required on the client site to achieve lag-free view updates when rendering the 3D model. Starting from this insight, we extend previous voxel block hashing frameworks by introducing a novel thread-safe GPU hash map data structure that is robust under massively concurrent retrieval, insertion and removal of entries on a thread level. We further propose a novel transmission scheme for volume data that is specifically targeted to Marching Cubes geometry reconstruction and enables a 90 reduction in bandwidth between server and exploration clients. The resulting system poses very moderate requirements on network bandwidth, latency and client-side computation, which enables it to rely entirely on consumer-grade hardware, including mobile devices. We demonstrate that our technique achieves state-of-the-art representation accuracy while providing, for any number of clients, an immersive and fluid lag-free viewing experience even during network outages.", "We present an end-to-end system for augmented and virtual reality telepresence, called Holoportation. Our system demonstrates high-quality, real-time 3D reconstructions of an entire space, including people, furniture and objects, using a set of new depth cameras. These 3D models can also be transmitted in real-time to remote users. This allows users wearing virtual or augmented reality displays to see, hear and interact with remote participants in 3D, almost as if they were present in the same physical space. From an audio-visual perspective, communicating and interacting with remote users edges closer to face-to-face communication. This paper describes the Holoportation technical system in full, its key interactive capabilities, the application scenarios it enables, and an initial qualitative study of using this new communication medium." ] }
1908.02949
2966611292
Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts the navigation. In this paper, we present a novel VR-based practical system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients where the rendering for e.g. head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables a quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate in the remote site completely independent of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study revealing a higher degree of situation awareness and a more precise navigation in challenging environments.
Towards the goal of exploring larger environments, such as the contaminated scenes envisioned in this work, Mossel and Kröter @cite_5 presented a system that allows interactive VR-based exploration of the captured scene by a single exploration client. Their system benefits from real-time reconstruction based on current voxel block hashing techniques @cite_20 (a minimal sketch of such a sparse voxel-block structure follows this record); however, it only allows scene exploration by a single exploration client, and the bandwidth requirements of this approach have been reported to reach up to 175 MBit/s. Furthermore, the system relies on the direct transmission of the captured data to the rendering client and is not designed to handle network interruptions that force the exploration client to reconnect to the reconstruction client; consequently, scene parts reconstructed during a network outage are lost.
{ "cite_N": [ "@cite_5", "@cite_20" ], "mid": [ "2801778672", "2584684405" ], "abstract": [ "Real-time 3D scene reconstruction from RGB-D sensor data, as well as the exploration of such data in VR AR settings, has seen tremendous progress in recent years. The combination of both these components into telepresence systems, however, comes with significant technical challenges. All approaches proposed so far are extremely demanding on input and output devices, compute resources and transmission bandwidth, and they do not reach the level of immediacy required for applications such as remote collaboration. Here, we introduce what we believe is the first practical client-server system for real-time capture and many-user exploration of static 3D scenes. Our system is based on the observation that interactive frame rates are sufficient for capturing and reconstruction, and real-time performance is only required on the client site to achieve lag-free view updates when rendering the 3D model. Starting from this insight, we extend previous voxel block hashing frameworks by introducing a novel thread-safe GPU hash map data structure that is robust under massively concurrent retrieval, insertion and removal of entries on a thread level. We further propose a novel transmission scheme for volume data that is specifically targeted to Marching Cubes geometry reconstruction and enables a 90 reduction in bandwidth between server and exploration clients. The resulting system poses very moderate requirements on network bandwidth, latency and client-side computation, which enables it to rely entirely on consumer-grade hardware, including mobile devices. We demonstrate that our technique achieves state-of-the-art representation accuracy while providing, for any number of clients, an immersive and fluid lag-free viewing experience even during network outages.", "We introduce a novel framework that enables large-scale dense 3D scene reconstruction, data streaming over the network and immersive exploration of the reconstructed environment using virtual reality. The system is operated by two remote entities, where one entity – for instance an autonomous aerial vehicle – captures and reconstructs the environment as well as transmits the data to another entity – such as human observer – that can immersivly explore the 3D scene, decoupled from the view of the capturing entity. The performance evaluation revealed the framework’s capabilities to perform RGB-D data capturing, dense 3D reconstruction, streaming and dynamic scene updating in real time for indoor environments up to a size of 100m2, using either a state-of-the-art mobile computer or a workstation. Thereby, our work provides a foundation for enabling immersive exploration of remotely captured and incrementally reconstructed dense 3D scenes, which has not shown before and opens up new research aspects in future." ] }
1908.02949
2966611292
Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts the navigation. In this paper, we present a novel VR-based practical system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients where the rendering for e.g. head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables a quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate in the remote site completely independent of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study revealing a higher degree of situation awareness and a more precise navigation in challenging environments.
The recent approach by Stotko et al. @cite_7 overcomes these problems and allows on-the-fly scene inspection and interaction by an arbitrary number of exploration clients and, hence, represents a practical framework for interactive collaboration. Most notably, the system is based on a novel compact Marching Cubes (MC) based voxel block representation maintained on a server. Efficient streaming at low bandwidth is achieved by transmitting MC indices and reconstructing and storing the models explored by individual exploration clients directly on their hardware (a minimal sketch of extracting and packing such MC indices follows this record). This makes the approach both scalable to many-client exploration and robust to network interruptions, as the consistent model is maintained on the server and updates are streamed once the connection is re-established.
{ "cite_N": [ "@cite_7" ], "mid": [ "2801778672" ], "abstract": [ "Real-time 3D scene reconstruction from RGB-D sensor data, as well as the exploration of such data in VR AR settings, has seen tremendous progress in recent years. The combination of both these components into telepresence systems, however, comes with significant technical challenges. All approaches proposed so far are extremely demanding on input and output devices, compute resources and transmission bandwidth, and they do not reach the level of immediacy required for applications such as remote collaboration. Here, we introduce what we believe is the first practical client-server system for real-time capture and many-user exploration of static 3D scenes. Our system is based on the observation that interactive frame rates are sufficient for capturing and reconstruction, and real-time performance is only required on the client site to achieve lag-free view updates when rendering the 3D model. Starting from this insight, we extend previous voxel block hashing frameworks by introducing a novel thread-safe GPU hash map data structure that is robust under massively concurrent retrieval, insertion and removal of entries on a thread level. We further propose a novel transmission scheme for volume data that is specifically targeted to Marching Cubes geometry reconstruction and enables a 90 reduction in bandwidth between server and exploration clients. The resulting system poses very moderate requirements on network bandwidth, latency and client-side computation, which enables it to rely entirely on consumer-grade hardware, including mobile devices. We demonstrate that our technique achieves state-of-the-art representation accuracy while providing, for any number of clients, an immersive and fluid lag-free viewing experience even during network outages." ] }
1908.02949
2966611292
Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts the navigation. In this paper, we present a novel VR-based practical system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients where the rendering for e.g. head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables a quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate in the remote site completely independent of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study revealing a higher degree of situation awareness and a more precise navigation in challenging environments.
Schwarz et al. @cite_8 describe the rescue robot Momaro, which is equipped with interfaces for immersive teleoperation using an HMD device and 6D trackers. The immersive display greatly benefited the operators by increasing situational awareness. However, visualization was limited to registered 3D point clouds, which carry no color information. As a result, additional 2D camera images were displayed to the operator to visualize texture. Momaro served as a precursor to the Centauro robot @cite_27 , which extends the Momaro system in several directions, including immersive display of RGB-D data. However, that system is currently limited to displaying live data without aggregation.
{ "cite_N": [ "@cite_27", "@cite_8" ], "mid": [ "2530835184", "2410621027" ], "abstract": [ "Planetary exploration scenarios illustrate the need for autonomous robots that are capable to operate in unknown environments without direct human interaction. At the DARPA Robotics Challenge, we demonstrated that our Centaur-like mobile manipulation robot Momaro can solve complex tasks when teleoperated. Motivated by the DLR SpaceBot Cup 2015, where robots should explore a Mars-like environment, find and transport objects, take a soil sample, and perform assembly tasks, we developed autonomous capabilities for Momaro. Our robot perceives and maps previously unknown, uneven terrain using a 3D laser scanner. Based on the generated height map, we assess drivability, plan navigation paths, and execute them using the omnidirectional drive. Using its four legs, the robot adapts to the slope of the terrain. Momaro perceives objects with cameras, estimates their pose, and manipulates them with its two arms autonomously. For specifying missions, monitoring mission progress, on-the-fly reconfiguration, and teleoperation, we developed a ground station with suitable operator interfaces. To handle network communication interruptions and latencies between robot and ground station, we implemented a robust network layer for the ROS middleware. With the developed system, our team NimbRo Explorer solved all tasks of the DLR SpaceBot Camp 2015. We also discuss the lessons learned from this demonstration.", "Locomotion in uneven terrain is important for a wide range of robotic applications, including Search&Rescue operations. Our mobile manipulation robot Momaro features a unique locomotion design consisting of four legs ending in pairs of steerable wheels, allowing the robot to omnidirectionally drive on sufficiently even terrain, step over obstacles, and also to overcome height differences by climbing. We demonstrate the feasibility and usefulness of this design on the example of the DARPA Robotics Challenge, where our team NimbRo Rescue solved seven out of eight tasks in only 34 minutes. We also introduce a method for semi-autonomous execution of weight-shifting and stepping actions based on a 2D heightmap generated from 3D laser data." ] }
1908.03020
2966062579
We propose a novel method for explaining the predictions of any classifier. In our approach, local explanations are expected to explain both the outcome of a prediction and how that prediction would change if 'things had been different'. Furthermore, we argue that satisfactory explanations cannot be dissociated from a notion and measure of fidelity, as advocated in the early days of neural networks' knowledge extraction. We introduce a definition of fidelity to the underlying classifier for local explanation models which is based on distances to a target decision boundary. A system called CLEAR (Counterfactual Local Explanations via Regression) is introduced and evaluated. CLEAR generates w-counterfactual explanations that state minimum changes necessary to flip a prediction's classification. CLEAR then builds local regression models, using the w-counterfactuals to measure and improve the fidelity of its regressions. By contrast, the popular LIME method, which also uses regression to generate local explanations, neither measures its own fidelity nor generates counterfactuals. CLEAR's regressions are found to have significantly higher fidelity than LIME's, averaging over 45% higher in this paper's four case studies.
Early work seeking to provide explanations for neural networks focused on the extraction of symbolic knowledge from trained networks @cite_18 , either decision trees in the case of feedforward networks @cite_10 or graphs in the case of recurrent networks @cite_5 @cite_3 . More recently, attention has shifted from global to local explanation models due to the very large-scale nature of current deep networks, and has focused on explaining specific network architectures (such as the bottleneck in auto-encoders @cite_9 ) or domain-specific networks such as those used to solve computer vision problems @cite_6 , although some recent approaches continue to advocate the use of rule-based knowledge extraction @cite_15 @cite_2 . The reader is referred to @cite_13 for a recent survey. (A minimal sketch of the counterfactual search described in the abstract above follows this paragraph.)
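The abstract earlier in this record describes counterfactual explanations that state the minimum change needed to flip a classifier's decision. The sketch below searches for such a flip along a single feature of a black-box classifier using bisection; the helper names, the toy classifier, and the one-feature restriction are illustrative assumptions and do not reproduce CLEAR's actual algorithm.

```python
import numpy as np

def feature_counterfactual(predict_proba, x, feature, lo, hi, target_class, steps=30):
    """Bisection search for the smallest value of one feature that flips the
    predicted class of x to target_class; returns None if no flip occurs at hi.

    predict_proba: black-box function mapping a 2D array to class probabilities.
    """
    base = np.asarray(x, dtype=float)

    def flips(value):
        z = base.copy()
        z[feature] = value
        return int(np.argmax(predict_proba(z[None, :])[0])) == target_class

    if not flips(hi):            # even the extreme value does not change the class
        return None
    a, b = lo, hi                # assumes the flip happens somewhere in [lo, hi]
    for _ in range(steps):       # shrink the interval around the decision boundary
        mid = 0.5 * (a + b)
        if flips(mid):
            b = mid
        else:
            a = mid
    counterfactual = base.copy()
    counterfactual[feature] = b
    return counterfactual

# Toy black-box classifier: class 1 iff the feature sum exceeds 1.
def toy_proba(X):
    p1 = (X.sum(axis=1) > 1.0).astype(float)
    return np.stack([1.0 - p1, p1], axis=1)

x = np.array([0.2, 0.3])
cf = feature_counterfactual(toy_proba, x, feature=0, lo=0.2, hi=2.0, target_class=1)
print(cf)  # feature 0 moved to roughly 0.7, where the sum reaches the boundary
```

Measuring how well a local surrogate (e.g. a regression as in LIME or CLEAR) reproduces such boundary crossings is one concrete way to quantify the fidelity notion discussed in the abstract.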
{ "cite_N": [ "@cite_13", "@cite_18", "@cite_9", "@cite_3", "@cite_6", "@cite_2", "@cite_5", "@cite_15", "@cite_10" ], "mid": [ "2065204741", "2962949867", "2247148075", "1545139845", "2769943106", "1498932870", "2617995259", "1934184906", "2593110912" ], "abstract": [ "Although neural networks have shown very good performance in many application domains, one of their main drawbacks lies in the incapacity to provide an explanation for the underlying reasoning mechanisms. The “explanation capability” of neural networks can be achieved by the extraction of symbolic knowledge. In this paper, we present a new method of extraction that captures nonmonotonic rules encoded in the network, and prove that such a method is sound. We start by discussing some of the main problems of knowledge extraction methods. We then discuss how these problems may be ameliorated. To this end, a partial ordering on the set of input vectors of a network is defined, as well as a number of pruning and simplification rules. The pruning rules are then used to reduce the search space of the extraction algorithm during a pedagogical extraction, whereas the simplification rules are used to reduce the size of the extracted set of rules. We show that, in the case of regular networks, the extraction algorithm is sound and complete. We proceed to extend the extraction algorithm to the class of non-regular networks, the general case. We show that non-regular networks always contain regularities in their subnetworks. As a result, the underlying extraction method for regular networks can be applied, but now in a decompositional fashion. In order to combine the sets of rules extracted from each subnetwork into the final set of rules, we use a method whereby we are able to keep the soundness of the extraction algorithm. Finally, we present the results of an empirical analysis of the extraction system, using traditional examples and real-world application problems. The results have shown that a very high fidelity between the extracted set of rules and the network can be achieved.", "It is widely believed that the success of deep convolutional networks is based on progressively discarding uninformative variability about the input with respect to the problem at hand. This is supported empirically by the difficulty of recovering images from their hidden representations, in most commonly used network architectures. In this paper we show that this loss of information is not a necessary condition to learn representations that generalize well on complicated problems, such as ImageNet. Via a cascade of homeomorphic layers, we build the i-RevNet, a network that can be fully inverted up to the final projection onto the classes, i.e. no information is discarded. Building an invertible architecture is difficult, for example, because the local inversion is ill-conditioned, we overcome this by providing an explicit inverse. An analysis of i-RevNet’s learned representations suggests an explanation of the good accuracy by a progressive contraction and linear separation with depth. To shed light on the nature of the model learned by the i-RevNet we reconstruct linear interpolations between natural images representations.", "Much of the recent success of neural networks can be attributed to the deeper architectures that have become prevalent. However, the deeper architectures often yield unintelligible solutions, require enormous amounts of labeled data, and still remain brittle and easily broken. 
In this paper, we present a method to efficiently and intuitively discover input instances that are misclassified by well-trained neural networks. As in previous studies, we can identify instances that are so similar to previously seen examples such that the transformation is visually imperceptible. Additionally, unlike in previous studies, we can also generate mistakes that are significantly different from any training sample, while, importantly, still remaining in the space of samples that the network should be able to classify correctly. This is achieved by training a basket of N \"peer networks\" rather than a single network. These are similarly trained networks that serve to provide consistency pressure on each other. When an example is found for which a single network, S, disagrees with all of the other @math networks, which are consistent in their prediction, that example is a potential mistake for S. We present a simple method to find such examples and demonstrate it on two visual tasks. The examples discovered yield realistic images that clearly illuminate the weaknesses of the trained models, as well as provide a source of numerous, diverse, labeled-training samples.", "1. Introduction and Overview.- 1.1 Why Integrate Neurons and Symbols?.- 1.2 Strategies of Neural-Symbolic Integration.- 1.3 Neural-Symbolic Learning Systems.- 1.4 A Simple Example.- 1.5 How to Read this Book.- 1.6 Summary.- 2. Background.- 2.1 General Preliminaries.- 2.2 Inductive Learning.- 2.3 Neural Networks.- 2.3.1 Architectures.- 2.3.2 Learning Strategy.- 2.3.3 Recurrent Networks.- 2.4 Logic Programming.- 2.4.1 What is Logic Programming?.- 2.4.2 Fixpoints and Definite Programs.- 2.5 Nonmonotonic Reasoning.- 2.5.1 Stable Models and Acceptable Programs.- 2.6 Belief Revision.- 2.6.1 Truth Maintenance Systems.- 2.6.2 Compromise Revision.- I. Knowledge Refinement in Neural Networks.- 3. Theory Refinement in Neural Networks.- 3.1 Inserting Background Knowledge.- 3.2 Massively Parallel Deduction.- 3.3 Performing Inductive Learning.- 3.4 Adding Classical Negation.- 3.5 Adding Met alevel Priorities.- 3.6 Summary and Further Reading.- 4. Experiments on Theory Refinement.- 4.1 DNA Sequence Analysis.- 4.2 Power Systems Fault Diagnosis.- 4.3.Discussion.- 4.4.Appendix.- II. Knowledge Extraction from Neural Networks.- 5. Knowledge Extraction from Trained Networks.- 5.1 The Extraction Problem.- 5.2 The Case of Regular Networks.- 5.2.1 Positive Networks.- 5.2.2 Regular Networks.- 5.3 The General Case Extraction.- 5.3.1 Regular Subnetworks.- 5.3.2 Knowledge Extraction from Subnetworks.- 5.3.3 Assembling the Final Rule Set.- 5.4 Knowledge Representation Issues.- 5.5 Summary and Further Reading.- 6. Experiments on Knowledge Extraction.- 6.1 Implementation.- 6.2 The Monk's Problems.- 6.3 DNA Sequence Analysis.- 6.4 Power Systems Fault Diagnosis.- 6.5 Discussion.- III. Knowledge Revision in Neural Networks.- 7. Handling Inconsistencies in Neural Networks.- 7.1 Theory Revision in Neural Networks.- 7.1.1The Equivalence with Truth Maintenance Systems.- 7.1.2Minimal Learning.- 7.2 Solving Inconsistencies in Neural Networks.- 7.2.1 Compromise Revision.- 7.2.2 Foundational Revision.- 7.2.3 Nonmonotonic Theory Revision.- 7.3 Summary of the Chapter.- 8. 
Experiments on Handling Inconsistencies.- 8.1 Requirements Specifications Evolution as Theory Refinement.- 8.1.1Analysing Specifications.- 8.1.2Revising Specifications.- 8.2 The Automobile Cruise Control System.- 8.2.1Knowledge Insertion.- 8.2.2Knowledge Revision: Handling Inconsistencies.- 8.2.3Knowledge Extraction.- 8.3 Discussion.- 8.4 Appendix.- 9. Neural-Symbolic Integration: The Road Ahead.- 9.1 Knowledge Extraction.- 9.2 Adding Disjunctive Information.- 9.3 Extension to the First-Order Case.- 9.4 Adding Modalities.- 9.5 New Preference Relations.- 9.6 A Proof Theoretical Approach.- 9.7 The \"Forbidden Zone\" [Amax, Amin].- 9.8 Acceptable Programs and Neural Networks.- 9.9 Epilogue.", "The success of recent deep convolutional neural networks (CNNs) depends on learning hidden representations that can summarize the important factors of variation behind the data. However, CNNs often criticized as being black boxes that lack interpretability, since they have millions of unexplained model parameters. In this work, we describe Network Dissection, a method that interprets networks by providing labels for the units of their deep visual representations. The proposed method quantifies the interpretability of CNN representations by evaluating the alignment between individual hidden units and a set of visual semantic concepts. By identifying the best alignments, units are given human interpretable labels across a range of objects, parts, scenes, textures, materials, and colors. The method reveals that deep representations are more transparent and interpretable than expected: we find that representations are significantly more interpretable than they would be under a random equivalently powerful basis. We apply the method to interpret and compare the latent representations of various network architectures trained to solve different supervised and self-supervised training tasks. We then examine factors affecting the network interpretability such as the number of the training iterations, regularizations, different initializations, and the network depth and width. Finally we show that the interpreted units can be used to provide explicit explanations of a prediction given by a CNN for an image. Our results highlight that interpretability is an important property of deep neural networks that provides new insights into their hierarchical structure.", "Ability of deep networks to extract high level features and of recurrent networks to perform time-series inference have been studied. In view of universality of one hidden layer network at approximating functions under weak constraints, the benefit of multiple layers is to enlarge the space of dynamical systems approximated or, given the space, reduce the number of units required for a certain error. Traditionally shallow networks with manually engineered features are used, back-propagation extent is limited to one and attempt to choose a large number of hidden units to satisfy the Markov condition is made. In case of Markov models, it has been shown that many systems need to be modeled as higher order. In the present work, we present deep recurrent networks with longer backpropagation through time extent as a solution to modeling systems that are high order and to predicting ahead. We study epileptic seizure suppression electro-stimulator. 
Extraction of manually engineered complex features and prediction employing them has not allowed small low-power implementations as, to avoid possibility of surgery, extraction of any features that may be required has to be included. In this solution, a recurrent neural network performs both feature extraction and prediction. We prove analytically that adding hidden layers or increasing backpropagation extent increases the rate of decrease of approximation error. A Dynamic Programming (DP) training procedure employing matrix operations is derived. DP and use of matrix operations makes the procedure efficient particularly when using data-parallel computing. The simulation studies show the geometry of the parameter space, that the network learns the temporal structure, that parameters converge while model output displays same dynamic behavior as the system and greater than .99 Average Detection Rate on all real seizure data tried.", "We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.", "In recent years, the convolutional neural network (CNN) has achieved great success in many computer vision tasks. Partially inspired by neuroscience, CNN shares many properties with the visual system of the brain. A prominent difference is that CNN is typically a feed-forward architecture while in the visual system recurrent connections are abundant. Inspired by this fact, we propose a recurrent CNN (RCNN) for object recognition by incorporating recurrent connections into each convolutional layer. Though the input is static, the activities of RCNN units evolve over time so that the activity of each unit is modulated by the activities of its neighboring units. This property enhances the ability of the model to integrate the context information, which is important for object recognition. Like other recurrent neural networks, unfolding the RCNN through time can result in an arbitrarily deep network with a fixed number of parameters. Furthermore, the unfolded network has multiple paths, which can facilitate the learning process. The model is tested on four benchmark object recognition datasets: CIFAR-10, CIFAR-100, MNIST and SVHN. With fewer trainable parameters, RCNN outperforms the state-of-the-art models on all of these datasets. Increasing the number of parameters leads to even better performance. 
These results demonstrate the advantage of the recurrent structure over purely feed-forward structure for object recognition.", "Deep neural network is difficult to train and this predicament becomes worse as the depth increases. The essence of this problem exists in the magnitude of backpropagated errors that will result in gradient vanishing or exploding phenomenon. We show that a variant of regularizer which utilizes orthonormality among different filter banks can alleviate this problem. Moreover, we design a backward error modulation mechanism based on the quasi-isometry assumption between two consecutive parametric layers. Equipped with these two ingredients, we propose several novel optimization solutions that can be utilized for training a specific-structured (repetitively triple modules of Conv-BNReLU) extremely deep convolutional neural network (CNN) WITHOUT any shortcuts identity mappings from scratch. Experiments show that our proposed solutions can achieve distinct improvements for a 44-layer and a 110-layer plain networks on both the CIFAR-10 and ImageNet datasets. Moreover, we can successfully train plain CNNs to match the performance of the residual counterparts. Besides, we propose new principles for designing network structure from the insights evoked by orthonormality. Combined with residual structure, we achieve comparative performance on the ImageNet dataset." ] }
1908.03020
2966062579
We propose a novel method for explaining the predictions of any classifier. In our approach, local explanations are expected to explain both the outcome of a prediction and how that prediction would change if 'things had been different'. Furthermore, we argue that satisfactory explanations cannot be dissociated from a notion and measure of fidelity, as advocated in the early days of neural networks' knowledge extraction. We introduce a definition of fidelity to the underlying classifier for local explanation models which is based on distances to a target decision boundary. A system called CLEAR: Counterfactual Local Explanations via Regression, is introduced and evaluated. CLEAR generates w-counterfactual explanations that state minimum changes necessary to flip a prediction's classification. CLEAR then builds local regression models, using the w-counterfactuals to measure and improve the fidelity of its regressions. By contrast, the popular LIME method, which also uses regression to generate local explanations, neither measures its own fidelity nor generates counterfactuals. CLEAR's regressions are found to have significantly higher fidelity than LIME's, averaging over 45% higher in this paper's four case studies.
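To make the idea of a w-counterfactual concrete, here is a minimal illustrative sketch (not the CLEAR implementation itself): for a given instance it scans one feature at a time for the smallest perturbation that flips a binary classifier's prediction. The synthetic dataset, logistic-regression model, step grid and search range are assumptions chosen purely for illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

def single_feature_counterfactual(x, model, steps=201, max_shift=3.0):
    """Smallest single-feature change that flips the prediction for x.
    Returns (feature_index, new_value, delta) or None if no flip is found."""
    original = model.predict(x.reshape(1, -1))[0]
    deltas = np.linspace(-max_shift, max_shift, steps)
    deltas = deltas[np.argsort(np.abs(deltas))]   # try the smallest changes first
    best = None
    for j in range(x.shape[0]):
        for d in deltas:
            x_new = x.copy()
            x_new[j] += d
            if model.predict(x_new.reshape(1, -1))[0] != original:
                if best is None or abs(d) < abs(best[2]):
                    best = (j, x_new[j], d)
                break                             # smallest |d| found for feature j
    return best

print(single_feature_counterfactual(X[0], model))

A full system such as CLEAR would additionally fit a local regression around the instance and use counterfactuals of this kind to measure and improve the regression's fidelity.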
More specifically, @cite_14 have proposed LORE – Local Rule based Explanations, which provides local explanations for binary classification tasks using decision trees. It is model-agnostic, generates local models from synthetic data, has many other similarities to LIME, but it also generates counterfactual explanations. The authors criticise LIME for producing neighbourhood datasets whose observations are too distant from each other and have too low a density around the instance being explained. By contrast, LORE uses a genetic algorithm to create neighbourhood datasets with a high density around the instance and the decision boundary. The authors claim that their system outperforms LIME and they provide fidelity statistics comparing LORE and LIME, where fidelity is defined in terms of how well local models perform in making the same classifications as the underlying machine learning system. However, their fidelity statistics for LIME could be misconstrued; it does not follow from being able to mimic a system’s classifications that a local model will also faithfully mimic its counterfactuals (see Section 4).
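For intuition about the fidelity measure discussed here, namely how often a local surrogate reproduces the black box's classifications on a neighbourhood, the sketch below fits a shallow decision tree on Gaussian perturbations of an instance and reports the agreement rate. The random-forest black box and the perturbation scheme are illustrative assumptions; neither LORE's genetic neighbourhood generation nor LIME's sampling is reproduced here.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=5, random_state=1)
black_box = RandomForestClassifier(random_state=1).fit(X, y)

def local_fidelity(x, black_box, n_samples=500, scale=0.3, seed=0):
    """Fit a shallow tree on a synthetic neighbourhood of x and return the
    fraction of neighbourhood points on which tree and black box agree."""
    rng = np.random.default_rng(seed)
    neighbourhood = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    bb_labels = black_box.predict(neighbourhood)
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=seed)
    surrogate.fit(neighbourhood, bb_labels)
    return (surrogate.predict(neighbourhood) == bb_labels).mean()

print(local_fidelity(X[0], black_box))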
{ "cite_N": [ "@cite_14" ], "mid": [ "2803532212" ], "abstract": [ "The recent years have witnessed the rise of accurate but obscure decision systems which hide the logic of their internal decision processes to the users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. Therefore, we need explanations that reveals the reasons why a predictor takes a certain decision. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons of the decision taken on a specific instance. We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first leans a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. Then it derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons of the decision; and a set of counterfactual rules, suggesting the changes in the instance's features that lead to a different outcome. Wide experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy in mimicking the black box." ] }
1908.02743
2965212953
Consider a distributed system with @math processors out of which @math can be Byzantine faulty. In the approximate agreement task, each processor @math receives an input value @math and has to decide on an output value @math such that - the output values are in the convex hull of the non-faulty processors' input values, - the output values are within distance @math of each other. Classically, the values are assumed to be from an @math -dimensional Euclidean space, where @math . In this work, we study the task in a discrete setting, where the input values have some structure expressible as a graph. Namely, the input values are vertices of a finite graph @math and the goal is to output vertices that are within distance @math of each other in @math , but still remain in the graph-induced convex hull of the input values. For @math , the task reduces to consensus and cannot be solved with a deterministic algorithm in an asynchronous system even with a single crash fault. For any @math , we show that the task is solvable in asynchronous systems when @math is chordal and @math , where @math is the clique number of @math . In addition, we give the first Byzantine-tolerant algorithm for a variant of lattice agreement. For synchronous systems, we show tight resilience bounds for the exact variants of these and related tasks over a large class of combinatorial structures.
The seminal result of @cite_0 showed that consensus cannot be reached in asynchronous systems in the presence of crash faults. @cite_5 showed that it is, however, possible to reach approximate agreement in an asynchronous system even with arbitrary faulty behavior when the values reside on the continuous real line. Subsequently, the one-dimensional approximate agreement problem has been extensively studied @cite_5 @cite_10 @cite_36 @cite_41 . Fekete @cite_36 showed that any algorithm reducing the distance of values from @math to @math requires @math asynchronous rounds when @math ; in the discrete setting this yields the bound @math for paths of length @math . Recently, @cite_13 introduced the natural generalisation of approximate agreement to multiple dimensions and showed that the @math -dimensional problem is solvable in an asynchronous system with Byzantine faults if and only if @math holds for any given @math .
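As a concrete illustration of how the one-dimensional approximate agreement algorithms surveyed here shrink the spread of values, the sketch below shows the classical round rule: sort the received values, discard the t smallest and t largest as potentially faulty, and adopt the midpoint of what remains. This is a crash/omission-flavoured toy rather than any specific cited protocol; Byzantine-tolerant versions need further safeguards, and the example values and fault bound are assumptions.

from itertools import combinations

def approx_agreement_round(received, t):
    """One round of a classical 1-D approximate agreement rule: drop the t
    smallest and t largest received values (which may come from faulty
    processors) and adopt the midpoint of the remaining range."""
    vals = sorted(received)
    trimmed = vals[t:len(vals) - t] if t > 0 else vals
    return (trimmed[0] + trimmed[-1]) / 2.0

# Toy check with n = 7 inputs and fault bound t = 1: whichever n - 1 = 6 values
# a processor happens to receive, the adopted midpoints end up in a much
# narrower interval than the original input range, and stay inside it.
inputs = [0.0, 1.0, 4.0, 4.5, 5.0, 9.0, 10.0]
outputs = [approx_agreement_round(subset, 1) for subset in combinations(inputs, 6)]
print(min(outputs), max(outputs), "vs input range", min(inputs), max(inputs))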
{ "cite_N": [ "@cite_13", "@cite_41", "@cite_36", "@cite_0", "@cite_5", "@cite_10" ], "mid": [ "2139686511", "1971773339", "2010107859", "2147056869", "2115307136", "2025375132" ], "abstract": [ "The condition-based approach identifies sets of input vectors, called conditions, for which it is possible to design an asynchronous protocol solving a distributed problem despite process crashes. This paper establishes a direct correlation between distributed agreement problems and error-correcting codes. In particular, crash failures in distributed agreement problems correspond to erasure failures in error-correcting codes and Byzantine and value domain faults correspond to corruption errors. This correlation is exemplified by concentrating on two well-known agreement problems, namely, consensus and interactive consistency, in the context of the condition-based approach. Specifically, the paper presents the following results: first, it shows that the conditions that allow interactive consistency to be solved despite fc crashes and fc value domain faults correspond exactly to the set of error-correcting codes capable of recovering from fc erasures and fc corruptions. Second, the paper proves that consensus can be solved despite fc crash failures if the condition corresponds to a code whose Hamming distance is fc + 1 and Byzantine consensus can be solved despite fb Byzantine faults if the Hamming distance of the code is 2 fb + 1. Finally, the paper uses the above relations to establish several results in distributed agreement that are derived from known results in error-correcting codes and vice versa.", "Consider a network of @math n processes, where each process inputs a @math d-dimensional vector of reals. All processes can communicate directly with others via reliable FIFO channels. We discuss two problems. The multidimensional Byzantine consensus problem, for synchronous systems, requires processes to decide on a single @math d-dimensional vector @math v?Rd, inside the convex hull of @math d-dimensional vectors that were input by the non-faulty processes. Also, the multidimensional Byzantine approximate agreement (MBAA) problem, for asynchronous systems, requires processes to decide on multiple @math d-dimensional vectors in @math Rd, all within a fixed Euclidean distance @math ∈ of each other, and inside the convex hull of @math d-dimensional vectors that were input by the non-faulty processes. We obtain the following results for the problems above, while tolerating up to @math f Byzantine failures in systems with complete communication graphs: (1) In synchronous systems, @math n>max 3f,(d+1)f is necessary and sufficient to solve the multidimensional consensus problem. (2) In asynchronous systems, @math n>(d+2)f is necessary and sufficient to solve the multidimensional approximate agreement problem. Our sufficiency proofs are constructive, giving explicit protocols for the problems. In particular, for the MBAA problem, we give two protocols with strictly different properties and applications.", "The problem of e-approximate agreement in Byzantine asynchronous systems is well-understood when all values lie on the real line. In this paper, we generalize the problem to consider values that lie in Rm, for m ≥ 1, and present an optimal protocol in regard to fault tolerance. Our scenario is the following. Processes start with values in Rm, for m ≥ 1, and communicate via message-passing. The system is asynchronous: there is no upper bound on processes' relative speeds or on message delay. 
Some faulty processes can display arbitrarily malicious (i.e. Byzantine) behavior. Non-faulty processes must decide on values that are: (1) in Rm; (2) within distance e of each other; and (3) in the convex hull of the non-faulty processes' inputs. We give an algorithm with a matching lower bound on fault tolerance: we require n > t(m+2), where n is the number of processes, t is the number of Byzantine processes, and input and output values reside in Rm. Non-faulty processes send O(n2 d log(m e max δ(d): 1 ≤ d ≤ m )) messages in total, where δ(d) is the range of non-faulty inputs projected at coordinate d. The Byzantine processes do not affect the algorithm's running time.", "Much of the past work on asynchronous approximate Byzantine consensus has assumed scalar inputs at the nodes [4, 8]. Recent work has yielded approximate Byzantine consensus algorithms for the case when the input at each node is a d-dimensional vector, and the nodes must reach consensus on a vector in the convex hull of the input vectors at the fault-free nodes [9, 13]. The d-dimensional vectors can be equivalently viewed as points in the d-dimensional Euclidean space. Thus, the algorithms in [9, 13] require the fault-free nodes to decide on a point in the d-dimensional space. In our recent work [12], we proposed a generalization of the consensus problem, namely Byzantine convex consensus (BCC), which allows the decision to be a convex polytope in the d-dimensional space, such that the decided polytope is within the convex hull of the input vectors at the fault-free nodes. We also presented an asynchronous approximate BCC algorithm. In this paper, we propose a new BCC algorithm with optimal fault-tolerance that also agrees on a convex polytope that is as large as possible under adversarial conditions. Our prior work [12] does not guarantee the optimality of the output polytope.", "Consider an asynchronous system where each process begins with an arbitrary real value. Given some fixed e>0, an approximate agreement algorithm must have all non-faulty processes decide on values that are at most e from each other and are in the range of the initial values of the non-faulty processes. Previous constructions solved asynchronous approximate agreement only when there were at least 5t+1 processes, t of which may be Byzantine. In this paper we close an open problem raised by in 1983. We present a deterministic optimal resilience approximate agreement algorithm that can tolerate any t Byzantine faults while requiring only 3t+1 processes. The algorithm's rate of convergence and total message complexity are efficiently bounded as a function of the range of the initial values of the non-faulty processes. All previous asynchronous algorithms that are resilient to Byzantine failures may require arbitrarily many messages to be sent.", "This paper proves a necessary and sufficient condition for the existence of iterative, algorithms that achieve approximate Byzantine consensus in arbitrary directed graphs, where each directed edge represents a communication channel between a pair of nodes. The class of iterative algorithms considered in this paper ensures that, after each iteration of the algorithm, the state of each fault-free node remains in the convex hull of the states of the fault-free nodes at the end of the previous iteration. The following convergence requirement is imposed: for any e > 0, after a sufficiently large number of iterations, the states of the fault-free nodes are guaranteed to be within e of each other. 
To the best of our knowledge, tight necessary and sufficient conditions for the existence of such iterative consensus algorithms in synchronous arbitrary point-to-point networks in presence of Byzantine faults, have not been developed previously. The methodology and results presented in this paper can also be extended to asynchronous systems." ] }
1908.02743
2965212953
Consider a distributed system with @math processors out of which @math can be Byzantine faulty. In the approximate agreement task, each processor @math receives an input value @math and has to decide on an output value @math such that - the output values are in the convex hull of the non-faulty processors' input values, - the output values are within distance @math of each other. Classically, the values are assumed to be from an @math -dimensional Euclidean space, where @math . In this work, we study the task in a discrete setting, where the input values have some structure expressible as a graph. Namely, the input values are vertices of a finite graph @math and the goal is to output vertices that are within distance @math of each other in @math , but still remain in the graph-induced convex hull of the input values. For @math , the task reduces to consensus and cannot be solved with a deterministic algorithm in an asynchronous system even with a single crash fault. For any @math , we show that the task is solvable in asynchronous systems when @math is chordal and @math , where @math is the clique number of @math . In addition, we give the first Byzantine-tolerant algorithm for a variant of lattice agreement. For synchronous systems, we show tight resilience bounds for the exact variants of these and related tasks over a large class of combinatorial structures.
The lattice agreement problem was originally introduced in the context of wait-free algorithms in shared memory models @cite_47 @cite_53 . The problem has recently resurfaced in the context of asynchronous message-passing models with crash faults @cite_28 @cite_56 . These papers consider the problem when the validity condition is given as @math , i.e., the output of a processor must satisfy @math and the feasible area is determined also by the inputs of faulty processors. However, it is not difficult to see that under Byzantine faults, this validity condition is not reasonable, as the problem cannot be solved even with one faulty processor.
{ "cite_N": [ "@cite_28", "@cite_47", "@cite_56", "@cite_53" ], "mid": [ "1967858331", "2083306187", "2139686511", "2042928046" ], "abstract": [ "In the classical consensus problem, each of n processors receives a private input value and produces a decision value which is one of the original input values, with the requirement that all processors decide the same value. A central result in distributed computing is that, in several standard models including the asynchronous shared-memory model, this problem has no deterministic solution. The k-set agreement problem is a generalization of the classical consensus proposed by Chaudhuri [ Inform. and Comput., 105 (1993), pp. 132--158], where the agreement condition is weakened so that the decision values produced may be different, as long as the number of distinct values is at most k. For @math it was not known whether this problem is solvable deterministically in the asynchronous shared memory model. In this paper, we resolve this question by showing that for any k < n, there is no deterministic wait-free protocol for n processors that solves the k-set agreement problem. The proof technique is new: it is based on the development of a topological structure on the set of possible processor schedules of a protocol. This topological structure has a natural interpretation in terms of the knowledge of the processors of the state of the system. This structure reveals a close analogy between the impossibility of wait-free k-set agreement and the Brouwer fixed point theorem for the k-dimensional ball.", "In the classical consensus problem,each of n processors receives a private input value and produces a decision value which is one of the original input values,with the requirement that all processors decide the same value. A central result in distributed computing is that,in several standard models including the asynchronous shared-memory model,this problem has no determinis- tic solution. The k-set agreement problem is a generalization of the classical consensus proposed by Chaudhuri (Inform. and Comput.,105 (1993),pp. 132-158),where the agreement condition is weak- ened so that the decision values produced may be different,as long as the number of distinct values is at most k .F or n>k ≥ 2 it was not known whether this problem is solvable deterministically in the asynchronous shared memory model. In this paper,we resolve this question by showing that for any k<n ,there is no deterministic wait-free protocol for n processors that solves the k-set agreement problem. The proof technique is new: it is based on the development of a topological structure on the set of possible processor schedules of a protocol. This topological structure has a natural interpretation in terms of the knowledge of the processors of the state of the system. This structure reveals a close analogy between the impossibility of wait-free k-set agreement and the Brouwer fixed point theorem for the k-dimensional ball.", "The condition-based approach identifies sets of input vectors, called conditions, for which it is possible to design an asynchronous protocol solving a distributed problem despite process crashes. This paper establishes a direct correlation between distributed agreement problems and error-correcting codes. In particular, crash failures in distributed agreement problems correspond to erasure failures in error-correcting codes and Byzantine and value domain faults correspond to corruption errors. 
This correlation is exemplified by concentrating on two well-known agreement problems, namely, consensus and interactive consistency, in the context of the condition-based approach. Specifically, the paper presents the following results: first, it shows that the conditions that allow interactive consistency to be solved despite fc crashes and fc value domain faults correspond exactly to the set of error-correcting codes capable of recovering from fc erasures and fc corruptions. Second, the paper proves that consensus can be solved despite fc crash failures if the condition corresponds to a code whose Hamming distance is fc + 1 and Byzantine consensus can be solved despite fb Byzantine faults if the Hamming distance of the code is 2 fb + 1. Finally, the paper uses the above relations to establish several results in distributed agreement that are derived from known results in error-correcting codes and vice versa.", "We show that no algorithm exists for deciding whether a finite task for three or more processors is wait-free solvable in the asynchronous read-write shared-memory model. This impossibility result implies that there is no constructive (recursive) characterization of wait-free solvable tasks. It also applies to other shared-memory models of distributed computing, such as the comparison-based model." ] }
1908.02743
2965212953
Consider a distributed system with @math processors out of which @math can be Byzantine faulty. In the approximate agreement task, each processor @math receives an input value @math and has to decide on an output value @math such that - the output values are in the convex hull of the non-faulty processors' input values, - the output values are within distance @math of each other. Classically, the values are assumed to be from an @math -dimensional Euclidean space, where @math . In this work, we study the task in a discrete setting, where the input values have some structure expressible as a graph. Namely, the input values are vertices of a finite graph @math and the goal is to output vertices that are within distance @math of each other in @math , but still remain in the graph-induced convex hull of the input values. For @math , the task reduces to consensus and cannot be solved with a deterministic algorithm in an asynchronous system even with a single crash fault. For any @math , we show that the task is solvable in asynchronous systems when @math is chordal and @math , where @math is the clique number of @math . In addition, we give the first Byzantine-tolerant algorithm for a variant of lattice agreement. For synchronous systems, we show tight resilience bounds for the exact variants of these and related tasks over a large class of combinatorial structures.
Another class of structured agreement problems in the wait-free asynchronous setting are loop agreement tasks @cite_32 , which generalise @math -set agreement and approximate agreement (e.g., @math -set agreement and one-dimensional approximate agreement). In loop agreement, the set of inputs consists of three distinct vertices on a loop in a 2-dimensional simplicial complex and the outputs are vertices of the complex with certain constraints, whereas a generalisation of loop agreement to higher dimensions has also been proposed @cite_39 . These tasks are part of a large body of work exploring the deep connection of asynchronous computability and combinatorial topology, which has successfully been used to characterise the solvability of various distributed tasks @cite_11 . Gafni and Kuznetsov's @math -reconciliation task @cite_17 achieves geodesic approximate agreement on a graph of system configurations.
{ "cite_N": [ "@cite_17", "@cite_11", "@cite_32", "@cite_39" ], "mid": [ "1967858331", "2083306187", "2886192190", "2011451665" ], "abstract": [ "In the classical consensus problem, each of n processors receives a private input value and produces a decision value which is one of the original input values, with the requirement that all processors decide the same value. A central result in distributed computing is that, in several standard models including the asynchronous shared-memory model, this problem has no deterministic solution. The k-set agreement problem is a generalization of the classical consensus proposed by Chaudhuri [ Inform. and Comput., 105 (1993), pp. 132--158], where the agreement condition is weakened so that the decision values produced may be different, as long as the number of distinct values is at most k. For @math it was not known whether this problem is solvable deterministically in the asynchronous shared memory model. In this paper, we resolve this question by showing that for any k < n, there is no deterministic wait-free protocol for n processors that solves the k-set agreement problem. The proof technique is new: it is based on the development of a topological structure on the set of possible processor schedules of a protocol. This topological structure has a natural interpretation in terms of the knowledge of the processors of the state of the system. This structure reveals a close analogy between the impossibility of wait-free k-set agreement and the Brouwer fixed point theorem for the k-dimensional ball.", "In the classical consensus problem,each of n processors receives a private input value and produces a decision value which is one of the original input values,with the requirement that all processors decide the same value. A central result in distributed computing is that,in several standard models including the asynchronous shared-memory model,this problem has no determinis- tic solution. The k-set agreement problem is a generalization of the classical consensus proposed by Chaudhuri (Inform. and Comput.,105 (1993),pp. 132-158),where the agreement condition is weak- ened so that the decision values produced may be different,as long as the number of distinct values is at most k .F or n>k ≥ 2 it was not known whether this problem is solvable deterministically in the asynchronous shared memory model. In this paper,we resolve this question by showing that for any k<n ,there is no deterministic wait-free protocol for n processors that solves the k-set agreement problem. The proof technique is new: it is based on the development of a topological structure on the set of possible processor schedules of a protocol. This topological structure has a natural interpretation in terms of the knowledge of the processors of the state of the system. This structure reveals a close analogy between the impossibility of wait-free k-set agreement and the Brouwer fixed point theorem for the k-dimensional ball.", "This paper studies the lattice agreement problem and the generalized lattice agreement problem in distributed message passing systems. In the lattice agreement problem, given input values from a lattice, processes have to non-trivially decide output values that lie on a chain. We consider the lattice agreement problem in both synchronous and asynchronous systems. 
For synchronous lattice agreement, we present two algorithms which run in @math and @math rounds, respectively, where @math denotes the height of the input sublattice @math , @math is the number of crash failures the system can tolerate, and @math is the number of processes in the system. These algorithms have significant better round complexity than previously known algorithms. The algorithm by attiya1995atomic takes @math synchronous rounds, and the algorithm by Mavronicolasa mavronicolasabound takes @math rounds. For asynchronous lattice agreement, we propose an algorithm which has time complexity of @math message delays which improves on the previously known time complexity of @math message delays. The generalized lattice agreement problem defined by in faleiro2012generalized is a generalization of the lattice agreement problem where it is applied for the replicated state machine. We propose an algorithm which guarantees liveness when a majority of the processes are correct in asynchronous systems. Our algorithm requires @math units of time in the worst case which is better than @math units of time required by the algorithm of faleiro2012generalized .", "Loop agreement is a family of wait-free tasks that includes instances of set agreement and approximate agreement tasks. A task G implements task F if one can construct a solution to F from a solution to G, possibly followed by access to a read write memory. Loop agreement tasks form a lattice under this notion of implementation.This paper presents a classification of loop agreement tasks. Each loop agreement task can be assigned an algebraic signature consisting of a finitely presented group G and a distinguished element g in G. This signature characterizes the task's power to implement other tasks. If F and G are loop agreement tasks with respective signatures 〈F,f〉 and 〈G,g〉, then F implements G if and only if there exists a group homomorphism h : F → G carrying f to g." ] }
1908.02571
2964743966
Informing professionals about the latest research results in their field is a particularly important task in the field of health care, since any development in this field directly improves the health status of the patients. Meanwhile, social media is an infrastructure that allows public instant sharing of information, and thus it has recently become popular in medical applications. In this study, we apply Multi Distance Knowledge Graph Embeddings (MDE) to link physicians and surgeons to the latest medical breakthroughs that are shared as research results on Twitter. Our study shows that, using this method, physicians can be informed about the new findings in their field, given that they have an account dedicated to their profession.
Classic link prediction methods on social media use graph properties of the social network or NLP features of nodes to predict links between entities. For example, @cite_3 is based solely on graph features and @cite_8 uses a similar technique for social networks in healthcare. Meanwhile, @cite_14 uses common words to cluster and rank nodes and, based on that, predicts that closely-ranked nodes are connected. Another study @cite_10 uses a combination of graph features and keyword matches to train classifiers (SVM, Naive Bayes, etc.) to predict if a link exists between two nodes.
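As a concrete example of the graph-feature style of link prediction described above (a generic sketch, not the method of any particular cited work), the snippet below scores non-adjacent node pairs by their number of common neighbours and by the Jaccard coefficient of their neighbourhoods; higher-scoring pairs are predicted as more likely links. The toy adjacency dictionary is an assumption for illustration.

# Toy undirected social graph as an adjacency dict; in practice this would be
# the observed follower/friend network.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c", "e"},
    "e": {"d"},
}

def link_scores(graph, u, v):
    """Classic unsupervised link-prediction features for a node pair (u, v):
    number of common neighbours and the Jaccard coefficient of their
    neighbourhoods. Higher scores suggest a more likely (missing) link."""
    nu, nv = graph[u], graph[v]
    common = len(nu & nv)
    union = len(nu | nv)
    return common, (common / union if union else 0.0)

candidates = [(u, v) for u in graph for v in graph
              if u < v and v not in graph[u]]
for u, v in sorted(candidates, key=lambda p: link_scores(graph, *p), reverse=True):
    print(u, v, link_scores(graph, u, v))

Supervised variants, like the classifier-based approach mentioned above, feed such structural scores (possibly together with keyword-match features) into a standard classifier.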
{ "cite_N": [ "@cite_14", "@cite_10", "@cite_3", "@cite_8" ], "mid": [ "2137962371", "2542727820", "2097915776", "2136116685" ], "abstract": [ "In this paper we discuss a very simple approach of combining content and link information in graph structures for the purpose of community discovery, a fundamental task in network analysis. Our approach hinges on the basic intuition that many networks contain noise in the link structure and that content information can help strengthen the community signal. This enables ones to eliminate the impact of noise (false positives and false negatives), which is particularly prevalent in online social networks and Web-scale information networks. Specifically we introduce a measure of signal strength between two nodes in the network by fusing their link strength with content similarity. Link strength is estimated based on whether the link is likely (with high probability) to reside within a community. Content similarity is estimated through cosine similarity or Jaccard coefficient. We discuss a simple mechanism for fusing content and link similarity. We then present a biased edge sampling procedure which retains edges that are locally relevant for each graph node. The resulting backbone graph can be clustered using standard community discovery algorithms such as Metis and Markov clustering. Through extensive experiments on multiple real-world datasets (Flickr, Wikipedia and CiteSeer) with varying sizes and characteristics, we demonstrate the effectiveness and efficiency of our methods over state-of-the-art learning and mining approaches several of which also attempt to combine link and content analysis for the purposes of community discovery. Specifically we always find a qualitative benefit when combining content with link analysis. Additionally our biased graph sampling approach realizes a quantitative benefit in that it is typically several orders of magnitude faster than competing approaches.", "Online social networking sites have become increasingly popular over the last few years. As a result, new interdisciplinary research directions have emerged in which social network analysis methods are applied to networks containing hundreds millions of users. Unfortunately, links between individuals may be missing due to imperfect acquirement processes or because they are not yet reflected in the online network (i.e., friends in real world did not form a virtual connection.) Existing link prediction techniques lack the scalability required for full application on a continuously growing social network which may be adding everyday users with thousands of connections. The primary bottleneck in link prediction techniques is extracting structural features required for classifying links. In this paper we propose a set of simple, easy-to-compute structural features that can be analyzed to identify missing links. We show that a machine learning classifier trained using the proposed simple structural features can successfully identify missing links even when applied to a hard problem of classifying links between individuals who have at least one common friend. A new friends measure that we developed is shown to be a good predictor for missing links and an evaluation experiment was performed on five large social networks datasets: Face book, Flickr, You Tube, Academia and The Marker. Our methods can provide social network site operators with the capability of helping users to find known, offline contacts and to discover new friends online. 
They may also be used for exposing hidden links in an online social network.", "Link prediction is a complex, inherently relational, task. Be it in the domain of scientific citations, social networks or hypertext links, the underlying data are extremely noisy and the characteristics useful for prediction are not readily available in a “flat” file format, but rather involve complex relationships among objects. In this paper, we propose the application of our methodology for Statistical Relational Learning to building link prediction models. We propose an integrated approach to building regression models from data stored in relational databases in which potential predictors are generated by structured search of the space of queries to the database, and then tested for inclusion in a logistic regression. We present experimental results for the task of predicting citations made in scientific literature using relational data taken from CiteSeer. This data includes the citation graph, authorship and publication venues of papers, as well as their word content.", "Link prediction is a fundamental problem in social network analysis. The key technique in unsupervised link prediction is to find an appropriate similarity measure between nodes of a network. A class of wildly used similarity measures are based on random walk on graph. The traditional random walk (TRW) considers the link structures by treating all nodes in a network equivalently, and ignores the centrality of nodes of a network. However, in many real networks, nodes of a network not only prefer to link to the similar node, but also prefer to link to the central nodes of the network. To address this issue, we use maximal entropy random walk (MERW) for link prediction, which incorporates the centrality of nodes of the network. First, we study certain important properties of MERW on graph @math by constructing an eigen-weighted graph G. We show that the transition matrix and stationary distribution of MERW on G are identical to the ones of TRW on G. Based on G, we further give the maximal entropy graph Laplacians, and show how to fast compute the hitting time and commute time of MERW. Second, we propose four new graph kernels and two similarity measures based on MERW for link prediction. Finally, to exhibit the power of MERW in link prediction, we compare 27 various link prediction methods over 3 synthetic and 8 real networks. The results show that our newly proposed MERW based methods outperform the state-of-the-art method on most datasets." ] }
1908.02571
2964743966
Informing professionals about the latest research results in their field is a particularly important task in the field of health care, since any development in this field directly improves the health status of the patients. Meanwhile, social media is an infrastructure that allows public instant sharing of information, and thus it has recently become popular in medical applications. In this study, we apply Multi Distance Knowledge Graph Embeddings (MDE) to link physicians and surgeons to the latest medical breakthroughs that are shared as research results on Twitter. Our study shows that, using this method, physicians can be informed about the new findings in their field, given that they have an account dedicated to their profession.
TransE @cite_13 is an embedding model that is popular because of its simplicity and efficiency. It models each relation in a KG as a translation between the vectors representing the entities it connects. The score function describing these vectors in TransE is:
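In its standard form (following the TransE paper cited above), the score of a triple (h, r, t) is typically taken to be the distance between the translated head embedding and the tail embedding; the symbols below, denoting the head, relation and tail embeddings, are used here only for illustration:

\[
  f(h, r, t) \;=\; \lVert \mathbf{e}_h + \mathbf{r} - \mathbf{e}_t \rVert_{p}, \qquad p \in \{1, 2\},
\]

so that \( \mathbf{e}_h + \mathbf{r} \approx \mathbf{e}_t \) is expected to hold for triples present in the knowledge graph, and candidate links (such as a physician–publication pair) can be ranked by this distance, lower meaning more plausible.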
{ "cite_N": [ "@cite_13" ], "mid": [ "2127795553" ], "abstract": [ "We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large scale data set with 1M entities, 25k relationships and more than 17M training samples." ] }
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication based fault-tolerant systems and in particular in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very light-weight crypto requirements.
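To sketch the shape of the single-round fast path this abstract describes, and only as a schematic illustration rather than the paper's actual protocol, the snippet below has each node decide immediately if it receives the predetermined preferred value from at least n - f distinct nodes, and otherwise fall back to the unmodified base consensus protocol. The function names, threshold and stand-in base protocol are assumptions; in particular, a real optimizer must also ensure that nodes taking the fallback path can only decide the same value, which this sketch does not capture.

def fast_path_decide(received, n, f, preferred, base_consensus, my_value):
    """Schematic one-round optimizer (illustrative only, benign-fault flavour).

    received: values reported by distinct nodes in the first round.
    Decide the preferred value at once if at least n - f nodes report it;
    otherwise fall back to running the unmodified base consensus protocol.
    """
    if sum(1 for v in received if v == preferred) >= n - f:
        return preferred
    return base_consensus(my_value)

def toy_base_consensus(value):
    # Stand-in: a real deployment would run the full base consensus protocol here.
    return value

n, f, preferred = 4, 1, "commit"
print(fast_path_decide(["commit", "commit", "commit", "abort"],
                       n, f, preferred, toy_base_consensus, "commit"))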
The first work to explore consensus terminating in a single communication round in the benign failure model is @cite_10 . The basic protocol in @cite_10 requires @math . That protocol is also extended to support a preferred value, which improves the resiliency requirement to @math , similar to our work. The main contribution of this paper compared to @cite_10 is our exploration of this problem under Byzantine failures and the fact that we present a single generic optimizer for both failure models.
{ "cite_N": [ "@cite_10" ], "mid": [ "1976693492" ], "abstract": [ "This paper introduces some algorithms to solve crash-failure, failure-by-omission and Byzantine failure versions of the Byzantine Generals or consensus problem, where non-faulty processors need only arrive at values that are close together rather than identical. For each failure model and each value ofS, we give at-resilient algorithm usingS rounds of communication. IfS=t+1, exact agreement is obtained. In the algorithms for the failure-by-omission and Byzantine failure models, each processor attempts to identify the faulty processors and corrects values transmited by them to reduce the amount of disagreement. We also prove lower bounds for each model, to show that each of our algorithms has a convergence rate that is asymptotic to the best possible in that model as the number of processors increases." ] }
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication based fault-tolerant systems and in particular in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very light-weight crypto requirements.
The work of @cite_18 explored simple Byzantine consensus protocols that can terminate in a single communication round whenever all nodes start with the same value and certain failures do not manifest. Yet, the probabilistic protocol of @cite_18 required @math while their deterministic protocol needed @math . In contrast, our optimizer, when instantiated for Byzantine failures, can withstand up to @math with the classical validity definition (and @math with external validity, which was not explored in @cite_18 ). This is due to biasing the consensus towards preferring a certain value. The price that we pay compared to @cite_18 is that if all nodes start with the non-preferred value and the respective failures do not manifest, the protocol in @cite_18 would terminate in a single communication step while our optimizer would have to invoke the full protocol.
{ "cite_N": [ "@cite_18" ], "mid": [ "2058322902" ], "abstract": [ "We present the first protocol that reaches asynchronous Byzantine consensus in two communication steps in the common case. We prove that our protocol is optimal in terms of both number of communication steps and number of processes for two-step consensus. The protocol can be used to build a replicated state machine that requires only three communication steps per request in the common case. Further, we show a parameterized version of the protocol that is safe despite f Byzantine failures and, in the common case, guarantees two-step execution despite some number t of failures (t les f). We show that this parameterized two-step consensus protocol is also optimal in terms of both number of communication steps and number of processes" ] }
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication based fault-tolerant systems and in particular in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very light-weight crypto requirements.
Traditional deterministic Byzantine consensus protocols, most notably PBFT @cite_15 , require at least 3 communication rounds to terminate. Multiple works that reduce this number have been published, each presenting a unique optimization. The Q/U work presented a client-driven protocol @cite_17 which enables termination in two communication rounds when favorable conditions are met. Yet, its resiliency requirement is @math , compared to our @math for classical validity and @math for external validity. The HQ work improved the resiliency of Q/U to @math , yet does not perform well under high network load @cite_2 . Also, our optimizer is generic whereas Q/U and HQ are specialized solutions, each tailored to its intricate protocol.
{ "cite_N": [ "@cite_15", "@cite_2", "@cite_17" ], "mid": [ "2129467152", "2902905458", "2058322902" ], "abstract": [ "There are currently two approaches to providing Byzantine-fault-tolerant state machine replication: a replica-based approach, e.g., BFT, that uses communication between replicas to agree on a proposed ordering of requests, and a quorum-based approach, such as Q U, in which clients contact replicas directly to optimistically execute operations. Both approaches have shortcomings: the quadratic cost of inter-replica communication is un-necessary when there is no contention, and Q U requires a large number of replicas and performs poorly under contention. We present HQ, a hybrid Byzantine-fault-tolerant state machine replication protocol that overcomes these problems. HQ employs a lightweight quorum-based protocol when there is no contention, but uses BFT to resolve contention when it arises. Furthermore, HQ uses only 3f + 1 replicas to tolerate f faults, providing optimal resilience to node failures. We implemented a prototype of HQ, and we compare its performance to BFT and Q U analytically and experimentally. Additionally, in this work we use a new implementation of BFT designed to scale as the number of faults increases. Our results show that both HQ and our new implementation of BFT scale as f increases; additionally our hybrid approach of using BFT to handle contention works well.", "This paper introduces a new leaderless Byzantine consensus called the Democratic Byzantine Fault Tolerance (DBFT) for blockchains. While most blockchain consensus protocols rely on a correct leader or coordinator to terminate, our algorithm can terminate even when its coordinator is faulty. The key idea is to allow processes to complete asynchronous rounds as soon as they receive a threshold of messages, instead of having to wait for a message from a coordinator that may be slow. The resulting decentralization is particularly appealing for blockchains for two reasons: (i) each node plays a similar role in the execution of the consensus, hence making the decision inherently “democratic” (ii) decentralization avoids bottlenecks by balancing the load, making the solution scalable. DBFT is deterministic, assumes partial synchrony, is resilience optimal, time optimal and does not need signatures. We first present a simple safe binary Byzantine consensus algorithm, modify it to ensure termination, and finally present an optimized reduction from multivalue consensus to binary consensus whose fast path terminates in 4 message delays.", "We present the first protocol that reaches asynchronous Byzantine consensus in two communication steps in the common case. We prove that our protocol is optimal in terms of both number of communication steps and number of processes for two-step consensus. The protocol can be used to build a replicated state machine that requires only three communication steps per request in the common case. Further, we show a parameterized version of the protocol that is safe despite f Byzantine failures and, in the common case, guarantees two-step execution despite some number t of failures (t les f). We show that this parameterized two-step consensus protocol is also optimal in terms of both number of communication steps and number of processes" ] }
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication based fault-tolerant systems and in particular in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very light-weight crypto requirements.
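To make the fast path concrete, the following is a minimal Python sketch of the kind of single-round check such an optimizer can place in front of an arbitrary base protocol. It is an illustration only: the threshold of n - f identical replies, the function names, and the toy fallback are assumptions made for the example and are not taken from the paper, whose exact resiliency bounds are elided above.

    # Illustrative fast-path wrapper around an arbitrary base consensus protocol
    # (assumed rule, not the paper's exact one): broadcast once, collect the first
    # n - f replies, and decide immediately only if all of them carry the
    # predetermined preferred value; otherwise fall back to the base protocol.

    def fast_path_decide(replies, preferred, n, f, base_protocol):
        """replies: the values received in the first all-to-all round."""
        if len(replies) >= n - f and all(r == preferred for r in replies[:n - f]):
            return preferred                          # single-round termination
        return base_protocol(replies)                 # any consensus protocol fits here

    # Toy usage: 7 nodes, at most 2 slow/crashed, everyone proposed the preferred value.
    majority = lambda vals: max(set(vals), key=vals.count)   # stand-in base protocol
    print(fast_path_decide(["v"] * 5, "v", n=7, f=2, base_protocol=majority))  # -> v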
The Fast Byzantine Consensus (FaB) protocol was the first Byzantine consensus protocol to terminate in two communication phases in the normal case while requiring @math @cite_29 . The normal case in @cite_29 is defined as the case in which there is a unique correct leader, all correct acceptors agree on its identity, and the system is in a period of synchrony. This protocol translates into a @math -phase state machine replication protocol. Another variant can accommodate @math , where @math is an upper bound on the number of non-leaders suffering Byzantine failures.
{ "cite_N": [ "@cite_29" ], "mid": [ "2058322902" ], "abstract": [ "We present the first protocol that reaches asynchronous Byzantine consensus in two communication steps in the common case. We prove that our protocol is optimal in terms of both number of communication steps and number of processes for two-step consensus. The protocol can be used to build a replicated state machine that requires only three communication steps per request in the common case. Further, we show a parameterized version of the protocol that is safe despite f Byzantine failures and, in the common case, guarantees two-step execution despite some number t of failures (t les f). We show that this parameterized two-step consensus protocol is also optimal in terms of both number of communication steps and number of processes" ] }
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication based fault-tolerant systems and in particular in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very light-weight crypto requirements.
Zyzzyva is a client-driven protocol @cite_24 that terminates after @math communication rounds (including the communication between the client and the replicas) whenever the client receives identical replies from all @math replicas. Our optimizer obtains termination in a single communication round among the replicas even when up to @math of them may crash or be slow. It achieves this by relying on all-to-all communication and by ensuring fast termination only when the preferred value is included in the first @math replies. Also, our optimizer is generic, while Zyzzyva and FaB are specialized solutions.
{ "cite_N": [ "@cite_24" ], "mid": [ "2139359217" ], "abstract": [ "We present Zyzzyva, a protocol that uses speculation to reduce the cost and simplify the design of Byzantine fault tolerant state machine replication. In Zyzzyva, replicas respond to a client's request without first running an expensive three-phase commit protocol to reach agreement on the order in which the request must be processed. Instead, they optimistically adopt the order proposed by the primary and respond immediately to the client. Replicas can thus become temporarily inconsistent with one another, but clients detect inconsistencies, help correct replicas converge on a single total ordering of requests, and only rely on responses that are consistent with this total order. This approach allows Zyzzyva to reduce replication overheads to near their theoretical minimal." ] }
1908.02675
2966014518
Consensus is one of the most fundamental distributed computing problems. In particular, it serves as a building block in many replication based fault-tolerant systems and in particular in multiple recent blockchain solutions. Depending on its exact variant and other environmental assumptions, solving consensus requires multiple communication rounds. Yet, there are known optimistic protocols that guarantee termination in a single communication round under favorable conditions. In this paper we present a generic optimizer that can turn any consensus protocol into an optimized protocol that terminates in a single communication round whenever all nodes start with the same predetermined value and no Byzantine failures occur (although node crashes are allowed). This is regardless of the network timing assumptions and additional oracle capabilities assumed by the base consensus protocol being optimized. In the case of benign failures, our optimizer works whenever the number of faulty nodes @math . For Byzantine behavior, our optimizer's resiliency depends on the validity variant sought. In the case of classical validity, it can accommodate @math Byzantine failures. With the more recent external validity function assumption, it works whenever @math . Either way, our optimizer only relies on oral messages, thereby imposing very light-weight crypto requirements.
The condition-based approach for solving consensus identifies sets of input values that enable solving consensus fast @cite_7 . It does so by treating the set of input values held by all processes as an input vector to the problem. Specifically, the work in @cite_11 showed that when the possible input vectors correspond to error-correcting codes, consensus is solvable in a single communication round regardless of synchrony assumptions.
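As a worked illustration of the crashes-as-erasures correspondence described in @cite_7 : if the condition's legal input vectors form a code with minimum Hamming distance f + 1, then a vector observed with at most f missing entries is consistent with at most one legal vector, so it can be recovered in one round. The Python sketch below uses a small hand-picked condition chosen purely for illustration.

    # Toy illustration of conditions as error-correcting codes (crashes = erasures).
    # If the condition's vectors have minimum Hamming distance f + 1, then a vector
    # with at most f erased entries (None) matches at most one legal vector.

    def hamming(u, v):
        return sum(a != b for a, b in zip(u, v))

    def decode(observed, condition):
        """Return the unique legal input vector consistent with the visible entries."""
        matches = [c for c in condition
                   if all(o is None or o == x for o, x in zip(observed, c))]
        return matches[0] if len(matches) == 1 else None    # None: cannot decide yet

    # Hand-picked condition over 5 processes with minimum distance 3 (tolerates f = 2).
    condition = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)]
    assert min(hamming(u, v) for u in condition for v in condition if u != v) == 3
    print(decode((0, None, None, 1, 1), condition))          # -> (0, 0, 1, 1, 1)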
{ "cite_N": [ "@cite_7", "@cite_11" ], "mid": [ "2139686511", "1760770890" ], "abstract": [ "The condition-based approach identifies sets of input vectors, called conditions, for which it is possible to design an asynchronous protocol solving a distributed problem despite process crashes. This paper establishes a direct correlation between distributed agreement problems and error-correcting codes. In particular, crash failures in distributed agreement problems correspond to erasure failures in error-correcting codes and Byzantine and value domain faults correspond to corruption errors. This correlation is exemplified by concentrating on two well-known agreement problems, namely, consensus and interactive consistency, in the context of the condition-based approach. Specifically, the paper presents the following results: first, it shows that the conditions that allow interactive consistency to be solved despite fc crashes and fc value domain faults correspond exactly to the set of error-correcting codes capable of recovering from fc erasures and fc corruptions. Second, the paper proves that consensus can be solved despite fc crash failures if the condition corresponds to a code whose Hamming distance is fc + 1 and Byzantine consensus can be solved despite fb Byzantine faults if the Hamming distance of the code is 2 fb + 1. Finally, the paper uses the above relations to establish several results in distributed agreement that are derived from known results in error-correcting codes and vice versa.", "This work addresses Byzantine vector consensus, wherein the input at each process is a d-dimensional vector of reals, and each process is required to decide on a decision vector that is in the convex hull of the input vectors at the fault-free processes [9,12]. The input vector at each process may also be viewed as a point in the d-dimensional Euclidean space R d , where di¾?>i¾?0 is a finite integer. Recent work [9,12] has addressed Byzantine vector consensus, and presented algorithms with optimal fault tolerance in complete graphs. This paper considers Byzantine vector consensus in incomplete graphs using a restricted class of iterative algorithms that maintain only a small amount of memory across iterations. For such algorithms, we prove a necessary condition, and a sufficient condition, for the graphs to be able to solve the vector consensus problem iteratively. We present an iterative Byzantine vector consensus algorithm, and prove it correct under the sufficient condition. The necessary condition presented in this paper for vector consensus does not match with the sufficient condition for di¾?>i¾?1; thus, a weaker condition may potentially suffice for Byzantine vector consensus." ] }
1908.02484
2965761271
Fitting model parameters to a set of noisy data points is a common problem in computer vision. In this work, we fit the 6D camera pose to a set of noisy correspondences between the 2D input image and a known 3D environment. We estimate these correspondences from the image using a neural network. Since the correspondences often contain outliers, we utilize a robust estimator such as Random Sample Consensus (RANSAC) or Differentiable RANSAC (DSAC) to fit the pose parameters. When the problem domain, e.g. the space of all 2D-3D correspondences, is large or ambiguous, a single network does not cover the domain well. Mixture of Experts (MoE) is a popular strategy to divide a problem domain among an ensemble of specialized networks, so called experts, where a gating network decides which expert is responsible for a given input. In this work, we introduce Expert Sample Consensus (ESAC), which integrates DSAC in a MoE. Our main technical contribution is an efficient method to train ESAC jointly and end-to-end. We demonstrate experimentally that ESAC handles two real-world problems better than competing methods, i.e. scalability and ambiguity. We apply ESAC to fitting simple geometric models to synthetic images, and to camera re-localization for difficult, real datasets.
In contrast, Mixture of Experts (MoE) @cite_40 employs a divide-and-conquer strategy in which each base-learner, called an expert, specializes in one part of the problem domain. An additional gating network assesses the relevancy of each expert for a given input and predicts an associated weight. The ensemble prediction is a weighted average of the experts' outputs. MoE has been trained by minimizing the expected training loss @cite_40 , by maximizing the likelihood under a Gaussian mixture model interpretation @cite_40 , or by using the expectation-maximization (EM) algorithm @cite_41 .
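A minimal numpy sketch of the MoE prediction rule described above, where a gating network scores the experts and the ensemble output is the gating-weighted average of the expert outputs; the linear experts and gate used here are placeholders rather than the architecture of any cited work.

    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    # Placeholder experts and gate: each expert is a small linear regressor and the
    # gate is a linear scorer followed by a softmax over the experts.
    n_experts, d_in, d_out = 3, 4, 2
    experts = [rng.normal(size=(d_out, d_in)) for _ in range(n_experts)]
    gate_w = rng.normal(size=(n_experts, d_in))

    def moe_predict(x):
        weights = softmax(gate_w @ x)                 # relevancy weight per expert
        outputs = np.stack([W @ x for W in experts])  # (n_experts, d_out)
        return weights @ outputs                      # gating-weighted average

    x = rng.normal(size=d_in)
    print(moe_predict(x))                             # ensemble prediction, shape (d_out,)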
{ "cite_N": [ "@cite_41", "@cite_40" ], "mid": [ "2963280294", "2787223504" ], "abstract": [ "Mixtures of Experts combine the outputs of several “expert” networks, each of which specializes in a different part of the input space. This is achieved by training a “gating” network that maps each input to a distribution over the experts. Such models show promise for building larger networks that are still cheap to compute at test time, and more parallelizable at training time. In this this work, we extend the Mixture of Experts to a stacked model, the Deep Mixture of Experts, with multiple sets of gating and experts. This exponentially increases the number of effective experts by associating each input with a combination of experts at each layer, yet maintains a modest model size. On a randomly translated version of the MNIST dataset, we find that the Deep Mixture of Experts automatically learns to develop location-dependent (“where”) experts at the first layer, and class-specific (“what”) experts at the second layer. In addition, we see that the different combinations are in use when the model is applied to a dataset of speech monophones. These demonstrate effective use of all expert combinations.", "We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators." ] }
1908.02484
2965761271
Fitting model parameters to a set of noisy data points is a common problem in computer vision. In this work, we fit the 6D camera pose to a set of noisy correspondences between the 2D input image and a known 3D environment. We estimate these correspondences from the image using a neural network. Since the correspondences often contain outliers, we utilize a robust estimator such as Random Sample Consensus (RANSAC) or Differentiable RANSAC (DSAC) to fit the pose parameters. When the problem domain, e.g. the space of all 2D-3D correspondences, is large or ambiguous, a single network does not cover the domain well. Mixture of Experts (MoE) is a popular strategy to divide a problem domain among an ensemble of specialized networks, so called experts, where a gating network decides which expert is responsible for a given input. In this work, we introduce Expert Sample Consensus (ESAC), which integrates DSAC in a MoE. Our main technical contribution is an efficient method to train ESAC jointly and end-to-end. We demonstrate experimentally that ESAC handles two real-world problems better than competing methods, i.e. scalability and ambiguity. We apply ESAC to fitting simple geometric models to synthetic images, and to camera re-localization for difficult, real datasets.
Scene coordinate regression methods @cite_58 @cite_10 @cite_2 @cite_43 @cite_28 @cite_44 @cite_4 @cite_29 @cite_13 @cite_1 also estimate 2D-3D correspondences between image and environment, but do so densely for each pixel of the input image. This circumvents the need for a feature detector and thereby the aforementioned drawbacks of feature-based methods. Brachmann et al. @cite_33 combine a neural network for scene coordinate regression with a differentiable RANSAC for an end-to-end trainable camera re-localization pipeline. Brachmann and Rother @cite_0 improve the pipeline's initialization and differentiable pose optimization to achieve state-of-the-art results for indoor camera re-localization from single RGB images. We build on and extend @cite_33 @cite_1 by combining them with our ESAC framework. Thereby, we are able to address two real-world problems: scalability and ambiguity in camera re-localization. Some scene coordinate regression methods use an ensemble of base learners, namely random forests @cite_58 @cite_2 @cite_43 @cite_28 @cite_44 @cite_29 @cite_13 . Guzman-Rivera et al. @cite_45 train the random forest in a boosting-like manner to diversify its predictions. Massiceti et al. @cite_59 map an ensemble of decision trees to an ensemble of neural networks. However, in none of these methods do the base-learners specialize in parts of the problem domain.
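For reference, the hypothesize-and-verify loop underlying (differentiable) RANSAC in these pipelines looks roughly as follows; the sketch fits a toy 2D line so that it stays self-contained, whereas in camera re-localization the minimal solver would instead be a PnP solver applied to sampled 2D-3D scene-coordinate correspondences, and DSAC/ESAC replace the hard hypothesis selection with a differentiable one. All names and constants below are illustrative.

    import numpy as np

    def fit_line(p, q):
        """Minimal solver: implicit line (a, b, c) through two points, unit normal."""
        a, b = q[1] - p[1], p[0] - q[0]
        c = -(a * p[0] + b * p[1])
        return np.array([a, b, c]) / np.hypot(a, b)

    def ransac_line(points, iters=200, thresh=0.05, seed=0):
        rng = np.random.default_rng(seed)
        best, best_inliers = None, -1
        for _ in range(iters):
            i, j = rng.choice(len(points), size=2, replace=False)
            line = fit_line(points[i], points[j])              # hypothesize
            dist = np.abs(points @ line[:2] + line[2])         # verify
            inliers = int((dist < thresh).sum())
            if inliers > best_inliers:
                best, best_inliers = line, inliers
        return best, best_inliers

    # Noisy points on the line y = x plus some gross outliers.
    rng = np.random.default_rng(1)
    x = rng.uniform(0, 1, 80)
    pts = np.vstack([np.column_stack([x, x + rng.normal(0, 0.01, 80)]),
                     rng.uniform(0, 1, (20, 2))])
    print(ransac_line(pts))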
{ "cite_N": [ "@cite_13", "@cite_4", "@cite_33", "@cite_28", "@cite_29", "@cite_1", "@cite_44", "@cite_43", "@cite_0", "@cite_45", "@cite_59", "@cite_2", "@cite_58", "@cite_10" ], "mid": [ "2522883048", "2963053725", "2739492061", "2964175348", "2803564000", "2081605477", "2786350692", "1989476314", "2604236302", "2472269674", "2592183487", "2055686029", "2963523575", "2963024893" ], "abstract": [ "This work addresses the task of camera localization in a known 3D scene given a single input RGB image. State-of-the-art approaches accomplish this in two steps: firstly, regressing for every pixel in the image its 3D scene coordinate and subsequently, using these coordinates to estimate the final 6D camera pose via RANSAC. To solve the first step, Random Forests (RFs) are typically used. On the other hand, Neural Networks (NNs) reign in many dense regression tasks, but are not test-time efficient. We ask the question: which of the two is best for camera localization? To address this, we make two method contributions: (1) a test-time efficient NN architecture which we term a ForestNet that is derived and initialized from a RF, and (2) a new fully-differentiable robust averaging technique for regression ensembles which can be trained end-to-end with a NN. Our experimental findings show that for scene coordinate regression, traditional NN architectures are superior to test-time efficient RFs and ForestNets, however, this does not translate to final 6D camera pose accuracy where RFs and ForestNets perform slightly better. To summarize, our best method, a ForestNet with a robust average, which has an equivalent fast and lightweight RF, improves over the state-of-the-art for camera localization on the 7-Scenes dataset. While this work focuses on scene coordinate regression for camera localization, our innovations may also be applied to other continuous regression tasks.", "This work addresses the task of camera localization in a known 3D scene given a single input RGB image. State-of-the-art approaches accomplish this in two steps: firstly, regressing for every pixel in the image its 3D scene coordinate and subsequently, using these coordinates to estimate the final 6D camera pose via RANSAC. To solve the first step. Random Forests (RFs) are typically used. On the other hand. Neural Networks (NNs) reign in many dense regression tasks, but are not test-time efficient. We ask the question: which of the two is best for camera localization? To address this, we make two method contributions: (1) a test-time efficient NN architecture which we term a ForestNet that is derived and initialized from a RF, and (2) a new fully-differentiable robust averaging technique for regression ensembles which can be trained end-to-end with a NN. Our experimental findings show that for scene coordinate regression, traditional NN architectures are superior to test-time efficient RFs and ForestNets, however, this does not translate to final 6D camera pose accuracy where RFs and ForestNets perform slightly better. To summarize, our best method, a ForestNet with a robust average, which has an equivalent fast and lightweight RF, improves over the state-of-the-art for camera localization on the 7-Scenes dataset [1]. While this work focuses on scene coordinate regression for camera localization, our innovations may also be applied to other continuous regression tasks.", "We propose a new deep learning based approach for camera relocalization. 
Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods.", "We propose a new deep learning based approach for camera relocalization. Our approach localizes a given query image by using a convolutional neural network (CNN) for first retrieving similar database images and then predicting the relative pose between the query and the database images, whose poses are known. The camera location for the query image is obtained via triangulation from two relative translation estimates using a RANSAC based approach. Each relative pose estimate provides a hypothesis for the camera orientation and they are fused in a second RANSAC scheme. The neural network is trained for relative pose estimation in an end-to-end manner using training image pairs. In contrast to previous work, our approach does not require scene-specific training of the network, which improves scalability, and it can also be applied to scenes which are not available during the training of the network. As another main contribution, we release a challenging indoor localisation dataset covering 5 different scenes registered to a common coordinate frame. We evaluate our approach using both our own dataset and the standard 7 Scenes benchmark. The results show that the proposed approach generalizes well to previously unseen scenes and compares favourably to other recent CNN-based methods.", "Scene coordinate regression has become an essential part of current camera re-localization methods. Different versions, such as regression forests and deep learning methods, have been successfully applied to estimate the corresponding camera pose given a single input image. In this work, we propose to regress the scene coordinates pixel-wise for a given RGB image by using deep learning. Compared to the recent methods, which usually employ RANSAC to obtain a robust pose estimate from the established point correspondences, we propose to regress confidences of these correspondences, which allows us to immediately discard erroneous predictions and improve the initial pose estimates. Finally, the resulting confidences can be used to score initial pose hypothesis and aid in pose refinement, offering a generalized solution to solve this task.", "We address the problem of estimating the pose of a cam- era relative to a known 3D scene from a single RGB-D frame. 
We formulate this problem as inversion of the generative rendering procedure, i.e., we want to find the camera pose corresponding to a rendering of the 3D scene model that is most similar with the observed input. This is a non-convex optimization problem with many local optima. We propose a hybrid discriminative-generative learning architecture that consists of: (i) a set of M predictors which generate M camera pose hypotheses, and (ii) a 'selector' or 'aggregator' that infers the best pose from the multiple pose hypotheses based on a similarity function. We are interested in predictors that not only produce good hypotheses but also hypotheses that are different from each other. Thus, we propose and study methods for learning 'marginally relevant' predictors, and compare their performance when used with different selection procedures. We evaluate our method on a recently released 3D reconstruction dataset with challenging camera poses, and scene variability. Experiments show that our method learns to make multiple predictions that are marginally relevant and can effectively select an accurate prediction. Furthermore, our method outperforms the state-of-the-art discriminative approach for camera relocalization.", "Image-based localization, or camera relocalization, is a fundamental problem in computer vision and robotics, and it refers to estimating camera pose from an image. Recent state-of-the-art approaches use learning based methods, such as Random Forests (RFs) and Convolutional Neural Networks (CNNs), to regress for each pixel in the image its corresponding position in the scene's world coordinate frame, and solve the final pose via a RANSAC-based optimization scheme using the predicted correspondences. In this paper, instead of in a patch-based manner, we propose to perform the scene coordinate regression in a full-frame manner to make the computation efficient at test time and, more importantly, to add more global context to the regression process to improve the robustness. To do so, we adopt a fully convolutional encoder-decoder neural network architecture which accepts a whole image as input and produces scene coordinate predictions for all pixels in the image. However, using more global context is prone to overfitting. To alleviate this issue, we propose to use data augmentation to generate more data for training. In addition to the data augmentation in 2D image space, we also augment the data in 3D space. We evaluate our approach on the publicly available 7-Scenes dataset, and experiments show that it has better scene coordinate predictions and achieves state-of-the-art results in localization with improved robustness on the hardest frames (e.g., frames with repeated structures).", "We address the problem of inferring the pose of an RGB-D camera relative to a known 3D scene, given only a single acquired image. Our approach employs a regression forest that is capable of inferring an estimate of each pixel's correspondence to 3D points in the scene's world coordinate frame. The forest uses only simple depth and RGB pixel comparison features, and does not require the computation of feature descriptors. The forest is trained to be capable of predicting correspondences at any pixel, so no interest point detectors are required. The camera pose is inferred using a robust optimization scheme. This starts with an initial set of hypothesized camera poses, constructed by applying the forest at a small fraction of image pixels. 
Preemptive RANSAC then iterates sampling more pixels at which to evaluate the forest, counting inliers, and refining the hypothesized poses. We evaluate on several varied scenes captured with an RGB-D camera and observe that the proposed technique achieves highly accurate relocalization and substantially out-performs two state of the art baselines.", "We introduce a novel method for 3D object detection and pose estimation from color images only. We first use segmentation to detect the objects of interest in 2D even in presence of partial occlusions and cluttered background. By contrast with recent patch-based methods, we rely on a “holistic” approach: We apply to the detected objects a Convolutional Neural Network (CNN) trained to predict their 3D poses in the form of 2D projections of the corners of their 3D bounding boxes. This, however, is not sufficient for handling objects from the recent T-LESS dataset: These objects exhibit an axis of rotational symmetry, and the similarity of two images of such an object under two different poses makes training the CNN challenging. We solve this problem by restricting the range of poses used for training, and by introducing a classifier to identify the range of a pose at run-time before estimating it. We also use an optional additional step that refines the predicted poses. We improve the state-of-the-art on the LINEMOD dataset from 73.7 [2] to 89.3 of correctly registered RGB frames. We are also the first to report results on the Occlusion dataset [1 ] using color images only. We obtain 54 of frames passing the Pose 6D criterion on average on several sequences of the T-LESS dataset, compared to the 67 of the state-of-the-art [10] on the same sequences which uses both color and depth. The full approach is also scalable, as a single network can be trained for multiple objects simultaneously.", "In recent years, the task of estimating the 6D pose of object instances and complete scenes, i.e. camera localization, from a single input image has received considerable attention. Consumer RGB-D cameras have made this feasible, even for difficult, texture-less objects and scenes. In this work, we show that a single RGB image is sufficient to achieve visually convincing results. Our key concept is to model and exploit the uncertainty of the system at all stages of the processing pipeline. The uncertainty comes in the form of continuous distributions over 3D object coordinates and discrete distributions over object labels. We give three technical contributions. Firstly, we develop a regularized, auto-context regression framework which iteratively reduces uncertainty in object coordinate and object label predictions. Secondly, we introduce an efficient way to marginalize object coordinate distributions over depth. This is necessary to deal with missing depth information. Thirdly, we utilize the distributions over object labels to detect multiple objects simultaneously with a fixed budget of RANSAC hypotheses. We tested our system for object pose estimation and camera localization on commonly used data sets. We see a major improvement over competing systems.", "This paper presents an indoor relocalization system using a dual-stream convolutional neural network (CNN) with both color images and depth images as the network inputs. 
Aiming at the pose regression problem, a deep neural network architecture for RGB-D images is introduced, a training method by stages for the dual-stream CNN is presented, different depth image encoding methods are discussed, and a novel encoding method is proposed. By introducing the range information into the network through a dual-stream architecture, we not only improved the relocalization accuracy by about 20 compared with the state-of-the-art deep learning method for pose regression, but also greatly enhanced the system robustness in challenging scenes such as large-scale, dynamic, fast movement, and night-time environments. To the best of our knowledge, this is the first work to solve the indoor relocalization problems based on deep CNNs with RGB-D camera. The method is first evaluated on the Microsoft 7-Scenes data set to show its advantage in accuracy compared with other CNNs. Large-scale indoor relocalization is further presented using our method. The experimental results show that 0.3 m in position and 4° in orientation accuracy could be obtained. Finally, this method is evaluated on challenging indoor data sets collected from motion capture system. The results show that the relocalization performance is hardly affected by dynamic objects, motion blur, or night-time environments. Note to Practitioners —This paper was motivated by the limitations of the existing indoor relocalization technology that is significant for mobile robot navigation. Using this technology, robots can infer where they are in a previously visited place. Previous visual localization methods can hardly be put into wide application for the reason that they have strict requirements for the environments. When faced with challenging scenes such as large-scale environments, dynamic objects, motion blur caused by fast movement, night-time environments, or other appearance changed scenes, most existing methods tend to fail. This paper introduces deep learning into the indoor relocalization problem and uses dual-stream CNN (depth stream and color stream) to realize 6-DOF pose regression in an end-to-end manner. The localization error is about 0.3 m and 4° in a large-scale indoor environments. And what is more important, the proposed system does not lose efficiency in some challenging scenes. The proposed encoding method of depth images can also be adopted in other deep neural networks with RGB-D cameras as the sensor.", "We propose a probabilistic formulation of joint silhouette extraction and 3D reconstruction given a series of calibrated 2D images. Instead of segmenting each image separately in order to construct a 3D surface consistent with the estimated silhouettes, we compute the most probable 3D shape that gives rise to the observed color information. The probabilistic framework, based on Bayesian inference, enables robust 3D reconstruction by optimally taking into account the contribution of all views. We solve the arising maximum a posteriori shape inference in a globally optimal manner by convex relaxation techniques in a spatially continuous representation. For an interactively provided user input in the form of scribbles specifying foreground and background regions, we build corresponding color distributions as multivariate Gaussians and find a volume occupancy that best fits to this data in a variational sense. 
Compared to classical methods for silhouette-based multiview reconstruction, the proposed approach does not depend on initialization and enjoys significant resilience to violations of the model assumptions due to background clutter, specular reflections, and camera sensor perturbations. In experiments on several real-world data sets, we show that exploiting a silhouette coherency criterion in a multiview setting allows for dramatic improvements of silhouette quality over independent 2D segmentations without any significant increase of computational efforts. This results in more accurate visual hull estimation, needed by a multitude of image-based modeling approaches. We made use of recent advances in parallel computing with a GPU implementation of the proposed method generating reconstructions on volume grids of more than 20 million voxels in up to 4.41 seconds.", "Camera relocalization plays a vital role in many robotics and computer vision tasks, such as global localization, recovery from tracking failure, and loop closure detection. Recent random forests based methods directly predict 3D world locations for 2D image locations to guide the camera pose optimization. During training, each tree greedily splits the samples to minimize the spatial variance. However, these greedy splits often produce uneven sub-trees in training or incorrect 2D-3D correspondences in testing. To address these problems, we propose a sample-balanced objective to encourage equal numbers of samples in the left and right sub-trees, and a novel backtracking scheme to remedy the incorrect 2D-3D correspondence predictions. Furthermore, we extend the regression forests based methods to use local features in both training and testing stages for outdoor RGB-only applications. Experimental results on publicly available indoor and outdoor datasets demonstrate the efficacy of our approach, which shows superior or on-par accuracy with several state-of-the-art methods.", "We present a robust and real-time monocular six degree of freedom visual relocalization system. We use a Bayesian convolutional neural network to regress the 6-DOF camera pose from a single RGB image. It is trained in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking under 6ms to compute. It obtains approximately 2m and 6° accuracy for very large scale outdoor scenes and 0.5m and 10° accuracy indoors. Using a Bayesian convolutional neural network implementation we obtain an estimate of the model's relocalization uncertainty and improve state of the art localization accuracy on a large scale outdoor dataset. We leverage the uncertainty measure to estimate metric relocalization error and to detect the presence or absence of the scene in the input image. We show that the model's uncertainty is caused by images being dissimilar to the training dataset in either pose or appearance." ] }
1908.02484
2965761271
Fitting model parameters to a set of noisy data points is a common problem in computer vision. In this work, we fit the 6D camera pose to a set of noisy correspondences between the 2D input image and a known 3D environment. We estimate these correspondences from the image using a neural network. Since the correspondences often contain outliers, we utilize a robust estimator such as Random Sample Consensus (RANSAC) or Differentiable RANSAC (DSAC) to fit the pose parameters. When the problem domain, e.g. the space of all 2D-3D correspondences, is large or ambiguous, a single network does not cover the domain well. Mixture of Experts (MoE) is a popular strategy to divide a problem domain among an ensemble of specialized networks, so called experts, where a gating network decides which expert is responsible for a given input. In this work, we introduce Expert Sample Consensus (ESAC), which integrates DSAC in a MoE. Our main technical contribution is an efficient method to train ESAC jointly and end-to-end. We demonstrate experimentally that ESAC handles two real-world problems better than competing methods, i.e. scalability and ambiguity. We apply ESAC to fitting simple geometric models to synthetic images, and to camera re-localization for difficult, real datasets.
In @cite_65 , Brachmann et al. train a joint classification-regression forest for camera re-localization. The forest classifies which part of the environment an input belongs to, and regresses relative scene coordinates for that part. More recently, image retrieval and relative pose regression have been combined in one system with good accuracy in @cite_51 . Both works, @cite_65 and @cite_51 , bear some resemblance to our strategy but utilize one large model without the benefit of efficient, conditional computation. Also, their models cannot be trained in an end-to-end fashion.
{ "cite_N": [ "@cite_51", "@cite_65" ], "mid": [ "2522883048", "2963053725" ], "abstract": [ "This work addresses the task of camera localization in a known 3D scene given a single input RGB image. State-of-the-art approaches accomplish this in two steps: firstly, regressing for every pixel in the image its 3D scene coordinate and subsequently, using these coordinates to estimate the final 6D camera pose via RANSAC. To solve the first step, Random Forests (RFs) are typically used. On the other hand, Neural Networks (NNs) reign in many dense regression tasks, but are not test-time efficient. We ask the question: which of the two is best for camera localization? To address this, we make two method contributions: (1) a test-time efficient NN architecture which we term a ForestNet that is derived and initialized from a RF, and (2) a new fully-differentiable robust averaging technique for regression ensembles which can be trained end-to-end with a NN. Our experimental findings show that for scene coordinate regression, traditional NN architectures are superior to test-time efficient RFs and ForestNets, however, this does not translate to final 6D camera pose accuracy where RFs and ForestNets perform slightly better. To summarize, our best method, a ForestNet with a robust average, which has an equivalent fast and lightweight RF, improves over the state-of-the-art for camera localization on the 7-Scenes dataset. While this work focuses on scene coordinate regression for camera localization, our innovations may also be applied to other continuous regression tasks.", "This work addresses the task of camera localization in a known 3D scene given a single input RGB image. State-of-the-art approaches accomplish this in two steps: firstly, regressing for every pixel in the image its 3D scene coordinate and subsequently, using these coordinates to estimate the final 6D camera pose via RANSAC. To solve the first step. Random Forests (RFs) are typically used. On the other hand. Neural Networks (NNs) reign in many dense regression tasks, but are not test-time efficient. We ask the question: which of the two is best for camera localization? To address this, we make two method contributions: (1) a test-time efficient NN architecture which we term a ForestNet that is derived and initialized from a RF, and (2) a new fully-differentiable robust averaging technique for regression ensembles which can be trained end-to-end with a NN. Our experimental findings show that for scene coordinate regression, traditional NN architectures are superior to test-time efficient RFs and ForestNets, however, this does not translate to final 6D camera pose accuracy where RFs and ForestNets perform slightly better. To summarize, our best method, a ForestNet with a robust average, which has an equivalent fast and lightweight RF, improves over the state-of-the-art for camera localization on the 7-Scenes dataset [1]. While this work focuses on scene coordinate regression for camera localization, our innovations may also be applied to other continuous regression tasks." ] }
1908.02402
2965998974
This paper proposes a novel end-to-end architecture for task-oriented dialogue systems. It is based on a simple and practical yet very effective sequence-to-sequence approach, where language understanding and state tracking tasks are modeled jointly with a structured copy-augmented sequential decoder and a multi-label decoder for each slot. The policy engine and language generation tasks are modeled jointly following that. The copy-augmented sequential decoder deals with new or unknown values in the conversation, while the multi-label decoder combined with the sequential decoder ensures the explicit assignment of values to slots. On the generation part, slot binary classifiers are used to improve performance. This architecture is scalable to real-world scenarios and is shown through an empirical evaluation to achieve state-of-the-art performance on both the Cambridge Restaurant dataset and the Stanford in-car assistant dataset. The code is available at this https URL.
Our work is related to end-to-end task-oriented dialogue systems in general [among others: BingNAACL18, Jason17, Lowe18, msr_challenge, BingGoogle17, Pawel18, bordes2016learning, HoriWHWHRHKJZA16, wen2016network, serban2016building] and to those that extend the Seq2Seq @cite_8 architecture in particular. Belief tracking, which is necessary to form KB queries, is not explicitly performed in the latter works. To compensate, some of these works adopt a copy mechanism that allows copying information retrieved from the KB into the generated response, while others adopt Memory Networks to memorize the retrieved KB entities and the words appearing in the dialogue history. These models scale linearly with the size of the KB and need to be retrained at each update of the KB. Both issues make these approaches less practical in real-world applications.
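To make the copy mechanism concrete, the following numpy sketch shows the common pointer-generator style formulation, in which the output distribution mixes a vocabulary distribution with attention mass scattered onto source/KB tokens; it is a generic illustration and not the exact parameterization of any of the cited systems.

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def copy_augmented_distribution(vocab_logits, attn_logits, src_token_ids,
                                    vocab_size, p_gen):
        """P(w) = p_gen * P_vocab(w) + (1 - p_gen) * attention mass copied onto w."""
        p_vocab = softmax(vocab_logits)               # over the fixed vocabulary
        attn = softmax(attn_logits)                   # over the source / KB tokens
        p_copy = np.zeros(vocab_size)
        np.add.at(p_copy, src_token_ids, attn)        # scatter-add the copy mass
        return p_gen * p_vocab + (1.0 - p_gen) * p_copy

    # Toy example: 6-word vocabulary, a 3-token KB result, generation weight 0.7.
    dist = copy_augmented_distribution(
        vocab_logits=np.array([0.1, 0.2, 0.0, 0.0, 0.3, 0.0]),
        attn_logits=np.array([2.0, 0.5, 0.1]),
        src_token_ids=np.array([4, 2, 2]),            # vocabulary ids of the KB tokens
        vocab_size=6, p_gen=0.7)
    print(dist, dist.sum())                           # a valid distribution (sums to 1)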
{ "cite_N": [ "@cite_8" ], "mid": [ "2616122292" ], "abstract": [ "Neural task-oriented dialogue systems often struggle to smoothly interface with a knowledge base. In this work, we seek to address this problem by proposing a new neural dialogue agent that is able to effectively sustain grounded, multi-domain discourse through a novel key-value retrieval mechanism. The model is end-to-end differentiable and does not need to explicitly model dialogue state or belief trackers. We also release a new dataset of 3,031 dialogues that are grounded through underlying knowledge bases and span three distinct tasks in the in-car personal assistant space: calendar scheduling, weather information retrieval, and point-of-interest navigation. Our architecture is simultaneously trained on data from all domains and significantly outperforms a competitive rule-based system and other existing neural dialogue architectures on the provided domains according to both automatic and human evaluation metrics." ] }
1908.02239
2964358797
A surge in artificial intelligence and autonomous technologies has increased the demand toward enhanced edge-processing capabilities. Computational complexity and size of state-of-the-art Deep Neural Networks (DNNs) are rising exponentially with diverse network models and larger datasets. This growth limits the performance scaling and energy-efficiency of both distributed and embedded inference platforms. Embedded designs at the edge are constrained by energy and speed limitations of available processor substrates and the processor-to-memory communication required to fetch the model coefficients. While many hardware accelerator and network deployment frameworks have been in development, a framework is needed to allow the variety of existing architectures, and those in development, to be expressed in critical parts of the flow that perform various optimization steps. Moreover, premature architecture-blind network selection and optimization diminish the effectiveness of schedule optimizations and hardware-specific mappings. In this paper, we address these issues by creating a cross-layer software-hardware design framework that encompasses network training and model compression that is aware of and tuned to the underlying hardware architecture. This approach leverages the available degrees of DNN structure and sparsity to create a converged network that can be partitioned and efficiently scheduled on the target hardware platform, minimizing data movement, and improving the overall throughput and energy. To further streamline the design, we leverage the high-level, flexible SoC generator platform based on the RISC-V ROCC framework. This integration allows seamless extensions of the RISC-V instruction set and Chisel-based rapid generator design. Utilizing this approach, we implemented a silicon prototype in a 16 nm TSMC process node achieving record processing efficiency of up to 18 TOPS/W.
The concept of pruning neural networks and exploiting the resulting sparsity has been explored lately, either on general-purpose processors @cite_29 @cite_12 @cite_48 @cite_49 or on dedicated accelerators. Both static pruning, in which the layer weights are compressed, and dynamic pruning, with zero-detection of the input activation values, have been explored. In both approaches, the unstructured sparse matrices resulting from weight pruning limit the speedup and energy savings achievable from the compression technique, due to the random memory accesses they require. Although Scalpel @cite_48 takes into account the underlying hardware platform, it only achieves on average a 1.25x speedup on a GPU with the cuSPARSE library, while structured pruning achieves 4x on the same platform @cite_34 @cite_39 . On the other hand, considering customized ASIC designs, EIE @cite_36 achieves a 5.12x speedup with respect to the GPU, whereas the APU design we present reaches up to an 80x speedup for a typical fully connected layer.
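The contrast between the two pruning styles can be illustrated with a short numpy sketch: unstructured magnitude pruning zeroes individual weights and yields an irregular sparse matrix, while structured pruning removes whole rows (output neurons) and keeps the surviving weights in a dense, hardware-friendly layout. The layer size and sparsity targets below are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(64, 128))            # weights of one fully connected layer

    # Unstructured (static) magnitude pruning: zero the smallest-magnitude weights.
    target_sparsity = 0.9
    thresh = np.quantile(np.abs(W), target_sparsity)
    W_unstructured = np.where(np.abs(W) >= thresh, W, 0.0)

    # Structured pruning: drop whole output rows with the smallest L2 norms, so the
    # surviving weights remain in a dense, regular layout.
    row_norms = np.linalg.norm(W, axis=1)
    keep = np.sort(np.argsort(row_norms)[W.shape[0] // 2:])   # keep the strongest half
    W_structured = W[keep]

    print("unstructured sparsity:", float((W_unstructured == 0).mean()))
    print("structured shape:", W_structured.shape)            # (32, 128), still dense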
{ "cite_N": [ "@cite_36", "@cite_48", "@cite_29", "@cite_39", "@cite_49", "@cite_34", "@cite_12" ], "mid": [ "2884180697", "2657126969", "2757143157", "2540366045", "2737121650", "2963363373", "2276892413" ], "abstract": [ "Weight pruning methods of deep neural networks have been demonstrated to achieve a good model pruning ratio without loss of accuracy, thereby alleviating the significant computation storage requirements of large-scale DNNs. Structured weight pruning methods have been proposed to overcome the limitation of irregular network structure and demonstrated actual GPU acceleration. However, the pruning ratio and GPU acceleration are limited when accuracy needs to be maintained. In this work, we overcome pruning ratio and GPU acceleration limitations by proposing a unified, systematic framework of structured weight pruning for DNNs, named ADAM-ADMM. It is a framework that can be used to induce different types of structured sparsity, such as filter-wise, channel-wise, and shape-wise sparsity, as well non-structured sparsity. The proposed framework incorporates stochastic gradient descent with ADMM, and can be understood as a dynamic regularization method in which the regularization target is analytically updated in each iteration. A significant improvement in structured weight pruning ratio is achieved without loss of accuracy, along with fast convergence rate. With a small sparsity degree of 33.3 on the convolutional layers, we achieve 1.64 accuracy enhancement for the AlexNet model. This is obtained by mitigation of overfitting. Without loss of accuracy on the AlexNet model, we achieve 2.58x and 3.65x average measured speedup on two GPUs, clearly outperforming the prior work. The average speedups reach 2.77x and 7.5x when allowing a moderate accuracy loss of 2 . In this case the model compression for convolutional layers is 13.2x, corresponding to 10.5x CPU speedup. Our experiments on ResNet model and on other datasets like UCF101 and CIFAR-10 demonstrate the consistently higher performance of our framework. Our models and codes are released at this https URL", "As the size of Deep Neural Networks (DNNs) continues to grow to increase accuracy and solve more complex problems, their energy footprint also scales. Weight pruning reduces DNN model size and the computation by removing redundant weights. However, we implemented weight pruning for several popular networks on a variety of hardware platforms and observed surprising results. For many networks, the network sparsity caused by weight pruning will actually hurt the overall performance despite large reductions in the model size and required multiply-accumulate operations. Also, encoding the sparse format of pruned networks incurs additional storage space overhead. To overcome these challenges, we propose Scalpel that customizes DNN pruning to the underlying hardware by matching the pruned network structure to the data-parallel hardware organization. Scalpel consists of two techniques: SIMD-aware weight pruning and node pruning. For low-parallelism hardware (e.g., microcontroller), SIMD-aware weight pruning maintains weights in aligned fixed-size groups to fully utilize the SIMD units. For high-parallelism hardware (e.g., GPU), node pruning removes redundant nodes, not redundant weights, thereby reducing computation without sacrificing the dense matrix format. For hardware with moderate parallelism (e.g., desktop CPU), SIMD-aware weight pruning and node pruning are synergistically applied together. 
Across the microcontroller, CPU and GPU, Scalpel achieves mean speedups of 3.54x, 2.61x, and 1.25x while reducing the model sizes by 88 , 82 , and 53 . In comparison, traditional weight pruning achieves mean speedups of 1.90x, 1.06x, 0.41x across the three platforms.", "Although deep Convolutional Neural Network (CNN) has shown better performance in various computer vision tasks, its application is restricted by a significant increase in storage and computation. Among CNN simplification techniques, parameter pruning is a promising approach which aims at reducing the number of weights of various layers without intensively reducing the original accuracy. In this paper, we propose a novel progressive parameter pruning method, named Structured Probabilistic Pruning (SPP), which effectively prunes weights of convolutional layers in a probabilistic manner. Specifically, unlike existing deterministic pruning approaches, where unimportant weights are permanently eliminated, SPP introduces a pruning probability for each weight, and pruning is guided by sampling from the pruning probabilities. A mechanism is designed to increase and decrease pruning probabilities based on importance criteria for the training process. Experiments show that, with 4x speedup, SPP can accelerate AlexNet with only 0.3 loss of top-5 accuracy and VGG-16 with 0.8 loss of top-5 accuracy in ImageNet classification. Moreover, SPP can be directly applied to accelerate multi-branch CNN networks, such as ResNet, without specific adaptations. Our 2x speedup ResNet-50 only suffers 0.8 loss of top-5 accuracy on ImageNet. We further prove the effectiveness of our method on transfer learning task on Flower-102 dataset with AlexNet.", "The learning capability of a neural network improves with increasing depth at higher computational costs. Wider layers with dense kernel connectivity patterns furhter increase this cost and may hinder real-time inference. We propose feature map and kernel level pruning for reducing the computational complexity of a deep convolutional neural network. Pruning feature maps reduces the width of a layer and hence does not need any sparse representation. Further, kernel pruning converts the dense connectivity pattern into a sparse one. Due to coarse nature, these pruning granularities can be exploited by GPUs and VLSI based implementations. We propose a simple and generic strategy to choose the least adversarial pruning masks for both granularities. The pruned networks are retrained which compensates the loss in accuracy. We obtain the best pruning ratios when we prune a network with both granularities. Experiments with the CIFAR-10 dataset show that more than 85 sparsity can be induced in the convolution layers with less than 1 increase in the missclassification rate of the baseline network.", "In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks.Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhance the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5x speed-up along with only 0.3 increase of error. 
More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4 , 1.0 accuracy loss under 2x speed-up respectively, which is significant. Code has been made publicly available.", "In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhance the compatibility with various architectures. Our pruned VGG-16 achieves the state-of-the-art results by 5× speed-up along with only 0.3 increase of error. More importantly, our method is able to accelerate modern networks like ResNet, Xception and suffers only 1.4 , 1.0 accuracy loss under 2× speedup respectively, which is significant.", "Real-time application of deep learning algorithms is often hindered by high computational complexity and frequent memory accesses. Network pruning is a promising technique to solve this problem. However, pruning usually results in irregular network connections that not only demand extra representation efforts but also do not fit well on parallel computation. We introduce structured sparsity at various scales for convolutional neural networks: feature map-wise, kernel-wise, and intra-kernel strided sparsity. This structured sparsity is very advantageous for direct computational resource savings on embedded computers, in parallel computing environments, and in hardware-based systems. To decide the importance of network connections and paths, the proposed method uses a particle filtering approach. The importance weight of each particle is assigned by assessing the misclassification rate with a corresponding connectivity pattern. The pruned network is retrained to compensate for the losses due to pruning. While implementing convolutions as matrix products, we particularly show that intra-kernel strided sparsity with a simple constraint can significantly reduce the size of the kernel and feature map tensors. The proposed work shows that when pruning granularities are applied in combination, we can prune the CIFAR-10 network by more than 70 with less than a 1 loss in accuracy." ] }
1908.01950
2966170251
The importance of wild video-based image set recognition is increasing monotonically. However, the contents of these collected videos are often complicated, and how to efficiently perform set modeling and feature extraction is a big challenge for set-based classification algorithms. In recent years, some proposed image set classification methods have made considerable advances by modeling the original image set with a covariance matrix, linear subspace, or Gaussian distribution. As a matter of fact, most of them adopt just a single geometric model to describe each given image set, which may lose some other useful information for classification. To tackle this problem, we propose a novel algorithm to model each image set from a multi-geometric perspective. Specifically, the covariance matrix, linear subspace, and Gaussian distribution are applied for set representation simultaneously. In order to fuse these multiple heterogeneous Riemannian manifold-valued features, well-equipped Riemannian kernel functions are first utilized to map them into high-dimensional Hilbert spaces. Then, a multi-kernel metric learning framework is devised to embed the learned hybrid kernels into a lower-dimensional common subspace for classification. We conduct experiments on four widely used datasets corresponding to four different classification tasks: video-based face recognition, set-based object categorization, video-based emotion recognition, and dynamic scene classification, to evaluate the classification performance of the proposed algorithm. Extensive experimental results justify its superiority over the state-of-the-art.
In image set classification, the covariance matrix, linear subspace and Gaussian distribution are three commonly used Riemannian manifold-valued descriptors for image set description. The advantages of the covariance matrix are its simplicity and its flexibility in capturing the variations within the set @cite_1 @cite_50 @cite_24 , while the strengths of the linear subspace stem both from its lower computational cost and from its ability to accommodate the effects of various intra-set variations @cite_37 @cite_19 . In comparison, the strength of the Gaussian distribution is that it can describe the set data variations by estimating their first-order and second-order statistics simultaneously @cite_44 @cite_3 . The increasing attention paid to image set classification based on these three descriptors can be attributed to three main factors, which are presented as follows.
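To make the three descriptors above concrete, the following Python sketch computes sample estimates of all three for a toy image set: the covariance matrix, an orthonormal basis of the leading principal subspace, and a single Gaussian (mean plus covariance). The data, feature dimension, and subspace dimension are illustrative assumptions, not values used in the paper.

```python
import numpy as np

def set_descriptors(X, subspace_dim=5, eps=1e-3):
    """Compute three set-level descriptors for an image set.

    X: (n_samples, d) matrix, one (vectorized) image feature per row.
    Returns the regularized covariance matrix (SPD), an orthonormal basis U of the
    leading `subspace_dim`-dimensional linear subspace, and a Gaussian (mean, covariance).
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / max(len(X) - 1, 1)
    cov += eps * np.eye(X.shape[1])            # regularization keeps the matrix strictly SPD
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    U = Vt[:subspace_dim].T                    # d x subspace_dim with orthonormal columns
    return cov, U, (mu, cov)

# Toy image set: 40 samples of 20-dimensional features (illustrative only).
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 20))
cov, U, gauss = set_descriptors(X)
print(cov.shape, U.shape, gauss[0].shape)
```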
{ "cite_N": [ "@cite_37", "@cite_1", "@cite_3", "@cite_24", "@cite_19", "@cite_44", "@cite_50" ], "mid": [ "2144093206", "2585165747", "2099629511", "2433217581", "2116022929", "2125447566", "2092535060" ], "abstract": [ "We propose a novel discriminative learning approach to image set classification by modeling the image set with its natural second-order statistic, i.e. covariance matrix. Since nonsingular covariance matrices, a.k.a. symmetric positive definite (SPD) matrices, lie on a Riemannian manifold, classical learning algorithms cannot be directly utilized to classify points on the manifold. By exploring an efficient metric for the SPD matrices, i.e., Log-Euclidean Distance (LED), we derive a kernel function that explicitly maps the covariance matrix from the Riemannian manifold to a Euclidean space. With this explicit mapping, any learning method devoted to vector space can be exploited in either its linear or kernel formulation. Linear Discriminant Analysis (LDA) and Partial Least Squares (PLS) are considered in this paper for their feasibility for our specific problem. We further investigate the conventional linear subspace based set modeling technique and cast it in a unified framework with our covariance matrix based modeling. The proposed method is evaluated on two tasks: face recognition and object categorization. Extensive experimental results show not only the superiority of our method over state-of-the-art ones in both accuracy and efficiency, but also its stability to two real challenges: noisy set data and varying set size.", "We describe a new region descriptor and apply it to two problems, object detection and texture classification. The covariance of d-features, e.g., the three-dimensional color vector, the norm of first and second derivatives of intensity with respect to x and y, etc., characterizes a region of interest. We describe a fast method for computation of covariances based on integral images. The idea presented here is more general than the image sums or histograms, which were already published before, and with a series of integral images the covariances are obtained by a few arithmetic operations. Covariance matrices do not lie on Euclidean space, therefore we use a distance metric involving generalized eigenvalues which also follows from the Lie group structure of positive definite matrices. Feature matching is a simple nearest neighbor search under the distance metric and performed extremely rapidly using the integral images. The performance of the covariance features is superior to other methods, as it is shown, and large rotations and illumination changes are also absorbed by the covariance matrix.", "We introduce the notion of subspace learning from image gradient orientations for appearance-based object recognition. As image data are typically noisy and noise is substantially different from Gaussian, traditional subspace learning from pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data population. We show that replacing pixel intensities with gradient orientations and the l2 norm with a cosine-based distance measure offers, to some extend, a remedy to this problem. Within this framework, which we coin Image Gradient Orientations (IGO) subspace learning, we first formulate and study the properties of Principal Component Analysis of image gradient orientations (IGO-PCA). We then show its connection to previously proposed robust PCA techniques both theoretically and experimentally. 
Finally, we derive a number of other popular subspace learning techniques, namely, Linear Discriminant Analysis (LDA), Locally Linear Embedding (LLE), and Laplacian Eigenmaps (LE). Experimental results show that our algorithms significantly outperform popular methods such as Gabor features and Local Binary Patterns and achieve state-of-the-art performance for difficult problems such as illumination and occlusion-robust face recognition. In addition to this, the proposed IGO-methods require the eigendecomposition of simple covariance matrices and are as computationally efficient as their corresponding l2 norm intensity-based counterparts. Matlab code for the methods presented in this paper can be found at http: ibug.doc.ic.ac.uk resources.", "Describing the color and textural information of a person image is one of the most crucial aspects of person re-identification. In this paper, we present a novel descriptor based on a hierarchical distribution of pixel features. A hierarchical covariance descriptor has been successfully applied for image classification. However, the mean information of pixel features, which is absent in covariance, tends to be major discriminative information of person images. To solve this problem, we describe a local region in an image via hierarchical Gaussian distribution in which both means and covariances are included in their parameters. More specifically, we model the region as a set of multiple Gaussian distributions in which each Gaussian represents the appearance of a local patch. The characteristics of the set of Gaussians are again described by another Gaussian distribution. In both steps, unlike the hierarchical covariance descriptor, the proposed descriptor can model both the mean and the covariance information of pixel features properly. The results of experiments conducted on five databases indicate that the proposed descriptor exhibits re-markably high performance which outperforms the state-of-the-art descriptors for person re-identification.", "We present a new algorithm to detect pedestrian in still images utilizing covariance matrices as object descriptors. Since the descriptors do not form a vector space, well known machine learning techniques are not well suited to learn the classifiers. The space of d-dimensional nonsingular covariance matrices can be represented as a connected Riemannian manifold. The main contribution of the paper is a novel approach for classifying points lying on a connected Riemannian manifold using the geometry of the space. The algorithm is tested on INRIA and DaimlerChrysler pedestrian datasets where superior detection rates are observed over the previous approaches.", "Abstract Avoiding the use of complicated pre-processing steps such as accurate face and body part segmentation or image normalization, this paper proposes a novel face person image representation which can properly handle background and illumination variations. Denoted as gBiCov, this representation relies on the combination of Biologically Inspired Features (BIF) and Covariance descriptors [1]. More precisely, gBiCov is obtained by computing and encoding the difference between BIF features at different scales. The distance between two persons can then be efficiently measured by computing the Euclidean distance of their signatures, avoiding some time consuming operations in Riemannian manifold required by the use of Covariance descriptors. In addition, the recently proposed KISSME framework [2] is adopted to learn a metric adapted to the representation. 
To show the effectiveness of gBiCov, experiments are conducted on three person re-identification tasks (VIPeR, i-LIDS and ETHZ) and one face verification task (LFW), on which competitive results are obtained. As an example, the matching rate at rank 1 on the VIPeR dataset is 31.11%, improving the best previously published result by more than 10%.", "In surveillance applications, head and body orientation of people is of primary importance for assessing many behavioral traits. Unfortunately, in this context people are often encoded by a few, noisy pixels so that their characterization is difficult. We face this issue, proposing a computational framework which is based on an expressive descriptor, the covariance of features. Covariances have been employed for pedestrian detection purposes, actually a binary classification problem on Riemannian manifolds. In this paper, we show how to extend to the multiclassification case, presenting a novel descriptor, named weighted array of covariances, especially suited for dealing with tiny image representations. The extension requires a novel differential geometry approach in which covariances are projected on a unique tangent space where standard machine learning techniques can be applied. In particular, we adopt the Campbell-Baker-Hausdorff expansion as a means to approximate on the tangent space the genuine (geodesic) distances on the manifold in a very efficient way. We test our methodology on multiple benchmark datasets, and also propose new testing sets, getting convincing results in all the cases." ] }
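The first abstract in the block above maps covariance (SPD) descriptors into a Euclidean space via the Log-Euclidean Distance (LED). A minimal sketch of that distance and the induced Gaussian kernel is given below, computing the matrix logarithm through an eigendecomposition; the bandwidth sigma and the toy matrices are illustrative choices, not values from the cited work.

```python
import numpy as np

def spd_logm(S):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(A, B):
    """Log-Euclidean distance between two SPD matrices."""
    return np.linalg.norm(spd_logm(A) - spd_logm(B), ord="fro")

def log_euclidean_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel induced by the Log-Euclidean distance."""
    d = log_euclidean_distance(A, B)
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(2)

def random_spd(d=6, eps=1e-2):
    """Toy SPD matrix built as X X^T / d + eps * I."""
    X = rng.normal(size=(d, d))
    return X @ X.T / d + eps * np.eye(d)

A, B = random_spd(), random_spd()
print("LED distance:", log_euclidean_distance(A, B))
print("LED kernel  :", log_euclidean_kernel(A, B, sigma=2.0))
```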
1908.01950
2966170251
The importance of wild video-based image set recognition is increasing monotonically. However, the contents of these collected videos are often complicated, and how to efficiently perform set modeling and feature extraction is a big challenge for set-based classification algorithms. In recent years, some proposed image set classification methods have made considerable advances by modeling the original image set with a covariance matrix, linear subspace, or Gaussian distribution. As a matter of fact, most of them adopt just a single geometric model to describe each given image set, which may lose some other useful information for classification. To tackle this problem, we propose a novel algorithm to model each image set from a multi-geometric perspective. Specifically, the covariance matrix, linear subspace, and Gaussian distribution are applied for set representation simultaneously. In order to fuse these multiple heterogeneous Riemannian manifold-valued features, well-equipped Riemannian kernel functions are first utilized to map them into high-dimensional Hilbert spaces. Then, a multi-kernel metric learning framework is devised to embed the learned hybrid kernels into a lower-dimensional common subspace for classification. We conduct experiments on four widely used datasets corresponding to four different classification tasks: video-based face recognition, set-based object categorization, video-based emotion recognition, and dynamic scene classification, to evaluate the classification performance of the proposed algorithm. Extensive experimental results justify its superiority over the state-of-the-art.
Manifold Dimensionality Reduction Based Image Set Classification: To circumvent the above problem, some algorithms that jointly perform linear mapping and metric learning directly on the original Riemannian manifold have been suggested recently @cite_50 @cite_19 @cite_53 , so that a discriminative lower-dimensional manifold can be obtained. Harandi et al. @cite_50 produce a lower-dimensional SPD manifold with an orthogonal mapping obtained by devising a discriminative metric learning framework with respect to the original high-dimensional data. To reduce the computational complexity, Huang et al. @cite_53 put forward a novel Log-Euclidean metric learning algorithm that forms a desirable SPD manifold by directly embedding the tangent space of the original SPD manifold into a lower-dimensional one. Similarly, Huang et al. @cite_19 learn lower-dimensional and more discriminative Grassmannian-valued feature representations from the original high-dimensional Grassmann manifold under a devised projection metric learning framework. Thanks to fully considering the manifold geometry, the above algorithms show good classification performance. Yet they also share an inherent design flaw: the mapping, although defined and learned on the non-linear Riemannian geometry, is itself linear, which seems unreasonable.
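In the SPD dimensionality-reduction methods above, the learned mapping is an orthonormal matrix W that sends a high-dimensional SPD matrix X to the lower-dimensional SPD matrix W^T X W. The sketch below only illustrates that mapping with a random orthonormal W obtained from a QR factorization; the cited methods learn W discriminatively, which is not reproduced here.

```python
import numpy as np

def project_spd(X, W):
    """Map a d x d SPD matrix X to the lower-dimensional SPD matrix W^T X W.

    W: d x m matrix with orthonormal columns (m < d); the result stays SPD.
    """
    return W.T @ X @ W

rng = np.random.default_rng(3)
d, m = 20, 5
# Random orthonormal projection via QR; the cited works *learn* W discriminatively.
W, _ = np.linalg.qr(rng.normal(size=(d, m)))
A = rng.normal(size=(d, d))
X = A @ A.T + 1e-2 * np.eye(d)                 # toy SPD matrix
Y = project_spd(X, W)
print(Y.shape, "min eigenvalue:", np.linalg.eigvalsh(Y).min())   # > 0, still SPD
```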
{ "cite_N": [ "@cite_19", "@cite_53", "@cite_50" ], "mid": [ "1862697533", "2962772276", "1922045146" ], "abstract": [ "Representing images and videos with Symmetric Positive Definite (SPD) matrices and considering the Riemannian geometry of the resulting space has proven beneficial for many recognition tasks. Unfortunately, computation on the Riemannian manifold of SPD matrices –especially of high-dimensional ones– comes at a high cost that limits the applicability of existing techniques. In this paper we introduce an approach that lets us handle high-dimensional SPD matrices by constructing a lower-dimensional, more discriminative SPD manifold. To this end, we model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection. In particular, we search for a projection that yields a low-dimensional manifold with maximum discriminative power encoded via an affinity-weighted similarity measure based on metrics on the manifold. Learning can then be expressed as an optimization problem on a Grassmann manifold. Our evaluation on several classification tasks shows that our approach leads to a significant accuracy gain over state-of-the-art methods.", "Representing images and videos with Symmetric Positive Definite (SPD) matrices, and considering the Riemannian geometry of the resulting space, has been shown to yield high discriminative power in many visual recognition tasks. Unfortunately, computation on the Riemannian manifold of SPD matrices –especially of high-dimensional ones– comes at a high cost that limits the applicability of existing techniques. In this paper, we introduce algorithms able to handle high-dimensional SPD matrices by constructing a lower-dimensional SPD manifold. To this end, we propose to model the mapping from the high-dimensional SPD manifold to the low-dimensional one with an orthonormal projection. This lets us formulate dimensionality reduction as the problem of finding a projection that yields a low-dimensional manifold either with maximum discriminative power in the supervised scenario, or with maximum variance of the data in the unsupervised one. We show that learning can be expressed as an optimization problem on a Grassmann manifold and discuss fast solutions for special cases. Our evaluation on several classification tasks evidences that our approach leads to a significant accuracy gain over state-of-the-art methods.", "In video based face recognition, great success has been made by representing videos as linear subspaces, which typically lie in a special type of non-Euclidean space known as Grassmann manifold. To leverage the kernel-based methods developed for Euclidean space, several recent methods have been proposed to embed the Grassmann manifold into a high dimensional Hilbert space by exploiting the well established Project Metric, which can approximate the Riemannian geometry of Grassmann manifold. Nevertheless, they inevitably introduce the drawbacks from traditional kernel-based methods such as implicit map and high computational cost to the Grassmann manifold. To overcome such limitations, we propose a novel method to learn the Projection Metric directly on Grassmann manifold rather than in Hilbert space. From the perspective of manifold learning, our method can be regarded as performing a geometry-aware dimensionality reduction from the original Grassmann manifold to a lower-dimensional, more discriminative Grassmann manifold where more favorable classification can be achieved. 
Experiments on several real-world video face datasets demonstrate that the proposed method yields competitive performance compared with the state-of-the-art algorithms." ] }
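The Grassmannian methods in the block above compare linear subspaces through the projection metric, d(Y1, Y2) = ||Y1 Y1^T - Y2 Y2^T||_F / sqrt(2) for orthonormal bases Y1 and Y2. A small sketch with toy image sets follows; the dimensions are chosen arbitrarily for the example.

```python
import numpy as np

def projection_metric(Y1, Y2):
    """Projection metric between two subspaces given by orthonormal bases Y1, Y2 (d x q)."""
    P1, P2 = Y1 @ Y1.T, Y2 @ Y2.T
    return np.linalg.norm(P1 - P2, ord="fro") / np.sqrt(2.0)

def subspace_basis(X, q=5):
    """Orthonormal basis of the leading q-dimensional subspace of an image set X (n x d)."""
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return Vt[:q].T

rng = np.random.default_rng(4)
X1, X2 = rng.normal(size=(30, 15)), rng.normal(size=(30, 15))
Y1, Y2 = subspace_basis(X1), subspace_basis(X2)
print("projection metric:", projection_metric(Y1, Y2))
```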
1908.01841
2966573247
Neural dialogue models, despite their successes, still suffer from a lack of relevance, diversity, and in many cases coherence in their generated responses. These issues have been attributed to reasons including (1) short-range model architectures that capture limited temporal dependencies, (2) limitations of the maximum likelihood training objective, (3) the concave entropy profile of dialogue datasets resulting in short and generic responses, and (4) the out-of-vocabulary problem leading to the generation of a large number of @math tokens. Autoregressive transformer models such as GPT-2, although trained with the maximum likelihood objective, do not suffer from the out-of-vocabulary problem and have demonstrated an excellent ability to capture long-range structures in language modeling tasks. In this paper, we examine the use of autoregressive transformer models for multi-turn dialogue response generation. In our experiments, we employ small and medium GPT-2 models (with publicly available pretrained language model parameters) on the open-domain Movie Triples dataset and the closed-domain Ubuntu Dialogue dataset. The models (with and without pretraining) achieve significant improvements over the baselines for multi-turn dialogue response generation. They also produce state-of-the-art performance on the two datasets based on several metrics, including BLEU, ROUGE, and distinct n-grams.
There has been an ongoing effort to drastically improve the performance of dialogue response generation models, especially in multi-turn scenarios. In particular, effort has been made to improve the performance of RNN-based models by exploring alternative frameworks such as variational auto-encoding @cite_15 , and generative adversarial networks @cite_35 that simultaneously encourage response relevance and diversity. Despite the improvements provided by these models, the quality of model-generated responses is still much below the human level. Recent work on autoregressive transformer-based language models @cite_9 @cite_34 @cite_2 @cite_21 has, however, shown an impressive ability to exploit long temporal dependencies in textual data. In this work, we investigate the effectiveness of the long temporal memory capability of autoregressive transformer-based models for multi-turn dialogue modeling. For our experiments, we adopted the GPT-2 autoregressive transformer architecture @cite_2 due to its large sequence length (1024). To the best of our knowledge, there has been no previous work on using autoregressive transformer-based models for dialogue modeling.
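As a rough illustration of the approach described above, the sketch below feeds a multi-turn context to a pretrained GPT-2 model from the HuggingFace transformers library and samples a continuation as the response. The turn separator, decoding hyperparameters, and example utterances are arbitrary choices for the illustration; this is not the authors' fine-tuning or evaluation setup.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")          # "small" GPT-2
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Multi-turn context: turns joined by the end-of-text token (illustrative separator choice).
turns = ["i can 't believe you forgot the tickets .",
         "i 'm sorry , i thought you had them ."]
context = tokenizer.eos_token.join(turns) + tokenizer.eos_token

input_ids = tokenizer.encode(context, return_tensors="pt")
output_ids = model.generate(
    input_ids,
    max_length=input_ids.shape[1] + 40,    # generate up to 40 new tokens
    do_sample=True, top_k=50, top_p=0.95,  # sampling instead of greedy decoding
    pad_token_id=tokenizer.eos_token_id,
)
response = tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```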
{ "cite_N": [ "@cite_35", "@cite_9", "@cite_21", "@cite_2", "@cite_15", "@cite_34" ], "mid": [ "2806935606", "2551884415", "2952798561", "2593751037", "2418993857", "2767206889" ], "abstract": [ "We propose an adversarial learning approach for generating multi-turn dialogue responses. Our proposed framework, hredGAN, is based on conditional generative adversarial networks (GANs). The GAN's generator is a modified hierarchical recurrent encoder-decoder network (HRED) and the discriminator is a word-level bidirectional RNN that shares context and word embeddings with the generator. During inference, noise samples conditioned on the dialogue history are used to perturb the generator's latent space to generate several possible responses. The final response is the one ranked best by the discriminator. The hredGAN shows improved performance over existing methods: (1) it generalizes better than networks trained using only the log-likelihood criterion, and (2) it generates longer, more informative and more diverse responses with high utterance and topic relevance even with limited training data. This improvement is demonstrated on the Movie triples and Ubuntu dialogue datasets using both automatic and human evaluations.", "We model coherent conversation continuation via RNN-based dialogue models equipped with a dynamic attention mechanism. Our attention-RNN language model dynamically increases the scope of attention on the history as the conversation continues, as opposed to standard attention (or alignment) models with a fixed input scope in a sequence-to-sequence model. This allows each generated word to be associated with the most relevant words in its corresponding conversation history. We evaluate the model on two popular dialogue datasets, the open-domain MovieTriples dataset and the closed-domain Ubuntu Troubleshoot dataset, and achieve significant improvements over the state-of-the-art and baselines on several metrics, including complementary diversity-based metrics, human evaluation, and qualitative visualizations. We also show that a vanilla RNN with dynamic attention outperforms more complex memory models (e.g., LSTM and GRU) by allowing for flexible, long-distance memory. We promote further coherence via topic modeling-based reranking.", "We introduce end-to-end neural network based models for simulating users of task-oriented dialogue systems. User simulation in dialogue systems is crucial from two different perspectives: (i) automatic evaluation of different dialogue models, and (ii) training task-oriented dialogue systems. We design a hierarchical sequence-to-sequence model that first encodes the initial user goal and system turns into fixed length representations using Recurrent Neural Networks (RNN). It then encodes the dialogue history using another RNN layer. At each turn, user responses are decoded from the hidden representations of the dialogue level RNN. This hierarchical user simulator (HUS) approach allows the model to capture undiscovered parts of the user goal without the need of an explicit dialogue state tracking. We further develop several variants by utilizing a latent variable model to inject random variations into user responses to promote diversity in simulated user responses and a novel goal regularization mechanism to penalize divergence of user responses from the initial user goal. 
We evaluate the proposed models on movie ticket booking domain by systematically interacting each user simulator with various dialogue system policies trained with different objectives and users.", "In this paper, we construct and train end-to-end neural network-based dialogue systems using an updated version of the recent Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This dataset is interesting because of its size, long context lengths, and technical nature; thus, it can be used to train large models directly from data with minimal feature engineering, which can be both time consuming and expensive. We provide baselines in two different environments: one where models are trained to maximize the log-likelihood of a generated utterance conditioned on the context of the conversation, and one where models are trained to select the correct next response from a list of candidate responses. These are both evaluated on a recall task that we call Next Utterance Classification (NUC), as well as other generation-specific metrics. Finally, we provide a qualitative error analysis to help determine the most promising directions for future research on the Ubuntu Dialogue Corpus, and for end-to-end dialogue systems in general.", "We introduce the multiresolution recurrent neural network, which extends the sequence-to-sequence framework to model natural language generation as two parallel discrete stochastic processes: a sequence of high-level coarse tokens, and a sequence of natural language tokens. There are many ways to estimate or learn the high-level coarse tokens, but we argue that a simple extraction procedure is sufficient to capture a wealth of high-level discourse semantics. Such procedure allows training the multiresolution recurrent neural network by maximizing the exact joint log-likelihood over both sequences. In contrast to the standard log- likelihood objective w.r.t. natural language tokens (word perplexity), optimizing the joint log-likelihood biases the model towards modeling high-level abstractions. We apply the proposed model to the task of dialogue response generation in two challenging domains: the Ubuntu technical support domain, and Twitter conversations. On Ubuntu, the model outperforms competing approaches by a substantial margin, achieving state-of-the-art results according to both automatic evaluation metrics and a human evaluation study. On Twitter, the model appears to generate more relevant and on-topic responses according to automatic evaluation metrics. Finally, our experiments demonstrate that the proposed model is more adept at overcoming the sparsity of natural language and is better able to capture long-term structure.", "Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English-German and two WMT language pairs. 
By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English-Romanian." ] }
1908.01714
2966789542
In their seminal work on systemic risk in financial markets, Eisenberg and Noe proposed and studied a model with @math firms embedded into a network of debt relations. We analyze this model from a game-theoretic point of view. Every firm is a rational agent in a directed graph that has an incentive to allocate payments in order to clear as much of its debt as possible. Each edge is weighted and describes a liability between the firms. We consider several variants of the game that differ in the permissible payment strategies. We study the existence and computational complexity of pure Nash and strong equilibria, and we provide bounds on the (strong) prices of anarchy and stability for a natural notion of social welfare. Our results highlight the power of financial regulation -- if payments of insolvent firms can be centrally assigned, a socially optimal strong equilibrium can be found in polynomial time. In contrast, worst-case strong equilibria can be a factor of @math away from optimal, and, in general, computing a best response is an NP-hard problem. For less permissible sets of strategies, we show that pure equilibria might not exist, and deciding their existence as well as computing them if they exist constitute NP-hard problems.
To our knowledge, strategic aspects are currently reflected only in models of network formation @cite_10 @cite_9 . A three-period economy is assumed in which firms can invest in risky assets. To do so, they strategically decide to borrow funds from outside investors as well as from other firms. Thereby a network of financial cross-holdings is endogenously formed as each firm maximizes its expected profit. The results show that risk-seeking firms tend to over-connect, leading to stronger contagion and systemic risk compared to the socially optimal risk-sharing allocation. Note that in this case, strategic aspects only play a role in the formation of inter-bank relations, whereas the clearing mechanism is assumed to follow the same process as in @cite_3 .
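The Eisenberg-Noe clearing mechanism referenced above determines a clearing payment vector as a fixed point p_i = min(pbar_i, e_i + sum_j Pi_ji p_j), where pbar is the vector of total liabilities, e the external assets, and Pi the matrix of relative liabilities. Below is a sketch of the standard fixed-point iteration on a made-up three-firm network; the liabilities and asset values are invented for the example.

```python
import numpy as np

def clearing_vector(L, e, n_iter=1000, tol=1e-12):
    """Eisenberg-Noe clearing payments.

    L[i, j]: nominal liability of firm i to firm j; e[i]: external assets of firm i.
    Iterates p <- min(p_bar, e + Pi^T p) starting from p = p_bar until convergence.
    """
    p_bar = L.sum(axis=1)                                       # total liabilities per firm
    Pi = np.divide(L, p_bar[:, None], out=np.zeros_like(L),
                   where=p_bar[:, None] > 0)                    # relative liabilities
    p = p_bar.copy()
    for _ in range(n_iter):
        p_new = np.minimum(p_bar, e + Pi.T @ p)
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p

# Toy example: firm 0 owes 10 to firm 1, firm 1 owes 10 to firm 2, firm 2 owes nothing.
L = np.array([[0., 10., 0.],
              [0., 0., 10.],
              [0., 0., 0.]])
e = np.array([4., 3., 5.])                                      # external assets
print(clearing_vector(L, e))   # firm 0 pays 4, firm 1 pays 7, firm 2 pays 0
```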
{ "cite_N": [ "@cite_9", "@cite_10", "@cite_3" ], "mid": [ "100186914", "2160164079", "2013403206" ], "abstract": [ "I develop a model of the financial sector in which endogenous intermediation among debt financed banks generates excessive systemic risk. Financial institutions have incentives to capture intermediation spreads through strategic borrowing and lending decisions. By doing so, they tilt the division of surplus along an intermediation chain in their favor, while at the same time reducing aggregate surplus. I show that a core-periphery network -- few highly interconnected and many sparsely connected banks -- endogenously emerges in my model. The network is inefficient relative to a constrained efficient benchmark since banks who make risky investments \"overconnect\", exposing themselves to excessive counterparty risk, while banks who mainly provide funding end up with too few connections. The predictions of the model are consistent with empirical evidence in the literature.", "We provide a framework to study the formation of financial networks and investigate the interplay between banks' lending incentives and the emergence of systemic risk. We show that under natural contracting assumptions, banks fail to internalize the implications of their lending decisions for the banks with whom they are not directly contracting, thus establishing the presence of a financial network externality in the process of network formation. We then illustrate how the presence of this externality can function as a channel for the emergence of systemic risk. In particular, we show that (i) banks may \"overlend\" in equilibrium, creating channels over which idiosyncratic shocks can translate into systemic crises via financial contagion; and (ii) they may not spread their lending sufficiently among the set of potential borrowers, creating insufficiently connected financial networks that are excessively prone to contagious defaults. Finally, we show that banks' private incentives may lead to the formation of financial networks that are overly susceptible to systemic meltdowns with some small probability.", "We propose a simple model of inter-bank borrowing and lending where the evolution of the log-monetary reserves of @math banks is described by a system of diffusion processes coupled through their drifts in such a way that stability of the system depends on the rate of inter-bank borrowing and lending. Systemic risk is characterized by a large number of banks reaching a default threshold by a given time horizon. Our model incorporates a game feature where each bank controls its rate of borrowing lending to a central bank. The optimization reflects the desire of each bank to borrow from the central bank when its monetary reserve falls below a critical level or lend if it rises above this critical level which is chosen here as the average monetary reserve. Borrowing from or lending to the central bank is also subject to a quadratic cost at a rate which can be fixed by the regulator. We solve explicitly for Nash equilibria with finitely many players, and we show that in this model the central bank acts as a clearing house, adding liquidity to the system without affecting its systemic risk. We also study the corresponding Mean Field Game in the limit of large number of banks in the presence of a common noise." ] }
1908.01714
2966789542
In their seminal work on systemic risk in financial markets, Eisenberg and Noe proposed and studied a model with @math firms embedded into a network of debt relations. We analyze this model from a game-theoretic point of view. Every firm is a rational agent in a directed graph that has an incentive to allocate payments in order to clear as much of its debt as possible. Each edge is weighted and describes a liability between the firms. We consider several variants of the game that differ in the permissible payment strategies. We study the existence and computational complexity of pure Nash and strong equilibria, and we provide bounds on the (strong) prices of anarchy and stability for a natural notion of social welfare. Our results highlight the power of financial regulation -- if payments of insolvent firms can be centrally assigned, a socially optimal strong equilibrium can be found in polynomial time. In contrast, worst-case strong equilibria can be a factor of @math away from optimal, and, in general, computing a best response is an NP-hard problem. For less permissible sets of strategies, we show that pure equilibria might not exist, and deciding their existence as well as computing them if they exist constitute NP-hard problems.
On a more technical level, our game-theoretic approach is related to a number of existing game-theoretic models based on flows in networks. In cooperative game theory, there are several notions of flow games based on a directed flow network. Existing variants include games where the edges are players @cite_2 @cite_8 @cite_22 @cite_0 @cite_7 @cite_20 , or where each player owns a source-sink pair @cite_1 @cite_12 . The total value of a coalition @math is the profit from a maximum (multi-commodity) flow that can be routed through the network if only the players in @math are present. There is a rich set of results on structural characterizations and computability of solutions in the core, as well as of other solution concepts for these cooperative games. In contrast to our work, these games are non-strategic. Instead, here we consider each player as a single node with a strategic decision about flow allocation.
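In the edge-player flow games mentioned above, the value of a coalition is the maximum source-sink flow that can be routed using only the edges owned by its members. A sketch of this characteristic function using networkx on a toy network follows; the edge ownership and capacities are invented for the example.

```python
import networkx as nx

def coalition_value(edges, coalition, source, sink):
    """Value of a coalition in an edge-player flow game: max s-t flow using only its edges.

    edges: dict mapping player -> (u, v, capacity); coalition: set of players.
    """
    G = nx.DiGraph()
    G.add_nodes_from({source, sink})
    for player in coalition:
        u, v, cap = edges[player]
        G.add_edge(u, v, capacity=cap)
    if not nx.has_path(G, source, sink):
        return 0.0
    return nx.maximum_flow_value(G, source, sink)

# Toy network: each player owns exactly one edge.
edges = {
    "p1": ("s", "a", 3.0),
    "p2": ("a", "t", 2.0),
    "p3": ("s", "t", 1.0),
}
print(coalition_value(edges, {"p1", "p2"}, "s", "t"))        # 2.0
print(coalition_value(edges, {"p1", "p2", "p3"}, "s", "t"))  # 3.0
```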
{ "cite_N": [ "@cite_22", "@cite_7", "@cite_8", "@cite_1", "@cite_0", "@cite_2", "@cite_12", "@cite_20" ], "mid": [ "2013391976", "2143039680", "2115537584", "2127813674", "1605139543", "2046116913", "2007583146", "2158770193" ], "abstract": [ "We study network games in which each player wishes to connect his source and sink, and the cost of each edge is shared among its users either equally (in Fair Connection Games--FCG's) or arbitrarily (in General Connection Games--GCG's). We study the existence and quality of strong equilibria (SE)--strategy profiles from which no coalition can improve the cost of each of its members--in these settings. We show that SE always exist in the following games: (1) Single source and sink FCG's and GCG's. (2) Single source multiple sinks FCG's and GCG's on series parallel graphs. (3) Multi source and sink FCG's on extension parallel graphs. As for the quality of the SE, in any FCG with n players, the cost of any SE is bounded by H(n) (i.e., the harmonic sum), contrasted with the [Theta](n) price of anarchy. For any GCG, any SE is optimal.", "We consider computational aspects of a game theoretic approach to network reliability. Consider a network where failure of one node may disrupt communication between two other nodes. We model this network as a simple coalitional game, called the vertex Connectivity Game (CG). In this game, each agent owns a vertex, and controls all the edges going to and from that vertex. A coalition of agents wins if it fully connects a certain subset of vertices in the graph, called the primary vertices. We show that power indices, which express an agent's ability to affect the outcome of the vertex connectivity game, can be used to identify significant possible points of failure in the communication network, and can thus be used to increase network reliability. We show that in general graphs, calculating the Banzhaf power index is #P-complete, but suggest a polynomial algorithm for calculating this index in trees. We also show a polynomial algorithm for computing the core of a CG, which allows a stable division of payments to coalition agents.", "A key question in cooperative game theory is that of coalitional stability, usually captured by the notion of the core --the set of outcomes such that no subgroup of players has an incentive to deviate. However, some coalitional games have empty cores, and any outcome in such a game is unstable. In this paper, we investigate the possibility of stabilizing a coalitional game by using external payments. We consider a scenario where an external party, which is interested in having the players work together, offers a supplemental payment to the grand coalition (or, more generally, a particular coalition structure). This payment is conditional on players not deviating from their coalition(s). The sum of this payment plus the actual gains of the coalition(s) may then be divided among the agents so as to promote stability. We define the cost of stability (CoS) as the minimal external payment that stabilizes the game. We provide general bounds on the cost of stability in several classes of games, and explore its algorithmic properties. To develop a better intuition for the concepts we introduce, we provide a detailed algorithmic study of the cost of stability in weighted voting games, a simple but expressive class of games which can model decision-making in political bodies, and cooperation in multiagent settings. 
Finally, we extend our model and results to games with coalition structures.", "Coalitional games allow subsets (coalitions) of players to cooperate to receive a collective payoff. This payoff is then distributed “fairly” among the members of that coalition according to some division scheme. Various solution concepts have been proposed as reasonable schemes for generating fair allocations. The Shapley value is one classic solution concept: player i’s share is precisely equal to i’s expected marginal contribution if the players join the coalition one at a time, in a uniformly random order. In this paper, we consider the class of supermodular games (sometimes called convex games), and give a fully polynomial-time randomized approximation scheme (FPRAS) to compute the Shapley value to within a (1 ±e) factor in monotone supermodular games. We show that this result is tight in several senses: no deterministic algorithm can approximate Shapley value as well, no randomized algorithm can do better, and both monotonicity and supermodularity are required for the existence of an efficient (1 ±e)-approximation algorithm. We also argue that, relative to supermodularity, monotonicity is a mild assumption, and we discuss how to transform supermodular games to be monotonic.", "Simple coalitional games are a fundamental class of cooperative games and voting games which are used to model coalition formation, resource allocation and decision making in computer science, artificial intelligence and multiagent systems. Although simple coalitional games are well studied in the domain of game theory and social choice, their algorithmic and computational complexity aspects have received less attention till recently. The computational aspects of simple coalitional games are of increased importance as these games are used by computer scientists to model distributed settings. This thesis fits in the wider setting of the interplay between economics and computer science which has led to the development of algorithmic game theory and computational social choice. A unified view of the computational aspects of simple coalitional games is presented here for the first time. Certain complexity results also apply to other coalitional games such as skill games and matching games. The following issues are given special consideration: influence of players, limit and complexity of manipulations in the coalitional games and complexity of resource allocation on networks. The complexity of comparison of influence between players in simple games is characterized. The simple games considered are represented by winning coalitions, minimal winning coalitions, weighted voting games or multiple weighted voting games. A comprehensive classification of weighted voting games which can be solved in polynomial time is presented. An efficient algorithm which uses generating functions and interpolation to compute an integer weight vector for target power indices is proposed. Voting theory, especially the Penrose Square Root Law, is used to investigate the fairness of a real life voting model. Computational complexity of manipulation in social choice protocols can determine whether manipulation is computationally feasible or not. The computational complexity and bounds of manipulation are considered from various angles including control, false-name manipulation and bribery. Moreover, the computational complexity of computing various cooperative game solutions of simple games in dierent representations is studied. 
Certain structural results regarding least core payos extend to the general monotone cooperative game. The thesis also studies a coalitional game called the spanning connectivity game. It is proved that whereas computing the Banzhaf values and Shapley-Shubik indices of such games is #P-complete, there is a polynomial time combinatorial algorithm to compute the nucleolus. The results have interesting significance for optimal strategies for the wiretapping game which is a noncooperative game defined on a network.", "We study from a complexity theoretic standpoint the various solution concepts arising in cooperative game theory. We use as a vehicle for this study a game in which the players are nodes of a graph with weights on the edges, and the value of a coalition is determined by the total weight of the edges contained in it. The Shapley value is always easy to compute. The core is easy to characterize when the game is convex, and is intractable (NP-complete) otherwise. Similar results are shown for the kernel, the nucleolus, the e-core, and the bargaining set. As for the von Neumann-Morgenstern solution, we point out that its existence may not even be decidable. Many of these results generalize to the case in which the game is presented by a hypergraph with edges of size k > 2.", "We study network design games where @math self-interested agents have to form a network by purchasing links from a given set of edges. We consider Shapley cost sharing mechanisms that split the cost of an edge in a fair manner among the agents using the edge. It is well known that the price of anarchy of these games is as high as @math . Another line of research has focused on evaluating the price of stability, i.e., the cost of the best Nash equilibrium relative to the social optimum. In this paper we investigate to which extent coordination among agents can improve the quality of solutions. We resort to the concept of strong Nash equilibria, which were introduced by Aumann and are resilient to deviations by coalitions of agents. We analyze the price of anarchy of strong Nash equilibria and develop lower and upper bounds for unweighted and weighted games in both directed and undirected graphs. These bounds are tight or nearly tight for many scenarios. It shows that, by using coordination, the price of anarchy drops from linear to logarithmic bounds. We complement these results by also proving the first superconstant lower bound on the price of stability of standard equilibria (without coordination) in undirected graphs. More specifically, we show a lower bound of @math for weighted games, where @math is the total weight of all the agents. This almost matches the known upper bound of @math . Our results imply that, for most settings, the worst-case performance ratios of strong coordinated equilibria are essentially always as good as the performance ratios of the best equilibria achievable without coordination. These settings include unweighted games in directed graphs as well as weighted games in both directed and undirected graphs.", "We study the impact of collusion in network games with splittable flow and focus on the well established price of anarchy as a measure of this impact. We first investigate symmetric load balancing games and show that the price of anarchy is at most m, where m denotes the number of coalitions. For general networks, we present an instance showing that the price of anarchy is unbounded, even in the case of two coalitions. 
If latencies are restricted to polynomials with nonnegative coefficients and bounded degree, we prove upper bounds on the price of anarchy for general networks, which improve upon the current best ones except for affine latencies. In light of the negative results even for two coalitions, we analyze the effectiveness of Stackelberg strategies as a means to improve the quality of Nash equilibria. In this setting, an α fraction of the entire demand is first routed centrally by a Stackelberg leader according to a predefined Stackelberg strategy and the remaining demand is then routed selfishly by the coalitions (followers). For a single coalitional follower and parallel arcs, we develop an efficiently computable Stackelberg strategy that reduces the price of anarchy to one. For general networks and a single coalitional follower, we show that a simple strategy, called SCALE, reduces the price of anarchy to 1+α. Finally, we investigate SCALE for multiple coalitional followers, general networks, and affine latencies. We present the first known upper bound on the price of anarchy in this case. Our bound smoothly varies between 1.5 for α=0 and full efficiency for α=1." ] }
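One of the abstracts in the block above concerns approximating the Shapley value, a player's expected marginal contribution over uniformly random join orders. A generic Monte Carlo estimator of that quantity is sketched below for an arbitrary characteristic function; the toy game v(S) = |S|^2 is only an example and not taken from the cited work.

```python
import random

def shapley_monte_carlo(players, value, n_samples=2000, seed=0):
    """Estimate Shapley values by sampling random permutations of the players.

    value: function mapping a frozenset of players to the coalition's worth.
    """
    rng = random.Random(seed)
    players = list(players)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        rng.shuffle(players)
        coalition, prev = set(), 0.0
        for p in players:
            coalition.add(p)
            cur = value(frozenset(coalition))
            phi[p] += cur - prev          # marginal contribution of p in this order
            prev = cur
    return {p: v / n_samples for p, v in phi.items()}

# Toy supermodular game: v(S) = |S|^2; by symmetry each player's value is about 3.0.
v = lambda S: float(len(S)) ** 2
print(shapley_monte_carlo({"a", "b", "c"}, v))
```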
1908.01623
2966715342
Temporal point processes are widely used for sequential data modeling. In this paper, we focus on the problem of modeling sequential event propagation in a graph, such as retweeting by social network users or news transmission between websites. Given a collection of event propagation sequences, conventional point process models consider only the event history, i.e. they embed the event history into a vector, not the latent graph structure. We propose a Graph Biased Temporal Point Process (GBTPP) that leverages the structural information from graph representation learning, where the direct influence between nodes and the indirect influence from the event history are modeled separately. Moreover, the learned node embedding vector is also integrated into the embedded event history as side information. Experiments on a synthetic dataset and two real-world datasets show the efficacy of our model compared to conventional methods and the state-of-the-art.
First, the conventional varying-order Markov models @cite_12 treat this problem as a discrete-time sequence prediction task. Based on the observed sequence of history states, the predicted event type is the most likely state that the state transition process will evolve into at the next step. An obvious limitation of the family of Markov models is that they assume the state transition process proceeds in unit time-steps; they cannot capture the temporal dependency of continuous time or predict the exact time of the next event. Moreover, Markov models cannot handle long dependencies on the history events when the event sequence is long, because the size of the state space grows exponentially with the number of time steps considered in the Markov model. It is worth mentioning that semi-Markov models @cite_3 can model continuous time-intervals between two states to some extent, by assuming the intervals follow some simple distributions, but they still suffer from the state space explosion problem when dealing with long time dependencies.
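A minimal sketch of the fixed-order Markov baseline discussed above: transition counts are collected for the last k event types and the most frequent continuation is predicted (varying-order models generalize this by adapting the context length). The toy propagation sequences are invented for the illustration.

```python
from collections import Counter, defaultdict

def train_markov(sequences, order=2):
    """Count next-event frequencies conditioned on the last `order` event types."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for i in range(order, len(seq)):
            context = tuple(seq[i - order:i])
            counts[context][seq[i]] += 1
    return counts

def predict_next(counts, history, order=2):
    """Most likely next event type given the recent history, or None for an unseen context."""
    context = tuple(history[-order:])
    if context not in counts:
        return None
    return counts[context].most_common(1)[0][0]

# Toy propagation sequences over node ids (e.g. retweet cascades).
sequences = [["u1", "u2", "u3", "u4"],
             ["u1", "u2", "u3", "u5"],
             ["u2", "u3", "u4", "u1"]]
model = train_markov(sequences, order=2)
print(predict_next(model, ["u1", "u2", "u3"]))   # "u4" (seen twice after ("u2", "u3"))
```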
{ "cite_N": [ "@cite_3", "@cite_12" ], "mid": [ "2003095404", "1535517661" ], "abstract": [ "Abstract When the initial distribution and transition rates for a continuous time Markov chain are not known precisely, robust methods are needed to study the evolution of the process in time to avoid judgements based on unwarranted precision. We follow the ideas successfully applied in the study of discrete time model to build a framework of imprecise Markov chains in continuous time. The imprecision in the distributions over the set of states is modelled with upper and lower expectation functionals, which equivalently represent sets of probability distributions. Uncertainty in transitions is modelled with sets of transition rates compatible with available information. The Kolmogorov’s backward equation is then generalised into the form of a generalised differential equation, with generalised derivatives and set valued maps. The upper and lower expectation functionals corresponding to imprecise distributions at given times are determined by the maximal and minimal solutions of these equations. The second part of the paper is devoted to numerical methods for approximating the boundary solutions. The methods are based on discretisation of the time interval. A uniform and adaptive grid discretisations are examined. The latter is computationally much more efficient than the former one, but is not applicable on every interval. Therefore, to achieve maximal efficiency a combination of the methods is used.", "We present an on-the-fly abstraction technique for infinite-state continuous -time Markov chains. We consider Markov chains that are specified by a finite set of transition classes. Such models naturally represent biochemical reactions and therefore play an important role in the stochastic modeling of biological systems. We approximate the transient probability distributions at various time instances by solving a sequence of dynamically constructed abstract models, each depending on the previous one. Each abstract model is a finite Markov chain that represents the behavior of the original, infinite chain during a specific time interval. Our approach provides complete information about probability distributions, not just about individual parameters like the mean. The error of each abstraction can be computed, and the precision of the abstraction refined when desired. We implemented the algorithm and demonstrate its usefulness and efficiency on several case studies from systems biology." ] }
1908.01623
2966715342
Temporal point processes are widely used for sequential data modeling. In this paper, we focus on the problem of modeling sequential event propagation in a graph, such as retweeting by social network users or news transmission between websites. Given a collection of event propagation sequences, conventional point process models consider only the event history, i.e. they embed the event history into a vector, not the latent graph structure. We propose a Graph Biased Temporal Point Process (GBTPP) that leverages the structural information from graph representation learning, where the direct influence between nodes and the indirect influence from the event history are modeled separately. Moreover, the learned node embedding vector is also integrated into the embedded event history as side information. Experiments on a synthetic dataset and two real-world datasets show the efficacy of our model compared to conventional methods and the state-of-the-art.
Second, temporal point processes with conditional intensity functions form a more general framework for sequential event data modeling. The Temporal Point Process (TPP) is powerful for modeling event sequences with time-stamps in continuous time. Early work dates back to the Hawkes process @cite_40 , which is well suited to self-exciting and mutually-exciting processes such as earthquakes and their aftershocks @cite_15 @cite_26 . As an effective model for event sequence modeling, the TPP has been widely used in various applications, including data mining tasks, e.g. social infectivity learning @cite_37 , conflict analysis @cite_7 , crime modeling @cite_31 , email network analytics @cite_39 and extremal behavior of stock prices @cite_23 , and event prediction tasks, e.g. failure prediction @cite_21 , sales outcome forecasting @cite_2 , literature citation prediction @cite_11 .
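The classic univariate Hawkes process mentioned above has conditional intensity lambda(t) = mu + sum over past events t_i < t of alpha * exp(-beta (t - t_i)). The sketch below evaluates this intensity and simulates a sample path with Ogata's thinning algorithm; the parameter values are illustrative only.

```python
import numpy as np

def hawkes_intensity(t, events, mu=0.5, alpha=0.8, beta=1.5):
    """Conditional intensity of a univariate Hawkes process with exponential kernel."""
    past = np.asarray([s for s in events if s < t])
    return mu + alpha * np.exp(-beta * (t - past)).sum()

def simulate_hawkes(T, mu=0.5, alpha=0.8, beta=1.5, seed=0):
    """Simulate event times on [0, T] with Ogata's thinning algorithm."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < T:
        # Upper bound on the (decaying) intensity until the next accepted event.
        lam_bar = hawkes_intensity(t, events, mu, alpha, beta) + alpha
        t += rng.exponential(1.0 / lam_bar)
        if t < T and rng.uniform() * lam_bar <= hawkes_intensity(t, events, mu, alpha, beta):
            events.append(t)
    return events

events = simulate_hawkes(T=20.0)
print(len(events), "events; intensity at T:", hawkes_intensity(20.0, events))
```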
{ "cite_N": [ "@cite_37", "@cite_26", "@cite_31", "@cite_7", "@cite_21", "@cite_39", "@cite_40", "@cite_23", "@cite_2", "@cite_15", "@cite_11" ], "mid": [ "2509830164", "1938647246", "2605191235", "2086092403", "2963148415", "2093655440", "2090320383", "199115908", "2119355229", "2890009182", "2128364334" ], "abstract": [ "Large volumes of event data are becoming increasingly available in a wide variety of applications, such as healthcare analytics, smart cities and social network analysis. The precise time interval or the exact distance between two events carries a great deal of information about the dynamics of the underlying systems. These characteristics make such data fundamentally different from independently and identically distributed data and time-series data where time and space are treated as indexes rather than random variables. Marked temporal point processes are the mathematical framework for modeling event data with covariates. However, typical point process models often make strong assumptions about the generative processes of the event data, which may or may not reflect the reality, and the specifically fixed parametric assumptions also have restricted the expressive power of the respective processes. Can we obtain a more expressive model of marked temporal point processes? How can we learn such a model from massive data? In this paper, we propose the Recurrent Marked Temporal Point Process (RMTPP) to simultaneously model the event timings and the markers. The key idea of our approach is to view the intensity function of a temporal point process as a nonlinear function of the history, and use a recurrent neural network to automatically learn a representation of influences from the event history. We develop an efficient stochastic gradient algorithm for learning the model parameters which can readily scale up to millions of events. Using both synthetic and real world datasets, we show that, in the case where the true models have parametric specifications, RMTPP can learn the dynamics of such models without the need to know the actual parametric forms; and in the case where the true models are unknown, RMTPP can also learn the dynamics and achieve better predictive performance than other parametric alternatives based on particular prior assumptions.", "Determinantal point processes (DPP) serve as a practicable modeling for many applications of repulsive point processes. A known approach for simulation was proposed in Hough(2006) , which generate the desired distribution point wise through rejection sampling. Unfortunately, the size of rejection could be very large. In this paper, we investigate the application of perfect simulation via coupling from the past (CFTP) on DPP. We give a general framework for perfect simulation on DPP model. It is shown that the limiting sequence of the time-to-coalescence of the coupling is bounded by @math An application is given to the stationary models in DPP.", "Event sequence, asynchronously generated with random timestamp, is ubiquitous among applications. The precise and arbitrary timestamp can carry important clues about the underlying dynamics, and has lent the event data fundamentally different from the time-series whereby series is indexed with fixed and equal time interval. One expressive mathematical tool for modeling event is point process. The intensity functions of many point processes involve two components: the background and the effect by the history. 
Due to its inherent spontaneousness, the background can be treated as a time series while the other need to handle the history events. In this paper, we model the background by a Recurrent Neural Network (RNN) with its units aligned with time series indexes while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture the long-range dynamics. The whole model with event type and timestamp prediction output layers can be trained end-to-end. Our approach takes an RNN perspective to point process, and models its background and history effect. For utility, our method allows a black-box treatment for modeling the intensity which is often a pre-defined parametric form in point processes. Meanwhile end-to-end training opens the venue for reusing existing rich techniques in deep network for point process modeling. We apply our model to the predictive maintenance problem using a log dataset by more than 1000 ATMs from a global bank headquartered in North America.", "This paper presents an inference algorithm that can discover temporal logic properties of a system from data. Our algorithm operates on finite time system trajectories that are labeled according to whether or not they demonstrate some desirable system properties (e.g. \"the car successfully stops before hitting an obstruction\"). A temporal logic formula that can discriminate between the desirable behaviors and the undesirable ones is constructed. The formulae also indicate possible causes for each set of behaviors (e.g. \"If the speed of the car is greater than 15 m s within 0.5s of brake application, the obstruction will be struck\") which can be used to tune designs or to perform on-line monitoring to ensure the desired behavior. We introduce reactive parameter signal temporal logic (rPSTL), a fragment of parameter signal temporal logic (PSTL) that is expressive enough to capture causal, spatial, and temporal relationships in data. We define a partial order over the set of rPSTL formulae that is based on language inclusion. This order enables a directed search over this set, i.e. given a candidate rPSTL formula that does not adequately match the observed data, we can automatically construct a formula that will fit the data at least as well. Two case studies, one involving a cattle herding scenario and one involving a stochastic hybrid gene circuit model, are presented to illustrate our approach.", "Point processes are becoming very popular in modeling asynchronous sequential data due to their sound mathematical foundation and strength in modeling a variety of real-world phenomena. Currently, they are often characterized via intensity function which limits model's expressiveness due to unrealistic assumptions on its parametric form used in practice. Furthermore, they are learned via maximum likelihood approach which is prone to failure in multi-modal distributions of sequences. In this paper, we propose an intensity-free approach for point processes modeling that transforms nuisance processes to a target one. Furthermore, we train the model using a likelihood-free leveraging Wasserstein distance between point processes. Experiments on various synthetic and real-world data substantiate the superiority of the proposed point process model over conventional ones.", "Early prediction of ongoing human activity has become more valuable in a large variety of time-critical applications. 
To build an effective representation for prediction, human activities can be characterized by a complex temporal composition of constituent simple actions and interacting objects. Different from early detection on short-duration simple actions, we propose a novel framework for long -duration complex activity prediction by discovering three key aspects of activity: Causality, Context-cue, and Predictability. The major contributions of our work include: (1) a general framework is proposed to systematically address the problem of complex activity prediction by mining temporal sequence patterns; (2) probabilistic suffix tree (PST) is introduced to model causal relationships between constituent actions, where both large and small order Markov dependencies between action units are captured; (3) the context-cue, especially interactive objects information, is modeled through sequential pattern mining (SPM), where a series of action and object co-occurrence are encoded as a complex symbolic sequence; (4) we also present a predictive accumulative function (PAF) to depict the predictability of each kind of activity. The effectiveness of our approach is evaluated on two experimental scenarios with two data sets for each: action-only prediction and context-aware prediction. Our method achieves superior performance for predicting global activity classes and local action units.", "Massachusetts Institute of Technology and the University of Washington Reactive point processes (RPPs) are a new statistical model designed for predicting discrete events in time, based on past history. RPPs were developed to handle an important problem within the domain of electrical grid reliability: short term prediction of electrical grid failures (“manhole events”), including outages, fires, explosions, and smoking manholes, which can cause threats to public safety and reliability of electrical service in cities. RPPs incorporate self-exciting, self-regulating, and saturating components. The self-excitement occurs as a result of a past event, which causes a temporary rise in vulnerability to future events. The self-regulation occurs as a result of an external inspection which temporarily lowers vulnerability to future events. RPPs can saturate when too many events or inspections occur close together, which ensures that the probability of an event stays within a realistic range. Two of the operational challenges for power companies are i) making continuous-time failure predictions, and ii) cost benefit analysis for decision making and proactive maintenance. RPPs are naturally suited for handling both of these challenges. We use the model to predict power-grid failures in Manhattan over a short term horizon, and use to provide a cost benefit analysis of different proactive maintenance programs.", "Early prediction of ongoing activity has been more and more valuable in a large variety of time-critical applications. To build an effective representation for prediction, human activities can be characterized by a complex temporal composition of constituent simple actions. Different from early recognition on short-duration simple activities, we propose a novel framework for long-duration complex activity prediction by discovering the causal relationships between constituent actions and the predictable characteristics of activities. 
The major contributions of our work include: (1) we propose a novel activity decomposition method by monitoring motion velocity which encodes a temporal decomposition of long activities into a sequence of meaningful action units; (2) Probabilistic Suffix Tree (PST) is introduced to represent both large and small order Markov dependencies between action units; (3) we present a Predictive Accumulative Function (PAF) to depict the predictability of each kind of activity. The effectiveness of the proposed method is evaluated on two experimental scenarios: activities with middle-level complexity and activities with high-level complexity. Our method achieves promising results and can predict global activity classes and local action units.", "We give a denotational framework (a \"meta model\") within which certain properties of models of computation can be compared. It describes concurrent processes in general terms as sets of possible behaviors. A process is determinate if, given the constraints imposed by the inputs, there are exactly one or exactly zero behaviors. Compositions of processes are processes with behaviors in the intersection of the behaviors of the component processes. The interaction between processes is through signals, which are collections of events. Each event is a value-tag pair, where the tags can come from a partially ordered or totally ordered set. Timed models are where the set of tags is totally ordered. Synchronous events share the same tag, and synchronous signals contain events with the same set of tags. Synchronous processes have only synchronous signals as behaviors. Strict causality (in timed tag systems) and continuity (in untimed tag systems) ensure determinacy under certain technical conditions. The framework is used to compare certain essential features of various models of computation, including Kahn process networks, dataflow, sequential processes, concurrent sequential processes with rendezvous, Petri nets, and discrete-event systems.", "Many real world problems from sustainability, healthcare and Internet generate discrete events in continuous time. The generative processes of these data can be very complex, requiring flexible models to capture their dynamics. Temporal point processes offer an elegant framework for modeling such event data. However, sophisticated point process models typically leads to intractable likelihood functions, making model fitting difficult in practice. We address this challenge from the perspective of reinforcement learning (RL), and relate the intensity function of a point process to a stochastic policy in reinforcement learning. We parameterize the policy as a flexible recurrent neural network, and reward models which can mimic the observed event distribution. Since the reward function is unknown in practice, we also uncover an analytic form of the reward function using an inverse reinforcement learning formulation and functions from a reproducing kernel Hilbert space. This new RL framework allows us to derive an efficient policy gradient algorithm for learning flexible point process models, and we show that it performs well in both synthetic and real data.", "This paper presents a novel sequence labeling model based on the latent-variable semi-Markov conditional random fields for jointly extracting argument roles of events from texts. The model takes in coarse mention and type information and predicts argument roles for a given event template. 
This paper addresses the event extraction problem in a primarily unsupervised setting, where no labeled training instances are available. Our key contribution is a novel learning framework called structured preference modeling (PM), that allows arbitrary preference to be assigned to certain structures during the learning procedure. We establish and discuss connections between this framework and other existing works. We show empirically that the structured preferences are crucial to the success of our task. Our model, trained without annotated data and with a small number of structured preferences, yields performance competitive to some baseline supervised approaches." ] }
1908.01623
2966715342
Temporal point processes are widely used for sequential data modeling. In this paper, we focus on the problem of modeling sequential event propagation in a graph, such as retweeting by social network users or news transmission between websites. Given a collection of event propagation sequences, conventional point process models consider only the event history, i.e., they embed the event history into a vector but ignore the latent graph structure. We propose a Graph Biased Temporal Point Process (GBTPP) that leverages structural information from graph representation learning, where the direct influence between nodes and the indirect influence from the event history are modeled separately. Moreover, the learned node embedding vector is also integrated into the embedded event history as side information. Experiments on a synthetic dataset and two real-world datasets show the efficacy of our model compared to conventional methods and the state of the art.
Traditional TPP models rely on manually designed parametric forms of the conditional intensity function @math , which measures the instantaneous event occurrence rate at time @math . A few popular examples include: the Poisson process @cite_25 , whose basic form @math is history-independent and dates back to the early 1900s; reinforced Poisson processes @cite_19 , which capture the 'rich-get-richer' mechanism via @math , where @math mimics the aging effect and @math is the accumulation of history events; the self-exciting (Hawkes) process @cite_16 , which provides an additive model to capture the self-exciting effect of history events, @math ; and the reactive point process @cite_21 , a generalization of the Hawkes process that adds a self-inhibiting term to account for the inhibiting effects of history, @math .
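To make these intensity forms concrete, here is a minimal Python sketch of the standard textbook versions of the four intensities. Because the exact formulas are elided behind the @math placeholders, the parametric forms, parameter names, and numeric values below are illustrative assumptions rather than the cited papers' exact definitions.

```python
import numpy as np

def poisson_intensity(t, mu=0.5):
    """Homogeneous Poisson process: a constant rate, independent of history."""
    return mu

def reinforced_poisson_intensity(t, history, mu=0.5, decay=0.1):
    """Reinforced Poisson process: the rate grows with the number of past
    events ('rich get richer'), damped by an assumed aging term."""
    aging = np.exp(-decay * t)                            # aging effect f(t)
    accumulation = 1 + sum(1 for s in history if s < t)   # accumulation of past events i(t)
    return mu * aging * accumulation

def hawkes_intensity(t, history, mu=0.2, alpha=0.8, beta=1.0):
    """Hawkes (self-exciting) process: each past event adds an exponentially
    decaying bump on top of the background rate."""
    past = np.array([s for s in history if s < t])
    return mu + alpha * np.sum(np.exp(-beta * (t - past)))

def reactive_intensity(t, events, inspections, mu=0.2, alpha=0.8, beta=1.0,
                       gamma=0.5, delta=1.0):
    """Reactive point process: Hawkes-style excitation from past events plus a
    self-inhibiting term driven by past inspections/interventions."""
    past_events = np.array([s for s in events if s < t])
    past_inspections = np.array([s for s in inspections if s < t])
    excite = alpha * np.sum(np.exp(-beta * (t - past_events)))
    inhibit = gamma * np.sum(np.exp(-delta * (t - past_inspections)))
    return max(mu + excite - inhibit, 0.0)

# Example: Hawkes intensity at t = 3.0 given three earlier events.
print(hawkes_intensity(3.0, history=[0.5, 1.2, 2.8]))
```

Fitting any of these models typically maximizes the standard point process log-likelihood, i.e. the sum of log-intensities at the observed event times minus the integrated intensity over the observation window.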
{ "cite_N": [ "@cite_19", "@cite_16", "@cite_21", "@cite_25" ], "mid": [ "1938647246", "2090320383", "2509830164", "2605191235" ], "abstract": [ "Determinantal point processes (DPP) serve as a practicable modeling for many applications of repulsive point processes. A known approach for simulation was proposed in Hough(2006) , which generate the desired distribution point wise through rejection sampling. Unfortunately, the size of rejection could be very large. In this paper, we investigate the application of perfect simulation via coupling from the past (CFTP) on DPP. We give a general framework for perfect simulation on DPP model. It is shown that the limiting sequence of the time-to-coalescence of the coupling is bounded by @math An application is given to the stationary models in DPP.", "Massachusetts Institute of Technology and the University of Washington Reactive point processes (RPPs) are a new statistical model designed for predicting discrete events in time, based on past history. RPPs were developed to handle an important problem within the domain of electrical grid reliability: short term prediction of electrical grid failures (“manhole events”), including outages, fires, explosions, and smoking manholes, which can cause threats to public safety and reliability of electrical service in cities. RPPs incorporate self-exciting, self-regulating, and saturating components. The self-excitement occurs as a result of a past event, which causes a temporary rise in vulnerability to future events. The self-regulation occurs as a result of an external inspection which temporarily lowers vulnerability to future events. RPPs can saturate when too many events or inspections occur close together, which ensures that the probability of an event stays within a realistic range. Two of the operational challenges for power companies are i) making continuous-time failure predictions, and ii) cost benefit analysis for decision making and proactive maintenance. RPPs are naturally suited for handling both of these challenges. We use the model to predict power-grid failures in Manhattan over a short term horizon, and use to provide a cost benefit analysis of different proactive maintenance programs.", "Large volumes of event data are becoming increasingly available in a wide variety of applications, such as healthcare analytics, smart cities and social network analysis. The precise time interval or the exact distance between two events carries a great deal of information about the dynamics of the underlying systems. These characteristics make such data fundamentally different from independently and identically distributed data and time-series data where time and space are treated as indexes rather than random variables. Marked temporal point processes are the mathematical framework for modeling event data with covariates. However, typical point process models often make strong assumptions about the generative processes of the event data, which may or may not reflect the reality, and the specifically fixed parametric assumptions also have restricted the expressive power of the respective processes. Can we obtain a more expressive model of marked temporal point processes? How can we learn such a model from massive data? In this paper, we propose the Recurrent Marked Temporal Point Process (RMTPP) to simultaneously model the event timings and the markers. 
The key idea of our approach is to view the intensity function of a temporal point process as a nonlinear function of the history, and use a recurrent neural network to automatically learn a representation of influences from the event history. We develop an efficient stochastic gradient algorithm for learning the model parameters which can readily scale up to millions of events. Using both synthetic and real world datasets, we show that, in the case where the true models have parametric specifications, RMTPP can learn the dynamics of such models without the need to know the actual parametric forms; and in the case where the true models are unknown, RMTPP can also learn the dynamics and achieve better predictive performance than other parametric alternatives based on particular prior assumptions.", "Event sequence, asynchronously generated with random timestamp, is ubiquitous among applications. The precise and arbitrary timestamp can carry important clues about the underlying dynamics, and has lent the event data fundamentally different from the time-series whereby series is indexed with fixed and equal time interval. One expressive mathematical tool for modeling event is point process. The intensity functions of many point processes involve two components: the background and the effect by the history. Due to its inherent spontaneousness, the background can be treated as a time series while the other need to handle the history events. In this paper, we model the background by a Recurrent Neural Network (RNN) with its units aligned with time series indexes while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture the long-range dynamics. The whole model with event type and timestamp prediction output layers can be trained end-to-end. Our approach takes an RNN perspective to point process, and models its background and history effect. For utility, our method allows a black-box treatment for modeling the intensity which is often a pre-defined parametric form in point processes. Meanwhile end-to-end training opens the venue for reusing existing rich techniques in deep network for point process modeling. We apply our model to the predictive maintenance problem using a log dataset by more than 1000 ATMs from a global bank headquartered in North America." ] }
1908.01623
2966715342
Temporal point processes are widely used for sequential data modeling. In this paper, we focus on the problem of modeling sequential event propagation in a graph, such as retweeting by social network users or news transmission between websites. Given a collection of event propagation sequences, conventional point process models consider only the event history, i.e., they embed the event history into a vector but ignore the latent graph structure. We propose a Graph Biased Temporal Point Process (GBTPP) that leverages structural information from graph representation learning, where the direct influence between nodes and the indirect influence from the event history are modeled separately. Moreover, the learned node embedding vector is also integrated into the embedded event history as side information. Experiments on a synthetic dataset and two real-world datasets show the efficacy of our model compared to conventional methods and the state of the art.
One obvious limitation of the above TPP models is that they assume all samples obey a single parametric form, which is too idealistic for real-world data. By contrast, recurrent neural network (RNN) based models @cite_4 @cite_17 @cite_22 are devised to learn point processes from data. In these works, RNNs and their variants, e.g. long short-term memory (LSTM) networks, are used to model the conditional intensity function over time. More recently, attention mechanisms have been introduced to improve the interpretability of the neural model @cite_18 .
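As an illustration of the RNN-based approach, the following PyTorch sketch embeds each event (its mark and the inter-event gap), summarizes the history with a GRU, and parameterizes the conditional intensity as an exponential function of the hidden state and the elapsed time, in the spirit of RMTPP-style models. The layer sizes and the exact output parameterization are assumptions chosen for illustration, not a faithful reproduction of any of the cited architectures.

```python
import torch
import torch.nn as nn

class RNNIntensitySketch(nn.Module):
    """Minimal RNN-based point process: history -> hidden state -> intensity."""
    def __init__(self, num_types, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.type_embed = nn.Embedding(num_types, embed_dim)
        self.rnn = nn.GRUCell(embed_dim + 1, hidden_dim)   # input: [mark embedding, inter-event gap]
        self.intensity_head = nn.Linear(hidden_dim, 1)      # history influence on the log-intensity
        self.type_head = nn.Linear(hidden_dim, num_types)   # logits for the next event's mark
        self.w = nn.Parameter(torch.tensor(0.1))            # influence of the elapsed time
        self.b = nn.Parameter(torch.tensor(0.0))            # base (background) log-rate

    def forward(self, types, gaps):
        """types: (seq_len,) long tensor of marks; gaps: (seq_len,) inter-event times."""
        h = torch.zeros(1, self.rnn.hidden_size)
        for k in range(types.shape[0]):
            x = torch.cat([self.type_embed(types[k:k+1]),
                           gaps[k:k+1].unsqueeze(1)], dim=1)
            h = self.rnn(x, h)                               # update the history summary
        return h

    def intensity(self, h, dt):
        """Conditional intensity at (last event time + dt), given history summary h."""
        return torch.exp(self.intensity_head(h) + self.w * dt + self.b)

# Toy usage: a sequence of 4 events with 3 possible marks.
model = RNNIntensitySketch(num_types=3)
h = model(torch.tensor([0, 2, 1, 0]), torch.tensor([0.0, 0.4, 1.1, 0.3]))
print(model.intensity(h, dt=torch.tensor(0.5)))              # scalar intensity value
print(model.type_head(h).softmax(dim=1))                     # next-mark distribution
```

Training such a model would maximize the event log-likelihood (log-intensity at each observed event minus the integrated intensity), typically by stochastic gradient descent over many sequences.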
{ "cite_N": [ "@cite_18", "@cite_4", "@cite_22", "@cite_17" ], "mid": [ "2743945814", "2890832667", "2274880506", "1951216520" ], "abstract": [ "Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on hidden-to-hidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.", "Recurrent neural networks (RNNs) such as long short-term memory and gated recurrent units are pivotal building blocks across a broad spectrum of sequence modeling problems. This paper proposes a recurrently controlled recurrent network (RCRN) for expressive and powerful sequence encoding. More concretely, the key idea behind our approach is to learn the recurrent gating functions using recurrent networks. Our architecture is split into two components - a controller cell and a listener cell whereby the recurrent controller actively influences the compositionality of the listener cell. We conduct extensive experiments on a myriad of tasks in the NLP domain such as sentiment analysis (SST, IMDb, Amazon reviews, etc.), question classification (TREC), entailment classification (SNLI, SciTail), answer selection (WikiQA, TrecQA) and reading comprehension (NarrativeQA). Across all 26 datasets, our results demonstrate that RCRN not only consistently outperforms BiLSTMs but also stacked BiLSTMs, suggesting that our controller architecture might be a suitable replacement for the widely adopted stacked architecture. Additionally, RCRN achieves state-of-the-art results on several well-established datasets.", "Recurrent neural network (RNN) has been broadly applied to natural language process (NLP) problems. This kind of neural network is designed for modeling sequential data and has been testified to be quite efficient in sequential tagging tasks. In this paper, we propose to use bi-directional RNN with long short-term memory (LSTM) units for Chinese word segmentation, which is a crucial task for modeling Chinese sentences and articles. Classical methods focus on designing and combining hand-craft features from context, whereas bi-directional LSTM network (BLSTM) does not need any prior knowledge or pre-designing, and is expert in creating hierarchical feature representation of contextual information from both directions. 
Experiment result shows that our approach gets state-of-the-art performance in word segmentation on both traditional Chinese datasets and simplified Chinese datasets.", "Recurrent Neural Networks (RNNs), and specifically a variant with Long Short-Term Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. However, while LSTMs provide exceptional results in practice, the source of their performance and their limitations remain rather poorly understood. Using character-level language models as an interpretable testbed, we aim to bridge this gap by providing an analysis of their representations, predictions and error types. In particular, our experiments reveal the existence of interpretable cells that keep track of long-range dependencies such as line lengths, quotes and brackets. Moreover, our comparative analysis with finite horizon n-gram models traces the source of the LSTM improvements to long-range structural dependencies. Finally, we provide analysis of the remaining errors and suggests areas for further study." ] }
1908.01623
2966715342
Temporal point processes are widely used for sequential data modeling. In this paper, we focus on the problem of modeling sequential event propagation in a graph, such as retweeting by social network users or news transmission between websites. Given a collection of event propagation sequences, conventional point process models consider only the event history, i.e., they embed the event history into a vector but ignore the latent graph structure. We propose a Graph Biased Temporal Point Process (GBTPP) that leverages structural information from graph representation learning, where the direct influence between nodes and the indirect influence from the event history are modeled separately. Moreover, the learned node embedding vector is also integrated into the embedded event history as side information. Experiments on a synthetic dataset and two real-world datasets show the efficacy of our model compared to conventional methods and the state of the art.
When dealing with event propagation sequences, a major limitation of these existing studies is that the structural information of the latent graph @math is not utilized. Conventional TPP models, including the state-of-the-art method in @cite_4 , treat event propagation as general event sequence modeling and take the input @math , while our GBTPP model leverages the structural information and node proximity of the graph @math and takes the input @math , where @math is the node embedding vector obtained by a graph representation learning method for @math .
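The sketch below illustrates the input construction described here: pre-trained node embeddings (from any graph representation learning method, assumed precomputed) enter the history RNN alongside the inter-event gaps, and the most recent node's embedding is reused as side information when scoring candidate next nodes. It is a simplified illustration of the idea, with assumed layer sizes and a single propagation path, not the paper's exact GBTPP architecture.

```python
import torch
import torch.nn as nn

class GraphBiasedSketch(nn.Module):
    """Sketch of a graph-aware event propagation model: node embeddings are fed
    to the history RNN and reused as side information for next-node prediction."""
    def __init__(self, node_embeddings, hidden_dim=32):
        super().__init__()
        # node_embeddings: (num_nodes, emb_dim) tensor from e.g. DeepWalk/node2vec,
        # assumed to be learned beforehand and kept frozen here.
        self.node_emb = nn.Embedding.from_pretrained(node_embeddings, freeze=True)
        emb_dim = node_embeddings.shape[1]
        self.rnn = nn.GRU(emb_dim + 1, hidden_dim, batch_first=True)
        # Side information: concatenate the history summary with the last node's embedding.
        self.next_node = nn.Linear(hidden_dim + emb_dim, node_embeddings.shape[0])

    def forward(self, nodes, times):
        """nodes: (seq_len,) node ids of one propagation path; times: (seq_len,) timestamps."""
        gaps = torch.cat([times[:1], times[1:] - times[:-1]])        # inter-event gaps
        x = torch.cat([self.node_emb(nodes), gaps.unsqueeze(1)], dim=1).unsqueeze(0)
        _, h = self.rnn(x)                                           # embedded event history
        side = self.node_emb(nodes[-1:])                             # embedding of the current node
        return self.next_node(torch.cat([h[-1], side], dim=1))       # logits over candidate next nodes

# Toy usage: 5 nodes with random 8-dimensional embeddings, one path of 3 events.
emb = torch.randn(5, 8)
model = GraphBiasedSketch(emb)
logits = model(torch.tensor([0, 3, 2]), torch.tensor([0.0, 0.7, 1.5]))
print(logits.shape)  # torch.Size([1, 5])
```

The design choice being illustrated is the separation of the two influence channels: the RNN hidden state carries the indirect influence of the whole history, while the current node's embedding carries the direct, graph-structural influence on which node is likely to be activated next.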
{ "cite_N": [ "@cite_4" ], "mid": [ "2169415915" ], "abstract": [ "Important inference problems in statistical physics, computer vision, error-correcting coding theory, and artificial intelligence can all be reformulated as the computation of marginal probabilities on factor graphs. The belief propagation (BP) algorithm is an efficient way to solve these problems that is exact when the factor graph is a tree, but only approximate when the factor graph has cycles. We show that BP fixed points correspond to the stationary points of the Bethe approximation of the free energy for a factor graph. We explain how to obtain region-based free energy approximations that improve the Bethe approximation, and corresponding generalized belief propagation (GBP) algorithms. We emphasize the conditions a free energy approximation must satisfy in order to be a \"valid\" or \"maxent-normal\" approximation. We describe the relationship between four different methods that can be used to generate valid approximations: the \"Bethe method\", the \"junction graph method\", the \"cluster variation method\", and the \"region graph method\". Finally, we explain how to tell whether a region-based approximation, and its corresponding GBP algorithm, is likely to be accurate, and describe empirical results showing that GBP can significantly outperform BP." ] }