aid | mid | abstract | related_work | ref_abstract
---|---|---|---|---
1901.06268 | 2908914784 | Biological data are extremely diverse, complex, but also quite sparse. Recent developments in deep learning methods offer new possibilities for the analysis of complex data. However, it is easy to obtain a deep learning model that seems to have good results but is in fact either overfitting the training data or the validation data. In particular, overfitting the validation data, called "information leak", is almost never addressed in papers proposing deep learning models to predict protein-protein interactions (PPI). In this work, we compare two carefully designed deep learning models and show pitfalls to avoid when predicting PPIs through machine learning methods. Our best model accurately predicts more than 78% of human PPIs, under very strict conditions for both training and testing. The methodology we propose here allows us to have strong confidence in the ability of a model to scale up to larger datasets. This would allow sharper models when larger datasets become available, rather than current models prone to information leaks. Our solid methodological foundations should be applicable to more organisms and whole-proteome network predictions. | The authors of @cite_9 use stacked auto-encoders to extract features from protein sequences. The classification predicting protein-protein interaction is then done by directly linking the output of the last auto-encoder to a softmax classifier. As inputs, they convert the sequences into fixed-size Boolean vectors, one per sequence, encoding the presence or absence of 3-grams of amino acids, i.e., each possible combination of 3 amino acids. They trained their model with 5-fold or 10-fold cross-validation, depending on the dataset. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2133138357"
],
"abstract": [
"Protein-protein interactions play a key role in many biological systems. High-through- put methods can directly detect the set of interact- ing proteins in yeast, but the results are often incomplete and exhibit high false-positive and false- negative rates. Recently, many different research groups independently suggested using supervised learning methods to integrate direct and indirect biological data sources for the protein interaction prediction task. However, the data sources, ap- proaches, and implementations varied. Further- more, the protein interaction prediction task itself can be subdivided into prediction of (1) physical interaction, (2) co-complex relationship, and (3) path- way co-membership. To investigate systematically the utility of different data sources and the way the data is encoded as features for predicting each of these types of protein interactions, we assembled a large set of biological features and varied their encoding for use in each of the three prediction tasks. Six different classifiers were used to assess the accuracy in predicting interactions, Random Forest (RF), RF similarity-based k-Nearest-Neigh- bor, Naive Bayes, Decision Tree, Logistic Regres- sion, and Support Vector Machine. For all classifi- ers, the three prediction tasks had different success rates, and co-complex prediction appears to be an easier task than the other two. Independently of prediction task, however, the RF classifier consis- tently ranked as one of the top two classifiers for all combinations of feature sets. Therefore, we used this classifier to study the importance of different biological datasets. First, we used the splitting func- tion of the RF tree structure, the Gini index, to estimate feature importance. Second, we deter- mined classification accuracy when only the top- ranking features were used as an input in the classifier. We find that the importance of different features depends on the specific prediction task and the way they are encoded. Strikingly, gene expres- sion is consistently the most important feature for all three prediction tasks, while the protein interac- tions identified using the yeast-2-hybrid system were not among the top-ranking features under any con-"
]
} |
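As a concrete illustration of the 3-gram encoding described in this row, the sketch below builds the fixed-size Boolean presence vector for a protein sequence. The 20-letter alphabet, the handling of unknown residues and the toy sequences are assumptions for illustration only, not details taken from the cited work.

```python
from itertools import product

import numpy as np

# Standard 20-amino-acid alphabet (an assumption; the cited work may treat
# non-standard residues differently).
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
# Index every possible 3-gram: 20^3 = 8000 Boolean features per sequence.
TRIGRAM_INDEX = {"".join(t): i for i, t in enumerate(product(AMINO_ACIDS, repeat=3))}

def trigram_presence_vector(sequence: str) -> np.ndarray:
    """Encode a protein sequence as a fixed-size Boolean vector marking
    which 3-grams of amino acids occur at least once."""
    vec = np.zeros(len(TRIGRAM_INDEX), dtype=np.bool_)
    for i in range(len(sequence) - 2):
        idx = TRIGRAM_INDEX.get(sequence[i:i + 3])
        if idx is not None:          # skip 3-grams containing unknown residues
            vec[idx] = True
    return vec

# Two toy sequences become two 8000-dimensional inputs for the
# stacked auto-encoder + softmax pipeline.
x_a = trigram_presence_vector("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
x_b = trigram_presence_vector("MENSDSNDKGSDQSAAQRRSQMDRLDREEAFYQ")
print(x_a.shape, int(x_a.sum()), int(x_b.sum()))
```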
1901.06268 | 2908914784 | Biological data are extremely diverse, complex, but also quite sparse. Recent developments in deep learning methods offer new possibilities for the analysis of complex data. However, it is easy to obtain a deep learning model that seems to have good results but is in fact either overfitting the training data or the validation data. In particular, overfitting the validation data, called "information leak", is almost never addressed in papers proposing deep learning models to predict protein-protein interactions (PPI). In this work, we compare two carefully designed deep learning models and show pitfalls to avoid when predicting PPIs through machine learning methods. Our best model accurately predicts more than 78% of human PPIs, under very strict conditions for both training and testing. The methodology we propose here allows us to have strong confidence in the ability of a model to scale up to larger datasets. This would allow sharper models when larger datasets become available, rather than current models prone to information leaks. Our solid methodological foundations should be applicable to more organisms and whole-proteome network predictions. | The authors of @cite_0 proposed a plain fully connected neural network, similar to our first model in this paper but significantly bigger, with layers of 512, 256 and 128 units for what amounts to feature extraction, and 128 units for the head of their network. However, they do not feed protein sequences to the network but rather a list of hand-extracted features, such as sequence-order descriptors and composition-transition-distribution descriptors. They used a hold-out validation set together with a test set, but then switched to 5-fold cross-validation when comparing their model with others on some other datasets. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2139582206"
],
"abstract": [
"The effect of training a neural network secondary structure prediction algorithm with different types of multiple sequence alignment profiles derived from the same sequences, is shown to provide a range of accuracy from 70.5 to 76.4 . The best accuracy of 76.4 (standard deviation 8.4 ), is 3.1 (Q3) and 4.4 (SOV2) better than the PHD algorithm run on the same set of 406 sequence non-redundant proteins that were not used to train either method. Residues predicted by the new method with a confidence value of 5 or greater, have an average Q3 accuracy of 84 , and cover 68 of the residues. Relative solvent accessibility based on a two state model, for 25, 5, and 0 accessibility are predicted at 76.2, 79.8, and 86.6 accuracy respectively. The source of the improvements obtained from training with different representations of the same alignment data are described in detail. The new Jnet prediction method resulting from this study is available in the Jpred secondary structure prediction server, and as a stand-alone computer program from: http: barton.ebi.ac.uk . Proteins 2000;40:502–511. © 2000 Wiley-Liss, Inc."
]
} |
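A minimal Keras sketch of a fully connected architecture with the layer widths quoted above follows. The input descriptor dimensionality, activations, optimizer and loss are assumptions, since the row only specifies the layer sizes.

```python
import tensorflow as tf

# Dimensionality of the concatenated descriptor vector for a protein pair
# (an assumption; the cited work uses sequence-order and CTD descriptors).
N_FEATURES = 1164

model = tf.keras.Sequential([
    # "Feature extraction" stack with the layer widths reported in the text.
    tf.keras.layers.Dense(512, activation="relu", input_shape=(N_FEATURES,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    # 128-unit head, followed by a binary interaction output.
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```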
1901.06268 | 2908914784 | Biological data are extremely diverse, complex, but also quite sparse. Recent developments in deep learning methods offer new possibilities for the analysis of complex data. However, it is easy to obtain a deep learning model that seems to have good results but is in fact either overfitting the training data or the validation data. In particular, overfitting the validation data, called "information leak", is almost never addressed in papers proposing deep learning models to predict protein-protein interactions (PPI). In this work, we compare two carefully designed deep learning models and show pitfalls to avoid when predicting PPIs through machine learning methods. Our best model accurately predicts more than 78% of human PPIs, under very strict conditions for both training and testing. The methodology we propose here allows us to have strong confidence in the ability of a model to scale up to larger datasets. This would allow sharper models when larger datasets become available, rather than current models prone to information leaks. Our solid methodological foundations should be applicable to more organisms and whole-proteome network predictions. | The authors of @cite_17 use a Deep Polynomial Network on hand-extracted features, such as amino acid mutation rates or hydrophobic properties of proteins, to perform their classification. Thus, they do not use the chain of amino acid residues as input. They based the learning process on 5-fold cross-validation without a test set. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2126486632"
],
"abstract": [
"A variety of functionally important protein properties, such as secondary structure, transmembrane topology and solvent accessibility, can be encoded as a labeling of amino acids. Indeed, the prediction of such properties from the primary amino acid sequence is one of the core projects of computational biology. Accordingly, a panoply of approaches have been developed for predicting such properties; however, most such approaches focus on solving a single task at a time. Motivated by recent, successful work in natural language processing, we propose to use multitask learning to train a single, joint model that exploits the dependencies among these various labeling tasks. We describe a deep neural network architecture that, given a protein sequence, outputs a host of predicted local properties, including secondary structure, solvent accessibility, transmembrane topology, signal peptides and DNA-binding residues. The network is trained jointly on all these tasks in a supervised fashion, augmented with a novel form of semi-supervised learning in which the model is trained to distinguish between local patterns from natural and synthetic protein sequences. The task-independent architecture of the network obviates the need for task-specific feature engineering. We demonstrate that, for all of the tasks that we considered, our approach leads to statistically significant improvements in performance, relative to a single task neural network approach, and that the resulting model achieves state-of-the-art performance."
]
} |
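Because the evaluation protocol (cross-validation with or without a separate test set) is central to this section's argument about information leaks, the hedged sketch below shows a generic 5-fold cross-validation combined with a held-out test set. The classifier and synthetic features are placeholders, not the cited work's setup (which used no test set).

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split
from sklearn.svm import SVC  # stand-in classifier for illustration

# Toy feature matrix / labels standing in for hand-crafted protein-pair features.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 50))
y = rng.integers(0, 2, size=600)

# Hold out a test set FIRST, so that model selection on the folds cannot
# leak information into the final evaluation.
X_dev, X_test, y_dev, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
fold_scores = []
for train_idx, val_idx in cv.split(X_dev, y_dev):
    clf = SVC().fit(X_dev[train_idx], y_dev[train_idx])
    fold_scores.append(clf.score(X_dev[val_idx], y_dev[val_idx]))

print("5-fold validation accuracy:", np.mean(fold_scores))
print("held-out test accuracy:", SVC().fit(X_dev, y_dev).score(X_test, y_test))
```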
1901.06268 | 2908914784 | Biological data are extremely diverse, complex, but also quite sparse. Recent developments in deep learning methods offer new possibilities for the analysis of complex data. However, it is easy to obtain a deep learning model that seems to have good results but is in fact either overfitting the training data or the validation data. In particular, overfitting the validation data, called "information leak", is almost never addressed in papers proposing deep learning models to predict protein-protein interactions (PPI). In this work, we compare two carefully designed deep learning models and show pitfalls to avoid when predicting PPIs through machine learning methods. Our best model accurately predicts more than 78% of human PPIs, under very strict conditions for both training and testing. The methodology we propose here allows us to have strong confidence in the ability of a model to scale up to larger datasets. This would allow sharper models when larger datasets become available, rather than current models prone to information leaks. Our solid methodological foundations should be applicable to more organisms and whole-proteome network predictions. | The authors of @cite_24 present a model composed of an embedding layer, three convolutions and an LSTM layer for feature extraction from protein sequences, before concatenating the LSTM outputs of both proteins and performing classification with a fully connected layer linked to a sigmoid classifier. The architecture of our recurrent model is thus close to theirs, modulo the embedding and hyper-parameters. It is important to note that they do not mention applying any regularization method when training their network. Their inputs are also sequence-based and they apply 5-fold cross-validation during training, with a hold-out test set. Interestingly, they also zero-pad their inputs, so they must mask these zeros in the embedding layer for it to learn anything (otherwise this layer would interpret zeros as real data, and their large number would prevent the layer from learning any good representation of the input). They use convolution layers after the embedding and, like this paper, they use Keras as an API front-end to program their model. However, the current implementation of convolution layers in Keras does not accept zero-masked data, and the authors do not explain in their paper how they got around this technical issue. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2885583144"
],
"abstract": [
"Machine learning based predictions of protein–protein interactions (PPIs) could provide valuable insights into protein functions, disease occurrence, and therapy design on a large scale. The intensive feature engineering in most of these methods makes the prediction task more tedious and trivial. The emerging deep learning technology enabling automatic feature engineering is gaining great success in various fields. However, the over-fitting and generalization of its models are not yet well investigated in most scenarios. Here, we present a deep neural network framework (DNN-PPI) for predicting PPIs using features learned automatically only from protein primary sequences. Within the framework, the sequences of two interacting proteins are sequentially fed into the encoding, embedding, convolution neural network (CNN), and long short-term memory (LSTM) neural network layers. Then, a concatenated vector of the two outputs from the previous layer is wired as the input of the fully connected neural network. Finally, the Adam optimizer is applied to learn the network weights in a back-propagation fashion. The different types of features, including semantic associations between amino acids, position-related sequence segments (motif), and their long- and short-term dependencies, are captured in the embedding, CNN and LSTM layers, respectively. When the model was trained on Pan’s human PPI dataset, it achieved a prediction accuracy of 98.78 at the Matthew’s correlation coefficient (MCC) of 97.57 . The prediction accuracies for six external datasets ranged from 92.80 to 97.89 , making them superior to those achieved with previous methods. When performed on Escherichia coli, Drosophila, and Caenorhabditis elegans datasets, DNN-PPI obtained prediction accuracies of 95.949 , 98.389 , and 98.669 , respectively. The performances in cross-species testing among the four species above coincided in their evolutionary distances. However, when testing Mus Musculus using the models from those species, they all obtained prediction accuracies of over 92.43 , which is difficult to achieve and worthy of note for further study. These results suggest that DNN-PPI has remarkable generalization and is a promising tool for identifying protein interactions."
]
} |
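The embedding-convolution-LSTM pipeline described above can be sketched as follows in Keras. Filter counts, kernel sizes, embedding width, padded length and the shared branch are assumptions for illustration, and the comment notes the zero-masking issue raised in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 21      # 20 amino acids + index 0 reserved for zero padding
MAX_LEN = 1000       # padded sequence length (assumption)

def sequence_branch():
    """Embedding -> three convolutions -> LSTM, applied to one protein."""
    inp = layers.Input(shape=(MAX_LEN,), dtype="int32")
    # NOTE: with mask_zero=True, the mask produced by the Embedding layer is,
    # depending on the Keras version, rejected or ignored by Conv1D -- the
    # technical issue mentioned in the row above. This sketch therefore leaves
    # masking out and lets the padding index embed like any other token.
    x = layers.Embedding(VOCAB_SIZE, 128)(inp)
    for filters in (64, 64, 64):
        x = layers.Conv1D(filters, kernel_size=3, activation="relu")(x)
        x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.LSTM(64)(x)
    return tf.keras.Model(inp, x)

branch = sequence_branch()   # shared weights for both proteins (assumption)
in_a = layers.Input(shape=(MAX_LEN,), dtype="int32")
in_b = layers.Input(shape=(MAX_LEN,), dtype="int32")
merged = layers.Concatenate()([branch(in_a), branch(in_b)])
hidden = layers.Dense(128, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid")(hidden)

model = tf.keras.Model([in_a, in_b], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```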
1901.06268 | 2908914784 | Biological data are extremely diverse, complex, but also quite sparse. Recent developments in deep learning methods offer new possibilities for the analysis of complex data. However, it is easy to obtain a deep learning model that seems to have good results but is in fact either overfitting the training data or the validation data. In particular, overfitting the validation data, called "information leak", is almost never addressed in papers proposing deep learning models to predict protein-protein interactions (PPI). In this work, we compare two carefully designed deep learning models and show pitfalls to avoid when predicting PPIs through machine learning methods. Our best model accurately predicts more than 78% of human PPIs, under very strict conditions for both training and testing. The methodology we propose here allows us to have strong confidence in the ability of a model to scale up to larger datasets. This would allow sharper models when larger datasets become available, rather than current models prone to information leaks. Our solid methodological foundations should be applicable to more organisms and whole-proteome network predictions. | Finally, @cite_10 present a fully connected model regularized by dropout. Like @cite_0, they use composition-transition-distribution descriptors as features. They apply 5-fold cross-validation and have no separate test set. | {
"cite_N": [
"@cite_0",
"@cite_10"
],
"mid": [
"2885062394",
"1034159276"
],
"abstract": [
"Multi-layer neural networks have lead to remarkable performance on many kinds of benchmark tasks in text, speech and image processing. Nonlinear parameter estimation in hierarchical models is known to be subject to overfitting and misspecification. One approach to these estimation and related problems (local minima, colinearity, feature discovery etc.) is called Dropout (Hinton, et al 2012, 2016). The Dropout algorithm removes hidden units according to a Bernoulli random variable with probability @math prior to each update, creating random \"shocks\" to the network that are averaged over updates. In this paper we will show that Dropout is a special case of a more general model published originally in 1990 called the Stochastic Delta Rule, or SDR (Hanson, 1990). SDR redefines each weight in the network as a random variable with mean @math and standard deviation @math . Each weight random variable is sampled on each forward activation, consequently creating an exponential number of potential networks with shared weights. Both parameters are updated according to prediction error, thus resulting in weight noise injections that reflect a local history of prediction error and local model averaging. SDR therefore implements a more sensitive local gradient-dependent simulated annealing per weight converging in the limit to a Bayes optimal network. Tests on standard benchmarks (CIFAR) using a modified version of DenseNet shows the SDR outperforms standard Dropout in test error by approx. @math with DenseNet-BC 250 on CIFAR-100 and approx. @math in smaller networks. We also show that SDR reaches the same accuracy that Dropout attains in 100 epochs in as few as 35 epochs.",
"Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. Elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at pooling stage."
]
} |
1901.06268 | 2908914784 | Biological data are extremely diverse, complex, but also quite sparse. Recent developments in deep learning methods offer new possibilities for the analysis of complex data. However, it is easy to obtain a deep learning model that seems to have good results but is in fact either overfitting the training data or the validation data. In particular, overfitting the validation data, called "information leak", is almost never addressed in papers proposing deep learning models to predict protein-protein interactions (PPI). In this work, we compare two carefully designed deep learning models and show pitfalls to avoid when predicting PPIs through machine learning methods. Our best model accurately predicts more than 78% of human PPIs, under very strict conditions for both training and testing. The methodology we propose here allows us to have strong confidence in the ability of a model to scale up to larger datasets. This would allow sharper models when larger datasets become available, rather than current models prone to information leaks. Our solid methodological foundations should be applicable to more organisms and whole-proteome network predictions. | We can also mention the work of @cite_8, proposing a multi-layered LSTM model to predict interface residue pair interactions, thus at a finer level than predicting the interaction between two proteins. This is a direction in which we would like to extend our results. | {
"cite_N": [
"@cite_8"
],
"mid": [
"2614995786"
],
"abstract": [
"Motivation: Proteins usually fulfill their biological functions by interacting with other proteins. Although some methods have been developed to predict the binding sites of a monomer protein, these are not sufficient for prediction of the interaction between two monomer proteins. The correct prediction of interface residue pairs from two monomer proteins is still an open question and has great significance for practical experimental applications in the life sciences. We hope to build a method for the prediction of interface residue pairs that is suitable for those applications. Results: Here, we developed a novel deep network architecture called the multi-layered Long-Short Term Memory networks (LSTMs) approach for the prediction of protein interface residue pairs. Firstly, we created three new descriptions and used other six worked characterizations to describe an amino acid, then we employed these features to discriminate between interface residue pairs and non-interface residue pairs. Secondly, we used two thresholds to select residue pairs that are more likely to be interface residue pairs. Furthermore, this step increases the proportion of interface residue pairs and reduces the influence of imbalanced data. Thirdly, we built deep network architectures based on Long-Short Term Memory networks algorithm to organize and refine the prediction of interface residue pairs by employing features mentioned above. We trained the deep networks on dimers in the unbound state in the international Protein-protein Docking Benchmark version 3.0. The updated data sets in the versions 4.0 and 5.0 were used as the validation set and test set respectively. For our best model, the accuracy rate was over 62 when we chose the top 0.2 pairs of every dimer in the test set as predictions, which will be very helpful for the understanding of protein-protein interaction mechanisms and for guidance in biological experiments."
]
} |
1901.06263 | 2911010497 | Considering the advances in building monitoring and control through networks of interconnected devices, effective handling of the associated rich data streams is becoming an important challenge. In many situations the application of conventional system identification or approximate grey-box models, partly theoretic and partly data-driven, is either unfeasible or unsuitable. The paper discusses and illustrates an application of black-box modelling achieved using data mining techniques for the purpose of smart building ventilation subsystem control. We present the implementation and evaluation of a data mining methodology on data collected over one year of operation. The case study is carried out on four air handling units of a modern campus building for preliminary decision support for facility managers. The data processing and learning framework is based on two steps: raw data streams are compressed using the Symbolic Aggregate Approximation method, and the resulting segments are then input into a Support Vector Machine algorithm. The results are useful for deriving the behaviour of each piece of equipment in its various modes of operation and can be built upon for fault detection or energy efficiency applications. Challenges related to online operation within a commercial Building Management System are also discussed, as the approach shows promise for deployment. | A paper focused on energy-efficiency improvements leveraging available building-level data for data mining is @cite_7. The authors list the main predictive tasks in which data mining of large quantities of measurements and contextual information is relevant. These cover building energy demand prediction, building occupancy and occupant behaviour, and fault detection and diagnosis (FDD) for building systems. @cite_17 and @cite_5 further argue, through broader studies, for the relevance of data-driven approaches in timely building energy efficiency applications. | {
"cite_N": [
"@cite_5",
"@cite_7",
"@cite_17"
],
"mid": [
"2770442362",
"2754029504",
"2364005411"
],
"abstract": [
"Modern, densely instrumented, smart buildings generate large amounts of raw data. This poses significant challenges from both the data management perspective as well as leveraging the associated information for enabling advanced energy management, fault detection and control strategies. Networks of intelligent sensors, controllers and actuators currently allow fine grained monitoring of the building state but shift the challenge to exploiting these large quantities of data in an efficient manner. We discuss methods for black-box modelling of input-output data stemming from buildings. Using exploratory analysis it is argued that data mining inspired approaches allow for fast and effective assessment of building state and associated predictions. These are illustrated using a case study on real data collected from commercial-grade air handling units of a research building. Conclusions point out to the feasibility of this approach as well as potential for data mining techniques in smart building control applications.",
"Abstract Energy is the lifeblood of modern societies. In the past decades, the world's energy consumption and associated CO2 emissions increased rapidly due to the increases in population and comfort demands of people. Building energy consumption prediction is essential for energy planning, management, and conservation. Data-driven models provide a practical approach to energy consumption prediction. This paper offers a review of the studies that developed data-driven building energy consumption prediction models, with a particular focus on reviewing the scopes of prediction, the data properties and the data preprocessing methods used, the machine learning algorithms utilized for prediction, and the performance measures used for evaluation. Based on this review, existing research gaps are identified and future research directions in the area of data-driven building energy consumption prediction are highlighted.",
"Abstract This work presents how to proceed during the processing of all available data coming from smart buildings to generate models that predict their energy consumption. For this, we propose a methodology that includes the application of different intelligent data analysis techniques and algorithms that have already been applied successfully in related scenarios, and the selection of the best one depending on the value of the selected metric used for the evaluation. This result depends on the specific characteristics of the target building and the available data. Among the techniques applied to a reference building, Bayesian Regularized Neural Networks and Random Forest are selected because they provide the most accurate predictive results."
]
} |
1901.06263 | 2911010497 | Considering the advances in building monitoring and control through networks of interconnected devices, effective handling of the associated rich data streams is becoming an important challenge. In many situations the application of conventional system identification or approximate grey-box models, partly theoretic and partly data-driven, is either unfeasible or unsuitable. The paper discusses and illustrates an application of black-box modelling achieved using data mining techniques for the purpose of smart building ventilation subsystem control. We present the implementation and evaluation of a data mining methodology on data collected over one year of operation. The case study is carried out on four air handling units of a modern campus building for preliminary decision support for facility managers. The data processing and learning framework is based on two steps: raw data streams are compressed using the Symbolic Aggregate Approximation method, and the resulting segments are then input into a Support Vector Machine algorithm. The results are useful for deriving the behaviour of each piece of equipment in its various modes of operation and can be built upon for fault detection or energy efficiency applications. Challenges related to online operation within a commercial Building Management System are also discussed, as the approach shows promise for deployment. | Deployment of distributed sensor networks for finer-grained spatio-temporal monitoring of indoor conditions is performed by @cite_13. The authors argue that statistical modelling of the indoor environment as non-parametric Gaussian processes can lead to reliable information that is fed back to the building management system in order to improve HVAC control. Wireless sensors can be deployed at limited cost compared to conventional wired sensors, and the monitoring architecture can be adjusted dynamically in order to best capture field-level information. In @cite_24 a thermal comfort application using collected HVAC IoT data is presented. Building-level benchmarking data sets @cite_22 are highly important to assess algorithm performance and produce reproducible outcomes. The authors present a large database of one year of data from 507 non-residential building energy meters, mainly from university campuses. Model-based predictive control for maintaining thermal comfort in buildings is applied in @cite_18. The optimal comfort index is achieved through a cost function that depends on both occupant comfort and energy cost. | {
"cite_N": [
"@cite_24",
"@cite_18",
"@cite_13",
"@cite_22"
],
"mid": [
"2743217645",
"2139644003",
"2001308229",
"2144261701"
],
"abstract": [
"The paper addresses the problem of efficiently monitoring environmental fields in a smart building by the use of a network of wireless noisy sensors that take discretely-predefined measurements at their locations through time. It is proposed that the indoor environmental fields are statistically modeled by spatio-temporal non-parametric Gaussian processes. The proposed models are able to effectively predict and estimate the indoor climate parameters at any time and at any locations of interest, which can be utilized to create timely maps of indoor environments. More importantly, the monitoring results are practically crucial for building management systems to efficiently control energy consumption and maximally improve human comfort in the building. The proposed approach was implemented in a real tested space in a university building, where the obtained results are highly promising.",
"Accurate analytical expressions of delay and packet reception probabilities, and energy consumption of duty-cycled wireless sensor networks with random medium access control (MAC) are instrumental for the efficient design and optimization of these resource-constrained networks. Given a clustered network topology with unslotted IEEE 802.15.4 and preamble sampling MAC, a novel approach to the modeling of the delay, reliability, and energy consumption is proposed. The challenging part in such a modeling is the random MAC and sleep policy of the receivers, which prevents to establish the exact time of data packet transmission. The analysis gives expressions as function of sleep time, listening time, traffic rate and MAC parameters. The analytical results are then used to optimize the duty cycle of the nodes and MAC protocol parameters. The approach provides a significant reduction of the energy consumption compared to existing solutions in the literature. Monte Carlo simulations by ns2 assess the validity of the analysis.",
"Increasingly many wireless sensor network deployments are using harvested environmental energy to extend system lifetime. Because the temporal profiles of such energy sources exhibit great variability due to dynamic weather patterns, an important problem is designing an adaptive duty-cycling mechanism that allows sensor nodes to maintain their power supply at sufficient levels (energy neutral operation) by adapting to changing environmental conditions. Existing techniques to address this problem are minimally adaptive and assume a priori knowledge of the energy profile. While such approaches are reasonable in environments that exhibit low variance, we find that it is highly inefficient in more variable scenarios. We introduce a new technique for solving this problem based on results from adaptive control theory and show that we achieve better performance than previous approaches on a broader class of energy source data sets. Additionally, we include a tunable mechanism for reducing the variance of the node's duty cycle over time, which is an important feature in tasks such as event monitoring. We obtain reductions in variance as great as two-thirds without compromising task performance or ability to maintain energy neutral operation.",
"Distributed processing through ad hoc and sensor networks is having a major impact on scale and applications of computing. The creation of new cyber-physical services based on wireless sensor devices relies heavily on how well communication protocols can be adapted and optimized to meet quality constraints under limited energy resources. The IEEE 802.15.4 medium access control protocol for wireless sensor networks can support energy efficient, reliable, and timely packet transmission by a parallel and distributed tuning of the medium access control parameters. Such a tuning is difficult, because simple and accurate models of the influence of these parameters on the probability of successful packet transmission, packet delay, and energy consumption are not available. Moreover, it is not clear how to adapt the parameters to the changes of the network and traffic regimes by algorithms that can run on resource-constrained devices. In this paper, a Markov chain is proposed to model these relations by simple expressions without giving up the accuracy. In contrast to previous work, the presence of limited number of retransmissions, acknowledgments, unsaturated traffic, packet size, and packet copying delay due to hardware limitations is accounted for. The model is then used to derive a distributed adaptive algorithm for minimizing the power consumption while guaranteeing a given successful packet reception probability and delay constraints in the packet transmission. The algorithm does not require any modification of the IEEE 802.15.4 medium access control and can be easily implemented on network devices. The algorithm has been experimentally implemented and evaluated on a testbed with off-the-shelf wireless sensor devices. Experimental results show that the analysis is accurate, that the proposed algorithm satisfies reliability and delay constraints, and that the approach reduces the energy consumption of the network under both stationary and transient conditions. Specifically, even if the number of devices and traffic configuration change sharply, the proposed parallel and distributed algorithm allows the system to operate close to its optimal state by estimating the busy channel and channel access probabilities. Furthermore, results indicate that the protocol reacts promptly to errors in the estimation of the number of devices and in the traffic load that can appear due to device mobility. It is also shown that the effect of imperfect channel and carrier sensing on system performance heavily depends on the traffic load and limited range of the protocol parameters."
]
} |
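To make the idea of non-parametric Gaussian-process modelling of indoor conditions concrete, the sketch below fits a GP to toy temperature readings. It is a purely temporal, single-sensor simplification of the spatio-temporal models used in the cited work; the kernel choice and synthetic data are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy data: noisy indoor-temperature readings over one day (hours as input).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 24, size=40))[:, None]
temp = 21 + 2 * np.sin(2 * np.pi * t.ravel() / 24) + rng.normal(0, 0.3, size=40)

# Non-parametric GP model: a smooth trend kernel plus a sensor-noise term.
kernel = RBF(length_scale=3.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, temp)

# Predict the field (with uncertainty) at unobserved times -- the kind of
# estimate a building management system could feed back into HVAC control.
t_query = np.linspace(0, 24, 97)[:, None]
mean, std = gp.predict(t_query, return_std=True)
print(mean[:4], std[:4])
```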
1901.06263 | 2911010497 | Considering the advances in building monitoring and control through networks of interconnected devices, effective handling of the associated rich data streams is becoming an important challenge. In many situations the application of conventional system identification or approximate grey-box models, partly theoretic and partly data-driven, is either unfeasible or unsuitable. The paper discusses and illustrates an application of black-box modelling achieved using data mining techniques for the purpose of smart building ventilation subsystem control. We present the implementation and evaluation of a data mining methodology on data collected over one year of operation. The case study is carried out on four air handling units of a modern campus building for preliminary decision support for facility managers. The data processing and learning framework is based on two steps: raw data streams are compressed using the Symbolic Aggregate Approximation method, and the resulting segments are then input into a Support Vector Machine algorithm. The results are useful for deriving the behaviour of each piece of equipment in its various modes of operation and can be built upon for fault detection or energy efficiency applications. Challenges related to online operation within a commercial Building Management System are also discussed, as the approach shows promise for deployment. | Compared to traditional model-based control (MBC), data-driven control (DDC) is an emerging field of study which accounts for the need to manage the data deluge produced by dense temporal and spatial monitoring of various systems. A broad survey of the specific nature of DDC and its comparison to MBC in various control structures is provided by @cite_0. Within this concept, the data mining and classification steps for prediction and assessment act mainly as a higher-level supervisor to field-level control loops, tuning control parameters and set-points and providing contextual information that contributes to improved robustness. A good reference application of DDC is the use of random forests of regression trees in @cite_16. In this case, multi-output regression trees are used to represent the system dynamics over the prediction horizon, and the control problem is solved in real time in closed loop with the physical plant. | {
"cite_N": [
"@cite_0",
"@cite_16"
],
"mid": [
"2040871222",
"2790404719"
],
"abstract": [
"This paper is a brief survey on the existing problems and challenges inherent in model-based control (MBC) theory, and some important issues in the analysis and design of data-driven control (DDC) methods are here reviewed and addressed. The necessity of data-driven control is discussed from the aspects of the history, the present, and the future of control theories and applications. The state of the art of the existing DDC methods and applications are presented with appropriate classifications and insights. The relationship between the MBC method and the DDC method, the differences among different DDC methods, and relevant topics in data-driven optimization and modeling are also highlighted. Finally, the perspective of DDC and associated research topics are briefly explored and discussed.",
"Model Predictive Control (MPC) plays an important role in optimizing operations of complex cyber-physical systems because of its ability to forecast system’s behavior and act under system level constraints. However, MPC requires reasonably accurate underlying models of the system. In many applications, such as building control for energy management, Demand Response, or peak power reduction, obtaining a high-fidelity physics-based model is cost and time prohibitive, thus limiting the widespread adoption of MPC. To this end, we propose a data-driven control algorithm for MPC that relies only on the historical data. We use multi-output regression trees to represent the system’s dynamics over multiple future time steps and formulate a finite receding horizon control problem that can be solved in real-time in closed-loop with the physical plant. We apply this algorithm to peak power reduction in buildings to optimally trade-off peak power reduction against thermal comfort without having to learn white grey box models of the systems dynamics."
]
} |
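A hedged sketch of the data-driven approach with multi-output regression trees follows: a random forest learns a multi-step-ahead prediction of the system response from historical data, which a controller could then query when evaluating candidate inputs. The feature layout, horizon length and synthetic dynamics are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy historical data: current state/inputs -> temperatures over the next
# H control steps (multi-output target), standing in for building dynamics.
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 6))   # e.g. zone temp, outdoor temp, setpoint, ...
H = 4                            # prediction horizon (assumption)
Y = np.stack(
    [X[:, 0] * 0.8 + X[:, 1] * 0.1 * (k + 1) + rng.normal(0, 0.05, 2000)
     for k in range(H)], axis=1)

# Random forest of multi-output regression trees approximating the dynamics
# over the horizon, as in the data-driven MPC approach described above.
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, Y)

# At run time the controller would query candidate inputs and pick the one
# whose predicted trajectory best trades off comfort against cost.
candidate = rng.normal(size=(1, 6))
print(forest.predict(candidate))  # predicted H-step trajectory
```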
1901.06263 | 2911010497 | Considering the advances in building monitoring and control through networks of interconnected devices, effective handling of the associated rich data streams is becoming an important challenge. In many situations the application of conventional system identification or approximate grey-box models, partly theoretic and partly data-driven, is either unfeasible or unsuitable. The paper discusses and illustrates an application of black-box modelling achieved using data mining techniques for the purpose of smart building ventilation subsystem control. We present the implementation and evaluation of a data mining methodology on data collected over one year of operation. The case study is carried out on four air handling units of a modern campus building for preliminary decision support for facility managers. The data processing and learning framework is based on two steps: raw data streams are compressed using the Symbolic Aggregate Approximation method, and the resulting segments are then input into a Support Vector Machine algorithm. The results are useful for deriving the behaviour of each piece of equipment in its various modes of operation and can be built upon for fault detection or energy efficiency applications. Challenges related to online operation within a commercial Building Management System are also discussed, as the approach shows promise for deployment. | Big data analytics for smart city electricity consumption is presented in @cite_28. The authors use computational intelligence algorithms to model the consumption of eight university buildings. The outcome consists of offline policies to optimise energy usage across the campus. In @cite_11 a different application is described, using decision trees for occupancy estimation in office buildings. Occupancy modelling and estimation is a critical task in smart buildings, as the occupancy level and its accurate forecasting directly impact the HVAC conditioning strategy of the building and help avoid wasteful control. Fault and anomaly detection with a rule-based system is described in @cite_25. The main contributions relate to building automated anomaly detection rules with regard to energy efficiency. This is achieved by combining data mining on historical data with expert information about energy efficiency. @cite_20 illustrate the results of the BRIDGE diagnosis strategy on a dedicated building sensor test bed. By considering sensor faults as data deviations, FDD can accurately detect abnormal conditions. FDD for ventilation subsystems is also covered by @cite_23 using a graph-based approach. | {
"cite_N": [
"@cite_28",
"@cite_23",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"2790486829",
"2475772748",
"2294864664",
"2073516667",
"2364005411"
],
"abstract": [
"New technologies such as sensor networks have been incorporated into the management of buildings for organizations and cities. Sensor networks have led to an exponential increase in the volume of data available in recent years, which can be used to extract consumption patterns for the purposes of energy and monetary savings. For this reason, new approaches and strategies are needed to analyze information in big data environments. This paper proposes a methodology to extract electric energy consumption patterns in big data time series, so that very valuable conclusions can be made for managers and governments. The methodology is based on the study of four clustering validity indices in their parallelized versions along with the application of a clustering technique. In particular, this work uses a voting system to choose an optimal number of clusters from the results of the indices, as well as the application of the distributed version of the k-means algorithm included in Apache Spark’s Machine Learning Library. The results, using electricity consumption for the years 2011–2017 for eight buildings of a public university, are presented and discussed. In addition, the performance of the proposed methodology is evaluated using synthetic big data, which cab represent thousands of buildings in a smart city. Finally, policies derived from the patterns discovered are proposed to optimize energy usage across the university campus.",
"A general approach is proposed to determine the common sensors that shall be used to estimate and classify the approximate number of people (within a range) in a room. The range is dynamic and depends on the maximum occupancy met in a training data set for instance. Means to estimate occupancy include motion detection, power consumption, CO2 concentration sensors, microphone or door window positions. The proposed approach is inspired by machine learning. It starts by determining the most useful measurements in calculating information gains. Then, estimation algorithms are proposed: they rely on decision tree learning algorithms because these yield decision rules readable by humans, which correspond to nested if-then-else rules, where thresholds can be adjusted depending on the living areas considered. In addition, the decision tree depth is limited in order to simplify the analysis of the tree rules. Finally, an economic analysis is carried out to evaluate the cost and the most relevant sensor sets, with cost and accuracy comparison for the estimation of occupancy. C45 and random forest algorithms have been applied to an office setting, with average estimation error of 0.19–0.18. Over-fitting issues and best sensor sets are discussed.",
"Automatic system to detect energy efficiency anomalies in smart buildings.Definition and testing of energy efficiency indicators to quantify energy savings.Knowledge extraction from data and HVAC experts through Data Mining techniques.In this study a full set of anomalous EE consumption patterns are detected.During test period more than 10 of day presented a kind of EE anomaly. The rapidly growing world energy use already has concerns over the exhaustion of energy resources and heavy environmental impacts. As a result of these concerns, a trend of green and smart cities has been increasing. To respond to this increasing trend of smart cities with buildings every time more complex, in this paper we have proposed a new method to solve energy inefficiencies detection problem in smart buildings. This solution is based on a rule-based system developed through data mining techniques and applying the knowledge of energy efficiency experts. A set of useful energy efficiency indicators is also proposed to detect anomalies. The data mining system is developed through the knowledge extracted by a full set of building sensors. So, the results of this process provide a set of rules that are used as a part of a decision support system for the optimisation of energy consumption and the detection of anomalies in smart buildings.",
"To smartly control the massive electrical appliances in buildings to save energy, the real-time on off states of the electrical appliances are critically required as the fundamental information. However, it is generally a very difficult and costly problem, because N appliances have 2N states and the appliances are massively in modern buildings. This paper propose a novel compressive sensing model for monitoring the massive appliances' states, in which the sparseness of on off switching events within a short observation interval is exploited. Based on such a temporal sparseness feature, a lightweight state tracking framework is proposed to track the on off states of N appliances by deploying only m smart meters on the power load tree, where m L N. Particularly, it firstly presents an online state decoding algorithm based on a Hidden Markov Model of sparse state transitions. It reduces the traditional O(t22N) complexity of Viterbi decoding to polynomial complexity of O(tnU+1) where n",
"Abstract This work presents how to proceed during the processing of all available data coming from smart buildings to generate models that predict their energy consumption. For this, we propose a methodology that includes the application of different intelligent data analysis techniques and algorithms that have already been applied successfully in related scenarios, and the selection of the best one depending on the value of the selected metric used for the evaluation. This result depends on the specific characteristics of the target building and the available data. Among the techniques applied to a reference building, Bayesian Regularized Neural Networks and Random Forest are selected because they provide the most accurate predictive results."
]
} |
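The decision-tree occupancy estimation mentioned above can be illustrated with the sketch below, which learns human-readable if-then-else rules from toy CO2, plug-power and motion features. The feature set, occupancy ranges and synthetic data are assumptions, not the cited study's sensors or thresholds.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy sensor features: [CO2 ppm, plug power W, motion count] with occupancy
# ranges as class labels (0: empty, 1: low occupancy, 2: high occupancy).
rng = np.random.default_rng(3)
n = 900
occ = rng.integers(0, 3, size=n)
X = np.column_stack([
    420 + occ * 180 + rng.normal(0, 40, n),   # CO2 rises with occupancy
    60 + occ * 120 + rng.normal(0, 25, n),    # plug load
    occ * 4 + rng.poisson(1, n),              # motion events
])

# A shallow tree keeps the resulting if-then-else rules human-readable,
# as emphasised in the cited occupancy-estimation work.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, occ)
print(export_text(tree, feature_names=["co2", "power", "motion"]))
```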
1901.06263 | 2911010497 | Considering the advances in building monitoring and control through networks of interconnected devices, effective handling of the associated rich data streams is becoming an important challenge. In many situations the application of conventional system identification or approximate grey-box models, partly theoretic and partly data-driven, is either unfeasible or unsuitable. The paper discusses and illustrates an application of black-box modelling achieved using data mining techniques for the purpose of smart building ventilation subsystem control. We present the implementation and evaluation of a data mining methodology on data collected over one year of operation. The case study is carried out on four air handling units of a modern campus building for preliminary decision support for facility managers. The data processing and learning framework is based on two steps: raw data streams are compressed using the Symbolic Aggregate Approximation method, and the resulting segments are then input into a Support Vector Machine algorithm. The results are useful for deriving the behaviour of each piece of equipment in its various modes of operation and can be built upon for fault detection or energy efficiency applications. Challenges related to online operation within a commercial Building Management System are also discussed, as the approach shows promise for deployment. | @cite_2 describe in detail the explicit data modelling process for smart building evaluation. A case study is carried out for energy forecasting of a target building using techniques such as Bayesian Regularized Neural Networks and Random Forests. SVMs are also considered but provide weaker results in that specific scenario. Finally, in @cite_14 SVM is applied to a regression problem where, instead of a class label, the output of the algorithm is a numeric value. | {
"cite_N": [
"@cite_14",
"@cite_2"
],
"mid": [
"2364005411",
"1558927261"
],
"abstract": [
"Abstract This work presents how to proceed during the processing of all available data coming from smart buildings to generate models that predict their energy consumption. For this, we propose a methodology that includes the application of different intelligent data analysis techniques and algorithms that have already been applied successfully in related scenarios, and the selection of the best one depending on the value of the selected metric used for the evaluation. This result depends on the specific characteristics of the target building and the available data. Among the techniques applied to a reference building, Bayesian Regularized Neural Networks and Random Forest are selected because they provide the most accurate predictive results.",
"As our society gains a better understanding of how humans have negatively impacted the environment, research related to reducing carbon emissions and overall energy consumption has become increasingly important. One of the simplest ways to reduce energy usage is by making current buildings less wasteful. By improving energy efficiency, this method of lowering our carbon footprint is particularly worthwhile because it reduces energy costs of operating the building, unlike many environmental initiatives that require large monetary investments. In order to improve the efficiency of the heating, ventilation, and air conditioning (HVAC) system of a Manhattan skyscraper, 345 Park Avenue, a predictive computer model was designed to forecast the amount of energy the building will consume. This model uses Support Vector Machine Regression (SVMR), a method that builds a regression based purely on historical data of the building, requiring no knowledge of its size, heating and cooling methods, or any other physical properties. SVMR employs time-delay coordinates as a representation of the past to create the feature vectors for SVM training. This pure dependence on historical data makes the model very easily applicable to different types of buildings with few model adjustments. The SVM regression model was built to predict a week of future energy usage based on past energy, temperature, and dew point temperature data."
]
} |
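A short sketch of SVM regression with a numeric output, using time-delay coordinates of a toy consumption series as features (in the spirit of the cited energy-forecasting reference), is given below. The lag length, kernel and hyper-parameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

# Toy hourly energy-consumption series standing in for real meter data.
rng = np.random.default_rng(4)
hours = np.arange(24 * 60)
load = 100 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 5, hours.size)

# Time-delay coordinates: the previous LAG readings form the feature vector
# for predicting the next value (a numeric output, i.e. regression, not a class).
LAG = 24
X = np.stack([load[i - LAG:i] for i in range(LAG, load.size)])
y = load[LAG:]

split = int(0.8 * len(y))
svr = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(X[:split], y[:split])
print("test R^2:", svr.score(X[split:], y[split:]))
```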
1901.06263 | 2911010497 | Considering the advances in building monitoring and control through networks of interconnected devices, effective handling of the associated rich data streams is becoming an important challenge. In many situations the application of conventional system identification or approximate grey-box models, partly theoretic and partly data-driven, is either unfeasible or unsuitable. The paper discusses and illustrates an application of black-box modelling achieved using data mining techniques for the purpose of smart building ventilation subsystem control. We present the implementation and evaluation of a data mining methodology on data collected over one year of operation. The case study is carried out on four air handling units of a modern campus building for preliminary decision support for facility managers. The data processing and learning framework is based on two steps: raw data streams are compressed using the Symbolic Aggregate Approximation method, and the resulting segments are then input into a Support Vector Machine algorithm. The results are useful for deriving the behaviour of each piece of equipment in its various modes of operation and can be built upon for fault detection or energy efficiency applications. Challenges related to online operation within a commercial Building Management System are also discussed, as the approach shows promise for deployment. | The current paper also builds upon our own previous work dedicated to decision support systems for renewable energy campus microgrids @cite_10 and to Model Predictive Control (MPC) for building simulations @cite_3. Earlier work has also included exploratory data analysis of a single building AHU, without further analysis or implementation of learning models at a larger scale @cite_9. In this context, we have developed contributions towards a better understanding of data collected from smart buildings. Figure 1 summarises this section with regard to the role of data mining for DDC in this scenario. This generic approach is mapped onto our particular scenario as well. Each of the ventilation units implements local control loops which have to comply with setpoints given by the building operator according to occupancy schedules or seasonal adjustments. Without influencing the low-level control, we look at input-output data to indirectly characterise the system behaviour, with the end goal of improving the control loop parameters and setpoints through a learning framework. | {
"cite_N": [
"@cite_9",
"@cite_10",
"@cite_3"
],
"mid": [
"2790404719",
"2594724848",
"2119681812"
],
"abstract": [
"Model Predictive Control (MPC) plays an important role in optimizing operations of complex cyber-physical systems because of its ability to forecast system’s behavior and act under system level constraints. However, MPC requires reasonably accurate underlying models of the system. In many applications, such as building control for energy management, Demand Response, or peak power reduction, obtaining a high-fidelity physics-based model is cost and time prohibitive, thus limiting the widespread adoption of MPC. To this end, we propose a data-driven control algorithm for MPC that relies only on the historical data. We use multi-output regression trees to represent the system’s dynamics over multiple future time steps and formulate a finite receding horizon control problem that can be solved in real-time in closed-loop with the physical plant. We apply this algorithm to peak power reduction in buildings to optimally trade-off peak power reduction against thermal comfort without having to learn white grey box models of the systems dynamics.",
"The goal of maintaining users’ thermal comfort conditions in indoor environments may require complex regulation procedures and a proper energy management. This problem is being widely analyzed, since it has a direct effect on users’ productivity. This paper presents an economic model-based predictive control (MPC) whose main strength is the use of the day-ahead price (DAP) in order to predict the energy consumption associated with the heating, ventilation and air conditioning (HVAC). In this way, the control system is able to maintain a high thermal comfort level by optimizing the use of the HVAC system and to reduce, at the same time, the energy consumption associated with it, as much as possible. Later, the performance of the proposed control system is tested through simulations with a non-linear model of a bioclimatic building room. Several simulation scenarios are considered as a test-bed. From the obtained results, it is possible to conclude that the control system has a good behavior in several situations, i.e., it can reach the users’ thermal comfort for the analyzed situations, whereas the HVAC use is adjusted through the DAP; therefore, the energy savings associated with the HVAC is increased.",
"We study the problem of heating, ventilation, and air conditioning (HVAC) control in a typical commercial building. We propose a model predictive control (MPC) approach which minimizes energy use while satisfying occupant comfort and actuator constraints by using predictive knowledge of weather and occupancy."
]
} |
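The two-step pipeline described in the record above — SAX compression of the raw sensor streams followed by an SVM classifier — can be sketched as follows. This is only a minimal illustration, not the authors' implementation: the window length, number of PAA segments, alphabet size, the synthetic data, and the use of per-window symbol histograms as SVM features are all assumptions made for the example.

```python
import numpy as np
from sklearn.svm import SVC

# Breakpoints for a 4-symbol SAX alphabet (standard normal quantiles).
BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])

def sax_symbols(window, n_segments=8):
    """Z-normalise a window, reduce it with PAA, and map segments to symbols 0..3."""
    w = np.asarray(window, dtype=float)
    w = (w - w.mean()) / (w.std() + 1e-8)
    paa = w.reshape(n_segments, -1).mean(axis=1)   # piecewise aggregate approximation
    return np.digitize(paa, BREAKPOINTS)           # one symbol index per segment

def features(stream, window=64, n_segments=8, alphabet=4):
    """Histogram of SAX symbols per window, used here as a simple feature vector."""
    feats = []
    for start in range(0, len(stream) - window + 1, window):
        syms = sax_symbols(stream[start:start + window], n_segments)
        feats.append(np.bincount(syms, minlength=alphabet))
    return np.array(feats)

# Illustrative data: two synthetic AHU "modes" (low vs. high variability signal).
rng = np.random.default_rng(0)
calm = rng.normal(0.0, 0.1, size=(20, 64))                        # e.g. idle periods
active = np.sin(np.linspace(0, 8 * np.pi, 64)) + rng.normal(0.0, 0.3, size=(20, 64))

X = np.vstack([features(s)[0] for s in np.vstack([calm, active])])
y = np.array([0] * 20 + [1] * 20)                                  # assumed mode labels

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In practice the symbolised segments, rather than simple histograms, could feed the classifier; the sketch only shows how the compression and learning steps chain together.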
1901.06261 | 2910933843 | Application of neural networks to a vast variety of practical applications is transforming the way AI is applied in practice. Pre-trained neural network models available through APIs or capability to custom train pre-built neural network architectures with customer data has made the consumption of AI by developers much simpler and resulted in broad adoption of these complex AI models. While prebuilt network models exist for certain scenarios, to try and meet the constraints that are unique to each application, AI teams need to think about developing custom neural network architectures that can meet the tradeoff between accuracy and memory footprint to achieve the tight constraints of their unique use-cases. However, only a small proportion of data science teams have the skills and experience needed to create a neural network from scratch, and the demand far exceeds the supply. In this paper, we present NeuNetS : An automated Neural Network Synthesis engine for custom neural network design that is available as part of IBM's AI OpenScale's product. NeuNetS is available for both Text and Image domains and can build neural networks for specific tasks in a fraction of the time it takes today with human effort, and with accuracy similar to that of human-designed AI models. | Evolutionary algorithms and reinforcement learning are currently the two state-of-the-art techniques used by neural network architecture search algorithms. Neural Architecture Search @cite_38 demonstrated, in an experiment taking 28 days on 800 GPUs, that neural network architectures with performance close to that of state-of-the-art architectures can be found. In parallel with or inspired by this work, others proposed to use reinforcement learning to discover sequential architectures @cite_13 , to reduce the search space to repeating cells @cite_71 @cite_0 , or to apply function-preserving actions to accelerate the search @cite_25 . | {
"cite_N": [
"@cite_38",
"@cite_0",
"@cite_71",
"@cite_13",
"@cite_25"
],
"mid": [
"2785430118",
"2957020430",
"2773706593",
"2963778169",
"2888429796"
],
"abstract": [
"The effort devoted to hand-crafting image classifiers has motivated the use of architecture search to discover them automatically. Reinforcement learning and evolution have both shown promise for this purpose. This study introduces a regularized version of a popular asynchronous evolutionary algorithm. We rigorously compare it to the non-regularized form and to a highly-successful reinforcement learning baseline. Using the same hardware, compute effort and neural network training code, we conduct repeated experiments side-by-side, exploring different datasets, search spaces and scales. We show regularized evolution consistently produces models with similar or higher accuracy, across a variety of contexts without need for re-tuning parameters. In addition, regularized evolution exhibits considerably better performance than reinforcement learning at early search stages, suggesting it may be the better choice when fewer compute resources are available. This constitutes the first controlled comparison of the two search algorithms in this context. Finally, we present new architectures discovered with regularized evolution that we nickname AmoebaNets. These models set a new state of the art for CIFAR-10 (mean test error = 2.13 ) and mobile-size ImageNet (top-5 accuracy = 92.1 with 5.06M parameters), and reach the current state of the art for ImageNet (top-5 accuracy = 96.2 ).",
"We propose a novel hardware and software co-exploration framework for efficient neural architecture search (NAS). Different from existing hardware-aware NAS which assumes a fixed hardware design and explores the neural architecture search space only, our framework simultaneously explores both the architecture search space and the hardware design space to identify the best neural architecture and hardware pairs that maximize both test accuracy and hardware efficiency. Such a practice greatly opens up the design freedom and pushes forward the Pareto frontier between hardware efficiency and test accuracy for better design tradeoffs. The framework iteratively performs a two-level (fast and slow) exploration. Without lengthy training, the fast exploration can effectively fine-tune hyperparameters and prune inferior architectures in terms of hardware specifications, which significantly accelerates the NAS process. Then, the slow exploration trains candidates on a validation set and updates a controller using the reinforcement learning to maximize the expected accuracy together with the hardware efficiency. Experiments on ImageNet show that our co-exploration NAS can find the neural architectures and associated hardware design with the same accuracy, 35.24 higher throughput, 54.05 higher energy efficiency and 136x reduced search time, compared with the state-of-the-art hardware-aware NAS.",
"Techniques for automatically designing deep neural network architectures such as reinforcement learning based approaches have recently shown promising results. However, their success is based on vast computational resources (e.g. hundreds of GPUs), making them difficult to be widely used. A noticeable limitation is that they still design and train each network from scratch during the exploration of the architecture space, which is highly inefficient. In this paper, we propose a new framework toward efficient architecture search by exploring the architecture space based on the current network and reusing its weights. We employ a reinforcement learning agent as the meta-controller, whose action is to grow the network depth or layer width with function-preserving transformations. As such, the previously validated networks can be reused for further exploration, thus saves a large amount of computational cost. We apply our method to explore the architecture space of the plain convolutional neural networks (no skip-connections, branching etc.) on image benchmark datasets (CIFAR-10, SVHN) with restricted computational resources (5 GPUs). Our method can design highly competitive networks that outperform existing networks using the same design scheme. On CIFAR-10, our model without skip-connections achieves 4.23 test error rate, exceeding a vast majority of modern architectures and approaching DenseNet. Furthermore, by applying our method to explore the DenseNet architecture space, we are able to achieve more accurate networks with fewer parameters.",
"We explore efficient neural architecture search methods and present a simple yet powerful evolutionary algorithm that can discover new architectures achieving state of the art results. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6 on CIFAR-10 and 20.3 when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches and represents the new state of the art for evolutionary strategies on this task. We also present results using random search, achieving 0.3 less top-1 accuracy on CIFAR-10 and 0.1 less on ImageNet whilst reducing the architecture search time from 36 hours down to 1 hour.",
"Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, no matter based on reinforcement learning or evolutionary algorithms (EA), conduct architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method to automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) An encoder embeds maps neural network architectures into a continuous space. (2) A predictor takes the continuous representation of a network as input and predicts its accuracy. (3) A decoder maps a continuous representation of a network back to its architecture. The performance predictor and the encoder enable us to perform optimization in the continuous space to find the embedding of a new architecture with potentially better accuracy. Such a better embedding is then decoded to a network by the decoder. Experiments show that the architecture discovered by our method is very competitive for image classification task on CIFAR-10 and language modeling task on PTB, outperforming or on par with the best results of previous architecture search methods. Furthermore, the computational resource is 10 times fewer than typical methods based on RL and EA."
]
} |
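As a rough illustration of the evolutionary branch of this line of work, the sketch below reproduces the aging-evolution loop outlined in the regularized-evolution abstract above: sample a small subset of the population, mutate its best member, and discard the oldest individual. The operation vocabulary and the evaluate function are placeholders; a real search would train and validate each candidate network rather than scoring it synthetically.

```python
import random
from collections import deque

OPS = ["conv3x3", "conv5x5", "maxpool", "identity"]   # toy cell search space

def random_arch(length=6):
    return [random.choice(OPS) for _ in range(length)]

def mutate(arch):
    child = list(arch)
    child[random.randrange(len(child))] = random.choice(OPS)
    return child

def evaluate(arch):
    # Stand-in for "train the network and return its validation accuracy".
    return sum(op == "conv3x3" for op in arch) / len(arch) + random.uniform(0, 0.05)

def aging_evolution(cycles=200, population_size=20, sample_size=5):
    population, history = deque(), []
    while len(population) < population_size:            # seed with random models
        arch = random_arch()
        population.append((arch, evaluate(arch)))
        history.append(population[-1])
    for _ in range(cycles):
        sample = random.sample(list(population), sample_size)
        parent = max(sample, key=lambda x: x[1])         # tournament selection
        child = mutate(parent[0])
        population.append((child, evaluate(child)))
        history.append(population[-1])
        population.popleft()                             # "aging": drop the oldest model
    return max(history, key=lambda x: x[1])

best_arch, best_score = aging_evolution()
print(best_arch, round(best_score, 3))
```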
1901.06261 | 2910933843 | Application of neural networks to a vast variety of practical applications is transforming the way AI is applied in practice. Pre-trained neural network models available through APIs or capability to custom train pre-built neural network architectures with customer data has made the consumption of AI by developers much simpler and resulted in broad adoption of these complex AI models. While prebuilt network models exist for certain scenarios, to try and meet the constraints that are unique to each application, AI teams need to think about developing custom neural network architectures that can meet the tradeoff between accuracy and memory footprint to achieve the tight constraints of their unique use-cases. However, only a small proportion of data science teams have the skills and experience needed to create a neural network from scratch, and the demand far exceeds the supply. In this paper, we present NeuNetS : An automated Neural Network Synthesis engine for custom neural network design that is available as part of IBM's AI OpenScale's product. NeuNetS is available for both Text and Image domains and can build neural networks for specific tasks in a fraction of the time it takes today with human effort, and with accuracy similar to that of human-designed AI models. | Various techniques exist that try to shorten the training time. One idea is to terminate unpromising training runs early: the partially observed learning curve is either used directly to decide to terminate a run @cite_73 or first extrapolated and then used @cite_82 @cite_61 @cite_21 . Other methods sample different architectures and then predict their likely performance. Peephole @cite_28 predicts a network's accuracy by analyzing only the network structure; however, it works only on a fixed dataset test case. SMASH uses a hypernetwork to predict weights for an architecture without training and uses its validation performance as a proxy for its performance after training @cite_19 . Others reduce the search time by sharing or reusing model weights @cite_25 @cite_65 @cite_49 @cite_52 . | {
"cite_N": [
"@cite_61",
"@cite_28",
"@cite_21",
"@cite_65",
"@cite_52",
"@cite_19",
"@cite_49",
"@cite_73",
"@cite_25",
"@cite_82"
],
"mid": [
"2963521187",
"2963702144",
"2766164908",
"2586408124",
"2724651715",
"2887204394",
"2748513770",
"2757910899",
"2971202925",
"2132708887"
],
"abstract": [
"This paper tackles the problem of training a deep convolutional neural network with both low-precision weights and low-bitwidth activations. Optimizing a low-precision network is very challenging since the training process can easily get trapped in a poor local minima, which results in substantial accuracy loss. To mitigate this problem, we propose three simple-yet-effective approaches to improve the network training. First, we propose to use a two-stage optimization strategy to progressively find good local minima. Specifically, we propose to first optimize a net with quantized weights and then quantized activations. This is in contrast to the traditional methods which optimize them simultaneously. Second, following a similar spirit of the first method, we propose another progressive optimization approach which progressively decreases the bit-width from high-precision to low-precision during the course of training. Third, we adopt a novel learning scheme to jointly train a full-precision model alongside the low-precision one. By doing so, the full-precision model provides hints to guide the low-precision model training. Extensive experiments on various datasets (i.e., CIFAR-100 and ImageNet) show the effectiveness of the proposed methods. To highlight, using our methods to train a 4-bit precision network leads to no performance decrease in comparison with its full-precision counterpart with standard network architectures (i.e., AlexNet and ResNet-50).",
"It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate @math and scaling the batch size @math . Finally, one can increase the momentum coefficient @math and scale @math , although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train Inception-ResNet-V2 on ImageNet to @math validation accuracy in under 2500 parameter updates, efficiently utilizing training batches of 65536 images.",
"It is common practice to decay the learning rate. Here we show one can usually obtain the same learning curve on both training and test sets by instead increasing the batch size during training. This procedure is successful for stochastic gradient descent (SGD), SGD with momentum, Nesterov momentum, and Adam. It reaches equivalent test accuracies after the same number of training epochs, but with fewer parameter updates, leading to greater parallelism and shorter training times. We can further reduce the number of parameter updates by increasing the learning rate @math and scaling the batch size @math . Finally, one can increase the momentum coefficient @math and scale @math , although this tends to slightly reduce the test accuracy. Crucially, our techniques allow us to repurpose existing training schedules for large batch training with no hyper-parameter tuning. We train ResNet-50 on ImageNet to @math validation accuracy in under 30 minutes.",
"Since performance is not portable between platforms, engineers must fine-tune heuristics for each processor in turn. This is such a laborious task that high-profile compilers, supporting many architectures, cannot keep up with hardware innovation and are actually out-of-date. Iterative compilation driven by machine learning has been shown to be efficient at generating portable optimization models automatically. However, good quality models require costly, repetitive, and extensive training which greatly hinders the wide adoption of this powerful technique. In this work, we show that much of this cost is spent collecting training data, runtime measurements for different optimization decisions, which contribute little to the final heuristic. Current implementations evaluate randomly chosen, often redundant, training examples a pre-configured, almost always excessive, number of times – a large source of wasted effort. Our approach optimizes not only the selection of training examples but also the number of samples per example, independently. To evaluate, we construct 11 high-quality models which use a combination of optimization settings to predict the runtime of benchmarks from the SPAPT suite. Our novel, broadly applicable, methodology is able to reduce the training overhead by up to 26x compared to an approach with a fixed number of sample runs, transforming what is potentially months of work into days.",
"Generative adversarial networks (GANs) are highly effective unsupervised learning frameworks that can generate very sharp data, even for data such as images with complex, highly multimodal distributions. However GANs are known to be very hard to train, suffering from problems such as mode collapse and disturbing visual artifacts. Batch normalization (BN) techniques have been introduced to address the training. Though BN accelerates the training in the beginning, our experiments show that the use of BN can be unstable and negatively impact the quality of the trained model. The evaluation of BN and numerous other recent schemes for improving GAN training is hindered by the lack of an effective objective quality measure for GAN models. To address these issues, we first introduce a weight normalization (WN) approach for GAN training that significantly improves the stability, efficiency and the quality of the generated samples. To allow a methodical evaluation, we introduce squared Euclidean reconstruction error on a test set as a new objective measure, to assess training performance in terms of speed, stability, and quality of generated samples. Our experiments with a standard DCGAN architecture on commonly used datasets (CelebA, LSUN bedroom, and CIFAR-10) indicate that training using WN is generally superior to BN for GANs, achieving 10 lower mean squared loss for reconstruction and significantly better qualitative results than BN. We further demonstrate the stability of WN on a 21-layer ResNet trained with the CelebA data set. The code for this paper is available at this https URL",
"In this paper, we propose to train a network with binary weights and low-bitwidth activations, designed especially for mobile devices with limited power consumption. Most previous works on quantizing CNNs uncritically assume the same architecture, though with reduced precision. However, we take the view that for best performance it is possible (and even likely) that a different architecture may be better suited to dealing with low precision weights and activations. Specifically, we propose a \"network expansion\" strategy in which we aggregate a set of homogeneous low-precision branches to implicitly reconstruct the full-precision intermediate feature maps. Moreover, we also propose a group-wise feature approximation strategy which is very flexible and highly accurate. Experiments on ImageNet classification tasks demonstrate the superior performance of the proposed model, named Group-Net, over various popular architectures. In particular, with binary weights and activations, we outperform the previous best binary neural network in terms of accuracy as well as saving more than 5 times computational complexity on ImageNet with ResNet-18 and ResNet-50.",
"Designing architectures for deep neural networks requires expert knowledge and substantial computation time. We propose a technique to accelerate architecture selection by learning an auxiliary HyperNet that generates the weights of a main model conditioned on that model's architecture. By comparing the relative validation performance of networks with HyperNet-generated weights, we can effectively search over a wide range of architectures at the cost of a single training run. To facilitate this search, we develop a flexible mechanism based on memory read-writes that allows us to define a wide range of network connectivity patterns, with ResNet, DenseNet, and FractalNet blocks as special cases. We validate our method (SMASH) on CIFAR-10 and CIFAR-100, STL-10, ModelNet10, and Imagenet32x32, achieving competitive performance with similarly-sized hand-designed networks. Our code is available at this https URL",
"A common way to speed up training of large convolutional networks is to add computational units. Training is then performed using data-parallel synchronous Stochastic Gradient Descent (SGD) with mini-batch divided between computational units. With an increase in the number of nodes, the batch size grows. But training with large batch size often results in the lower model accuracy. We argue that the current recipe for large batch training (linear learning rate scaling with warm-up) is not general enough and training may diverge. To overcome this optimization difficulties we propose a new training algorithm based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in accuracy.",
"Training data-driven approaches for complex industrial system health monitoring is challenging. When data on faulty conditions are rare or not available, the training has to be performed in a unsupervised manner. In addition, when the observation period, used for training, is kept short, to be able to monitor the system in its early life, the training data might not be representative of all the system normal operating conditions. In this paper, we propose five approaches to perform fault detection in such context. Two approaches rely on the data from the unit to be monitored only: the baseline is trained on the early life of the unit. An incremental learning procedure tries to learn new operating conditions as they arise. Three other approaches take advantage of data from other similar units within a fleet. In two cases, units are directly compared to each other with similarity measures, and the data from similar units are combined in the training set. We propose, in the third case, a new deep-learning methodology to perform, first, a feature alignment of different units with an Unsupervised Feature Alignment Network (UFAN). Then, features of both units are combined in the training set of the fault detection neural network.The approaches are tested on a fleet comprising 112 units, observed over one year of data. All approaches proposed here are an improvement to the baseline, trained with two months of data only. As units in the fleet are found to be very dissimilar, the new architecture UFAN, that aligns units in the feature space, is outperforming others.",
"We present a probabilistic model for generating personalised recommendations of items to users of a web service. The Matchbox system makes use of content information in the form of user and item meta data in combination with collaborative filtering information from previous user behavior in order to predict the value of an item for a user. Users and items are represented by feature vectors which are mapped into a low-dimensional trait space' in which similarity is measured in terms of inner products. The model can be trained from different types of feedback in order to learn user-item preferences. Here we present three alternatives: direct observation of an absolute rating each user gives to some items, observation of a binary preference (like don't like) and observation of a set of ordinal ratings on a user-specific scale. Efficient inference is achieved by approximate message passing involving a combination of Expectation Propagation (EP) and Variational Message Passing. We also include a dynamics model which allows an item's popularity, a user's taste or a user's personal rating scale to drift over time. By using Assumed-Density Filtering (ADF) for training, the model requires only a single pass through the training data. This is an on-line learning algorithm capable of incrementally taking account of new data so the system can immediately reflect the latest user preferences. We evaluate the performance of the algorithm on the MovieLens and Netflix data sets consisting of approximately 1,000,000 and 100,000,000 ratings respectively. This demonstrates that training the model using the on-line ADF approach yields state-of-the-art performance with the option of improving performance further if computational resources are available by performing multiple EP passes over the training data."
]
} |
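One simple way to realise the early-termination idea described above is successive halving over partially observed learning curves: give every candidate a small training budget, keep only the best fraction, and repeat with a doubled budget. The sketch below is illustrative only; train_for_epochs is a stand-in for actual training and is not part of any of the cited systems.

```python
import random

def train_for_epochs(model_id, epochs, _cache={}):
    # Placeholder: pretend each extra epoch improves a model-specific learning curve.
    base = _cache.setdefault(model_id, random.uniform(0.5, 0.8))
    return min(0.99, base + 0.02 * epochs + random.uniform(-0.01, 0.01))

def successive_halving(candidates, min_epochs=1, rounds=4, keep_fraction=0.5):
    survivors = list(candidates)
    epochs = min_epochs
    for _ in range(rounds):
        # Partially train every survivor and observe its learning curve so far.
        scores = {c: train_for_epochs(c, epochs) for c in survivors}
        survivors.sort(key=lambda c: scores[c], reverse=True)
        survivors = survivors[:max(1, int(len(survivors) * keep_fraction))]
        epochs *= 2                      # remaining budget goes to promising runs only
        if len(survivors) == 1:
            break
    return survivors[0]

best = successive_halving([f"arch_{i}" for i in range(16)])
print("selected candidate:", best)
```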
1901.06257 | 2910992039 | Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods. | Many studies on GPS trajectory mining exist, such as user activity estimation @cite_3 @cite_2 @cite_17 @cite_16 , transportation mode detection @cite_4 @cite_43 , and region analysis @cite_37 @cite_0 . A typical approach to tackle these tasks first extracts stay-points as a clue for solving them. Therefore, we believe that stay-point extraction is a key technology of many GPS trajectory mining tasks. | {
"cite_N": [
"@cite_37",
"@cite_4",
"@cite_3",
"@cite_0",
"@cite_43",
"@cite_2",
"@cite_16",
"@cite_17"
],
"mid": [
"2779280457",
"1998451808",
"2012580531",
"2022749020",
"2146299864",
"2149880269",
"2065500819",
"1982838671"
],
"abstract": [
"We tackle the problem of extracting stay regions from a geospatial trajectory where a user has stayed longer than a certain time threshold. There are four major difficulties with this problem: (1) stay regions are not only point-type ones such as at a bus-stop but large and arbitrary-shaped ones such as at a shopping mall; (2) trajectories contain spatial outliers; (3) there are missing points in trajectories; and (4) trajectories should be analyzed in an online mode. Previous algorithms cannot overcome these difficulties simultaneously. Density-based batch algorithms have advantages over the previous algorithms in discovering of arbitrary-shaped clusters from spatial data containing outliers; however, they do not consider temporal durations and thus have not been used for extracting stay regions. We extended a density-based algorithm so that it would work in a duration-based manner online and have robustness to missing points in stay regions while keeping its advantages. Experiments on real trajectories of 13 users conducting their daily activities for three weeks demonstrated that our algorithm statistically significantly outperformed five state-of-the-art algorithms in terms of F1 score and works well without trajectory preprocessing consisting of filtering, interpolating, and smoothing.",
"The collection of huge amount of tracking data made possible by the widespread use of GPS devices, enabled the analysis of such data for several applications domains, ranging from traffic management to advertisement and social studies. However, the raw positioning data, as it is detected by GPS devices, lacks of semantic information since this data does not natively provide any additional contextual information like the places that people visited or the activities performed. Traditionally, this information is collected by hand filled questionnaire where a limited number of users are asked to annotate their tracks with the activities they have done. With the purpose of getting large amount of semantically rich trajectories, we propose an algorithm for automatically annotating raw trajectories with the activities performed by the users. To do this, we analyse the stops points trying to infer the Point Of Interest (POI) the user has visited. Based on the category of the POI and a probability measure based on the gravity law, we infer the activity performed. We experimented and evaluated the method in a real case study of car trajectories, manually annotated by users with their activities. Experimental results are encouraging and will drive our future works.",
"The increasing pervasiveness of location-acquisition technologies (GPS, GSM networks, etc.) is leading to the collection of large spatio-temporal datasets and to the opportunity of discovering usable knowledge about movement behaviour, which fosters novel applications and services. In this paper, we move towards this direction and develop an extension of the sequential pattern mining paradigm that analyzes the trajectories of moving objects. We introduce trajectory patterns as concise descriptions of frequent behaviours, in terms of both space (i.e., the regions of space visited during movements) and time (i.e., the duration of movements). In this setting, we provide a general formal statement of the novel mining problem and then study several different instantiations of different complexity. The various approaches are then empirically evaluated over real data and synthetic benchmarks, comparing their strengths and weaknesses.",
"User mobility has given rise to a variety of Web applications, in which the global positioning system (GPS) plays many important roles in bridging between these applications and end users. As a kind of human behavior, transportation modes, such as walking and driving, can provide pervasive computing systems with more contextual information and enrich a user's mobility with informative knowledge. In this article, we report on an approach based on supervised learning to automatically infer users' transportation modes, including driving, walking, taking a bus and riding a bike, from raw GPS logs. Our approach consists of three parts: a change point-based segmentation method, an inference model and a graph-based post-processing algorithm. First, we propose a change point-based segmentation method to partition each GPS trajectory into separate segments of different transportation modes. Second, from each segment, we identify a set of sophisticated features, which are not affected by differing traffic conditions (e.g., a person's direction when in a car is constrained more by the road than any change in traffic conditions). Later, these features are fed to a generative inference model to classify the segments of different modes. Third, we conduct graph-based postprocessing to further improve the inference performance. This postprocessing algorithm considers both the commonsense constraints of the real world and typical user behaviors based on locations in a probabilistic manner. The advantages of our method over the related works include three aspects. (1) Our approach can effectively segment trajectories containing multiple transportation modes. (2) Our work mined the location constraints from user-generated GPS logs, while being independent of additional sensor data and map information like road networks and bus stops. (3) The model learned from the dataset of some users can be applied to infer GPS data from others. Using the GPS logs collected by 65 people over a period of 10 months, we evaluated our approach via a set of experiments. As a result, based on the change-point-based segmentation method and Decision Tree-based inference model, we achieved prediction accuracy greater than 71 percent. Further, using the graph-based post-processing algorithm, the performance attained a 4-percent enhancement.",
"GPS (Globe Positioning System) trajectory data provide a new way for city travel analysis others than traditional travel diary data. But generally raw GPS traces do not include information on trip purposes or activities. Earlier studies addressed this issue through a combination of manual and computer-assisted data processing steps. Nevertheless, geographic context databases provide the possibility for automatic activity identification based on GPS trajectories since each activity is uniquely defined by a set of features such as location and duration. Distinguished with most existing methods using two dimensional factors, this paper presents a novel approach using spatial temporal attractiveness of POIs (Point of Interests) to identify activity-locations as well as durations from raw GPS trajectory. We also introduce an algorithm to figure out how the intersections of trajectories and spatial-temporal attractiveness prisms indicate the potential possibilities for activities. Finally, Experiments using real world GPS tracking data, road networks and POIs are conducted for evaluations of the proposed approach.",
"This paper describes a system that takes as input GPS data streams generated by users' phones and creates a searchable database of locations and activities. The system is called iDiary and turns large GPS signals collected from smartphones into textual descriptions of the trajectories. The system features a user interface similar to Google Search that allows users to type text queries on their activities (e.g., \"Where did I buy books?\") and receive textual answers based on their GPS signals. iDiary uses novel algorithms for semantic compression (known as coresets) and trajectory clustering of massive GPS signals in parallel to compute the critical locations of a user. Using an external database, we then map these locations to textual descriptions and activities so that we can apply text mining techniques on the resulting data (e.g. LSA or transportation mode recognition). We provide experimental results for both the system and algorithms and compare them to existing commercial and academic state-of-the-art. This is the first GPS system that enables text-searchable activities from GPS data.",
"Managing and mining data derived from moving objects is becoming an important issue in the last years. In this paper, we are interested in mining trajectories of moving objects such as vehicles in the road network. We propose a method for dense route discovery by clustering similar road sections according to both traffic and location in each time period. The traffic estimation is based on the collected spatiotemporal trajectories. We also propose a characterization approach of the temporal evolution of dense routes by a graph of route connection over consecutive time periods. This graph is labelled by a degree of evolution. We have implemented and tested the proposed algorithms, which have shown their effectiveness and efficiency.",
"Location is a key context ingredient and many existing pervasive applications rely on the current locations of their users. However, with the ability to predict the future location and movement behavior of a user, the usability of these applications can be greatly improved. In this paper, we propose an approach to predict both the intended destination and the future route of a person. Rather than predicting the destination and future route separately, we have focused on making prediction in an integrated way by exploiting personal movement data (i.e. trajectories) collected by GPS. Since trajectories contain daily whereabouts information of a person, the proposed approach first detects the significant places where the person may depart from or go to using a clustering-based algorithm called FBM (Forward-Backward Matching), then abstracts the trajectories based on a space partitioning method, and finally extracts movement patterns from the abstracted trajectories using an extended CRPM (Continuous Route Pattern Mining) algorithm. Extracted movement patterns are organized in terms of origin-destination couples. The prediction is made based on a pattern tree built from these movement patterns. With the real personal movement data of 14 participants, we conducted a number of experiments to evaluate the performance of our system. The results show that our approach can achieve approximately 80 and 60 accuracy in destination prediction and 1-step prediction, respectively, and result in an average deviation of approximately 60 m in continuous future route prediction. Finally, based on the proposed approach, we implemented a prototype running on mobile phones, which can extract patterns from a user's historical movement data and predict the destination and future route."
]
} |
1901.06257 | 2910992039 | Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods. | Various stay-point extraction methods have already been proposed. For example, Ashbrook and Starner @cite_13 @cite_19 use a modified @math -means method, @cite_20 use DBSCAN @cite_9 , and @cite_39 employ Mean-Shift @cite_24 , all of which are based on clustering. @cite_25 and @cite_26 assume that stay-points are positions within a constant radius from a center where the stay time exceeds a constant time. More recently, @cite_23 developed a more robust stay-point extraction algorithm that considers outliers and missing points in GPS trajectories. | {
"cite_N": [
"@cite_26",
"@cite_9",
"@cite_39",
"@cite_24",
"@cite_19",
"@cite_23",
"@cite_13",
"@cite_25",
"@cite_20"
],
"mid": [
"2779280457",
"1993221692",
"2076943964",
"2199495299",
"2605102252",
"2157575532",
"1982838671",
"2952769483",
"2474996220"
],
"abstract": [
"We tackle the problem of extracting stay regions from a geospatial trajectory where a user has stayed longer than a certain time threshold. There are four major difficulties with this problem: (1) stay regions are not only point-type ones such as at a bus-stop but large and arbitrary-shaped ones such as at a shopping mall; (2) trajectories contain spatial outliers; (3) there are missing points in trajectories; and (4) trajectories should be analyzed in an online mode. Previous algorithms cannot overcome these difficulties simultaneously. Density-based batch algorithms have advantages over the previous algorithms in discovering of arbitrary-shaped clusters from spatial data containing outliers; however, they do not consider temporal durations and thus have not been used for extracting stay regions. We extended a density-based algorithm so that it would work in a duration-based manner online and have robustness to missing points in stay regions while keeping its advantages. Experiments on real trajectories of 13 users conducting their daily activities for three weeks demonstrated that our algorithm statistically significantly outperformed five state-of-the-art algorithms in terms of F1 score and works well without trajectory preprocessing consisting of filtering, interpolating, and smoothing.",
"We propose a novel keypoint-based method for long-term model-free object tracking in a combined matching-and-tracking framework. In order to localise the object in every frame, each keypoint casts votes for the object center. As erroneous keypoints are hard to avoid, we employ a novel consensus-based scheme for outlier detection in the voting behaviour. To make this approach computationally feasible, we propose not to employ an accumulator space for votes, but rather to cluster votes directly in the image space. By transforming votes based on the current keypoint constellation, we account for changes of the object in scale and rotation. In contrast to competing approaches, we refrain from updating the appearance information, thus avoiding the danger of making errors. The use of fast keypoint detectors and binary descriptors allows for our implementation to run in real-time. We demonstrate experimentally on a diverse dataset that is as large as 60 sequences that our method outperforms the state-of-the-art when high accuracy is required and visualise these results by employing a variant of success plots.",
"Spatio-temporal and geo-referenced datasets are growing rapidly, with the rapid development of some technology, such as GPS, satellite systems. At present, many scholars are very interested in the clustering of the trajectory. Existing trajectory clustering algorithms group similar trajectories as a whole and can't distinguish the direction of trajectory. Our key finding is that clustering trajectories as a whole could miss common sub-trajectories and trajectory has direction information. In many applications, discovering common sub-trajectories is very useful. In this paper, we present a trajectory clustering algorithm CTHD (clustering of trajectory based on hausdorff distance). In the CTHD, the trajectory is firstly described by a sequence of flow vectors and partitioned into a set of sub-trajectory. Next the similarity between trajectories is measured by their respective Hausdorff distances. Finally, the trajectories are clustered by the DBSCAN clustering algorithm. The proposed algorithm is different from other schemes using Hausdorff distance that the flow vectors include the position and direction. So it can distinguish the trajectories in different directions. The experimental result shows the phenomenon.",
"In k-means clustering we are given a set of n data points in d-dimensional space Rd and an integer k, and the problem is to determine a set of k points in ÓC;d, called centers, to minimize the mean squared distance from each data point to its nearest center. No exact polynomial-time algorithms are known for this problem. Although asymptotically efficient approximation algorithms exist, these algorithms are not practical due to the extremely high constant factors involved. There are many heuristics that are used in practice, but we know of no bounds on their performance.We consider the question of whether there exists a simple and practical approximation algorithm for k-means clustering. We present a local improvement heuristic based on swapping centers in and out. We prove that this yields a (9+e)-approximation algorithm. We show that the approximation factor is almost tight, by giving an example for which the algorithm achieves an approximation factor of (9-e). To establish the practical value of the heuristic, we present an empirical study that shows that, when combined with Lloyd's algorithm, this heuristic performs quite well in practice.",
"We address the problem of distance metric learning (DML), defined as learning a distance consistent with a notion of semantic similarity. Traditionally, for this problem supervision is expressed in the form of sets of points that follow an ordinal relationship – an anchor point x is similar to a set of positive points Y , and dissimilar to a set of negative points Z, and a loss defined over these distances is minimized. While the specifics of the optimization differ, in this work we collectively call this type of supervision Triplets and all methods that follow this pattern Triplet-Based methods. These methods are challenging to optimize. A main issue is the need for finding informative triplets, which is usually achieved by a variety of tricks such as increasing the batch size, hard or semi-hard triplet mining, etc. Even with these tricks, the convergence rate of such methods is slow. In this paper we propose to optimize the triplet loss on a different space of triplets, consisting of an anchor data point and similar and dissimilar proxy points which are learned as well. These proxies approximate the original data points, so that a triplet loss over the proxies is a tight upper bound of the original loss. This proxy-based loss is empirically better behaved. As a result, the proxy-loss improves on state-of-art results for three standard zero-shot learning datasets, by up to 15 points, while converging three times as fast as other triplet-based losses.",
"We consider the problem of retrieving the database points nearest to a given hyperplane query without exhaustively scanning the entire database. For this problem, we propose two hashing-based solutions. Our first approach maps the data to 2-bit binary keys that are locality sensitive for the angle between the hyperplane normal and a database point. Our second approach embeds the data into a vector space where the euclidean norm reflects the desired distance between the original points and hyperplane query. Both use hashing to retrieve near points in sublinear time. Our first method's preprocessing stage is more efficient, while the second has stronger accuracy guarantees. We apply both to pool-based active learning: Taking the current hyperplane classifier as a query, our algorithm identifies those points (approximately) satisfying the well-known minimal distance-to-hyperplane selection criterion. We empirically demonstrate our methods' tradeoffs and show that they make it practical to perform active selection with millions of unlabeled points.",
"Location is a key context ingredient and many existing pervasive applications rely on the current locations of their users. However, with the ability to predict the future location and movement behavior of a user, the usability of these applications can be greatly improved. In this paper, we propose an approach to predict both the intended destination and the future route of a person. Rather than predicting the destination and future route separately, we have focused on making prediction in an integrated way by exploiting personal movement data (i.e. trajectories) collected by GPS. Since trajectories contain daily whereabouts information of a person, the proposed approach first detects the significant places where the person may depart from or go to using a clustering-based algorithm called FBM (Forward-Backward Matching), then abstracts the trajectories based on a space partitioning method, and finally extracts movement patterns from the abstracted trajectories using an extended CRPM (Continuous Route Pattern Mining) algorithm. Extracted movement patterns are organized in terms of origin-destination couples. The prediction is made based on a pattern tree built from these movement patterns. With the real personal movement data of 14 participants, we conducted a number of experiments to evaluate the performance of our system. The results show that our approach can achieve approximately 80 and 60 accuracy in destination prediction and 1-step prediction, respectively, and result in an average deviation of approximately 60 m in continuous future route prediction. Finally, based on the proposed approach, we implemented a prototype running on mobile phones, which can extract patterns from a user's historical movement data and predict the destination and future route.",
"We study exact recovery conditions for convex relaxations of point cloud clustering problems, focusing on two of the most common optimization problems for unsupervised clustering: @math -means and @math -median clustering. Motivations for focusing on convex relaxations are: (a) they come with a certificate of optimality, and (b) they are generic tools which are relatively parameter-free, not tailored to specific assumptions over the input. More precisely, we consider the distributional setting where there are @math clusters in @math and data from each cluster consists of @math points sampled from a symmetric distribution within a ball of unit radius. We ask: what is the minimal separation distance between cluster centers needed for convex relaxations to exactly recover these @math clusters as the optimal integral solution? For the @math -median linear programming relaxation we show a tight bound: exact recovery is obtained given arbitrarily small pairwise separation @math between the balls. In other words, the pairwise center separation is @math . Under the same distributional model, the @math -means LP relaxation fails to recover such clusters at separation as large as @math . Yet, if we enforce PSD constraints on the @math -means LP, we get exact cluster recovery at center separation @math . In contrast, common heuristics such as Lloyd's algorithm (a.k.a. the @math -means algorithm) can fail to recover clusters in this setting; even with arbitrarily large cluster separation, k-means++ with overseeding by any constant factor fails with high probability at exact cluster recovery. To complement the theoretical analysis, we provide an experimental study of the recovery guarantees for these various methods, and discuss several open problems which these experiments suggest.",
"Road information is fundamental not only in the military field but also common daily living. Automatic road extraction from a remote sensing images can provide references for city planning as well as transportation database and map updating. However, owing to the spectral similarity between roads and impervious structures, the current methods solely using spectral characteristics are often ineffective. By contrast, the detailed information discernible from the high-resolution aerial images enables road extraction with spatial texture features. In this study, a knowledge-based method is established and proposed; this method incorporates the spatial texture feature into urban road extraction. The spatial texture feature is initially extracted by the local Moran’s I, and the derived texture is added to the spectral bands of image for image segmentation. Subsequently, features like brightness, standard deviation, rectangularity, aspect ratio, and area are selected to form the hypothesis and verification model based on road knowledge. Finally, roads are extracted by applying the hypothesis and verification model and are post-processed based on the mathematical morphology. The newly proposed method is evaluated by conducting two experiments. Results show that the completeness, correctness, and quality of the results could reach approximately 94 , 90 and 86 respectively, indicating that the proposed method is effective for urban road extraction."
]
} |
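A minimal version of the radius-and-duration style of stay-point extraction described above (a stay-point is a region within a constant radius in which the user remains longer than a time threshold) might look as follows. The 200 m and 20 min thresholds, the equirectangular distance approximation, and the toy trajectory are illustrative assumptions, not values taken from the cited papers.

```python
import math

def distance_m(p, q):
    # Equirectangular approximation; adequate for the short distances involved here.
    lat1, lon1, _ = p
    lat2, lon2, _ = q
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371000 * math.hypot(x, y)

def extract_stay_points(track, dist_thresh=200.0, time_thresh=20 * 60):
    """track: list of (lat, lon, unix_time). Returns (lat, lon, arrival, departure) tuples."""
    stay_points, i, n = [], 0, len(track)
    while i < n:
        j = i + 1
        while j < n and distance_m(track[i], track[j]) <= dist_thresh:
            j += 1
        duration = track[j - 1][2] - track[i][2]
        if duration >= time_thresh:                      # stayed long enough within the radius
            lat = sum(p[0] for p in track[i:j]) / (j - i)
            lon = sum(p[1] for p in track[i:j]) / (j - i)
            stay_points.append((lat, lon, track[i][2], track[j - 1][2]))
            i = j                                        # continue after the stay
        else:
            i += 1
    return stay_points

# Tiny illustrative trajectory: 30 minutes near one spot, then movement away.
track = ([(35.6581, 139.7017, t * 60) for t in range(30)]
         + [(35.6581 + 0.01 * k, 139.7017, 1800 + k * 60) for k in range(1, 6)])
print(extract_stay_points(track))
```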
1901.06257 | 2910992039 | Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods. | The challenge that most resembles our personalized visited-POI assignment task is detecting semantic locations from GPS trajectory data @cite_22 @cite_28 @cite_26 . @cite_22 extracted stay-points from trajectories and combined them with street addresses obtained by a reverse geocoder. Their method assigns a semantic label to stay-points by yellow-pages. This strategy resembles a nearest neighbor assignment to stay-points by a POI database. @cite_28 also extracted semantic locations from GPS trajectories in the same manner. @cite_26 extracted stay-points from user trajectories and applied a hierarchical clustering algorithm to combine stay-points to create hierarchical stay areas on a diagram called a tree-based hierarchical graph. The key difference between our personalized visited-POI assignment task and a semantic location detection task is that the semantic location is essentially determined on the basis of stay-points, while in this paper we determine a visited-POI on the basis of whether the user actually visits it. | {
"cite_N": [
"@cite_28",
"@cite_26",
"@cite_22"
],
"mid": [
"2614175038",
"1998451808",
"2439008650"
],
"abstract": [
"Identifying visited points of interest (PoIs) from vehicle trajectories remains an open problem that is difficult due to vehicles parking often at some distance from the visited PoI and due to some regions having a high PoI density. We propose a visited PoI extraction (VPE) method that identifies visited PoIs using a Bayesian network. The method considers stay duration, weekday, arrival time, and PoI category to compute the probability that a PoI is visited. We also provide a method to generate labeled data from unlabeled GPS trajectories. An experimental evaluation shows that VPE achieves a precision@3 value of 0.8, indicating that VPE is able to model the relationship between the temporal features of a stop and the category of the visited PoI.",
"The collection of huge amount of tracking data made possible by the widespread use of GPS devices, enabled the analysis of such data for several applications domains, ranging from traffic management to advertisement and social studies. However, the raw positioning data, as it is detected by GPS devices, lacks of semantic information since this data does not natively provide any additional contextual information like the places that people visited or the activities performed. Traditionally, this information is collected by hand filled questionnaire where a limited number of users are asked to annotate their tracks with the activities they have done. With the purpose of getting large amount of semantically rich trajectories, we propose an algorithm for automatically annotating raw trajectories with the activities performed by the users. To do this, we analyse the stops points trying to infer the Point Of Interest (POI) the user has visited. Based on the category of the POI and a probability measure based on the gravity law, we infer the activity performed. We experimented and evaluated the method in a real case study of car trajectories, manually annotated by users with their activities. Experimental results are encouraging and will drive our future works.",
"The prevalence of smartphones and mobile social networks allow the users to share their location-based life experience much easier. The large amount of data generated in related location-based social networks provides informative cues on user's behaviors and preferences to support personalized location-based services, like point-of-interest (POI) recommendation. Yet achieving accurate personalized POI recommendation is challenging as the data available for each user is highly sparse. In addition, the computational complexity is high due to the large number of users. In this paper, a novel methodology for personalized successive POI recommendation is proposed. First, the preferred successive category of location is predicted using a third-rank tensor computed based on the partially observed transitions between the categories of user's successive locations where the missing transitions are uncovered by inferring the group preference. The group is achieved according to users' demographics and frequently visited locations. Then, a bipartite graph is constructed based on the recommended categories for each user. To obtain the personalized ranking of locations, a distance weighted HITS algorithm is proposed so that the location authority score is updated iteratively according to the visiting frequency of the group and some distance constraints. The proposed two-step approach with the category prediction incorporated aims to boost the location prediction performance via the smoothing and at the same time reduce the complexity. Experimental results obtained based on the real-world location-based social network data show that the proposed approach outperforms the existing state-of-the-art methods by a large margin."
]
} |
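A minimal sketch of the kind of gravity-law scoring mentioned in the second abstract above, where a candidate POI near a stop is scored by an attractiveness weight divided by a power of its distance. The category weights, the exponent, and all names and coordinates here are illustrative assumptions, not the cited authors' actual model.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical per-category "attractiveness" weights (assumed, not from the cited paper).
ATTRACTIVENESS = {"restaurant": 3.0, "supermarket": 2.0, "park": 1.0}

def gravity_scores(stop, pois, exponent=2.0):
    """Score each candidate POI near a stop by attractiveness / distance**exponent."""
    scores = {}
    for poi in pois:
        d = max(haversine_m(stop["lat"], stop["lon"], poi["lat"], poi["lon"]), 1.0)
        scores[poi["name"]] = ATTRACTIVENESS.get(poi["category"], 1.0) / d ** exponent
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}  # normalise to probabilities

stop = {"lat": 45.0703, "lon": 7.6869}
pois = [
    {"name": "Trattoria Roma", "category": "restaurant", "lat": 45.0705, "lon": 7.6871},
    {"name": "City Park", "category": "park", "lat": 45.0712, "lon": 7.6900},
]
print(gravity_scores(stop, pois))
```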
1901.06257 | 2910992039 | Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods. | POI recommendation tasks are closely related to our target task. Many previous studies have addressed POI recommendations @cite_29 @cite_31 @cite_14 @cite_27 @cite_33 . Most used the collaborative filtering (CF) approach, which requires inter-user information, to achieve recommendations. @cite_31 performed co-clustering on users and stay-points to improve CF recommendations. @cite_27 proposed a framework that fuses user preferences to a POI with both social and geographical influences. @cite_14 showed that the most frequently used check-in history in location-based social networks is first-visit POIs and proposed a personalized PageRank-based method to improve the accuracy of estimating first-visit POI recommendations. @cite_33 introduced a time-aware feature into a CF-based approach and showed that incorporating temporal and spatial influences improves the accuracy of POI recommendations. These studies exploit other user check-in histories to recommend POIs to users. @cite_29 studied location recommendation with a location category hierarchy and concluded that since different users have varying levels of expertise and preferences, they should be treated differently in the recommendation process. | {
"cite_N": [
"@cite_14",
"@cite_33",
"@cite_29",
"@cite_27",
"@cite_31"
],
"mid": [
"2567312369",
"2044672016",
"2964057288",
"2073013176",
"2084677224"
],
"abstract": [
"Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) becomes a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different time. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms state-of-the-art successive POI recommendation model about 20 in [email protected] and [email protected]",
"With the rapid growth of location-based social networks, Point of Interest (POI) recommendation has become an important research problem. However, the scarcity of the check-in data, a type of implicit feedback data, poses a severe challenge for existing POI recommendation methods. Moreover, different types of context information about POIs are available and how to leverage them becomes another challenge. In this paper, we propose a ranking based geographical factorization method, called Rank-GeoFM, for POI recommendation, which addresses the two challenges. In the proposed model, we consider that the check-in frequency characterizes users' visiting preference and learn the factorization by ranking the POIs correctly. In our model, POIs both with and without check-ins will contribute to learning the ranking and thus the data sparsity problem can be alleviated. In addition, our model can easily incorporate different types of context information, such as the geographical influence and temporal influence. We propose a stochastic gradient descent based algorithm to learn the factorization. Experiments on publicly available datasets under both user-POI setting and user-time-POI setting have been conducted to test the effectiveness of the proposed method. Experimental results under both settings show that the proposed method outperforms the state-of-the-art methods significantly in terms of recommendation accuracy.",
"Point-of-interest (POI) recommendation is an important application for location-based social networks (LBSNs), which learns the user preference and mobility pattern from check-in sequences to recommend POIs. Previous studies show that modeling the sequential pattern of user check-ins is necessary for POI recommendation. Markov chain model, recurrent neural network, and the word2vec framework are used to model check-in sequences in previous work. However, all previous sequential models ignore the fact that check-in sequences on different days naturally exhibit the various temporal characteristics, for instance, \"work\" on weekday and \"entertainment\" on weekend. In this paper, we take this challenge and propose a Geo-Temporal sequential embedding rank (Geo-Teaser) model for POI recommendation. Inspired by the success of the word2vec framework to model the sequential contexts, we propose a temporal POI embedding model to learn POI representations under some particular temporal state. The temporal POI embedding model captures the contextual check-in information in sequences and the various temporal characteristics on different days as well. Furthermore, We propose a new way to incorporate the geographical influence into the pairwise preference ranking method through discriminating the unvisited POIs according to geographical information. Then we develop a geographically hierarchical pairwise preference ranking model. Finally, we propose a unified framework to recommend POIs combining these two models. To verify the effectiveness of our proposed method, we conduct experiments on two real-life datasets. Experimental results show that the Geo-Teaser model outperforms state-of-the-art models. Compared with the best baseline competitor, the Geo-Teaser model improves at least 20 on both datasets for all metrics.",
"The availability of user check-in data in large volume from the rapid growing location based social networks (LBSNs) enables many important location-aware services to users. Point-of-interest (POI) recommendation is one of such services, which is to recommend places where users have not visited before. Several techniques have been recently proposed for the recommendation service. However, no existing work has considered the temporal information for POI recommendations in LBSNs. We believe that time plays an important role in POI recommendations because most users tend to visit different places at different time in a day, visiting a restaurant at noon and visiting a bar at night. In this paper, we define a new problem, namely, the time-aware POI recommendation, to recommend POIs for a given user at a specified time in a day. To solve the problem, we develop a collaborative recommendation model that is able to incorporate temporal information. Moreover, based on the observation that users tend to visit nearby POIs, we further enhance the recommendation model by considering geographical information. Our experimental results on two real-world datasets show that the proposed approach outperforms the state-of-the-art POI recommendation methods substantially.",
"Recommending users with their preferred points-of-interest (POIs), e.g., museums and restaurants, has become an important feature for location-based social networks (LBSNs), which benefits people to explore new places and businesses to discover potential customers. However, because users only check in a few POIs in an LBSN, the user-POI check-in interaction is highly sparse, which renders a big challenge for POI recommendations. To tackle this challenge, in this study we propose a new POI recommendation approach called GeoSoCa through exploiting geographical correlations, social correlations and categorical correlations among users and POIs. The geographical, social and categorical correlations can be learned from the historical check-in data of users on POIs and utilized to predict the relevance score of a user to an unvisited POI so as to make recommendations for users. First, in GeoSoCa we propose a kernel estimation method with an adaptive bandwidth to determine a personalized check-in distribution of POIs for each user that naturally models the geographical correlations between POIs. Then, GeoSoCa aggregates the check-in frequency or rating of a user's friends on a POI and models the social check-in frequency or rating as a power-law distribution to employ the social correlations between users. Further, GeoSoCa applies the bias of a user on a POI category to weigh the popularity of a POI in the corresponding category and models the weighed popularity as a power-law distribution to leverage the categorical correlations between POIs. Finally, we conduct a comprehensive performance evaluation for GeoSoCa using two large-scale real-world check-in data sets collected from Foursquare and Yelp. Experimental results show that GeoSoCa achieves significantly superior recommendation quality compared to other state-of-the-art POI recommendation techniques."
]
} |
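The related-work passage in the record above describes collaborative-filtering POI recommendation with geographical and temporal influences. The sketch below is a deliberately small user-based CF scorer damped by an exponential distance decay; the check-in matrix, coordinates, and decay form are assumptions and do not reproduce any of the cited systems.

```python
import numpy as np

# Toy check-in count matrix: rows = users, columns = POIs (values = visit counts).
R = np.array([
    [3, 0, 1, 0],
    [2, 1, 0, 0],
    [0, 0, 4, 2],
], dtype=float)
poi_xy = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])  # toy coordinates

def cosine_sim(R):
    norm = np.linalg.norm(R, axis=1, keepdims=True)
    norm[norm == 0] = 1.0
    X = R / norm
    return X @ X.T

def recommend(user, R, poi_xy, decay=0.5, top_k=2):
    """User-based CF score for unvisited POIs, weighted by a geographic distance decay."""
    sim = cosine_sim(R)[user].copy()
    sim[user] = 0.0                           # use only *other* users' histories
    cf_score = sim @ R                        # aggregate similar users' check-ins
    visited = R[user] > 0
    centroid = poi_xy[visited].mean(axis=0)   # crude proxy for the user's activity area
    dist = np.linalg.norm(poi_xy - centroid, axis=1)
    score = cf_score * np.exp(-decay * dist)  # geographical influence as exponential decay
    score[visited] = -np.inf                  # recommend only unvisited POIs
    return np.argsort(score)[::-1][:top_k]

print(recommend(user=0, R=R, poi_xy=poi_xy))
```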
1901.06257 | 2910992039 | Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods. | Other studies on a location naming task @cite_18 @cite_36 and a POI recommendation task @cite_15 use a supervised learning algorithm @cite_10 @cite_44 to build POI ranking models. To formalize their problems as a ranking challenge, they treat a location (i.e., longitude and latitude) as a query and a user's check-in data at it as relevance labels. Their methods utilize user history, the statistics of POIs in a check-in service, and other information to generate features. Then the ranking model uses the features to rank POI candidates. The key difference between a visited-POI assignment task and a POI recommendation task is that the former requires significant location extraction. Previous POI recommendation studies assume that significant locations are given, but our visited-POI assignment task does not. One straightforward approach is cascading a stay-point extraction algorithm and a POI recommendation method. We regard the nearest neighbor method @cite_22 and learning-to-rank methods @cite_18 @cite_15 as similar approaches to our proposed method. (We do not consider the method by @cite_36 to be a similar approach because it requires the check-in histories of many users to calculate latent topic features; its other part is equivalent to the previous work @cite_18 .) | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_36",
"@cite_44",
"@cite_15",
"@cite_10"
],
"mid": [
"2044672016",
"2964057288",
"2084677224",
"2567312369",
"2059512502",
"2439008650"
],
"abstract": [
"With the rapid growth of location-based social networks, Point of Interest (POI) recommendation has become an important research problem. However, the scarcity of the check-in data, a type of implicit feedback data, poses a severe challenge for existing POI recommendation methods. Moreover, different types of context information about POIs are available and how to leverage them becomes another challenge. In this paper, we propose a ranking based geographical factorization method, called Rank-GeoFM, for POI recommendation, which addresses the two challenges. In the proposed model, we consider that the check-in frequency characterizes users' visiting preference and learn the factorization by ranking the POIs correctly. In our model, POIs both with and without check-ins will contribute to learning the ranking and thus the data sparsity problem can be alleviated. In addition, our model can easily incorporate different types of context information, such as the geographical influence and temporal influence. We propose a stochastic gradient descent based algorithm to learn the factorization. Experiments on publicly available datasets under both user-POI setting and user-time-POI setting have been conducted to test the effectiveness of the proposed method. Experimental results under both settings show that the proposed method outperforms the state-of-the-art methods significantly in terms of recommendation accuracy.",
"Point-of-interest (POI) recommendation is an important application for location-based social networks (LBSNs), which learns the user preference and mobility pattern from check-in sequences to recommend POIs. Previous studies show that modeling the sequential pattern of user check-ins is necessary for POI recommendation. Markov chain model, recurrent neural network, and the word2vec framework are used to model check-in sequences in previous work. However, all previous sequential models ignore the fact that check-in sequences on different days naturally exhibit the various temporal characteristics, for instance, \"work\" on weekday and \"entertainment\" on weekend. In this paper, we take this challenge and propose a Geo-Temporal sequential embedding rank (Geo-Teaser) model for POI recommendation. Inspired by the success of the word2vec framework to model the sequential contexts, we propose a temporal POI embedding model to learn POI representations under some particular temporal state. The temporal POI embedding model captures the contextual check-in information in sequences and the various temporal characteristics on different days as well. Furthermore, We propose a new way to incorporate the geographical influence into the pairwise preference ranking method through discriminating the unvisited POIs according to geographical information. Then we develop a geographically hierarchical pairwise preference ranking model. Finally, we propose a unified framework to recommend POIs combining these two models. To verify the effectiveness of our proposed method, we conduct experiments on two real-life datasets. Experimental results show that the Geo-Teaser model outperforms state-of-the-art models. Compared with the best baseline competitor, the Geo-Teaser model improves at least 20 on both datasets for all metrics.",
"Recommending users with their preferred points-of-interest (POIs), e.g., museums and restaurants, has become an important feature for location-based social networks (LBSNs), which benefits people to explore new places and businesses to discover potential customers. However, because users only check in a few POIs in an LBSN, the user-POI check-in interaction is highly sparse, which renders a big challenge for POI recommendations. To tackle this challenge, in this study we propose a new POI recommendation approach called GeoSoCa through exploiting geographical correlations, social correlations and categorical correlations among users and POIs. The geographical, social and categorical correlations can be learned from the historical check-in data of users on POIs and utilized to predict the relevance score of a user to an unvisited POI so as to make recommendations for users. First, in GeoSoCa we propose a kernel estimation method with an adaptive bandwidth to determine a personalized check-in distribution of POIs for each user that naturally models the geographical correlations between POIs. Then, GeoSoCa aggregates the check-in frequency or rating of a user's friends on a POI and models the social check-in frequency or rating as a power-law distribution to employ the social correlations between users. Further, GeoSoCa applies the bias of a user on a POI category to weigh the popularity of a POI in the corresponding category and models the weighed popularity as a power-law distribution to leverage the categorical correlations between POIs. Finally, we conduct a comprehensive performance evaluation for GeoSoCa using two large-scale real-world check-in data sets collected from Foursquare and Yelp. Experimental results show that GeoSoCa achieves significantly superior recommendation quality compared to other state-of-the-art POI recommendation techniques.",
"Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) becomes a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different time. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms state-of-the-art successive POI recommendation model about 20 in [email protected] and [email protected]",
"In location-based social networks (LBSNs), new successive point-of-interest (POI) recommendation is a newly formulated task which tries to regard the POI a user currently visits as his POI-related query and recommend new POIs the user has not visited before. While carefully designed methods are proposed to solve this problem, they ignore the essence of the task which involves retrieval and recommendation problem simultaneously and fail to employ the social relations or temporal information adequately to improve the results. In order to solve this problem, we propose a new model called location and time aware social collaborative retrieval model (LTSCR), which has two distinct advantages: (1) it models the location, time, and social information simultaneously for the successive POI recommendation task; (2) it efficiently utilizes the merits of the collaborative retrieval model which leverages weighted approximately ranked pairwise (WARP) loss for achieving better top-n ranking results, just as the new successive POI recommendation task needs. We conducted some comprehensive experiments on publicly available datasets and demonstrate the power of the proposed method, with 46.6 growth in Precision@5 and 47.3 improvement in Recall@5 over the best previous method.",
"The prevalence of smartphones and mobile social networks allow the users to share their location-based life experience much easier. The large amount of data generated in related location-based social networks provides informative cues on user's behaviors and preferences to support personalized location-based services, like point-of-interest (POI) recommendation. Yet achieving accurate personalized POI recommendation is challenging as the data available for each user is highly sparse. In addition, the computational complexity is high due to the large number of users. In this paper, a novel methodology for personalized successive POI recommendation is proposed. First, the preferred successive category of location is predicted using a third-rank tensor computed based on the partially observed transitions between the categories of user's successive locations where the missing transitions are uncovered by inferring the group preference. The group is achieved according to users' demographics and frequently visited locations. Then, a bipartite graph is constructed based on the recommended categories for each user. To obtain the personalized ranking of locations, a distance weighted HITS algorithm is proposed so that the location authority score is updated iteratively according to the visiting frequency of the group and some distance constraints. The proposed two-step approach with the category prediction incorporated aims to boost the location prediction performance via the smoothing and at the same time reduce the complexity. Experimental results obtained based on the real-world location-based social network data show that the proposed approach outperforms the existing state-of-the-art methods by a large margin."
]
} |
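The related-work text above mentions the straightforward cascade of a stay-point extraction algorithm followed by a POI assignment step such as nearest neighbor. A rough sketch of that baseline cascade is given below; the distance and duration thresholds and the input layout are assumed, not taken from the cited papers.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between (lat, lon) pairs."""
    r = 6371000.0
    (la1, lo1), (la2, lo2) = p, q
    a = (math.sin(math.radians(la2 - la1) / 2) ** 2
         + math.cos(math.radians(la1)) * math.cos(math.radians(la2))
         * math.sin(math.radians(lo2 - lo1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def extract_stay_points(track, dist_thresh=200.0, time_thresh=20 * 60):
    """track: list of (lat, lon, unix_time). Returns (lat, lon, arrive, leave) stay-points."""
    stays, i, n = [], 0, len(track)
    while i < n:
        j = i + 1
        while j < n and haversine_m(track[i][:2], track[j][:2]) <= dist_thresh:
            j += 1
        if track[j - 1][2] - track[i][2] >= time_thresh:
            lats = [p[0] for p in track[i:j]]
            lons = [p[1] for p in track[i:j]]
            stays.append((sum(lats) / len(lats), sum(lons) / len(lons),
                          track[i][2], track[j - 1][2]))
            i = j
        else:
            i += 1
    return stays

def nearest_poi(stay, pois):
    """Baseline assignment: pick the geographically closest POI to a stay-point."""
    return min(pois, key=lambda poi: haversine_m(stay[:2], (poi["lat"], poi["lon"])))
```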
1901.06257 | 2910992039 | Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods. | More recently, several novel tasks have been proposed that are related to visited-POI assignments. For example, @cite_34 characterized the life cycle of POIs and investigated the POI evolution process over time. Espin- @cite_6 tackled a task that clusters users based on spatio-temporal dimensions with a non-negative tensor factorization method. They clearly focus on different targets. | {
"cite_N": [
"@cite_34",
"@cite_6"
],
"mid": [
"2507653743",
"2567312369"
],
"abstract": [
"A Point of Interest (POI) refers to a specific location that people may find useful or interesting. While a large body of research has been focused on identifying and recommending POIs, there are few studies on characterizing the life cycle of POIs. Indeed, a comprehensive understanding of POI life cycle can be helpful for various tasks, such as urban planning, business site selection, and real estate evaluation. In this paper, we develop a framework, named POLIP, for characterizing the POI life cycle with multiple data sources. Specifically, to investigate the POI evolution process over time, we first formulate a serial classification problem to predict the life status of POIs. The prediction approach is designed to integrate two important perspectives: 1) the spatial-temporal dependencies associated with the prosperity of POIs, and 2) the human mobility dynamics hidden in the citywide taxicab data related to the POIs at multiple granularity levels. In addition, based on the predicted life statuses in successive time windows for a given POI, we design an algorithm to characterize its life cycle. Finally, we performed extensive experiments using large-scale and real-world datasets. The results demonstrate the feasibility in automatic characterizing POI life cycle and shed important light on future research directions.",
"Successive point-of-interest (POI) recommendation in location-based social networks (LBSNs) becomes a significant task since it helps users to navigate a number of candidate POIs and provides the best POI recommendations based on users' most recent check-in knowledge. However, all existing methods for successive POI recommendation only focus on modeling the correlation between POIs based on users' check-in sequences, but ignore an important fact that successive POI recommendation is a time-subtle recommendation task. In fact, even with the same previous check-in information, users would prefer different successive POIs at different time. To capture the impact of time on successive POI recommendation, in this paper, we propose a spatial-temporal latent ranking (STELLAR) method to explicitly model the interactions among user, POI, and time. In particular, the proposed STELLAR model is built upon a ranking-based pairwise tensor factorization framework with a fine-grained modeling of user-POI, POI-time, and POI-POI interactions for successive POI recommendation. Moreover, we propose a new interval-aware weight utility function to differentiate successive check-ins' correlations, which breaks the time interval constraint in prior work. Evaluations on two real-world datasets demonstrate that the STELLAR model outperforms state-of-the-art successive POI recommendation model about 20 in [email protected] and [email protected]"
]
} |
1901.06257 | 2910992039 | Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods. | @cite_41 focused on the periodic behaviors of users and formalized POI check-in patterns as a stochastic point process. An interesting aspect of their method is that they take into account a factor of the influence of the close friends of users. In contrast, our task detects actual visited-POIs from obtained raw GPS trajectories and POI information, which includes the user's periodic behaviors without being limited to them. Therefore, our task indirectly includes their task, even though it does not specifically focus on periodic behaviors. | {
"cite_N": [
"@cite_41"
],
"mid": [
"2964057288"
],
"abstract": [
"Point-of-interest (POI) recommendation is an important application for location-based social networks (LBSNs), which learns the user preference and mobility pattern from check-in sequences to recommend POIs. Previous studies show that modeling the sequential pattern of user check-ins is necessary for POI recommendation. Markov chain model, recurrent neural network, and the word2vec framework are used to model check-in sequences in previous work. However, all previous sequential models ignore the fact that check-in sequences on different days naturally exhibit the various temporal characteristics, for instance, \"work\" on weekday and \"entertainment\" on weekend. In this paper, we take this challenge and propose a Geo-Temporal sequential embedding rank (Geo-Teaser) model for POI recommendation. Inspired by the success of the word2vec framework to model the sequential contexts, we propose a temporal POI embedding model to learn POI representations under some particular temporal state. The temporal POI embedding model captures the contextual check-in information in sequences and the various temporal characteristics on different days as well. Furthermore, We propose a new way to incorporate the geographical influence into the pairwise preference ranking method through discriminating the unvisited POIs according to geographical information. Then we develop a geographically hierarchical pairwise preference ranking model. Finally, we propose a unified framework to recommend POIs combining these two models. To verify the effectiveness of our proposed method, we conduct experiments on two real-life datasets. Experimental results show that the Geo-Teaser model outperforms state-of-the-art models. Compared with the best baseline competitor, the Geo-Teaser model improves at least 20 on both datasets for all metrics."
]
} |
1901.06257 | 2910992039 | Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods. | @cite_30 proposed a sequential personalized spatial item recommendation framework (SPORE), which recommends a sequence of POIs based on individual POI-visit histories. Their target closely resembles ours. However, the essential difference is that their task assumes a sequence of check-in records as input, unlike raw GPS trajectories for our case. This means that their method does not assume that an input sequence (check-in records) contains any false positive information, which is one of the main challenges of our task. In addition, SPORE, their proposed algorithm, cannot be directly applied to GPS trajectories since it does not have a mechanism that removes false positive stay-points, while our method can remove such meaningless stay-points. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2964057288"
],
"abstract": [
"Point-of-interest (POI) recommendation is an important application for location-based social networks (LBSNs), which learns the user preference and mobility pattern from check-in sequences to recommend POIs. Previous studies show that modeling the sequential pattern of user check-ins is necessary for POI recommendation. Markov chain model, recurrent neural network, and the word2vec framework are used to model check-in sequences in previous work. However, all previous sequential models ignore the fact that check-in sequences on different days naturally exhibit the various temporal characteristics, for instance, \"work\" on weekday and \"entertainment\" on weekend. In this paper, we take this challenge and propose a Geo-Temporal sequential embedding rank (Geo-Teaser) model for POI recommendation. Inspired by the success of the word2vec framework to model the sequential contexts, we propose a temporal POI embedding model to learn POI representations under some particular temporal state. The temporal POI embedding model captures the contextual check-in information in sequences and the various temporal characteristics on different days as well. Furthermore, We propose a new way to incorporate the geographical influence into the pairwise preference ranking method through discriminating the unvisited POIs according to geographical information. Then we develop a geographically hierarchical pairwise preference ranking model. Finally, we propose a unified framework to recommend POIs combining these two models. To verify the effectiveness of our proposed method, we conduct experiments on two real-life datasets. Experimental results show that the Geo-Teaser model outperforms state-of-the-art models. Compared with the best baseline competitor, the Geo-Teaser model improves at least 20 on both datasets for all metrics."
]
} |
1901.06257 | 2910992039 | Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods. | @cite_32 proposed a task that detects personally semantic places from GPS trajectories. Their proposed task also appears to closely resemble ours. However, their target is to detect places ( frequently visited by an individual user) that might have such important semantic meanings as home or office. In this perspective, their target is closely related to @cite_41 , as explained above. In contrast, our proposed task detects not only frequently visited places like homes and offices but also every POI that the user actually visits regardless of the frequency. | {
"cite_N": [
"@cite_41",
"@cite_32"
],
"mid": [
"1998451808",
"2067193733"
],
"abstract": [
"The collection of huge amount of tracking data made possible by the widespread use of GPS devices, enabled the analysis of such data for several applications domains, ranging from traffic management to advertisement and social studies. However, the raw positioning data, as it is detected by GPS devices, lacks of semantic information since this data does not natively provide any additional contextual information like the places that people visited or the activities performed. Traditionally, this information is collected by hand filled questionnaire where a limited number of users are asked to annotate their tracks with the activities they have done. With the purpose of getting large amount of semantically rich trajectories, we propose an algorithm for automatically annotating raw trajectories with the activities performed by the users. To do this, we analyse the stops points trying to infer the Point Of Interest (POI) the user has visited. Based on the category of the POI and a probability measure based on the gravity law, we infer the activity performed. We experimented and evaluated the method in a real case study of car trajectories, manually annotated by users with their activities. Experimental results are encouraging and will drive our future works.",
"With the increasing deployment and use of GPS-enabled devices, massive amounts of GPS data are becoming available. We propose a general framework for the mining of semantically meaningful, significant locations, e.g., shopping malls and restaurants, from such data. We present techniques capable of extracting semantic locations from GPS data. We capture the relationships between locations and between locations and users with a graph. Significance is then assigned to locations using random walks over the graph that propagates significance among the locations. In doing so, mutual reinforcement between location significance and user authority is exploited for determining significance, as are aspects such as the number of visits to a location, the durations of the visits, and the distances users travel to reach locations. Studies using up to 100 million GPS records from a confined spatio-temporal region demonstrate that the proposal is effective and is capable of outperforming baseline methods and an extension of an existing proposal."
]
} |
1901.06257 | 2910992039 | Knowledge discovery from GPS trajectory data is an important topic in several scientific areas, including data mining, human behavior analysis, and user modeling. This paper proposes a task that assigns personalized visited-POIs. Its goal is to estimate fine-grained and pre-defined locations (i.e., points of interest (POI)) that are actually visited by users and assign visited-location information to the corresponding span of their (personal) GPS trajectories. We also introduce a novel algorithm to solve this assignment task. First, we exhaustively extract stay-points as candidates for significant locations using a variant of a conventional stay-point extraction method. Then we select significant locations and simultaneously assign visited-POIs to them by considering various aspects, which we formulate in integer linear programming. Experimental results conducted on an actual user dataset show that our method achieves higher accuracy in the visited-POI assignment task than the various cascaded procedures of conventional methods. | @cite_40 employed a Bayesian network to detect the categories of visited-POIs, such as hospitals and universities, from the GPS trajectories of vehicles. Their motivation is closely related to ours. The essential difference is that they only detect the categories of visited-POIs; we detect the visited-POIs themselves. Additionally, they used vehicles' GPS trajectories, whereas we target the trajectories obtained from the mobile devices of users. Thus, our challenge is much more complicated. | {
"cite_N": [
"@cite_40"
],
"mid": [
"2614175038"
],
"abstract": [
"Identifying visited points of interest (PoIs) from vehicle trajectories remains an open problem that is difficult due to vehicles parking often at some distance from the visited PoI and due to some regions having a high PoI density. We propose a visited PoI extraction (VPE) method that identifies visited PoIs using a Bayesian network. The method considers stay duration, weekday, arrival time, and PoI category to compute the probability that a PoI is visited. We also provide a method to generate labeled data from unlabeled GPS trajectories. An experimental evaluation shows that VPE achieves a precision@3 value of 0.8, indicating that VPE is able to model the relationship between the temporal features of a stop and the category of the visited PoI."
]
} |
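The record above describes inferring the category of a visited POI from stop features such as stay duration, weekday, and arrival time. Below is a simplified naive-Bayes style scorer over those features; the cited work uses a full Bayesian network rather than this independence assumption, and the toy training tuples and smoothing constants are placeholders.

```python
from collections import defaultdict

# Made-up training tuples: (duration_bucket, weekday_kind, arrival_bucket) -> visited category.
TRAIN = [
    (("short", "workday", "noon"), "restaurant"),
    (("short", "workday", "noon"), "restaurant"),
    (("long", "workday", "morning"), "office"),
    (("long", "weekend", "afternoon"), "park"),
]
N_VALUES = 4  # placeholder for the number of distinct values per feature (used for smoothing)

def fit(train):
    prior = defaultdict(float)
    cond = defaultdict(lambda: defaultdict(float))
    for feats, cat in train:
        prior[cat] += 1
        for i, v in enumerate(feats):
            cond[cat][(i, v)] += 1
    return prior, cond

def score_categories(feats, prior, cond, alpha=1.0):
    """Naive-Bayes style posterior over POI categories for one stop."""
    total = sum(prior.values())
    scores = {}
    for cat, cnt in prior.items():
        p = cnt / total
        for i, v in enumerate(feats):
            p *= (cond[cat][(i, v)] + alpha) / (cnt + alpha * N_VALUES)
        scores[cat] = p
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

prior, cond = fit(TRAIN)
print(score_categories(("short", "workday", "noon"), prior, cond))
```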
1907.08015 | 2956604637 | The evolution and development of events have their own basic principles, which make events happen sequentially. Therefore, the discovery of such evolutionary patterns among events are of great value for event prediction, decision-making and scenario design of dialog systems. However, conventional knowledge graph mainly focuses on the entities and their relations, which neglects the real world events. In this paper, we present a novel type of knowledge base - Event Logic Graph (ELG), which can reveal evolutionary patterns and development logics of real world events. Specifically, ELG is a directed cyclic graph, whose nodes are events, and edges stand for the sequential, causal or hypernym-hyponym (is-a) relations between events. We constructed two domain ELG: financial domain ELG, which consists of more than 1.5 million of event nodes and more than 1.8 million of directed edges, and travel domain ELG, which consists of about 30 thousand of event nodes and more than 234 thousand of directed edges. Experimental results show that ELG is effective for the task of script event prediction. | The most relevant research area with ELG is script learning. The use of scripts in AI dates back to the 1970s @cite_4 @cite_17 . In this study, are an influential early encoding of situation-specific world event. In recent years, a growing body of research has investigated statistical script learning. , proposed unsupervised induction of from raw newswire text, with as the evaluation metric. , used bigram model to explicitly model the temporal order of event pairs. However, they all utilized a very simple representation of event as the form of (). To overcome the drawback of this event representation, Pichotta and Mooney @cite_14 presented an approach that employed events with multiple arguments. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_17"
],
"mid": [
"2145374219",
"2342155211",
"2535977253"
],
"abstract": [
"Scripts represent knowledge of stereotypical event sequences that can aid text understanding. Initial statistical methods have been developed to learn probabilistic scripts from raw text corpora; however, they utilize a very impoverished representation of events, consisting of a verb and one dependent argument. We present a script learning approach that employs events with multiple arguments. Unlike previous work, we model the interactions between multiple entities in a script. Experiments on a large corpus using the task of inferring held-out events (the “narrative cloze evaluation”) demonstrate that modeling multi-argument events improves predictive accuracy.",
"There is a small but growing body of research on statistical scripts, models of event sequences that allow probabilistic inference of implicit events from documents. These systems operate on structured verb-argument events produced by an NLP pipeline. We compare these systems with recent Recurrent Neural Net models that directly operate on raw tokens to predict sentences, finding the latter to be roughly comparable to the former in terms of predicting missing events in documents.",
"This paper addresses the problem of automatic temporal annotation of realistic human actions in video using minimal manual supervision. To this end we consider two associated problems: (a) weakly-supervised learning of action models from readily available annotations, and (b) temporal localization of human actions in test videos. To avoid the prohibitive cost of manual annotation for training, we use movie scripts as a means of weak supervision. Scripts, however, provide only implicit, noisy, and imprecise information about the type and location of actions in video. We address this problem with a kernel-based discriminative clustering algorithm that locates actions in the weakly-labeled training data. Using the obtained action samples, we train temporal action detectors and apply them to locate actions in the raw video data. Our experiments demonstrate that the proposed method for weakly-supervised learning of action models leads to significant improvement in action detection. We present detection results for three action classes in four feature length movies with challenging and realistic video data."
]
} |
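The passage above summarizes statistical script models that count co-occurring (verb, dependency) event pairs and evaluate with a cloze-style prediction of held-out events. A toy bigram event-pair model in that spirit is sketched below; the event chains, smoothing constant, and scoring rule are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

# Toy "event chains": each chain is the ordered list of (verb, dependency) events
# in which a single protagonist participates.
CHAINS = [
    [("enter", "subj"), ("order", "subj"), ("eat", "subj"), ("pay", "subj")],
    [("enter", "subj"), ("order", "subj"), ("pay", "subj"), ("leave", "subj")],
    [("board", "subj"), ("ride", "subj"), ("exit", "subj")],
]

pair_counts, event_counts = Counter(), Counter()
for chain in CHAINS:
    event_counts.update(chain)
    for a, b in combinations(chain, 2):   # ordered pairs: a occurs before b in the chain
        pair_counts[(a, b)] += 1

def cloze_rank(context, candidates, alpha=0.1):
    """Rank candidate events by a smoothed 'follows the context' bigram score."""
    vocab = len(event_counts)
    def score(c):
        s = 0.0
        for e in context:
            s += (pair_counts[(e, c)] + alpha) / (event_counts[e] + alpha * vocab)
        return s
    return sorted(candidates, key=score, reverse=True)

context = [("enter", "subj"), ("order", "subj")]
print(cloze_rank(context, [("pay", "subj"), ("ride", "subj"), ("eat", "subj")]))
```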
1907.07826 | 2959681520 | Detecting emotions from text is an extension of simple sentiment polarity detection. Instead of considering only positive or negative sentiments, emotions are conveyed using more tangible manner; thus, they can be expressed as many shades of gray. This paper manifests the results of our experimentation for fine-grained emotion analysis on Bangla text. We gathered and annotated a text corpus consisting of user comments from several Facebook groups regarding socio-economic and political issues, and we made efforts to extract the basic emotions (sadness, happiness, disgust, surprise, fear, anger) conveyed through these comments. Finally, we compared the results of the five most popular classical machine learning techniques namely Naive Bayes, Decision Tree, k-Nearest Neighbor (k-NN), Support Vector Machine (SVM) and K-Means Clustering with several combinations of features. Our best model (SVM with a non-linear radial-basis function (RBF) kernel) achieved an overall average accuracy score of 52.98 and an F1 score (macro) of 0.3324 | In a different paper @cite_0 , the authors described the preparation of the Bengali WordNet Affect containing six types of emotion words. They employed an automatic method of sense disambiguation. The Bengali WordNet Affect could be useful for emotion-related language processing tasks in Bengali. | {
"cite_N": [
"@cite_0"
],
"mid": [
"193025648"
],
"abstract": [
"Rapid growth of blogs in the Web 2.0 and the handshaking between multilingual search and sentiment analysis motivate us to develop a blog based emotion analysis system for Bengali. The present paper describes the identification, visualization and tracking of bloggers' emotions with respect to time from Bengali blog documents. A simple pre-processing technique has been employed to retrieve and store the bloggers' comments on specific topics. The assignment of Ekman's six basic emotions to the bloggers' comments is carried out at word, sentence and paragraph level granularities using the Bengali WordNet AffectLists. The evaluation produces the precision, recall and F-Score of 59.36 , 64.98 and 62.17 respectively for 1100 emotional comments retrieved from 20 blog documents. Each of the bloggers' emotions with respect to different timestamps is visualized by an emotion graph. The emotion graphs of 20 bloggers demonstrate that the system performs satisfactorily in case of emotion tracking."
]
} |
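The record above concerns emotion word lists (the Bengali WordNet Affect). The fragment below shows the simplest possible lexicon-hit tagger over such lists; the romanized word lists are invented placeholders, and real Bengali text would additionally need proper tokenization and morphological analysis.

```python
# Hypothetical, romanized word lists standing in for the Bengali WordNet Affect entries.
EMOTION_LEXICON = {
    "happiness": {"khushi", "anondo"},
    "sadness": {"dukkho", "kanna"},
    "anger": {"raag", "birokto"},
}

def tag_emotions(comment):
    """Count lexicon hits per emotion and return the best-scoring label (or 'none')."""
    tokens = comment.lower().split()
    counts = {emo: sum(t in words for t in tokens) for emo, words in EMOTION_LEXICON.items()}
    best = max(counts, key=counts.get)
    return (best if counts[best] > 0 else "none"), counts

print(tag_emotions("ajke khub khushi ar anondo"))
```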
1907.07826 | 2959681520 | Detecting emotions from text is an extension of simple sentiment polarity detection. Instead of considering only positive or negative sentiments, emotions are conveyed using more tangible manner; thus, they can be expressed as many shades of gray. This paper manifests the results of our experimentation for fine-grained emotion analysis on Bangla text. We gathered and annotated a text corpus consisting of user comments from several Facebook groups regarding socio-economic and political issues, and we made efforts to extract the basic emotions (sadness, happiness, disgust, surprise, fear, anger) conveyed through these comments. Finally, we compared the results of the five most popular classical machine learning techniques namely Naive Bayes, Decision Tree, k-Nearest Neighbor (k-NN), Support Vector Machine (SVM) and K-Means Clustering with several combinations of features. Our best model (SVM with a non-linear radial-basis function (RBF) kernel) achieved an overall average accuracy score of 52.98 and an F1 score (macro) of 0.3324 | On a case study for Bengali @cite_4 , the authors considered 1,100 sentences on eight different topics. They prepared a knowledge base for emoticons and also employed a morphological analyzer to identify the lexical keywords from the Bengali WordNet Affect lists. They claimed an overall precision, recall and F1-Score (micro) of , and respectively. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2065586386"
],
"abstract": [
"The present discussion highlights the aspects of an ongoing doctoral thesis grounded on the analysis and tracking of emotions from English and Bengali texts. Development of lexical resources and corpora meets the preliminary urgencies. The research spectrum aims to identify the evaluative emotional expressions at word, phrase, sentence, and document level granularities along with their associated holders and topics. Tracking of emotions based on topic or event was carried out by employing sense based affect scoring techniques. The labeled emotion corpora are being prepared from unlabeled examples to cope with the scarcity of emotional resources, especially for the resource constraint language like Bengali. Different unsupervised, supervised and semi-supervised strategies, adopted for coloring each outline of the research spectrum produce satisfactory outcomes"
]
} |
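The abstract above reports an SVM with an RBF kernel over term features as the best classical model. A scikit-learn sketch of that kind of pipeline follows; the tiny romanized examples and the hyperparameters are placeholders, and the authors' exact feature set and preprocessing may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

# Placeholder (romanized) comments and labels; a real experiment would use the
# annotated Facebook-comment corpus described in the abstract.
texts = ["ajke khub khushi", "khub dukkho lagche", "eto raag uthche keno",
         "darun anondo hocche", "mon kharap ar kanna pacche", "birokto lagche"]
labels = ["happiness", "sadness", "anger", "happiness", "sadness", "anger"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # unigram + bigram term weights
    SVC(kernel="rbf", C=10.0, gamma="scale"),
)
clf.fit(texts, labels)
print(clf.predict(["khub anondo ar khushi"]))
```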
1907.07885 | 2960548256 | We introduce a formal framework for analyzing trades in financial markets. An exchange is where multiple buyers and sellers participate to trade. These days, all big exchanges use computer algorithms that implement double sided auctions to match buy and sell requests and these algorithms must abide by certain regulatory guidelines. For example, market regulators enforce that a matching produced by exchanges should be , and . To verify these properties of trades, we first formally define these notions in a theorem prover and then give formal proofs of relevant results on matchings. Finally, we use this framework to verify properties of two important classes of double sided auctions. All the definitions and results presented in this paper are completely formalised in the Coq proof assistant without adding any additional axioms to it. | There is no prior work known to us which formalizes financial algorithms used by the exchanges. Passmore and Ignatovich in @cite_17 highlight the significance, opportunities and challenges involved in formalizing financial markets. Their work describes in detail the whole spectrum of financial algorithms that need to be verified for ensuring safe and fair markets. Matching algorithms used by the exchanges are at the core of this whole spectrum. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2735447866"
],
"abstract": [
"Many deep issues plaguing today’s financial markets are symptoms of a fundamental problem: The complexity of algorithms underlying modern finance has significantly outpaced the power of traditional tools used to design and regulate them. At Aesthetic Integration, we have pioneered the use of formal verification for analysing the safety and fairness of financial algorithms. With a focus on financial infrastructure (e.g., the matching logics of exchanges and dark pools and FIX connectivity between trading systems), we describe the landscape, and illustrate our Imandra formal verification system on a number of real-world examples. We sketch many open problems and future directions along the way."
]
} |
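The record above concerns double sided auctions that match buy and sell requests. Below is a toy uniform-price matching routine (bids sorted down, asks sorted up, match while the bid is at least the ask, clear everything at one price). This is a generic textbook construction, not the Coq formalization discussed in the paper, and the midpoint clearing price is just one simple convention.

```python
def uniform_price_match(bids, asks):
    """bids/asks: lists of (id, limit_price, qty). Returns (trades, clearing_price)."""
    bids = sorted(bids, key=lambda o: -o[1])   # best (highest) bids first
    asks = sorted(asks, key=lambda o: o[1])    # best (lowest) asks first
    trades, b, a = [], 0, 0
    bid_left = ask_left = None                 # leftover quantity of the current orders
    last_bid = last_ask = None
    while b < len(bids) and a < len(asks) and bids[b][1] >= asks[a][1]:
        bq = bid_left if bid_left is not None else bids[b][2]
        aq = ask_left if ask_left is not None else asks[a][2]
        q = min(bq, aq)
        trades.append((bids[b][0], asks[a][0], q))
        last_bid, last_ask = bids[b][1], asks[a][1]
        bid_left, ask_left = bq - q, aq - q
        if bid_left == 0:
            b, bid_left = b + 1, None
        if ask_left == 0:
            a, ask_left = a + 1, None
    price = None if last_bid is None else (last_bid + last_ask) / 2.0
    return trades, price

bids = [("B1", 105, 10), ("B2", 101, 5)]
asks = [("S1", 100, 8), ("S2", 104, 10)]
print(uniform_price_match(bids, asks))
```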
1907.07885 | 2960548256 | We introduce a formal framework for analyzing trades in financial markets. An exchange is where multiple buyers and sellers participate to trade. These days, all big exchanges use computer algorithms that implement double sided auctions to match buy and sell requests and these algorithms must abide by certain regulatory guidelines. For example, market regulators enforce that a matching produced by exchanges should be , and . To verify these properties of trades, we first formally define these notions in a theorem prover and then give formal proofs of relevant results on matchings. Finally, we use this framework to verify properties of two important classes of double sided auctions. All the definitions and results presented in this paper are completely formalised in the Coq proof assistant without adding any additional axioms to it. | On the other hand, there are quite a few works formalizing various concepts from auction theory @cite_15 @cite_6 @cite_7 . Most of these works focus on the Vickrey auction mechanism. In a Vickrey auction, there is a single seller with several distinct items and multiple buyers, each with a valuation for every subset of the items. Each buyer places a bid for every combination of the items. At the end of bidding, the aim of the seller is to maximise the total value of the items by suitably assigning them to the buyers. | {
"cite_N": [
"@cite_15",
"@cite_7",
"@cite_6"
],
"mid": [
"2495217534",
"2118442327",
"2059840483"
],
"abstract": [
"The goal of this chapter is to describe efficient auctions for multiple, indivisible objects in terms of the duality theory of linear programming. Because of its well-known incentive properties, we shall focus on Vickrey auctions. These are efficient auctions in which buyers pay the social opportunity cost of their purchases and consequently are rewarded with their (social) marginal product. We use the assignment model to frame our analysis.",
"The monopolist's theory of optimal single-item auctions for agents with independent private values can be summarized by two statements. The first is from Myerson [8]: the optimal auction is Vickrey with a reserve price. The second is from Bulow and Klemperer [1]: it is better to recruit one more bidder and run the Vickrey auction than to run the optimal auction. These results hold for single-item auctions under the assumption that the agents' valuations are independently and identically drawn from a distribution that satisfies a natural (and prevalent) regularity condition. These fundamental guarantees for the Vickrey auction fail to hold in general single-parameter agent mechanism design problems. We give precise (and weak) conditions under which approximate analogs of these two results hold, thereby demonstrating that simple mechanisms remain almost optimal in quite general single-parameter agent settings.",
"We introduce formal methods' of mechanized reasoning from computer science to address two problems in auction design and practice: is a given auction design soundly specified, possessing its intended properties; and, is the design faithfully implemented when actually run? Failure on either front can be hugely costly in large auctions. In the familiar setting of the combinatorial Vickrey auction, we use a mechanized reasoner, Isabelle, to first ensure that the auction has a set of desired properties (e.g. allocating all items at non-negative prices), and to then generate verified executable code directly from the specified design. Having established the expected results in a known context, we intend next to use formal methods to verify new auction designs."
]
} |
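The passage above describes the combinatorial Vickrey setting: one seller, bids on subsets of items, and an allocation that maximises reported value. The brute-force sketch below enumerates allocations and computes Vickrey (VCG) payments; it is exponential and only meant for tiny illustrative instances, and the valuations are made up.

```python
from itertools import product

ITEMS = ("a", "b")
# Buyer valuations over subsets (frozensets); missing subsets are worth 0.
VALUATIONS = {
    "x": {frozenset("a"): 4, frozenset("b"): 3, frozenset("ab"): 9},
    "y": {frozenset("a"): 5, frozenset("b"): 1, frozenset("ab"): 6},
}

def value(buyer, bundle):
    return VALUATIONS[buyer].get(frozenset(bundle), 0)

def best_allocation(buyers):
    """Assign each item to one buyer (or nobody) to maximise total reported value."""
    best, best_val = None, -1
    for assign in product(list(buyers) + [None], repeat=len(ITEMS)):
        bundles = {b: {i for i, owner in zip(ITEMS, assign) if owner == b} for b in buyers}
        total = sum(value(b, bundles[b]) for b in buyers)
        if total > best_val:
            best, best_val = bundles, total
    return best, best_val

def vcg_payments():
    alloc, _ = best_allocation(VALUATIONS)
    payments = {}
    for b in VALUATIONS:
        others = [o for o in VALUATIONS if o != b]
        _, welfare_without_b = best_allocation(others)
        welfare_others_with_b = sum(value(o, alloc[o]) for o in others)
        payments[b] = welfare_without_b - welfare_others_with_b  # externality imposed by b
    return alloc, payments

print(vcg_payments())
```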
1907.07729 | 2961115932 | We consider the problem of rigid registration, where we wish to jointly register multiple point sets via rigid transforms. This arises in applications such as sensor network localization, multiview registration, and protein structure determination. The least-squares estimator for this problem can be reduced to a rank-constrained semidefinite program (REG-SDP). It was recently shown that by formally applying the alternating direction method of multipliers (ADMM), we can derive an iterative solver (REG-ADMM) for REG-SDP, wherein each subproblem admits a simple closed-form solution. The empirical success of REG-ADMM has been demonstrated for multiview registration. However, its convergence does not follow from the existing literature on nonconvex ADMM. In this work, we study the convergence of REG-ADMM and our main findings are as follows. We prove that any fixed point of REG-ADMM is a stationary (KKT) point of REG-SDP. Moreover, for clean measurements, we give an explicit formula for the ADMM parameter @math , for which REG-ADMM is guaranteed to converge to the global optimum (with arbitrary initialization). If the noise is low, we can still show that the iterates converge to the global optimum, provided they are initialized sufficiently close to the optimum. On the other hand, if the noise is high, we explain why REG-ADMM becomes unstable if @math is less than some threshold, irrespective of the initialization. We present simulation results to support our theoretical predictions. The novelty of our analysis lies in the fact that we exploit the notion of tightness of convex relaxation to arrive at our convergence results. | The rank-restricted subset @math of the PSD cone is nonconvex, which implies that standard convergence result for ADMM @cite_27 does not directly apply to . However, we do leverage the convergence of convex ADMM for analyzing the convergence of when the noise is low. A phase transition phenomena similar to the one cited above has been analyzed in @cite_6 , albeit in the context of phase synchronization. Our proof of existence of the phase transition for is in the spirit of this analysis. | {
"cite_N": [
"@cite_27",
"@cite_6"
],
"mid": [
"2962853966",
"1800334520"
],
"abstract": [
"In this paper, we analyze the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function, ( (x_0, ,x_p,y) ), subject to coupled linear equality constraints. Our ADMM updates each of the primal variables (x_0, ,x_p,y ), followed by updating the dual variable. We separate the variable y from (x_i )’s as it has a special role in our analysis. The developed convergence guarantee covers a variety of nonconvex functions such as piecewise linear functions, ( _q ) quasi-norm, Schatten-q quasi-norm ( (0<q<1 )), minimax concave penalty (MCP), and smoothly clipped absolute deviation penalty. It also allows nonconvex constraints such as compact manifolds (e.g., spherical, Stiefel, and Grassman manifolds) and linear complementarity constraints. Also, the (x_0 )-block can be almost any lower semi-continuous function. By applying our analysis, we show, for the first time, that several ADMM algorithms applied to solve nonconvex models in statistical learning, optimization on manifold, and matrix decomposition are guaranteed to converge. Our results provide sufficient conditions for ADMM to converge on (convex or nonconvex) monotropic programs with three or more blocks, as they are special cases of our model. ADMM has been regarded as a variant to the augmented Lagrangian method (ALM). We present a simple example to illustrate how ADMM converges but ALM diverges with bounded penalty parameter ( ). Indicated by this example and other analysis in this paper, ADMM might be a better choice than ALM for some nonconvex nonsmooth problems, because ADMM is not only easier to implement, it is also more likely to converge for the concerned scenarios.",
"Recent empirical research indicates that many convex optimization problems with random constraints exhibit a phase transition as the number of constraints increases. For example, this phenomenon emerges in the @math minimization method for identifying a sparse vector from random linear samples. Indeed, this approach succeeds with high probability when the number of samples exceeds a threshold that depends on the sparsity level; otherwise, it fails with high probability. @PARASPLIT This paper provides the first rigorous analysis that explains why phase transitions are ubiquitous in random convex optimization problems. It also describes tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems with random measurements, to demixing problems under a random incoherence model, and also to cone programs with random affine constraints. @PARASPLIT These applications depend on foundational research in conic geometry. This paper introduces a new summary parameter, called the statistical dimension, that canonically extends the dimension of a linear subspace to the class of convex cones. The main technical result demonstrates that the sequence of conic intrinsic volumes of a convex cone concentrates sharply near the statistical dimension. This fact leads to an approximate version of the conic kinematic formula that gives bounds on the probability that a randomly oriented cone shares a ray with a fixed cone."
]
} |
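The related work above turns on the fact that the rank-restricted subset of the PSD cone is nonconvex. As a minimal illustration of that building block only (not the REG-ADMM solver of the paper), the following numpy sketch projects a symmetric matrix onto the set of PSD matrices of rank at most r via eigenvalue truncation; the function and variable names are illustrative.

```python
import numpy as np

def project_rank_r_psd(M, r):
    """Project a symmetric matrix M onto {X : X is PSD, rank(X) <= r} by keeping
    the r largest eigenvalues after clipping the negative ones to zero."""
    M = 0.5 * (M + M.T)                      # symmetrize defensively
    vals, vecs = np.linalg.eigh(M)           # eigenvalues in ascending order
    vals = np.clip(vals, 0.0, None)          # PSD part: drop negative eigenvalues
    keep = np.argsort(vals)[::-1][:r]        # indices of the r largest eigenvalues
    out = np.zeros_like(M)
    for i in keep:
        out += vals[i] * np.outer(vecs[:, i], vecs[:, i])
    return out

# Example: project a random symmetric 6x6 matrix onto {X : X PSD, rank(X) <= 2}.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A = 0.5 * (A + A.T)
X = project_rank_r_psd(A, r=2)
print(np.linalg.matrix_rank(X) <= 2, bool(np.all(np.linalg.eigvalsh(X) >= -1e-9)))
```

It is this eigenvalue truncation, rather than a convex projection, that makes the feasible set nonconvex and keeps the standard convex ADMM theory from applying directly.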
1907.07729 | 2961115932 | We consider the problem of rigid registration, where we wish to jointly register multiple point sets via rigid transforms. This arises in applications such as sensor network localization, multiview registration, and protein structure determination. The least-squares estimator for this problem can be reduced to a rank-constrained semidefinite program (REG-SDP). It was recently shown that by formally applying the alternating direction method of multipliers (ADMM), we can derive an iterative solver (REG-ADMM) for REG-SDP, wherein each subproblem admits a simple closed-form solution. The empirical success of REG-ADMM has been demonstrated for multiview registration. However, its convergence does not follow from the existing literature on nonconvex ADMM. In this work, we study the convergence of REG-ADMM and our main findings are as follows. We prove that any fixed point of REG-ADMM is a stationary (KKT) point of REG-SDP. Moreover, for clean measurements, we give an explicit formula for the ADMM parameter @math , for which REG-ADMM is guaranteed to converge to the global optimum (with arbitrary initialization). If the noise is low, we can still show that the iterates converge to the global optimum, provided they are initialized sufficiently close to the optimum. On the other hand, if the noise is high, we explain why REG-ADMM becomes unstable if @math is less than some threshold, irrespective of the initialization. We present simulation results to support our theoretical predictions. The novelty of our analysis lies in the fact that we exploit the notion of tightness of convex relaxation to arrive at our convergence results. | The theoretical convergence of ADMM for nonconvex problems has been studied in @cite_17 @cite_0 @cite_7 . However, a crucial working assumption common to these results does not hold in our case. More precisely, observe that we can rewrite as where @math is the indicator function associated with a feasible set @math @cite_27 , namely, @math if @math , and @math otherwise. Notice that, because of the indicator functions, the objective function in is non-differentiable in @math and @math . This violates a regularity assumption common in existing analyses of nonconvex ADMM, namely, that the objective must be smooth in one variable. In these works, convergence results are obtained by proving a monotonic decrease in the augmented Lagrangian. This requires: (i) bounding successive difference in dual variables by successive difference in primal variable, which is where the assumption of smoothness is used; (ii) requiring that the parameter @math is above a certain threshold. In particular, it is not clear whether this thresholding of the value of @math is fundamental to convergence, or just an artifact of the analysis. | {
"cite_N": [
"@cite_0",
"@cite_27",
"@cite_7",
"@cite_17"
],
"mid": [
"2962853966",
"2105693192",
"1549918636",
"1923817890"
],
"abstract": [
"In this paper, we analyze the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function, ( (x_0, ,x_p,y) ), subject to coupled linear equality constraints. Our ADMM updates each of the primal variables (x_0, ,x_p,y ), followed by updating the dual variable. We separate the variable y from (x_i )’s as it has a special role in our analysis. The developed convergence guarantee covers a variety of nonconvex functions such as piecewise linear functions, ( _q ) quasi-norm, Schatten-q quasi-norm ( (0<q<1 )), minimax concave penalty (MCP), and smoothly clipped absolute deviation penalty. It also allows nonconvex constraints such as compact manifolds (e.g., spherical, Stiefel, and Grassman manifolds) and linear complementarity constraints. Also, the (x_0 )-block can be almost any lower semi-continuous function. By applying our analysis, we show, for the first time, that several ADMM algorithms applied to solve nonconvex models in statistical learning, optimization on manifold, and matrix decomposition are guaranteed to converge. Our results provide sufficient conditions for ADMM to converge on (convex or nonconvex) monotropic programs with three or more blocks, as they are special cases of our model. ADMM has been regarded as a variant to the augmented Lagrangian method (ALM). We present a simple example to illustrate how ADMM converges but ALM diverges with bounded penalty parameter ( ). Indicated by this example and other analysis in this paper, ADMM might be a better choice than ALM for some nonconvex nonsmooth problems, because ADMM is not only easier to implement, it is also more likely to converge for the concerned scenarios.",
"We analyze the convergence rate of the alternating direction method of multipliers (ADMM) for minimizing the sum of two or more nonsmooth convex separable functions subject to linear constraints. Previous analysis of the ADMM typically assumes that the objective function is the sum of only two convex functions defined on two separable blocks of variables even though the algorithm works well in numerical experiments for three or more blocks. Moreover, there has been no rate of convergence analysis for the ADMM without strong convexity in the objective function. In this paper we establish the global R-linear convergence of the ADMM for minimizing the sum of any number of convex separable functions, assuming that a certain error bound condition holds true and the dual stepsize is sufficiently small. Such an error bound condition is satisfied for example when the feasible set is a compact polyhedron and the objective function consists of a smooth strictly convex function composed with a linear mapping, and a nonsmooth @math l1 regularizer. This result implies the linear convergence of the ADMM for contemporary applications such as LASSO without assuming strong convexity of the objective function.",
"The formulation @math minx,yf(x)+g(y),subjecttoAx+By=b,where f and g are extended-value convex functions, arises in many application areas such as signal processing, imaging and image processing, statistics, and machine learning either naturally or after variable splitting. In many common problems, one of the two objective functions is strictly convex and has Lipschitz continuous gradient. On this kind of problem, a very effective approach is the alternating direction method of multipliers (ADM or ADMM), which solves a sequence of f g-decoupled subproblems. However, its effectiveness has not been matched by a provably fast rate of convergence; only sublinear rates such as O(1 k) and @math O(1 k2) were recently established in the literature, though the O(1 k) rates do not require strong convexity. This paper shows that global linear convergence can be guaranteed under the assumptions of strong convexity and Lipschitz gradient on one of the two functions, along with certain rank assumptions on A and B. The result applies to various generalizations of ADM that allow the subproblems to be solved faster and less exactly in certain manners. The derived rate of convergence also provides some theoretical guidance for optimizing the ADM parameters. In addition, this paper makes meaningful extensions to the existing global convergence theory of ADM generalizations.",
"The alternating direction method with multipliers (ADMM) has been one of most powerful and successful methods for solving various convex or nonconvex composite problems that arise in the fields of image & signal processing and machine learning. In convex settings, numerous convergence results have been established for ADMM as well as its varieties. However, due to the absence of convexity, the convergence analysis of nonconvex ADMM is generally very difficult. In this paper we study the Bregman modification of ADMM (BADMM), which includes the conventional ADMM as a special case and often leads to an improvement of the performance of the algorithm. Under certain assumptions, we prove that the iterative sequence generated by BADMM converges to a stationary point of the associated augmented Lagrangian function. The obtained results underline the feasibility of ADMM in applications under nonconvex settings."
]
} |
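The indicator-function rewrite and the ADMM update structure discussed in the related work above can be illustrated on a toy convex problem: minimize ||x - a||^2 over the nonnegative orthant, split as x = z with an indicator on z. The sketch below only illustrates the mechanics (prox of an indicator = projection, followed by a scaled dual update); it is not the REG-ADMM iteration.

```python
import numpy as np

def toy_admm(a, rho=1.0, iters=50):
    """Two-block ADMM for: minimize ||x - a||^2 + iota_{z >= 0}(z) subject to x = z."""
    x = np.zeros_like(a); z = np.zeros_like(a); u = np.zeros_like(a)
    for _ in range(iters):
        x = (2 * a + rho * (z - u)) / (2 + rho)   # smooth term plus quadratic penalty
        z = np.clip(x + u, 0.0, None)             # prox of the indicator = projection
        u = u + x - z                             # scaled dual (multiplier) update
    return z

a = np.array([-1.0, 0.3, 2.5])
print(toy_admm(a))   # converges to [0., 0.3, 2.5], the projection of a onto z >= 0
```

In this convex toy case the augmented Lagrangian machinery behaves as the standard theory predicts for any rho > 0; the nonsmooth indicator terms are exactly what the smoothness assumptions of the cited nonconvex analyses rule out.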
1907.07729 | 2961115932 | We consider the problem of rigid registration, where we wish to jointly register multiple point sets via rigid transforms. This arises in applications such as sensor network localization, multiview registration, and protein structure determination. The least-squares estimator for this problem can be reduced to a rank-constrained semidefinite program (REG-SDP). It was recently shown that by formally applying the alternating direction method of multipliers (ADMM), we can derive an iterative solver (REG-ADMM) for REG-SDP, wherein each subproblem admits a simple closed-form solution. The empirical success of REG-ADMM has been demonstrated for multiview registration. However, its convergence does not follow from the existing literature on nonconvex ADMM. In this work, we study the convergence of REG-ADMM and our main findings are as follows. We prove that any fixed point of REG-ADMM is a stationary (KKT) point of REG-SDP. Moreover, for clean measurements, we give an explicit formula for the ADMM parameter @math , for which REG-ADMM is guaranteed to converge to the global optimum (with arbitrary initialization). If the noise is low, we can still show that the iterates converge to the global optimum, provided they are initialized sufficiently close to the optimum. On the other hand, if the noise is high, we explain why REG-ADMM becomes unstable if @math is less than some threshold, irrespective of the initialization. We present simulation results to support our theoretical predictions. The novelty of our analysis lies in the fact that we exploit the notion of tightness of convex relaxation to arrive at our convergence results. | We do not make such smoothness assumptions in our analysis. We can afford to do this since we are analyzing a special class of problems, as opposed to the more general setups in @cite_17 @cite_0 @cite_7 . Instead of showing a monotonic decrease in the augmented Lagrangian, our analysis relies on the phenomenon of tightness of convex relaxation. This provides more insights into the convergence behavior of the algorithm. For instance, our explanation in Section shows that the instability of the algorithm (in the high noise regime) for low values of @math is fundamental, while suggesting why this instability is not observed in the low noise regime. | {
"cite_N": [
"@cite_0",
"@cite_7",
"@cite_17"
],
"mid": [
"2147656689",
"1988351624",
"2121275167"
],
"abstract": [
"This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that has been contaminated with additive noise, the goal is to identify which elementary signals participated and to approximate their coefficients. Although many algorithms have been proposed, there is little theory which guarantees that these algorithms can accurately and efficiently solve the problem. This paper studies a method called convex relaxation, which attempts to recover the ideal sparse signal by solving a convex program. This approach is powerful because the optimization can be completed in polynomial time with standard scientific software. The paper provides general conditions which ensure that convex relaxation succeeds. As evidence of the broad impact of these results, the paper describes how convex relaxation can be used for several concrete signal recovery problems. It also describes applications to channel coding, linear regression, and numerical analysis",
"We discuss a new robust convergence analysis of the well-known subspace iteration algorithm for computing the dominant singular vectors of a matrix, also known as simultaneous iteration or power method. The result characterizes the convergence behavior of the algorithm when a large amount noise is introduced after each matrix-vector multiplication. While interesting in its own right, the main motivation comes from the problem of privacy-preserving spectral analysis where noise is added in order to achieve the privacy guarantee known as differential privacy. This result leads to nearly tight worst-case bounds for the problem of computing a differentially private low-rank approximation in the spectral norm. Our results extend to privacy-preserving principal component analysis. We obtain improvements for several variants of differential privacy that have been considered. The running time of our algorithm is nearly linear in the input sparsity leading to strong improvements in running time over previous work. Complementing our worst-case bounds, we show that the error dependence of our algorithm on the matrix dimension can be replaced by a tight dependence on the coherence of the matrix. This parameter is always bounded by the matrix dimension but often much smaller. Indeed, the assumption of low coherence is essential in several machine learning and signal processing applications.",
"We study learning formulations with non-convex regularizaton that are natural for sparse linear models. There are two approaches to this problem: • Heuristic methods such as gradient descent that only find a local minimum. A drawback of this approach is the lack of theoretical guarantee showing that the local minimum gives a good solution. • Convex relaxation such as L1-regularization that solves the problem under some conditions. However it often leads to sub-optimal sparsity in reality. This paper tries to remedy the above gap between theory and practice. In particular, we investigate a multi-stage convex relaxation scheme for solving problems with non-convex regularization. Theoretically, we analyze the behavior of a resulting two-stage relaxation scheme for the capped-L1 regularization. Our performance bound shows that the procedure is superior to the standard L1 convex relaxation for learning sparse targets. Experiments confirm the effectiveness of this method on some simulation and real data."
]
} |
1907.07803 | 2959891429 | One problem when studying how to find and fix syntax errors is how to get natural and representative examples of syntax errors. Most syntax error datasets are not free, open, and public, or they are extracted from novice programmers and do not represent syntax errors that the general population of developers would make. Programmers of all skill levels post questions and answers to Stack Overflow which may contain snippets of source code along with corresponding text and tags. Many snippets do not parse, thus they are ripe for forming a corpus of syntax errors and corrections. Our primary contribution is an approach for extracting natural syntax errors and their corresponding human made fixes to help syntax error research. A Python abstract syntax tree parser is used to determine preliminary errors and corrections on code blocks extracted from the SOTorrent data set. We further analyzed our code by executing the corrections in a Python interpreter. We applied our methodology to produce a public data set of 62,965 Python Stack Overflow code snippets with corresponding tags, errors, and stack traces. We found that errors made by Stack Overflow users do not match errors made by student developers or random mutations, implying there is a serious representativeness risk within the field. Finally we share our dataset openly so that future researchers can re-use and extend our syntax errors and fixes. | Syntactically incorrect code is artificially derivable, as formal programming languages provide grammar rules which can be referred to for correctness. Random token level insertions, deletions, and replacements were performed to generate syntax errors from existing open source Java projects @cite_2 . 10.7287 peerj.preprints.1132v1 created Python syntax errors from valid code mined from GitHub by applying mutations on tokens, characters, and lines @cite_13 . Although generated errors are appealing due to the availability of open source code, Just:2014:MVS:2635868.2635929 demonstrated limitations of using mutations for software testing research @cite_10 . Given the task of using mutants as replacements for real faults in automated software testing research, only 73% of real faults were found to be coupled to mutants, and when accounting for code coverage, the mutant to fault coupling effect is small @cite_10 . | {
"cite_N": [
"@cite_10",
"@cite_13",
"@cite_2"
],
"mid": [
"2060384944",
"1975394407",
"2767824559"
],
"abstract": [
"A frustrating aspect of software development is that compiler error messages often fail to locate the actual cause of a syntax error. An errant semicolon or brace can result in many errors reported throughout the file. We seek to find the actual source of these syntax errors by relying on the consistency of software: valid source code is usually repetitive and unsurprising. We exploit this consistency by constructing a simple N-gram language model of lexed source code tokens. We implemented an automatic Java syntax-error locator using the corpus of the project itself and evaluated its performance on mutated source code from several projects. Our tool, trained on the past versions of a project, can effectively augment the syntax error locations produced by the native compiler. Thus we provide a methodology and tool that exploits the naturalness of software source code to detect syntax errors alongside the parser.",
"Software developers often duplicate source code to replicate functionality. This practice can hinder the maintenance of a software project: bugs may arise when two identical code segments are edited inconsistently. This paper presents DejaVu, a highly scalable system for detecting these general syntactic inconsistency bugs. DejaVu operates in two phases. Given a target code base, a parallel inconsistent clone analysis first enumerates all groups of source code fragments that are similar but not identical. Next, an extensible buggy change analysis framework refines these results, separating each group of inconsistent fragments into a fine-grained set of inconsistent changes and classifying each as benign or buggy. On a 75+ million line pre-production commercial code base, DejaVu executed in under five hours and produced a report of over 8,000 potential bugs. Our analysis of a sizable random sample suggests with high likelihood that at this report contains at least 2,000 true bugs and 1,000 code smells. These bugs draw from a diverse class of software defects and are often simple to correct: syntactic inconsistencies both indicate problems and suggest solutions.",
"Building language models for source code enables a large set of improvements on traditional software engineering tasks. One promising application is automatic code completion. State-of-the-art techniques capture code regularities at token level with lexical information. Such language models are more suitable for predicting short token sequences, but become less effective with respect to long statement level predictions. In this paper, we have proposed PCC to optimize the token level based language modeling. Specifically, PCC introduced an intermediate representation (IR) for source code, which puts tokens into groups using lexeme and variable relative order. In this way, PCC is able to handle long token sequences, i.e., group sequences, to suggest a complete statement with the precise synthesizer. Further more, PCC employed a fuzzy matching technique which combined genetic and longest common sub-sequence algorithms to make the prediction more accurate. We have implemented a code completion plugin for Eclipse and evaluated it on open-source Java projects. The results have demonstrated the potential of PCC in generating precise long statement level predictions. In 30 -60 of the cases, it can correctly suggest the complete statement with only six candidates, and 40 -90 of the cases with ten candidates."
]
} |
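A minimal sketch of mutation-based syntax-error generation and AST-based parse checking, in the spirit of the mutation studies discussed above; the mutation operators, the seed, and the sample snippet are illustrative choices, not those of the cited works.

```python
import ast
import random

def mutate_source(source, rng):
    """Apply one random character-level mutation: delete, duplicate, or replace."""
    i = rng.randrange(len(source))
    op = rng.choice(["delete", "duplicate", "replace"])
    if op == "delete":
        return source[:i] + source[i + 1:]
    if op == "duplicate":
        return source[:i] + source[i] + source[i:]
    return source[:i] + rng.choice(":)(,") + source[i + 1:]

def parse_status(source):
    """Return None if the snippet parses, else a summary of the SyntaxError."""
    try:
        ast.parse(source)
        return None
    except SyntaxError as err:
        return f"{err.msg} (line {err.lineno}, offset {err.offset})"

rng = random.Random(42)
snippet = "def add(a, b):\n    return a + b\n"
for _ in range(5):
    mutant = mutate_source(snippet, rng)
    print(repr(mutant), "->", parse_status(mutant))
```

Some mutants still parse, which is one reason randomly generated errors need not match the distribution of errors people actually make.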
1907.07803 | 2959891429 | One problem when studying how to find and fix syntax errors is how to get natural and representative examples of syntax errors. Most syntax error datasets are not free, open, and public, or they are extracted from novice programmers and do not represent syntax errors that the general population of developers would make. Programmers of all skill levels post questions and answers to Stack Overflow which may contain snippets of source code along with corresponding text and tags. Many snippets do not parse, thus they are ripe for forming a corpus of syntax errors and corrections. Our primary contribution is an approach for extracting natural syntax errors and their corresponding human made fixes to help syntax error research. A Python abstract syntax tree parser is used to determine preliminary errors and corrections on code blocks extracted from the SOTorrent data set. We further analyzed our code by executing the corrections in a Python interpreter. We applied our methodology to produce a public data set of 62,965 Python Stack Overflow code snippets with corresponding tags, errors, and stack traces. We found that errors made by Stack Overflow users do not match errors made by student developers or random mutations, implying there is a serious representativeness risk within the field. Finally we share our dataset openly so that future researchers can re-use and extend our syntax errors and fixes. | Automated source code repair, like identifying and refactoring improper method names, also required a labeled dataset of valid and invalid source code @cite_0 . Program repair is often viewed as different than syntax error correction because testing is performed which serves as a benchmark for repaired code, while syntax errors rely primarily on parseability. | {
"cite_N": [
"@cite_0"
],
"mid": [
"1475493299"
],
"abstract": [
"Automated program repair can potentially reduce debugging costs and improvesoftware quality but recent studies have drawn attention to shortcomings inthe quality of automatically generated repairs. We propose a new kind ofrepair that uses the large body of existing open-source code to findpotential fixes. The key challenges lie in efficiently finding codesemantically similar (but not identical) to defective code and thenappropriately integrating that code into a buggy program. We presentSearchRepair, a repair technique that addresses these challenges by(1) encoding a large database of human-written code fragments as SMTconstraints on input-output behavior, (2) localizing a given defect to likelybuggy program fragments and deriving the desired input-output behavior forcode to replace those fragments, (3) using state-of-the-art constraintsolvers to search the database for fragments that satisfy that desiredbehavior and replacing the likely buggy code with these potential patches, and (4) validating that the patches repair the bug against program testsuites. We find that SearchRepair repairs 150 (19 ) of 778 benchmark Cdefects written by novice students, 20 of which are not repaired by GenProg, TrpAutoRepair, and AE. We compare the quality of the patches generated by thefour techniques by measuring how many independent, not-used-during-repairtests they pass, and find that SearchRepair-repaired programs pass 97.3 ofthe tests, on average, whereas GenProg-, TrpAutoRepair-, and AE-repairedprograms pass 68.7 , 72.1 , and 64.2 of the tests, respectively. We concludethat SearchRepair produces higher-quality repairs than GenProg, TrpAutoRepair, and AE, and repairs some defects those tools cannot."
]
} |
1907.07803 | 2959891429 | One problem when studying how to find and fix syntax errors is how to get natural and representative examples of syntax errors. Most syntax error datasets are not free, open, and public, or they are extracted from novice programmers and do not represent syntax errors that the general population of developers would make. Programmers of all skill levels post questions and answers to Stack Overflow which may contain snippets of source code along with corresponding text and tags. Many snippets do not parse, thus they are ripe for forming a corpus of syntax errors and corrections. Our primary contribution is an approach for extracting natural syntax errors and their corresponding human made fixes to help syntax error research. A Python abstract syntax tree parser is used to determine preliminary errors and corrections on code blocks extracted from the SOTorrent data set. We further analyzed our code by executing the corrections in a Python interpreter. We applied our methodology to produce a public data set of 62,965 Python Stack Overflow code snippets with corresponding tags, errors, and stack traces. We found that errors made by Stack Overflow users do not match errors made by student developers or random mutations, implying there is a serious representativeness risk within the field. Finally we share our dataset openly so that future researchers can re-use and extend our syntax errors and fixes. | Free and open datasets of naturally made errors and their fixes are more difficult to obtain. Blackbox, a data collection project within the BlueJ Java development environment, requires manual staff contact for access to data and forbids the release of the raw dataset @cite_11 . Pritchard:2015:FDE:2846680.2846681 analyzed Python programs submitted to CS Circles, an online tool for learning Python @cite_12 . kelley2018system studied Python code submitted by students in an introductory programming course at MIT @cite_4 . Gathering this data without privileged access to the provided code submissions is difficult, limiting the reproducibility of their research. Our research used Stack Overflow and is advantageous as the raw content is freely accessible on the internet, revisions and history are tracked, and contributors have a wide range of software engineering expertise and skill sets (Stack Overflow 2019 Survey, https://insights.stackoverflow.com/survey/2019). | {
"cite_N": [
"@cite_4",
"@cite_12",
"@cite_11"
],
"mid": [
"2057833160",
"2610548325",
"2406533925"
],
"abstract": [
"Automatically observing and recording the programming behaviour of novices is an established computing education research technique. However, prior studies have been conducted at a single institution on a small or medium scale, without the possibility of data re-use. Now, the widespread availability of always-on Internet access allows for data collection at a much larger, global scale. In this paper we report on the Blackbox project, begun in June 2013. Blackbox is a perpetual data collection project that collects data from worldwide users of the BlueJ IDE -- a programming environment designed for novice programmers. Over one hundred thousand users have already opted-in to Blackbox. The collected data is anonymous and is available to other researchers for use in their own studies, thus benefitting the larger research community. In this paper, we describe the data available via Blackbox, show some examples of analyses that can be performed using the collected data, and discuss some of the analysis challenges that lie ahead.",
"When programmers look for how to achieve certain programming tasks, Stack Overflow is a popular destination in search engine results. Over the years, Stack Overflow has accumulated an impressive knowledge base of snippets of code that are amply documented. We are interested in studying how programmers use these snippets of code in their projects. Can we find Stack Overflow snippets in real projects? When snippets are used, is this copy literal or does it suffer adaptations? And are these adaptations specializations required by the idiosyncrasies of the target artifact, or are they motivated by specific requirements of the programmer? The large-scale study presented on this paper analyzes 909k non-fork Python projects hosted on Github, which contain 290M function definitions, and 1.9M Python snippets captured in Stack Overflow. Results are presented as quantitative analysis of block-level code cloning intra and inter Stack Overflow and GitHub, and as an analysis of programming behaviors through the qualitative analysis of our findings.",
"Enriched by natural language texts, Stack Overflow code snippets arean invaluable code-centric knowledge base of small units ofsource code. Besides being useful for software developers, theseannotated snippets can potentially serve as the basis for automatedtools that provide working code solutions to specific natural languagequeries. With the goal of developing automated tools with the Stack Overflowsnippets and surrounding text, this paper investigates the followingquestions: (1) How usable are the Stack Overflow code snippets? and(2) When using text search engines for matching on the naturallanguage questions and answers around the snippets, what percentage ofthe top results contain usable code snippets?A total of 3M code snippets are analyzed across four languages: C #,Java, JavaScript, and Python. Python and JavaScript proved to be thelanguages for which the most code snippets are usable. Conversely,Java and C # proved to be the languages with the lowest usabilityrate. Further qualitative analysis on usable Python snippets showsthe characteristics of the answers that solve the original question. Finally,we use Google search to investigate the alignment ofusability and the natural language annotations around code snippets, andexplore how to make snippets in Stack Overflow anadequate base for future automatic program generation."
]
} |
1907.07769 | 2959758584 | We present a voice conversion solution using recurrent sequence to sequence modeling for DNNs. Our solution takes advantage of recent advances in attention based modeling in the fields of Neural Machine Translation (NMT), Text-to-Speech (TTS) and Automatic Speech Recognition (ASR). The problem consists of converting between voices in a parallel setting when @math audio pairs are available. Our seq2seq architecture makes use of a hierarchical encoder to summarize input audio frames. On the decoder side, we use an attention based architecture used in recent TTS works. Since there is a dearth of large multispeaker voice conversion databases needed for training DNNs, we resort to training the network with a large single speaker dataset as an autoencoder. This is then adapted for the smaller multispeaker voice conversion datasets available for voice conversion. In contrast with other voice conversion works that use @math , duration and linguistic features, our system uses mel spectrograms as the audio representation. Output mel frames are converted back to audio using a wavenet vocoder. | Pertinent to our discussion are seq2seq modeling works @cite_42 @cite_16 . In these works, additional loss terms are introduced to encourage the model to learn alignment and to preserve linguistic context. Alignment is maintained by noting that the attention curve is predominantly diagonal (in the voice conversion problem) between source and target, and including in the loss function a diagonal penalty matrix - a term referred to as guided attention in the TTS work @cite_2 . An additional consideration is to prevent the decoder from 'losing' linguistic context, as would arise when it simply learns to reconstruct the output of the target. This was addressed by using additional neural networks that ensure that the hidden representation produced by the encoder (similar reasoning applies to the decoder) was capable of reconstructing the input, and thereby retained context information. These manifest as additional loss terms - we also glean a similarity to cycle consistency losses @cite_20 - that they call 'context preservation losses'. Also noteworthy is that these approaches use non-recurrent architectures for their seq2seq modeling. | {
"cite_N": [
"@cite_42",
"@cite_20",
"@cite_16",
"@cite_2"
],
"mid": [
"2952470929",
"2774848319",
"2950429209",
"2099119623"
],
"abstract": [
"Recently, there has been an increasing interest in end-to-end speech recognition that directly transcribes speech to text without any predefined alignments. One approach is the attention-based encoder-decoder framework that learns a mapping between variable-length input and output sequences in one step using a purely data-driven method. The attention model has often been shown to improve the performance over another end-to-end approach, the Connectionist Temporal Classification (CTC), mainly because it explicitly uses the history of the target character without any conditional independence assumptions. However, we observed that the performance of the attention has shown poor results in noisy condition and is hard to learn in the initial training stage with long input sequences. This is because the attention model is too flexible to predict proper alignments in such cases due to the lack of left-to-right constraints as used in CTC. This paper presents a novel method for end-to-end speech recognition to improve robustness and achieve fast convergence by using a joint CTC-attention model within the multi-task learning framework, thereby mitigating the alignment issue. An experiment on the WSJ and CHiME-4 tasks demonstrates its advantages over both the CTC and attention-based encoder-decoder baselines, showing 5.4-14.6 relative improvements in Character Error Rate (CER).",
"We propose a parallel-data-free voice-conversion (VC) method that can learn a mapping from source to target speech without relying on parallel data. The proposed method is general purpose, high quality, and parallel-data free and works without any extra data, modules, or alignment procedure. It also avoids over-smoothing, which occurs in many conventional statistical model-based VC methods. Our method, called CycleGAN-VC, uses a cycle-consistent adversarial network (CycleGAN) with gated convolutional neural networks (CNNs) and an identity-mapping loss. A CycleGAN learns forward and inverse mappings simultaneously using adversarial and cycle-consistency losses. This makes it possible to find an optimal pseudo pair from unpaired data. Furthermore, the adversarial loss contributes to reducing over-smoothing of the converted feature sequence. We configure a CycleGAN with gated CNNs and train it with an identity-mapping loss. This allows the mapping function to capture sequential and hierarchical structures while preserving linguistic information. We evaluated our method on a parallel-data-free VC task. An objective evaluation showed that the converted feature sequence was near natural in terms of global variance and modulation spectra. A subjective evaluation showed that the quality of the converted speech was comparable to that obtained with a Gaussian mixture model-based method under advantageous conditions with parallel and twice the amount of data.",
"This paper describes a method based on a sequence-to-sequence learning (Seq2Seq) with attention and context preservation mechanism for voice conversion (VC) tasks. Seq2Seq has been outstanding at numerous tasks involving sequence modeling such as speech synthesis and recognition, machine translation, and image captioning. In contrast to current VC techniques, our method 1) stabilizes and accelerates the training procedure by considering guided attention and proposed context preservation losses, 2) allows not only spectral envelopes but also fundamental frequency contours and durations of speech to be converted, 3) requires no context information such as phoneme labels, and 4) requires no time-aligned source and target speech data in advance. In our experiment, the proposed VC framework can be trained in only one day, using only one GPU of an NVIDIA Tesla K80, while the quality of the synthesized speech is higher than that of speech converted by Gaussian mixture model-based VC and is comparable to that of speech generated by recurrent neural network-based text-to-speech synthesis, which can be regarded as an upper limit on VC performance.",
"In discriminative machine learning one is interested in training a system to optimize a certain desired measure of performance, or loss. In binary classification one typically tries to minimizes the error rate. But in structured prediction each task often has its own measure of performance such as the BLEU score in machine translation or the intersection-over-union score in PASCAL segmentation. The most common approaches to structured prediction, structural SVMs and CRFs, do not minimize the task loss: the former minimizes a surrogate loss with no guarantees for task loss and the latter minimizes log loss independent of task loss. The main contribution of this paper is a theorem stating that a certain perceptron-like learning rule, involving features vectors derived from loss-adjusted inference, directly corresponds to the gradient of task loss. We give empirical results on phonetic alignment of a standard test set from the TIMIT corpus, which surpasses all previously reported results on this problem."
]
} |
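The guided attention term mentioned above is commonly implemented with a diagonal penalty matrix of the form W[n, t] = 1 - exp(-((n/N - t/T)^2) / (2 g^2)), so that attention mass far from the diagonal is penalized. The sketch below uses that form with an illustrative width g; the exact matrix and weighting used by the cited works may differ.

```python
import numpy as np

def guided_attention_penalty(N, T, g=0.2):
    """Penalty matrix that is near zero on the (normalized) diagonal and grows off it."""
    n = np.arange(N)[:, None] / N
    t = np.arange(T)[None, :] / T
    return 1.0 - np.exp(-((n - t) ** 2) / (2.0 * g ** 2))

def guided_attention_loss(attention, g=0.2):
    """Mean of the elementwise product of attention weights and the penalty."""
    N, T = attention.shape
    return float(np.mean(attention * guided_attention_penalty(N, T, g)))

# A roughly diagonal attention matrix incurs a small loss; a flat one, a larger loss.
N, T = 50, 60
diag_attn = np.eye(N, T)
flat_attn = np.full((N, T), 1.0 / T)
print(guided_attention_loss(diag_attn), guided_attention_loss(flat_attn))
```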
1907.07769 | 2959758584 | We present a voice conversion solution using recurrent sequence to sequence modeling for DNNs. Our solution takes advantage of recent advances in attention based modeling in the fields of Neural Machine Translation (NMT), Text-to-Speech (TTS) and Automatic Speech Recognition (ASR). The problem consists of converting between voices in a parallel setting when @math audio pairs are available. Our seq2seq architecture makes use of a hierarchical encoder to summarize input audio frames. On the decoder side, we use an attention based architecture used in recent TTS works. Since there is a dearth of large multispeaker voice conversion databases needed for training DNNs, we resort to training the network with a large single speaker dataset as an autoencoder. This is then adapted for the smaller multispeaker voice conversion datasets available for voice conversion. In contrast with other voice conversion works that use @math , duration and linguistic features, our system uses mel spectrograms as the audio representation. Output mel frames are converted back to audio using a wavenet vocoder. | Developments in the generative modeling (primarily, Variational Autoencoders @cite_5 and Generative Adversarial Networks @cite_21 ) front have led to their use in voice conversion problems. In @cite_11 , a learned similarity metric obtained through a GAN discriminator is used to correct oversmoothed speech that results from maximum likelihood training, which imposes a particular form for the loss function (usually the MSE). A conditional VAEGAN @cite_35 setup is used in @cite_19 to implement voice conversion, with conditioning on speakers, together with a Wasserstein GAN discriminator @cite_31 to fix the blurriness issue associated with VAEs. Moreover, an important apparatus that is of use in training non-parallel voice setups consists of Cycle Consistency Losses from the famous CycleGAN @cite_20 work for images. This forms a building block in the papers @cite_51 and @cite_39 . | {
"cite_N": [
"@cite_35",
"@cite_21",
"@cite_39",
"@cite_19",
"@cite_5",
"@cite_31",
"@cite_51",
"@cite_20",
"@cite_11"
],
"mid": [
"2771099609",
"2962879692",
"2585635281",
"2949257576",
"2605135824",
"2964268978",
"2523469089",
"2774848319",
"2952030765"
],
"abstract": [
"In this paper, we propose a model using generative adversarial net (GAN) to generate realistic text. Instead of using standard GAN, we combine variational autoencoder (VAE) with generative adversarial net. The use of high-level latent random variables is helpful to learn the data distribution and solve the problem that generative adversarial net always emits the similar data. We propose the VGAN model where the generative model is composed of recurrent neural network and VAE. The discriminative model is a convolutional neural network. We train the model via policy gradient. We apply the proposed model to the task of text generation and compare it to other recent neural network based models, such as recurrent neural network language model and SeqGAN. We evaluate the performance of the model by calculating negative log-likelihood and the BLEU score. We conduct experiments on three benchmark datasets, and results show that our model outperforms other previous models.",
"Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.",
"The main contribution of this paper is a simple semisupervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market- 1501, CUHK03 and DukeMTMC-reID, we obtain +4.37 , +1.6 and +2.46 improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6 improvement over a strong baseline. The code is available at https: github.com layumi Person-reID_GAN.",
"The main contribution of this paper is a simple semi-supervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37 , +1.6 and +2.46 improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6 improvement over a strong baseline. The code is available at this https URL",
"Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.",
"As a new way of training generative models, Generative Adversarial Net (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is nontrivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.",
"As a new way of training generative models, Generative Adversarial Nets (GAN) that uses a discriminative model to guide the training of the generative model has enjoyed considerable success in generating real-valued data. However, it has limitations when the goal is for generating sequences of discrete tokens. A major reason lies in that the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model to the generative model. Also, the discriminative model can only assess a complete sequence, while for a partially generated sequence, it is non-trivial to balance its current score and the future one once the entire sequence has been generated. In this paper, we propose a sequence generation framework, called SeqGAN, to solve the problems. Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.",
"We propose a parallel-data-free voice-conversion (VC) method that can learn a mapping from source to target speech without relying on parallel data. The proposed method is general purpose, high quality, and parallel-data free and works without any extra data, modules, or alignment procedure. It also avoids over-smoothing, which occurs in many conventional statistical model-based VC methods. Our method, called CycleGAN-VC, uses a cycle-consistent adversarial network (CycleGAN) with gated convolutional neural networks (CNNs) and an identity-mapping loss. A CycleGAN learns forward and inverse mappings simultaneously using adversarial and cycle-consistency losses. This makes it possible to find an optimal pseudo pair from unpaired data. Furthermore, the adversarial loss contributes to reducing over-smoothing of the converted feature sequence. We configure a CycleGAN with gated CNNs and train it with an identity-mapping loss. This allows the mapping function to capture sequential and hierarchical structures while preserving linguistic information. We evaluated our method on a parallel-data-free VC task. An objective evaluation showed that the converted feature sequence was near natural in terms of global variance and modulation spectra. A subjective evaluation showed that the quality of the converted speech was comparable to that obtained with a Gaussian mixture model-based method under advantageous conditions with parallel and twice the amount of data.",
"This paper proposes an approach for applying GANs to NMT. We build a conditional sequence generative adversarial net which comprises of two adversarial sub models, a generator and a discriminator. The generator aims to generate sentences which are hard to be discriminated from human-translated sentences (i.e., the golden target sentences), And the discriminator makes efforts to discriminate the machine-generated sentences from human-translated ones. The two sub models play a mini-max game and achieve the win-win situation when they reach a Nash Equilibrium. Additionally, the static sentence-level BLEU is utilized as the reinforced objective for the generator, which biases the generation towards high BLEU points. During training, both the dynamic discriminator and the static BLEU objective are employed to evaluate the generated sentences and feedback the evaluations to guide the learning of the generator. Experimental results show that the proposed model consistently outperforms the traditional RNNSearch and the newly emerged state-of-the-art Transformer on English-German and Chinese-English translation tasks."
]
} |
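The cycle-consistency losses referenced above encode the requirement that mapping A to B and back to A (and B to A and back to B) approximately reconstructs the input. A toy numpy sketch with stand-in "generators" (fixed invertible linear maps rather than trained networks, purely for illustration):

```python
import numpy as np

def cycle_consistency_loss(G_ab, G_ba, x_a, x_b):
    """L1 cycle loss: A -> B -> A should return x_a, and B -> A -> B should return x_b."""
    cycle_a = G_ba(G_ab(x_a))
    cycle_b = G_ab(G_ba(x_b))
    return float(np.mean(np.abs(cycle_a - x_a)) + np.mean(np.abs(cycle_b - x_b)))

# Stand-in "generators": a fixed invertible linear map and its inverse.
rng = np.random.default_rng(0)
A = np.eye(8) + 0.1 * rng.standard_normal((8, 8))
G_ab = lambda x: x @ A
G_ba = lambda x: x @ np.linalg.inv(A)

x_a = rng.standard_normal((4, 8))   # batch of source-domain features
x_b = rng.standard_normal((4, 8))   # batch of target-domain features
print(cycle_consistency_loss(G_ab, G_ba, x_a, x_b))   # ~0 when the maps invert each other
```

In an adversarial setup this term is added alongside the GAN losses so that training on unpaired data still preserves content.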
1907.07769 | 2959758584 | We present a voice conversion solution using recurrent sequence to sequence modeling for DNNs. Our solution takes advantage of recent advances in attention based modeling in the fields of Neural Machine Translation (NMT), Text-to-Speech (TTS) and Automatic Speech Recognition (ASR). The problem consists of converting between voices in a parallel setting when @math audio pairs are available. Our seq2seq architecture makes use of a hierarchical encoder to summarize input audio frames. On the decoder side, we use an attention based architecture used in recent TTS works. Since there is a dearth of large multispeaker voice conversion databases needed for training DNNs, we resort to training the network with a large single speaker dataset as an autoencoder. This is then adapted for the smaller multispeaker voice conversion datasets available for voice conversion. In contrast with other voice conversion works that use @math , duration and linguistic features, our system uses mel spectrograms as the audio representation. Output mel frames are converted back to audio using a wavenet vocoder. | Our work is influenced by recent TTS works involving transfer learning and speaker adaptation. The recently published work @cite_49 demonstrates a methodology to adapt a trained network for new speakers with a wavenet. Likewise, in @cite_8 , a speaker embedding is extracted using a discriminative network for unseen, new speakers which is then used to condition a TTS pipeline similar to Tacotron. This philosophy is also used in @cite_15 where schemes are used to learn speaker embeddings extracted separately or trained as part of the model during adaptation. In all these contexts, it is emphasized that the onus is on adapting to small, limited data corpora, thereby circumventing the need to obtain large datasets to train these models from scratch. In our work, we use the same idea to get around the problem of not having enough data to train on the voice conversion dataset under consideration. However, in our work, instead of producing new speaker embeddings, we retrain the model for each new @math pair, a process that is rapid owing to the small size of the corpus. | {
"cite_N": [
"@cite_15",
"@cite_49",
"@cite_8"
],
"mid": [
"2892620417",
"2963432880",
"2913271971"
],
"abstract": [
"We present a meta-learning approach for adaptive text-to-speech (TTS) with few data. During training, we learn a multi-speaker model using a shared conditional WaveNet core and independent learned embeddings for each speaker. The aim of training is not to produce a neural network with fixed weights, which is then deployed as a TTS system. Instead, the aim is to produce a network that requires few data at deployment time to rapidly adapt to new speakers. We introduce and benchmark three strategies: (i) learning the speaker embedding while keeping the WaveNet core fixed, (ii) fine-tuning the entire architecture with stochastic gradient descent, and (iii) predicting the speaker embedding with a trained neural network encoder. The experiments show that these approaches are successful at adapting the multi-speaker neural network to new speakers, obtaining state-of-the-art results in both sample naturalness and voice similarity with merely a few minutes of audio data from new speakers.",
"We describe a neural network-based system for text-to-speech (TTS) synthesis that is able to generate speech audio in the voice of many different speakers, including those unseen during training. Our system consists of three independently trained components: (1) a speaker encoder network, trained on a speaker verification task using an independent dataset of noisy speech from thousands of speakers without transcripts, to generate a fixed-dimensional embedding vector from seconds of reference speech from a target speaker; (2) a sequence-to-sequence synthesis network based on Tacotron 2, which generates a mel spectrogram from text, conditioned on the speaker embedding; (3) an auto-regressive WaveNet-based vocoder that converts the mel spectrogram into a sequence of time domain waveform samples. We demonstrate that the proposed model is able to transfer the knowledge of speaker variability learned by the discriminatively-trained speaker encoder to the new task, and is able to synthesize natural speech from speakers that were not seen during training. We quantify the importance of training the speaker encoder on a large and diverse speaker set in order to obtain the best generalization performance. Finally, we show that randomly sampled speaker embeddings can be used to synthesize speech in the voice of novel speakers dissimilar from those used in training, indicating that the model has learned a high quality speaker representation.",
"Recently, speaker adaptation of neural TTS models received significant interest, and several studies focusing on this topic have been published. All of them explore an adaptation of an initial multi-speaker model trained on a corpus containing from tens to hundreds of individual speaker voices.In this work we focus on a challenging task of TTS voice conversion where an initial system is trained on a single-speaker data and then need to be adapted to a variety of external speaker voices. The TTS voice conversion setup represents a very important use case. Transcribed multi-speaker datasets might be unavailable for many languages while any TTS technology provider is expected to have at least one suitable single-speaker dataset per supported language.We present a neural TTS system comprising separate prosody generator and synthesizer DNN models. The system is trained on a high quality proprietary male speaker dataset. We show that the system models can be converted to a variety of external male and female ordinary voices and an extremely expressive artist’s voice and present crowd-base subjective evaluation results."
]
} |
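The attention step referred to in the record above (a decoder that pools over encoder summaries of audio frames) can be sketched in a few lines. This is not the paper's architecture, which uses a hierarchical encoder, a Tacotron-style attention decoder and a wavenet vocoder; it is a minimal dot-product attention step, with the array shapes and the scaling choice assumed purely for illustration.

```python
import numpy as np

def dot_product_attention(encoder_states, decoder_query):
    """One attention step: weight encoder frame summaries by similarity to the query.

    encoder_states: (T_enc, d) array of encoder outputs (e.g. summarized mel frames).
    decoder_query:  (d,) current decoder state.
    Returns the context vector (d,) and the attention weights (T_enc,).
    """
    scores = encoder_states @ decoder_query / np.sqrt(encoder_states.shape[1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax over encoder time steps
    context = weights @ encoder_states       # weighted sum of encoder states
    return context, weights

# Toy usage with random "mel frame" summaries (shapes are assumptions).
rng = np.random.default_rng(0)
H = rng.standard_normal((120, 64))   # 120 encoder frames, 64-dim summaries
q = rng.standard_normal(64)          # current decoder state
context, w = dot_product_attention(H, q)
print(context.shape, round(float(w.sum()), 3))   # (64,) 1.0
```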
1907.08038 | 2956579178 | We propose a novel algorithm to ensure @math -differential privacy for answering range queries on trajectory data. In order to guarantee privacy, differential privacy mechanisms add noise to either data or query, thus introducing errors to queries made and potentially decreasing the utility of information. In contrast to the state-of-the-art, our method achieves significantly lower error as it is the first data- and query-aware approach for such queries. The key challenge for answering range queries on trajectory data privately is to ensure an accurate count. Simply representing a trajectory as a set instead of a sequence of points will generally lead to highly inaccurate query answers as it ignores the sequential dependency of location points in trajectories, i.e., will violate the consistency of trajectory data. Furthermore, trajectories are generally unevenly distributed across a city and adding noise uniformly will generally lead to poor utility. To achieve differential privacy, our algorithm adaptively adds noise to the input data according to the given query set. It first privately partitions the data space into uniform regions and computes the traffic density of each region. The regions and their densities, in addition to the given query set, are then used to estimate the distribution of trajectories over the queried space, which ensures high accuracy for the given query set. We show the accuracy and efficiency of our algorithm using extensive empirical evaluations on real and synthetic data sets. | The authors of @cite_20 define the dependency between cells instead of points by mapping the trajectories to a grid to count the movement frequencies between the adjacent cells. However, a frequency vector only maintains the number of transitions for a group of observations without information about the spatial adjacency of two vectors. This information is crucial for a range query, as such a query counts the vectors overlapping the query area. In our work, we use a spatial histogram that is a grid but captures both the cell counts and the spatial adjacency of trajectories. | {
"cite_N": [
"@cite_20"
],
"mid": [
"1002055276"
],
"abstract": [
"Due to the high uptake of location-based services (LBSs), large spatio-temporal datasets of moving objects' trajectories are being created every day. An important task in spatial data analytics is to service range queries by returning trajectory counts within a queried region. The question of how to keep an individual user's data private whilst enabling spatial data analytics by third parties has become an urgent research direction. Indeed, it is increasingly becoming a concern for users. To preserve privacy we discard individual trajectories and aggregate counts over a spatial and temporal partition. However the privacy gained comes at a cost to utility: trajectories passing through multiple cells and re-entering a query region, lead to inaccurate query responses. This is known as the distinct counting problem. We propose the Connection Aware Spatial Euler (CASE) histogram to address this long-standing problem. The CASE histogram maintains the connectivity of a moving object path, but does not require the ID of an object to distinguish multiple entries into an arbitrary query region. Our approach is to process trajectories offline into aggregate counts which are sent to third parties, rather than the original trajectories. We also explore modifications of our aggregate counting approach that preserve differential privacy. Theoretically and experimentally we demonstrate that our method provides a high level of accuracy compared to the best known methods for the distinct counting problem, whilst preserving privacy. We conduct our experiments on both synthetic and real datasets over two competitive Euler histogram-based methods presented in the literature. Our methods enjoy improvements to accuracy from 10 up to 70 depending on trip data and query region size, with the greatest increase seen on the Microsoft T-Drive real dataset, representing a more than tripling of accuracy."
]
} |
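As background for the spatial-histogram discussion above, the following minimal sketch shows the baseline Laplace mechanism on a grid of cell counts: perturb each cell with Laplace noise of scale sensitivity/epsilon and answer a range query by summing noisy cells. It is not the data- and query-aware algorithm of the record above; the grid size, the epsilon value, and the assumption that one individual affects at most one cell (sensitivity 1) are illustrative simplifications.

```python
import numpy as np

def laplace_histogram(counts, epsilon, sensitivity=1.0, seed=0):
    """Return an epsilon-DP version of a 2-D grid of cell counts (Laplace mechanism)."""
    rng = np.random.default_rng(seed)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=counts.shape)
    return counts + noise

def range_query(noisy_counts, r0, r1, c0, c1):
    """Answer a rectangular range query by summing the noisy cells it covers."""
    return noisy_counts[r0:r1, c0:c1].sum()

# Toy 8x8 grid of (non-private) trajectory point counts.
rng = np.random.default_rng(1)
true_counts = rng.integers(0, 50, size=(8, 8)).astype(float)

noisy = laplace_histogram(true_counts, epsilon=0.5)
print("true :", true_counts[2:5, 2:5].sum())
print("noisy:", round(range_query(noisy, 2, 5, 2, 5), 2))
```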
1907.08038 | 2956579178 | We propose a novel algorithm to ensure @math -differential privacy for answering range queries on trajectory data. In order to guarantee privacy, differential privacy mechanisms add noise to either data or query, thus introducing errors to queries made and potentially decreasing the utility of information. In contrast to the state-of-the-art, our method achieves significantly lower error as it is the first data- and query-aware approach for such queries. The key challenge for answering range queries on trajectory data privately is to ensure an accurate count. Simply representing a trajectory as a set instead of a sequence of points will generally lead to highly inaccurate query answers as it ignores the sequential dependency of location points in trajectories, i.e., will violate the consistency of trajectory data. Furthermore, trajectories are generally unevenly distributed across a city and adding noise uniformly will generally lead to poor utility. To achieve differential privacy, our algorithm adaptively adds noise to the input data according to the given query set. It first privately partitions the data space into uniform regions and computes the traffic density of each region. The regions and their densities, in addition to the given query set, are then used to estimate the distribution of trajectories over the queried space, which ensures high accuracy for the given query set. We show the accuracy and efficiency of our algorithm using extensive empirical evaluations on real and synthetic data sets. | Recently, @cite_13 developed a mechanism named Private Spatial Histogram for range queries on trajectories. This mechanism publishes a synthetic spatial histogram under @math -differential privacy. It is a query-aware mechanism that extends an idea presented in @cite_3 . It takes a spatial histogram and a query set as input and utilizes the correlation between the queries to estimate the distribution of the original histogram privately. To maintain the consistency of the histogram, it uses an approach that may result in a histogram far from the original spatial histogram. The reason lies in the approach, which locally ensures consistency and may lead to overcorrection. In this paper, we propose a data- and query-aware mechanism that utilizes the density of trajectories in different regions as well as the correlation between the given queries to estimate the optimal spatial histogram with significantly higher utility. Our query-aware strategy employs a linear programming approach to provide a guarantee on an optimally consistent histogram, which leads to a significant improvement in the utility of results. | {
"cite_N": [
"@cite_13",
"@cite_3"
],
"mid": [
"1002055276",
"2859661414"
],
"abstract": [
"Due to the high uptake of location-based services (LBSs), large spatio-temporal datasets of moving objects' trajectories are being created every day. An important task in spatial data analytics is to service range queries by returning trajectory counts within a queried region. The question of how to keep an individual user's data private whilst enabling spatial data analytics by third parties has become an urgent research direction. Indeed, it is increasingly becoming a concern for users. To preserve privacy we discard individual trajectories and aggregate counts over a spatial and temporal partition. However the privacy gained comes at a cost to utility: trajectories passing through multiple cells and re-entering a query region, lead to inaccurate query responses. This is known as the distinct counting problem. We propose the Connection Aware Spatial Euler (CASE) histogram to address this long-standing problem. The CASE histogram maintains the connectivity of a moving object path, but does not require the ID of an object to distinguish multiple entries into an arbitrary query region. Our approach is to process trajectories offline into aggregate counts which are sent to third parties, rather than the original trajectories. We also explore modifications of our aggregate counting approach that preserve differential privacy. Theoretically and experimentally we demonstrate that our method provides a high level of accuracy compared to the best known methods for the distinct counting problem, whilst preserving privacy. We conduct our experiments on both synthetic and real datasets over two competitive Euler histogram-based methods presented in the literature. Our methods enjoy improvements to accuracy from 10 up to 70 depending on trip data and query region size, with the greatest increase seen on the Microsoft T-Drive real dataset, representing a more than tripling of accuracy.",
"Studying trajectories of individuals has received growing interest. The aggregated movement behaviour of people provides important insights about their habits, interests, and lifestyles. Understanding and utilizing trajectory data is a crucial part of many applications such as location based services, urban planning, and traffic monitoring systems. Spatial histograms and spatial range queries are key components in such applications to efficiently store and answer queries on trajectory data. A spatial histogram maintains the sequentiality of location points in a trajectory by a strong sequential dependency among histogram cells. This dependency is an essential property in answering spatial range queries. However, the trajectories of individuals are unique and even aggregating them in spatial histograms cannot completely ensure an individual's privacy. A key technique to ensure privacy for data publishing ϵ-differential privacy as it provides a strong guarantee on an individual's provided data. Our work is the first that guarantees ϵ-differential privacy for spatial histograms on trajectories, while ensuring the sequentiality of trajectory data, i.e., its consistency. Consistency is key for any database and our proposed mechanism, PriSH, synthesizes a spatial histogram and ensures the consistency of published histogram with respect to the strong dependency constraint. In extensive experiments on real and synthetic datasets, we show that (1) PriSH is highly scalable with the dataset size and granularity of the space decomposition, (2) the distribution of aggregate trajectory information in the synthesized histogram accurately preserves the distribution of original histogram, and (3) the output has high accuracy in answering arbitrary spatial range queries."
]
} |
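The linear-programming step mentioned above for obtaining a consistent histogram can be illustrated, under assumptions, as an L1 projection onto linear constraints. The paper's actual consistency constraints (the sequential-dependency structure of the spatial histogram) are not spelled out in the record, so the sketch below uses a made-up constraint, namely that the cell counts sum to a known total, purely to show the linprog formulation with slack variables.

```python
import numpy as np
from scipy.optimize import linprog

def l1_consistent_fit(noisy, A_eq, b_eq):
    """min ||x - noisy||_1  s.t.  A_eq @ x = b_eq,  x >= 0  (LP with slack variables t)."""
    n = noisy.size
    # Variables z = [x (n), t (n)]; minimize sum(t) with t_i >= |x_i - noisy_i|.
    c = np.concatenate([np.zeros(n), np.ones(n)])
    I = np.eye(n)
    A_ub = np.block([[ I, -I],     #  x - t <= noisy
                     [-I, -I]])    # -x - t <= -noisy
    b_ub = np.concatenate([noisy, -noisy])
    A_eq_z = np.hstack([A_eq, np.zeros_like(A_eq)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq_z, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.x[:n]

# Toy example: 4 noisy cell counts whose total must equal a known value of 100.
noisy = np.array([30.5, 24.2, 19.8, 28.9])
A_eq = np.ones((1, 4))          # single illustrative constraint: cells sum to the total
b_eq = np.array([100.0])
print(l1_consistent_fit(noisy, A_eq, b_eq))
```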
1907.07723 | 2959678280 | We study the problem of repeated play in a zero-sum game in which the payoff matrix may change, in a possibly adversarial fashion, on each round; we call these Online Matrix Games. Finding the Nash Equilibrium (NE) of a two player zero-sum game is core to many problems in statistics, optimization, and economics, and for a fixed game matrix this can be easily reduced to solving a linear program. But when the payoff matrix evolves over time our goal is to find a sequential algorithm that can compete with, in a certain sense, the NE of the long-term-averaged payoff matrix. We design an algorithm with small NE regret--that is, we ensure that the long-term payoff of both players is close to minimax optimum in hindsight. Our algorithm achieves near-optimal dependence with respect to the number of rounds and depends poly-logarithmically on the number of available actions of the players. Additionally, we show that the naive reduction, where each player simply minimizes its own regret, fails to achieve the stated objective regardless of which algorithm is used. We also consider the so-called bandit setting, where the feedback is significantly limited, and we provide an algorithm with small NE regret using one-point estimates of each payoff matrix. | The reader familiar with Online Convex Optimization (OCO) may find it closely related to the OMG problem. In the OCO setting, a player is given a convex, closed, and bounded action set @math , and must repeatedly choose an action @math before the convex function @math is revealed. The player's goal is to obtain sublinear regret, defined as @math . This problem is well studied and several algorithms such as Online Gradient Descent @cite_12 , Regularized Follow the Leader @cite_22 @cite_53 and Perturbed Follow the Leader @cite_6 achieve optimal individual regret bounds that scale as @math . The most natural (although incorrect) approach to attack the OMG problem is to equip each of the players with a sublinear individual regret algorithm. However, we will show that if both players use an algorithm that guarantees sublinear individual regret, then it is impossible to achieve sublinear NE regret when the payoff matrices are chosen adversarially. In other words, the algorithms for the OCO setting cannot be directly applied to the OMG problem considered in this paper. | {
"cite_N": [
"@cite_53",
"@cite_22",
"@cite_12",
"@cite_6"
],
"mid": [
"2256838191",
"2749843017",
"2473549844",
"2563280975"
],
"abstract": [
"We provide the first oracle efficient sublinear regret algorithms for adversarial versions of the contextual bandit problem. In this problem, the learner repeatedly makes an action on the basis of a context and receives reward for the chosen action, with the goal of achieving reward competitive with a large class of policies. We analyze two settings: i) in the transductive setting the learner knows the set of contexts a priori, ii) in the small separator setting, there exists a small set of contexts such that any two policies behave differently on one of the contexts in the set. Our algorithms fall into the Follow-The-Perturbed-Leader family (Kalai & Vempala, 2005) and achieve regret O(T3 4√K log(N)) in the transductive setting and O(T2 3d3 4K√log(N)) in the separator setting, where T is the number of rounds, K is the number of actions, N is the number of base-line policies, and d is the size of the separator. We actually solve the more general adversarial contextual semi-bandit linear optimization problem, whilst in the full information setting we address the even more general contextual combinatorial optimization. We provide several extensions and implications of our algorithms, such as switching regret and efficient learning with predictable sequences.",
"This paper considers online convex optimization (OCO) with stochastic constraints, which generalizes Zinkevich's OCO over a known simple fixed set by introducing multiple stochastic functional constraints that are i.i.d. generated at each round and are disclosed to the decision maker only after the decision is made. This formulation arises naturally when decisions are restricted by stochastic environments or deterministic environments with noisy observations. It also includes many important problems as special cases, such as OCO with long term constraints, stochastic constrained convex optimization, and deterministic constrained convex optimization. To solve this problem, this paper proposes a new algorithm that achieves @math expected regret and constraint violations and @math high probability regret and constraint violations. Experiments on a real-world data center scheduling problem further verify the performance of the new algorithm.",
"We consider the adversarial convex bandit problem and we build the first @math -time algorithm with @math -regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves @math -regret, and we show that a simple variant of this algorithm can be run in @math -time per step at the cost of an additional @math factor in the regret. These results improve upon the @math -regret and @math -time result of the first two authors, and the @math -regret and @math -time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve @math -regret, and moreover that this regret is unimprovable (the current best lower bound being @math and it is achieved with linear functions). For the simpler situation of zeroth order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order @math .",
"We present a unified, black-box-style method for developing and analyzing online convex optimization (OCO) algorithms for full-information online learning in delayed-feedback environments. Our new, simplified analysis enables us to substantially improve upon previous work and to solve a number of open problems from the literature. Specifically, we develop and analyze asynchronous AdaGrad-style algorithms from the Follow-the-Regularized-Leader (FTRL) and Mirror-Descent family that, unlike previous works, can handle projections and adapt both to the gradients and the delays, without relying on either strong convexity or smoothness of the objective function, or data sparsity. Our unified framework builds on a natural reduction from delayed-feedback to standard (non-delayed) online learning. This reduction, together with recent unification results for OCO algorithms, allows us to analyze the regret of generic FTRL and Mirror-Descent algorithms in the delayed-feedback setting in a unified manner using standard proof techniques. In addition, the reduction is exact and can be used to obtain both upper and lower bounds on the regret in the delayed-feedback setting."
]
} |
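For context on the naive reduction discussed above, the sketch below shows the fixed-matrix baseline: two exponential-weights (Hedge) learners in self-play each have sublinear individual regret, and their time-averaged strategies approximate a Nash equilibrium of that fixed game. The record's point is that this reduction breaks down for NE regret once the payoff matrix changes adversarially; the step size, horizon, and example game here are arbitrary choices for illustration.

```python
import numpy as np

def hedge_self_play(A, T=5000, eta=0.05):
    """Two Hedge players on a fixed zero-sum game with payoff matrix A.

    Row player pays x^T A y (minimizer), column player receives it (maximizer).
    Returns the time-averaged strategies, which approximate a Nash equilibrium.
    """
    n, m = A.shape
    x = np.ones(n) / n
    y = np.ones(m) / m
    x_avg = np.zeros(n)
    y_avg = np.zeros(m)
    for _ in range(T):
        x_avg += x / T
        y_avg += y / T
        loss_x = A @ y          # row player's per-action losses
        gain_y = A.T @ x        # column player's per-action gains
        x = x * np.exp(-eta * loss_x)
        x /= x.sum()
        y = y * np.exp(eta * gain_y)
        y /= y.sum()
    return x_avg, y_avg

A = np.array([[0.0, 1.0, -1.0],   # rock-paper-scissors style payoffs
              [-1.0, 0.0, 1.0],
              [1.0, -1.0, 0.0]])
x_bar, y_bar = hedge_self_play(A)
print(np.round(x_bar, 3), np.round(y_bar, 3))   # both close to the uniform NE
```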
1907.07723 | 2959678280 | We study the problem of repeated play in a zero-sum game in which the payoff matrix may change, in a possibly adversarial fashion, on each round; we call these Online Matrix Games. Finding the Nash Equilibrium (NE) of a two player zero-sum game is core to many problems in statistics, optimization, and economics, and for a fixed game matrix this can be easily reduced to solving a linear program. But when the payoff matrix evolves over time our goal is to find a sequential algorithm that can compete with, in a certain sense, the NE of the long-term-averaged payoff matrix. We design an algorithm with small NE regret--that is, we ensure that the long-term payoff of both players is close to minimax optimum in hindsight. Our algorithm achieves near-optimal dependence with respect to the number of rounds and depends poly-logarithmically on the number of available actions of the players. Additionally, we show that the naive reduction, where each player simply minimizes its own regret, fails to achieve the stated objective regardless of which algorithm is used. We also consider the so-called bandit setting, where the feedback is significantly limited, and we provide an algorithm with small NE regret using one-point estimates of each payoff matrix. | Related to the OMG problem with bandit feedback is the seminal work of @cite_4 . They provide the first sublinear regret bound for Online Convex Optimization with bandit feedback, using a one-point estimate of the gradient. The one-point gradient estimate used in @cite_4 is similar to those independently proposed in @cite_35 and in @cite_44 . The regret bound provided in @cite_4 is @math , which is suboptimal. In @cite_53 , the authors give the first @math bound for the special case when the functions are linear. More recently, @cite_39 and @cite_16 designed the first efficient algorithms with @math regret for the general online convex optimization case; unfortunately, the dependence on the dimension @math in the regret rate is a very large polynomial. Our one-point matrix estimate is most closely related to the random estimator in @cite_8 for linear functions. It is possible to use the more sophisticated techniques from @cite_53 @cite_39 @cite_16 to improve our NE regret bound in section ; however, the result does not seem to be immediate and we leave this as future work. | {
"cite_N": [
"@cite_35",
"@cite_4",
"@cite_8",
"@cite_53",
"@cite_39",
"@cite_44",
"@cite_16"
],
"mid": [
"2473549844",
"2120745256",
"2004001705",
"2301614296",
"2284345772",
"2951332996",
"2152898676"
],
"abstract": [
"We consider the adversarial convex bandit problem and we build the first @math -time algorithm with @math -regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves @math -regret, and we show that a simple variant of this algorithm can be run in @math -time per step at the cost of an additional @math factor in the regret. These results improve upon the @math -regret and @math -time result of the first two authors, and the @math -regret and @math -time result of Hazan and Li. Furthermore we conjecture that another variant of the algorithm could achieve @math -regret, and moreover that this regret is unimprovable (the current best lower bound being @math and it is achieved with linear functions). For the simpler situation of zeroth order stochastic convex optimization this corresponds to the conjecture that the optimal query complexity is of order @math .",
"In the online linear optimization problem, a learner must choose, in each round, a decision from a set D ⊂ ℝn in order to minimize an (unknown and changing) linear cost function. We present sharp rates of convergence (with respect to additive regret) for both the full information setting (where the cost function is revealed at the end of each round) and the bandit setting (where only the scalar cost incurred is revealed). In particular, this paper is concerned with the price of bandit information, by which we mean the ratio of the best achievable regret in the bandit setting to that in the full-information setting. For the full information case, the upper bound on the regret is O*( √nT), where n is the ambient dimension and T is the time horizon. For the bandit case, we present an algorithm which achieves O*(n3 2 √T) regret — all previous (nontrivial) bounds here were O(poly(n)T2 3) or worse. It is striking that the convergence rate for the bandit setting is only a factor of n worse than in the full information case — in stark contrast to the K-arm bandit setting, where the gap in the dependence on K is exponential (√TK vs. √T log K). We also present lower bounds showing that this gap is at least √n, which we conjecture to be the correct order. The bandit algorithm we present can be implemented efficiently in special cases of particular interest, such as path planning and Markov Decision Problems.",
"We study a general online convex optimization problem. We have a convex set S and an unknown sequence of cost functions c 1 , c 2 ,..., and in each period, we choose a feasible point x t in S, and learn the cost c t (x t ). If the function c t is also revealed after each period then, as Zinkevich shows in [25], gradient descent can be used on these functions to get regret bounds of O(√n). That is, after n rounds, the total cost incurred will be O(√n) more than the cost of the best single feasible decision chosen with the benefit of hindsight, min x Σ ct(x).We extend this to the \"bandit\" setting, where, in each period, only the cost c t (x t ) is revealed, and bound the expected regret as O(n3 4).Our approach uses a simple approximation of the gradient that is computed from evaluating c t at a single (random) point. We show that this biased estimate is sufficient to approximate gradient descent on the sequence of functions. In other words, it is possible to use gradient descent without seeing anything more than the value of the functions at a single point. The guarantees hold even in the most general case: online against an adaptive adversary.For the online linear optimization problem [15], algorithms with low regrets in the bandit setting have recently been given against oblivious [1] and adaptive adversaries [19]. In contrast to these algorithms, which distinguish between explicit explore and exploit periods, our algorithm can be interpreted as doing a small amount of exploration in each period.",
"We consider the problem of online convex optimization against an arbitrary adversary with bandit feedback, known as bandit convex optimization. We give the first @math -regret algorithm for this setting based on a novel application of the ellipsoid method to online learning. This bound is known to be tight up to logarithmic factors. Our analysis introduces new tools in discrete convex geometry.",
"The study of online convex optimization in the bandit setting was initiated by Kleinberg (2004) and (2005). Such a setting models a decision maker that has to make decisions in the face of adversarially chosen convex loss functions. Moreover, the only information the decision maker receives are the losses. The identities of the loss functions themselves are not revealed. In this setting, we reduce the gap between the best known lower and upper bounds for the class of smooth convex functions, i.e. convex functions with a Lipschitz continuous gradient. Building upon existing work on selfconcordant regularizers and one-point gradient estimation, we give the first algorithm whose expected regret is O(T ), ignoring constant and logarithmic factors.",
"In this paper, we study a special bandit setting of online stochastic linear optimization, where only one-bit of information is revealed to the learner at each round. This problem has found many applications including online advertisement and online recommendation. We assume the binary feedback is a random variable generated from the logit model, and aim to minimize the regret defined by the unknown linear function. Although the existing method for generalized linear bandit can be applied to our problem, the high computational cost makes it impractical for real-world problems. To address this challenge, we develop an efficient online learning algorithm by exploiting particular structures of the observation model. Specifically, we adopt online Newton step to estimate the unknown parameter and derive a tight confidence region based on the exponential concavity of the logistic loss. Our analysis shows that the proposed algorithm achieves a regret bound of @math , which matches the optimal result of stochastic linear bandits.",
"We address online linear optimization problems when the possible actions of the decision maker are represented by binary vectors. The regret of the decision maker is the difference between her realized loss and the minimal loss she would have achieved by picking, in hindsight, the best possible action. Our goal is to understand the magnitude of the best possible minimax regret. We study the problem under three different assumptions for the feedback the decision maker receives: full information, and the partial information models of the so-called “semi-bandit” and “bandit” problems. In the full information case we show that the standard exponentially weighted average forecaster is a provably suboptimal strategy. For the semi-bandit model, by combining the Mirror Descent algorithm and the INF Implicitely Normalized Forecaster strategy, we are able to prove the first optimal bounds. Finally, in the bandit case we discuss existing results in light of a new lower bound, and suggest a conjecture on the optimal regret in that case."
]
} |
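The one-point estimate discussed above can be written out explicitly. The sketch follows the generic construction (evaluate the loss once at a randomly perturbed point and scale the perturbation direction by d/delta), not the paper's specific matrix estimator; the test function, smoothing radius, and sample count are assumptions made only so the averaged estimate can be compared against a known gradient.

```python
import numpy as np

def one_point_estimates(f, x, delta, n_samples, rng):
    """Draw n_samples one-point gradient estimates (d / delta) * f(x + delta * u) * u,
    with u uniform on the unit sphere. Their mean approximates the gradient of a
    delta-smoothed version of f, which is close to grad f(x) when f is smooth."""
    d = x.size
    U = rng.standard_normal((n_samples, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    values = f(x + delta * U)                  # one loss evaluation per sample
    return (d / delta) * values[:, None] * U

# Check the estimator on a smooth quadratic, where the true gradient is known.
rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])
f = lambda X: np.sum((X - target) ** 2, axis=-1)

x = np.zeros(3)
estimates = one_point_estimates(f, x, delta=0.1, n_samples=500_000, rng=rng)
print("averaged estimate:", np.round(estimates.mean(axis=0), 2))   # approximately matches
print("true gradient:    ", 2 * (x - target))                      # [-2.  4. -1.]
```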
1901.06026 | 2909735549 | In crowd counting datasets, people appear at different scales, depending on their distance to the camera. To address this issue, we propose a novel multi-branch scale-aware attention network that exploits the hierarchical structure of convolutional neural networks and generates, in a single forward pass, multi-scale density predictions from different layers of the architecture. To aggregate these maps into our final prediction, we present a new soft attention mechanism that learns a set of gating masks. Furthermore, we introduce a scale-aware loss function to regularize the training of different branches and guide them to specialize on a particular scale. As this new training requires ground-truth annotations for the size of each head, we also propose a simple, yet effective technique to estimate it automatically. Finally, we present an ablation study on each of these components and compare our approach against the literature on 4 crowd counting datasets: UCF-QNRF, ShanghaiTech A & B and UCF_CC_50. Without bells and whistles, our approach achieves state-of-the-art on all these datasets. We observe a remarkable improvement on the UCF-QNRF (25%) and a significant one on the others (around 10%). | Attention models have been widely used for many computer vision tasks like image classification @cite_16 @cite_47 , object detection @cite_19 @cite_20 , semantic segmentation @cite_5 @cite_10 , saliency detection @cite_13 and, very recently, crowd counting @cite_30 . These models work by learning an intermediate attention map that is used to select the most relevant piece of information for visual analysis. The most similar works to ours are the ones of @cite_5 and @cite_30 . Both approaches extract multi-scale features from several resized input images and use an attention mechanism to weight the importance of each pixel of each feature map. One clear drawback of these approaches is that their inference is slow, as each test image needs to be re-sized and fed into the CNN model multiple times. Instead, our approach is much faster: it requires a single input image and a single pass through the model, as our multi-scale features are generated by pooling information from different layers of the same network instead of multiple passes through the same network. | {
"cite_N": [
"@cite_13",
"@cite_30",
"@cite_19",
"@cite_5",
"@cite_47",
"@cite_16",
"@cite_10",
"@cite_20"
],
"mid": [
"2503388974",
"2776207810",
"2158865742",
"2962961439",
"2787420051",
"2550553598",
"2962891704",
"2951260882"
],
"abstract": [
"We aim to model the top-down attention of a convolutional neural network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. We show a theoretic connection between the proposed contrastive attention formulation and the Class Activation Map computation. Efficient implementation of Excitation Backprop for common neural network layers is also presented. In experiments, we visualize the evidence of a model’s classification decision by computing the proposed top-down attention maps. For quantitative evaluation, we report the accuracy of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images. Finally, we demonstrate applications of our method in model interpretation and data annotation assistance for facial expression analysis and medical imaging tasks.",
"We aim to model the top-down attention of a convolutional neural network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. We show a theoretic connection between the proposed contrastive attention formulation and the Class Activation Map computation. Efficient implementation of Excitation Backprop for common neural network layers is also presented. In experiments, we visualize the evidence of a model’s classification decision by computing the proposed top-down attention maps. For quantitative evaluation, we report the accuracy of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images. Finally, we demonstrate applications of our method in model interpretation and data annotation assistance for facial expression analysis and medical imaging tasks.",
"Incorporating multi-scale features in fully convolutional neural networks (FCNs) has been a key element to achieving state-of-the-art performance on semantic image segmentation. One common way to extract multi-scale features is to feed multiple resized input images to a shared deep network and then merge the resulting features for pixelwise classification. In this work, we propose an attention mechanism that learns to softly weight the multi-scale features at each pixel location. We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model. The proposed attention model not only outperforms average- and max-pooling, but allows us to diagnostically visualize the importance of features at different positions and scales. Moreover, we show that adding extra supervision to the output at each scale is essential to achieving excellent performance when merging multi-scale features. We demonstrate the effectiveness of our model with extensive experiments on three challenging datasets, including PASCAL-Person-Part, PASCAL VOC 2012 and a subset of MS-COCO 2014.",
"We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map. Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parametrised by the score matrices, must alone be used for classification. Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values. Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter. Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets. When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset. We also demonstrate improved robustness against the fast gradient sign method of adversarial attack.",
"We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map. Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parameterised by the score matrices, must be used for classification. Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values. Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter. Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets. When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset. We also demonstrate improved robustness against the fast gradient sign method of adversarial attack.",
"Visual attention has been successfully applied in structural prediction tasks such as visual captioning and question answering. Existing visual attention models are generally spatial, i.e., the attention is modeled as spatial probabilities that re-weight the last conv-layer feature map of a CNN encoding an input image. However, we argue that such spatial attention does not necessarily conform to the attention mechanism — a dynamic feature extractor that combines contextual fixations over time, as CNN features are naturally spatial, channel-wise and multi-layer. In this paper, we introduce a novel convolutional neural network dubbed SCA-CNN that incorporates Spatial and Channel-wise Attentions in a CNN. In the task of image captioning, SCA-CNN dynamically modulates the sentence generation context in multi-layer feature maps, encoding where (i.e., attentive spatial locations at multiple layers) and what (i.e., attentive channels) the visual attention is. We evaluate the proposed SCA-CNN architecture on three benchmark image captioning datasets: Flickr8K, Flickr30K, and MSCOCO. It is consistently observed that SCA-CNN significantly outperforms state-of-the-art visual attention-based image captioning methods.",
"Incorporating multi-scale features in fully convolutional neural networks (FCNs) has been a key element to achieving state-of-the-art performance on semantic image segmentation. One common way to extract multi-scale features is to feed multiple resized input images to a shared deep network and then merge the resulting features for pixelwise classification. In this work, we propose an attention mechanism that learns to softly weight the multi-scale features at each pixel location. We adapt a state-of-the-art semantic image segmentation model, which we jointly train with multi-scale input images and the attention model. The proposed attention model not only outperforms averageand max-pooling, but allows us to diagnostically visualize the importance of features at different positions and scales. Moreover, we show that adding extra supervision to the output at each scale is essential to achieving excellent performance when merging multi-scale features. We demonstrate the effectiveness of our model with extensive experiments on three challenging datasets, including PASCAL-Person-Part, PASCAL VOC 2012 and a subset of MS-COCO 2014.",
"We aim to model the top-down attention of a Convolutional Neural Network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. In experiments, we demonstrate the accuracy and generalizability of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images."
]
} |
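The soft attention aggregation described in the record above (a per-pixel softmax over learned gating masks followed by a weighted sum of per-scale density maps) reduces to a few array operations. The sketch below assumes the density maps and gating logits are already given as arrays; in the actual model they come from convolutional branches trained end-to-end with the scale-aware loss.

```python
import numpy as np

def aggregate_density_maps(density_maps, gating_logits):
    """Fuse per-scale density predictions with per-pixel soft attention.

    density_maps:  (S, H, W) density prediction of each of the S scale branches.
    gating_logits: (S, H, W) unnormalized attention (gating) masks.
    Returns the fused (H, W) density map.
    """
    logits = gating_logits - gating_logits.max(axis=0, keepdims=True)
    weights = np.exp(logits)
    weights /= weights.sum(axis=0, keepdims=True)   # softmax across scales, per pixel
    return (weights * density_maps).sum(axis=0)

# Toy example with 3 scales on a 4x4 image (values are random placeholders).
rng = np.random.default_rng(0)
D = rng.random((3, 4, 4))            # per-scale density maps
G = rng.standard_normal((3, 4, 4))   # per-scale gating masks
fused = aggregate_density_maps(D, G)
print(fused.shape, "estimated count:", round(float(fused.sum()), 2))
```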
1901.06024 | 2950571912 | Benchmarks of bugs are essential to empirically evaluate automatic program repair tools. In this paper, we present Bears, a project for collecting and storing bugs into an extensible bug benchmark for automatic repair studies in Java. The collection of bugs relies on commit building state from Continuous Integration (CI) to find potential pairs of buggy and patched program versions from open-source projects hosted on GitHub. Each pair of program versions passes through a pipeline where an attempt of reproducing a bug and its patch is performed. The core step of the reproduction pipeline is the execution of the test suite of the program on both program versions. If a test failure is found in the buggy program version candidate and no test failure is found in its patched program version candidate, a bug and its patch were successfully reproduced. The uniqueness of Bears is the usage of CI (builds) to identify buggy and patched program version candidates, which has been widely adopted in the last years in open-source projects. This approach allows us to collect bugs from a diversity of projects beyond mature projects that use bug tracking systems. Moreover, Bears was designed to be publicly available and to be easily extensible by the research community through automatic creation of branches with bugs in a given GitHub repository, which can be used for pull requests in the Bears repository. We present in this paper the approach employed by Bears, and we deliver the version 1.0 of Bears, which contains 251 reproducible bugs collected from 72 projects that use the Travis CI and Maven build environment. | Benchmarks of bugs are assets that have been used in software bug-related research fields to support empirical evaluations. Several benchmarks were first created for the software testing research community, such as Siemens @cite_18 and SIR @cite_7 , two notable and well-cited benchmarks. The majority of bugs in these two benchmarks were seeded in existing program versions without bugs, which is farther away from Bears, which targets real bugs. | {
"cite_N": [
"@cite_18",
"@cite_7"
],
"mid": [
"2156723666",
"1804189410"
],
"abstract": [
"Empirical studies in software testing research may not be comparable, reproducible, or characteristic of practice. One reason is that real bugs are too infrequently used in software testing research. Extracting and reproducing real bugs is challenging and as a result hand-seeded faults or mutants are commonly used as a substitute. This paper presents Defects4J, a database and extensible framework providing real bugs to enable reproducible studies in software testing research. The initial version of Defects4J contains 357 real bugs from 5 real-world open source pro- grams. Each real bug is accompanied by a comprehensive test suite that can expose (demonstrate) that bug. Defects4J is extensible and builds on top of each program’s version con- trol system. Once a program is configured in Defects4J, new bugs can be added to the database with little or no effort. Defects4J features a framework to easily access faulty and fixed program versions and corresponding test suites. This framework also provides a high-level interface to common tasks in software testing research, making it easy to con- duct and reproduce empirical studies. Defects4J is publicly available at http: defects4j.org.",
"Benchmarks set standards for innovation in computer architecture research and industry product development. Consequently, it is of paramount importance that the benchmarks used in computer architecture research and development are representative of real-world applications. However, composing such representative workloads poses practical challenges to application analysis teams and benchmark developers - (1) Benchmarks that are standardized are open-source whereas applications of interest are typically proprietary, (2) Benchmarks are rigid, measure single-point performance, and only represent a sample of the application behavior space, (3) Benchmark suites take several years to develop, but applications evolve at a faster rate, and (4) Benchmarks geared towards temperature and power characterization are difficult to develop and standardize. The objective of this dissertation is to develop an adaptive benchmark generation strategy to construct synthetic benchmarks to address these benchmarking challenges. We propose an approach for automatically distilling key hardware-independent performance attributes of a proprietary workload and capture them into a miniature synthetic benchmark clone. The advantage of the benchmark clone is that it hides the functional meaning of the code, but exhibits similar performance and power characteristics as the target application across a wide range of microarchitecture configurations. Moreover, the dynamic instruction count of the synthetic benchmark clone is substantially shorter than the proprietary application, greatly reducing overall simulation time—for the SPEC CPU 2000 suite, the simulation time reduction is over five orders of magnitude compared to the entire benchmark execution. We develop an adaptive benchmark generation strategy that trades off accuracy to provide the flexibility to easily alter program characteristics. The parameterization of workload metrics makes it possible to succinctly describe an application's behavior using a limited number of fundamental program characteristics. This provides the ability to alter workload characteristics and construct scalable benchmarks that allows researchers to explore a wider range of the application behavior space, conduct program behavior studies, and model emerging workloads. The parameterized workload model is the foundation for automatically constructing power and temperature oriented synthetic workloads. We show that machine learning algorithms can be effectively used to search the application behavior space to automatically construct benchmarks for evaluating the power and temperature characteristics of a computer architecture design. The need for a scientific approach to construct synthetic benchmarks, to complement application benchmarks, has long been recognized by the computer architecture research community, and this dissertation work is a significant step towards achieving that goal."
]
} |
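The reproduction criterion in the record above (at least one failing test on the buggy version and none on the patched version) can be expressed as a short script. The sketch assumes two Maven checkouts in hypothetical directories and uses the exit code of mvn test as a coarse failure signal; it illustrates the criterion only and is not the actual Bears pipeline, which parses test reports and handles many more cases.

```python
import subprocess
from pathlib import Path

def test_suite_fails(project_dir: Path) -> bool:
    """Run the project's test suite; Maven exits non-zero when tests fail (coarse signal)."""
    result = subprocess.run(
        ["mvn", "-q", "test"],
        cwd=project_dir,
        capture_output=True,
        text=True,
    )
    return result.returncode != 0

def is_reproducible_pair(buggy_dir: Path, patched_dir: Path) -> bool:
    """Bears-style criterion: the buggy version must fail, the patched version must pass."""
    return test_suite_fails(buggy_dir) and not test_suite_fails(patched_dir)

if __name__ == "__main__":
    # Hypothetical checkout locations for one candidate pair.
    buggy = Path("workspace/candidate-123/buggy")
    patched = Path("workspace/candidate-123/patched")
    print("reproducible bug:", is_reproducible_pair(buggy, patched))
```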
1901.06024 | 2950571912 | Benchmarks of bugs are essential to empirically evaluate automatic program repair tools. In this paper, we present Bears, a project for collecting and storing bugs into an extensible bug benchmark for automatic repair studies in Java. The collection of bugs relies on commit building state from Continuous Integration (CI) to find potential pairs of buggy and patched program versions from open-source projects hosted on GitHub. Each pair of program versions passes through a pipeline where an attempt of reproducing a bug and its patch is performed. The core step of the reproduction pipeline is the execution of the test suite of the program on both program versions. If a test failure is found in the buggy program version candidate and no test failure is found in its patched program version candidate, a bug and its patch were successfully reproduced. The uniqueness of Bears is the usage of CI (builds) to identify buggy and patched program version candidates, which has been widely adopted in the last years in open-source projects. This approach allows us to collect bugs from a diversity of projects beyond mature projects that use bug tracking systems. Moreover, Bears was designed to be publicly available and to be easily extensible by the research community through automatic creation of branches with bugs in a given GitHub repository, which can be used for pull requests in the Bears repository. We present in this paper the approach employed by Bears, and we deliver the version 1.0 of Bears, which contains 251 reproducible bugs collected from 72 projects that use the Travis CI and Maven build environment. | To the best of our knowledge, the first benchmarks proposed for automatic program repair research are ManyBugs and IntroClass @cite_4 . ManyBugs contains 185 bugs collected from nine large, popular, open-source programs. On the other hand, IntroClass targets small programs written by novices, and contains 998 bugs collected from student-written versions of six small programming assignments in an undergraduate programming course. Both benchmarks are for the C language. | {
"cite_N": [
"@cite_4"
],
"mid": [
"1475493299"
],
"abstract": [
"Automated program repair can potentially reduce debugging costs and improvesoftware quality but recent studies have drawn attention to shortcomings inthe quality of automatically generated repairs. We propose a new kind ofrepair that uses the large body of existing open-source code to findpotential fixes. The key challenges lie in efficiently finding codesemantically similar (but not identical) to defective code and thenappropriately integrating that code into a buggy program. We presentSearchRepair, a repair technique that addresses these challenges by(1) encoding a large database of human-written code fragments as SMTconstraints on input-output behavior, (2) localizing a given defect to likelybuggy program fragments and deriving the desired input-output behavior forcode to replace those fragments, (3) using state-of-the-art constraintsolvers to search the database for fragments that satisfy that desiredbehavior and replacing the likely buggy code with these potential patches, and (4) validating that the patches repair the bug against program testsuites. We find that SearchRepair repairs 150 (19 ) of 778 benchmark Cdefects written by novice students, 20 of which are not repaired by GenProg, TrpAutoRepair, and AE. We compare the quality of the patches generated by thefour techniques by measuring how many independent, not-used-during-repairtests they pass, and find that SearchRepair-repaired programs pass 97.3 ofthe tests, on average, whereas GenProg-, TrpAutoRepair-, and AE-repairedprograms pass 68.7 , 72.1 , and 64.2 of the tests, respectively. We concludethat SearchRepair produces higher-quality repairs than GenProg, TrpAutoRepair, and AE, and repairs some defects those tools cannot."
]
} |
1901.06024 | 2950571912 | Benchmarks of bugs are essential to empirically evaluate automatic program repair tools. In this paper, we present Bears, a project for collecting and storing bugs into an extensible bug benchmark for automatic repair studies in Java. The collection of bugs relies on commit building state from Continuous Integration (CI) to find potential pairs of buggy and patched program versions from open-source projects hosted on GitHub. Each pair of program versions passes through a pipeline where an attempt of reproducing a bug and its patch is performed. The core step of the reproduction pipeline is the execution of the test suite of the program on both program versions. If a test failure is found in the buggy program version candidate and no test failure is found in its patched program version candidate, a bug and its patch were successfully reproduced. The uniqueness of Bears is the usage of CI (builds) to identify buggy and patched program version candidates, which has been widely adopted in the last years in open-source projects. This approach allows us to collect bugs from a diversity of projects beyond mature projects that use bug tracking systems. Moreover, Bears was designed to be publicly available and to be easily extensible by the research community through automatic creation of branches with bugs in a given GitHub repository, which can be used for pull requests in the Bears repository. We present in this paper the approach employed by Bears, and we deliver the version 1.0 of Bears, which contains 251 reproducible bugs collected from 72 projects that use the Travis CI and Maven build environment. | More recently other benchmarks were proposed for automatic program repair. Codeflaws @cite_5 contains 3,902 bugs extracted from programming contests available on Codeforces. Codeflaws is also for the C language, and the programs range from one to 322 lines of code. QuixBugs @cite_15 is a multi-lingual benchmark, which contains single line bugs from 40 programs translated to both Java and Python languages. | {
"cite_N": [
"@cite_5",
"@cite_15"
],
"mid": [
"2762550985",
"2883590803"
],
"abstract": [
"Recent years have seen an explosion of work in automated program repair. While previous work has focused exclusively on tools for single languages, recent work in multi-language transformation has opened the door for multi-language program repair tools. Evaluating the performance of such a tool requires having a benchmark set of similar buggy programs in different languages. We present QuixBugs, consisting of 40 programs translated to both Python and Java, each with a bug on a single line. The QuixBugs benchmark suite is based on problems from the Quixey Challenge, where programmers were given a short buggy program and 1 minute to fix the bug.",
"The characterization of bug datasets is essential to support the evaluation of automatic program repair tools. In a previous work, we manually studied almost 400 human-written patches (bug fixes) from the Defects4J dataset and annotated them with properties, such as repair patterns. However, manually finding these patterns in different datasets is tedious and time-consuming. To address this activity, we designed and implemented PPD, a detector of repair patterns in patches, which performs source code change analysis at abstract-syntax tree level. In this paper, we report on PPD and its evaluation on Defects4J, where we compare the results from the automated detection with the results from the previous manual analysis. We found that PPD has overall precision of 91 and overall recall of 92 , and we conclude that PPD has the potential to detect as many repair patterns as human manual analysis."
]
} |
1901.06024 | 2950571912 | Benchmarks of bugs are essential to empirically evaluate automatic program repair tools. In this paper, we present Bears, a project for collecting and storing bugs into an extensible bug benchmark for automatic repair studies in Java. The collection of bugs relies on commit building state from Continuous Integration (CI) to find potential pairs of buggy and patched program versions from open-source projects hosted on GitHub. Each pair of program versions passes through a pipeline where an attempt of reproducing a bug and its patch is performed. The core step of the reproduction pipeline is the execution of the test suite of the program on both program versions. If a test failure is found in the buggy program version candidate and no test failure is found in its patched program version candidate, a bug and its patch were successfully reproduced. The uniqueness of Bears is the usage of CI (builds) to identify buggy and patched program version candidates, which has been widely adopted in the last years in open-source projects. This approach allows us to collect bugs from a diversity of projects beyond mature projects that use bug tracking systems. Moreover, Bears was designed to be publicly available and to be easily extensible by the research community through automatic creation of branches with bugs in a given GitHub repository, which can be used for pull requests in the Bears repository. We present in this paper the approach employed by Bears, and we deliver the version 1.0 of Bears, which contains 251 reproducible bugs collected from 72 projects that use the Travis CI and Maven build environment. | The closest benchmarks to Bears are Defects4J @cite_12 and Bugs.jar @cite_0 , both for Java. Defects4J contains 395 reproducible bugs collected from six projects, and Bugs.jar contains 1,158 reproducible bugs collected from eight Apache projects. To collect bugs, the approach used for both benchmarks is based on bug tracking systems, and they contain bugs from large, mature projects. Bears, on the other hand, was designed to collect bugs from a diversity of projects other than large and mature ones: we remove the need for projects to use bug tracking systems. Note that bug tracking systems are oriented towards documenting bugs. Continuous Integration, on the other hand, is used to actually build and test a project, which is closer to the task of identifying reproducible bugs. | {
"cite_N": [
"@cite_0",
"@cite_12"
],
"mid": [
"2883977877",
"2156723666"
],
"abstract": [
"We present Bugs.jar, a large-scale dataset for research in automated debugging, patching, and testing of Java programs. Bugs.jar is comprised of 1,158 bugs and patches, drawn from 8 large, popular open-source Java projects, spanning 8 diverse and prominent application categories. It is an order of magnitude larger than Defects4J, the only other dataset in its class. We discuss the methodology used for constructing Bugs.jar, the representation of the dataset, several use-cases, and an illustration of three of the use-cases through the application of 3 specific tools on Bugs.jar, namely our own tool, E lixir , and two third-party tools, Ekstazi and JaCoCo.",
"Empirical studies in software testing research may not be comparable, reproducible, or characteristic of practice. One reason is that real bugs are too infrequently used in software testing research. Extracting and reproducing real bugs is challenging and as a result hand-seeded faults or mutants are commonly used as a substitute. This paper presents Defects4J, a database and extensible framework providing real bugs to enable reproducible studies in software testing research. The initial version of Defects4J contains 357 real bugs from 5 real-world open source pro- grams. Each real bug is accompanied by a comprehensive test suite that can expose (demonstrate) that bug. Defects4J is extensible and builds on top of each program’s version con- trol system. Once a program is configured in Defects4J, new bugs can be added to the database with little or no effort. Defects4J features a framework to easily access faulty and fixed program versions and corresponding test suites. This framework also provides a high-level interface to common tasks in software testing research, making it easy to con- duct and reproduce empirical studies. Defects4J is publicly available at http: defects4j.org."
]
} |
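The reproduction criterion described in the Bears record above reduces to a simple predicate over test outcomes. The sketch below is illustrative only: the helper name and the example test identifiers are made up, and it is not code from the Bears pipeline; the test results are assumed to have been collected beforehand by running the project's test suite (e.g., with Maven) on both program versions of a candidate pair.

```python
# Illustrative sketch of the Bears reproduction criterion (not the actual tool).

def bug_and_patch_reproduced(failing_tests_buggy, failing_tests_patched):
    """A candidate pair is kept iff the buggy version has at least one
    failing test and the patched version has none."""
    return len(failing_tests_buggy) > 0 and len(failing_tests_patched) == 0

# Hypothetical outcomes for one buggy/patched candidate pair:
print(bug_and_patch_reproduced({"FooTest#testParse"}, set()))  # True  -> reproduced
print(bug_and_patch_reproduced(set(), set()))                  # False -> discarded
```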
1901.06144 | 2910986363 | Accurate, nontrivial quantum operations on many qubits are experimentally challenging. As opposed to the standard approach of compiling larger unitaries into sequences of 2-qubit gates, we propose a protocol on Hamiltonian control fields which implements highly selective multi-qubit gates in a strongly-coupled many-body quantum system. We exploit the selectiveness of resonant driving to exchange only 2 out of @math eigenstates of some background Hamiltonian, and discuss a basis transformation, the eigengate, that makes this operation relevant to the computational basis. The latter has a second use as a Hahn echo which undoes the dynamical phases due to the background Hamiltonian. We find that the error of such protocols scales favourably with the gate time as @math , but the protocol becomes inefficient with a growing number of qubits N. The framework is numerically tested in the context of a spin chain model first described by Polychronakos, for which we show that an earlier solution method naturally gives rise to an eigengate. Our techniques could be of independent interest for the theory of driven many-body systems. | We previously described a very similar resonantly driven gate in Ref. @cite_12 , which was based on the so-called Krawtchouk spin chain. In the present work, we generalize many aspects of this first result, and show how the same line of reasoning applies to a very different system featuring long-range rather than just nearest-neighbor interactions. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2736372133"
],
"abstract": [
"textabstractWe propose a strategy for engineering multiqubit quantum gates. As a first step, it employs an eigengate to map states in the computational basis to eigenstates of a suitable many-body Hamiltonian. The second step employs resonant driving to enforce a transition between a single pair of eigenstates, leaving all others unchanged. The procedure is completed by mapping back to the computational basis. We demonstrate the strategy for the case of a linear array with an even number N of qubits, with specific XX+YY couplings between nearest neighbors. For this so-called Krawtchouk chain, a two-body driving term leads to the iSWAP_N gate, which we numerically test for N = 4 and 6."
]
} |
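The resonant-driving idea summarized in the record above can be illustrated numerically on a single two-level subsystem: a weak field oscillating at the energy gap of one pair of eigenstates transfers population between exactly those two states. The sketch below is a generic Rabi-oscillation simulation with assumed parameter values, not the paper's many-qubit protocol or its eigengate construction.

```python
# Minimal resonant-driving illustration (assumed parameters, not from the paper).
import numpy as np
from scipy.linalg import expm

omega0 = 1.0            # energy gap of the targeted pair of levels (hbar = 1)
Omega = 0.02            # weak drive amplitude: good selectivity, longer gate time
dt, T = 0.05, np.pi / Omega   # a pi-pulse lasts roughly pi/Omega

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

psi = np.array([1.0, 0.0], dtype=complex)    # start in the "lower" eigenstate
for t in np.arange(0.0, T, dt):
    # Sample the drive at the midpoint of each small time step.
    H = 0.5 * omega0 * sz + Omega * np.cos(omega0 * (t + 0.5 * dt)) * sx
    psi = expm(-1j * H * dt) @ psi           # crude time-stepping

print("population transferred:", abs(psi[1]) ** 2)   # close to 1 on resonance
```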
1901.06144 | 2910986363 | Accurate, nontrivial quantum operations on many qubits are experimentally challenging. As opposed to the standard approach of compiling larger unitaries into sequences of 2-qubit gates, we propose a protocol on Hamiltonian control fields which implements highly selective multi-qubit gates in a strongly-coupled many-body quantum system. We exploit the selectiveness of resonant driving to exchange only 2 out of @math eigenstates of some background Hamiltonian, and discuss a basis transformation, the eigengate, that makes this operation relevant to the computational basis. The latter has a second use as a Hahn echo which undoes the dynamical phases due to the background Hamiltonian. We find that the error of such protocols scales favourably with the gate time as @math , but the protocol becomes inefficient with a growing number of qubits N. The framework is numerically tested in the context of a spin chain model first described by Polychronakos, for which we show that an earlier solution method naturally gives rise to an eigengate. Our techniques could be of independent interest for the theory of driven many-body systems. | The most obvious competitor of our protocol is conventional compiling of any quantum operation into a universal set of single- and two-qubit gates. Extensive research efforts have greatly optimized compiling methods, and in the asymptotic limit of many qubits, the compiling approach becomes increasingly favorable compared to our proposal. For a recent overview, see Ref. @cite_27 . We present our work not as an alternative to compiling, but rather as a creative twist on the fields of condensed matter and quantum control, which might find applications in highly specialized systems. We also present our methods, such as the eigengate presented in Sec. , as tools that may find applications elsewhere. | {
"cite_N": [
"@cite_27"
],
"mid": [
"2801305581"
],
"abstract": [
"We construct quantum circuits which exactly encode the spectra of correlated electron models up to errors from rotation synthesis. By invoking these circuits as oracles within the recently introduced \"qubitization\" framework, one can use quantum phase estimation to sample states in the Hamiltonian eigenbasis with optimal query complexity @math where @math is an absolute sum of Hamiltonian coefficients and @math is target precision. For both the Hubbard model and electronic structure Hamiltonian in a second quantized basis diagonalizing the Coulomb operator, our circuits have T gate complexity @math where @math is number of orbitals in the basis. Compared to prior approaches, our algorithms are asymptotically more efficient in gate complexity and require fewer T gates near the classically intractable regime. Compiling to surface code fault-tolerant gates and assuming per gate error rates of one part in a thousand reveals that one can error correct phase estimation on interesting instances of these problems beyond the current capabilities of classical methods using only about one million superconducting qubits in a matter of hours."
]
} |
1901.05997 | 2910606695 | Anecdotal evidence has emerged suggesting that state-sponsored organizations, like the Russian Internet Research Agency, have exploited mainstream social networks. Their primary goal is apparently to conduct information warfare operations to manipulate public opinion using accounts disguised as "normal" people. To increase engagement and credibility of their posts, these accounts regularly share images. However, the use of images by state-sponsored accounts has yet to be examined by the research community. In this work, we address this gap by analyzing a ground truth dataset of 1.8M images posted to Twitter by so-called Russian trolls. More specifically, we analyze the content of the images, as well as the posting activity of the accounts. Among other things, we find that image posting activity of Russian trolls is tightly coupled with real-world events, and that their targets, as well as the content shared, changed over time. When looking at the interplay between domains that shared the same images as state-sponsored trolls, we find clear-cut differences in the origin and/or spread of images across the Web. Overall, our findings provide new insight into how state-sponsored trolls operate, and specifically how they use imagery to achieve their goals. | Other work has studied state-sponsored accounts' behavior on, and use of, social networks. Specifically, @cite_16 analyze the advertisements purchased by Russian accounts on Facebook. By performing clustering and semantic analysis, they identify their targeted campaigns over time, concluding that their main goal is to sow division within the community, and also that the most effective campaigns share similar characteristics. @cite_46 compare a set of Russian troll accounts against a random set of Twitter users, showing that Russian troll accounts exhibit different behaviors in the use of the Twitter platform when compared to random users. In follow-up work, @cite_0 analyze the activities of Russian and Iranian trolls on Twitter and Reddit. They find substantial differences between them (e.g., Russian trolls were pro-Trump, Iranian ones anti-Trump), that their behavior and targets vary greatly over time, and that Russian trolls discuss different topics across Web communities (e.g., they discuss cryptocurrencies on Reddit but not on Twitter). Also, @cite_28 examine the exploitation of various Web platforms (e.g., social networks and search engines), showing that state-sponsored accounts use them to advance their propaganda by promoting content and their own controlled domains. | {
"cite_N": [
"@cite_0",
"@cite_28",
"@cite_46",
"@cite_16"
],
"mid": [
"2900289476",
"2786091114",
"2950434393",
"2897648888"
],
"abstract": [
"Over the past few years, extensive anecdotal evidence emerged that suggests the involvement of state-sponsored actors (or \"trolls\") in online political campaigns with the goal to manipulate public opinion and sow discord. Recently, Twitter and Reddit released ground truth data about Russian and Iranian state-sponsored actors that were active on their platforms. In this paper, we analyze these ground truth datasets across several axes to understand how these actors operate, how they evolve over time, who are their targets, how their strategies changed over time, and what is their influence to the Web's information ecosystem. Among other things we find: a) campaigns of these actors were influenced by real-world events; b) these actors were employing different tactics and had different targets over time, thus their automated detection is not straightforward; and c) Russian trolls were clearly pro-Trump, whereas Iranian trolls were anti-Trump. Finally, using Hawkes Processes, we quantified the influence that these actors had to four Web communities: Reddit, Twitter, 4chan's Politically Incorrect board ( pol ), and Gab, finding that Russian trolls were more influential than Iranians with the exception of pol .",
"Over the past couple of years, anecdotal evidence has emerged linking coordinated campaigns by state-sponsored actors with efforts to manipulate public opinion on the Web, often around major political events, through dedicated accounts, or \"trolls.\" Although they are often involved in spreading disinformation on social media, there is little understanding of how these trolls operate, what type of content they disseminate, and most importantly their influence on the information ecosystem. In this paper, we shed light on these questions by analyzing 27K tweets posted by 1K Twitter users identified as having ties with Russia's Internet Research Agency and thus likely state-sponsored trolls. We compare their behavior to a random set of Twitter users, finding interesting differences in terms of the content they disseminate, the evolution of their account, as well as their general behavior and use of the Twitter platform. Then, using a statistical model known as Hawkes Processes, we quantify the influence that these accounts had on the dissemination of news on social platforms such as Twitter, Reddit, and 4chan. Overall, our findings indicate that Russian troll accounts managed to stay active for long periods of time and to reach a substantial number of Twitter users with their messages. When looking at their ability of spreading news content and making it viral, however, we find that their effect on social platforms was minor, with the significant exception of news published by the Russian state-sponsored news outlet RT (Russia Today).",
"Social media, once hailed as a vehicle for democratization and the promotion of positive social change across the globe, are under attack for becoming a tool of political manipulation and spread of disinformation. A case in point is the alleged use of trolls by Russia to spread malicious content in Western elections. This paper examines the Russian interference campaign in the 2016 US presidential election on Twitter. Our aim is twofold: first, we test whether predicting users who spread trolls' content is feasible in order to gain insight on how to contain their influence in the future; second, we identify features that are most predictive of users who either intentionally or unintentionally play a vital role in spreading this malicious content. We collected a dataset with over 43 million elections-related posts shared on Twitter between September 16 and November 9, 2016, by about 5.7 million users. This dataset includes accounts associated with the Russian trolls identified by the US Congress. Proposed models are able to very accurately identify users who spread the trolls' content (average AUC score of 96 , using 10-fold validation). We show that political ideology, bot likelihood scores, and some activity-related account meta data are the most predictive features of whether a user spreads trolls' content or not.",
"The Russia-based Internet Research Agency (IRA) carried out a broad information campaign in the U.S. before and after the 2016 presidential election. The organization created an expansive set of internet properties: web domains, Facebook pages, and Twitter bots, which received traffic via purchased Facebook ads, tweets, and search engines indexing their domains. We investigate the scope of IRA activities in 2017, joining data from Facebook and Twitter with logs from the Internet Explorer 11 and Edge browsers and the Bing.com search engine. The studies demonstrate both the ease with which malicious actors can harness social media and search engines for propaganda campaigns, and the ability to track and understand such activities by fusing content and activity resources from multiple internet services. We show how cross-platform analyses can provide an unprecedented lens on attempts to manipulate opinions and elections in democracies."
]
} |
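Tracking how the same image reappears across accounts and domains, as in the record above, requires deciding when two posted images are "the same" despite re-encoding or resizing. One common way to do this is perceptual hashing; the sketch below relies on the Pillow and imagehash packages and synthetic images, and is not claimed to be the authors' actual pipeline.

```python
# Hedged illustration of near-duplicate image matching via perceptual hashing.
import numpy as np
from PIL import Image
import imagehash

def near_duplicate(img_a, img_b, max_distance=8):
    """Treat two PIL images as near-duplicates if their perceptual hashes
    differ by at most `max_distance` bits (a tunable threshold)."""
    return (imagehash.phash(img_a) - imagehash.phash(img_b)) <= max_distance

# Synthetic stand-ins: a simple gradient image and a resized copy of it.
arr = (np.outer(np.arange(64), np.ones(64)) * 4).astype("uint8")
img_a = Image.fromarray(arr)
img_b = img_a.resize((32, 32))          # simulates a re-encoded / resized repost
print(near_duplicate(img_a, img_b))     # True: small Hamming distance
```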
1901.05997 | 2910606695 | Anecdotal evidence has emerged suggesting that state-sponsored organizations, like the Russian Internet Research Agency, have exploited mainstream social networks. Their primary goal is apparently to conduct information warfare operations to manipulate public opinion using accounts disguised as "normal" people. To increase engagement and credibility of their posts, these accounts regularly share images. However, the use of images by state-sponsored accounts has yet to be examined by the research community. In this work, we address this gap by analyzing a ground truth dataset of 1.8M images posted to Twitter by so-called Russian trolls. More specifically, we analyze the content of the images, as well as the posting activity of the accounts. Among other things, we find that image posting activity of Russian trolls is tightly coupled with real-world events, and that their targets, as well as the content shared, changed over time. When looking at the interplay between domains that shared the same images as state-sponsored trolls, we find clear-cut differences in the origin and/or spread of images across the Web. Overall, our findings provide new insight into how state-sponsored trolls operate, and specifically how they use imagery to achieve their goals. | Finally, @cite_18 use machine learning to detect Twitter users that are likely to share content that originates from Russian state-sponsored accounts. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2963523292"
],
"abstract": [
"During sudden onset crisis events, the presence of spam, rumors and fake content on Twitter reduces the value of information contained on its messages (or “tweets”). A possible solution to this problem is to use machine learning to automatically evaluate the credibility of a tweet, i.e. whether a person would deem the tweet believable or trustworthy. This has been often framed and studied as a supervised classification problem in an off-line (post-hoc) setting."
]
} |
1901.06033 | 2910986402 | The Variational Auto-Encoder (VAE) model is a popular method to learn at once a generative model and embeddings for data living in a high-dimensional space. In the real world, many datasets may be assumed to be hierarchically structured. Traditionally, VAE uses a Euclidean latent space, but tree-like structures cannot be efficiently embedded in such spaces as opposed to hyperbolic spaces with negative curvature. We therefore endow VAE with a Poincaré ball model of hyperbolic geometry and derive the necessary methods to work with two main Gaussian generalisations on that space. We empirically show better generalisation to unseen data than the Euclidean counterpart, and can qualitatively and quantitatively better recover hierarchical structures. | In the Bayesian nonparametric (BNP) literature, explicitly modelling the hierarchical structure of data has been a long-standing trend. Embedding graphs in hyperbolic spaces has been empirically shown to yield a more compact representation compared to Euclidean space, especially for low dimensions. @cite_6 studied the trade-offs of tree embeddings in the Poincaré disc. | {
"cite_N": [
"@cite_6"
],
"mid": [
"1968305696"
],
"abstract": [
"We present the H3 layout technique for drawing large directed graphs as node-link diagrams in 3D hyperbolic space. We can lay out much larger structures than can be handled using traditional techniques for drawing general graphs because we assume a hierarchical nature of the data. We impose a hierarchy on the graph by using domain-specific knowledge to find an appropriate spanning tree. Links which are not part of the spanning tree do not influence the layout but can be selectively drawn by user request. The volume of hyperbolic 3-space increases exponentially, as opposed to the familiar geometric increase of euclidean 3-space. We exploit this exponential amount of room by computing the layout according to the hyperbolic metric. We optimize the cone tree layout algorithm for 3D hyperbolic space by placing children on a hemisphere around the cone mouth instead of on its perimeter. Hyperbolic navigation affords a Focus+Context view of the structure with minimal visual clutter. We have successfully laid out hierarchies of over 20,000 nodes. Our implementation accommodates navigation through graphs too large to be rendered interactively by allowing the user to explicitly prune or expand subtrees."
]
} |
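The appeal of the Poincaré ball for tree-like data comes from its metric: distances grow without bound toward the boundary of the unit ball, so there is exponentially more "room" for children of children. Below is a minimal sketch of the standard Poincaré-ball distance, written for illustration; it is not code from the cited work.

```python
# Standard Poincare-ball distance between two points strictly inside the unit ball.
import numpy as np

def poincare_distance(u, v):
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)

print(poincare_distance([0.0, 0.0], [0.5, 0.0]))    # ~1.10
print(poincare_distance([0.0, 0.0], [0.99, 0.0]))   # ~5.29, near the boundary
```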
1901.06081 | 2911064732 | This paper presents a novel iterative deep learning framework and applies it to document enhancement and binarization. Unlike the traditional methods that predict the binary label of each pixel on the input image, we train the neural network to learn the degradations in document images and produce uniform images of the degraded input images, which in turn allows the network to refine the output iteratively. Two different iterative methods have been studied in this paper: recurrent refinement (RR) that uses the same trained neural network in each iteration for document enhancement and stacked refinement (SR) that uses a stack of different neural networks for iterative output refinement. Given the learned nature of the uniform and enhanced image, the binarization map can be easily obtained through use of a global or local threshold. The experimental results on several public benchmark data sets show that our proposed method provides a new, clean version of the degraded image, one that is suitable for visualization and which shows promising results for binarization using Otsu’s global threshold, based on enhanced images learned iteratively by the neural network. | Binarization is a classical research problem for document analysis and many document binarization methods have been proposed over the past two decades in the literature. It aims to convert each pixel in a document image into either text or background. The most popular and simplest method is Otsu's @cite_53 , a nonparametric and unsupervised approach to automatic threshold selection for gray-scale image binarization. It selects the global threshold based on the gray-scale histogram without any a priori knowledge, so the computational complexity is linear. The Otsu method works very well on uniform and clean images but produces poor results on degraded document images with nonuniform background. In order to solve this problem, local adaptive threshold methods have been proposed, such as Sauvola @cite_39 , Niblack @cite_30 , Pai @cite_31 and AdOtsu @cite_40 @cite_29 . These methods compute the local threshold for each pixel based on local statistics, such as the mean and standard deviation of a local area around the pixel. It should be noted that binarization is not always the goal. Methods such as Otsu can also be used for strong contrast enhancement. | {
"cite_N": [
"@cite_30",
"@cite_53",
"@cite_29",
"@cite_39",
"@cite_40",
"@cite_31"
],
"mid": [
"2005008005",
"2759766068",
"2128060444",
"2036723249",
"2098287947",
"2751352153"
],
"abstract": [
"Adaptive binarization is an important first step in many document analysis and OCR processes. This paper describes a fast adaptive binarization algorithm that yields the same quality of binarization as the Sauvola method,1 but runs in time close to that of global thresholding methods (like Otsu's method2), independent of the window size. The algorithm combines the statistical constraints of Sauvola's method with integral images.3 Testing on the UW-1 dataset demonstrates a 20-fold speedup compared to the original Sauvola algorithm.",
"Abstract This paper presents an effective approach for the local threshold binarization of degraded document images. We utilize the structural symmetric pixels (SSPs) to calculate the local threshold in neighborhood and the voting result of multiple thresholds will determine whether one pixel belongs to the foreground or not. The SSPs are defined as the pixels around strokes whose gradient magnitudes are large enough and orientations are symmetric opposite. The compensated gradient map is used to extract the SSP so as to weaken the influence of document degradations. To extract SSP candidates with large magnitudes and distinguish the faint characters and bleed-through background, we propose an adaptive global threshold selection algorithm. To further extract pixels with opposite orientations, an iterative stroke width estimation algorithm is applied to ensure the proper size of neighborhood used in orientation judgement. At last, we present a multiple threshold vote based framework to deal with some inaccurate detections of SSP. The experimental results on seven public document image binarization datasets show that our method is accurate and robust compared with many traditional and state-of-the-art document binarization approaches based on multiple evaluation measures.",
"A new method is presented for adaptive document image binarization, where the page is considered as a collection of subcomponents such as text, background and picture. The problems caused by noise, illumination and many source type-related degradations are addressed. Two new algorithms are applied to determine a local threshold for each pixel. The performance evaluation of the algorithm utilizes test images with ground-truth, evaluation metrics for binarization of textual and synthetic images, and a weight-based ranking procedure for the \"nal result presentation. The proposed algorithms were tested with images including di!erent types of document components and degradations. The results were compared with a number of known techniques in the literature. The benchmarking results show that the method adapts and performs well in each case qualitatively and quantitatively. ( 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"This paper presents a novel preprocessing method based on mathematical morphology techniques to improve the subsequent thresholding quality of raw degraded word images. The raw degraded word images contain undesirable shapes called critical shadows on the background that cause noise in binary images. This noise constitutes obstacles to posterior segmentation of characters. Direct application of a thresholding method produces inadequate binary versions of these degraded word images. Our preprocessing method called Shadow Location and Lightening (SL*L) adaptively, accurately and without manual fine-tuning of parameters locates these critical shadows on grayscale degraded images using morphological operations, and lightens them before applying eventual thresholding process. In this way, enhanced binary images without unpredictable and inappropriate noise can be provided to subsequent segmentation of characters. Then, adequate binary characters can be segmented and extracted as input data to optical character recognition (OCR) applications saving computational effort and increasing recognition rate. The proposed method is experimentally tested with a set of several raw degraded images extracted from real photos acquired by unsophisticated imaging systems. A qualitative analysis of experimental results led to conclusions that the thresholding result quality was significantly improved with the proposed preprocessing method. Also, a quantitative evaluation using a testing data of 1194 degraded word images showed the essentiality and effectiveness of the proposed preprocessing method to increase segmentation and recognition rates of their characters. Furthermore, an advantage of the proposed method is that Otsu's method as a simple and easily implementable global thresholding technique can be sufficient to reducing computational load.",
"In this paper, we present a binarization technique specifically designed for historical document images. Existing methods for this problem focus on either finding a good global threshold or adapting the threshold for each area so that to remove smear, strains, uneven illumination etc. We propose a hybrid approach that first applies a global thresholding method and, then, identifies the image areas that are more likely to still contain noise. Each of these areas is re-processed separately to achieve better quality of binarization. We evaluate the proposed approach for different kinds of degradation problems. The results show that our method can handle hard cases while documents already in good condition are not affected drastically.",
"Abstract The binarization of degraded document images is a challenging problem in terms of document analysis. Binarization is a classification process in which intra-image pixels are assigned to either of the two following classes: foreground text and background. Most of the algorithms are constructed on low-level features in an unsupervised manner, and the consequent disenabling of full utilization of input-domain knowledge considerably limits distinguishing of background noises from the foreground. In this paper, a novel supervised-binarization method is proposed, in which a hierarchical deep supervised network (DSN) architecture is learned for the prediction of the text pixels at different feature levels. With higher-level features, the network can differentiate text pixels from background noises, whereby severe degradations that occur in document images can be managed. Alternatively, foreground maps that are predicted at lower-level features present a higher visual quality at the boundary area. Compared with those of traditional algorithms, binary images generated by our architecture have cleaner background and better-preserved strokes. The proposed approach achieves state-of-the-art results over widely used DIBCO datasets, revealing the robustness of the presented method."
]
} |
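Otsu's global threshold, discussed in the record above, can be written in a few lines: pick the gray level that maximizes the between-class variance of the histogram. The sketch below is a plain NumPy illustration run on a stand-in image; it is not any of the cited implementations.

```python
# Minimal NumPy sketch of Otsu's global threshold.
import numpy as np

def otsu_threshold(gray):
    """Return the 0-255 threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * p[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
t = otsu_threshold(img)
binary = img >= t   # two classes; which side is "text" depends on polarity
print("Otsu threshold:", t)
```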
1901.06081 | 2911064732 | This paper presents a novel iterative deep learning framework and applies it to document enhancement and binarization. Unlike the traditional methods that predict the binary label of each pixel on the input image, we train the neural network to learn the degradations in document images and produce uniform images of the degraded input images, which in turn allows the network to refine the output iteratively. Two different iterative methods have been studied in this paper: recurrent refinement (RR) that uses the same trained neural network in each iteration for document enhancement and stacked refinement (SR) that uses a stack of different neural networks for iterative output refinement. Given the learned nature of the uniform and enhanced image, the binarization map can be easily obtained through use of a global or local threshold. The experimental results on several public benchmark data sets show that our proposed method provides a new, clean version of the degraded image, one that is suitable for visualization and which shows promising results for binarization using Otsu’s global threshold, based on enhanced images learned iteratively by the neural network. | Other a priori knowledge about text has also been exploited for binarization, such as the edge pixels extracted by edge detectors. For example, the Canny edge detector is used to extract edge pixels in @cite_6 and then the closed image edges are considered as seeds to find the text region. The transition pixel, a generalization of the edge pixel, is introduced in @cite_28 ; it is computed based on the intensity differences in a small neighborhood, and the statistics of these pixels are used to compute the threshold. In @cite_44 , structural symmetric pixels around strokes are used to compute the local threshold. Howe @cite_24 proposes a promising method which can tune the parameters automatically with a global energy function as a loss which incorporates edge discontinuities (Canny detector is used). | {
"cite_N": [
"@cite_28",
"@cite_44",
"@cite_24",
"@cite_6"
],
"mid": [
"2468724597",
"2098800028",
"1978854150",
"2013420608"
],
"abstract": [
"This paper presents a novel scene text detection algorithm, Canny Text Detector, which takes advantage of the similarity between image edge and text for effective text localization with improved recall rate. As closely related edge pixels construct the structural information of an object, we observe that cohesive characters compose a meaningful word sentence sharing similar properties such as spatial location, size, color, and stroke width regardless of language. However, prevalent scene text detection approaches have not fully utilized such similarity, but mostly rely on the characters classified with high confidence, leading to low recall rate. By exploiting the similarity, our approach can quickly and robustly localize a variety of texts. Inspired by the original Canny edge detector, our algorithm makes use of double threshold and hysteresis tracking to detect texts of low confidence. Experimental results on public datasets demonstrate that our algorithm outperforms the state-of the-art scene text detection methods in terms of detection rate.",
"The paper makes two contributions: it provides (1) an operational definition of textons, the putative elementary units of texture perception, and (2) an algorithm for partitioning the image into disjoint regions of coherent brightness and texture, where boundaries of regions are defined by peaks in contour orientation energy and differences in texton densities across the contour. B. Julesz (1981) introduced the term texton, analogous to a phoneme in speech recognition, but did not provide an operational definition for gray-level images. We re-invent textons as frequently co-occurring combinations of oriented linear filter outputs. These can be learned using a K-means approach. By mapping each pixel to its nearest texton, the image can be analyzed into texton channels, each of which is a point set where discrete techniques such as Voronoi diagrams become applicable. Local histograms of texton frequencies can be used with a spl chi sup 2 test for significant differences to find texture boundaries. Natural images contain both textured and untextured regions, so we combine this cue with that of the presence of peaks of contour energy derived from outputs of odd- and even-symmetric oriented Gaussian derivative filters. Each of these cues has a domain of applicability, so to facilitate cue combination we introduce a gating operator based on a statistical test for isotropy of Delaunay neighbors. Having obtained a local measure of how likely two nearby pixels are to belong to the same region, we use the spectral graph theoretic framework of normalized cuts to find partitions of the image into regions of coherent texture and brightness. Experimental results on a wide range of images are shown.",
"Text detection in videos is challenging due to low resolution and complex background of videos. Besides, an arbitrary orientation of scene text lines in video makes the problem more complex and challenging. This paper presents a new method that extracts text lines of any orientations based on gradient vector flow (GVF) and neighbor component grouping. The GVF of edge pixels in the Sobel edge map of the input frame is explored to identify the dominant edge pixels which represent text components. The method extracts edge components corresponding to dominant pixels in the Sobel edge map, which we call text candidates (TC) of the text lines. We propose two grouping schemes. The first finds nearest neighbors based on geometrical properties of TC to group broken segments and neighboring characters which results in word patches. The end and junction points of skeleton of the word patches are considered to eliminate false positives, which output the candidate text components (CTC). The second is based on the direction and the size of the CTC to extract neighboring CTC and to restore missing CTC, which enables arbitrarily oriented text line detection in video frame. Experimental results on different datasets, including arbitrarily oriented text data, nonhorizontal and horizontal text data, Hua's data and ICDAR-03 data (camera images), show that the proposed method outperforms existing methods in terms of recall, precision and f-measure.",
"This paper introduces a novel binarization method based on the concept of transition pixel, a generalization of edge pixels. Such pixels are characterized by extreme transition values computed using pixel-intensity differences in a small neighborhood. We show how to adjust the threshold of several binary threshold methods which compute gray-intensity thresholds, using the gray-intensity mean and variance of the pixels in the transition set. Our experiments show that the new approach yields segmentation performance superior to several with current state-of-the-art binarization algorithms."
]
} |
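The edge-based priors mentioned in the record above can be illustrated with a toy rule: take a Canny edge map as a text prior and derive a threshold from the gray levels observed at edge locations. This is only a rough, assumption-laden sketch using OpenCV and a stand-in image; it is not Howe's method or any of the other cited algorithms.

```python
# Toy edge-informed thresholding sketch (illustrative only).
import cv2
import numpy as np

gray = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in page
edges = cv2.Canny(gray, 50, 150)        # binary edge map (0 or 255)
edge_mask = edges > 0

if edge_mask.any():
    # Use the mean gray level at edge locations as a crude, edge-informed threshold.
    t = gray[edge_mask].mean()
else:
    t = gray.mean()

binary = gray < t                       # darker-than-threshold pixels as "text"
print("edge pixels:", int(edge_mask.sum()), "threshold:", float(t))
```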
1901.06081 | 2911064732 | This paper presents a novel iterative deep learning framework and applies it to document enhancement and binarization. Unlike the traditional methods that predict the binary label of each pixel on the input image, we train the neural network to learn the degradations in document images and produce uniform images of the degraded input images, which in turn allows the network to refine the output iteratively. Two different iterative methods have been studied in this paper: recurrent refinement (RR) that uses the same trained neural network in each iteration for document enhancement and stacked refinement (SR) that uses a stack of different neural networks for iterative output refinement. Given the learned nature of the uniform and enhanced image, the binarization map can be easily obtained through use of a global or local threshold. The experimental results on several public benchmark data sets show that our proposed method provides a new, clean version of the degraded image, one that is suitable for visualization and which shows promising results for binarization using Otsu’s global threshold, based on enhanced images learned iteratively by the neural network. | Convolutional neural networks achieve good performance in various applications and have also been applied to document analysis. For example, the winner of the recent DIBCO event @cite_23 uses the U-Net convolutional network architecture for accurate pixel classification. In @cite_20 , a fully convolutional neural network is applied at multiple image scales. The deep encoder-decoder architecture is used for binarization in @cite_18 @cite_36 . A hierarchical deep supervised network is proposed in @cite_10 for document binarization, which achieves state-of-the-art performance on several benchmark data sets. In @cite_16 , the Grid Long Short-Term Memory (Grid LSTM) network is used for binarization. However, it achieves lower performance than Vo's method @cite_10 . | {
"cite_N": [
"@cite_18",
"@cite_36",
"@cite_23",
"@cite_16",
"@cite_10",
"@cite_20"
],
"mid": [
"802468761",
"1923404803",
"2326498110",
"2953111739",
"2884366600",
"1521436688"
],
"abstract": [
"Convolutional Neural Networks have systematically shown good performance in Computer Vision and in Handwritten Text Recognition tasks. This paper proposes the use of these models for document image binarization. The main idea is to classify each pixel of the image into foreground and background from a sliding window centered at the pixel to be classified. An experimental analysis on the effect of sensitive parameters and some working topologies are proposed using two different corpora, of very different properties: DIBCO and Santgall.",
"Convolutional neural networks (CNNs) have been extensively applied for image recognition problems giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1 vs. 60.9 ) and the UCF-101 datasets with (88.6 vs. 88.0 ) and without additional optical flow information (82.6 vs. 73.0 ).",
"In recent years, deep convolutional neural networks have achieved state of the art performance in various computer vision task such as classification, detection or segmentation. Due to their outstanding performance, CNNs are more and more used in the field of document image analysis as well. In this work, we present a CNN architecture that is trained with the recently proposed PHOC representation. We show empirically that our CNN architecture is able to outperform state of the art results for various word spotting benchmarks while exhibiting short training and test times.",
"Convolutional neural networks (CNNs) have been extensively applied for image recognition problems giving state-of-the-art results on recognition, detection, segmentation and retrieval. In this work we propose and evaluate several deep neural network architectures to combine image information across a video over longer time periods than previously attempted. We propose two methods capable of handling full length videos. The first method explores various convolutional temporal feature pooling architectures, examining the various design choices which need to be made when adapting a CNN for this task. The second proposed method explicitly models the video as an ordered sequence of frames. For this purpose we employ a recurrent neural network that uses Long Short-Term Memory (LSTM) cells which are connected to the output of the underlying CNN. Our best networks exhibit significant performance improvements over previously published results on the Sports 1 million dataset (73.1 vs. 60.9 ) and the UCF-101 datasets with (88.6 vs. 88.0 ) and without additional optical flow information (82.6 vs. 72.8 ).",
"Convolutional neural networks (CNNs) have achieved great successes in many computer vision problems. Unlike existing works that designed CNN architectures to improve performance on a single task of a single domain and not generalizable, we present IBN-Net, a novel convolutional architecture, which remarkably enhances a CNN’s modeling ability on one domain (e.g. Cityscapes) as well as its generalization capacity on another domain (e.g. GTA5) without finetuning. IBN-Net carefully integrates Instance Normalization (IN) and Batch Normalization (BN) as building blocks, and can be wrapped into many advanced deep networks to improve their performances. This work has three key contributions. (1) By delving into IN and BN, we disclose that IN learns features that are invariant to appearance changes, such as colors, styles, and virtuality reality, while BN is essential for preserving content related information. (2) IBN-Net can be applied to many advanced deep architectures, such as DenseNet, ResNet, ResNeXt, and SENet, and consistently improve their performance without increasing computational cost. (3) When applying the trained networks to new domains, e.g. from GTA5 to Cityscapes, IBN-Net achieves comparable improvements as domain adaptation methods, even without using data from the target domain. With IBN-Net, we won the 1st place on the WAD 2018 Challenge Drivable Area track, with an mIoU of 86.18 .",
"Recently, convolutional neural networks have demonstrated excellent performance on various visual tasks, including the classification of common two-dimensional images. In this paper, deep convolutional neural networks are employed to classify hyperspectral images directly in spectral domain. More specifically, the architecture of the proposed classifier contains five layers with weights which are the input layer, the convolutional layer, the max pooling layer, the full connection layer, and the output layer. These five layers are implemented on each spectral signature to discriminate against others. Experimental results based on several hyperspectral image data sets demonstrate that the proposed method can achieve better classification performance than some traditional methods, such as support vector machines and the conventional deep learning-based methods."
]
} |
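The "degraded image in, enhanced image out" formulation described above maps naturally onto an encoder-decoder network. The PyTorch sketch below is deliberately tiny, only to make the input/output shapes concrete; the cited networks (U-Net, DSN, Grid LSTM) are far larger and structured differently.

```python
# A deliberately tiny encoder-decoder for image-to-image enhancement (illustrative).
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (batch, 1, H, W) grayscale patch
        return self.decode(self.encode(x))   # same spatial size, values in [0, 1]

net = TinyEncoderDecoder()
patch = torch.rand(1, 1, 64, 64)             # stand-in degraded patch
print(net(patch).shape)                      # torch.Size([1, 1, 64, 64])
```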
1901.06237 | 2909579777 | Due to concerns about human error in crowdsourcing, it is standard practice to collect labels for the same data point from multiple internet workers. We here show that the resulting budget can be used more effectively with a flexible worker assignment strategy that asks fewer workers to analyze easy-to-label data and more workers to analyze data that requires extra scrutiny. Our main contribution is to show how the allocations of the number of workers to a task can be computed optimally based on task features alone, without using worker profiles. Our target tasks are delineating cells in microscopy images and analyzing the sentiment toward the 2016 U.S. presidential candidates in tweets. We first propose an algorithm that computes budget-optimized crowd worker allocation (BUOCA). We next train a machine learning system (BUOCA-ML) that predicts an optimal number of crowd workers needed to maximize the accuracy of the labeling. We show that the computed allocation can yield large savings in the crowdsourcing budget (up to 49 percent points) while maintaining labeling accuracy. Finally, we envisage a human-machine system for performing budget-optimized data analysis at a scale beyond the feasibility of crowdsourcing. | Related Crowdsourcing Methodologies. Balancing the demands that accuracy requirements and budget limits place on crowdsourcing experiments has been the focus of research in various communities, including machine learning , human computation , data management , and computer vision . The crowdsourcing mechanisms used in practice, e.g., collecting image labels to train computer vision systems, are typically agnostic to the difficulty of a task, assigning the same fixed number of crowd workers to each task. Notable exceptions are the recent works by @cite_10 , @cite_0 , and @cite_2 , who proposed flexible worker assignment schemes. | {
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_2"
],
"mid": [
"1601808502",
"2279346661",
"2554839354"
],
"abstract": [
"Crowdsourcing systems allocate tasks to a group of workers over the Internet, which have become an effective paradigm for human-powered problem solving such as image classification, optical character recognition and proofreading. In this paper, we focus on incentivizing crowd workers to label a set of binary tasks under strict budget constraint. We properly profile the tasks' difficulty levels and workers' quality in crowdsourcing systems, where the collected labels are aggregated with sequential Bayesian approach. To stimulate workers to undertake crowd labeling tasks, the interaction between workers and the platform is modeled as a reverse auction. We reveal that the platform utility maximization could be intractable, for which an incentive mechanism that determines the winning bid and payments with polynomial-time computation complexity is developed. Moreover, we theoretically prove that our mechanism is truthful, individually rational and budget feasible. Through extensive simulations, we demonstrate that our mechanism utilizes budget efficiently to achieve high platform utility with polynomial computation complexity.",
"Microtask crowdsourcing has enabled dataset advances in social science and machine learning, but existing crowdsourcing schemes are too expensive to scale up with the expanding volume of data. To scale and widen the applicability of crowdsourcing, we present a technique that produces extremely rapid judgments for binary and categorical labels. Rather than punishing all errors, which causes workers to proceed slowly and deliberately, our technique speeds up workers' judgments to the point where errors are acceptable and even expected. We demonstrate that it is possible to rectify these errors by randomizing task order and modeling response latency. We evaluate our technique on a breadth of common labeling tasks such as image verification, word similarity, sentiment analysis and topic classification. Where prior work typically achieves a 0.25x to 1x speedup over fixed majority vote, our approach often achieves an order of magnitude (10x) speedup.",
"Crowdsourcing systems, in which numerous tasks are electronically distributed to numerous “information pieceworkers”, have emerged as an effective paradigm for human-powered solving of large scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all crowdsourcers must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in some way such as majority voting. In this paper, we consider a model of such crowdsourcing tasks and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm, based on low-rank matrix approximation, significantly outperforms majority voting and, in fact, is order-optimal through comparison to an oracle that knows the reliability of every worker."
]
} |
1901.06237 | 2909579777 | Due to concerns about human error in crowdsourcing, it is standard practice to collect labels for the same data point from multiple internet workers. We here show that the resulting budget can be used more effectively with a flexible worker assignment strategy that asks fewer workers to analyze easy-to-label data and more workers to analyze data that requires extra scrutiny. Our main contribution is to show how the allocations of the number of workers to a task can be computed optimally based on task features alone, without using worker profiles. Our target tasks are delineating cells in microscopy images and analyzing the sentiment toward the 2016 U.S. presidential candidates in tweets. We first propose an algorithm that computes budget-optimized crowd worker allocation (BUOCA). We next train a machine learning system (BUOCA-ML) that predicts an optimal number of crowd workers needed to maximize the accuracy of the labeling. We show that the computed allocation can yield large savings in the crowdsourcing budget (up to 49 percent points) while maintaining labeling accuracy. Finally, we envisage a human-machine system for performing budget-optimized data analysis at a scale beyond the feasibility of crowdsourcing. | Our work is different from previously-proposed crowdsourcing methodologies with adaptive worker assignments because these assume that the same workers can be employed with user profile tracking.'' The worker-task allocation scheme by @cite_7 relies on being able to incrementally estimate [the workers' accuracy] based on their previous work.'' The algorithm by @cite_3 relies on a majority-voting-efficient fusion method to estimate the answers to each of the tasks,'' which also requires user profile tracking. Our methodology does not include user profile tracking because in our experiments using the Amazon Mechanical Turk Internet marketplace, we cannot request the same workers in an incremental scheme to estimate the accuracy of their work. Our work makes use of the optimality of majority voting (MV) under certain conditions (Theorem 2). @cite_8 also point out that to use MV, the probability of correct labeling of each worker should be higher than 0.5. | {
"cite_N": [
"@cite_3",
"@cite_7",
"@cite_8"
],
"mid": [
"2098865355",
"2953080935",
"2398690976"
],
"abstract": [
"Crowdsourcing services, such as Amazon Mechanical Turk, allow for easy distribution of small tasks to a large number of workers. Unfortunately, since manually verifying the quality of the submitted results is hard, malicious workers often take advantage of the verification difficulty and submit answers of low quality. Currently, most requesters rely on redundancy to identify the correct answers. However, redundancy is not a panacea. Massive redundancy is expensive, increasing significantly the cost of crowdsourced solutions. Therefore, we need techniques that will accurately estimate the quality of the workers, allowing for the rejection and blocking of the low-performing workers and spammers. However, existing techniques cannot separate the true (unrecoverable) error rate from the (recoverable) biases that some workers exhibit. This lack of separation leads to incorrect assessments of a worker's quality. We present algorithms that improve the existing state-of-the-art techniques, enabling the separation of bias and error. Our algorithm generates a scalar score representing the inherent quality of each worker. We illustrate how to incorporate cost-sensitive classification errors in the overall framework and how to seamlessly integrate unsupervised and supervised techniques for inferring the quality of the workers. We present experimental results demonstrating the performance of the proposed algorithm under a variety of settings.",
"To ensure quality results from crowdsourced tasks, requesters often aggregate worker responses and use one of a plethora of strategies to infer the correct answer from the set of noisy responses. However, all current models assume prior knowledge of all possible outcomes of the task. While not an unreasonable assumption for tasks that can be posited as multiple-choice questions (e.g. n-ary classification), we observe that many tasks do not naturally fit this paradigm, but instead demand a free-response formulation where the outcome space is of infinite size (e.g. audio transcription). We model such tasks with a novel probabilistic graphical model, and design and implement LazySusan, a decision-theoretic controller that dynamically requests responses as necessary in order to infer answers to these tasks. We also design an EM algorithm to jointly learn the parameters of our model while inferring the correct answers to multiple tasks at a time. Live experiments on Amazon Mechanical Turk demonstrate the superiority of LazySusan at solving SAT Math questions, eliminating 83.2 of the error and achieving greater net utility compared to the state-ofthe-art strategy, majority-voting. We also show in live experiments that our EM algorithm outperforms majority-voting on a visualization task that we design.",
"We explore the problem of assigning heterogeneous tasks to workers with different, unknown skill sets in crowdsourcing markets such as Amazon Mechanical Turk. We first formalize the online task assignment problem, in which a requester has a fixed set of tasks and a budget that specifies how many times he would like each task completed. Workers arrive one at a time (with the same worker potentially arriving multiple times), and must be assigned to a task upon arrival. The goal is to allocate workers to tasks in a way that maximizes the total benefit that the requester obtains from the completed work. Inspired by recent research on the online adwords problem, we present a two-phase exploration-exploitation assignment algorithm and prove that it is competitive with respect to the optimal offline algorithm which has access to the unknown skill levels of each worker. We empirically evaluate this algorithm using data collected on Mechanical Turk and show that it performs better than random assignment or greedy algorithms. To our knowledge, this is the first work to extend the online primal-dual technique used in the online adwords problem to a scenario with unknown parameters, and the first to offer an empirical validation of an online primal-dual algorithm."
]
} |
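The condition on worker accuracy mentioned above (each worker correct with probability above 0.5) is what makes majority voting pay off. The following worked illustration, written for this summary rather than taken from the cited papers, computes the probability that a majority of n independent workers is correct.

```python
# Probability that majority vote is correct for n independent workers of accuracy p.
from math import comb

def p_majority_correct(n, p):
    """P(majority of n independent workers is right), n odd, accuracy p each."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range((n // 2) + 1, n + 1))

for n in (1, 3, 5, 9):
    print(n, round(p_majority_correct(n, 0.7), 3), round(p_majority_correct(n, 0.45), 3))
# With p = 0.7 the crowd beats a single worker (0.7 -> 0.784 -> 0.837 -> 0.901);
# with p = 0.45 (below 0.5) adding workers makes the aggregate worse.
```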
1901.06237 | 2909579777 | Due to concerns about human error in crowdsourcing, it is standard practice to collect labels for the same data point from multiple internet workers. We here show that the resulting budget can be used more effectively with a flexible worker assignment strategy that asks fewer workers to analyze easy-to-label data and more workers to analyze data that requires extra scrutiny. Our main contribution is to show how the allocations of the number of workers to a task can be computed optimally based on task features alone, without using worker profiles. Our target tasks are delineating cells in microscopy images and analyzing the sentiment toward the 2016 U.S. presidential candidates in tweets. We first propose an algorithm that computes budget-optimized crowd worker allocation (BUOCA). We next train a machine learning system (BUOCA-ML) that predicts an optimal number of crowd workers needed to maximize the accuracy of the labeling. We show that the computed allocation can yield large savings in the crowdsourcing budget (up to 49 percent points) while maintaining labeling accuracy. Finally, we envisage a human-machine system for performing budget-optimized data analysis at a scale beyond the feasibility of crowdsourcing. | Our work is distinct from prior work in that our system not only learns an optimal crowd worker allocation that is adapted to task difficulty, but also a mapping from data features to crowd worker allocations. In their award-winning paper, @cite_2 addressed a related data-focused problem -- how to solicit fewer human responses when answer agreement is expected and more responses otherwise, based on predicting from a visual question whether a crowd would agree on one answer. Their method computes the required budget after a classifier had been applied to rank the ambiguity in the data. Their system solicits at most five answers for the @math data points predicted to reflect the greatest likelihood for crowd disagreement and one answer for the remaining visual questions, where @math is the extra budget available. In our paradigm, the output of BUOCA determines the specific budget level when a sufficient number of data has been labeled so that the training of BUOCA-ML is expected to be successful. BUOCA-ML is then trained with the labels obtained with this training budget (note that the training budget is different from the budget needed to apply BUOCA-ML in phase 2). | {
"cite_N": [
"@cite_2"
],
"mid": [
"2146928171"
],
"abstract": [
"In real crowdsourcing applications, each label from a crowd usually comes with a certain cost. Given a pre-fixed amount of budget, since different tasks have different ambiguities and different workers have different expertises, we want to find an optimal way to allocate the budget among instance-worker pairs such that the overall label quality can be maximized. To address this issue, we start from the simplest setting in which all workers are assumed to be perfect. We formulate the problem as a Bayesian Markov Decision Process (MDP). Using the dynamic programming (DP) algorithm, one can obtain the optimal allocation policy for a given budget. However, DP is computationally intractable. To solve the computational challenge, we propose a novel approximate policy which is called optimistic knowledge gradient. It is practically efficient while theoretically its consistency can be guaranteed. We then extend the MDP framework to deal with inhomogeneous workers and tasks with contextual information available. The experiments on both simulated and real data demonstrate the superiority of our method."
]
} |
1901.06237 | 2909579777 | Due to concerns about human error in crowdsourcing, it is standard practice to collect labels for the same data point from multiple internet workers. We here show that the resulting budget can be used more effectively with a flexible worker assignment strategy that asks fewer workers to analyze easy-to-label data and more workers to analyze data that requires extra scrutiny. Our main contribution is to show how the allocations of the number of workers to a task can be computed optimally based on task features alone, without using worker profiles. Our target tasks are delineating cells in microscopy images and analyzing the sentiment toward the 2016 U.S. presidential candidates in tweets. We first propose an algorithm that computes budget-optimized crowd worker allocation (BUOCA). We next train a machine learning system (BUOCA-ML) that predicts an optimal number of crowd workers needed to maximize the accuracy of the labeling. We show that the computed allocation can yield large savings in the crowdsourcing budget (up to 49 percent points) while maintaining labeling accuracy. Finally, we envisage a human-machine system for performing budget-optimized data analysis at a scale beyond the feasibility of crowdsourcing. | A flexible crowdsourcing scheme that collects additional labels for tweets that are estimated to be difficult to understand because they contain sarcasm has been proposed by @cite_0 . Their estimation is based on a Natural Language Processing (NLP) analysis, for example, whether the tweet included texting lingo, such as lol , rofl , or OMG , or the tweeter highlighted words by writing them with all capital letters. In our work, we also use NLP tools to analyze the labeling difficulty of tweets, including sarcasm. Different from the work by @cite_0 , which relies on handcrafted decision trees to compute the number of workers to allocate to a specific tweet, we propose a general, automatic scheme to allocate workers. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2048747536"
],
"abstract": [
"We describe a system that classifies the polarity of Spanish tweets. We adopt a hybrid approach, which combines machine learning and linguistic knowledge acquired by means of NLP. We use part-of-speech tags, syntactic dependencies and semantic knowledge as features for a supervised classifier. Lexical particularities of the language used in Twitter are taken into account in a pre-processing step. Experimental results improve over those of pure machine learning approaches and confirm the practical utility of the proposal."
]
} |
1901.06237 | 2909579777 | Due to concerns about human error in crowdsourcing, it is standard practice to collect labels for the same data point from multiple internet workers. We here show that the resulting budget can be used more effectively with a flexible worker assignment strategy that asks fewer workers to analyze easy-to-label data and more workers to analyze data that requires extra scrutiny. Our main contribution is to show how the allocations of the number of workers to a task can be computed optimally based on task features alone, without using worker profiles. Our target tasks are delineating cells in microscopy images and analyzing the sentiment toward the 2016 U.S. presidential candidates in tweets. We first propose an algorithm that computes budget-optimized crowd worker allocation (BUOCA). We next train a machine learning system (BUOCA-ML) that predicts an optimal number of crowd workers needed to maximize the accuracy of the labeling. We show that the computed allocation can yield large savings in the crowdsourcing budget (up to 49 percent points) while maintaining labeling accuracy. Finally, we envisage a human-machine system for performing budget-optimized data analysis at a scale beyond the feasibility of crowdsourcing. | Related Methods for Image Segmentation. Many solutions have been proposed for crowdsourcing the task of image segmentation. The most common proposed solution requires task requesters to collect redundant data from multiple crowd workers and uses majority voting (e.g., majority of the decisions of 5 workers per task @cite_5 ). In one study, as much as 32% of the data obtained from internet workers had to be discarded @cite_4 . Our study shows that intelligent allocation of crowd efforts can be used to achieve high-quality segmentation while satisfying budget constraints. | {
"cite_N": [
"@cite_5",
"@cite_4"
],
"mid": [
"2279346661",
"1570705485"
],
"abstract": [
"Microtask crowdsourcing has enabled dataset advances in social science and machine learning, but existing crowdsourcing schemes are too expensive to scale up with the expanding volume of data. To scale and widen the applicability of crowdsourcing, we present a technique that produces extremely rapid judgments for binary and categorical labels. Rather than punishing all errors, which causes workers to proceed slowly and deliberately, our technique speeds up workers' judgments to the point where errors are acceptable and even expected. We demonstrate that it is possible to rectify these errors by randomizing task order and modeling response latency. We evaluate our technique on a breadth of common labeling tasks such as image verification, word similarity, sentiment analysis and topic classification. Where prior work typically achieves a 0.25x to 1x speedup over fixed majority vote, our approach often achieves an order of magnitude (10x) speedup.",
"Crowdsourcing has become an effective and popular tool for human-powered computation to label large datasets. Since the workers can be unreliable, it is common in crowdsourcing to assign multiple workers to one task, and to aggregate the labels in order to obtain results of high quality. In this paper, we provide finite-sample exponential bounds on the error rate (in probability and in expectation) of general aggregation rules under the Dawid-Skene crowdsourcing model. The bounds are derived for multi-class labeling, and can be used to analyze many aggregation methods, including majority voting, weighted majority voting and the oracle Maximum A Posteriori (MAP) rule. We show that the oracle MAP rule approximately optimizes our upper bound on the mean error rate of weighted majority voting in certain settings. We propose an iterative weighted majority voting (IWMV) method that optimizes the error rate bound and approximates the oracle MAP rule. Its one-step version has a provable theoretical guarantee on the error rate. The IWMV method is intuitive and computationally simple. Experimental results on simulated and real data show that IWMV performs at least on par with the state-of-the-art methods, and it has a much lower computational cost (around one hundred times faster) than the state-of-the-art methods."
]
} |
1901.06199 | 2909896778 | Generative Adversarial Networks (GANs) have received great attention recently due to their excellent performance in image generation, transformation, and super-resolution. However, GANs have rarely been studied and trained for classification, so the generated images may not be appropriate for classification. In this paper, we propose a novel Generative Adversarial Classifier (GAC) particularly for low-resolution Handwriting Character Recognition. Specifically, by additionally involving a classifier in the training process of normal GANs, GAC is calibrated to learn suitable structures and restored character images that benefit classification. Experimental results show that our proposed method can achieve remarkable performance in 8x super-resolution of handwriting characters, approximately 10% and 20% higher than the present state-of-the-art methods on the benchmark data CASIA-HWDB1.1 and MNIST, respectively. | Research on image super-resolution can be divided into two categories: one is based on single image super-resolution (SISR), and the other is based on multiple image super-resolution (MISR) @cite_14 . Our work can be cast into the first category. We will focus on SISR and will not further discuss approaches that reconstruct HR images from multiple images. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2028790650"
],
"abstract": [
"A super-resolution (SR) method based on compressive sensing (CS), structural self-similarity (SSSIM), and dictionary learning is proposed for reconstructing remote sensing images. This method aims to identify a dictionary that represents high resolution (HR) image patches in a sparse manner. Extra information from similar structures which often exist in remote sensing images can be introduced into the dictionary, thereby enabling an HR image to be reconstructed using the dictionary in the CS framework. We use the K-Singular Value Decomposition method to obtain the dictionary and the orthogonal matching pursuit method to derive sparse representation coefficients. To evaluate the effectiveness of the proposed method, we also define a new SSSIM index, which reflects the extent of SSSIM in an image. The most significant difference between the proposed method and traditional sample-based SR methods is that the proposed method uses only a low-resolution image and its own interpolated image instead of other HR images in a database. We simulate the degradation mechanism of a uniform 2 × 2 blur kernel plus a downsampling by a factor of 2 in our experiments. Comparative experimental results with several image-quality-assessment indexes show that the proposed method performs better in terms of the SR effectivity and time efficiency. In addition, the SSSIM index is strongly positively correlated with the SR quality."
]
} |
1901.06199 | 2909896778 | Generative Adversarial Networks (GANs) have received great attention recently due to their excellent performance in image generation, transformation, and super-resolution. However, GANs have rarely been studied and trained for classification, so the generated images may not be appropriate for classification. In this paper, we propose a novel Generative Adversarial Classifier (GAC) particularly for low-resolution Handwriting Character Recognition. Specifically, by additionally involving a classifier in the training process of normal GANs, GAC is calibrated to learn suitable structures and restored character images that benefit classification. Experimental results show that our proposed method can achieve remarkable performance in 8x super-resolution of handwriting characters, approximately 10% and 20% higher than the present state-of-the-art methods on the benchmark data CASIA-HWDB1.1 and MNIST, respectively. | Recently, convolutional neural network (CNN) based SR algorithms have shown excellent performance. In @cite_1 , the authors encoded a sparse representation prior into a feed-forward network architecture based on the learned iterative shrinkage and thresholding algorithm (LISTA) @cite_8 . @cite_24 @cite_7 used bicubic interpolation to downscale an image as the input image and trained a three-layer convolutional network end-to-end. The deeply-recursive convolutional network (DRCN) @cite_17 is a highly effective architecture that allows long-range pixel dependencies while keeping the number of model parameters small. @cite_10 and @cite_25 proposed a perceptual loss function to reconstruct visually more convincing HR images. | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_1",
"@cite_24",
"@cite_10",
"@cite_25",
"@cite_17"
],
"mid": [
"2476548250",
"2949128343",
"2866634454",
"2950116990",
"2795824235",
"2963814095",
"2505593925"
],
"abstract": [
"Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.",
"Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output. By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods.",
"Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods.",
"Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods.",
"Despite that convolutional neural networks (CNN) have recently demonstrated high-quality reconstruction for single-image super-resolution (SR), recovering natural and realistic texture remains a challenging problem. In this paper, we show that it is possible to recover textures faithful to semantic classes. In particular, we only need to modulate features of a few intermediate layers in a single network conditioned on semantic segmentation probability maps. This is made possible through a novel Spatial Feature Transform (SFT) layer that generates affine transformation parameters for spatial-wise feature modulation. SFT layers can be trained end-to-end together with the SR network using the same loss function. During testing, it accepts an input image of arbitrary size and generates a high-resolution image with just a single forward pass conditioned on the categorical priors. Our final results show that an SR network equipped with SFT can generate more realistic and visually pleasing textures in comparison to state-of-the-art SRGAN and EnhanceNet.",
"Despite that convolutional neural networks (CNN) have recently demonstrated high-quality reconstruction for single-image super-resolution (SR), recovering natural and realistic texture remains a challenging problem. In this paper, we show that it is possible to recover textures faithful to semantic classes. In particular, we only need to modulate features of a few intermediate layers in a single network conditioned on semantic segmentation probability maps. This is made possible through a novel Spatial Feature Transform (SFT) layer that generates affine transformation parameters for spatial-wise feature modulation. SFT layers can be trained end-to-end together with the SR network using the same loss function. During testing, it accepts an input image of arbitrary size and generates a high-resolution image with just a single forward pass conditioned on the categorical priors. Our final results show that an SR network equipped with SFT can generate more realistic and visually pleasing textures in comparison to state-of-the-art SRGAN [27] and EnhanceNet [38].",
"One impressive advantage of convolutional neural networks (CNNs) is their ability to automatically learn feature representation from raw pixels, eliminating the need for hand-designed procedures. However, recent methods for single image super-resolution (SR) fail to maintain this advantage. They utilize CNNs in two decoupled steps, i.e., first upsampling the low resolution (LR) image to the high resolution (HR) size with hand-designed techniques (e.g., bicubic interpolation), and then applying CNNs on the upsampled LR image to reconstruct HR results. In this paper, we seek an alternative and propose a new image SR method, which jointly learns the feature extraction, upsampling and HR reconstruction modules, yielding a completely end-to-end trainable deep CNN. As opposed to existing approaches, the proposed method conducts upsampling in the latent feature space with filters that are optimized for the task of image SR. In addition, the HR reconstruction is performed in a multi-scale manner to simultaneously incorporate both short- and long-range contextual information, ensuring more accurate restoration of HR images. To facilitate network training, a new training approach is designed, which jointly trains the proposed deep network with a relatively shallow network, leading to faster convergence and more superior performance. The proposed method is extensively evaluated on widely adopted data sets and improves the performance of state-of-the-art methods with a considerable margin. Moreover, in-depth ablation studies are conducted to verify the contribution of different network designs to image SR, providing additional insights for future research."
]
} |
1901.06199 | 2909896778 | Generative Adversarial Networks (GANs) have received great attention recently due to their excellent performance in image generation, transformation, and super-resolution. However, GANs have rarely been studied and trained for classification, so the generated images may not be appropriate for classification. In this paper, we propose a novel Generative Adversarial Classifier (GAC) particularly for low-resolution Handwriting Character Recognition. Specifically, by additionally involving a classifier in the training process of normal GANs, GAC is calibrated to learn suitable structures and restored character images that benefit classification. Experimental results show that our proposed method can achieve remarkable performance in 8x super-resolution of handwriting characters, approximately 10% and 20% higher than the present state-of-the-art methods on the benchmark data CASIA-HWDB1.1 and MNIST, respectively. | Generative Adversarial Nets (GANs) were proposed by Goodfellow @cite_6 and contain two parts, a generator and a discriminator. The generator is responsible for generating images close to the real pictures in order to fool the discriminator, and the discriminator is responsible for discriminating whether a picture comes from the generator or from the real pictures. The problem of adversarial examples has also been raised, and there are many methods to solve it, such as @cite_28 . | {
"cite_N": [
"@cite_28",
"@cite_6"
],
"mid": [
"2596763562",
"2964218010"
],
"abstract": [
"Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL). However, existing GANs in SSL have two problems: (1) the generator and the discriminator (i.e. the classifier) may not be optimal at the same time; and (2) the generator cannot control the semantics of the generated samples. The problems essentially arise from the two-player formulation, where a single discriminator shares incompatible roles of identifying fake samples and predicting labels and it only estimates the data without considering the labels. To address the problems, we present triple generative adversarial net (Triple-GAN), which consists of three players---a generator, a discriminator and a classifier. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible utilities to ensure that the distributions characterized by the classifier and the generator both converge to the data distribution. Our results on various datasets demonstrate that Triple-GAN as a unified model can simultaneously (1) achieve the state-of-the-art classification results among deep generative models, and (2) disentangle the classes and styles of the input and transfer smoothly in the data space via interpolation in the latent space class-conditionally.",
"Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL). However, existing GANs in SSL have two problems: (1) the generator and the discriminator (i.e. the classifier) may not be optimal at the same time; and (2) the generator cannot control the semantics of the generated samples. The problems essentially arise from the two-player formulation, where a single discriminator shares incompatible roles of identifying fake samples and predicting labels and it only estimates the data without considering the labels. To address the problems, we present triple generative adversarial net (Triple-GAN), which consists of three players---a generator, a discriminator and a classifier. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible utilities to ensure that the distributions characterized by the classifier and the generator both converge to the data distribution. Our results on various datasets demonstrate that Triple-GAN as a unified model can simultaneously (1) achieve the state-of-the-art classification results among deep generative models, and (2) disentangle the classes and styles of the input and transfer smoothly in the data space via interpolation in the latent space class-conditionally."
]
} |
1901.06199 | 2909896778 | Generative Adversarial Networks (GANs) have received great attention recently due to their excellent performance in image generation, transformation, and super-resolution. However, GANs have rarely been studied and trained for classification, so the generated images may not be appropriate for classification. In this paper, we propose a novel Generative Adversarial Classifier (GAC) particularly for low-resolution Handwriting Character Recognition. Specifically, by additionally involving a classifier in the training process of normal GANs, GAC is calibrated to learn suitable structures and restored character images that benefit classification. Experimental results show that our proposed method can achieve remarkable performance in 8x super-resolution of handwriting characters, approximately 10% and 20% higher than the present state-of-the-art methods on the benchmark data CASIA-HWDB1.1 and MNIST, respectively. | In 2016, @cite_21 proposed DCGAN, which is stable in most settings and shows vector arithmetic as an intrinsic property of the representations learned by the generator. @cite_23 proposed the conditional GAN; the idea is to use labels for some data to help the network build salient representations, so that it can control the generator's outputs without changing the architecture by adding the label as another input to the generator. @cite_27 proposed SRGAN, which reconstructs the HR image with a GAN based on ResNet @cite_19 and achieves remarkable performance in terms of human vision but low PSNR. The Triple-GAN is proposed by @cite_0 and contains three parts: a classifier @math that (approximately) characterizes the conditional distribution @math , a class-conditional generator @math that (approximately) characterizes the conditional distribution in the other direction @math , and a discriminator @math that distinguishes whether a pair of data @math comes from the true distribution @math . The final goal of Triple-GAN is to predict the labels @math for unlabeled data as well as to generate new samples @math conditioned on @math . | {
"cite_N": [
"@cite_21",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_23"
],
"mid": [
"2787223504",
"2596763562",
"2964218010",
"2810518847",
"2806935606"
],
"abstract": [
"We propose in this paper a new approach to train the Generative Adversarial Nets (GANs) with a mixture of generators to overcome the mode collapsing problem. The main intuition is to employ multiple generators, instead of using a single one as in the original GAN. The idea is simple, yet proven to be extremely effective at covering diverse data modes, easily overcoming the mode collapsing problem and delivering state-of-the-art results. A minimax formulation was able to establish among a classifier, a discriminator, and a set of generators in a similar spirit with GAN. Generators create samples that are intended to come from the same distribution as the training data, whilst the discriminator determines whether samples are true data or generated by generators, and the classifier specifies which generator a sample comes from. The distinguishing feature is that internal samples are created from multiple generators, and then one of them will be randomly selected as final output similar to the mechanism of a probabilistic mixture model. We term our method Mixture Generative Adversarial Nets (MGAN). We develop theoretical analysis to prove that, at the equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of generators’ distributions and the empirical data distribution is minimal, whilst the JSD among generators’ distributions is maximal, hence effectively avoiding the mode collapsing problem. By utilizing parameter sharing, our proposed model adds minimal computational cost to the standard GAN, and thus can also efficiently scale to large-scale datasets. We conduct extensive experiments on synthetic 2D data and natural image databases (CIFAR-10, STL-10 and ImageNet) to demonstrate the superior performance of our MGAN in achieving state-of-the-art Inception scores over latest baselines, generating diverse and appealing recognizable objects at different resolutions, and specializing in capturing different types of objects by the generators.",
"Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL). However, existing GANs in SSL have two problems: (1) the generator and the discriminator (i.e. the classifier) may not be optimal at the same time; and (2) the generator cannot control the semantics of the generated samples. The problems essentially arise from the two-player formulation, where a single discriminator shares incompatible roles of identifying fake samples and predicting labels and it only estimates the data without considering the labels. To address the problems, we present triple generative adversarial net (Triple-GAN), which consists of three players---a generator, a discriminator and a classifier. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible utilities to ensure that the distributions characterized by the classifier and the generator both converge to the data distribution. Our results on various datasets demonstrate that Triple-GAN as a unified model can simultaneously (1) achieve the state-of-the-art classification results among deep generative models, and (2) disentangle the classes and styles of the input and transfer smoothly in the data space via interpolation in the latent space class-conditionally.",
"Generative Adversarial Nets (GANs) have shown promise in image generation and semi-supervised learning (SSL). However, existing GANs in SSL have two problems: (1) the generator and the discriminator (i.e. the classifier) may not be optimal at the same time; and (2) the generator cannot control the semantics of the generated samples. The problems essentially arise from the two-player formulation, where a single discriminator shares incompatible roles of identifying fake samples and predicting labels and it only estimates the data without considering the labels. To address the problems, we present triple generative adversarial net (Triple-GAN), which consists of three players---a generator, a discriminator and a classifier. The generator and the classifier characterize the conditional distributions between images and labels, and the discriminator solely focuses on identifying fake image-label pairs. We design compatible utilities to ensure that the distributions characterized by the classifier and the generator both converge to the data distribution. Our results on various datasets demonstrate that Triple-GAN as a unified model can simultaneously (1) achieve the state-of-the-art classification results among deep generative models, and (2) disentangle the classes and styles of the input and transfer smoothly in the data space via interpolation in the latent space class-conditionally.",
"In standard generative adversarial network (SGAN), the discriminator estimates the probability that the input data is real. The generator is trained to increase the probability that fake data is real. We argue that it should also simultaneously decrease the probability that real data is real because 1) this would account for a priori knowledge that half of the data in the mini-batch is fake, 2) this would be observed with divergence minimization, and 3) in optimal settings, SGAN would be equivalent to integral probability metric (IPM) GANs. We show that this property can be induced by using a relativistic discriminator which estimate the probability that the given real data is more realistic than a randomly sampled fake data. We also present a variant in which the discriminator estimate the probability that the given real data is more realistic than fake data, on average. We generalize both approaches to non-standard GAN loss functions and we refer to them respectively as Relativistic GANs (RGANs) and Relativistic average GANs (RaGANs). We show that IPM-based GANs are a subset of RGANs which use the identity function. Empirically, we observe that 1) RGANs and RaGANs are significantly more stable and generate higher quality data samples than their non-relativistic counterparts, 2) Standard RaGAN with gradient penalty generate data of better quality than WGAN-GP while only requiring a single discriminator update per generator update (reducing the time taken for reaching the state-of-the-art by 400 ), and 3) RaGANs are able to generate plausible high resolutions images (256x256) from a very small sample (N=2011), while GAN and LSGAN cannot; these images are of significantly better quality than the ones generated by WGAN-GP and SGAN with spectral normalization.",
"We propose an adversarial learning approach for generating multi-turn dialogue responses. Our proposed framework, hredGAN, is based on conditional generative adversarial networks (GANs). The GAN's generator is a modified hierarchical recurrent encoder-decoder network (HRED) and the discriminator is a word-level bidirectional RNN that shares context and word embeddings with the generator. During inference, noise samples conditioned on the dialogue history are used to perturb the generator's latent space to generate several possible responses. The final response is the one ranked best by the discriminator. The hredGAN shows improved performance over existing methods: (1) it generalizes better than networks trained using only the log-likelihood criterion, and (2) it generates longer, more informative and more diverse responses with high utterance and topic relevance even with limited training data. This improvement is demonstrated on the Movie triples and Ubuntu dialogue datasets using both automatic and human evaluations."
]
} |
1907.07469 | 2960922699 | In this work, we propose an edge detection algorithm that estimates the lifetime of events produced by a dynamic vision sensor (DVS), also known as an event camera. The event camera, unlike a traditional CMOS camera, generates sparse event data at a pixel whose log-intensity changes. Due to this characteristic, theoretically, there is only one event or no event at a specific time, which makes it difficult to grasp the world captured by the camera at a particular moment. In this work, we present an algorithm that keeps an event alive until the corresponding event is generated in a nearby pixel so that the shape of an edge is preserved. In particular, we consider a pixel area to fit a plane on the Surface of Active Events (SAE) and call the point inside the pixel area closest to the plane an intra-pixel-area event. These intra-pixel-area events help the plane fitting algorithm estimate the lifetime robustly and precisely. Our algorithm performs better in terms of sharpness and a similarity metric than the accumulation of events over fixed counts or time intervals, when compared with the existing edge detection algorithms, both qualitatively and quantitatively. | Some research aims to detect edges, not just the line segments that are frequently found in artifacts. F. Barranco et al. @cite_0 detect the contour of foreground objects. They extract features from the accumulated events such as orientation, timestamp, motion, and time texture. Then the boundary is predicted by the learned Structured Random Forest (SRF) given the DVS features. However, since this algorithm is developed for object segmentation, it is prioritised to detect the boundary of the foreground, and the performance of the overall edge extraction may be degraded. E. Mueggler et al. @cite_8 estimate the lifetime of an event from local plane fitting on the SAE based on event-based visual flow @cite_4 . However, the naive RANdom SAmple Consensus (RANSAC) method could not be successfully adapted for the event camera, thus causing imprecise estimation. Therefore, we propose an intra-pixel-area approach for RANSAC in order to estimate a local plane robustly and precisely. Also, we quantitatively evaluate algorithms in terms of a similarity metric, which has not been done in most of the previous works. | {
"cite_N": [
"@cite_0",
"@cite_4",
"@cite_8"
],
"mid": [
"2216124221",
"1969366022",
"2462462929"
],
"abstract": [
"The bio-inspired, asynchronous event-based dynamic vision sensor records temporal changes in the luminance of the scene at high temporal resolution. Since events are only triggered at significant luminance changes, most events occur at the boundary of objects and their parts. The detection of these contours is an essential step for further interpretation of the scene. This paper presents an approach to learn the location of contours and their border ownership using Structured Random Forests on event-based features that encode motion, timing, texture, and spatial orientations. The classifier integrates elegantly information over time by utilizing the classification results previously computed. Finally, the contour detection and boundary assignment are demonstrated in a layer-segmentation of the scene. Experimental results demonstrate good performance in boundary detection and segmentation.",
"In this paper, we address the problems of contour detection, bottom-up grouping, object detection and semantic segmentation on RGB-D data. We focus on the challenging setting of cluttered indoor scenes, and evaluate our approach on the recently introduced NYU-Depth V2 (NYUD2) dataset (, ECCV, 2012). We propose algorithms for object boundary detection and hierarchical segmentation that generalize the @math gPb-ucm approach of (TPAMI, 2011) by making effective use of depth information. We show that our system can label each contour with its type (depth, normal or albedo). We also propose a generic method for long-range amodal completion of surfaces and show its effectiveness in grouping. We train RGB-D object detectors by analyzing and computing histogram of oriented gradients on the depth image and using them with deformable part models (, TPAMI, 2010). We observe that this simple strategy for training object detectors significantly outperforms more complicated models in the literature. We then turn to the problem of semantic segmentation for which we propose an approach that classifies superpixels into the dominant object categories in the NYUD2 dataset. We design generic and class-specific features to encode the appearance and geometry of objects. We also show that additional features computed from RGB-D object detectors and scene classifiers further improves semantic segmentation accuracy. In all of these tasks, we report significant improvements over the state-of-the-art.",
"In this paper, we propose a non-local structured prior for volumetric multi-view 3D reconstruction. Towards this goal, we present a novel Markov random field model based on ray potentials in which assumptions about large 3D surface patches such as planarity or Manhattan world constraints can be efficiently encoded as probabilistic priors. We further derive an inference algorithm that reasons jointly about voxels, pixels and image segments, and estimates marginal distributions of appearance, occupancy, depth, normals and planarity. Key to tractable inference is a novel hybrid representation that spans both voxel and pixel space and that integrates non-local information from 2D image segmentations in a principled way. We compare our non-local prior to commonly employed local smoothness assumptions and a variety of state-of-the-art volumetric reconstruction baselines on challenging outdoor scenes with textureless and reflective surfaces. Our experiments indicate that regularizing over larger distances has the potential to resolve ambiguities where local regularizers fail."
]
} |
1907.07581 | 2958911020 | An online personalized news product needs a suitable cover for each article. The news cover needs to have high image quality and draw readers' attention at the same time, which is extraordinarily challenging due to the subjectivity of the task. In this paper, we assess the news cover from the perspective of image clarity and object salience. We propose an end-to-end multi-task learning network that performs image clarity assessment and semantic segmentation simultaneously, the results of which can guide news cover assessment. The proposed network is based on a modified DeepLabv3+ model. The network backbone is used for multi-scale spatial feature extraction, followed by two branches for image clarity assessment and semantic segmentation, respectively. The experimental results show that the proposed model is able to capture important content in images and performs better than single-task learning baselines on our proposed game-content-based CIA dataset. | The human visual system is highly sensitive to the edge and contour information of an image @cite_17 . Some IQA studies take edge structure information as the main image quality consideration; for example, in @cite_13 the authors apply edge information for both blur and noise detection, which are the major factors in image quality degradation. In @cite_19 , an edge model is employed to extract salient edge information for screen content image assessment, and it outperforms the other state-of-the-art IQA models of the day. | {
"cite_N": [
"@cite_19",
"@cite_13",
"@cite_17"
],
"mid": [
"2508724573",
"2141983208",
"1975115580"
],
"abstract": [
"Since the human visual system (HVS) is highly sensitive to edges, a novel image quality assessment (IQA) metric for assessing screen content images (SCIs) is proposed in this paper. The turnkey novelty lies in the use of an existing parametric edge model to extract two types of salient attributes — namely, edge contrast and edge width, for the distorted SCI under assessment and its original SCI, respectively. The extracted information is subject to conduct similarity measurements on each attribute, independently. The obtained similarity scores are then combined using our proposed edge-width pooling strategy to generate the final IQA score. Hopefully, this score is consistent with the judgment made by the HVS. Experimental results have shown that the proposed IQA metric produces higher consistency with that of the HVS on the evaluation of the image quality of the distorted SCI than that of other state-of-the-art IQA metrics.",
"Image quality assessment (IQA) aims to use computational models to measure the image quality consistently with subjective evaluations. The well-known structural similarity index brings IQA from pixel- to structure-based stage. In this paper, a novel feature similarity (FSIM) index for full reference IQA is proposed based on the fact that human visual system (HVS) understands an image mainly according to its low-level features. Specifically, the phase congruency (PC), which is a dimensionless measure of the significance of a local structure, is used as the primary feature in FSIM. Considering that PC is contrast invariant while the contrast information does affect HVS' perception of image quality, the image gradient magnitude (GM) is employed as the secondary feature in FSIM. PC and GM play complementary roles in characterizing the image local quality. After obtaining the local quality map, we use PC again as a weighting function to derive a single quality score. Extensive experiments performed on six benchmark IQA databases demonstrate that FSIM can achieve much higher consistency with the subjective evaluations than state-of-the-art IQA metrics.",
"We propose a highly unsupervised, training free, no reference image quality assessment (IQA) model that is based on the hypothesis that distorted images have certain latent characteristics that differ from those of “natural” or “pristine” images. These latent characteristics are uncovered by applying a “topic model” to visual words extracted from an assortment of pristine and distorted images. For the latent characteristics to be discriminatory between pristine and distorted images, the choice of the visual words is important. We extract quality-aware visual words that are based on natural scene statistic features [1]. We show that the similarity between the probability of occurrence of the different topics in an unseen image and the distribution of latent topics averaged over a large number of pristine natural images yields a quality measure. This measure correlates well with human difference mean opinion scores on the LIVE IQA database [2]."
]
} |
1907.07581 | 2958911020 | An online personalized news product needs a suitable cover for each article. The news cover needs to have high image quality and draw readers' attention at the same time, which is extraordinarily challenging due to the subjectivity of the task. In this paper, we assess the news cover from the perspective of image clarity and object salience. We propose an end-to-end multi-task learning network that performs image clarity assessment and semantic segmentation simultaneously, the results of which can guide news cover assessment. The proposed network is based on a modified DeepLabv3+ model. The network backbone is used for multi-scale spatial feature extraction, followed by two branches for image clarity assessment and semantic segmentation, respectively. The experimental results show that the proposed model is able to capture important content in images and performs better than single-task learning baselines on our proposed game-content-based CIA dataset. | In recent years, the idea of employing a CNN-based approach for no-reference IQA (NR-IQA) tasks has emerged, and meanwhile the performance of NR-IQA has been significantly improved by such methods @cite_24 @cite_14 . For example, in @cite_14 , a CNN is directly utilized for image quality prediction without a reference image, which integrates feature learning and regression into one optimization process. One common ground behind those models is that their network architectures are relatively shallow and narrow, and thus not deep enough for learning high-level features. The emergence of deeper CNNs, such as ResNet-101 @cite_7 and Xception @cite_8 , further promotes the representational abilities of those models. For example, DeepLabv3+ @cite_22 employs atrous convolution to extract dense feature maps and capture global multi-scale context, resulting in significant performance improvements on semantic segmentation tasks. In @cite_20 , a DeepLab-based network is applied to excavate spatial features of hyperspectral images, and achieves outstanding performance. | {
"cite_N": [
"@cite_14",
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_24",
"@cite_20"
],
"mid": [
"2509123681",
"2258484932",
"1997974943",
"2963998559",
"2415731916",
"2614256707"
],
"abstract": [
"This paper presents a no reference image (NR) quality assessment (IQA) method based on a deep convolutional neural network (CNN). The CNN takes unpreprocessed image patches as an input and estimates the quality without employing any domain knowledge. By that, features and natural scene statistics are learnt purely data driven and combined with pooling and regression in one framework. We evaluate the network on the LIVE database and achieve a linear Pearson correlation superior to state-of-the-art NR IQA methods. We also apply the network to the image forensics task of decoder-sided quantization parameter estimation and also here achieve correlations of r = 0.989.",
"Convolutional neural network (CNN) has achieved the state-of-the-art performance in many different visual tasks. Learned from a large-scale training data set, CNN features are much more discriminative and accurate than the handcrafted features. Moreover, CNN features are also transferable among different domains. On the other hand, traditional dictionary-based features (such as BoW and spatial pyramid matching) contain much more local discriminative and structural information, which is implicitly embedded in the images. To further improve the performance, in this paper, we propose to combine CNN with dictionary-based models for scene recognition and visual domain adaptation (DA). Specifically, based on the well-tuned CNN models (e.g., AlexNet and VGG Net), two dictionary-based representations are further constructed, namely, mid-level local representation (MLR) and convolutional Fisher vector (CFV) representation. In MLR, an efficient two-stage clustering method, i.e., weighted spatial and feature space spectral clustering on the parts of a single image followed by clustering all representative parts of all images, is used to generate a class-mixture or a class-specific part dictionary. After that, the part dictionary is used to operate with the multiscale image inputs for generating mid-level representation. In CFV, a multiscale and scale-proportional Gaussian mixture model training strategy is utilized to generate Fisher vectors based on the last convolutional layer of CNN. By integrating the complementary information of MLR, CFV, and the CNN features of the fully connected layer, the state-of-the-art performance can be achieved on scene recognition and DA problems. An interested finding is that our proposed hybrid representation (from VGG net trained on ImageNet) is also complementary to GoogLeNet and or VGG-11 (trained on Place205) greatly.",
"This paper addresses the problem of general-purpose No-Reference Image Quality Assessment (NR-IQA) with the goal of developing a real-time, cross-domain model that can predict the quality of distorted images without prior knowledge of non-distorted reference images and types of distortions present in these images. The contributions of our work are two-fold: first, the proposed method is highly efficient. NR-IQA measures are often used in real-time imaging or communication systems, therefore it is important to have a fast NR-IQA algorithm that can be used in these real-time applications. Second, the proposed method has the potential to be used in multiple image domains. Previous work on NR-IQA focus primarily on predicting quality of natural scene image with respect to human perception, yet, in other image domains, the final receiver of a digital image may not be a human. The proposed method consists of the following components: (1) a local feature extractor, (2) a global feature extractor and (3) a regression model. While previous approaches usually treat local feature extraction and regression model training independently, we propose a supervised method based on back-projection, which links the two steps by learning a compact set of filters which can be applied to local image patches to obtain discriminative local features. Using a small set of filters, the proposed method is extremely fast. We have tested this method on various natural scene and document image datasets and obtained state-of-the-art results.",
"During the last half decade, convolutional neural networks (CNNs) have triumphed over semantic segmentation, which is a core task of various emerging industrial applications such as autonomous driving and medical imaging. However, to train CNNs requires a huge amount of data, which is difficult to collect and laborious to annotate. Recent advances in computer graphics make it possible to train CNN models on photo-realistic synthetic data with computer-generated annotations. Despite this, the domain mismatch between the real images and the synthetic data significantly decreases the models’ performance. Hence we propose a curriculum-style learning approach to minimize the domain gap in semantic segmentation. The curriculum domain adaptation solves easy tasks first in order to infer some necessary properties about the target domain; in particular, the first task is to learn global label distributions over images and local distributions over landmark superpixels. These are easy to estimate because images of urban traffic scenes have strong idiosyncrasies (e.g., the size and spatial relations of buildings, streets, cars, etc.). We then train the segmentation network in such a way that the network predictions in the target domain follow those inferred properties. In experiments, our method significantly outperforms the baselines as well as the only known existing approach to the same problem.",
"Deep convolutional neural networks (CNNs) have been immensely successful in many high-level computer vision tasks given large labelled datasets. However, for video semantic object segmentation, a domain where labels are scarce, effectively exploiting the representation power of CNN with limited training data remains a challenge. Simply borrowing the existing pre-trained CNN image recognition model for video segmentation task can severely hurt performance. We propose a semi-supervised approach to adapting CNN image recognition model trained from labelled image data to the target domain exploiting both semantic evidence learned from CNN, and the intrinsic structures of video data. By explicitly modelling and compensating for the domain shift from the source domain to the target domain, this proposed approach underpins a robust semantic object segmentation method against the changes in appearance, shape and occlusion in natural videos. We present extensive experiments on challenging datasets that demonstrate the superior performance of our approach compared with the state-of-the-art methods.",
"In this paper, we describe a novel deep convolutional neural network (CNN) that is deeper and wider than other existing deep networks for hyperspectral image classification. Unlike current state-of-the-art approaches in CNN-based hyperspectral image classification, the proposed network, called contextual deep CNN, can optimally explore local contextual interactions by jointly exploiting local spatio-spectral relationships of neighboring individual pixel vectors. The joint exploitation of the spatio-spectral information is achieved by a multi-scale convolutional filter bank used as an initial component of the proposed CNN pipeline. The initial spatial and spectral feature maps obtained from the multi-scale filter bank are then combined together to form a joint spatio-spectral feature map. The joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through a fully convolutional network that eventually predicts the corresponding label of each pixel vector. The proposed approach is tested on three benchmark data sets: the Indian Pines data set, the Salinas data set, and the University of Pavia data set. Performance comparison shows enhanced classification performance of the proposed approach over the current state-of-the-art on the three data sets."
]
} |
1907.07581 | 2958911020 | An online personalized news product needs a suitable cover for each article. The news cover needs to have high image quality and draw readers' attention at the same time, which is extraordinarily challenging due to the subjectivity of the task. In this paper, we assess the news cover from the perspective of image clarity and object salience. We propose an end-to-end multi-task learning network that performs image clarity assessment and semantic segmentation simultaneously, the results of which can guide news cover assessment. The proposed network is based on a modified DeepLabv3+ model. The network backbone is used for multi-scale spatial feature extraction, followed by two branches for image clarity assessment and semantic segmentation, respectively. The experimental results show that the proposed model is able to capture important content in images and performs better than single-task learning baselines on our proposed game-content-based CIA dataset. | MTL is based on the fundamental idea that different tasks can share a common low-level representation. In many computer vision tasks, MTL has exhibited advantages in performance improvement and memory saving. In @cite_6 , one unified architecture which jointly learns low-, mid-, and high-level vision tasks is introduced. With such a universal network, the tasks of boundary detection, normal estimation, saliency estimation, semantic segmentation, semantic boundary detection, proposal generation, and object detection can be simultaneously addressed. In @cite_21 , a multi-task learning network with "cross-stitch" units is proposed, which shows dramatically improved performance over single-task baselines on the NYUv2 dataset @cite_3 . However, prior studies have not explored multi-task learning architectures or approaches for IQA and semantic segmentation, which is the target of this work. | {
"cite_N": [
"@cite_21",
"@cite_3",
"@cite_6"
],
"mid": [
"2891303672",
"2526041965",
"2029731618"
],
"abstract": [
"Multi-Task Learning (MTL) is appealing for deep learning regularization. In this paper, we tackle a specific MTL context denoted as primary MTL, where the ultimate goal is to improve the performance of a given primary task by leveraging several other auxiliary tasks. Our main methodological contribution is to introduce ROCK, a new generic multi-modal fusion block for deep learning tailored to the primary MTL context. ROCK architecture is based on a residual connection, which makes forward prediction explicitly impacted by the intermediate auxiliary representations. The auxiliary predictor's architecture is also specifically designed to our primary MTL context, by incorporating intensive pooling operators for maximizing complementarity of intermediate representations. Extensive experiments on NYUv2 dataset (object detection with scene classification, depth prediction, and surface normal estimation as auxiliary tasks) validate the relevance of the approach and its superiority to flat MTL approaches. Our method outperforms state-of-the-art object detection models on NYUv2 by a large margin, and is also able to handle large-scale heterogeneous inputs (real and synthetic images) and missing annotation modalities.",
"We explore architectures for general pixel-level prediction problems, from low-level edge detection to mid-level surface normal estimation to high-level semantic segmentation. Convolutional predictors, such as the fully-convolutional network (FCN), have achieved remarkable success by exploiting the spatial redundancy of neighboring pixels through convolutional processing. Though computationally efficient, we point out that such approaches are not statistically efficient during learning precisely because spatial redundancy limits the information learned from neighboring pixels. We demonstrate that (1) stratified sampling allows us to add diversity during batch updates and (2) sampled multi-scale features allow us to explore more nonlinear predictors (multiple fully-connected layers followed by ReLU) that improve overall accuracy. Finally, our objective is to show how a architecture can get performance better than (or comparable to) the architectures designed for a particular task. Interestingly, our single architecture produces state-of-the-art results for semantic segmentation on PASCAL-Context, surface normal estimation on NYUDv2 dataset, and edge detection on BSDS without contextual post-processing.",
"We address the task of learning a semantic segmentation from weakly supervised data. Our aim is to devise a system that predicts an object label for each pixel by making use of only image level labels during training – the information whether a certain object is present or not in the image. Such coarse tagging of images is faster and easier to obtain as opposed to the tedious task of pixelwise labeling required in state of the art systems. We cast this task naturally as a multiple instance learning (MIL) problem. We use Semantic Texton Forest (STF) as the basic framework and extend it for the MIL setting. We make use of multitask learning (MTL) to regularize our solution. Here, an external task of geometric context estimation is used to improve on the task of semantic segmentation. We report experimental results on the MSRC21 and the very challenging VOC2007 datasets. On MSRC21 dataset we are able, by using 276 weakly labeled images, to achieve the performance of a supervised STF trained on pixelwise labeled training set of 56 images, which is a significant reduction in supervision needed."
]
} |
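As a rough illustration of the shared-backbone, two-branch design described in the record above (this is not the authors' modified DeepLabv3+; the layer sizes, class count, and head designs here are assumptions), a minimal PyTorch sketch of a network that jointly predicts a segmentation map and an image-clarity score could look like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadNet(nn.Module):
    """Shared encoder feeding a semantic-segmentation head and an image-clarity head."""
    def __init__(self, num_seg_classes=21):
        super().__init__()
        self.backbone = nn.Sequential(  # stands in for the multi-scale feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.seg_head = nn.Conv2d(64, num_seg_classes, 1)  # per-pixel class logits
        self.clarity_head = nn.Linear(64, 1)                # scalar clarity score

    def forward(self, x):
        feat = self.backbone(x)
        seg = F.interpolate(self.seg_head(feat), size=x.shape[-2:],
                            mode="bilinear", align_corners=False)
        clarity = self.clarity_head(F.adaptive_avg_pool2d(feat, 1).flatten(1))
        return seg, clarity

net = TwoHeadNet()
seg_logits, clarity = net(torch.randn(2, 3, 128, 128))
print(seg_logits.shape, clarity.shape)  # (2, 21, 128, 128) and (2, 1)
```

In training, the two heads would typically be optimized jointly, for example a pixel-wise cross-entropy loss for segmentation plus a regression or ranking loss for clarity, with a weighting factor between the two terms.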
1907.07671 | 2958881886 | Stress research is a rapidly emerging area in the field of electroencephalography (EEG) based signal processing. The use of EEG as an objective measure for cost-effective and personalized stress management becomes important in particular situations, such as the non-availability of mental health experts. In this study, long-term stress is classified using baseline EEG signal recordings. The labelling for the stress and control groups is performed using two methods: (i) the perceived stress scale score and (ii) expert evaluation. The frequency-domain features are extracted from five-channel EEG recordings in addition to the frontal and temporal alpha and beta asymmetries. The alpha asymmetry is computed from four channels and used as a feature. Feature selection is also performed using a t-test to identify statistically significant features for both stress and control groups. We found that the support vector machine is best suited to classify long-term human stress when used with alpha asymmetry as a feature. It is observed that the expert-evaluation-based labelling method improves the classification accuracy up to 85.20%. Based on these results, it is concluded that alpha asymmetry may be used as a potential bio-marker for stress classification when labels are assigned using expert evaluation. | Hemispheric specialization is a major concern in neuro-physiological research. Generally, a healthy brain at rest has a fairly balanced level of activity in both hemispheres of the brain @cite_30 . The left hemisphere is associated with the processing of positive emotions, while the right hemisphere is associated with the processing of negative emotions @cite_16 . The extent of asymmetry has been suggested to vary under conditions of chronic stress @cite_12 . Frontal asymmetry is highly related to post-traumatic stress disorder (PTSD) @cite_31 . The results in @cite_32 have shown that the major depression disorder (MDD) group is significantly right-lateralized relative to controls, and that both the MDD and PTSD groups displayed more left- than right-frontal activity. | {
"cite_N": [
"@cite_30",
"@cite_32",
"@cite_31",
"@cite_16",
"@cite_12"
],
"mid": [
"2040325003",
"2019370496",
"1744806699",
"2098580305",
"2133085034"
],
"abstract": [
"To test the hypothesis that activation asymmetries of the most anterior parts of the prefrontal cortex may be related to state-dependent regulation of emotion, spontaneous changes of cortical activation asymmetries from one session to a second one were related to spontaneous mood changes in two large samples (ns = 56 and 128). The interval between sessions was 2 to 4 weeks. Results show that mood changes specifically covary with changes of EEG asymmetry at the frontopolar electrode positions, but not with changes at other locations (dorsolateral frontal, temporal, and pariet al). Anxiety, tension, and depression were found to decrease when frontopolar activation asymmetry shifted to the right. Taking the new findings into account may contribute to the refinement and extension of theories on EEG laterality and emotion.",
"We review evidence for partially segregated networks of brain areas that carry out different attentional functions. One system, which includes parts of the intrapariet al cortex and superior frontal cortex, is involved in preparing and applying goal-directed (top-down) selection for stimuli and responses. This system is also modulated by the detection of stimuli. The other system, which includes the temporopariet al cortex and inferior frontal cortex, and is largely lateralized to the right hemisphere, is not involved in top-down selection. Instead, this system is specialized for the detection of behaviourally relevant stimuli, particularly when they are salient or unexpected. This ventral frontopariet al network works as a ‘circuit breaker’ for the dorsal system, directing attention to salient events. Both attentional systems interact during normal vision, and both are disrupted in unilateral spatial neglect.",
"Abstract Can semantic corpora be coupled to dynamical simulations in such a way so as to extract new associations from the data that were hitherto unapparent? We attempt to do this within neuroscience as an application domain, by introducing the notion of the semantome and coupling it to the connectome of the human brain network. This is implemented using BrainX3, a virtual reality simulation cum data mining platform that can be used for visualization, analysis and feature extraction of neuroscience data. We use this system to explore anatomical, functional and symptomatic semantics associated to simulated neuronal activity of a healthy brain, one with stroke and one perturbed by transcranial magnetic stimulation. In particular, we find that pariet al and occipital lesions in stroke affect the visual processing pathway leading to symptoms such as visual neglect, depression and photo-sensitivity seizures. Integrating semantomics with connectomics thus generates hypotheses about symptoms, functions and brain activity that supplement existing tools for diagnosis of mental illness. Our results suggest a new approach to big data with potential applications to other domains.",
"In recent years, many new cortical areas have been identified in the macaque monkey. The number of identified connections between areas has increased even more dramatically. We report here on (1) a summary of the layout of cortical areas associated with vision and with other modalities, (2) a computerized database for storing and representing large amounts of information on connectivity patterns, and (3) the application of these data to the analysis of hierarchical organization of the cerebral cortex. Our analysis concentrates on the visual system, which includes 25 neocortical areas that are predominantly or exclusively visual in function, plus an additional 7 areas that we regard as visual-association areas on the basis of their extensive visual inputs. A total of 305 connections among these 32 visual and visual-association areas have been reported. This represents 31 of the possible number of pathways it each area were connected with all others. The actual degree of connectivity is likely to be closer to 40 . The great majority of pathways involve reciprocal connections between areas. There are also extensive connections with cortical areas outside the visual system proper, including the somatosensory cortex, as well as neocortical, transitional, and archicortical regions in the temporal and frontal lobes. In the somatosensory motor system, there are 62 identified pathways linking 13 cortical areas, suggesting an overall connectivity of about 40 . Based on the laminar patterns of connections between areas, we propose a hierarchy of visual areas and of somato sensory motor areas that is more comprehensive than those suggested in other recent studies. The current version of the visual hierarchy includes 10 levels of cortical processing. Altogether, it contains 14 levels if one includes the retina and lateral geniculate nucleus at the bottom as well as the entorhinal cortex and hippocampus at the top. Within this hierarchy, there are multiple, intertwined processing streams, which, at a low level, are related to the compartmental organization of areas V1 and V2 and, at a high level, are related to the distinction between processing centers in the temporal and pariet al lobes. However, there are some pathways and relationships (about 10 of the total) whose descriptions do not fit cleanly into this hierarchical scheme for one reason or another. In most instances, though, it is unclear whether these represent genuine exceptions to a strict hierarchy rather than inaccuracies or uncertainties in the reported assignment.",
"Abstract This commentary provides reflections on the current state of affairs in research on EEG frontal asymmetries associated with affect. Although considerable progress has occurred since the first report on this topic 25 years ago, research on frontal EEG asymmetries associated with affect has largely evolved in the absence of any serious connection with neuroscience research on the structure and function of the primate prefrontal cortex (PFC). Such integration is important as this work progresses since the neuroscience literature can help to understand what the prefrontal cortex is “doing” in affective processing. Data from the neuroscience literature on the heterogeneity of different sectors of the PFC are introduced and more specific hypotheses are offered about what different sectors of the PFC might be doing in affect. A number of methodological issues associated with EEG measures of functional prefrontal asymmetries are also considered."
]
} |
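The record above relies on alpha-band power asymmetry between homologous left/right electrodes as a feature. The sketch below shows one common way such a feature is computed, via a Welch power spectral density and a log-ratio of band powers; the sampling rate, band limits, and electrode pairing here are assumptions, not the authors' exact protocol:

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    """Average power of `signal` within [lo, hi] Hz using a Welch periodogram."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

def alpha_asymmetry(left_ch, right_ch, fs=128):
    """ln(right alpha power) - ln(left alpha power); a widely used asymmetry index."""
    p_left = band_power(left_ch, fs, 8.0, 13.0)
    p_right = band_power(right_ch, fs, 8.0, 13.0)
    return np.log(p_right) - np.log(p_left)

# toy example: random data standing in for one minute of two frontal electrodes
rng = np.random.default_rng(0)
left, right = rng.standard_normal(128 * 60), rng.standard_normal(128 * 60)
print(alpha_asymmetry(left, right))
```

The resulting asymmetry values, together with other band-power features, would then be fed to a classifier such as an SVM.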
1907.07543 | 2960456850 | Despite the recent success of deep transfer learning approaches in NLP, there is a lack of quantitative studies demonstrating the gains these models offer in low-shot text classification tasks over existing paradigms. Deep transfer learning approaches such as BERT and ULMFiT demonstrate that they can beat state-of-the-art results on larger datasets; however, when one has only 100-1000 labelled examples per class, the choice of approach is less clear, with classical machine learning and deep transfer learning representing valid options. This paper compares the current best transfer learning approach with top classical machine learning approaches on a trinary sentiment classification task to assess the best paradigm. We find that BERT, representing the best of deep transfer learning, is the best performing approach, outperforming top classical machine learning algorithms by 9.7% on average when trained with 100 examples per class, narrowing to 1.8% at 1000 labels per class. We also show the robustness of deep transfer learning in moving across domains, where the maximum loss in accuracy is only 0.7% in similar-domain tasks and 3.2% cross-domain, compared to classical machine learning, which loses up to 20.6%. | It is well established that no single classical machine learning classifier consistently achieves the best classification performance. For example, the works in @cite_17 @cite_4 @cite_3 showed that various classical machine learning approaches each slightly outperformed the others. It is a long-known phenomenon that these models have different strengths depending on the specific task and dataset. As such, we consider two of them (Naïve Bayes and SVM) to give a fair representation and alleviate the bias of a single classifier. | {
"cite_N": [
"@cite_4",
"@cite_3",
"@cite_17"
],
"mid": [
"2050496630",
"2900946294",
"2063198586"
],
"abstract": [
"Machine learning classifiers have recently emerged as a way to predict the introduction of bugs in changes made to source code files. The classifier is first trained on software history, and then used to predict if an impending change causes a bug. Drawbacks of existing classifier-based bug prediction techniques are insufficient performance for practical use and slow prediction times due to a large number of machine learned features. This paper investigates multiple feature selection techniques that are generally applicable to classification-based bug prediction methods. The techniques discard less important features until optimal classification performance is reached. The total number of features used for training is substantially reduced, often to less than 10 percent of the original. The performance of Naive Bayes and Support Vector Machine (SVM) classifiers when using this technique is characterized on 11 software projects. Naive Bayes using feature selection provides significant improvement in buggy F-measure (21 percent improvement) over prior change classification bug prediction results (by the second and fourth authors [28]). The SVM's improvement in buggy F-measure is 9 percent. Interestingly, an analysis of performance for varying numbers of features shows that strong performance is achieved at even 1 percent of the original number of features.",
"In our recent work (Bubeck, Price, Razenshteyn, arXiv:1805.10204) we argued that adversarial examples in machine learning might be due to an inherent computational hardness of the problem. More precisely, we constructed a binary classification task for which (i) a robust classifier exists; yet no non-trivial accuracy can be obtained with an efficient algorithm in (ii) the statistical query model. In the present paper we significantly strengthen both (i) and (ii): we now construct a task which admits (i') a maximally robust classifier (that is it can tolerate perturbations of size comparable to the size of the examples themselves); and moreover we prove computational hardness of learning this task under (ii') a standard cryptographic assumption.",
"Objectives: To investigate whether (1) machine learning classifiers can help identify nonrandomized studies eligible for full-text screening by systematic reviewers; (2) classifier performance varies with optimization; and (3) the number of citations to screen can be reduced. Methods: We used an open-source, data-mining suite to process and classify biomedical citations that point to mostly nonrandomized studies from 2 systematic reviews. We built training and test sets for citation portions and compared classifier performance by considering the value of indexing, various feature sets, and optimization. We conducted our experiments in 2 phases. The design of phase I with no optimization was: 4 classifiersx3 feature setsx3 citation portions. Classifiers included k-nearest neighbor, naive Bayes, complement naive Bayes, and evolutionary support vector machine. Feature sets included bag of words, and 2- and 3-term n-grams. Citation portions included titles, titles and abstracts, and full citations with metadata. Phase II with optimization involved a subset of the classifiers, as well as features extracted from full citations, and full citations with overweighted titles. We optimized features and classifier parameters by manually setting information gain thresholds outside of a process for iterative grid optimization with 10-fold cross-validations. We independently tested models on data reserved for that purpose and statistically compared classifier performance on 2 types of feature sets. We estimated the number of citations needed to screen by reviewers during a second pass through a reduced set of citations. Results: In phase I, the evolutionary support vector machine returned the best recall for bag of words extracted from full citations; the best classifier with respect to overall performance was k-nearest neighbor. No classifier attained good enough recall for this task without optimization. In phase II, we boosted performance with optimization for evolutionary support vector machine and complement naive Bayes classifiers. Generalization performance was better for the latter in the independent tests. For evolutionary support vector machine and complement naive Bayes classifiers, the initial retrieval set was reduced by 46 and 35 , respectively. Conclusions: Machine learning classifiers can help identify nonrandomized studies eligible for full-text screening by systematic reviewers. Optimization can markedly improve performance of classifiers. However, generalizability varies with the classifier. The number of citations to screen during a second independent pass through the citations can be substantially reduced."
]
} |
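For context, the classical baselines named in the record above (Naïve Bayes and SVM) are usually built as bag-of-words/TF-IDF pipelines. A minimal scikit-learn sketch, using placeholder data rather than the paper's dataset, could be:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# toy trinary sentiment data; a real setup would use 100-1000 labelled texts per class
texts = ["great product, loved it", "terrible service, never again", "it was okay, nothing special"]
labels = ["pos", "neg", "neu"]

for clf in (MultinomialNB(), LinearSVC()):
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    model.fit(texts, labels)
    print(type(clf).__name__, model.predict(["not great, not terrible"]))
```

A BERT-style approach would instead fine-tune a pretrained transformer end-to-end on the same labelled examples, which is where the low-shot gains reported above come from.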
1907.07613 | 2960281739 | Template-matching methods for visual tracking have gained popularity recently due to their good performance and fast speed. However, they lack effective ways to adapt to changes in the target object's appearance, making their tracking accuracy still far from state-of-the-art. In this paper, we propose a dynamic memory network to adapt the template to the target's appearance variations during tracking. The reading and writing process of the external memory is controlled by an LSTM network with the search feature map as input. A spatial attention mechanism is applied to concentrate the LSTM input on the potential target as the location of the target is at first unknown. To prevent aggressive model adaptivity, we apply gated residual template learning to control the amount of retrieved memory that is used to combine with the initial template. In order to alleviate the drift problem, we also design a "negative" memory unit that stores templates for distractors, which are used to cancel out wrong responses from the object template. To further boost the tracking performance, an auxiliary classification loss is added after the feature extractor part. Unlike tracking-by-detection methods where the object's information is maintained by the weight parameters of neural networks, which requires expensive online fine-tuning to be adaptable, our tracker runs completely feed-forward and adapts to the target's appearance changes by updating the external memory. Moreover, the capacity of our model is not determined by the network size as with other trackers --- the capacity can be easily enlarged as the memory requirements of a task increase, which is favorable for memorizing long-term object information. Extensive experiments on the OTB and VOT datasets demonstrate that our trackers perform favorably against state-of-the-art tracking methods while retaining real-time speed. | In this section, we review related work on tracking-by-detection, tracking by template-matching, memory networks and multi-task learning. A preliminary version of our work appears in ECCV 2018 @cite_63 . This paper contains additional improvements in both methodology and experiments, including: 1) we propose a negative memory unit that stores distractor templates to cancel out wrong responses from the object template; 2) we design an auxiliary classification loss to facilitate the tracker's robustness to appearance changes; 3) we conduct comprehensive experiments on the VOT datasets, including VOT-2015, VOT-2016 and VOT-2017. | {
"cite_N": [
"@cite_63"
],
"mid": [
"2963471260"
],
"abstract": [
"Template-matching methods for visual tracking have gained popularity recently due to their comparable performance and fast speed. However, they lack effective ways to adapt to changes in the target object’s appearance, making their tracking accuracy still far from state-of-the-art. In this paper, we propose a dynamic memory network to adapt the template to the target’s appearance variations during tracking. An LSTM is used as a memory controller, where the input is the search feature map and the outputs are the control signals for the reading and writing process of the memory block. As the location of the target is at first unknown in the search feature map, an attention mechanism is applied to concentrate the LSTM input on the potential target. To prevent aggressive model adaptivity, we apply gated residual template learning to control the amount of retrieved memory that is used to combine with the initial template. Unlike tracking-by-detection methods where the object’s information is maintained by the weight parameters of neural networks, which requires expensive online fine-tuning to be adaptable, our tracker runs completely feed-forward and adapts to the target’s appearance changes by updating the external memory. Moreover, unlike other tracking methods where the model capacity is fixed after offline training – the capacity of our tracker can be easily enlarged as the memory requirements of a task increase, which is favorable for memorizing long-term object information. Extensive experiments on OTB and VOT demonstrates that our tracker MemTrack performs favorably against state-of-the-art tracking methods while retaining real-time speed of 50 fps."
]
} |
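The record above describes combining the initial template with memory read-outs through a residual gate, and subtracting responses produced by "negative" (distractor) templates. The toy PyTorch sketch below loosely mirrors those two ideas only; the tensor sizes, the channel-wise gate, and the 0.5 weighting of the negative response are illustrative assumptions, not values from the paper:

```python
import torch
import torch.nn.functional as F

def xcorr(search_feat, template):
    """Cross-correlate a template (C, h, w) over a search feature map (1, C, H, W)."""
    return F.conv2d(search_feat, template.unsqueeze(0))

def fuse_template(initial, retrieved, gate):
    """Gated residual combination: keep the initial template, add a gated residual."""
    return initial + gate * retrieved

C, H, W = 8, 22, 22
search = torch.randn(1, C, H, W)
initial = torch.randn(C, 6, 6)
retrieved = torch.randn(C, 6, 6)            # stands in for the memory read-out
neg_template = torch.randn(C, 6, 6)         # stands in for a stored distractor template
gate = torch.sigmoid(torch.randn(C, 1, 1))  # channel-wise residual gate

pos = fuse_template(initial, retrieved, gate)
response = xcorr(search, pos) - 0.5 * xcorr(search, neg_template)
print(response.shape)  # (1, 1, 17, 17) response map
```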
1907.07613 | 2960281739 | Template-matching methods for visual tracking have gained popularity recently due to their good performance and fast speed. However, they lack effective ways to adapt to changes in the target object's appearance, making their tracking accuracy still far from state-of-the-art. In this paper, we propose a dynamic memory network to adapt the template to the target's appearance variations during tracking. The reading and writing process of the external memory is controlled by an LSTM network with the search feature map as input. A spatial attention mechanism is applied to concentrate the LSTM input on the potential target as the location of the target is at first unknown. To prevent aggressive model adaptivity, we apply gated residual template learning to control the amount of retrieved memory that is used to combine with the initial template. In order to alleviate the drift problem, we also design a "negative" memory unit that stores templates for distractors, which are used to cancel out wrong responses from the object template. To further boost the tracking performance, an auxiliary classification loss is added after the feature extractor part. Unlike tracking-by-detection methods where the object's information is maintained by the weight parameters of neural networks, which requires expensive online fine-tuning to be adaptable, our tracker runs completely feed-forward and adapts to the target's appearance changes by updating the external memory. Moreover, the capacity of our model is not determined by the network size as with other trackers --- the capacity can be easily enlarged as the memory requirements of a task increase, which is favorable for memorizing long-term object information. Extensive experiments on the OTB and VOT datasets demonstrate that our trackers perform favorably against state-of-the-art tracking methods while retaining real-time speed. | Tracking-by-detection treats object tracking as a detection problem within an ROI image, where an online learned classifier is used to distinguish the target from the background. The difficulty of updating the classifier to adapt to appearance variations is that the bounding box predicted on each frame may not be accurate, which produces degraded training samples and thus gradually causes the tracker to drift. Numerous algorithms have been designed to mitigate the sample ambiguity caused by inaccurate predicted bounding boxes. @cite_50 formulates the online model learning process in a semi-supervised fashion by combining a given prior and the trained classifier. @cite_23 proposes a multiple instance learning scheme to solve the problem of inaccurate examples for online training. Instead of only focusing on facilitating the training process of the tracker, @cite_80 decomposes the tracking task into three parts---tracking, learning and detection, where an optical flow tracker is used for frame-to-frame tracking and an online trained detector is adopted to re-detect the target when drifting occurs. | {
"cite_N": [
"@cite_80",
"@cite_23",
"@cite_50"
],
"mid": [
"2009243364",
"2480631127",
"2051832123"
],
"abstract": [
"Adaptive tracking-by-detection methods have been widely studied with promising results. These methods first train a classifier in an online manner. Then, a sliding window is used to extract some samples from the local regions surrounding the former object location at the new frame. The classifier is then applied to these samples where the location of sample with maximum classifier score is the new object location. However, such classifier may be inaccurate when the training samples are imprecise which causes drift. Multiple instance learning (MIL) method is recently introduced into the tracking task, which can alleviate drift to some extent. However, the MIL tracker may detect the positive sample that is less important because it does not discriminatively consider the sample importance in its learning procedure. In this paper, we present a novel online weighted MIL (WMIL) tracker. The WMIL tracker integrates the sample importance into an efficient online learning procedure by assuming the most important sample (i.e., the tracking result in current frame) is known when training the classifier. A new bag probability function combining the weighted instance probability is proposed via which the sample importance is considered. Then, an efficient online approach is proposed to approximately maximize the bag likelihood function, leading to a more robust and much faster tracker. Experimental results on various benchmark video sequences demonstrate the superior performance of our algorithm to state-of-the-art tracking algorithms.",
"Tracking by detection based object tracking methods encounter numerous complications including object appearance changes, size and shape deformations, partial and full occlusions, which make online adaptation of classifiers and object models a substantial challenge. In this paper, we employ an object proposal network that generates a small yet refined set of bounding box candidates to mitigate the this object model refitting problem by concentrating on hard negatives when we update the classifier. This helps improving the discriminative power as hard negatives are likely to be due to background and other distractions. Another intuition is that, in each frame, applying the classifier only on the refined set of object-like candidates would be sufficient to eliminate most of the false positives. Incorporating an object proposal makes the tracker robust against shape deformations since they are handled naturally by the proposal stage. We demonstrate evaluations on the PETS 2016 dataset and compare with the state-of-theart trackers. Our method provides the superior results.",
"Most tracking-by-detection algorithms train discriminative classifiers to separate target objects from their surrounding background. In this setting, noisy samples are likely to be included when they are not properly sampled, thereby causing visual drift. The multiple instance learning (MIL) paradigm has been recently applied to alleviate this problem. However, important prior information of instance labels and the most correct positive instance (i.e., the tracking result in the current frame) can be exploited using a novel formulation much simpler than an MIL approach. In this paper, we show that integrating such prior information into a supervised learning algorithm can handle visual drift more effectively and efficiently than the existing MIL tracker. We present an online discriminative feature selection algorithm that optimizes the objective function in the steepest ascent direction with respect to the positive samples while in the steepest descent direction with respect to the negative ones. Therefore, the trained classifier directly couples its score with the importance of samples, leading to a more robust and efficient tracker. Numerous experimental evaluations with state-of-the-art algorithms on challenging sequences demonstrate the merits of the proposed algorithm."
]
} |
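To make the tracking-by-detection loop in the record above concrete, here is a deliberately tiny, self-contained NumPy toy: candidate boxes around the previous location are scored by a linear model, the best-scoring box becomes the new target, and the model is then nudged online. Real trackers use learned CNN or hand-crafted features and far more careful update rules; everything below is a simplified assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(frame, box):
    """Toy feature: mean and std of intensities inside the box (real trackers use CNN/HOG features)."""
    x, y, w, h = box
    patch = frame[y:y + h, x:x + w]
    return np.array([patch.mean(), patch.std()])

def track_frame(frame, prev_box, weights, step=4):
    """Score candidate boxes around the previous location and pick the best one."""
    x, y, w, h = prev_box
    candidates = [(x + dx, y + dy, w, h)
                  for dx in (-step, 0, step) for dy in (-step, 0, step)]
    scores = [weights @ features(frame, c) for c in candidates]
    best = candidates[int(np.argmax(scores))]
    # naive online update: move the weights toward the chosen positive sample
    weights += 0.01 * features(frame, best)
    return best, weights

frame = rng.random((120, 160))
box, w_vec = (60, 40, 24, 24), np.zeros(2)
box, w_vec = track_frame(frame, box, w_vec)
print(box)
```

The drift problem discussed above arises exactly here: if `best` is slightly off-target, the update step reinforces the wrong appearance model on every subsequent frame.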
1907.07613 | 2960281739 | Template-matching methods for visual tracking have gained popularity recently due to their good performance and fast speed. However, they lack effective ways to adapt to changes in the target object's appearance, making their tracking accuracy still far from state-of-the-art. In this paper, we propose a dynamic memory network to adapt the template to the target's appearance variations during tracking. The reading and writing process of the external memory is controlled by an LSTM network with the search feature map as input. A spatial attention mechanism is applied to concentrate the LSTM input on the potential target as the location of the target is at first unknown. To prevent aggressive model adaptivity, we apply gated residual template learning to control the amount of retrieved memory that is used to combine with the initial template. In order to alleviate the drift problem, we also design a "negative" memory unit that stores templates for distractors, which are used to cancel out wrong responses from the object template. To further boost the tracking performance, an auxiliary classification loss is added after the feature extractor part. Unlike tracking-by-detection methods where the object's information is maintained by the weight parameters of neural networks, which requires expensive online fine-tuning to be adaptable, our tracker runs completely feed-forward and adapts to the target's appearance changes by updating the external memory. Moreover, the capacity of our model is not determined by the network size as with other trackers --- the capacity can be easily enlarged as the memory requirements of a task increase, which is favorable for memorizing long-term object information. Extensive experiments on the OTB and VOT datasets demonstrate that our trackers perform favorably against state-of-the-art tracking methods while retaining real-time speed. | With the widespread use of CNNs in the computer vision community, many methods @cite_1 have applied CNNs as the classifier to localize the target. @cite_49 uses two fully convolutional neural networks to estimate the target's bounding box, including a GNet that captures category information and an SNet that classifies the target from the background. @cite_29 presents a multi-domain learning framework to learn the shared representation of objects from different sequences. Motivated by Dropout @cite_12 , BranchOut @cite_40 adopts multiple branches of fully connected layers, from which a random subset is selected for training, which regularizes the neural networks to avoid overfitting. Unlike these tracking-by-detection algorithms, which need costly stochastic gradient descent (SGD) updating, our method runs completely feed-forward and adapts to the object's appearance variations through a memory writing process, thus achieving real-time performance. | {
"cite_N": [
"@cite_29",
"@cite_1",
"@cite_40",
"@cite_49",
"@cite_12"
],
"mid": [
"2410641892",
"2756815061",
"2248723555",
"2750020389",
"2963873961"
],
"abstract": [
"Convolutional neural networks (CNNs) have shown great performance as general feature representations for object recognition applications. However, for multi-label images that contain multiple objects from different categories, scales and locations, global CNN features are not optimal. In this paper, we incorporate local information to enhance the feature discriminative power. In particular, we first extract object proposals from each image. With each image treated as a bag and object proposals extracted from it treated as instances, we transform the multi-label recognition problem into a multi-class multi-instance learning problem. Then, in addition to extracting the typical CNN feature representation from each proposal, we propose to make use of ground-truth bounding box annotations (strong labels) to add another level of local information by using nearest-neighbor relationships of local regions to form a multi-view pipeline. The proposed multi-view multiinstance framework utilizes both weak and strong labels effectively, and more importantly it has the generalization ability to even boost the performance of unseen categories by partial strong labels from other categories. Our framework is extensively compared with state-of-the-art handcrafted feature based methods and CNN based methods on two multi-label benchmark datasets. The experimental results validate the discriminative power and the generalization ability of the proposed framework. With strong labels, our framework is able to achieve state-of-the-art results in both datasets.",
"Convolutional Neural Network (CNN) image classifiers are traditionally designed to have sequential convolutional layers with a single output layer. This is based on the assumption that all target classes should be treated equally and exclusively. However, some classes can be more difficult to distinguish than others, and classes may be organized in a hierarchy of categories. At the same time, a CNN is designed to learn internal representations that abstract from the input data based on its hierarchical layered structure. So it is natural to ask if an inverse of this idea can be applied to learn a model that can predict over a classification hierarchy using multiple output layers in decreasing order of class abstraction. In this paper, we introduce a variant of the traditional CNN model named the Branch Convolutional Neural Network (B-CNN). A B-CNN model outputs multiple predictions ordered from coarse to fine along the concatenated convolutional layers corresponding to the hierarchical structure of the target classes, which can be regarded as a form of prior knowledge on the output. To learn with B-CNNs a novel training strategy, named the Branch Training strategy (BT-strategy), is introduced which balances the strictness of the prior with the freedom to adjust parameters on the output layers to minimize the loss. In this way we show that CNN based models can be forced to learn successively coarse to fine concepts in the internal layers at the output stage, and that hierarchical prior knowledge can be adopted to boost CNN models' classification performance. Our models are evaluated to show that the B-CNN extensions improve over the corresponding baseline CNN on the benchmark datasets MNIST, CIFAR-10 and CIFAR-100.",
"Deep learning methods such as convolutional neural networks (CNNs) can deliver highly accurate classification results when provided with large enough data sets and respective labels. However, using CNNs along with limited labeled data can be problematic, as this leads to extensive overfitting. In this letter, we propose a novel method by considering a pretrained CNN designed for tackling an entirely different classification problem, namely, the ImageNet challenge, and exploit it to extract an initial set of representations. The derived representations are then transferred into a supervised CNN classifier, along with their class labels, effectively training the system. Through this two-stage framework, we successfully deal with the limited-data problem in an end-to-end processing scheme. Comparative results over the UC Merced Land Use benchmark prove that our method significantly outperforms the previously best stated results, improving the overall accuracy from 83.1 up to 92.4 . Apart from statistical improvements, our method introduces a novel feature fusion algorithm that effectively tackles the large data dimensionality by using a simple and computationally efficient approach.",
"Recently using convolutional neural networks (CNNs) has gained popularity in visual tracking, due to its robust feature representation of images. Recent methods perform online tracking by fine-tuning a pre-trained CNN model to the specific target object using stochastic gradient descent (SGD) back-propagation, which is usually time-consuming. In this paper, we propose a recurrent filter generation methods for visual tracking. We directly feed the target's image patch to a recurrent neural network (RNN) to estimate an object-specific filter for tracking. As the video sequence is a spatiotemporal data, we extend the matrix multiplications of the fully-connected layers of the RNN to a convolution operation on feature maps, which preserves the target's spatial structure and also is memory-efficient. The tracked object in the subsequent frames will be fed into the RNN to adapt the generated filters to appearance variations of the target. Note that once the off-line training process of our network is finished, there is no need to fine-tune the network for specific objects, which makes our approach more efficient than methods that use iterative fine-tuning to online learn the target. Extensive experiments conducted on widely used benchmarks, OTB and VOT, demonstrate encouraging results compared to other recent methods.",
"Recently using convolutional neural networks (CNNs) has gained popularity in visual tracking, due to its robust feature representation of images. Recent methods perform online tracking by fine-tuning a pre-trained CNN model to the specific target object using stochastic gradient descent (SGD) back-propagation, which is usually time-consuming. In this paper, we propose a recurrent filter generation methods for visual tracking. We directly feed the target's image patch to a recurrent neural network (RNN) to estimate an object-specific filter for tracking. As the video sequence is a spatiotemporal data, we extend the matrix multiplications of the fully-connected layers of the RNN to a convolution operation on feature maps, which preserves the target's spatial structure and also is memory-efficient. The tracked object in the subsequent frames will be fed into the RNN to adapt the generated filters to appearance variations of the target. Note that once the off-line training process of our network is finished, there is no need to fine-tune the network for specific objects, which makes our approach more efficient than methods that use iterative fine-tuning to online learn the target. Extensive experiments conducted on widely used benchmarks, OTB and VOT, demonstrate encouraging results compared to other recent methods."
]
} |
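BranchOut, mentioned in the record above, trains only a random subset of fully connected branches at each update, in the spirit of Dropout. A rough PyTorch sketch of that selection scheme (branch count, layer sizes, and averaging all branches at test time are assumptions, not the published implementation) is:

```python
import random
import torch
import torch.nn as nn

class BranchOutHead(nn.Module):
    """Several small classifier branches; a random subset is active while training."""
    def __init__(self, in_dim=256, num_branches=5):
        super().__init__()
        self.branches = nn.ModuleList(nn.Linear(in_dim, 2) for _ in range(num_branches))

    def forward(self, feat):
        if self.training:
            k = random.randint(1, len(self.branches))       # how many branches to use
            active = random.sample(list(self.branches), k)  # which branches to use
        else:
            active = list(self.branches)                    # use all branches at test time
        return torch.stack([b(feat) for b in active]).mean(0)

head = BranchOutHead()
out = head(torch.randn(4, 256))
print(out.shape)  # (4, 2) target-vs-background scores
```

Randomly dropping whole branches plays the same regularizing role as Dropout's random unit masking, which is why the survey above groups the two together.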
1907.07613 | 2960281739 | Template-matching methods for visual tracking have gained popularity recently due to their good performance and fast speed. However, they lack effective ways to adapt to changes in the target object's appearance, making their tracking accuracy still far from state-of-the-art. In this paper, we propose a dynamic memory network to adapt the template to the target's appearance variations during tracking. The reading and writing process of the external memory is controlled by an LSTM network with the search feature map as input. A spatial attention mechanism is applied to concentrate the LSTM input on the potential target as the location of the target is at first unknown. To prevent aggressive model adaptivity, we apply gated residual template learning to control the amount of retrieved memory that is used to combine with the initial template. In order to alleviate the drift problem, we also design a "negative" memory unit that stores templates for distractors, which are used to cancel out wrong responses from the object template. To further boost the tracking performance, an auxiliary classification loss is added after the feature extractor part. Unlike tracking-by-detection methods where the object's information is maintained by the weight parameters of neural networks, which requires expensive online fine-tuning to be adaptable, our tracker runs completely feed-forward and adapts to the target's appearance changes by updating the external memory. Moreover, the capacity of our model is not determined by the network size as with other trackers --- the capacity can be easily enlarged as the memory requirements of a task increase, which is favorable for memorizing long-term object information. Extensive experiments on the OTB and VOT datasets demonstrate that our trackers perform favorably against state-of-the-art tracking methods while retaining real-time speed. | Matching-based methods have recently gained popularity due to their fast speed and promising performance. The most notable is the fully convolutional Siamese network (SiamFC) @cite_58 . Although it only uses the first frame as the template, SiamFC achieves competitive results and fast speed. The key deficiency of SiamFC is that it lacks an effective model for online updating. To address this, @cite_30 updates the model using linear interpolation of new templates with a small learning rate, but only sees modest improvements in accuracy. RFL (Recurrent Filter Learning) @cite_18 adopts a convolutional LSTM for model updating, where the forget and input gates automatically control the linear combination of the historical target information (i.e., the memory states of the LSTM) and the object's current template. Guo et al. @cite_10 propose a dynamic Siamese network with two general transformations for target appearance variation and background suppression. He et al. @cite_64 design two branches of Siamese networks with a channel-wise attention mechanism, aiming to improve the robustness and discrimination ability of the matching network. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_64",
"@cite_58",
"@cite_10"
],
"mid": [
"2776035257",
"2787941778",
"2963854930",
"2526782364",
"2963308316"
],
"abstract": [
"How to effectively learn temporal variation of target appearance, to exclude the interference of cluttered background, while maintaining real-time response, is an essential problem of visual object tracking. Recently, Siamese networks have shown great potentials of matching based trackers in achieving balanced accuracy and beyond realtime speed. However, they still have a big gap to classification & updating based trackers in tolerating the temporal changes of objects and imaging conditions. In this paper, we propose dynamic Siamese network, via a fast transformation learning model that enables effective online learning of target appearance variation and background suppression from previous frames. We then present elementwise multi-layer fusion to adaptively integrate the network outputs using multi-level deep features. Unlike state-of-theart trackers, our approach allows the usage of any feasible generally- or particularly-trained features, such as SiamFC and VGG. More importantly, the proposed dynamic Siamese network can be jointly trained as a whole directly on the labeled video sequences, thus can take full advantage of the rich spatial temporal information of moving objects. As a result, our approach achieves state-of-the-art performance on OTB-2013 and VOT-2015 benchmarks, while exhibits superiorly balanced accuracy and real-time response over state-of-the-art competitors.",
"Observing that Semantic features learned in an image classification task and Appearance features learned in a similarity matching task complement each other, we build a twofold Siamese network, named SA-Siam, for real-time object tracking. SA-Siam is composed of a semantic branch and an appearance branch. Each branch is a similarity-learning Siamese network. An important design choice in SA-Siam is to separately train the two branches to keep the heterogeneity of the two types of features. In addition, we propose a channel attention mechanism for the semantic branch. Channel-wise weights are computed according to the channel activations around the target position. While the inherited architecture from SiamFC SiamFC allows our tracker to operate beyond real-time, the twofold design and the attention mechanism significantly improve the tracking performance. The proposed SA-Siam outperforms all other real-time trackers by a large margin on OTB-2013 50 100 benchmarks.",
"Observing that Semantic features learned in an image classification task and Appearance features learned in a similarity matching task complement each other, we build a twofold Siamese network, named SA-Siam, for real-time object tracking. SA-Siam is composed of a semantic branch and an appearance branch. Each branch is a similaritylearning Siamese network. An important design choice in SA-Siam is to separately train the two branches to keep the heterogeneity of the two types of features. In addition, we propose a channel attention mechanism for the semantic branch. Channel-wise weights are computed according to the channel activations around the target position. While the inherited architecture from SiamFC [3] allows our tracker to operate beyond real-time, the twofold design and the attention mechanism significantly improve the tracking performance. The proposed SA-Siam outperforms all other real-time trackers by a large margin on OTB-2013 50 100 benchmarks.",
"Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method.",
"In this paper, we present an improved feedforward sequential memory networks (FSMN) architecture, namely Deep-FSMN (DFSMN), by introducing skip connections between memory blocks in adjacent layers. These skip connections enable the information flow across different layers and thus alleviate the gradient vanishing problem when building very deep structure. As a result, DFSMN significantly benefits from these skip connections and deep structure. We have compared the performance of DFSMN to BLSTM both with and without lower frame rate (LFR) on several large speech recognition tasks, including English and Mandarin. Experimental results shown that DFSMN can consistently outperform BLSTM with dramatic gain, especially trained with LFR using CD-Phone as modeling units. In the 20000 hours Fisher (FSH) task, the proposed DFSMN can achieve a word error rate of 9.4 by purely using the cross-entropy criterion and decoding with a 3-gram language model, which achieves a 1.5 absolute improvement compared to the BLSTM. In a 20000 hours Mandarin recognition task, the LFR trained DFSMN can achieve more than 20 relative improvement compared to the LFR trained BLSTM. Moreover, we can easily design the lookahead filter order of the memory blocks in DFSMN to control the latency for real-time applications."
]
} |
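Two mechanisms recur in the Siamese trackers surveyed above: cross-correlating a template over the search-region features (SiamFC-style matching) and updating the template by linear interpolation with a small learning rate. A compact PyTorch sketch of both, with feature sizes and learning rate chosen arbitrarily for illustration, is:

```python
import torch
import torch.nn.functional as F

def match(search_feat, template_feat):
    """SiamFC-style similarity: correlate the template over the search feature map."""
    return F.conv2d(search_feat, template_feat.unsqueeze(0))

def update_template(template, new_template, lr=0.01):
    """Linear-interpolation template update used by several Siamese trackers."""
    return (1.0 - lr) * template + lr * new_template

search = torch.randn(1, 16, 24, 24)   # features of the search region
template = torch.randn(16, 6, 6)      # features of the target template
response = match(search, template)    # (1, 1, 19, 19) score map
template = update_template(template, torch.randn(16, 6, 6))
print(response.shape, template.shape)
```

The dynamic memory network proposed in this paper replaces the fixed interpolation rate with a learned, gated read/write of an external memory, as described in the earlier records.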
1907.07613 | 2960281739 | Template-matching methods for visual tracking have gained popularity recently due to their good performance and fast speed. However, they lack effective ways to adapt to changes in the target object's appearance, making their tracking accuracy still far from state-of-the-art. In this paper, we propose a dynamic memory network to adapt the template to the target's appearance variations during tracking. The reading and writing process of the external memory is controlled by an LSTM network with the search feature map as input. A spatial attention mechanism is applied to concentrate the LSTM input on the potential target as the location of the target is at first unknown. To prevent aggressive model adaptivity, we apply gated residual template learning to control the amount of retrieved memory that is used to combine with the initial template. In order to alleviate the drift problem, we also design a "negative" memory unit that stores templates for distractors, which are used to cancel out wrong responses from the object template. To further boost the tracking performance, an auxiliary classification loss is added after the feature extractor part. Unlike tracking-by-detection methods where the object's information is maintained by the weight parameters of neural networks, which requires expensive online fine-tuning to be adaptable, our tracker runs completely feed-forward and adapts to the target's appearance changes by updating the external memory. Moreover, the capacity of our model is not determined by the network size as with other trackers --- the capacity can be easily enlarged as the memory requirements of a task increase, which is favorable for memorizing long-term object information. Extensive experiments on the OTB and VOT datasets demonstrate that our trackers perform favorably against state-of-the-art tracking methods while retaining real-time speed. | To further improve the speed of SiamFC, @cite_55 reduces the feature computation cost for easy frames by using deep reinforcement learning to train policies for early stopping the feed-forward calculations of the CNN when the response confidence is high enough. SINT @cite_43 also uses Siamese networks for visual tracking and has higher accuracy, but runs much slower than SiamFC (2 fps vs 86 fps) due to the use of a deeper CNN (VGG16) for feature extraction, and optical flow for its candidate sampling strategy. @cite_15 proposes a dual deep network by exploiting hierarchical features of CNN layers for object tracking. Unlike other template-matching models that use sliding windows or random sampling to generate candidate image patches for testing, GOTURN @cite_65 directly regresses the coordinates of the target's bounding box by comparing the previous and current image patches. Despite its fast speed and its advantage in handling scale and aspect ratio changes, its tracking accuracy is much lower than that of other state-of-the-art trackers. | {
"cite_N": [
"@cite_43",
"@cite_55",
"@cite_65",
"@cite_15"
],
"mid": [
"2776035257",
"2799058067",
"2886910176",
"2526782364"
],
"abstract": [
"How to effectively learn temporal variation of target appearance, to exclude the interference of cluttered background, while maintaining real-time response, is an essential problem of visual object tracking. Recently, Siamese networks have shown great potentials of matching based trackers in achieving balanced accuracy and beyond realtime speed. However, they still have a big gap to classification & updating based trackers in tolerating the temporal changes of objects and imaging conditions. In this paper, we propose dynamic Siamese network, via a fast transformation learning model that enables effective online learning of target appearance variation and background suppression from previous frames. We then present elementwise multi-layer fusion to adaptively integrate the network outputs using multi-level deep features. Unlike state-of-theart trackers, our approach allows the usage of any feasible generally- or particularly-trained features, such as SiamFC and VGG. More importantly, the proposed dynamic Siamese network can be jointly trained as a whole directly on the labeled video sequences, thus can take full advantage of the rich spatial temporal information of moving objects. As a result, our approach achieves state-of-the-art performance on OTB-2013 and VOT-2015 benchmarks, while exhibits superiorly balanced accuracy and real-time response over state-of-the-art competitors.",
"Visual object tracking has been a fundamental topic in recent years and many deep learning based trackers have achieved state-of-the-art performance on multiple benchmarks. However, most of these trackers can hardly get top performance with real-time speed. In this paper, we propose the Siamese region proposal network (Siamese-RPN) which is end-to-end trained off-line with large-scale image pairs. Specifically, it consists of Siamese subnetwork for feature extraction and region proposal subnetwork including the classification branch and regression branch. In the inference phase, the proposed framework is formulated as a local one-shot detection task. We can pre-compute the template branch of the Siamese subnetwork and formulate the correlation layers as trivial convolution layers to perform online tracking. Benefit from the proposal refinement, traditional multi-scale test and online fine-tuning can be discarded. The Siamese-RPN runs at 160 FPS while achieving leading performance in VOT2015, VOT2016 and VOT2017 real-time challenges.",
"Recently, Siamese networks have drawn great attention in visual tracking community because of their balanced accuracy and speed. However, features used in most Siamese tracking approaches can only discriminate foreground from the non-semantic backgrounds. The semantic backgrounds are always considered as distractors, which hinders the robustness of Siamese trackers. In this paper, we focus on learning distractor-aware Siamese networks for accurate and long-term tracking. To this end, features used in traditional Siamese trackers are analyzed at first. We observe that the imbalanced distribution of training data makes the learned features less discriminative. During the off-line training phase, an effective sampling strategy is introduced to control this distribution and make the model focus on the semantic distractors. During inference, a novel distractor-aware module is designed to perform incremental learning, which can effectively transfer the general embedding to the current video domain. In addition, we extend the proposed approach for long-term tracking by introducing a simple yet effective local-to-global search region strategy. Extensive experiments on benchmarks show that our approach significantly outperforms the state-of-the-arts, yielding 9.6 relative gain in VOT2016 dataset and 35.9 relative gain in UAV20L dataset. The proposed tracker can perform at 160 FPS on short-term benchmarks and 110 FPS on long-term benchmarks.",
"Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method."
]
} |
1907.07613 | 2960281739 | Template-matching methods for visual tracking have gained popularity recently due to their good performance and fast speed. However, they lack effective ways to adapt to changes in the target object's appearance, making their tracking accuracy still far from state-of-the-art. In this paper, we propose a dynamic memory network to adapt the template to the target's appearance variations during tracking. The reading and writing process of the external memory is controlled by an LSTM network with the search feature map as input. A spatial attention mechanism is applied to concentrate the LSTM input on the potential target as the location of the target is at first unknown. To prevent aggressive model adaptivity, we apply gated residual template learning to control the amount of retrieved memory that is used to combine with the initial template. In order to alleviate the drift problem, we also design a "negative" memory unit that stores templates for distractors, which are used to cancel out wrong responses from the object template. To further boost the tracking performance, an auxiliary classification loss is added after the feature extractor part. Unlike tracking-by-detection methods where the object's information is maintained by the weight parameters of neural networks, which requires expensive online fine-tuning to be adaptable, our tracker runs completely feed-forward and adapts to the target's appearance changes by updating the external memory. Moreover, the capacity of our model is not determined by the network size as with other trackers --- the capacity can be easily enlarged as the memory requirements of a task increase, which is favorable for memorizing long-term object information. Extensive experiments on the OTB and VOT datasets demonstrate that our trackers perform favorably against state-of-the-art tracking methods while retaining real-time speed. | Multi-task learning has been successfully used in many applications of machine learning, ranging from natural language processing @cite_37 and speech recognition @cite_81 to computer vision @cite_39 . @cite_70 estimates the street direction in an autonomous driving car by predicting various characteristics of the road, which serves as an auxiliary task. @cite_44 introduces auxiliary tasks of estimating head pose and facial attributes to boost the performance of facial landmark detection, while @cite_7 boosted the performance of a human pose estimation network by adding human joint detectors as auxiliary tasks. Recent works combining object detection and semantic segmentation @cite_45 @cite_77 , as well as image depth estimation and semantic segmentation @cite_83 @cite_53 , also demonstrate the effectiveness of multi-task learning on improving the generalization ability of neural networks. Observing that the CNN learned for object similarity matching lacks the generalization ability of invariance to appearance variations, we propose to add an auxiliary task, object classification, to regularize the CNN so that it learns object semantics. | {
"cite_N": [
"@cite_37",
"@cite_7",
"@cite_70",
"@cite_53",
"@cite_39",
"@cite_44",
"@cite_77",
"@cite_81",
"@cite_45",
"@cite_83"
],
"mid": [
"2526782364",
"1907729166",
"2963749571",
"2588595876",
"2743157634",
"2410641892",
"1665222191",
"2584117724",
"2785325870",
"2796292145"
],
"abstract": [
"Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method.",
"This paper proposes a joint multi-task learning algorithm to better predict attributes in images using deep convolutional neural networks (CNN). We consider learning binary semantic attributes through a multi-task CNN model, where each CNN will predict one binary attribute. The multi-task learning allows CNN models to simultaneously share visual knowledge among different attribute categories. Each CNN will generate attribute-specific feature representations, and then we apply multi-task learning on the features to predict their attributes. In our multi-task framework, we propose a method to decompose the overall model’s parameters into a latent task matrix and combination matrix. Furthermore, under-sampled classifiers can leverage shared statistics from other classifiers to improve their performance. Natural grouping of attributes is applied such that attributes in the same group are encouraged to share more knowledge. Meanwhile, attributes in different groups will generally compete with each other, and consequently share less knowledge. We show the effectiveness of our method on two popular attribute datasets.",
"Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc.). Instead of combining two approaches with multi-task learning, we argue to organize and reason the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: “different instances but a similar viewpoint and category” and “different viewpoints of the same instance”. By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2 mAP on PASCAL VOC 2007 using Fast R-CNN (compare to 67.3 with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5 ) to the ImageNet-supervised counterpart (24.4 ) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task.",
"This paper explores multi-task learning (MTL) for face recognition. First, we propose a multi-task convolutional neural network (CNN) for face recognition, where identity classification is the main task and pose, illumination, and expression (PIE) estimations are the side tasks. Second, we develop a dynamic-weighting scheme to automatically assign the loss weights to each side task, which solves the crucial problem of balancing between different tasks in MTL. Third, we propose a pose-directed multi-task CNN by grouping different poses to learn pose-specific identity features, simultaneously across all poses in a joint framework. Last but not least, we propose an energy-based weight analysis method to explore how CNN-based MTL works. We observe that the side tasks serve as regularizations to disentangle the PIE variations from the learnt identity features. Extensive experiments on the entire multi-PIE dataset demonstrate the effectiveness of the proposed approach. To the best of our knowledge, this is the first work using all data in multi-PIE for face recognition. Our approach is also applicable to in-the-wild data sets for pose-invariant face recognition and achieves comparable or better performance than state of the art on LFW, CFP, and IJB-A datasets.",
"Learning visual representations with self-supervised learning has become popular in computer vision. The idea is to design auxiliary tasks where labels are free to obtain. Most of these tasks end up providing data to learn specific kinds of invariance useful for recognition. In this paper, we propose to exploit different self-supervised approaches to learn representations invariant to (i) inter-instance variations (two objects in the same class should have similar features) and (ii) intra-instance variations (viewpoint, pose, deformations, illumination, etc). Instead of combining two approaches with multi-task learning, we argue to organize and reason the data with multiple variations. Specifically, we propose to generate a graph with millions of objects mined from hundreds of thousands of videos. The objects are connected by two types of edges which correspond to two types of invariance: \"different instances but a similar viewpoint and category\" and \"different viewpoints of the same instance\". By applying simple transitivity on the graph with these edges, we can obtain pairs of images exhibiting richer visual invariance. We use this data to train a Triplet-Siamese network with VGG16 as the base architecture and apply the learned representations to different recognition tasks. For object detection, we achieve 63.2 mAP on PASCAL VOC 2007 using Fast R-CNN (compare to 67.3 with ImageNet pre-training). For the challenging COCO dataset, our method is surprisingly close (23.5 ) to the ImageNet-supervised counterpart (24.4 ) using the Faster R-CNN framework. We also show that our network can perform significantly better than the ImageNet network in the surface normal estimation task.",
"Convolutional neural networks (CNNs) have shown great performance as general feature representations for object recognition applications. However, for multi-label images that contain multiple objects from different categories, scales and locations, global CNN features are not optimal. In this paper, we incorporate local information to enhance the feature discriminative power. In particular, we first extract object proposals from each image. With each image treated as a bag and object proposals extracted from it treated as instances, we transform the multi-label recognition problem into a multi-class multi-instance learning problem. Then, in addition to extracting the typical CNN feature representation from each proposal, we propose to make use of ground-truth bounding box annotations (strong labels) to add another level of local information by using nearest-neighbor relationships of local regions to form a multi-view pipeline. The proposed multi-view multiinstance framework utilizes both weak and strong labels effectively, and more importantly it has the generalization ability to even boost the performance of unseen categories by partial strong labels from other categories. Our framework is extensively compared with state-of-the-art handcrafted feature based methods and CNN based methods on two multi-label benchmark datasets. The experimental results validate the discriminative power and the generalization ability of the proposed framework. With strong labels, our framework is able to achieve state-of-the-art results in both datasets.",
"The problem of learning several related tasks has recently been addressed with success by the so-called multi-task formulation, that discovers underlying common structure between tasks. Metric Learning for Kernel Regression (MLKR) aims at finding the optimal linear subspace for reducing the squared error of a Nadaraya-Watson estimator. In this paper, we propose two Multi-Task extensions of MLKR. The first one is a direct application of multi-task formulation to MLKR algorithm and the second one, the so-called Hard-MT-MLKR, lets us learn same-complexity predictors with fewer parameters, reducing overfitting issues. We apply the proposed method to Action Unit (AU) intensity prediction as a response to the Facial Expression Recognition and Analysis challenge (FERA'15). Our system improves the baseline results on the test set by 24 in terms of Intraclass Correlation Coefficient (ICC).",
"When analysing human activities using data mining or machine learning techniques, it can be useful to infer properties such as the gender or age of the people involved. This paper focuses on the sub-problem of gender recognition, which has been studied extensively in the literature, with two main problems remaining unsolved: how to improve the accuracy on real-world face images, and how to generalise the models to perform well on new datasets. We address these problems by collecting five million weakly labelled face images, and performing three different experiments, investigating: the performance difference between convolutional neural networks (CNNs) of differing depths and a support vector machine approach using local binary pattern features on the same training data, the effect of contextual information on classification accuracy, and the ability of convolutional neural networks and large amounts of training data to generalise to cross-database classification. We report record-breaking results on both the Labeled Faces in the Wild (LFW) dataset, achieving an accuracy of 98.90 , and the Images of Groups (GROUPS) dataset, achieving an accuracy of 91.34 for cross-database gender classification.",
"Over the last years, deep convolutional neural networks (ConvNets) have transformed the field of computer vision thanks to their unparalleled capacity to learn high level semantic image features. However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale. Therefore, unsupervised semantic feature learning, i.e., learning without requiring manual annotation effort, is of crucial importance in order to successfully harvest the vast amount of visual data that are available today. In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input. We demonstrate both qualitatively and quantitatively that this apparently simple task actually provides a very powerful supervisory signal for semantic feature learning. We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. Specifically, our results on those benchmarks demonstrate dramatic improvements w.r.t. prior state-of-the-art approaches in unsupervised representation learning and thus significantly close the gap with supervised feature learning. For instance, in PASCAL VOC 2007 detection task our unsupervised pre-trained AlexNet model achieves the state-of-the-art (among unsupervised methods) mAP of 54.4 that is only 2.4 points lower from the supervised case. We get similarly striking results when we transfer our unsupervised learned features on various other tasks, such as ImageNet classification, PASCAL classification, PASCAL segmentation, and CIFAR-10 classification. The code and models of our paper will be published on: this https URL .",
"We introduce a seemingly impossible task: given only an audio clip of someone speaking, decide which of two face images is the speaker. In this paper we study this, and a number of related cross-modal tasks, aimed at answering the question: how much can we infer from the voice about the face and vice versa? We study this task \"in the wild\", employing the datasets that are now publicly available for face recognition from static images (VGGFace) and speaker identification from audio (VoxCeleb). These provide training and testing scenarios for both static and dynamic testing of cross-modal matching. We make the following contributions: (i) we introduce CNN architectures for both binary and multi-way cross-modal face and audio matching, (ii) we compare dynamic testing (where video information is available, but the audio is not from the same video) with static testing (where only a single still image is available), and (iii) we use human testing as a baseline to calibrate the difficulty of the task. We show that a CNN can indeed be trained to solve this task in both the static and dynamic scenarios, and is even well above chance on 10-way classification of the face given the voice. The CNN matches human performance on easy examples (e.g. different gender across faces) but exceeds human performance on more challenging examples (e.g. faces with the same gender, age and nationality)."
]
} |
1907.07647 | 2960675232 | Particle Swarm Optimisation (PSO) is a powerful optimisation algorithm that can be used to locate global maxima in a search space. Recent interest in swarms of Micro Aerial Vehicles (MAVs) begs the question as to whether PSO can be used as a method to enable real robotic swarms to locate a target goal point. However, the original PSO algorithm does not take into account collisions between particles during search. In this paper we propose a novel algorithm called Force Field Particle Swarm Optimisation (FFPSO) that designates repellent force fields to particles such that these fields provide an additional velocity component into the original PSO equations. We compare the performance of FFPSO with PSO and show that it has the ability to reduce the number of particle collisions during search to 0 whilst also being able to locate a target of interest in a similar amount of time. The scalability of the algorithm is also demonstrated via a set of experiments that considers how the number of crashes and the time taken to find the goal varies according to swarm size. Finally, we demonstrate the algorithms applicability on a swarm of real MAVs. | We begin by reviewing the most related work in the area of PSO, Potential Field methods and Flocking. PSO itself is a vast field with applications in many different areas (see @cite_22 for details); our aim here is not to cover the entirety of this but only what is relevant to aerial and swarm robotics. We also review some relevant work in the area of Potential Field methods, which are almost identical in nature to the "force field" used in this work. However, we feel the alternative name is more appropriate in our work due to the 3-dimensional and finite nature of our fields acting around aerial robots. Finally, we review similar collision avoidance strategies employed in flocking algorithms. | {
"cite_N": [
"@cite_22"
],
"mid": [
"1859314164"
],
"abstract": [
"Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presented a comprehensive investigation of PSO. On one hand, we provided advances with PSO, including its modifications (including quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topology (as fully connected, von Neumann, ring, star, random, etc.), hybridization (with genetic algorithm, simulated annealing, Tabu search, artificial immune system, ant colony algorithm, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we offered a survey on applications of PSO to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey would be beneficial for the researchers studying PSO algorithms."
]
} |
1907.07647 | 2960675232 | Particle Swarm Optimisation (PSO) is a powerful optimisation algorithm that can be used to locate global maxima in a search space. Recent interest in swarms of Micro Aerial Vehicles (MAVs) begs the question as to whether PSO can be used as a method to enable real robotic swarms to locate a target goal point. However, the original PSO algorithm does not take into account collisions between particles during search. In this paper we propose a novel algorithm called Force Field Particle Swarm Optimisation (FFPSO) that designates repellent force fields to particles such that these fields provide an additional velocity component into the original PSO equations. We compare the performance of FFPSO with PSO and show that it has the ability to reduce the number of particle collisions during search to 0 whilst also being able to locate a target of interest in a similar amount of time. The scalability of the algorithm is also demonstrated via a set of experiments that considers how the number of crashes and the time taken to find the goal varies according to swarm size. Finally, we demonstrate the algorithms applicability on a swarm of real MAVs. | PSO has been applied to Unmanned Aerial Vehicles (UAVs) and MAVs in various ways already. Optimal route planning for MAVs is an optimisation problem that is tackled in @cite_19 @cite_1 by constructing complex fitness functions consisting of a number of different metrics that would affect the success of an MAV carrying out reconnaissance missions. These works modify the fitness function, whereas our work modifies the PSO equation directly. In the case of complicated fitness functions, the computational requirements of evaluating them for each individual at each time step could be far greater than our proposed method. In @cite_6 @cite_9 , PSO is used to tune the parameters of a PID controller for an AR.Drone by constructing a multi-objective fitness function that takes into account a number of performance metrics w.r.t. the PID controller. In @cite_16 , PSO is hybridised with a Genetic Algorithm (GA) in order to optimise formation reconfiguration in swarms of UAVs. A hybrid algorithm is proposed that combines the advantages of both optimisation methods and is shown to outperform PSO in a series of simulated experiments. This algorithm optimises the control inputs of the UAVs such that optimal swarm reconfiguration can be achieved in battle-like simulations. | {
"cite_N": [
"@cite_9",
"@cite_1",
"@cite_6",
"@cite_19",
"@cite_16"
],
"mid": [
"2023086460",
"2471940543",
"1970751891",
"2064646314",
"1859314164"
],
"abstract": [
"The initial state of an Unmanned Aerial Vehicle (UAV) system and the relative state of the system, the continuous inputs of each flight unit are piecewise linear by a Control Parameterization and Time Discretization (CPTD) method. The approximation piecewise linearization control inputs are used to substitute for the continuous inputs. In this way, the multi-UAV formation reconfiguration problem can be formulated as an optimal control problem with dynamical and algebraic constraints. With strict constraints and mutual interference, the multi-UAV formation reconfiguration in 3-D space is a complicated problem. The recent boom of bio-inspired algorithms has attracted many researchers to the field of applying such intelligent approaches to complicated optimization problems in multi-UAVs. In this paper, a Hybrid Particle Swarm Optimization and Genetic Algorithm (HPSOGA) is proposed to solve the multi-UAV formation reconfiguration problem, which is modeled as a parameter optimization problem. This new approach combines the advantages of Particle Swarm Optimization (PSO) and Genetic Algorithm (GA), which can find the time-optimal solutions simultaneously. The proposed HPSOGA will also be compared with basic PSO algorithm and the series of experimental results will show that our HPSOGA outperforms PSO in solving multi-UAV formation reconfiguration problem under complicated environments.",
"In this paper, a proposed particle swarm optimization called multi-objective particle swarm optimization (MOPSO) with an accelerated update methodology is employed to tune Proportional-Integral-Derivative (PID) controller for an AR.Drone quadrotor. The proposed approach is to modify the velocity formula of the general PSO systems in order for improving the searching efficiency and actual execution time. Three PID control parameters, i.e., the proportional gain Kp, integral gain K; and derivative gain Kd are required to form a parameter vector which is considered as a particle of PSO. To derive the optimal PID parameters for the Ar.Drone, the modified update method is employed to move the positions of all particles in the population. In the meanwhile, multi-objective functions defined for PID controller optimization problems are minimized. The results verify that the proposed MOPSO is able to perform appropriately in Ar.Drone control system.",
"Abstract This paper presents an optimized reconfigurable control design methodology by separating control commands distribution task from flight controller for different types of fault handling. The proposed strategy improves the flight control performance in normal and fault situations. The particle swarm optimization (PSO) based multi-input multi-output (MIMO) linear quadratic regulator (LQR) is used to produce virtual command signals. A modified weighted pseudo-inverse (WPI) based cascaded re- allocation technique is employed for effective implementation of commands to redundant control surfaces in a realistic nonlinear aircraft benchmark model. Control surface fault modelling is performed for the evaluation of optimized reconfiguration based modular flight control strategy. Simulation results show that acceptable fault tolerant control (FTC) performance can be achieved by using swarm intelligence based optimization technique for modular control design.",
"The main focus of this paper is to develop an optimization method for the automatic fighter tracking (AFT) problem. The AFT problem is similar to a general evader-pursuer maneuvering automation problem between the dynamic systems of two highly interactive objects. This paper proposes a particle swarm optimizer-based variable feedback gain controller (PSO-based VFGC) for dealing with AFT problems. The PSO-based VFGC is designed to obtain the control value of a pursuer through an error-feedback gain controller. Once conditions of system closed-loop stability have been satisfied, the optimal feedback gains can be obtained through PSO, and the actual control values can be derived from the obtained values. Simulation results confirm the capabilities of the proposed method by comparing the results against two other methods in the field: the weight matrix value defined Ricatti equation, and the linear matrix inequality (LMI) based linear quadratic regulator (LQR). The performance of the proposed method is superior to that of its alternatives.",
"Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presented a comprehensive investigation of PSO. On one hand, we provided advances with PSO, including its modifications (including quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topology (as fully connected, von Neumann, ring, star, random, etc.), hybridization (with genetic algorithm, simulated annealing, Tabu search, artificial immune system, ant colony algorithm, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementation (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we offered a survey on applications of PSO to the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey would be beneficial for the researchers studying PSO algorithms."
]
} |
1907.07647 | 2960675232 | Particle Swarm Optimisation (PSO) is a powerful optimisation algorithm that can be used to locate global maxima in a search space. Recent interest in swarms of Micro Aerial Vehicles (MAVs) begs the question as to whether PSO can be used as a method to enable real robotic swarms to locate a target goal point. However, the original PSO algorithm does not take into account collisions between particles during search. In this paper we propose a novel algorithm called Force Field Particle Swarm Optimisation (FFPSO) that designates repellent force fields to particles such that these fields provide an additional velocity component into the original PSO equations. We compare the performance of FFPSO with PSO and show that it has the ability to reduce the number of particle collisions during search to 0 whilst also being able to locate a target of interest in a similar amount of time. The scalability of the algorithm is also demonstrated via a set of experiments that considers how the number of crashes and the time taken to find the goal varies according to swarm size. Finally, we demonstrate the algorithms applicability on a swarm of real MAVs. | The work most related to ours in theoretical approach is @cite_18 . In this work each individual ePuck robot represents a particle in the PSO algorithm where the aim is to find an area of interest. However, the main contributions of our work compared to @cite_18 are that we extend this model to 3 dimensions for aerial vehicles and we show our algorithm operating on a real swarm, whereas @cite_18 only tests the algorithm in simulation. Similar to @cite_7 , @cite_18 employs a simple Braitenberg collision avoidance scheme in which particles instantaneously move in opposite directions after a collision and then continue to follow the original velocity before the collision occurred. | {
"cite_N": [
"@cite_18",
"@cite_7"
],
"mid": [
"836179420",
"2124419806"
],
"abstract": [
"This work presents a method to build a robust controller for a hose transportation system performed by aerial robots. We provide the system dynamic model, equations and desired equilibrium criteria. Control is obtained through PID controllers tuned by particle swarm optimization (PSO). The control strategy is illustrated for three quadrotors carrying two sections of a hose, but the model can be easily expanded to a bigger number of quadrotors system, due to the approach modularity. Experiments demonstrate the PSO tuning method convergence, which is fast. More than one solution is possible, and control is very robust.",
"We propose a method that relies on markerless visual observations to track the full articulation of two hands that interact with each-other in a complex, unconstrained manner. We formulate this as an optimization problem whose 54-dimensional parameter space represents all possible configurations of two hands, each represented as a kinematic structure with 26 Degrees of Freedom (DoFs). To solve this problem, we employ Particle Swarm Optimization (PSO), an evolutionary, stochastic optimization method with the objective of finding the two-hands configuration that best explains observations provided by an RGB-D sensor. To the best of our knowledge, the proposed method is the first to attempt and achieve the articulated motion tracking of two strongly interacting hands. Extensive quantitative and qualitative experiments with simulated and real world image sequences demonstrate that an accurate and efficient solution of this problem is indeed feasible."
]
} |
1907.07377 | 2959120033 | A Controller Area Network (CAN) bus in the vehicles is an efficient standard bus enabling communication between all Electronic Control Units (ECU). However, CAN bus is not enough to protect itself because of lack of security features. To detect suspicious network connections effectively, the intrusion detection system (IDS) is strongly required. Unlike the traditional IDS for Internet, there are small number of known attack signatures for vehicle networks. Also, IDS for vehicle requires high accuracy because any false-positive error can seriously affect the safety of the driver. To solve this problem, we propose a novel IDS model for in-vehicle networks, GIDS (GAN based Intrusion Detection System) using deep-learning model, Generative Adversarial Nets. GIDS can learn to detect unknown attacks using only normal data. As experiment result, GIDS shows high detection accuracy for four unknown attacks. | The early research for anomaly detection of the in-vehicle system was introduced by Hoppe @cite_3 . He presented three selected characteristics as patterns available for anomaly detection that include the recognition of an increased frequency of cyclic CAN messages, the observation of low-level communication characteristics, and the identification of obvious misuse of message IDs. Müter proposed an anomaly detection based on entropy @cite_0 . Marchetti analyzed and identified anomalies in the sequence of CAN messages @cite_6 . The proposed model features low memory and computational footprints. Salman proposed a software-based light-weight IDS and two anomaly-based algorithms based on message cycle time analysis and plausibility analysis of messages @cite_2 . It contributed to more advanced research in the field of IDS for in-vehicle networks. | {
"cite_N": [
"@cite_0",
"@cite_6",
"@cite_3",
"@cite_2"
],
"mid": [
"2756382106",
"2739928414",
"2561208905",
"2790864385"
],
"abstract": [
"The Controller Area Network (CAN) was specified with no regards to security mechanisms at all. This fact in combination with the widespread adoption of the CAN standard for connecting more than a hundred Electrical Control Units (ECUs), which control almost every aspect of modern cars, makes the CAN bus a valuable target for adversaries. As vehicles are safety-critical systems and the physical integrity of the driver has the highest priority, it is necessary to invent suitable countermeasures to limit CAN’s security risks. As a matter of fact, the close resemblances of in-vehicle networks to traditional computer networks, enables the use of conventional countermeasures, e.g. Intrusion Detection Systems (IDS). We propose a software-based light-weight IDS relying on properties extracted from the signal database of a CAN domain. Further, we suggest two anomaly-based algorithms based on message cycle time analysis and plausibility analysis of messages (e.g. speed messages). We evaluate our IDS on a simulated setup, as well as a real in-vehicle network, by performing attacks on different parts of the network. Our evaluation shows that the proposed IDS successfully detects malicious events such as injection of malformed CAN frames, unauthorized CAN frames, speedometer plausibility detection and Denial of Service (DoS) attacks. Based on our experience of implementing an in-vehicle IDS, we discuss potential challenges and constraints that engineers might face during the process of implementing an IDS system for in-vehicle networks. We believe that the results of this work can contribute to more advanced research in the field of intrusion detection systems for in-vehicle networks and thereby add to a safer driving experience.",
"This paper proposes a novel intrusion detection algorithm that aims to identify malicious CAN messages injected by attackers in the CAN bus of modern vehicles. The proposed algorithm identifies anomalies in the sequence of messages that flow in the CAN bus and is characterized by small memory and computational footprints, that make it applicable to current ECUs. Its detection performance are demonstrated through experiments carried out on real CAN traffic gathered from an unmodified licensed vehicle.",
"Modern automobiles have been proven vulnerable to hacking by security researchers. By exploiting vulnerabilities in the car's external interfaces, such as wifi, bluetooth, and physical connections, they can access a car's controller area network (CAN) bus. On the CAN bus, commands can be sent to control the car, for example cutting the brakes or stopping the engine. While securing the car's interfaces to the outside world is an important part of mitigating this threat, the last line of defence is detecting malicious behaviour on the CAN bus. We propose an anomaly detector based on a Long Short-Term Memory neural network to detect CAN bus attacks. The detector works by learning to predict the next data word originating from each sender on the bus. Highly surprising bits in the actual next word are flagged as anomalies. We evaluate the detector by synthesizing anomalies with modified CAN bus data. The synthesized anomalies are designed to mimic attacks reported in the literature. We show that the detector can detect anomalies we synthesized with low false alarm rates. Additionally, the granularity of the bit predictions can provide forensic investigators clues as to the nature of flagged anomalies.",
"With the development of 5G and Internet of Vehicles technology, the possibility of remote wireless attack on an in-vehicle network has been proven by security researchers. Anomaly detection technology can effectively alleviate the security threat, as the first line of security defense. Based on this, this paper proposes a distributed anomaly detection system using hierarchical temporal memory (HTM) to enhance the security of a vehicular controller area network bus. The HTM model can predict the flow data in real time, which depends on the state of the previous learning. In addition, we improved the abnormal score mechanism to evaluate the prediction. We manually synthesized field modification and replay attack in data field. Compared with recurrent neural networks and hidden Markov model detection models, the results show that the distributed anomaly detection system based on HTM networks achieves better performance in the area under receiver operating characteristic curve score, precision, and recall."
]
} |
1907.07377 | 2959120033 | A Controller Area Network (CAN) bus in the vehicles is an efficient standard bus enabling communication between all Electronic Control Units (ECU). However, CAN bus is not enough to protect itself because of lack of security features. To detect suspicious network connections effectively, the intrusion detection system (IDS) is strongly required. Unlike the traditional IDS for Internet, there are small number of known attack signatures for vehicle networks. Also, IDS for vehicle requires high accuracy because any false-positive error can seriously affect the safety of the driver. To solve this problem, we propose a novel IDS model for in-vehicle networks, GIDS (GAN based Intrusion Detection System) using deep-learning model, Generative Adversarial Nets. GIDS can learn to detect unknown attacks using only normal data. As experiment result, GIDS shows high detection accuracy for four unknown attacks. | Much security research in various fields has adopted deep-learning methods for IDS. For example, Zhang presented a deep-learning method to detect Web attacks by using the specially designed CNN @cite_7 . The method is based on analyzing the HTTP request packets, to which only some preprocessing is needed whereas the tedious feature extraction is done by the CNN itself. Recently, Generative Adversarial Nets (GAN) was adopted not only for image generation but also for other research like anomaly detection. Schlegl proposed AnoGAN, a deep convolutional generative adversarial network to learn a manifold of normal anatomical variability. The model demonstrated that the approach correctly identifies anomalous images, such as images containing retinal fluid @cite_4 . | {
"cite_N": [
"@cite_4",
"@cite_7"
],
"mid": [
"2808763756",
"2949257576"
],
"abstract": [
"Generative Adversarial Network (GAN) is a prominent generative model that are widely used in various applications. Recent studies have indicated that it is possible to obtain fake face images with a high visual quality based on this novel model. If those fake faces are abused in image tampering, it would cause some potential moral, ethical and legal problems. In this paper, therefore, we first propose a Convolutional Neural Network (CNN) based method to identify fake face images generated by the current best method [20], and provide experimental evidences to show that the proposed method can achieve satisfactory results with an average accuracy over 99.4 . In addition, we provide comparative results evaluated on some variants of the proposed CNN architecture, including the high pass filter, the number of the layer groups and the activation function, to further verify the rationality of our method.",
"The main contribution of this paper is a simple semi-supervised pipeline that only uses the original training set without collecting extra data. It is challenging in 1) how to obtain more training data only from the training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37 , +1.6 and +2.46 improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6 improvement over a strong baseline. The code is available at this https URL"
]
} |
1907.07202 | 2959373581 | Human gaze is known to be a strong indicator of underlying human intentions and goals during manipulation tasks. This work studies gaze patterns of human teachers demonstrating tasks to robots and proposes ways in which such patterns can be used to enhance robot learning. Using both kinesthetic teaching and video demonstrations, we identify novel intention-revealing gaze behaviors during teaching. These prove to be informative in a variety of problems ranging from reference frame inference to segmentation of multi-step tasks. Based on our findings, we propose two proof-of-concept algorithms which show that gaze data can enhance subtask classification for a multi-step task up to 6 and reward inference and policy learning for a single-step task up to 67 . Our findings provide a foundation for a model of natural human gaze in robot learning from demonstration settings and present open problems for utilizing human gaze to enhance robot learning. | There is also a rich body of work on eye gaze for human-robot interaction @cite_9 . use nonverbal cues including gaze to study timing coordination between humans and robots. Gaze information has also been shown to enable the establishment of joint attention between the human and robot partner, the recognition of human behavior and the execution of anticipatory actions @cite_9 . However, these prior works focus on gaze cues generated by the robot and not on gaze cues from humans. More recently, studied human gaze behavior for shared manipulation, where users controlled a robot arm mounted on a wheelchair via a joystick for assistive tasks of daily living. Novel patterns of gaze behaviors were identified, such as people using visual feedback for aligning the robot arm in a certain orientation and cognitive load being higher for teleoperation versus the shared autonomy condition. However, eye gaze behavior of human teachers has not been studied in the context of robot learning from demonstrations. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2033999619"
],
"abstract": [
"As assistive robots become popular in factories and homes, there is greater need for natural, multi-channel communication during collaborative manipulation tasks. Non-verbal communication such as eye gaze can provide information without overloading more taxing channels like speech. However, certain collaborative tasks may draw attention away from these subtle communication modalities. For instance, robot-to-human handovers are primarily manual tasks, and human attention is therefore drawn to robot hands rather than to robot faces during handovers. In this paper, we show that a simple manipulation of a robot’s handover behavior can significantly increase both awareness of the robot’s eye gaze and compliance with that gaze. When eye gaze communication occurs during the robot’s release of an object, delaying object release until the gaze is finished draws attention back to the robot’s head, which increases conscious perception of the robot’s communication. Furthermore, the handover delay increases peoples’ compliance with the robot’s communication over a non-delayed handover, even when compliance results in counterintuitive behavior."
]
} |
1907.07202 | 2959373581 | Human gaze is known to be a strong indicator of underlying human intentions and goals during manipulation tasks. This work studies gaze patterns of human teachers demonstrating tasks to robots and proposes ways in which such patterns can be used to enhance robot learning. Using both kinesthetic teaching and video demonstrations, we identify novel intention-revealing gaze behaviors during teaching. These prove to be informative in a variety of problems ranging from reference frame inference to segmentation of multi-step tasks. Based on our findings, we propose two proof-of-concept algorithms which show that gaze data can enhance subtask classification for a multi-step task up to 6 and reward inference and policy learning for a single-step task up to 67 . Our findings provide a foundation for a model of natural human gaze in robot learning from demonstration settings and present open problems for utilizing human gaze to enhance robot learning. | There has also been some recent work on utilizing human eye gaze for learning algorithms. used demonstrations from a person wearing an eye tracking hardware along with an egocentric camera to simultaneously ground symbols to their instances in the environment and learn the appearance of such object instances. use gaze information as a heuristic to compute a prior distribution of the goal location for reaching motions in a manipulation task. This allows for efficient inference of a multiple-model filtering approach for early intention recognition of reaching actions by pruning model-matching filters that need to be run in parallel. In our work, we show that the use of gaze in conjunction with state-action knowledge can improve reward learning via Bayesian inverse reinforcement learning (BIRL) @cite_8 . | {
"cite_N": [
"@cite_8"
],
"mid": [
"2212494831"
],
"abstract": [
"We present a probabilistic generative model for simultaneously recognizing daily actions and predicting gaze locations in videos recorded from an egocentric camera. We focus on activities requiring eye-hand coordination and model the spatio-temporal relationship between the gaze point, the scene objects, and the action label. Our model captures the fact that the distribution of both visual features and object occurrences in the vicinity of the gaze point is correlated with the verb-object pair describing the action. It explicitly incorporates known properties of gaze behavior from the psychology literature, such as the temporal delay between fixation and manipulation events. We present an inference method that can predict the best sequence of gaze locations and the associated action label from an input sequence of images. We demonstrate improvements in action recognition rates and gaze prediction accuracy relative to state-of-the-art methods, on two new datasets that contain egocentric videos of daily activities and gaze."
]
} |
1907.07384 | 2962029365 | Mutual information has been successfully adopted in filter feature-selection methods to assess both the relevancy of a subset of features in predicting the target variable and the redundancy with respect to other variables. However, existing algorithms are mostly heuristic and do not offer any guarantee on the proposed solution. In this paper, we provide novel theoretical results showing that conditional mutual information naturally arises when bounding the ideal regression classification errors achieved by different subsets of features. Leveraging on these insights, we propose a novel stopping condition for backward and forward greedy methods which ensures that the ideal prediction error using the selected feature subset remains bounded by a user-specified threshold. We provide numerical simulations to support our theoretical claims and compare to common heuristic methods. | A related theoretical study of feature selection via MI has been recently proposed by @cite_25 . The authors show that the problem of finding the minimal feature subset such that the conditional likelihood of the targets is maximized is equivalent to minimizing the CMI. Based on this result, common heuristics for information-theoretic feature selection can be seen as iteratively maximizing the conditional likelihood. Similarly, we show a connection between the CMI and the optimal prediction error. Differently from @cite_25 , we additionally propose a novel stopping condition that is well motivated by our theoretical findings. | {
"cite_N": [
"@cite_25"
],
"mid": [
"2044956198"
],
"abstract": [
"Mutual information (MI) is used in feature selection to evaluate two key-properties of optimal features, the relevance of a feature to the class variable and the redundancy of similar features. Conditional mutual information (CMI), i.e., MI of the candidate feature to the class variable conditioning on the features already selected, is a natural extension of MI but not so far applied due to estimation complications for high dimensional distributions. We propose the nearest neighbor estimate of CMI, appropriate for high-dimensional variables, and build an iterative scheme for sequential feature selection with a termination criterion, called CMINN. We show that CMINN is equivalent to feature selection MI filters, such as mRMR and MaxiMin, in the presence of solely single feature effects, and more appropriate for combined feature effects. We compare CMINN to mRMR and MaxiMin on simulated datasets involving combined effects and confirm the superiority of CMINN in selecting the correct features (indicated also by the termination criterion) and giving best classification accuracy. The application to ten benchmark databases shows that CMINN obtains the same or higher classification accuracy compared to mRMR and MaxiMin at a smaller cardinality of the selected feature subset."
]
} |
1907.07384 | 2962029365 | Mutual information has been successfully adopted in filter feature-selection methods to assess both the relevancy of a subset of features in predicting the target variable and the redundancy with respect to other variables. However, existing algorithms are mostly heuristic and do not offer any guarantee on the proposed solution. In this paper, we provide novel theoretical results showing that conditional mutual information naturally arises when bounding the ideal regression classification errors achieved by different subsets of features. Leveraging on these insights, we propose a novel stopping condition for backward and forward greedy methods which ensures that the ideal prediction error using the selected feature subset remains bounded by a user-specified threshold. We provide numerical simulations to support our theoretical claims and compare to common heuristic methods. | In the information theory literature, @cite_13 also analyzes the connection between CMI and minimum mean square error, deriving a similar result to our Theorem . However, classification problems (i.e., minimum zero-one loss) are not considered and the focus is not on feature selection. | {
"cite_N": [
"@cite_13"
],
"mid": [
"1899249567"
],
"abstract": [
"We study the connection between the highly non-convex loss function of a simple model of the fully-connected feed-forward neural network and the Hamiltonian of the spherical spin-glass model under the assumptions of: i) variable independence, ii) redundancy in network parametrization, and iii) uniformity. These assumptions enable us to explain the complexity of the fully decoupled neural network through the prism of the results from random matrix theory. We show that for large-size decoupled networks the lowest critical values of the random loss function form a layered structure and they are located in a well-defined band lower-bounded by the global minimum. The number of local minima outside that band diminishes exponentially with the size of the network. We empirically verify that the mathematical model exhibits similar behavior as the computer simulations, despite the presence of high dependencies in real networks. We conjecture that both simulated annealing and SGD converge to the band of low critical points, and that all critical points found there are local minima of high quality measured by the test error. This emphasizes a major difference between largeand small-size networks where for the latter poor quality local minima have nonzero probability of being recovered. Finally, we prove that recovering the global minimum becomes harder as the network size increases and that it is in practice irrelevant as global minimum often leads to overfitting."
]
} |
1907.07384 | 2962029365 | Mutual information has been successfully adopted in filter feature-selection methods to assess both the relevancy of a subset of features in predicting the target variable and the redundancy with respect to other variables. However, existing algorithms are mostly heuristic and do not offer any guarantee on the proposed solution. In this paper, we provide novel theoretical results showing that conditional mutual information naturally arises when bounding the ideal regression classification errors achieved by different subsets of features. Leveraging on these insights, we propose a novel stopping condition for backward and forward greedy methods which ensures that the ideal prediction error using the selected feature subset remains bounded by a user-specified threshold. We provide numerical simulations to support our theoretical claims and compare to common heuristic methods. | The authors of @cite_17 propose a nearest neighbor estimator for the CMI and show how it can be used in a classic forward feature selection algorithm. One of the authors' questions is how to devise a suitable stopping condition for such methods. Here we propose a possible answer: our stopping criterion (Section ) is intuitive, applicable to both forward and backward algorithms, and theoretically well-grounded. | {
"cite_N": [
"@cite_17"
],
"mid": [
"2121765205"
],
"abstract": [
"In this paper, we present a new framework for geo-locating an image utilizing a novel multiple nearest neighbor feature matching method using Generalized Minimum Clique Graphs (GMCP). First, we extract local features (e.g., SIFT) from the query image and retrieve a number of nearest neighbors for each query feature from the reference data set. Next, we apply our GMCP-based feature matching to select a single nearest neighbor for each query feature such that all matches are globally consistent. Our approach to feature matching is based on the proposition that the first nearest neighbors are not necessarily the best choices for finding correspondences in image matching. Therefore, the proposed method considers multiple reference nearest neighbors as potential matches and selects the correct ones by enforcing consistency among their global features (e.g., GIST) using GMCP. In this context, we argue that using a robust distance function for finding the similarity between the global features is essential for the cases where the query matches multiple reference images with dissimilar global features. Towards this end, we propose a robust distance function based on the Gaussian Radial Basis Function (G-RBF). We evaluated the proposed framework on a new data set of 102k street view images; the experiments show it outperforms the state of the art by 10 percent."
]
} |
1907.07384 | 2962029365 | Mutual information has been successfully adopted in filter feature-selection methods to assess both the relevancy of a subset of features in predicting the target variable and the redundancy with respect to other variables. However, existing algorithms are mostly heuristic and do not offer any guarantee on the proposed solution. In this paper, we provide novel theoretical results showing that conditional mutual information naturally arises when bounding the ideal regression classification errors achieved by different subsets of features. Leveraging on these insights, we propose a novel stopping condition for backward and forward greedy methods which ensures that the ideal prediction error using the selected feature subset remains bounded by a user-specified threshold. We provide numerical simulations to support our theoretical claims and compare to common heuristic methods. | Several existing approaches use linear correlation measures to score the different features @cite_31 @cite_11 @cite_5 @cite_6 @cite_7 . Such algorithms are mostly based on the heuristic intuition that a good feature should be highly correlated with the class and only weakly correlated with the other features. Instead, we provide a more theoretical justification for this claim (Section ), showing a connection between these two properties and the minimum MSE. | {
"cite_N": [
"@cite_7",
"@cite_6",
"@cite_5",
"@cite_31",
"@cite_11"
],
"mid": [
"2067877017",
"33196642",
"2142057089",
"2253239179",
"2110990863"
],
"abstract": [
"The detection of correlations between different features in a set of feature vectors is a very important data mining task because correlation indicates a dependency between the features or some association of cause and effect between them. This association can be arbitrarily complex, i.e. one or more features might be dependent from a combination of several other features. Well-known methods like the principal components analysis (PCA) can perfectly find correlations which are global, linear, not hidden in a set of noise vectors, and uniform, i.e. the same type of correlation is exhibited in all feature vectors. In many applications such as medical diagnosis, molecular biology, time sequences, or electronic commerce, however, correlations are not global since the dependency between features can be different in different subgroups of the set. In this paper, we propose a method called 4C (Computing Correlation Connected Clusters) to identify local subgroups of the data objects sharing a uniform but arbitrarily complex correlation. Our algorithm is based on a combination of PCA and density-based clustering (DBSCAN). Our method has a determinate result and is robust against noise. A broad comparative evaluation demonstrates the superior performance of 4C over competing methods such as DBSCAN, CLIQUE and ORCLUS.",
"Feature selection is a preprocessing phase to machine learning, which leads to increase the classification accuracy and reduce its complexity. However, the increase of data dimensionality poses a challenge to many existing feature selection methods. This paper formulates and validates a method for selecting optimal feature subset based on the analysis of the Pearson correlation coefficients. We adopt the correlation analysis between two variables as a feature goodness measure. Where, a feature is good if it is highly correlated to the class and is low correlated to the other features. To evaluate the proposed Feature selection method, experiments are applied on NSL-KDD dataset. The experiments shows that, the number of features is reduced from 41 to 17 features, which leads to improve the classification accuracy to 99.1 . Also,The efficiency of the proposed linear correlation feature selection method is demonstrated through extensive comparisons with other well known feature selection methods.",
"Recommender problems with large and dynamic item pools are ubiquitous in web applications like content optimization, online advertising and web search. Despite the availability of rich item meta-data, excess heterogeneity at the item level often requires inclusion of item-specific \"factors\" (or weights) in the model. However, since estimating item factors is computationally intensive, it poses a challenge for time-sensitive recommender problems where it is important to rapidly learn factors for new items (e.g., news articles, event updates, tweets) in an online fashion. In this paper, we propose a novel method called FOBFM (Fast Online Bilinear Factor Model) to learn item-specific factors quickly through online regression. The online regression for each item can be performed independently and hence the procedure is fast, scalable and easily parallelizable. However, the convergence of these independent regressions can be slow due to high dimensionality. The central idea of our approach is to use a large amount of historical data to initialize the online models based on offline features and learn linear projections that can effectively reduce the dimensionality. We estimate the rank of our linear projections by taking recourse to online model selection based on optimizing predictive likelihood. Through extensive experiments, we show that our method significantly and uniformly outperforms other competitive methods and obtains relative lifts that are in the range of 10-15 in terms of predictive log-likelihood, 200-300 for a rank correlation metric on a proprietary My Yahoo! dataset; it obtains 9 reduction in root mean squared error over the previously best method on a benchmark MovieLens dataset using a time-based train test data split.",
"We derive a least-squares formulation for MDDMp technique.A novel multi-label feature extraction algorithm is proposed.Our algorithm maximizes both feature variance and feature-label dependence.Experiments show that our algorithm is a competitive candidate. Dimensionality reduction is an important pre-processing procedure for multi-label classification to mitigate the possible effect of dimensionality curse, which is divided into feature extraction and selection. Principal component analysis (PCA) and multi-label dimensionality reduction via dependence maximization (MDDM) represent two mainstream feature extraction techniques for unsupervised and supervised paradigms. They produce many small and a few large positive eigenvalues respectively, which could deteriorate the classification performance due to an improper number of projection directions. It has been proved that PCA proposed primarily via maximizing feature variance is associated with a least-squares formulation. In this paper, we prove that MDDM with orthonormal projection directions also falls into the least-squares framework, which originally maximizes Hilbert-Schmidt independence criterion (HSIC). Then we propose a novel multi-label feature extraction method to integrate two least-squares formulae through a linear combination, which maximizes both feature variance and feature-label dependence simultaneously and thus results in a proper number of positive eigenvalues. Experimental results on eight data sets show that our proposed method can achieve a better performance, compared with other seven state-of-the-art multi-label feature extraction algorithms.",
"Rank correlation measures are known for their resilience to perturbations in numeric values and are widely used in many evaluation metrics. Such ordinal measures have rarely been applied in treatment of numeric features as a representational transformation. We emphasize the benefits of ordinal representations of input features both theoretically and empirically. We present a family of algorithms for computing ordinal embeddings based on partial order statistics. Apart from having the stability benefits of ordinal measures, these embeddings are highly nonlinear, giving rise to sparse feature spaces highly favored by several machine learning methods. These embeddings are deterministic, data independent and by virtue of being based on partial order statistics, add another degree of resilience to noise. These machine-learning-free methods when applied to the task of fast similarity search outperform state-of-the-art machine learning methods with complex optimization setups. For solving classification problems, the embeddings provide a nonlinear transformation resulting in sparse binary codes that are well-suited for a large class of machine learning algorithms. These methods show significant improvement on VOC 2010 using simple linear classifiers which can be trained quickly. Our method can be extended to the case of polynomial kernels, while permitting very efficient computation. Further, since the popular Min Hash algorithm is a special case of our method, we demonstrate an efficient scheme for computing Min Hash on conjunctions of binary features. The actual method can be implemented in about 10 lines of code in most languages (2 lines in MAT-LAB), and does not require any data-driven optimization."
]
} |
1907.07240 | 2966730110 | Social media has become an integral part of our daily lives. During time-critical events, the public shares a variety of posts on social media including reports for resource needs, damages, and help offerings for the affected community. Such posts can be relevant and may contain valuable situational awareness information. However, the information overload of social media challenges the timely processing and extraction of relevant information by the emergency services. Furthermore, the growing usage of multimedia content in the social media posts in recent years further adds to the challenge in timely mining relevant information from social media. In this paper, we present a novel method for multimodal relevancy classification of social media posts, where relevancy is defined with respect to the information needs of emergency management agencies. Specifically, we experiment with the combination of semantic textual features with the image features to efficiently classify a relevant multimodal social media post. We validate our method using an evaluation of classifying the data from three real-world crisis events. Our experiments demonstrate that features based on the proposed hybrid framework of exploiting both textual and image content improve the performance of identifying relevant posts. In the light of these experiments, the application of the proposed classification method could reduce cognitive load on emergency services, in filtering multimodal public posts at large scale. | There has been extensive research on the topic of social media for emergency management in the last decade @cite_5 @cite_6 . The data generated over social media have such high volume, variety, and velocity that they give rise to the challenges of "Big Crisis Data", which often overwhelm the emergency services @cite_6 . The crisis informatics literature @cite_17 has investigated social media for emergency services from diverse multidisciplinary perspectives. User studies with emergency responders have identified information overload as one of the key barriers to the efficient use of social media platforms by PIOs and emergency services @cite_11 @cite_6 . Such information overload stems in part from having to process unstructured and noisy multimodal social media content at large scale, which is beyond the capacity of the limited human resources. Furthermore, characterizing the relevancy of social media content is very contextual, time-sensitive, and often challenging @cite_0 . | {
"cite_N": [
"@cite_17",
"@cite_6",
"@cite_0",
"@cite_5",
"@cite_11"
],
"mid": [
"2898274636",
"1934362406",
"2573190752",
"2887765139",
"2792198542"
],
"abstract": [
"The public expects a prompt response from emergency services to address requests for help posted on social media. However, the information overload of social media experienced by these organizations, coupled with their limited human resources, challenges them to timely identify and prioritize critical requests. This is particularly acute in crisis situations where any delay may have a severe impact on the effectiveness of the response. While social media has been extensively studied during crises, there is limited work on formally characterizing serviceable help requests and automatically prioritizing them for a timely response. In this paper, we present a formal model of serviceability called Social-EOC (Social Emergency Operations Center), which describes the elements of a serviceable message posted in social media that can be expressed as a request. We also describe a system for the discovery and ranking of highly serviceable requests, based on the proposed serviceability model. We validate the model for emergency services, by performing an evaluation based on real-world data from six crises, with ground truth provided by emergency management practitioners. Our experiments demonstrate that features based on the serviceability model improve the performance of discovering and ranking (nDCG up to 25 ) service requests over different baselines. In the light of these experiments, the application of the serviceability model could reduce the cognitive load on emergency operation center personnel, in filtering and ranking public requests at scale.",
"Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.",
"Semi-structured interviews were conducted with U.S. public sector emergency managers to probe barriers to use of social media and reactions to possible software enhancements to support such use. The three most frequently described barriers were lack of personnel time to work on use of social media, lack of policies and guidelines for its use, and concern about the trustworthiness of pulled data. The most popular of the possible technological enhancements described for Twitter are filtering by category of user contributor, and display of posts on a GIS system with a map-based display.",
"Abstract The importance of timely, accurate and effective use of available information is essential to the proper management of emergency situations. In recent years, emerging technologies have provided new approaches towards the distribution and acquisition of crowdsourced information to facilitate situational awareness and management during emergencies. In this regard, internet and social networks have shown potential to be an effective tool in disseminating and obtaining up-to-date information. Among the most popular social networks, research has pointed to Twitter as a source of information that offers valuable real-time data for decision-making. The objective of this paper is to conduct a systematic literature review that provides an overview of the current state of research concerning the use of Twitter to emergencies management, as well as presents the challenges and future research directions.",
"ABSTRACTThe extensive use of social media platforms, especially during disasters, creates unique opportunities for humanitarian organizations to gain situational awareness as disaster unfolds. In addition to textual content, people post overwhelming amounts of imagery content on social networks within minutes of a disaster hit. Studies point to the importance of this online imagery content for emergency response. Despite recent advances in computer vision research, making sense of the imagery content in real-time during disasters remains a challenging task. One of the important challenges is that a large proportion of images shared on social media is redundant or irrelevant, which requires robust filtering mechanisms. Another important challenge is that images acquired after major disasters do not share the same characteristics as those in large-scale image collections with clean annotations of well-defined object categories such as house, car, airplane, cat, dog, etc., used traditionally in computer visi..."
]
} |
1907.07240 | 2966730110 | Social media has become an integral part of our daily lives. During time-critical events, the public shares a variety of posts on social media including reports for resource needs, damages, and help offerings for the affected community. Such posts can be relevant and may contain valuable situational awareness information. However, the information overload of social media challenges the timely processing and extraction of relevant information by the emergency services. Furthermore, the growing usage of multimedia content in the social media posts in recent years further adds to the challenge in timely mining relevant information from social media. In this paper, we present a novel method for multimodal relevancy classification of social media posts, where relevancy is defined with respect to the information needs of emergency management agencies. Specifically, we experiment with the combination of semantic textual features with the image features to efficiently classify a relevant multimodal social media post. We validate our method using an evaluation of classifying the data from three real-world crisis events. Our experiments demonstrate that features based on the proposed hybrid framework of exploiting both textual and image content improve the performance of identifying relevant posts. In the light of these experiments, the application of the proposed classification method could reduce cognitive load on emergency services, in filtering multimodal public posts at large scale. | Among the social media analytics approaches, researchers have modeled public behavior in specific emergencies and addressed the problems of data collection and filtering, classification and summarization, as well as visualization of analyzed data for decision support @cite_5 . However, the focus of such works has centered on text analytics, with the exception of recent studies @cite_13 @cite_15 @cite_4 @cite_12 on processing the multimedia content of social posts. Moreover, current multimodal information processing approaches for social media mining during disasters have primarily analyzed only the damage assessment aspect of emergency management. We further complement these recent studies by proposing a generic classification framework for relevant information that exploits both the textual and image content of multimodal social media posts. | {
"cite_N": [
"@cite_4",
"@cite_5",
"@cite_15",
"@cite_13",
"@cite_12"
],
"mid": [
"2962870381",
"1939595860",
"2187303655",
"1934362406",
"2535764243"
],
"abstract": [
"The CrisisMMD multimodal Twitter dataset consists of several thousands of manually annotated tweets and images collected during seven major natural disasters including earthquakes, hurricanes, wildfires, and floods that happened in the year 2017 across different parts of the World. The provided datasets include three types of annotations. Informative vs not-informative: Informative Not informative Don’t know or can’t judge Humanitarian categories Affected individuals Infrastructure and utility damage Injured or dead people Missing or found people Rescue, volunteering or donation effort Vehicle damage Other relevant information Not relevant or can’t judge Damage severity assessment Severe damage Mild damage Little or no damage Don’t know or can’t judge Please use alternate download link CrisisNLP, in case if you are having problem in downloading from this site. You can also get tweet ids and a tweet downloader tool too.",
"Today's crises attract great attention on social media, from local and distant citizens as well as from news media. This study investigates the possibilities of real-time and automated analysis of Twitter messages during crises. The analysis was performed through application of an information extraction tool to nearly 97,000 tweets that were published shortly before, during and after a storm hit the Pukkelpop 2011 festival in Belgium. As soon as the storm hit the festival tweet activity increased exponentially, peaking at 576 tweets per minute. The extraction tool enabled analyzing tweets through predefined (geo)graphical displays, message content filters (damage, casualties) and tweet type filters (e.g., retweets). Important topics that emerged were 'early warning tweets', 'rumors' and the 'self-organization of disaster relief' on Twitter. Results indicate that automated filtering of information provides valuable information for operational response and crisis communication. Steps for further research are discussed. © 2012 ISCRAM. Environmental Systems Research Institute, Inc. (ESRI)",
"Microblogging sites such as Twitter can play a vital role in spreading information during “natural” or man-made disasters. But the volume and velocity of tweets posted during crises today tend to be extremely high, making it hard for disaster-affected communities and professional emergency responders to process the information in a timely manner. Furthermore, posts tend to vary highly in terms of their subjects and usefulness; from messages that are entirely off-topic or personal in nature, to messages containing critical information that augments situational awareness. Finding actionable information can accelerate disaster response and alleviate both property and human losses. In this paper, we describe automatic methods for extracting information from microblog posts. Specifically, we focus on extracting valuable “information nuggets”, brief, self-contained information items relevant to disaster response. Our methods leverage machine learning methods for classifying posts and information extraction. Our results, validated over one large disaster-related dataset, reveal that a careful design can yield an effective system, paving the way for more sophisticated data analysis and visualization systems.",
"Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.",
"The first objective towards the effective use of microblogging services such as Twitter for situational awareness during the emerging disasters is discovery of the disaster-related postings. Given the wide range of possible disasters, using a pre-selected set of disaster-related keywords for the discovery is suboptimal. An alternative that we focus on in this work is to train a classifier using a small set of labeled postings that are becoming available as a disaster is emerging. Our hypothesis is that utilizing large quantities of historical microblogs could improve the quality of classification, as compared to training a classifier only on the labeled data. We propose to use unlabeled microblogs to cluster words into a limited number of clusters and use the word clusters as features for classification. To evaluate the proposed semi-supervised approach, we used Twitter data from 6 different disasters. Our results indicate that when the number of labeled tweets is 100 or less, the proposed approach is superior to the standard classification based on the bag or words feature representation. Our results also reveal that the choice of the unlabeled corpus, the choice of word clustering algorithm, and the choice of hyperparameters can have a significant impact on the classification accuracy."
]
} |
1907.07378 | 2957866194 | Competency Questions (CQs) for an ontology and similar artefacts aim to provide insights into the contents of an ontology and to demarcate its scope. The absence of a controlled natural language, tooling and automation to support the authoring of CQs has hampered their effective use in ontology development and evaluation. The few question templates that exists are based on informal analyses of a small number of CQs and have limited coverage of question types and sentence constructions. We aim to fill this gap by proposing a template-based CNL to author CQs, called CLaRO. For its design, we exploited a new dataset of 234 CQs that had been processed automatically into 106 patterns, which we analysed and used to design a template-based CNL, with an additional CNL model and XML serialisation. The CNL was evaluated with a subset of questions from the original dataset and with two sets of newly sourced CQs. The coverage of CLaRO, with its 93 main templates and 41 linguistic variants, is about 90 for unseen questions. CLaRO has the potential to facilitate streamlining formalising ontology content requirements and, given that about one third of the competency questions in the test sets turned out to be invalid questions, assist in writing good questions. | Given that a CNL for CQs is supposed to function for specifying requirements for any ontology, the logic-based knowledge representation must be decoupled from the natural language. At the same time, it is well known that the other extreme---free-form sentences---makes formalisation exceedingly hard, be this for query or axiom generation; e.g., most recently, a system has been proposed that allows free-text as input, but only four types of questions may generate answers in its IR-based approach (some definition questions, yes/no, facts, and lists) @cite_7 . A middle way to bridge this gap is to design a CNL. | {
"cite_N": [
"@cite_7"
],
"mid": [
"1579696959"
],
"abstract": [
"Computational semantics and logic-based controlled natural languages (CNL) do not address systematically the word sense disambiguation problem of content words, i.e., they tend to interpret only some functional words that are crucial for construction of discourse representation structures. We show that micro-ontologies and multi-word units allow integration of the rich and polysemous multi-domain background knowledge into CNL thus providing interpretation for the content words. The proposed approach is demonstrated by extending the Attempto Controlled English (ACE) with polysemous and procedural constructs resulting in a more natural CNL named PAO covering narrative multi-domain texts."
]
} |
1907.07378 | 2957866194 | Competency Questions (CQs) for an ontology and similar artefacts aim to provide insights into the contents of an ontology and to demarcate its scope. The absence of a controlled natural language, tooling and automation to support the authoring of CQs has hampered their effective use in ontology development and evaluation. The few question templates that exists are based on informal analyses of a small number of CQs and have limited coverage of question types and sentence constructions. We aim to fill this gap by proposing a template-based CNL to author CQs, called CLaRO. For its design, we exploited a new dataset of 234 CQs that had been processed automatically into 106 patterns, which we analysed and used to design a template-based CNL, with an additional CNL model and XML serialisation. The CNL was evaluated with a subset of questions from the original dataset and with two sets of newly sourced CQs. The coverage of CLaRO, with its 93 main templates and 41 linguistic variants, is about 90 for unseen questions. CLaRO has the potential to facilitate streamlining formalising ontology content requirements and, given that about one third of the competency questions in the test sets turned out to be invalid questions, assist in writing good questions. | CNLs for computation have been proposed as a solution for various information management aspects, such as query formulation to hide SPARQL syntax (e.g., Sparklis @cite_20 and Quelo @cite_15 ), generation of pseudo-NL sentences from axioms in an ontology to formalise them (e.g., ACE @cite_22 ), and software requirements formulation with, notably, the Semantics of Business Vocabulary and Rules (SBVR) @cite_24 . Recent literature reviews on CNLs within the scope of the Semantic Web can be found in @cite_19 @cite_16 , and more broadly on CNLs in @cite_3 . They all---22 tools and proposals in @cite_16 and 22 in @cite_19 ---focus on assertions for ontology authoring, even those for queries, which take the form of "give me all writers who ..." rather than "which writers ...?"; and even where they are questions, they are about instances rather than the TBox level of typical CQs, and hence take a different form. | {
"cite_N": [
"@cite_22",
"@cite_3",
"@cite_24",
"@cite_19",
"@cite_15",
"@cite_16",
"@cite_20"
],
"mid": [
"1579696959",
"2330889199",
"2466714650",
"2212742528",
"2101493333",
"2774249241",
"2620365397"
],
"abstract": [
"Computational semantics and logic-based controlled natural languages (CNL) do not address systematically the word sense disambiguation problem of content words, i.e., they tend to interpret only some functional words that are crucial for construction of discourse representation structures. We show that micro-ontologies and multi-word units allow integration of the rich and polysemous multi-domain background knowledge into CNL thus providing interpretation for the content words. The proposed approach is demonstrated by extending the Attempto Controlled English (ACE) with polysemous and procedural constructs resulting in a more natural CNL named PAO covering narrative multi-domain texts.",
"One of the core challenges for building the semantic web is the creation of ontologies, a process known as ontology authoring. Controlled natural languages (CNLs) propose different frameworks for interfacing and creating ontologies in semantic web systems using restricted natural language. However, in order to engage non-expert users with no background in knowledge engineering, these language interfacing must be reliable, easy to understand and accepted by users. This paper includes the state-of-the-art for CNLs in terms of ontology authoring and the semantic web. In addition, it includes a detailed analysis of user evaluations with respect to each CNL and offers analytic conclusions with respect to the field.",
"Our goal is to combine the rich multistep inference of symbolic logical reasoning with the generalization capabilities of neural networks. We are particularly interested in complex reasoning about entities and relations in text and large-scale knowledge bases (KBs). (2015) use RNNs to compose the distributed semantics of multi-hop paths in KBs; however for multiple reasons, the approach lacks accuracy and practicality. This paper proposes three significant modeling advances: (1) we learn to jointly reason about relations, entities, and entity-types; (2) we use neural attention modeling to incorporate multiple paths; (3) we learn to share strength in a single RNN that represents logical composition across all relations. On a largescale Freebase+ClueWeb prediction task, we achieve 25 error reduction, and a 53 error reduction on sparse relations due to shared strength. On chains of reasoning in WordNet we reduce error in mean quantile by 84 versus previous state-of-the-art. The code and data are available at this https URL",
"The logic-based machine-understandable framework of the Semantic Web often challenges naive users when they try to query ontology-based knowledge bases. Existing research efforts have approached this problem by introducing Natural Language (NL) interfaces to ontologies. These NL interfaces have the ability to construct SPARQL queries based on NL user queries. However, most efforts were restricted to queries expressed in English, and they often benefited from the advancement of English NLP tools. However, little research has been done to support querying the Arabic content on the Semantic Web by using NL queries. This paper presents a domain-independent approach to translate Arabic NL queries to SPARQL by leveraging linguistic analysis. Based on a special consideration on Noun Phrases (NPs), our approach uses a language parser to extract NPs and the relations from Arabic parse trees and match them to the underlying ontology. It then utilizes knowledge in the ontology to group NPs into triple-based representations. A SPARQL query is finally generated by extracting targets and modifiers, and interpreting them into SPARQL. The interpretation of advanced semantic features including negation, conjunctive and disjunctive modifiers is also supported. The approach was evaluated by using two datasets consisting of OWL test data and queries, and the obtained results have confirmed its feasibility to translate Arabic NL queries to SPARQL.",
"What is here called controlled natural language CNL has traditionally been given many different names. Especially during the last four decades, a wide variety of such languages have been designed. They are applied to improve communication among humans, to improve translation, or to provide natural and intuitive representations for formal notations. Despite the apparent differences, it seems sensible to put all these languages under the same umbrella. To bring order to the variety of languages, a general classification scheme is presented here. A comprehensive survey of existing English-based CNLs is given, listing and describing 100 languages from 1930 until today. Classification of these languages reveals that they form a single scattered cloud filling the conceptual space between natural languages such as English on the one end and formal languages such as propositional logic on the other. The goal of this article is to provide a common terminology and a common model for CNL, to contribute to the understanding of their general nature, to provide a starting point for researchers interested in the area, and to help developers to make design decisions.",
"Research has seen considerable achievements concerning translation of natural language patterns into formal queries for Question Answering (QA) based on Knowledge Graphs (KG). One of the main challenges in this research area is about how to identify which property within a Knowledge Graph matches the predicate found in a Natural Language (NL) relation. Current approaches for formal query generation attempt to resolve this problem mainly by first retrieving the named entity from the KG together with a list of its predicates, then filtering out one from all the predicates of the entity. We attempt an approach to directly match an NL predicate to KG properties that can be employed within QA pipelines. In this paper, we specify a systematic approach as well as providing a tool that can be employed to solve this task. Our approach models KB relations with their underlying parts of speech, we then enhance this with extra attributes obtained from Wordnet and Dependency parsing characteristics. From a question, we model a similar representation of query relations. We then define distance measurements between the query relation and the properties representations from the KG to identify which property is referred to by the relation within the query. We report substantive recall values and considerable precision from our evaluation.",
"Given an image and a natural language query phrase, a grounding system localizes the mentioned objects in the image according to the query's specifications. State-of-the-art methods address the problem by ranking a set of proposal bounding boxes according to the query's semantics, which makes them dependent on the performance of proposal generation systems. Besides, query phrases in one sentence may be semantically related in one sentence and can provide useful cues to ground objects. We propose a novel Multimodal Spatial Regression with semantic Context (MSRC) system which not only predicts the location of ground truth based on proposal bounding boxes, but also refines prediction results by penalizing similarities of different queries coming from same sentences. The advantages of MSRC are twofold: first, it removes the limitation of performance from proposal generation algorithms by using a spatial regression network. Second, MSRC not only encodes the semantics of a query phrase, but also deals with its relation with other queries in the same sentence (i.e., context) by a context refinement network. Experiments show MSRC system provides a significant improvement in accuracy on two popular datasets: Flickr30K Entities and Refer-it Game, with 6.64 and 5.28 increase over the state-of-the-arts respectively."
]
} |
1907.07171 | 2959108703 | An open secret in contemporary machine learning is that many models work beautifully on standard benchmarks but fail to generalize outside the lab. This has been attributed to training on biased data, which provide poor coverage over real world events. Generative models are no exception, but recent advances in generative adversarial networks (GANs) suggest otherwise -- these models can now synthesize strikingly realistic and diverse images. Is generative modeling of photos a solved problem? We show that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold. In particular, we study their ability to fit simple transformations such as camera movements and color changes. We find that the models reflect the biases of the datasets on which they are trained (e.g., centered objects), but that they also exhibit some capacity for generalization: by "steering" in latent space, we can shift the distribution while still creating realistic images. We hypothesize that the degree of distributional shift is related to the breadth of the training data distribution, and conduct experiments that demonstrate this. Code is released on our project page: this https URL | Biases from training data and network architecture both factor into the generalization capacity of learned models @cite_16 @cite_8 @cite_17 . Dataset biases partly come from human preferences in taking photos: we typically capture images in specific "canonical" views that are not fully representative of the entire visual world @cite_20 @cite_13 . When models are trained to fit these datasets, they inherit the biases in the data. Such biases may result in models that misrepresent the given task -- such as tendencies towards texture bias rather than shape bias in ImageNet classifiers @cite_8 -- which in turn limits their generalization performance on similar objectives @cite_6 . Our latent space trajectories transform the output corresponding to various camera motion and image editing operations, but ultimately we are constrained by biases in the data and cannot extrapolate arbitrarily far beyond the dataset. | {
"cite_N": [
"@cite_8",
"@cite_6",
"@cite_16",
"@cite_13",
"@cite_20",
"@cite_17"
],
"mid": [
"2526782364",
"2807007689",
"1852255964",
"2953218089",
"2737691244",
"2963998559"
],
"abstract": [
"Recently, neuron activations extracted from a pre-trained convolutional neural network (CNN) show promising performance in various visual tasks. However, due to the domain and task bias, using the features generated from the model pre-trained for image classification as image representations for instance retrieval is problematic. In this paper, we propose quartet-net learning to improve the discriminative power of CNN features for instance retrieval. The general idea is to map the features into a space where the image similarity can be better evaluated. Our network differs from the traditional Siamese-net in two ways. First, we adopt a double-margin contrastive loss with a dynamic margin tuning strategy to train the network which leads to more robust performance. Second, we introduce in the mimic learning regularization to improve the generalization ability of the network by preventing it from overfitting to the training data. Catering for the network learning, we collect a large-scale dataset, namely GeoPair, which consists of 68k matching image pairs and 63k non-matching pairs. Experiments on several standard instance retrieval datasets demonstrate the effectiveness of our method.",
"Deep convolutional network architectures are often assumed to guarantee generalization for small image translations and deformations. In this paper we show that modern CNNs (VGG16, ResNet50, and InceptionResNetV2) can drastically change their output when an image is translated in the image plane by a few pixels, and that this failure of generalization also happens with other realistic small image transformations. Furthermore, the deeper the network the more we see these failures to generalize. We show that these failures are related to the fact that the architecture of modern CNNs ignores the classical sampling theorem so that generalization is not guaranteed. We also show that biases in the statistics of commonly used image datasets makes it unlikely that CNNs will learn to be invariant to these transformations. Taken together our results suggest that the performance of CNNs in object recognition falls far short of the generalization capabilities of humans.",
"The presence of bias in existing object recognition datasets is now well-known in the computer vision community. While it remains in question whether creating an unbiased dataset is possible given limited resources, in this work we propose a discriminative framework that directly exploits dataset bias during training. In particular, our model learns two sets of weights: (1) bias vectors associated with each individual dataset, and (2) visual world weights that are common to all datasets, which are learned by undoing the associated bias from each dataset. The visual world weights are expected to be our best possible approximation to the object model trained on an unbiased dataset, and thus tend to have good generalization ability. We demonstrate the effectiveness of our model by applying the learned weights to a novel, unseen dataset, and report superior results for both classification and detection tasks compared to a classical SVM that does not account for the presence of bias. Overall, we find that it is beneficial to explicitly account for bias when combining multiple datasets.",
"Neural networks achieve the state-of-the-art in image classification tasks. However, they can encode spurious variations or biases that may be present in the training data. For example, training an age predictor on a dataset that is not balanced for gender can lead to gender biased predicitons (e.g. wrongly predicting that males are older if only elderly males are in the training set). We present two distinct contributions: 1) An algorithm that can remove multiple sources of variation from the feature representation of a network. We demonstrate that this algorithm can be used to remove biases from the feature representation, and thereby improve classification accuracies, when training networks on extremely biased datasets. 2) An ancestral origin database of 14,000 images of individuals from East Asia, the Indian subcontinent, sub-Saharan Africa, and Western Europe. We demonstrate on this dataset, for a number of facial attribute classification tasks, that we are able to remove racial biases from the network feature representation.",
"In this paper, we study the problem of training large-scale face identification model with imbalanced training data. This problem naturally exists in many real scenarios including large-scale celebrity recognition, movie actor annotation, etc. Our solution contains two components. First, we build a face feature extraction model, and improve its performance, especially for the persons with very limited training samples, by introducing a regularizer to the cross entropy loss for the multi-nomial logistic regression (MLR) learning. This regularizer encourages the directions of the face features from the same class to be close to the direction of their corresponding classification weight vector in the logistic regression. Second, we build a multi-class classifier using MLR on top of the learned face feature extraction model. Since the standard MLR has poor generalization capability for the one-shot classes even if these classes have been oversampled, we propose a novel supervision signal called underrepresented-classes promotion loss, which aligns the norms of the weight vectors of the one-shot classes (a.k.a. underrepresented-classes) to those of the normal classes. In addition to the original cross entropy loss, this new loss term effectively promotes the underrepresented classes in the learned model and leads to a remarkable improvement in face recognition performance. We test our solution on the MS-Celeb-1M low-shot learning benchmark task. Our solution recognizes 94.89 of the test images at the precision of 99 for the one-shot classes. To the best of our knowledge, this is the best performance among all the published methods using this benchmark task with the same setup, including all the participants in the recent MS-Celeb-1M challenge at ICCV 2017.",
"During the last half decade, convolutional neural networks (CNNs) have triumphed over semantic segmentation, which is a core task of various emerging industrial applications such as autonomous driving and medical imaging. However, to train CNNs requires a huge amount of data, which is difficult to collect and laborious to annotate. Recent advances in computer graphics make it possible to train CNN models on photo-realistic synthetic data with computer-generated annotations. Despite this, the domain mismatch between the real images and the synthetic data significantly decreases the models’ performance. Hence we propose a curriculum-style learning approach to minimize the domain gap in semantic segmentation. The curriculum domain adaptation solves easy tasks first in order to infer some necessary properties about the target domain; in particular, the first task is to learn global label distributions over images and local distributions over landmark superpixels. These are easy to estimate because images of urban traffic scenes have strong idiosyncrasies (e.g., the size and spatial relations of buildings, streets, cars, etc.). We then train the segmentation network in such a way that the network predictions in the target domain follow those inferred properties. In experiments, our method significantly outperforms the baselines as well as the only known existing approach to the same problem."
]
} |
1907.07171 | 2959108703 | An open secret in contemporary machine learning is that many models work beautifully on standard benchmarks but fail to generalize outside the lab. This has been attributed to training on biased data, which provide poor coverage over real world events. Generative models are no exception, but recent advances in generative adversarial networks (GANs) suggest otherwise -- these models can now synthesize strikingly realistic and diverse images. Is generative modeling of photos a solved problem? We show that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold. In particular, we study their ability to fit simple transformations such as camera movements and color changes. We find that the models reflect the biases of the datasets on which they are trained (e.g., centered objects), but that they also exhibit some capacity for generalization: by "steering" in latent space, we can shift the distribution while still creating realistic images. We hypothesize that the degree of distributional shift is related to the breadth of the training data distribution, and conduct experiments that demonstrate this. Code is released on our project page: this https URL | The recent progress in generative models has enabled interesting applications for content creation @cite_11 @cite_14 , including variants that enable end users to control and fine-tune the generated output @cite_3 @cite_19 @cite_12 . A by-product of the current work is to further enable users to modify various image properties by turning a single knob -- the magnitude of the learned transformation. Similar to @cite_24 , we show that GANs allow users to achieve basic image editing operations by manipulating the latent space. However, we further demonstrate that these image manipulations are not just a simple creativity tool; they also provide us with a window into the biases and generalization capacity of these models. | {
"cite_N": [
"@cite_14",
"@cite_3",
"@cite_24",
"@cite_19",
"@cite_12",
"@cite_11"
],
"mid": [
"2963577681",
"2963105487",
"2881214865",
"2530372461",
"2897946384",
"2769521316"
],
"abstract": [
"Despite the recent advance of Generative Adversarial Networks (GANs) in high-fidelity image synthesis, there lacks enough understandings on how GANs are able to map the latent code sampled from a random distribution to a photo-realistic image. Previous work assumes the latent space learned by GAN follows a distributed representation but observes the vector arithmetic phenomenon of the output's semantics in latent space. In this work, we interpret the semantics hidden in the latent space of well-trained GANs. We find that the latent code for well-trained generative models, such as ProgressiveGAN and StyleGAN, actually learns a disentangled representation after some linear transformations. We make a rigorous analysis on the encoding of various semantics in the latent space as well as their properties, and then study how these semantics are correlated to each other. Based on our analysis, we propose a simple and general technique, called InterFaceGAN, for semantic face editing in latent space. Given a synthesized face, we are able to faithfully edit its various attributes such as pose, expression, age, presence of eyeglasses, without retraining the GAN model. Furthermore, we show that even the artifacts occurred in output images are able to be fixed using same approach. Extensive results suggest that learning to synthesize faces spontaneously brings a disentangled and controllable facial attribute representation",
"Generative adversarial networks (GANs) learn a deep generative model that is able to synthesize novel, high-dimensional data samples. New data samples are synthesized by passing latent samples, drawn from a chosen prior distribution, through the generative model. Once trained, the latent space exhibits interesting properties that may be useful for downstream tasks such as classification or retrieval. Unfortunately, GANs do not offer an “inverse model,” a mapping from data space back to latent space, making it difficult to infer a latent representation for a given data sample. In this paper, we introduce a technique, inversion , to project data samples, specifically images, to the latent space using a pretrained GAN. Using our proposed inversion technique, we are able to identify which attributes of a data set a trained GAN is able to model and quantify GAN performance, based on a reconstruction loss. We demonstrate how our proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets. We provide codes for all of our experiments in the website ( https: github.com ToniCreswell InvertingGAN ).",
"We introduce a novel generative autoencoder network model that learns to encode and reconstruct images with high quality and resolution, and supports smooth random sampling from the latent space of the encoder. Generative adversarial networks (GANs) are known for their ability to simulate random high-quality images, but they cannot reconstruct existing images. Previous works have attempted to extend GANs to support such inference but, so far, have not delivered satisfactory high-quality results. Instead, we propose the Progressively Growing Generative Autoencoder (Pioneer) network which achieves high-quality reconstruction with (128 128 ) images without requiring a GAN discriminator. We merge recent techniques for progressively building up the parts of the network with the recently introduced adversarial encoder–generator network. The ability to reconstruct input images is crucial in many real-world applications, and allows for precise intelligent manipulation of existing images. We show promising results in image synthesis and inference, with state-of-the-art results in CelebA inference tasks.",
"Generative Adversarial Networks (GANs) have recently demonstrated the capability to synthesize compelling real-world images, such as room interiors, album covers, manga, faces, birds, and flowers. While existing models can synthesize images based on global constraints such as a class label or caption, they do not provide control over pose or object location. We propose a new model, the Generative Adversarial What-Where Network (GAWWN), that synthesizes images given instructions describing what content to draw in which location. We show high-quality 128 × 128 image synthesis on the Caltech-UCSD Birds dataset, conditioned on both informal text descriptions and also object location. Our system exposes control over both the bounding box around the bird and its constituent parts. By modeling the conditional distributions over part locations, our system also enables conditioning on arbitrary subsets of parts (e.g. only the beak and tail), yielding an efficient interface for picking part locations.",
"We show how we can globally edit images using textual instructions: given a source image and a textual instruction for the edit, generate a new image transformed under this instruction. To tackle this novel problem, we develop three different trainable models based on RNN and Generative Adversarial Network (GAN). The models (bucket, filter bank, and end-to-end) differ in how much expert knowledge is encoded, with the most general version being purely end-to-end. To train these systems, we use Amazon Mechanical Turk to collect textual descriptions for around 2000 image pairs sampled from several datasets. Experimental results evaluated on our dataset validate our approaches. In addition, given that the filter bank model is a good compromise between generality and performance, we investigate it further by replacing RNN with Graph RNN, and show that Graph RNN improves performance. To the best of our knowledge, this is the first computational photography work on global image editing that is purely based on free-form textual instructions.",
"Generative models, such as variational auto-encoders (VAE) and generative adversarial networks (GAN), have been immensely successful in approximating image statistics in computer vision. VAEs are useful for unsupervised feature learning, while GANs alleviate supervision by penalizing inaccurate samples using an adversarial game. In order to utilize benefits of these two approaches, we combine the VAE under an adversarial setup with auxiliary label information. We show that factorizing the latent space to separate the information needed for reconstruction (a continuous space) from the information needed for image attribute classification (a discrete space), enables the capability to edit specific attributes of an image."
]
} |
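The inversion abstract quoted in the entry above describes mapping an image back to the latent space of a pretrained GAN by minimizing a reconstruction loss over the latent code. A minimal sketch of that idea is given below; it is only an illustration under stated assumptions (a toy linear map stands in for the generator so the example runs stand-alone, and a plain squared pixel loss replaces the perceptual losses typically used), not the cited authors' code.

```python
# Toy illustration of GAN inversion by reconstruction loss (assumptions, not the cited code):
# a linear map stands in for the generator, and gradient descent searches for the latent
# code whose output matches the target image.
import numpy as np

rng = np.random.default_rng(0)
latent_dim, image_dim = 8, 64
W = rng.normal(size=(image_dim, latent_dim))   # weights of the toy "generator" (assumption)

def generate(z):
    """Toy linear generator G(z); a real pretrained GAN would replace this."""
    return W @ z

def invert(x_target, steps=300, lr=0.003):
    """Gradient descent on the latent code z to minimize ||G(z) - x_target||^2."""
    z = rng.normal(size=latent_dim)            # random initialization of the latent code
    for _ in range(steps):
        residual = generate(z) - x_target      # error in image space
        z -= lr * (2.0 * W.T @ residual)       # analytic gradient of the squared loss
    return z

# Invert an image that the toy generator can represent exactly.
z_true = rng.normal(size=latent_dim)
x = generate(z_true)
z_hat = invert(x)
print("reconstruction error:", float(np.linalg.norm(generate(z_hat) - x)))
```

With a real generator the analytic gradient would be replaced by automatic differentiation, and the reconstruction loss is usually combined with a perceptual term; the structure of the search over z is the part this sketch is meant to show.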
1907.07171 | 2959108703 | An open secret in contemporary machine learning is that many models work beautifully on standard benchmarks but fail to generalize outside the lab. This has been attributed to training on biased data, which provide poor coverage over real world events. Generative models are no exception, but recent advances in generative adversarial networks (GANs) suggest otherwise -- these models can now synthesize strikingly realistic and diverse images. Is generative modeling of photos a solved problem? We show that although current GANs can fit standard datasets very well, they still fall short of being comprehensive models of the visual manifold. In particular, we study their ability to fit simple transformations such as camera movements and color changes. We find that the models reflect the biases of the datasets on which they are trained (e.g., centered objects), but that they also exhibit some capacity for generalization: by "steering" in latent space, we can shift the distribution while still creating realistic images. We hypothesize that the degree of distributional shift is related to the breadth of the training data distribution, and conduct experiments that demonstrate this. Code is released on our project page: this https URL | We note a few concurrent papers that also explore trajectories in GAN latent space. @cite_23 learns linear walks in the latent space that correspond to various facial characteristics; they use these walks to measure biases in facial attribute detectors, whereas we study biases in the generative model that originate from training data. @cite_25 also assumes linear latent space trajectories and learns paths for face attribute editing according to semantic concepts such as age and expression, thus demonstrating disentanglement properties of the latent space. @cite_2 applies a linear walk to learn and edit features that pertain to cognitive properties of an image, such as memorability, aesthetics, and emotional valence. Unlike these works, we do not require an attribute detector or assessor function to learn the latent space trajectory; instead, our loss function is based on image similarity between source and target images. In addition to linear walks, we explore using non-linear walks to achieve camera motion and color transformations. | {
"cite_N": [
"@cite_25",
"@cite_23",
"@cite_2"
],
"mid": [
"2963577681",
"2604433135",
"2963426391"
],
"abstract": [
"Despite the recent advance of Generative Adversarial Networks (GANs) in high-fidelity image synthesis, there lacks enough understandings on how GANs are able to map the latent code sampled from a random distribution to a photo-realistic image. Previous work assumes the latent space learned by GAN follows a distributed representation but observes the vector arithmetic phenomenon of the output's semantics in latent space. In this work, we interpret the semantics hidden in the latent space of well-trained GANs. We find that the latent code for well-trained generative models, such as ProgressiveGAN and StyleGAN, actually learns a disentangled representation after some linear transformations. We make a rigorous analysis on the encoding of various semantics in the latent space as well as their properties, and then study how these semantics are correlated to each other. Based on our analysis, we propose a simple and general technique, called InterFaceGAN, for semantic face editing in latent space. Given a synthesized face, we are able to faithfully edit its various attributes such as pose, expression, age, presence of eyeglasses, without retraining the GAN model. Furthermore, we show that even the artifacts occurred in output images are able to be fixed using same approach. Extensive results suggest that learning to synthesize faces spontaneously brings a disentangled and controllable facial attribute representation",
"We present variational generative adversarial networks, a general learning framework that combines a variational auto-encoder with a generative adversarial network, for synthesizing images in fine-grained categories, such as faces of a specific person or objects in a category. Our approach models an image as a composition of label and latent attributes in a probabilistic model. By varying the fine-grained category label fed into the resulting generative model, we can generate images in a specific category with randomly drawn values on a latent attribute vector. Our approach has two novel aspects. First, we adopt a cross entropy loss for the discriminative and classifier network, but a mean discrepancy objective for the generative network. This kind of asymmetric loss function makes the GAN training more stable. Second, we adopt an encoder network to learn the relationship between the latent space and the real image space, and use pairwise feature matching to keep the structure of generated images. We experiment with natural images of faces, flowers, and birds, and demonstrate that the proposed models are capable of generating realistic and diverse samples with fine-grained category labels. We further show that our models can be applied to other tasks, such as image inpainting, super-resolution, and data augmentation for training better face recognition models.",
"We present variational generative adversarial networks, a general learning framework that combines a variational auto-encoder with a generative adversarial network, for synthesizing images in fine-grained categories, such as faces of a specific person or objects in a category. Our approach models an image as a composition of label and latent attributes in a probabilistic model. By varying the fine-grained category label fed into the resulting generative model, we can generate images in a specific category with randomly drawn values on a latent attribute vector. Our approach has two novel aspects. First, we adopt a cross entropy loss for the discriminative and classifier network, but a mean discrepancy objective for the generative network. This kind of asymmetric loss function makes the GAN training more stable. Second, we adopt an encoder network to learn the relationship between the latent space and the real image space, and use pairwise feature matching to keep the structure of generated images. We experiment with natural images of faces, flowers, and birds, and demonstrate that the proposed models are capable of generating realistic and diverse samples with fine-grained category labels. We further show that our models can be applied to other tasks, such as image inpainting, super-resolution, and data augmentation for training better face recognition models."
]
} |
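The related-work paragraph of the entry above contrasts linear latent-space walks learned with attribute detectors or assessors against walks learned purely from image similarity between a steered output and an edited source image. The sketch below illustrates that second recipe under simplifying assumptions (a toy linear generator, a brightness shift as the edit, and a least-squares fit instead of the paper's optimization); it is not the released code of any of the cited works.

```python
# Toy illustration of learning a linear latent-space walk from image similarity
# (assumptions, not the cited papers' code): fit a direction w so that G(z + w) - G(z)
# matches a simple pixel-space edit of G(z), here a global brightness shift.
import numpy as np

rng = np.random.default_rng(1)
latent_dim, image_dim = 16, 256
W = rng.normal(size=(image_dim, latent_dim)) / np.sqrt(latent_dim)   # toy generator weights

def generate(z):
    return W @ z                     # stand-in for G(z)

def edit(x, amount=1.0):
    return x + amount                # target pixel-space transform: brighten every "pixel"

# Desired change in image space, averaged over random latents, then a least-squares fit
# of the walk direction:  G(z + w) - G(z) = W @ w  ~  edit(G(z)) - G(z).
Z = rng.normal(size=(1000, latent_dim))
desired = np.mean([edit(generate(z)) - generate(z) for z in Z], axis=0)
w, *_ = np.linalg.lstsq(W, desired, rcond=None)

# Steering: walking further along w applies "more" of the edit (only approximately here,
# because the toy linear generator cannot represent a constant shift exactly).
z = rng.normal(size=latent_dim)
for alpha in (0.0, 1.0, 2.0):
    print(f"alpha={alpha:.1f}  mean output value={generate(z + alpha * w).mean():+.4f}")
```

Because the toy generator is linear, the fitted direction can only realize the component of the edit that lies in the generator's output span; the point of the sketch is the structure of the objective, not the quality of the edit.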
1907.07349 | 2960672606 | Edge computing in the Internet of Things brings applications and content closer to the users by introducing an additional computational layer at the network infrastructure, between cloud and the resource-constrained data producing devices and user equipment. This way, the opportunistic nature of the operational environment is addressed by introducing computational power in locations with low latency and high bandwidth. However, location-aware deployment of edge computing infrastructure requires a careful placement scheme for edge servers. To provide the best possible Quality of Service for the user applications, their proximity needs to be optimized. Moreover, the deployment faces practical constraints in budget, hardware requirements of servers and online load balancing between servers. To address these challenges, we formulate the edge server placement as a capacitated location-allocation problem, while minimizing the distance between servers and access points of a real city-wide Wi-Fi network deployment. In our algorithm, we utilize both upper and lower server capacity constraints for load balancing. Furthermore, we enable sharing of workload between servers to facilitate deployments with low capacity servers. The performance of the algorithm is demonstrated in placement scenarios, exemplified by high capacity servers for edge computing and low capacity servers for Fog computing, with different parameters in a real-world data set. The data set includes both dense deployment of access points in central areas and sparse deployment in suburban areas within the same network infrastructure. In comparison, we show that previous approaches do not sufficiently address such deployment. The presented algorithm is able to provide optimal placements that minimize the distances and provide balanced workload with sharing by following the capacity constraints. | Previous studies on edge server placement have focused on algorithms for clustering access points with the aim of finding candidate locations for the servers as cluster heads. In these works, clustering was based on k-means @cite_24 , k-means with mixed-integer quadratic programming @cite_41 , graph theory, as in the minimum dominating set problem @cite_15 , hierarchical tree-like structures @cite_1 @cite_14 , multi-objective constraint optimization @cite_11 and mixed integer linear programming @cite_30 , and DBSCAN clustering combined with optimization based on a facility location problem @cite_3 . A heuristic decision-support management system for server placement was presented in @cite_16 . For clustering, different sets of parameters were considered, such as individual server capacity, the number of servers, geo-locations of servers, minimal latencies and maximized traffic inside the clusters. Typically, co-location is considered, where the servers are placed next to access points in a geographical area. | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_41",
"@cite_1",
"@cite_3",
"@cite_24",
"@cite_15",
"@cite_16",
"@cite_11"
],
"mid": [
"2894009982",
"2887279295",
"2906068868",
"2809740924",
"2281709771",
"2003207175",
"2964083987",
"2098653858",
"1527264199"
],
"abstract": [
"Edge server placement problem is a hot topic in mobile edge computing. In this paper, we study the problem of energy-aware edge server placement and try to find a more effective placement scheme with low energy consumption. Then, we formulate the problem as a multi-objective optimization problem and devise a particle swarm optimization based energy-aware edge server placement algorithm to find the optimal solution. We evaluate the algorithm based on the real dataset from Shanghai Telecom and the results show our algorithm can reduce more than 10 energy consumption with over 15 improvement in computing resource utilization, compared to other algorithms.",
"Edge computing provides an attractive platform for bringing data and processing closer to users in networked environments. Several edge proposals aim to place the edge servers at a couple hop distance from the client to ensure lowest possible compute and network delay. An attractive edge server placement is to co-locate it with existing (cellular) base stations to avoid additional infrastructure establishment costs. However, determining the exact locations for edge servers is an important question that must be resolved for optimal placement. In this paper, we present Anveshak1, a framework that solves the problem of placing edge servers in a geographical topology and provides the optimal solution for edge providers. Our proposed solution considers both end-user application requirements as well as deployment and operating costs incurred by edge platform providers. The placement optimization metric of Anveshak considers the request pattern of users and existing user-established edge servers. In our evaluation based on real datasets, we show that Anveshak achieves 67 increase in user satisfaction while maintaining high server utilization.",
"Remote clouds are gradually unable to achieve ultra-low latency to meet the requirements of mobile users because of the intolerable long distance between remote clouds and mobile users and the network congestion caused by the tremendous number of users. Mobile edge computing, a new paradigm, has been proposed to mitigate aforementioned effects. Existing studies mostly assume the edge servers have been deployed properly and they just pay attention to how to minimize the delay between edge servers and mobile users. In this paper, considering the practical environment, we investigate how to deploy edge servers effectively and economically in wireless metropolitan area networks. Thus, we address the problem of minimizing the number of edge servers while ensuring some QoS requirements. Aiming at more consistence with a generalized condition, we extend the definition of the dominating set, and transform the addressed problem into the minimum dominating set problem in graph theory. In addition, two conditions are considered for the capacities of edge servers: one is that the capacities of edge servers can be configured on demand, and the other is that all the edge servers have the same capacities. For the on-demand condition, a greedy based algorithm is proposed to find the solution, and the key idea is to iteratively choose nodes that can connect as many other nodes as possible under the delay, degree and cluster size constraints. Furthermore, a simulated annealing based approach is given for global optimization. For the second condition, a greedy based algorithm is also proposed to satisfy the capacity constraint of edge servers and minimize the number of edge servers simultaneously. The simulation results show that the proposed algorithms are feasible.",
"Abstract With the rapid increase in the development of the Internet of Things and 5G networks in the smart city context, a large amount of data (i.e., big data) is expected to be generated, resulting in increased latency for the traditional cloud computing paradigm. To reduce the latency, mobile edge computing has been considered for offloading a part of the workload from mobile devices to nearby edge servers that have sufficient computation resources. Although there has been significant research in the field of mobile edge computing, little attention has been given to understanding the placement of edge servers in smart cities to optimize the mobile edge computing network performance. In this paper, we study the edge server placement problem in mobile edge computing environments for smart cities. First, we formulate the problem as a multi-objective constraint optimization problem that places edge servers in some strategic locations with the objective to make balance the workloads of edge servers and minimize the access delay between the mobile user and edge server. Then, we adopt mixed integer programming to find the optimal solution. Experimental results based on Shanghai Telecom’s base station dataset show that our approach outperforms several representative approaches in terms of access delay and workload balancing.",
"In this work we investigate optimal geographical caching in heterogeneous cellular networks where different types of base stations (BSs) have different cache capacities. Users request files from a content library according to a known probability distribution. The performance metric is the total hit probability, which is the probability that a user at an arbitrary location in the plane will find the content that it requires in one of the BSs that it is covered by. We consider the problem of optimally placing content in all BSs jointly. As this problem is not convex, we provide a heuristic scheme by finding the optimal placement policy for one type of base station conditioned on the placement in all other types. We demonstrate that these individual optimization problems are convex and we provide an analytical solution. As an illustration, we find the optimal placement policy of the small base stations (SBSs) depending on the placement policy of the macro base stations (MBSs). We show how the hit probability evolves as the deployment density of the SBSs varies. We show that the heuristic of placing the most popular content in the MBSs is almost optimal after deploying the SBSs with optimal placement policies. Also, for the SBSs no such heuristic can be used; the optimal placement is significantly better than storing the most popular content. Finally, we show that solving the individual problems to find the optimal placement policies for different types of BSs iteratively, namely repeatedly updating the placement policies, does not improve the performance.",
"We analyze local search heuristics for the metric k-median and facility location problems. We define the locality gap of a local search procedure for a minimization problem as the maximum ratio of a locally optimum solution (obtained using this procedure) to the global optimum. For k-median, we show that local search with swaps has a locality gap of 5. Furthermore, if we permit up to p facilities to be swapped simultaneously, then the locality gap is 3+2 p. This is the first analysis of a local search for k-median that provides a bounded performance guarantee with only k medians. This also improves the previous known 4 approximation for this problem. For uncapacitated facility location, we show that local search, which permits adding, dropping, and swapping a facility, has a locality gap of 3. This improves the bound of 5 given by M. Korupolu, C. Plaxton, and R. Rajaraman [Analysis of a Local Search Heuristic for Facility Location Problems, Technical Report 98-30, DIMACS, 1998]. We also consider a capacitated facility location problem where each facility has a capacity and we are allowed to open multiple copies of a facility. For this problem we introduce a new local search operation which opens one or more copies of a facility and drops zero or more facilities. We prove that this local search has a locality gap between 3 and 4.",
"The classical center based clustering problems such as k-means median center assume that the optimal clusters satisfy the locality property that the points in the same cluster are close to each other. A number of clustering problems arise in machine learning where the optimal clusters do not follow such a locality property. For instance, consider the r -gather clustering problem where there is an additional constraint that each of the clusters should have at least r points or the capacitated clustering problem where there is an upper bound on the cluster sizes. Consider a variant of the k-means problem that may be regarded as a general version of such problems. Here, the optimal clusters O 1, ..., O k are an arbitrary partition of the dataset and the goal is to output k-centers c 1, ..., c k such that the objective function ( _ i = 1 ^ k _ x O_ i ||x - c_ i ||^ 2 ) is minimized. It is not difficult to argue that any algorithm (without knowing the optimal clusters) that outputs a single set of k centers, will not behave well as far as optimizing the above objective function is concerned. However, this does not rule out the existence of algorithms that output a list of such k centers such that at least one of these k centers behaves well. Given an error parameter e > 0, let l denote the size of the smallest list of k-centers such that at least one of the k-centers gives a (1 + e) approximation w.r.t. the objective function above. In this paper, we show an upper bound on l by giving a randomized algorithm that outputs a list of (2^ O (k ) ) k-centers. We also give a closely matching lower bound of (2^ (k ) ). Moreover, our algorithm runs in time (O (n d 2^ O (k ) ) ). This is a significant improvement over the previous result of Ding and Xu (2015) who gave an algorithm with running time O(n d ⋅ (log n) k ⋅ 2 p o l y(k e)) and output a list of size O((log n) k ⋅ 2 p o l y(k e)). Our techniques generalize for the k-median problem and for many other settings where non-Euclidean distance measures are involved.",
"In this paper, we address the problem of efficient cache placement in multi-hop wireless networks. We consider a network comprising a server with an interface to the wired network, and other nodes requiring access to the information stored at the server. In order to reduce access latency in such a communication environment, an effective strategy is caching the server information at some of the nodes distributed across the network. Caching, however, can imply a considerable overhead cost; for instance, disseminating information incurs additional energy as well as bandwidth burden. Since wireless systems are plagued by scarcity of available energy and bandwidth, we need to design caching strategies that optimally trade-off between overhead cost and access latency. We pose our problem as an integer linear program. We show that this problem is the same as a special case of the connected facility location problem, which is known to be NP-hard. We devise a polynomial time algorithm which provides a suboptimal solution. The proposed algorithm applies to any arbitrary network topology and can be implemented in a distributed and asynchronous manner. In the case of a tree topology, our algorithm gives the optimal solution. In the case of an arbitrary topology, it finds a feasible solution with an objective function value within a factor of 6 of the optimal value. This performance is very close to the best approximate solution known today, which is obtained in a centralized manner. We compare the performance of our algorithm against three candidate cache placement schemes, and show via extensive simulation that our algorithm consistently outperforms these alternative schemes.",
"Workload placement on servers has been traditionally driven by mainly performance objectives. In this work, we investigate the design, implementation, and evaluation of a power-aware application placement controller in the context of an environment with heterogeneous virtualized server clusters. The placement component of the application management middleware takes into account the power and migration costs in addition to the performance benefit while placing the application containers on the physical servers. The contribution of this work is two-fold: first, we present multiple ways to capture the cost-aware application placement problem that may be applied to various settings. For each formulation, we provide details on the kind of information required to solve the problems, the model assumptions, and the practicality of the assumptions on real servers. In the second part of our study, we present the pMapper architecture and placement algorithms to solve one practical formulation of the problem: minimizing power subject to a fixed performance requirement. We present comprehensive theoretical and experimental evidence to establish the efficacy of pMapper."
]
} |
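The related-work survey in the entry above repeatedly mentions clustering access points (for example with k-means) and using the cluster heads as candidate edge-server locations. A minimal sketch of that baseline, with synthetic coordinates and workloads standing in for the real Wi-Fi data set, is:

```python
# Toy illustration of the clustering-based placement baseline (synthetic data, not any
# cited paper's implementation): cluster access points with plain k-means and treat the
# cluster centroids as candidate edge-server sites.
import numpy as np

rng = np.random.default_rng(42)
aps = rng.uniform(0, 10, size=(200, 2))        # access-point coordinates in km (synthetic)
load = rng.integers(1, 50, size=200)           # connections per AP (synthetic workload)

def kmeans(points, k, iters=50):
    """Basic Lloyd's algorithm; returns centroids and the assignment of each point."""
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for j in range(k):
            members = points[assign == j]
            if len(members):                   # keep the old centroid if a cluster empties
                centroids[j] = members.mean(axis=0)
    return centroids, assign

servers, assign = kmeans(aps, k=8)
for j, c in enumerate(servers):
    members = assign == j
    print(f"server {j}: site=({c[0]:.2f}, {c[1]:.2f})  APs={members.sum():3d}  "
          f"load={load[members].sum():4d}")
```

Plain k-means has no notion of server capacity, which is exactly the gap the capacitated formulations discussed in these entries aim to close; the printed per-cluster loads make the resulting imbalance visible.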
1907.07349 | 2960672606 | Edge computing in the Internet of Things brings applications and content closer to the users by introducing an additional computational layer at the network infrastructure, between cloud and the resource-constrained data producing devices and user equipment. This way, the opportunistic nature of the operational environment is addressed by introducing computational power in locations with low latency and high bandwidth. However, location-aware deployment of edge computing infrastructure requires a careful placement scheme for edge servers. To provide the best possible Quality of Service for the user applications, their proximity needs to be optimized. Moreover, the deployment faces practical constraints in budget, hardware requirements of servers and online load balancing between servers. To address these challenges, we formulate the edge server placement as a capacitated location-allocation problem, while minimizing the distance between servers and access points of a real city-wide Wi-Fi network deployment. In our algorithm, we utilize both upper and lower server capacity constraints for load balancing. Furthermore, we enable sharing of workload between servers to facilitate deployments with low capacity servers. The performance of the algorithm is demonstrated in placement scenarios, exemplified by high capacity servers for edge computing and low capacity servers for Fog computing, with different parameters in a real-world data set. The data set includes both dense deployment of access points in central areas and sparse deployment in suburban areas within the same network infrastructure. In comparison, we show that previous approaches do not sufficiently address such deployment. The presented algorithm is able to provide optimal placements that minimize the distances and provide balanced workload with sharing by following the capacity constraints. | The computing capacities of edge servers were assumed equal and fixed, except in @cite_24 @cite_15 , which allowed scaling of the server capacity on-demand to distribute workload evenly, regardless of the resulting cluster size. In @cite_3 , no strict capacity limits were set for servers, but excessive workload can be offloaded to the cloud. The studies mainly focused on average workload, which can be utilized as a measure to maintain a constant QoS at all times. @cite_16 focused on worst-case workload by utilizing the maximum number of users found in the historical data. Measures to simulate the workload at different granularities were also utilized, e.g. the number of connections to the access points, total session length or total connection time, and the length of phone calls. | {
"cite_N": [
"@cite_24",
"@cite_15",
"@cite_16",
"@cite_3"
],
"mid": [
"2906068868",
"2111556044",
"2809740924",
"2132319142"
],
"abstract": [
"Remote clouds are gradually unable to achieve ultra-low latency to meet the requirements of mobile users because of the intolerable long distance between remote clouds and mobile users and the network congestion caused by the tremendous number of users. Mobile edge computing, a new paradigm, has been proposed to mitigate aforementioned effects. Existing studies mostly assume the edge servers have been deployed properly and they just pay attention to how to minimize the delay between edge servers and mobile users. In this paper, considering the practical environment, we investigate how to deploy edge servers effectively and economically in wireless metropolitan area networks. Thus, we address the problem of minimizing the number of edge servers while ensuring some QoS requirements. Aiming at more consistence with a generalized condition, we extend the definition of the dominating set, and transform the addressed problem into the minimum dominating set problem in graph theory. In addition, two conditions are considered for the capacities of edge servers: one is that the capacities of edge servers can be configured on demand, and the other is that all the edge servers have the same capacities. For the on-demand condition, a greedy based algorithm is proposed to find the solution, and the key idea is to iteratively choose nodes that can connect as many other nodes as possible under the delay, degree and cluster size constraints. Furthermore, a simulated annealing based approach is given for global optimization. For the second condition, a greedy based algorithm is also proposed to satisfy the capacity constraint of edge servers and minimize the number of edge servers simultaneously. The simulation results show that the proposed algorithms are feasible.",
"The advent of cloud computing promises highly available, efficient, and flexible computing services for applications such as web search, email, voice over IP, and web search alerts. Our experience at Google is that realizing the promises of cloud computing requires an extremely scalable backend consisting of many large compute clusters that are shared by application tasks with diverse service level requirements for throughput, latency, and jitter. These considerations impact (a) capacity planning to determine which machine resources must grow and by how much and (b) task scheduling to achieve high machine utilization and to meet service level objectives. Both capacity planning and task scheduling require a good understanding of task resource consumption (e.g., CPU and memory usage). This in turn demands simple and accurate approaches to workload classification-determining how to form groups of tasks (workloads) with similar resource demands. One approach to workload classification is to make each task its own workload. However, this approach scales poorly since tens of thousands of tasks execute daily on Google compute clusters. Another approach to workload classification is to view all tasks as belonging to a single workload. Unfortunately, applying such a coarse-grain workload classification to the diversity of tasks running on Google compute clusters results in large variances in predicted resource consumptions. This paper describes an approach to workload classification and its application to the Google Cloud Backend, arguably the largest cloud backend on the planet. Our methodology for workload classification consists of: (1) identifying the workload dimensions; (2) constructing task classes using an off-the-shelf algorithm such as k-means; (3) determining the break points for qualitative coordinates within the workload dimensions; and (4) merging adjacent task classes to reduce the number of workloads. We use the foregoing, especially the notion of qualitative coordinates, to glean several insights about the Google Cloud Backend: (a) the duration of task executions is bimodal in that tasks either have a short duration or a long duration; (b) most tasks have short durations; and (c) most resources are consumed by a few tasks with long duration that have large demands for CPU and memory.",
"Abstract With the rapid increase in the development of the Internet of Things and 5G networks in the smart city context, a large amount of data (i.e., big data) is expected to be generated, resulting in increased latency for the traditional cloud computing paradigm. To reduce the latency, mobile edge computing has been considered for offloading a part of the workload from mobile devices to nearby edge servers that have sufficient computation resources. Although there has been significant research in the field of mobile edge computing, little attention has been given to understanding the placement of edge servers in smart cities to optimize the mobile edge computing network performance. In this paper, we study the edge server placement problem in mobile edge computing environments for smart cities. First, we formulate the problem as a multi-objective constraint optimization problem that places edge servers in some strategic locations with the objective to make balance the workloads of edge servers and minimize the access delay between the mobile user and edge server. Then, we adopt mixed integer programming to find the optimal solution. Experimental results based on Shanghai Telecom’s base station dataset show that our approach outperforms several representative approaches in terms of access delay and workload balancing.",
"Workload variations on Internet platforms such as YouTube, Flickr, LastFM require novel approaches to dynamic resource provisioning in order to meet QoS requirements, while reducing the Total Cost of Ownership (TCO) of the infrastructures. The economy of scale promise of cloud computing is a great opportunity to approach this problem, by developing elastic large scale server infrastructures. However, a proactive approach to dynamic resource provisioning requires prediction models forecasting future load patterns. On the other hand, unexpected volume and data spikes require reactive provisioning for serving unexpected surges in workloads. When workload can not be predicted, adequate data grouping and placement algorithms may facilitate agile scaling up and down of an infrastructure. In this paper, we analyze a dynamic workload of an on-line music portal and present an elastic Web infrastructure that adapts to workload variations by dynamically scaling up and down servers. The workload is predicted by an autoregressive model capturing trends and seasonal patterns. Further, for enhancing data locality, we propose a predictive data grouping based on the history of content access of a user community. Finally, in order to facilitate agile elasticity, we present a data placement based on workload and access pattern prediction. The experimental results demonstrate that our forecasting model predicts workload with a high precision. Further, the predictive data grouping and placement methods provide high locality, load balance and high utilization of resources, allowing a server infrastructure to scale up and down depending on workload."
]
} |
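The entry above discusses per-server capacity assumptions and workload measures (average versus worst-case load, and offloading excess work to the cloud when only an upper bound is enforced). The sketch below uses synthetic loads and capacity bounds chosen purely for illustration and shows how a given AP-to-server assignment can be checked against upper and lower capacity constraints:

```python
# Toy illustration (synthetic numbers, not from the cited studies): check an AP-to-server
# assignment against upper and lower capacity bounds, and report how much excess workload
# would have to be offloaded to the cloud.
import numpy as np

rng = np.random.default_rng(7)
n_aps, n_servers = 60, 4
ap_load = rng.integers(5, 40, size=n_aps)        # e.g. connections per AP (synthetic)
assign = rng.integers(0, n_servers, size=n_aps)  # some given AP -> server assignment

upper_cap, lower_cap = 450, 150                  # per-server capacity bounds (assumption)

server_load = np.bincount(assign, weights=ap_load, minlength=n_servers)
excess = np.clip(server_load - upper_cap, 0, None)   # workload that would go to the cloud

for s in range(n_servers):
    notes = []
    if server_load[s] > upper_cap:
        notes.append(f"over capacity, offload {excess[s]:.0f} to cloud")
    if server_load[s] < lower_cap:
        notes.append("under-utilized (violates the lower bound)")
    print(f"server {s}: load={server_load[s]:.0f}  " + ("; ".join(notes) or "within bounds"))
```

The same bookkeeping applies whether the per-AP load is an average measure (connections, session length) or a worst-case measure such as the historical maximum number of users.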
1907.07349 | 2960672606 | Edge computing in the Internet of Things brings applications and content closer to the users by introducing an additional computational layer at the network infrastructure, between cloud and the resource-constrained data producing devices and user equipment. This way, the opportunistic nature of the operational environment is addressed by introducing computational power in locations with low latency and high bandwidth. However, location-aware deployment of edge computing infrastructure requires a careful placement scheme for edge servers. To provide the best possible Quality of Service for the user applications, their proximity needs to be optimized. Moreover, the deployment faces practical constraints in budget, hardware requirements of servers and online load balancing between servers. To address these challenges, we formulate the edge server placement as a capacitated location-allocation problem, while minimizing the distance between servers and access points of a real city-wide Wi-Fi network deployment. In our algorithm, we utilize both upper and lower server capacity constraints for load balancing. Furthermore, we enable sharing of workload between servers to facilitate deployments with low capacity servers. The performance of the algorithm is demonstrated in placement scenarios, exemplified by high capacity servers for edge computing and low capacity servers for Fog computing, with different parameters in a real-world data set. The data set includes both dense deployment of access points in central areas and sparse deployment in suburban areas within the same network infrastructure. In comparison, we show that previous approaches do not sufficiently address such deployment. The presented algorithm is able to provide optimal placements that minimize the distances and provide balanced workload with sharing by following the capacity constraints. | To maintain sufficient QoS within the budget limitations, two main approaches were used for determining the required number of servers. First, a tolerated distance from a server was decided and the number of servers was minimized given that the distance constraint is met for each access point @cite_15 @cite_30 @cite_2 . Second, the number of servers was based on the budget and the servers were placed so that the best proximity is obtained @cite_24 @cite_49 @cite_20 @cite_14 @cite_11 . A third approach was to minimize the distance while penalizing the number of servers @cite_16 . | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_24",
"@cite_49",
"@cite_2",
"@cite_15",
"@cite_16",
"@cite_20",
"@cite_11"
],
"mid": [
"1558598144",
"2129486506",
"2063545374",
"2167264214",
"2165598306",
"2906068868",
"2031862949",
"2738547443",
"2021033250"
],
"abstract": [
"In a Content Distribution Network application, we have a set of servers and a set of clients to be connected to the servers. Often there are a few server types and a hard budget constraint on the number of deployed servers of each type. The simplest goal here is to deploy a set of servers subject to these budget constraints in order to minimize the sum of client connection costs. These connection costs often satisfy metricity, since they are typically proportional to the distance between a client and a server within a single autonomous system. A special case of the problem where there is only one server type is the well-studied k-median problem. In this paper, we consider the problem with two server types and call it the budgeted red-blue median problem. We show, somewhat surprisingly, that running a single-swap local search for each server type simultaneously, yields a constant factor approximation for this case. Its analysis is however quite non-trivial compared to that of the k-median problem (, 2004; Gupta and Tangwongsan, 2008). Later we show that the same algorithm yields a constant approximation for the prize-collecting version of the budgeted red-blue median problem where each client can potentially be served with an alternative cost via a different vendor. In the process, we also improve the approximation factor for the prize-collecting k-median problem from 4 (, 2001) to 3+e, which matches the current best approximation factor for the k-median problem.",
"In a Content Distribution Network (CDN), there are m servers storing the data; each of them has a specific bandwidth. All the requests from a particular client should be assigned to one server because of the routing protocol used. The goal is to minimize the total cost of these assignments—cost of each is proportional to the distance between the client and the server as well as the request size—while the load on each server is kept below its bandwidth limit. When each server also has a setup cost, this is an unsplittable hard-capacitated facility location problem. As much attention as facility location problems have received, there has been no nontrivial approximation algorithm when we have hard capacities (i.e., there can only be one copy of each facility whose capacity cannot be violated) and demands are unsplittable (i.e., all the demand from a client has to be assigned to a single facility). We observe it is NP-hard to approximate the cost to within any bounded factor in this case. Thus, for an arbitrary constant e>0, we relax the capacities to a 1+e factor. For the case where capacities are almost uniform, we give a bicriteria O(log n, 1+e)-approximation algorithm for general metrics and a (1+e, 1+e)-approximation algorithm for tree metrics. A bicriteria (α,β)-approximation algorithm produces a solution of cost at most α times the optimum, while violating the capacities by no more than a β factor. We can get the same guarantees for nonuniform capacities if we allow quasipolynomial running time. In our algorithm, some clients guess the facility they are assigned to, and facilities decide the size of the clients they serve. A straightforward approach results in exponential running time. When costs do not satisfy metricity, we show that a 1.5 violation of capacities is necessary to obtain any approximation. It is worth noting that our results generalize bin packing (zero connection costs and facility costs equal to one), knapsack (single facility with all costs being zero), minimum makespan scheduling for related machines (all connection costs being zero), and some facility location problems.",
"Abstract We consider the problem of placing a specified number ( p ) of facilities on the nodes of a network so as to minimize some measure of the distances between facilities. This type of problem models a number of problems arising in facility location, statistical clustering, pattern recognition, and processor allocation problems in multiprocessor systems. We consider the problem under three different objectives, namely minimizing the diameter, minimizing the average distance, and minimizing the variance. We observe that, in general, the problem is NP -hard under any of the objectives. Further, even obtaining a constant factor approximation for any of the objectives is NP -hard. We present a general framework for obtaining near-optimal solutions to the compact location problems for the above measures, when the distances satisfy the triangle inequality. We show that this framework can be extended to the case when there are also node weights. Further, we investigate the complexity and approximability of more general versions of these problems, where two distance values are specified for each pair of potential sites. In these cases, the goal is to a select a specified number of facilities to minimize a function of one distance metric subject to a budget constraint on the other distance metric. We present algorithms that provide solutions which are within a small constant factor of the objective value while violating the budget constraint by only a small constant factor.",
"Wireless networks are increasingly used to carry applications with QoS constraints. Two problems arise when dealing with traffic with QoS constraints. One is admission control, which consists of determining whether it is possible to fulfill the demands of a set of clients. The other is finding an optimal scheduling policy to meet the demands of all clients. In this paper, we propose a framework for jointly addressing three QoS criteria: delay, delivery ratio, and channel reliability. We analytically prove the necessary and sufficient condition for a set of clients to be feasible with respect to the above three criteria. We then establish an efficient algorithm for admission control to decide whether a set of clients is feasible. We further propose two scheduling policies and prove that they are feasibility optimal in the sense that they can meet the demands of every feasible set of clients. In addition, we show that these policies are easily implementable on the IEEE 802.11 mechanisms. We also present the results of simulation studies that appear to confirm the theoretical studies and suggest that the proposed policies outperform others tested under a variety of settings.",
"How to keep the probability of hand-off drops within a prespecified limit is a very important quality-of-service (QoS) issue in cellular networks because mobile users should be able to maintain ongoing sessions even during their hand-off from one cell to another. We design and evaluate predictive and adaptive schemes for bandwidth reservation for the hand-offs of ongoing sessions and the admission control of new connections. We first develop a method to estimate user mobility based on an aggregate history of hand-offs observed in each cell. This method is then used to probabilistically predict mobiles' directions and hand-off times in a cell. For each cell, the bandwidth to be reserved for hand-offs is calculated by estimating the total sum of tractional bandwidths of the expected hand-offs within a mobility-estimation time window. Three different admission-control schemes for new connection requests using this bandwidth reservation are proposed. We also consider variations that utilize the path location information available from the car navigation system or global positioning system. Finally, we evaluate the performance of the proposed schemes extensively to show that they meet our design goal and outperform the static reservation scheme under various scenarios.",
"Remote clouds are gradually unable to achieve ultra-low latency to meet the requirements of mobile users because of the intolerable long distance between remote clouds and mobile users and the network congestion caused by the tremendous number of users. Mobile edge computing, a new paradigm, has been proposed to mitigate aforementioned effects. Existing studies mostly assume the edge servers have been deployed properly and they just pay attention to how to minimize the delay between edge servers and mobile users. In this paper, considering the practical environment, we investigate how to deploy edge servers effectively and economically in wireless metropolitan area networks. Thus, we address the problem of minimizing the number of edge servers while ensuring some QoS requirements. Aiming at more consistence with a generalized condition, we extend the definition of the dominating set, and transform the addressed problem into the minimum dominating set problem in graph theory. In addition, two conditions are considered for the capacities of edge servers: one is that the capacities of edge servers can be configured on demand, and the other is that all the edge servers have the same capacities. For the on-demand condition, a greedy based algorithm is proposed to find the solution, and the key idea is to iteratively choose nodes that can connect as many other nodes as possible under the delay, degree and cluster size constraints. Furthermore, a simulated annealing based approach is given for global optimization. For the second condition, a greedy based algorithm is also proposed to satisfy the capacity constraint of edge servers and minimize the number of edge servers simultaneously. The simulation results show that the proposed algorithms are feasible.",
"Replication of documents on geographically distributed servers can improve both performance and reliability of the Web service. Server selection algorithms allow Web clients to select one of the replicated servers which is \"close\" to them and thereby minimize the response time of the Web service. Using client proxy server traces, we compare the effectiveness of several \"proximity\" metrics including the number of hops between the client and server, the ping round trip time and the HTTP request latency. Based on this analysis, we design two new algorithms for selection of replicated servers and compare their performance against other existing algorithms. We show that the new server selection algorithms improve the performance of other existing algorithms on the average by 55 . In addition, the new algorithms improve the performance of the existing non-replicated Web servers on average by 69 .",
"Abstract Infrastructure as a Service (IaaS) cloud providers typically offer multiple service classes to satisfy users with different requirements and budgets. Cloud providers are faced with the challenge of estimating the minimum resource capacity required to meet Service Level Objectives (SLOs) defined for all service classes. This paper proposes a capacity planning method that is combined with an admission control mechanism to address this challenge. The capacity planning method uses analytical models to estimate the output of a quota-based admission control mechanism and find the minimum capacity required to meet availability SLOs and admission rate targets for all classes. An evaluation using trace-driven simulations shows that our method estimates the best cloud capacity with a mean relative error of 2.5 with respect to the simulation, compared to a 36 relative error achieved by a single-class baseline method that does not consider admission control mechanisms. Moreover, our method exhibited a high SLO fulfillment for both availability and admission rates, and obtained mean CPU utilization over 91 , while the single-class baseline method had values not greater than 78 .",
"This paper addresses the problem of finding the minimum number of vehicles required to visit a set of nodes subject to time window and capacity constraints. The fleet is homogeneous and is located at a common depot. Each node requires the same type of service. An exact method is introduced based on branch and cut. In the computations, ever increasing lower bounds on the optimal solution are obtained by solving a series of relaxed problems that incorporate newly found valid inequalities. Feasible solutions or upper bounds are obtained with the help of greedy randomized adaptive search procedure (GRASP). A wide variety of cuts is introduced to tighten the linear programming (LP) relaxation of the original mixed-integer program. To find violated cuts, it is necessary to solve a separation problem. A substantial portion of the paper is aimed at describing the heuristics developed for this purpose. A new approach for obtaining feasible solutions from the LP relaxation is also discussed. Numerical results for standard 50- and 100-node benchmark problems are reported."
]
} |
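The first strategy summarized in the entry above is to minimize the number of servers subject to a tolerated distance between every access point and its server. A common way to illustrate such coverage-constrained placement is a greedy set-cover heuristic; the sketch below uses synthetic coordinates and an assumed 1.5 km radius and is not the algorithm of any of the cited papers:

```python
# Toy illustration of "minimize the number of servers under a distance constraint"
# via greedy set cover (synthetic data and radius; not a cited algorithm): repeatedly
# open a server at the access point that covers the most still-uncovered APs.
import numpy as np

rng = np.random.default_rng(3)
aps = rng.uniform(0, 10, size=(150, 2))      # access-point coordinates in km (synthetic)
radius = 1.5                                 # tolerated AP-to-server distance (assumption)

dist = np.linalg.norm(aps[:, None, :] - aps[None, :, :], axis=2)
within = dist <= radius                      # within[i, j]: AP j is covered by a server at AP i

uncovered = np.ones(len(aps), dtype=bool)
servers = []
while uncovered.any():
    gain = (within & uncovered).sum(axis=1)  # uncovered APs each candidate site would cover
    best = int(gain.argmax())
    servers.append(best)
    uncovered &= ~within[best]               # mark the newly covered APs

print(f"{len(servers)} servers cover all {len(aps)} APs within {radius} km")
print("chosen server sites (AP indices):", servers)
```

The greedy rule (always open the site that covers the most still-uncovered access points) is the classic logarithmic-factor approximation for set cover, and it also adapts to the second, budget-driven strategy: stop after the budgeted number of servers and report the remaining uncovered access points.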
1907.07349 | 2960672606 | Edge computing in the Internet of Things brings applications and content closer to the users by introducing an additional computational layer at the network infrastructure, between cloud and the resource-constrained data producing devices and user equipment. This way, the opportunistic nature of the operational environment is addressed by introducing computational power in locations with low latency and high bandwidth. However, location-aware deployment of edge computing infrastructure requires a careful placement scheme for edge servers. To provide the best possible Quality of Service for the user applications, their proximity needs to be optimized. Moreover, the deployment faces practical constraints in budget, hardware requirements of servers and online load balancing between servers. To address these challenges, we formulate the edge server placement as a capacitated location-allocation problem, while minimizing the distance between servers and access points of a real city-wide Wi-Fi network deployment. In our algorithm, we utilize both upper and lower server capacity constraints for load balancing. Furthermore, we enable sharing of workload between servers to facilitate deployments with low capacity servers. The performance of the algorithm is demonstrated in placement scenarios, exemplified by high capacity servers for edge computing and low capacity servers for Fog computing, with different parameters in a real-world data set. The data set includes both dense deployment of access points in central areas and sparse deployment in suburban areas within the same network infrastructure. In comparison, we show that previous approaches do not sufficiently address such deployment. The presented algorithm is able to provide optimal placements that minimize the distances and provide balanced workload with sharing by following the capacity constraints. | Scalability was considered from the perspectives of algorithmic scalability and the resulting deployment capacity. First, the algorithmic scalability was exemplified by the number of access points and edge servers. A basic k-means algorithm was applied in @cite_24 without capacity constraints and hierarchical clustering in @cite_1 , both giving good scalability. In some works, scalability was guaranteed with a two-step approach. Data was partitioned into clusters without applying the capacity constraints, after which the servers were placed in each cluster separately @cite_3 @cite_16 . Similarly, the servers were first placed without considering the capacity constraints and then the access points were assigned to the servers @cite_41 . Such approaches save computation time, but consume memory as the allocation step is carried out for the whole data set at once. In the work of @cite_30 , a dense grid was set over a geographical area and the spatial extents of the servers were obtained by merging the grid cells based on user mobility. Here the computational time depends on the number of grid cells and not on the number of access points. Thus, the method scales well with the number of access points, but not with respect to the spatial size. | {
"cite_N": [
"@cite_30",
"@cite_41",
"@cite_1",
"@cite_3",
"@cite_24",
"@cite_16"
],
"mid": [
"2008464909",
"2122985136",
"1975618234",
"179407972",
"2133156997",
"2138583691"
],
"abstract": [
"Scalability and performance are key factors to the success of many enterprises involved in doing business on the web. Maintaining sufficient web resources just to meet performance during peak demands can be costly. Compute Cloud provides a powerful environment to allow dynamic scaling of web applications without the needs for user intervention. In this paper, we present a case study on the scalability and performance of web applications in a Cloud. We describe a novel dynamic scaling architecture with a front-end load-balancer for routing user requests to web applications deployed on virtual machine instances with the goal of maximizing resource utilization in instances while minimizing total number of instances. A scaling algorithm for automated provisioning of virtual resources based on threshold number of active user sessions will be introduced. The on-demand capability of the Cloud to rapidly provision and dynamically allocate resources to users will be discussed. Our work has demonstrated the compelling benefits of a Cloud which is capable of sustaining performance upon sudden load surges, delivering satisfactory IT resources on-demands to users, and maintaining high resource utilization, thus reducing infrastructure and management costs.",
"We consider a market-based resource allocation model for batch jobs in cloud computing clusters. In our model, we incorporate the importance of the due date of a job rather than the number of servers allocated to it at any given time. Each batch job is characterized by the work volume of total computing units (e.g., CPU hours) along with a bound on maximum degree of parallelism. Users specify, along with these job characteristics, their desired due date and a value for finishing the job by its deadline. Given this specification, the primary goal is to determine the scheduling of cloud computing instances under capacity constraints in order to maximize the social welfare (i.e., sum of values gained by allocated users). Our main result is a new ( C (C-k) ⋅ s (s-1))-approximation algorithm for this objective, where C denotes cloud capacity, k is the maximal bound on parallelized execution (in practical settings, k l C) and s is the slackness on the job completion time i.e., the minimal ratio between a specified deadline and the earliest finish time of a job. Our algorithm is based on utilizing dual fitting arguments over a strengthened linear program to the problem. Based on the new approximation algorithm, we construct truthful allocation and pricing mechanisms, in which reporting the job true value and properties (deadline, work volume and the parallelism bound) is a dominant strategy for all users. To that end, we provide a general framework for transforming allocation algorithms into truthful mechanisms in domains of single-value and multi-properties. We then show that the basic mechanism can be extended under proper Bayesian assumptions to the objective of maximizing revenues, which is important for public clouds. We empirically evaluate the benefits of our approach through simulations on data-center job traces, and show that the revenues obtained under our mechanism are comparable with an ideal fixed-price mechanism, which sets an on-demand price using oracle knowledge of users' valuations. Finally, we discuss how our model can be extended to accommodate uncertainties in job work volumes, which is a practical challenge in cloud settings.",
"We investigate a wireless system of multiple cells, each having a downlink shared channel in support of high-speed packet data services. In practice, such a system consists of hierarchically organized entities including a central server, Base Stations (BSs), and Mobile Stations (MSs). Our goal is to improve global resource utilization and reduce regional congestion given asymmetric arrivals and departures of mobile users, a goal requiring load balancing among multiple cells. For this purpose, we propose a scalable cross-layer framework to coordinate packet-level scheduling, call-level cell-site selection and handoff, and system-level cell coverage based on load, throughput, and channel measurements. In this framework, an opportunistic scheduling algorithm--the weighted Alpha-Rule--exploits the gain of multiuser diversity in each cell independently, trading aggregate (mean) down-link throughput for fairness and minimum rate guarantees among MSs. Each MS adapts to its channel dynamics and the load fluctuations in neighboring cells, in accordance with MSs' mobility or their arrival and departure, by initiating load-aware handoff and cell-site selection. The central server adjusts schedulers of all cells to coordinate their coverage by prompting cell breathing or distributed MS handoffs. Across the whole system, BSs and MSs constantly monitor their load, throughput, or channel quality in order to facilitate the overall system coordination. Our specific contributions in such a framework are highlighted by the minimum-rate guaranteed weighted Alpha-Rule scheduling, the load-aware MS handoff cell-site selection, and the Media Access Control (MAC)-layer cell breathing. Our evaluations show that the proposed framework can improve global resource utilization and load balancing, resulting in a smaller blocking rate of MS arrivals without extra resources while the aggregate throughput remains roughly the same or improved at the hot-spots. Our simulation tests also show that the coordinated system is robust to dynamic load fluctuations and is scalable to both the system dimension and the size of MS population.",
"We report on a novel technique called spatial coupling and its application in the analysis of random constraint satisfaction problems (CSP). Spatial coupling was invented as an engineering construction in the area of error correcting codes where it has resulted in efficient capacity-achieving codes for a wide range of channels. However, this technique is not limited to problems in communications, and can be applied in the much broader context of graphical models. We describe here a general methodology for applying spatial coupling to random constraint satisfaction problems and obtain lower bounds for their (rough) satisfiability threshold. The main idea is to construct a distribution of geometrically structured random K-SAT instances - namely the spatially coupled ensemble - which has the same (rough) satisfiability threshold, and is at the same time algorithmically easier to solve. Then by running well-known algorithms on the spatially coupled ensemble we obtain a lower bound on the (rough) satisfiability threshold of the original ensemble. The method is versatile because one can choose the CSP, there is a certain amount of freedom in the construction of the spatially coupled ensemble, and also in the choice of the algorithm. In this work we focus on random K-SAT but we have also checked that the method is successful for Coloring, NAE-SAT and XOR-SAT. We choose Unit Clause propagation for the algorithm which is analyzed over the spatially coupled instances. For K = 3, for instance, our lower bound is equal to 3.67 which is better than the current bounds in the literature. Similarly, for graph 3-colorability we get a bound of 2.22 which is also better than the current bounds in the literature.",
"Datacenter workloads demand high computational capabilities, flexibility, power efficiency, and low cost. It is challenging to improve all of these factors simultaneously. To advance datacenter capabilities beyond what commodity server designs can provide, we have designed and built a composable, reconfigurablefabric to accelerate portions of large-scale software services. Each instantiation of the fabric consists of a 6x8 2-D torus of high-end Stratix V FPGAs embedded into a half-rack of 48 machines. One FPGA is placed into each server, accessible through PCIe, and wired directly to other FPGAs with pairs of 10 Gb SAS cables In this paper, we describe a medium-scale deployment of this fabric on a bed of 1,632 servers, and measure its efficacy in accelerating the Bing web search engine. We describe the requirements and architecture of the system, detail the critical engineering challenges and solutions needed to make the system robust in the presence of failures, and measure the performance, power, and resilience of the system when ranking candidate documents. Under high load, the largescale reconfigurable fabric improves the ranking throughput of each server by a factor of 95 for a fixed latency distribution--- or, while maintaining equivalent throughput, reduces the tail latency by 29",
"SUMMARY Public Infrastructure as a Service (IaaS) clouds such as Amazon, GoGrid and Rackspace deliver computational resources by means of virtualisation technologies. These technologies allow multiple independent virtual machines to reside in apparent isolation on the same physical host. Dynamically scaling applications running on IaaS clouds can lead to varied and unpredictable results because of the performance interference effects associated with co-located virtual machines. Determining appropriate scaling policies in a dynamic non-stationary environment is non-trivial. One principle advantage exhibited by IaaS clouds over their traditional hosting counterparts is the ability to scale resources on-demand. However, a problem arises concerning resource allocation as to which resources should be added and removed when the underlying performance of the resource is in a constant state of flux. Decision theoretic frameworks such as Markov Decision Processes are particularly suited to decision making under uncertainty. By applying a temporal difference, reinforcement learning algorithm known as Q-learning, optimal scaling policies can be determined. Additionally, reinforcement learning techniques typically suffer from curse of dimensionality problems, where the state space grows exponentially with each additional state variable. To address this challenge, we also present a novel parallel Q-learning approach aimed at reducing the time taken to determine optimal policies whilst learning online. Copyright © 2012 John Wiley & Sons, Ltd."
]
} |
1907.07349 | 2960672606 | Edge computing in the Internet of Things brings applications and content closer to the users by introducing an additional computational layer at the network infrastructure, between cloud and the resource-constrained data-producing devices and user equipment. This way, the opportunistic nature of the operational environment is addressed by introducing computational power in locations with low latency and high bandwidth. However, location-aware deployment of edge computing infrastructure requires a careful placement scheme for edge servers. To provide the best possible Quality of Service for the user applications, their proximity needs to be optimized. Moreover, the deployment faces practical limitations in budget, hardware requirements of servers and in online load balancing between servers. To address these challenges, we formulate the edge server placement as a capacitated location-allocation problem, while minimizing the distance between servers and access points of a real city-wide Wi-Fi network deployment. In our algorithm, we utilize both upper and lower server capacity constraints for load balancing. Furthermore, we enable sharing of workload between servers to facilitate deployments with low capacity servers. The performance of the algorithm is demonstrated in placement scenarios, exemplified by high capacity servers for edge computing and low capacity servers for Fog computing, with different parameters in a real-world data set. The data set consists of both dense deployment of access points in central areas and sparse deployment in suburban areas within the same network infrastructure. In comparison, we show that previous approaches do not sufficiently address such deployment. The presented algorithm is able to provide optimal placements that minimize the distances and provide a balanced workload with sharing by following the capacity constraints. | If the aim was to minimize the number of servers, optimization was carried out, for example, with different thresholds of distance @cite_15 @cite_45 or capacity @cite_15 @cite_35 . The effects of capacity constraints on intra-cluster traffic and temporal changes on workload balance are investigated in @cite_30 . In @cite_2 , the effect of the number of access points on energy consumption and average resource utilization was also explored. In @cite_16 , the cost of the deployment was evaluated as a function of the percentage of people within a given distance from the server. | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_45",
"@cite_2",
"@cite_15",
"@cite_16"
],
"mid": [
"2052220570",
"2118955868",
"2121574851",
"2129486506",
"2028045612",
"2048599615"
],
"abstract": [
"Current capacity planning practices based on heavy over-provisioning of power infrastructure hurt (i) the operational costs of data centers as well as (ii) the computational work they can support. We explore a combination of statistical multiplexing techniques to improve the utilization of the power hierarchy within a data center. At the highest level of the power hierarchy, we employ controlled underprovisioning and over-booking of power needs of hosted workloads. At the lower levels, we introduce the novel notion of soft fuses to flexibly distribute provisioned power among hosted workloads based on their needs. Our techniques are built upon a measurement-driven profiling and prediction framework to characterize key statistical properties of the power needs of hosted workloads and their aggregates. We characterize the gains in terms of the amount of computational work (CPU cycles) per provisioned unit of power Computation per Provisioned Watt (CPW). Our technique is able to double the CPWoffered by a Power Distribution Unit (PDU) running the e-commerce benchmark TPC-W compared to conventional provisioning practices. Over-booking the PDU by 10 based on tails of power profiles yields a further improvement of 20 . Reactive techniques implemented on our Xen VMM-based servers dynamically modulate CPU DVFS states to ensure power draw below the limits imposed by soft fuses. Finally, information captured in our profiles also provide ways of controlling application performance degradation despite overbooking. The 95th percentile of TPC-W session response time only grew from 1.59 sec to 1.78 sec--a degradation of 12 .",
"Large-scale Internet services require a computing infrastructure that can beappropriately described as a warehouse-sized computing system. The cost ofbuilding datacenter facilities capable of delivering a given power capacity tosuch a computer can rival the recurring energy consumption costs themselves.Therefore, there are strong economic incentives to operate facilities as closeas possible to maximum capacity, so that the non-recurring facility costs canbe best amortized. That is difficult to achieve in practice because ofuncertainties in equipment power ratings and because power consumption tends tovary significantly with the actual computing activity. Effective powerprovisioning strategies are needed to determine how much computing equipmentcan be safely and efficiently hosted within a given power budget. In this paper we present the aggregate power usage characteristics of largecollections of servers (up to 15 thousand) for different classes ofapplications over a period of approximately six months. Those observationsallow us to evaluate opportunities for maximizing the use of the deployed powercapacity of datacenters, and assess the risks of over-subscribing it. We findthat even in well-tuned applications there is a noticeable gap (7 - 16 )between achieved and theoretical aggregate peak power usage at the clusterlevel (thousands of servers). The gap grows to almost 40 in wholedatacenters. This headroom can be used to deploy additional compute equipmentwithin the same power budget with minimal risk of exceeding it. We use ourmodeling framework to estimate the potential of power management schemes toreduce peak power and energy usage. We find that the opportunities for powerand energy savings are significant, but greater at the cluster-level (thousandsof servers) than at the rack-level (tens). Finally we argue that systems needto be power efficient across the activity range, and not only at peakperformance levels.",
"This paper proposes and evaluates an approach for power and performance management in virtualized server clusters. The major goal of our approach is to reduce power consumption in the cluster while meeting performance requirements. The contributions of this paper are: (1) a simple but effective way of modeling power consumption and capacity of servers even under heterogeneous and changing workloads, and (2) an optimization strategy based on a mixed integer programming model for achieving improvements on power-efficiency while providing performance guarantees in the virtualized cluster. In the optimization model, we address application workload balancing and the often ignored switching costs due to frequent and undesirable turning servers on off and VM relocations. We show the effectiveness of the approach applied to a server cluster test bed. Our experiments show that our approach conserves about 50 of the energy required by a system designed for peak workload scenario, with little impact on the applications' performance goals. Also, by using prediction in our optimization strategy, further QoS improvement was achieved.",
"In a Content Distribution Network (CDN), there are m servers storing the data; each of them has a specific bandwidth. All the requests from a particular client should be assigned to one server because of the routing protocol used. The goal is to minimize the total cost of these assignments—cost of each is proportional to the distance between the client and the server as well as the request size—while the load on each server is kept below its bandwidth limit. When each server also has a setup cost, this is an unsplittable hard-capacitated facility location problem. As much attention as facility location problems have received, there has been no nontrivial approximation algorithm when we have hard capacities (i.e., there can only be one copy of each facility whose capacity cannot be violated) and demands are unsplittable (i.e., all the demand from a client has to be assigned to a single facility). We observe it is NP-hard to approximate the cost to within any bounded factor in this case. Thus, for an arbitrary constant e>0, we relax the capacities to a 1+e factor. For the case where capacities are almost uniform, we give a bicriteria O(log n, 1+e)-approximation algorithm for general metrics and a (1+e, 1+e)-approximation algorithm for tree metrics. A bicriteria (α,β)-approximation algorithm produces a solution of cost at most α times the optimum, while violating the capacities by no more than a β factor. We can get the same guarantees for nonuniform capacities if we allow quasipolynomial running time. In our algorithm, some clients guess the facility they are assigned to, and facilities decide the size of the clients they serve. A straightforward approach results in exponential running time. When costs do not satisfy metricity, we show that a 1.5 violation of capacities is necessary to obtain any approximation. It is worth noting that our results generalize bin packing (zero connection costs and facility costs equal to one), knapsack (single facility with all costs being zero), minimum makespan scheduling for related machines (all connection costs being zero), and some facility location problems.",
"There is growing interest to replace traditional servers with low-power multicore systems such as ARM Cortex-A9. However, such systems are typically provisioned for mobile applications that have lower memory and I O requirements than server application. Thus, the impact and extent of the imbalance between application and system resources in exploiting energy efficient execution of server workloads is unclear. This paper proposes a trace-driven analytical model for understanding the energy performance of server workloads on ARM Cortex-A9 multicore systems. Key to our approach is the modeling of the degrees of CPU core, memory and I O resource overlap, and in estimating the number of cores and clock frequency that optimizes energy performance without compromising execution time. Since energy usage is the product of utilized power and execution time, the model first estimates the execution time of a program. CPU time, which accounts for both cores and memory response time, is modeled as an M G 1 queuing system. Workload characterization of high performance computing, web hosting and financial computing applications shows that bursty memory traffic fits a Pareto distribution, and non-bursty memory traffic is exponentially distributed. Our analysis using these server workloads reveals that not all server workloads might benefit from higher number of cores or clock frequencies. Applying our model, we predict the configurations that increase energy efficiency by 10 without turning off cores, and up to one third with shutting down unutilized cores. For memory-bounded programs, we show that the limited memory bandwidth might increase both execution time and energy usage, to the point where energy cost might be higher than on a typical x64 multicore system. Lastly, we show that increasing memory and I O bandwidth can improve both the execution time and the energy usage of server workloads on ARM Cortex-A9 systems.",
"Recent data confirm that the power consumption of the information and communications technologies (ICT) and of the Internet itself can no longer be ignored, considering the increasing pervasiveness and the importance of the sector on productivity and economic growth. Although the traffic load of communication networks varies greatly over time and rarely reaches capacity limits, its energy consumption is almost constant. Based on this observation, energy management strategies are being considered with the goal of minimizing the energy consumption, so that consumption becomes proportional to the traffic load either at the individual-device level or for the whole network. The focus of this paper is to minimize the energy consumption of the network through a management strategy that selectively switches off devices according to the traffic level. We consider a set of traffic scenarios and jointly optimize their energy consumption assuming a per-flow routing. We propose a traffic engineering mathematical programming formulation based on integer linear programming that includes constraints on the changes of the device states and routing paths to limit the impact on quality of service and the signaling overhead. We show a set of numerical results obtained using the energy consumption of real routers and study the impact of the different parameters and constraints on the optimal energy management strategy. We also present heuristic results to compare the optimal operational planning with online energy management operation ."
]
} |
1907.07349 | 2960672606 | Edge computing in the Internet of Things brings applications and content closer to the users by introducing an additional computational layer at the network infrastructure, between cloud and the resource-constrained data-producing devices and user equipment. This way, the opportunistic nature of the operational environment is addressed by introducing computational power in locations with low latency and high bandwidth. However, location-aware deployment of edge computing infrastructure requires a careful placement scheme for edge servers. To provide the best possible Quality of Service for the user applications, their proximity needs to be optimized. Moreover, the deployment faces practical limitations in budget, hardware requirements of servers and in online load balancing between servers. To address these challenges, we formulate the edge server placement as a capacitated location-allocation problem, while minimizing the distance between servers and access points of a real city-wide Wi-Fi network deployment. In our algorithm, we utilize both upper and lower server capacity constraints for load balancing. Furthermore, we enable sharing of workload between servers to facilitate deployments with low capacity servers. The performance of the algorithm is demonstrated in placement scenarios, exemplified by high capacity servers for edge computing and low capacity servers for Fog computing, with different parameters in a real-world data set. The data set consists of both dense deployment of access points in central areas and sparse deployment in suburban areas within the same network infrastructure. In comparison, we show that previous approaches do not sufficiently address such deployment. The presented algorithm is able to provide optimal placements that minimize the distances and provide a balanced workload with sharing by following the capacity constraints. | Simulated data sets were utilized in @cite_15 @cite_32 , whereas the other studies utilized real-world data sets. The data set @cite_47 consists of geo-referenced phone call detail records over the city of Milan for a three-month period, and was used in @cite_24 @cite_30 @cite_21 . The Shanghai Telecom data set contains the data of mobile users accessing 3000 base stations with 4.6 million call records and 7.5 million movement traces of 10 thousand users in six successive months @cite_0 . The data set was used in @cite_41 @cite_11 @cite_36 . The data set utilized in @cite_1 consists of thousands of Wi-Fi access points in New York City. In @cite_16 , the data set was obtained through the globally-distributed Planetlab nodes and the measurement nodes deployed in China @cite_16 . | {
"cite_N": [
"@cite_30",
"@cite_47",
"@cite_41",
"@cite_36",
"@cite_21",
"@cite_1",
"@cite_32",
"@cite_24",
"@cite_0",
"@cite_15",
"@cite_16",
"@cite_11"
],
"mid": [
"1993599520",
"2556289220",
"2115785474",
"2047888606",
"2002644929",
"1996448158",
"2105585871",
"1809720746",
"2086787845",
"2022780164",
"2129343844",
"2791401550"
],
"abstract": [
"With billions of handsets in use worldwide, the quantity of mobility data is gigantic. When aggregated they can help understand complex processes, such as the spread viruses, and built better transportation systems, prevent traffic congestion. While the benefits provided by these datasets are indisputable, they unfortunately pose a considerable threat to location privacy. In this paper, we present a new anonymization scheme to release the spatio-temporal density of Paris, in France, i.e., the number of individuals in 989 different areas of the city released every hour over a whole week. The density is computed from a call-data-record (CDR) dataset, provided by the French Telecom operator Orange, containing the CDR of roughly 2 million users over one week. Our scheme is differential private, and hence, provides provable privacy guarantee to each individual in the dataset. Our main goal with this case study is to show that, even with large dimensional sensitive data, differential privacy can provide practical utility with meaningful privacy guarantee, if the anonymization scheme is carefully designed. This work is part of the national project XData (http: xdata.fr) that aims at combining large (anonymized) datasets provided by different service providers (telecom, electricity, water management, postal service, etc.).",
"In this study, with Singapore as an example, we demonstrate how we can use mobile phone call detail record (CDR) data, which contains millions of anonymous users, to extract individual mobility networks comparable to the activity-based approach. Such an approach is widely used in the transportation planning practice to develop urban micro simulations of individual daily activities and travel; yet it depends highly on detailed travel survey data to capture individual activity-based behavior. We provide an innovative data mining framework that synthesizes the state-of-the-art techniques in extracting mobility patterns from raw mobile phone CDR data, and design a pipeline that can translate the massive and passive mobile phone records to meaningful spatial human mobility patterns readily interpretable for urban and transportation planning purposes. With growing ubiquitous mobile sensing, and shrinking labor and fiscal resources in the public sector globally, the method presented in this research can be used as a low-cost alternative for transportation and planning agencies to understand the human activity patterns in cities, and provide targeted plans for future sustainable development.",
"In this paper, we analyze statistical properties of a communication network constructed from the records of a mobile phone company. The network consists of 2.5 million customers that have placed 810 million communications (phone calls and text messages) over a period of 6 months and for whom we have geographical home localization information. It is shown that the degree distribution in this network has a power-law degree distribution k−5 and that the probability that two customers are connected by a link follows a gravity model, i.e. decreases as d−2, where d is the distance between the customers. We also consider the geographical extension of communication triangles and we show that communication triangles are not only composed of geographically adjacent nodes but that they may extend over large distances. This last property is not captured by the existing models of geographical networks and in a last section we propose a new model that reproduces the observed property. Our model, which is based on the migration and on the local adaptation of agents, is then studied analytically and the resulting predictions are confirmed by computer simulations.",
"Several attempts have already been made to use telecommunications networks for urban research, but the datasets employed have typically been neither dynamic nor fine grained. Against this research backdrop the mobile phone network offers a compelling compromise between these extremes: it is both highly mobile and yet still localisable in space. Moreover, the mobile phone’s enormous and enthusiastic adoption across most socioeconomic strata makes it a uniquely useful tool for conducting large-scale, representative behavioural research. In this paper we attempt to connect telecoms usage data from Telecom Italia Mobile (TIM) to a geography of human activity derived from data on commercial premises advertised through Pagine Gialle, the Italian ‘Yellow Pages’. We then employ eigendecomposition—a process similar to factoring but suitable for this complex dataset—to identify and extract recurring patterns of mobile phone usage. The resulting eigenplaces support the computational and comparative analysis of space through the lens of telecommuniations usage and enhance our understanding of the city as a ‘space of flows’.",
"This paper presents a strategy to evaluate long-distance travel patterns by tracking cellular phone positions. The authors first note that long-distance trips are generally under-reported in typical household surveys, because of relative low frequency of these trips. Yet transportation analysis and travel demand forecasting require data, including that for long-distance trips, in order to model the decisions that people make related to travel. They stress that their suggested approach allows passive data collection on many travelers over a long period of time at low costs. They present results of a study in Israel, conducted in 2007, that included an average sample of 10,200 cell phone numbers per week for 16 weeks. The tracking system was based on recording events that contain a change in the position of the cell phone with respect to a given antenna. The method was specifically designed to capture long distance trips, as part of the development of a national demand model conducted for the Economics and Planning Department of the Israel Ministry of Transport. Using this method, origin–destination tables can be constructed directly from the cellular phone positions. The authors conclude that this model offers the advantage of monitoring travel demand at the aggregate level and thus could be useful in several transportation and land use applications.",
"Emerging class of context-aware mobile applications, such as Google Now and Foursquare require continuous location sensing to deliver different location-aware services. Existing research, in finding location at higher abstraction, use GPS and WiFi location interfaces to discover places, which result in high power consumption. These interfaces are also not available on all feature phones that are in majority in developing countries. In this paper, we present a framework PlaceMap that discovers different places and routes, solely using GSM information, i.e., Cell ID. PlaceMap stores and manages all the discovered places and routes, which are used to build spatio-temporal mobility profiles for the users. PlaceMap provides algorithms that can complement GSM-based place discovery with an initial WiFi-based training to increase accuracy. We performed a comprehensive offline evaluation of PlaceMap algorithms on two large real-world diverse datasets, self-collected dataset of 62 participants for 4 weeks in India and MDC dataset of 38 participants for 45 weeks in Switzerland. We found that PlaceMap is able to discover up to 81 of the places correctly as compared to GPS. To corroborate the potential of PlaceMap in real-world, we deployed a life-logging application for a small set of 18 participants and observed similar place discovery accuracy.",
"We construct a connected network of 3.9 million nodes from mobile phone call records, which can be regarded as a proxy for the underlying human communication network at the societ al level. We assign two weights on each edge to reflect the strength of social interaction, which are the aggregate call duration and the cumulative number of calls placed between the individuals over a period of 18 weeks. We present a detailed analysis of this weighted network by examining its degree, strength, and weight distributions, as well as its topological assortativity and weighted assortativity, clustering and weighted clustering, together with correlations between these quantities. We give an account of motif intensity and coherence distributions and compare them to a randomized reference system. We also use the concept of link overlap to measure the number of common neighbours any two adjacent nodes have, which serves as a useful local measure for identifying the interconnectedness of communities. We report a positive correlation between the overlap and weight of a link, thus providing",
"People spend most of their time at a few key locations, such as home and work. Being able to identify how the movements of people cluster around these \"important places\" is crucial for a range of technology and policy decisions in areas such as telecommunications and transportation infrastructure deployment. In this paper, we propose new techniques based on clustering and regression for analyzing anonymized cellular network data to identify generally important locations, and to discern semantically meaningful locations such as home and work. Starting with temporally sparse and spatially coarse location information, we propose a new algorithm to identify important locations. We test this algorithm on arbitrary cellphone users, including those with low call rates, and find that we are within 3 miles of ground truth for 88 of volunteer users. Further, after locating home and work, we achieve commute distance estimates that are within 1 mile of equivalent estimates derived from government census data. Finally, we perform carbon footprint analyses on hundreds of thousands of anonymous users as an example of how our data and algorithms can form an accurate and efficient underpinning for policy and infrastructure studies.",
"The unprecedented growth in mobile data usage is posing significant challenges to cellular operators. One key challenge is how to provide quality of service to subscribers when their residing cell is experiencing a significant amount of traffic, i.e. becoming a traffic hotspot. In this paper, we perform an empirical study on data hotspots in today's cellular networks using a 9-week cellular dataset with 734K+ users and 5327 cell sites. Our analysis examines in details static and dynamic characteristics, predictability, and causes of data hotspots, and their correlation with call hotspots. We believe the understanding of these key issues will lead to more efficient and responsive resource management and thus better QoS provision in cellular networks. To the best of our knowledge, our work is the first to characterize in detail traffic hotspots in today's cellular networks using real data.",
"Modern technologies not only provide a variety of communication modes (e.g., texting, cell phone conversation, and online instant messaging), but also detailed electronic traces of these communications between individuals. These electronic traces indicate that the interactions occur in temporal bursts. Here, we study intercall duration of communications of the 100,000 most active cell phone users of a Chinese mobile phone operator. We confirm that the intercall durations follow a power-law distribution with an exponential cutoff at the population level but find differences when focusing on individual users. We apply statistical tests at the individual level and find that the intercall durations follow a power-law distribution for only 3,460 individuals (3.46 ). The intercall durations for the majority (73.34 ) follow a Weibull distribution. We quantify individual users using three measures: out-degree, percentage of outgoing calls, and communication diversity. We find that the cell phone users with a power-law duration distribution fall into three anomalous clusters: robot-based callers, telecom fraud, and telephone sales. This information is of interest to both academics and practitioners, mobile telecom operators in particular. In contrast, the individual users with a Weibull duration distribution form the fourth cluster of ordinary cell phone users. We also discover more information about the calling patterns of these four clusters (e.g., the probability that a user will call the cr-th most contact and the probability distribution of burst sizes). Our findings may enable a more detailed analysis of the huge body of data contained in the logs of massive users.",
"Pervasive infrastructures, such as cell phone networks, enable to capture large amounts of human behavioral data but also provide information about the structure of cities and their dynamical properties. In this article, we focus on these last aspects by studying phone data recorded during 55 days in 31 Spanish cities. We first define an urban dilatation index which measures how the average distance between individuals evolves during the day, allowing us to highlight different types of city structure. We then focus on hotspots, the most crowded places in the city. We propose a parameter free method to detect them and to test the robustness of our results. The number of these hotspots scales sublinearly with the population size, a result in agreement with previous theoretical arguments and measures on employment datasets. We study the lifetime of these hotspots and show in particular that the hierarchy of permanent ones, which constitute the ‘heart' of the city, is very stable whatever the size of the city. The spatial structure of these hotspots is also of interest and allows us to distinguish different categories of cities, from monocentric and “segregated” where the spatial distribution is very dependent on land use, to polycentric where the spatial mixing between land uses is much more important. These results point towards the possibility of a new, quantitative classification of cities using high resolution spatio-temporal data.",
"Because of the increasing relevance of the Internet of Things and location-based services, researchers are evaluating wireless positioning techniques, such as fingerprinting, on Low Power Wide Area Network (LPWAN) communication. In order to evaluate fingerprinting in large outdoor environments, extensive, time-consuming measurement campaigns need to be conducted to create useful datasets. This paper presents three LPWAN datasets which are collected in large-scale urban and rural areas. The goal is to provide the research community with a tool to evaluate fingerprinting algorithms in large outdoor environments. During a period of three months, numerous mobile devices periodically obtained location data via a GPS receiver which was transmitted via a Sigfox or LoRaWAN message. Together with network information, this location data is stored in the appropriate LPWAN dataset. The first results of our basic fingerprinting implementation, which is also clarified in this paper, indicate a mean location estimation error of 214.58 m for the rural Sigfox dataset, 688.97 m for the urban Sigfox dataset and 398.40 m for the urban LoRaWAN dataset. In the future, we will enlarge our current datasets and use them to evaluate and optimize our fingerprinting methods. Also, we intend to collect additional datasets for Sigfox, LoRaWAN and NB-IoT."
]
} |